Integrating differential equations using power series. Power series and their properties


Using power series it is possible to integrate differential equations.

Consider a linear differential equation of the form:

If all the coefficients and the right-hand side of this equation are expanded into power series converging in a certain interval, then there is a solution to this equation in some small neighborhood of the zero point that satisfies the initial conditions.

This solution can be represented by a power series:

To find a solution, it remains to determine the unknown coefficients c_i.

This problem can be solved by the method of undetermined coefficients. We substitute the assumed series for the unknown function into the original differential equation, performing all the necessary operations on power series (differentiation, addition, subtraction, multiplication, etc.).

Then we equate the coefficients of like powers of x on the left- and right-hand sides of the equation. As a result, taking the initial conditions into account, we obtain a system of equations from which we successively determine the coefficients c_i.

Note that this method is also applicable to nonlinear differential equations.
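The example equation itself is not reproduced in the text, so here is a minimal sketch of the method on an assumed stand-in problem y'' + y = 0 with y(0) = 1, y'(0) = 0 (my illustrative choice): equating the coefficients of like powers of x to zero yields a simple recursion for the c_i.

```python
from fractions import Fraction

# Stand-in example (not the text's own equation): y'' + y = 0,
# y(0) = 1, y'(0) = 0.  Substituting y = sum c_n x^n and equating the
# coefficient of each power of x to zero gives the recursion
#   c_{n+2} = -c_n / ((n + 2)(n + 1)).

def series_coefficients(n_terms):
    c = [Fraction(1), Fraction(0)]          # c_0 = y(0), c_1 = y'(0)
    for n in range(n_terms - 2):
        c.append(-c[n] / ((n + 2) * (n + 1)))
    return c

coeffs = series_coefficients(8)
print(coeffs)
# The even-index coefficients reproduce (-1)^k / (2k)!, i.e. the
# Maclaurin series of cos x, which is indeed the exact solution here.
```

The same recursion scheme works for any equation whose coefficients expand into power series; only the relation between the c_n changes.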

Example. Find a solution to the equation with initial conditions y(0)=1, y’(0)=0.

We will look for a solution to the equation in the form

We substitute the resulting expressions into the original equation:

From here we get:

………………

Substituting the initial conditions into the expressions for the unknown function and its first derivative, we obtain:

Finally we get:

There is another method for solving differential equations using series. It is called the method of successive differentiation.

Let's look at the same example. We will look for a solution to the differential equation in the form of an expansion of the unknown function in a Maclaurin series.

Substituting the given initial conditions y(0) = 1, y'(0) = 0 into the original differential equation, we find that. Next, we write the differential equation in solved form and differentiate it successively with respect to x.

After substituting the obtained values, we get:
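Since the worked example is elided, the mechanics of successive differentiation can be sketched on the assumed stand-in y'' = -y, y(0) = 1, y'(0) = 0 (my choice): differentiating the equation repeatedly expresses every higher derivative at zero through the two initial values.

```python
from math import factorial

# Assumed stand-in for the elided example: y'' = -y, y(0) = 1, y'(0) = 0.
# Differentiating the equation k times gives y^(k+2) = -y^(k), so all
# derivatives at 0 follow from the initial data, and the Maclaurin
# coefficients are y^(k)(0) / k!.

def maclaurin_by_differentiation(n_terms):
    d = [1.0, 0.0]                    # y(0), y'(0)
    while len(d) < n_terms:
        d.append(-d[-2])              # y^(k+2)(0) = -y^(k)(0)
    return [dk / factorial(k) for k, dk in enumerate(d)]

print(maclaurin_by_differentiation(6))   # 1, 0, -1/2, 0, 1/24, 0: cos x again
```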

Fourier series.

(Jean Baptiste Joseph Fourier (1768 – 1830) – French mathematician)

Trigonometric series.

Definition. A trigonometric series is a series of the form:

or, in short,

The real numbers a_i, b_i are called the coefficients of the trigonometric series.

If a series of the type presented above converges, then its sum is a periodic function with period 2π, because sin nx and cos nx are periodic functions with period 2π.

Let the trigonometric series converge uniformly on the segment [-π, π] (and therefore, by periodicity, on any segment), and let its sum be f(x).


Let us determine the coefficients of this series.

To solve this problem we use the following equalities:

The validity of these equalities follows from the application of trigonometric formulas to the integrand. See Integrating Trigonometric Functions for more information.

Since the function f(x) is continuous on the segment [-π, π], the following integral exists:

This result follows from the fact that

From here we get:

Similarly, we multiply the series expansion of the function by sin nx and integrate from -π to π.

We get:

The expression for the coefficient a_0 is a special case of the expression for the coefficients a_n.

Thus, if f(x) is any periodic function with period 2π that is continuous on the segment [-π, π] or has a finite number of discontinuity points of the first kind on this segment, then the coefficients

exist and are called the Fourier coefficients of the function f(x).
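The coefficient formulas (a_n = (1/π) ∫ from -π to π of f(x) cos nx dx, and the analogous b_n with sin nx) can be checked numerically. The function f(x) = x below is an illustrative choice, not the text's example.

```python
import numpy as np

# Midpoint-rule check of the Fourier coefficient formulas
#   a_n = (1/pi) * integral_{-pi}^{pi} f(x) cos(nx) dx,
#   b_n = (1/pi) * integral_{-pi}^{pi} f(x) sin(nx) dx,
# on the sample function f(x) = x.

def fourier_coeffs(f, n, m=100000):
    h = 2 * np.pi / m
    x = -np.pi + (np.arange(m) + 0.5) * h        # midpoints of m cells
    a = np.sum(f(x) * np.cos(n * x)) * h / np.pi
    b = np.sum(f(x) * np.sin(n * x)) * h / np.pi
    return a, b

for n in range(1, 4):
    a, b = fourier_coeffs(lambda t: t, n)
    print(n, round(a, 6), round(b, 6))   # known result: a_n = 0, b_n = 2(-1)^(n+1)/n
```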

Definition. The Fourier series of a function f(x) is the trigonometric series whose coefficients are the Fourier coefficients of f(x). If the Fourier series of f(x) converges to it at all its points of continuity, then we say that f(x) expands into a Fourier series.

Sufficient conditions for expansion in a Fourier series.

Theorem (Dirichlet's theorem). If the function f(x) has period 2π and on the segment [-π, π] is continuous or has a finite number of discontinuity points of the first kind, and the segment [-π, π] can be divided into a finite number of segments inside each of which f(x) is monotonic, then the Fourier series of f(x) converges for all values of x; at points of continuity of f(x) its sum equals f(x), and at points of discontinuity its sum equals (f(x - 0) + f(x + 0))/2, i.e. the arithmetic mean of the one-sided limits. Moreover, the Fourier series of f(x) converges uniformly on any segment contained in an interval of continuity of f(x).

A function f(x) satisfying the conditions of Dirichlet's theorem is called piecewise monotonic on the segment [-π, π].
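The behavior at a jump described by Dirichlet's theorem can be illustrated numerically on a sample 2π-periodic step function (my choice, not from the text): the partial sums approach f(x) at continuity points, while at the jump they yield exactly the arithmetic mean of the one-sided limits.

```python
from math import sin, pi

# Partial sums of the Fourier series of the 2*pi-periodic step function
# f(x) = 0 on (-pi, 0), f(x) = 1 on (0, pi), whose expansion is
#   1/2 + sum_{k>=0} 2/((2k+1)*pi) * sin((2k+1)*x).

def partial_sum(x, n_terms):
    s = 0.5
    for k in range(n_terms):
        n = 2 * k + 1
        s += 2.0 / (n * pi) * sin(n * x)
    return s

print(partial_sum(1.0, 2000))   # continuity point: tends to f(1) = 1
print(partial_sum(0.0, 2000))   # jump point: exactly (0 + 1)/2 = 0.5
```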

Theorem. If the function f(x) has period 2π and, in addition, f(x) and its derivative f'(x) are continuous on the segment [-π, π] or have a finite number of discontinuity points of the first kind on this segment, then the Fourier series of f(x) converges for all values of x; at points of continuity its sum equals f(x), and at points of discontinuity it equals (f(x - 0) + f(x + 0))/2. Moreover, the Fourier series of f(x) converges uniformly on any segment contained in an interval of continuity of f(x).

A function satisfying the conditions of this theorem is called piecewise smooth on the segment [-π, π].

Fourier series expansion of a non-periodic function.

The problem of expanding a non-periodic function into a Fourier series is, in principle, no different from expanding a periodic function into a Fourier series.

Suppose the function f(x) is given on the segment [a, b] and is piecewise monotonic on it. Consider an arbitrary periodic piecewise monotonic function f_1(x) with period 2T ≥ |b - a| that coincides with f(x) on the segment [a, b].

(figure: the graph of f(x) on [a, b] continued periodically with period 2T; the x-axis is marked a - 2T, a, b, a + 2T, a + 4T)

Thus, the function f(x) has been extended. Now the function f_1(x) is expanded into a Fourier series. The sum of this series coincides with f(x) at every point of the segment [a, b], i.e. we may consider f(x) to be expanded into a Fourier series on this segment.

Thus, if the function f(x) is given on a segment of length 2π, its expansion is no different from that of a periodic function. If the segment on which the function is given is shorter than 2π, then the function is extended onto the interval (b, a + 2π) in such a way that the conditions for expansion into a Fourier series are preserved.

Generally speaking, the extension of a given function onto a segment of length 2π can be performed in infinitely many ways, so the sums of the resulting series will differ; but they will all coincide with the given function f(x) on the original segment.

Fourier series for even and odd functions.

Let us note the following properties of even and odd functions:

2) The product of two even functions or of two odd functions is an even function.

3) The product of an even function and an odd function is an odd function.

The validity of these properties can be easily proven based on the definition of even and odd functions.

If f(x) is an even periodic function with period 2π satisfying the conditions of expansion in a Fourier series, then we can write:

Thus, for an even function the Fourier series is written:

Similarly, we obtain the Fourier series expansion for an odd function:

Example. Expand in a Fourier series the periodic function with period T = 2π on the segment [-π, π].

The given function is odd, therefore, we look for the Fourier coefficients in the form:

Definition. A Fourier series with respect to an orthogonal system of functions φ_1(x), φ_2(x), …, φ_n(x), … is a series of the form:

whose coefficients are determined by the formula:

where f(x) is the sum of a series in the orthogonal system of functions that converges uniformly on the segment; f(x) is any function that is continuous, or has a finite number of discontinuity points of the first kind, on that segment.

In the case of an orthonormal system of functions, the coefficients are determined by:

In the computer version of the “Higher Mathematics Course” it is possible to run a program that expands an arbitrary function into a Fourier series.
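The elided coefficient formula for an orthogonal system is the standard c_n = (f, φ_n)/(φ_n, φ_n), where (u, v) is the integral of u·v over the segment. A sketch with the orthogonal system of Legendre polynomials on [-1, 1] (an assumed example, not the text's):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Generalized Fourier coefficients over the orthogonal system of
# Legendre polynomials P_n on [-1, 1]:
#   c_n = (f, P_n) / (P_n, P_n),  (u, v) = integral_{-1}^{1} u*v dx.

x, w = L.leggauss(50)            # Gauss-Legendre quadrature nodes/weights

def coeff(f, n):
    Pn = L.Legendre.basis(n)(x)
    return np.sum(w * f(x) * Pn) / np.sum(w * Pn * Pn)

# Sample function f(x) = x**3 (illustrative choice).
c = [coeff(lambda t: t**3, n) for n in range(5)]
print(np.round(c, 6))
# x^3 = (3/5) P_1 + (2/5) P_3, so the coefficients are [0, 0.6, 0, 0.4, 0].
```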


Cauchy criterion.

(necessary and sufficient conditions for the convergence of the series)

For the sequence {x_n} to be convergent, it is necessary and sufficient that for any ε > 0 there exist a number N such that for n > N and any integer p > 0 the following inequality holds:

|x_{n+p} - x_n| < ε.

Proof (necessity). Let x_n → x. Then for any ε > 0 there is a number N such that the inequality |x_n - x| < ε/2 is fulfilled when n > N. For n > N and any integer p > 0 the inequality |x_{n+p} - x| < ε/2 also holds. Taking both inequalities into account, we obtain:

|x_{n+p} - x_n| = |(x_{n+p} - x) - (x_n - x)| ≤ |x_{n+p} - x| + |x_n - x| < ε.

The necessity is proven. We will not consider the proof of sufficiency.

Let us formulate the Cauchy criterion for a series.

For the series Σ a_n to be convergent, it is necessary and sufficient that for any ε > 0 there exist a number N such that for n > N and any p > 0 the following inequality holds:

|a_{n+1} + a_{n+2} + … + a_{n+p}| < ε.

However, in practice, using the Cauchy criterion directly is not very convenient, so as a rule simpler convergence tests are used.
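Still, the criterion is instructive even numerically: for the harmonic series the blocks S_{2n} - S_n never become small, so the Cauchy condition fails and the series diverges. A sketch:

```python
# Cauchy criterion in action: for the harmonic series the block
#   s(n) = 1/(n+1) + ... + 1/(2n)   (i.e. S_{2n} - S_n, taking p = n)
# never becomes small, so the criterion fails and the series diverges.

def block(n):
    return sum(1.0 / k for k in range(n + 1, 2 * n + 1))

for n in (10, 1000, 100000):
    print(n, block(n))    # stays near ln 2 ~ 0.693, never below 1/2
```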

Corollary. If f(x) and φ(x) are continuous functions on the interval (a, b] and the limit of f(x)/φ(x) as x → a is a finite nonzero number, then the integrals of f(x) and of φ(x) over (a, b] behave identically with respect to convergence.

How to find a particular solution of a DE approximately using a series?

Continuing to study the practical applications of series theory, let's consider another common problem, whose name you see in the title. And, so as not to grope in the dark for the entire lesson, let's grasp the essence of the task right away. Three questions and three answers:

WHAT do you need to find? A particular solution of a differential equation. A hint between the lines whispers that by this point it is advisable to understand at least what a differential equation is and what its solution is.

HOW is this solution to be found? Approximately, with the help of a series.

And the third logical question: why approximately? I already covered this question in the lesson on the Euler and Runge-Kutta methods, but repetition won't hurt. Being a supporter of specifics, I will return to the simplest differential equation. In the first lecture on differential equations we found its general solution (a family of exponentials) and the particular solution corresponding to the initial condition. The graph of that function is a perfectly ordinary curve that is easy to draw.

But that was an elementary case. In practice there are a great many differential equations that cannot be solved analytically in exact form (at least by currently known methods). In other words, no matter how you twist such an equation, it cannot be integrated. And the crux is that a general solution (a family of curves in the plane) may nevertheless exist. Then the methods of computational mathematics come to the rescue.

So, meet the star of the lesson!

A typical problem is formulated as follows:

Find a particular solution of a differential equation, satisfying a given initial condition, in the form of the first three (less often four or five) nonzero terms of its Taylor series.

The required particular solution is expanded into this series according to the well-known formula:

The only difference is that instead of the letter f, the letter y is used here (that's just how it is).

The idea and meaning are also familiar: for some differential equations and under certain conditions (we will not go into the theory), the constructed power series will converge to the desired particular solution. That is, the more terms of the series we take, the more accurately the graph of the corresponding polynomial approximates the graph of the solution.
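Since the lesson's own equations are not reproduced here, the whole algorithm can be sketched on an assumed sample problem y' = x² + y², y(0) = 1 (my choice): the numbered steps below compute y(0), y'(0), y''(0), … one after another, each derivative expressed through its predecessors.

```python
import sympy as sp

# Assumed sample Cauchy problem (not the article's own equation):
#   y' = x**2 + y**2,  y(0) = 1.
x = sp.symbols('x')
y = sp.Function('y')
rhs = x**2 + y(x)**2            # right-hand side of y' = f(x, y)
y0 = sp.Integer(1)              # initial condition y(0) = 1

# Express y, y', y'', y''' through x and y alone: differentiate the
# equation and eliminate y' using the equation itself at each step.
exprs = [y(x), rhs]
for _ in range(2):
    exprs.append(sp.expand(exprs[-1].diff(x).subs(y(x).diff(x), rhs)))

vals = [e.subs(y(x), y0).subs(x, 0) for e in exprs]    # y(0), y'(0), ...
taylor = sum(v / sp.factorial(k) * x**k for k, v in enumerate(vals))
print(vals)                  # [1, 1, 2, 8]
print(sp.expand(taylor))     # 4*x**3/3 + x**2 + x + 1
```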

It should be noted that the above applies to the simplest cases. Let's start with a simple warm-up investigation:

Example 1

Find an approximate particular solution of the differential equation satisfying the initial condition, in the form of the first four nonzero terms of the Taylor series.

Solution: under the conditions of this problem x_0 = 0, so the general Taylor formula turns into its special case, the Maclaurin series expansion:

Looking ahead a little, I will say that in practical tasks this more compact series is much more common.

Enter both working formulas into your reference book.

Now let's work out the values. It is convenient to number the steps of the solution:

0) At the zero step we write down the value of y, which is always known from the condition. In your notebook it is advisable to circle the final result of each step so that it is clearly visible and does not get lost in the solution; for technical reasons it is more convenient for me to highlight it in bold. Also note that this value is not zero! After all, the condition requires finding four nonzero terms of the series.

1) Let's calculate the value of the first derivative. To do this, substitute the known value into the right-hand side of the original equation in place of y:

2) Let's calculate the value of the second derivative. First we find the second derivative itself:

We substitute the value found in the previous paragraph into the right side:

We already have three non-zero terms of the expansion, we need one more:

Example 2

Find an approximately partial solution to the differential equation , satisfying the initial condition in the form of the first three nonzero terms of the Taylor series.

Solution begins with a standard phrase:

In this problem, therefore:

Now we find the values one after another, until three nonzero results are obtained. If you're lucky, they will all be nonzero; that is the ideal case with a minimum amount of work.

Let's run through the steps of the solution:

0) By condition. Here is the first success.

1) Let's calculate the first derivative. First we solve the original equation for the first derivative, that is, we express it. Then we substitute the known values into the right-hand side:

We got a zero, and this is not good, since we are interested in nonzero values. However, a zero is a result too, and we don't forget to circle it or highlight it in some other way.

2) Find the second derivative and substitute the known values into the right-hand side:

The second one is “not zero”.

3) Find the derivative of the second derivative:

In general, the task is somewhat reminiscent of the Tale of the Turnip, where the grandfather, grandmother and granddaughter call in the dog, the cat, and so on for help. And indeed, each subsequent derivative is expressed through its “predecessors”.

Let's substitute the known values into the right-hand side:

The third nonzero value. The turnip has been pulled out.

Carefully substitute the “bold” numbers into our formula:

Answer: the desired approximate expansion of the particular solution:

In the example considered there was only one zero, in second place, and that is not so bad. In general, zeros can occur in any number and anywhere. I repeat: it is very important to highlight them along with the nonzero results, so as not to get confused in the substitutions at the final stage.

Here you go: the zero comes first:

Example 3

Find an approximate particular solution of the differential equation corresponding to the initial condition, in the form of the first three nonzero terms of the Taylor series.

A sample write-up of the task is given at the end of the lesson. The steps of the algorithm need not be numbered (you may, for example, simply leave empty lines between steps), but I recommend that beginners stick to a strict template.

The task under consideration requires increased attention: if you make a mistake at any step, everything that follows will also be wrong! Your clear head must work like clockwork. Alas, these are not integrals or differential equations, which can be solved reliably even when tired, since they admit an effective check.

In practice, the Maclaurin series expansion is much more common:

Example 4

Solution: in principle you could write down the Maclaurin expansion immediately, but it is more academic to begin formalizing the problem with the general case:

The expansion of a particular solution of a differential equation under the initial condition has the form:

In this case, therefore:

0) By condition.

Well what can you do... Let's hope there are fewer zeros.

1) Let's calculate the first derivative; it is already given ready for use. Substitute the values:

2) Let's find the second derivative:

And let's substitute into it:

Things went well!

3) Find the third derivative. I'll write it out in great detail:

Note that the usual algebraic rules apply to derivatives: combining like terms at the last step and writing a product as a power.

Let’s substitute in everything that has been acquired through backbreaking labor:

Three nonzero values have been obtained.

We substitute the “bold” numbers into the Maclaurin formula, thereby obtaining an approximate expansion of the particular solution:

Answer:

For you to solve on your own:

Example 5

Find an approximate particular solution of the differential equation corresponding to the given initial condition, as the sum of the first three nonzero terms of a power series.

A sample write-up is given at the end of the lesson.

As you can see, the problem with a particular expansion in a Maclaurin series turned out to be even harder than the general case. The complexity of the task, as we have just seen, lies not so much in the expansion itself as in the difficulties of differentiation. Moreover, sometimes you have to find 5-6 derivatives (or even more), which increases the risk of error. And at the end of the lesson I offer a couple of tasks of increased complexity:

Example 6

Solve the differential equation approximately using the expansion of a particular solution into a Maclaurin series, limiting ourselves to the first three non-zero terms of the series

Solution: we have a second-order equation, but this changes little. According to the condition we are asked to use the Maclaurin series straight away, which we will gladly do. Let's write down the familiar expansion, taking a few more terms just in case:

The algorithm works exactly the same:

0) By the condition.

1) Also by the condition.

2) Let's solve the original equation with respect to the second derivative:

And let's substitute:

First non-zero value

We crank through the derivatives and perform the substitutions:

Let's substitute and:

Let's substitute:

The second non-zero value.

5) Along the way we combine like terms in the derivatives.

Let's substitute:

Let's substitute:

Finally. However, it can be worse.

Thus, the approximate expansion of the desired particular solution is:

Taylor series. Maclaurin series

Let f(x) be a function infinitely differentiable in a neighborhood of a point x_0, i.e. having derivatives of every order there. The Taylor series of the function f at the point x_0 is the power series

f(x_0) + f'(x_0)(x - x_0)/1! + f''(x_0)(x - x_0)^2/2! + … + f^(n)(x_0)(x - x_0)^n/n! + …   (1.8)

In the special case x_0 = 0, series (1.8) is called the Maclaurin series:

The question arises: in what cases does the Taylor series of a function infinitely differentiable in a neighborhood of a point coincide with the function itself?

There may be cases when the Taylor series of a function converges, but its sum is not equal to f(x).

Let us present a sufficient condition for the convergence of the Taylor series of a function to this function.

Theorem 1.4. If on an interval a function has derivatives of every order and all of them are bounded in absolute value by one and the same number M, then the Taylor series of this function converges to f(x) at every point x of this interval, i.e. the corresponding equality holds.

Separate studies are required to determine whether this equality holds at the ends of the convergence interval.
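Theorem 1.4 can be observed quantitatively on sin x, all of whose derivatives are bounded by M = 1, so the Maclaurin remainder of the degree-n polynomial obeys |R_n(x)| ≤ |x|^(n+1)/(n+1)!. A sketch:

```python
from math import sin, factorial

# Every derivative of sin is bounded by M = 1, so for the Maclaurin
# polynomial of degree n the Lagrange remainder satisfies
#   |R_n(x)| <= |x|**(n+1) / (n+1)!.

def maclaurin_sin(x, n_terms):
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(n_terms))

x = 2.0
for n_terms in (2, 4, 6):
    degree = 2 * n_terms - 1                   # highest power kept
    err = abs(sin(x) - maclaurin_sin(x, n_terms))
    bound = x ** (degree + 1) / factorial(degree + 1)
    print(degree, err <= bound, err, bound)    # the bound always holds
```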

It should be noted that if a function is expanded into a power series, then this series is the Taylor (Maclaurin) series of this function, and this expansion is unique.

Differential equations

An ordinary differential equation of the n-th order for an unknown function of one argument is a relation of the form

where F is a given function of its arguments.

In the name of this class of mathematical equations, the term “differential” emphasizes that they include derivatives (functions formed as a result of differentiation); the term “ordinary” indicates that the desired function depends on only one real argument.

An ordinary differential equation may not explicitly contain the argument, the desired function itself, or some of its derivatives, but the highest, n-th derivative must appear in an equation of the n-th order.

For example,

a) - a first-order equation;

b) - a third-order equation.

When writing ordinary differential equations, the notation for derivatives in terms of differentials is often used:

c) - a second-order equation;

d) - a first-order equation which, after division, takes the following equivalent form:

A function is called a solution to an ordinary differential equation if, when substituted into it, it turns into an identity.

Finding, by one means or another (for example, by guessing), a single function that satisfies the equation does not mean solving it. To solve an ordinary differential equation means to find all the functions that turn the equation into an identity upon substitution. For equation (1.10), the family of such functions is formed with the help of arbitrary constants and is called the general solution of the ordinary differential equation of the n-th order; the number of constants coincides with the order of the equation. The general solution may be not explicitly resolved with respect to the unknown function; in that case it is usually called the general integral of equation (1.10).

By assigning admissible values to all the arbitrary constants in the general solution or in the general integral, we obtain a definite function that no longer contains arbitrary constants. This function is called a particular solution or a particular integral of equation (1.10). To find the values of the arbitrary constants, and hence a particular solution, various additional conditions on equation (1.10) are used. For example, the so-called initial conditions may be specified at a point:

On the right-hand sides of the initial conditions (1.11), the numerical values of the function and its derivatives are specified; the total number of initial conditions equals the number of arbitrary constants to be determined.

The problem of finding a particular solution to equation (1.10) based on the initial conditions is called the Cauchy problem.
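A miniature illustration of the Cauchy problem (the equation y'' + y = 0 is my illustrative choice, not the text's): the general solution carries two arbitrary constants, and two initial conditions pin them down to a single particular solution.

```python
import sympy as sp

# General solution of y'' + y = 0 has two arbitrary constants C1, C2;
# the initial conditions y(0) = 1, y'(0) = 0 select one particular
# solution (this is the Cauchy problem in miniature).
x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) + y(x), 0)

general = sp.dsolve(ode)
particular = sp.dsolve(ode, ics={y(0): 1, y(x).diff(x).subs(x, 0): 0})
print(general.rhs)       # C1*sin(x) + C2*cos(x)
print(particular.rhs)    # cos(x)
```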

Integrating Differential Equations Using Series

In the general case, finding an exact solution to a first-order ordinary differential equation (ODE) by integrating it is impossible. Moreover, this is not feasible for an ODE system. This circumstance led to the creation of a large number of approximate methods for solving ODEs and their systems. Among the approximate methods, three groups can be distinguished: analytical, graphical and numerical. Of course, such a classification is to a certain extent arbitrary. For example, the graphical method of Euler's broken lines underlies one of the methods for numerically solving a differential equation.

Integrating ODEs using power series is an approximate analytical method, usually applied to linear equations of at least second order. For simplicity, we limit ourselves to considering a linear homogeneous second-order ODE with variable coefficients

Note: a fairly wide class of functions can be represented in the form

where the c_n are certain constants. This expression is called a power series.

Let us assume that the functions can be expanded into series converging in the interval:

The following theorem holds (omitting the proof, we present only its formulation).

Theorem 1.5. If the functions have the form (1.13), then any solution of the ODE (1.12) can be represented as a power series converging for |x - x_0| < T:

This theorem not only makes it possible to represent the solution as a power series but, most importantly, justifies the convergence of series (1.14). For simplicity, we put x_0 = 0 in (1.13) and (1.14) and look for a solution of ODE (1.12) in the form

Substituting (1.15) into (1.12), we obtain the equality

For (1.16) to hold, the coefficient of each power of x must equal zero.

From this condition we obtain an infinite system of linear algebraic equations

from which the remaining coefficients can be found successively once the values c_0 and c_1 are specified (in the case of the Cauchy problem for ODE (1.12), they are given by the initial conditions).

If the functions are rational, i.e. ratios of polynomials, then in the vicinity of the points at which the denominators vanish, a solution in the form of a power series may not exist, and if it does exist, it may diverge everywhere except at the point x = 0 itself. This circumstance was known to L. Euler, who considered the first-order equation

This equation is satisfied by the power series

It is not difficult, however, to see that this series diverges for any x ≠ 0.

The solution of an ODE in the form of a divergent power series is called formal.
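The equation Euler considered is elided above; one classical form of it is x²y' = y - x, formally satisfied by y = Σ_{n≥1} (n-1)!·xⁿ (a standard textbook reconstruction, stated here as an assumption). The divergence is visible directly from the ratio of consecutive terms, which equals n·x and grows without bound:

```python
from fractions import Fraction
from math import factorial

# Formal series y = sum_{n>=1} (n-1)! * x**n for x^2 y' = y - x
# (assumed classical form of Euler's example).  The ratio of
# consecutive terms is n*x, so for any x != 0 the terms eventually
# grow and the series diverges: a purely "formal" solution.

def term(n, x):
    return factorial(n - 1) * x ** n

x = Fraction(1, 100)          # exact arithmetic avoids float underflow
ratios = [term(n + 1, x) / term(n, x) for n in (50, 100, 200)]
print([float(r) for r in ratios])    # 0.5, 1.0, 2.0 -> terms grow past n = 100
```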


Ministry of Education of the Republic of Belarus

Educational institution

"Mogilev State University named after A.A. Kuleshova"

Department of MAiVT

Constructing solutions to differential equations using series

Course work

Completed by: 3rd year group B student

Faculty of Physics and Mathematics

Yuskaeva Alexandra Maratovna

Scientific adviser:

Morozov Nikolay Porfirievich

MOGILEV, 2010

Introduction

1. Differential equations of higher orders

1.1. The concept of a linear differential equation of nth order

2. Integration of differential equations using series

2.1. Integration of differential equations using power series.

2.2. Integration of differential equations using generalized power series.

3. Special cases of using generalized power series when integrating differential equations.

3.1. Bessel's equation.

3.2. Hypergeometric equation or Gaussian equation.

4. Application of the method of integrating ordinary differential equations using series in practice.

Conclusion

Literature

Introduction

In the general case, finding an exact solution to a first-order ordinary differential equation by integrating it is impossible. Moreover, this is not feasible for a system of ordinary differential equations. This circumstance led to the creation of a large number of approximate methods for solving ordinary differential equations and their systems. Among the approximate methods, three groups can be distinguished: analytical, graphical and numerical. Of course, such a classification is to a certain extent arbitrary. For example, the graphical method of Euler's broken lines underlies one of the methods for numerically solving a differential equation.

Integration of ordinary differential equations using power series is an approximate analytical method, usually applied to linear equations of at least second order.

Analytical methods are found in the course on differential equations. For first-order equations (with separable variables, homogeneous, linear, etc.), as well as for some types of higher-order equations (for example, linear with constant coefficients), it is possible to obtain solutions in the form of formulas through analytical transformations.

The purpose of the work is to analyze one of the approximate analytical methods, such as the integration of ordinary differential equations using series, and their application in solving differential equations.

1. Higher order differential equations

An ordinary differential equation of the nth order is a relation of the form

where F is a known function of its arguments, defined in a certain domain;

x - independent variable;

y is a function of the variable x to be determined;

y', y'', …, y^(n) are the derivatives of the function y.

In this case, it is assumed that y^(n) actually appears in the differential equation; any of the other arguments of the function F may be absent from this relation.

Any function that satisfies a given differential equation is called its solution, or integral. Solving a differential equation means finding all its solutions. If for the required function y it is possible to obtain a formula that gives all the solutions of a given differential equation and only them, then we say that we have found its general solution, or general integral.

The general solution of an n-th order differential equation contains n arbitrary constants c_1, c_2, …, c_n and has the form:

1.1. The concept of a linear differential equation of n-th order

A differential equation of the n-th order is called linear if it is of the first degree with respect to the collection of quantities y, y', …, y^(n). Thus, a linear differential equation of the n-th order has the form:

where the coefficients are known continuous functions of x.

This equation is called an inhomogeneous linear equation, or an equation with a right-hand side. If the right-hand side of the equation is identically zero, the linear equation is called a homogeneous linear differential equation and has the form

If n = 2, we obtain a second-order linear equation, which is written as: . Just as for the n-th order linear equation, a second-order equation can be homogeneous or inhomogeneous.

2. Integration of differential equations using series.

Solutions of an ordinary differential equation of order higher than the first with variable coefficients are not always expressible in terms of elementary functions, and the integration of such an equation is rarely reduced to quadratures.

2.1. Integration of differential equations using power series.

The most common method of integrating these equations is to represent the desired solution in the form of a power series. Consider second-order equations with variable coefficients:

Note 1. A fairly wide class of functions can be represented in the form

where c_0, c_1, c_2, … are certain constants. This expression is called a power series. If its values coincide with the corresponding values of the function for every x in the interval (x_0 - T; x_0 + T), then the series is said to converge to the function in this interval.

Let us assume that the coefficients a(x), b(x) of equation (2.1) are analytic functions on the interval (x_0 - T; x_0 + T), T > 0, i.e. they can be expanded into power series:

a(x) = Σ_{n=0}^∞ a_n (x - x_0)^n,  b(x) = Σ_{n=0}^∞ b_n (x - x_0)^n. (2.2)

The following theorem holds (omitting the proof, we present only its formulation).

Theorem_1. If the functions a(x), b(x) have the form (2.2), then any solution y(x) of the ordinary differential equation (2.1) can be represented as a power series converging for |x - x_0| < T:

y(x) = Σ_{n=0}^∞ c_n (x - x_0)^n. (2.3)

This theorem not only makes it possible to represent the solution in the form of a power series, but also, most importantly, it justifies the convergence of series (2.3).

The algorithm for such a representation is as follows. For convenience, put x_0 = 0 in (2.2) and (2.3) and look for a solution of the ordinary differential equation (2.1) in the form

y(x) = Σ_{n=0}^∞ c_n x^n. (2.4)

Substituting (2.4) into (2.1), we obtain the equality

Σ_{n=2}^∞ n(n-1) c_n x^{n-2} + a(x) Σ_{n=1}^∞ n c_n x^{n-1} + b(x) Σ_{n=0}^∞ c_n x^n = 0. (2.5)

For (2.5) to hold, the coefficient of each power of x must be equal to zero. From this condition we obtain an infinite system of linear algebraic equations

………………………………………….

…………………………………………………………………. .

From the resulting infinite system of linear algebraic equations one can successively find c_2, c_3, ..., once the values of c_0 and c_1 are prescribed (in the case of the Cauchy problem for the ordinary differential equation (2.1), one introduces the initial conditions c_0 = y(0), c_1 = y'(0)).
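The recurrence just described is easy to mechanize. Below is a minimal Python sketch (the function names are ours, not from the text) that builds the coefficients c_k of a solution of y'' + a(x)y' + b(x)y = 0 from the Maclaurin coefficients of a(x) and b(x); it is checked on y'' + y = 0, whose series solution with c_0 = 1, c_1 = 0 is cos x.

```python
import math

def zero_pad(coeffs, j):
    """Return coeffs[j], treating missing entries as zero."""
    return coeffs[j] if j < len(coeffs) else 0.0

def series_solution(a, b, c0, c1, n_terms):
    """Maclaurin coefficients c_k of the solution of
    y'' + a(x) y' + b(x) y = 0,  y(0) = c0, y'(0) = c1,
    where a and b are lists of Maclaurin coefficients.
    Equating the coefficient of x^n to zero gives
    (n+1)(n+2) c_{n+2} = -sum_j [a_j (n-j+1) c_{n-j+1} + b_j c_{n-j}]."""
    c = [c0, c1]
    for n in range(n_terms - 2):
        s = sum(zero_pad(a, j) * (n - j + 1) * c[n - j + 1] +
                zero_pad(b, j) * c[n - j]
                for j in range(n + 1))
        c.append(-s / ((n + 1) * (n + 2)))
    return c

# y'' + y = 0, y(0) = 1, y'(0) = 0  ->  the Maclaurin series of cos x
c = series_solution([], [1.0], 1.0, 0.0, 20)
y = sum(ck * 0.5**k for k, ck in enumerate(c))
assert abs(y - math.cos(0.5)) < 1e-12
```

The same routine handles any equation of the form (2.1) whose coefficients are given by finitely many Maclaurin coefficients.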

If the functions a(x), b(x) are rational, i.e. a(x) = p(x)/q(x), b(x) = r(x)/s(x), where p, q, r, s are polynomials, then in the vicinity of points at which q(x) = 0 or s(x) = 0 a solution in the form of a power series may not exist, and if it does exist it may diverge everywhere except the point x = 0. This circumstance was known to L. Euler, who considered the first-order equation

x^2 y' = y - x.

This equation is satisfied by the power series

y = Σ_{k=1}^∞ (k-1)! x^k.

It is not difficult, however, to see that this series diverges for any x ≠ 0. A solution of an ordinary differential equation in the form of a divergent power series is called formal.
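A quick numerical illustration of this phenomenon (a sketch; we take Euler's equation in the form x²y' = y − x, which is an assumption on our part): substituting the series turns the equation into the recurrence a_1 = 1, a_k = (k−1)·a_{k−1}, and for any fixed x ≠ 0 the terms (k−1)!·x^k eventually grow, so the partial sums cannot converge.

```python
import math

# Coefficients of the formal series y = sum_{k>=1} (k-1)! x^k
# a[k] is the coefficient of x^k; integers keep the recurrence exact.
a = [0, 1]
for k in range(2, 30):
    a.append((k - 1) * a[k - 1])

# The recurrence indeed produces the factorials (k-1)!
assert all(a[k] == math.factorial(k - 1) for k in range(1, 30))

# For a fixed x != 0 the terms eventually grow without bound,
# so the series is only a formal solution.
x = 0.1
terms = [a[k] * x**k for k in range(1, 30)]
assert terms[-1] > terms[0]
```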

One of the most striking and accessible examples of the use of this method of integration is the Airy equation

y'' = x y,  or  y'' - x y = 0.

All solutions of this equation are entire functions of x. We will look for a solution of the Airy equation in the form of a power series (2.4). Then equality (2.5) takes the form

Σ_{n=2}^∞ n(n-1) c_n x^{n-2} - Σ_{n=0}^∞ c_n x^{n+1} = 0.

Let us set the coefficient of each power of x equal to zero. We have

……………………………

The coefficient of the zero power of x equals 2c_2; consequently, c_2 = 0. Then, equating the coefficient of x^1 to zero, we find 6c_3 - c_0 = 0, i.e. c_3 = c_0/6. The coefficient of x^n equals (n+2)(n+1)c_{n+2} - c_{n-1}. From here

c_{n+2} = c_{n-1} / ((n+1)(n+2)),  n = 1, 2, ...

From this formula we get

c_{3m} = c_0 / (2·3·5·6···(3m-1)·3m),  c_{3m+1} = c_1 / (3·4·6·7···3m·(3m+1)),  c_{3m+2} = 0,  m = 1, 2, ...

The coefficients c_0 and c_1 remain undetermined. To find a fundamental system of solutions, we first set c_0 = 1, c_1 = 0, and then vice versa. In the first case we have

y_1(x) = 1 + Σ_{m=1}^∞ x^{3m} / (2·3·5·6···(3m-1)·3m),

and in the second

y_2(x) = x + Σ_{m=1}^∞ x^{3m+1} / (3·4·6·7···3m·(3m+1)).

Based on Theorem 1, these series are convergent everywhere on the number line.
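The Airy recurrence c_{n+2} = c_{n-1}/((n+1)(n+2)) is easy to check numerically. The sketch below (helper names are ours) builds the truncated series for y_1 and verifies that it satisfies y'' = x·y up to a tiny truncation residual.

```python
def airy_series(c0, c1, n_terms):
    """Coefficients of a series solution of y'' = x y:
    c_2 = 0 and c_{n+2} = c_{n-1} / ((n+1)(n+2))."""
    c = [c0, c1, 0.0]
    for n in range(1, n_terms - 2):
        c.append(c[n - 1] / ((n + 1) * (n + 2)))
    return c

def evaluate(c, x, d=0):
    """Value of the d-th derivative of the truncated series at x."""
    total = 0.0
    for k in range(d, len(c)):
        f = 1.0
        for i in range(k, k - d, -1):  # falling factorial k(k-1)...(k-d+1)
            f *= i
        total += f * c[k] * x**(k - d)
    return total

c = airy_series(1.0, 0.0, 40)           # the solution y_1: c_0 = 1, c_1 = 0
x = 0.7
residual = evaluate(c, x, 2) - x * evaluate(c, x)
assert abs(residual) < 1e-10            # y'' - x y vanishes up to truncation
```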

The functions y_1(x) and y_2(x) are called Airy functions. For large values of x, the asymptotic behavior of these functions is described by well-known asymptotic formulas.

The graphs of these functions are shown in Fig. 2.1. As x increases without bound, the zeros of any solution of the Airy equation cluster together indefinitely; this is evident from the asymptotic representation of these solutions, but not at all obvious from the representation of the Airy functions as convergent power series. It follows that the method of seeking a solution of an ordinary differential equation by means of a series is, generally speaking, of little use in applied problems, and the very representation of the solution as a series makes it difficult to analyze the qualitative properties of the resulting solution.

2.2. Integration of differential equations using generalized power series.

So, if in equation (2.1) the functions a(x), b(x) are rational, then the points at which the denominators of a(x) or b(x) vanish are called singular points of equation (2.1).

For a second-order equation

y'' + a(x)/(x - x_0) y' + b(x)/(x - x_0)^2 y = 0, (2.6)

in which a(x), b(x) are analytic functions in the interval |x - x_0| < a, the point x = x_0 is a singular point as soon as at least one of the coefficients a_0 or b_0 in the power-series expansions of a(x) and b(x) differs from zero. This is an example of the simplest singular point, the so-called regular singular point (or singular point of the first kind).

In the vicinity of the singular point x = x_0, solutions in the form of a power series may not exist; in this case solutions must be sought in the form of a generalized power series:

y = (x - x_0)^λ Σ_{n=0}^∞ c_n (x - x_0)^n,  c_0 ≠ 0, (2.7)

where λ and the coefficients c_0, c_1, ..., c_n, ... are to be determined.

Theorem_2. In order for equation (2.6) to have at least one particular solution in the form of the generalized power series (2.7) in a neighborhood of the singular point x = x_0, it is sufficient that this equation have the form

(x - x_0)^2 y'' + (x - x_0) a(x) y' + b(x) y = 0. (2.7')

Here a(x) and b(x) are convergent power series (2.7"), and the coefficients a_0, b_0 are not both equal to zero, since otherwise the point x = x_0 would not be a singular point and there would exist two linearly independent solutions holomorphic at x = x_0. Moreover, if the series (2.7") entering the coefficients of equation (2.7') converge in the region |x - x_0| < R, then the series entering the solution (2.7) certainly converges in the same region.

Consider equation (2.6) for x > 0. Substituting expression (2.7) with x_0 = 0 into this equation, we have

Equating the coefficients of the powers of x to zero, we obtain a recurrent system of equations:

……..........................……………………………………………. (2.8)

where we have denoted

f(λ) = λ(λ - 1) + a_0 λ + b_0.

Since c_0 ≠ 0, λ must satisfy the equation

f(λ) = λ(λ - 1) + a_0 λ + b_0 = 0,

which is called the defining equation. Let λ_1 and λ_2 be the roots of this equation. If the difference λ_1 - λ_2 is not an integer, then f(λ_1 + k) ≠ 0 and f(λ_2 + k) ≠ 0 for any integer k > 0, which means that by the indicated method one can construct two linearly independent solutions of equation (2.6):

If the difference λ_1 - λ_2 is an integer, then by the above method one can construct one solution in the form of a generalized series. Knowing this solution, one can find a second, linearly independent solution using the Liouville-Ostrogradsky formula:

From the same formula it follows that the second solution can be sought in the form

y_2(x) = A y_1(x) ln x + x^{λ_2} Σ_{k=0}^∞ b_k x^k

(the number A may be equal to zero).
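For equations written in the form x²y'' + x·a(x)·y' + b(x)·y = 0, the roots of the defining equation λ(λ−1) + a_0·λ + b_0 = 0 can be computed directly. A small sketch (function name ours; real-root case only, for simplicity), checked on the Bessel equation discussed below:

```python
import math

def indicial_roots(a0, b0):
    """Roots of the defining equation λ(λ-1) + a0·λ + b0 = 0,
    i.e. λ^2 + (a0 - 1)λ + b0 = 0 (assumes real roots)."""
    disc = (a0 - 1.0)**2 - 4.0 * b0
    r = math.sqrt(disc)
    return ((1.0 - a0 + r) / 2.0, (1.0 - a0 - r) / 2.0)

# Bessel's equation x^2 y'' + x y' + (x^2 - n^2) y = 0 has a0 = 1, b0 = -n^2,
# so the defining equation is λ^2 - n^2 = 0 with roots ±n.
l1, l2 = indicial_roots(1.0, -4.0)  # n = 2
assert (l1, l2) == (2.0, -2.0)
# Here l1 - l2 = 4 is an integer, so only one solution is guaranteed
# in the form of a generalized power series.
```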

  3. Special cases of using generalized power series when integrating differential equations.

3.1. Bessel's equation.

The Bessel equation is one of the most important differential equations in mathematics and its applications. The solutions of the Bessel equation that make up its fundamental system of functions are not elementary, but they expand into power series whose coefficients are computed quite simply.

Let us consider the Bessel equation in general form:

x^2 y'' + x y' + (x^2 - n^2) y = 0. (3.1)

Many problems of mathematical physics are reduced to this equation.

Since the equation does not change when x is replaced by -x, it suffices to consider non-negative values of x. The only singular point is x = 0. The defining equation corresponding to x = 0 is λ(λ - 1) + λ - n^2 = 0, i.e. λ^2 - n^2 = 0. If n ≥ 0, it has two roots λ_1 = n and λ_2 = -n. Let us find a solution of this equation in the form of a generalized power series

y = x^λ (c_0 + c_1 x + c_2 x^2 + ...);

then, substituting y, y' and y'' into the original equation, we get

Hence, cancelling x^λ, we have

(λ^2 - n^2) c_0 + ((λ+1)^2 - n^2) c_1 x + Σ_{k=2}^∞ [((λ+k)^2 - n^2) c_k + c_{k-2}] x^k = 0.

For this equality to hold identically, the coefficients must satisfy the equations

(λ^2 - n^2) c_0 = 0,  ((λ+1)^2 - n^2) c_1 = 0,  ((λ+k)^2 - n^2) c_k + c_{k-2} = 0,  k = 2, 3, ...

Let us find the solution corresponding to the root λ = n of the defining equation. Substituting λ = n into the last equalities, we see that c_0 may be any number other than zero, c_1 = 0, and for k = 2, 3, ... we have

c_k = - c_{k-2} / (k (2n + k)).

Hence, for all m = 0, 1, 2, ...

c_{2m+1} = 0,  c_{2m} = (-1)^m c_0 / (2^{2m} m! (n+1)(n+2)···(n+m)).

Thus, all the coefficients have been found, which means that the solution of equation (3.1) can be written in the form

y_1 = c_0 x^n Σ_{m=0}^∞ (-1)^m x^{2m} / (2^{2m} m! (n+1)(n+2)···(n+m)).

Let us introduce the function

Γ(x) = ∫_0^∞ t^{x-1} e^{-t} dt,  x > 0,

called Euler's gamma function. Taking into account that Γ(x+1) = x Γ(x) and that Γ(m+1) = m! for integer m, and choosing the arbitrary constant c_0 = 1 / (2^n Γ(n+1)), the solution is written in the form

J_n(x) = Σ_{m=0}^∞ (-1)^m (x/2)^{2m+n} / (m! Γ(n+m+1)).

The function J_n(x) is called the Bessel function of the first kind of order n.
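The series for J_n(x) is straightforward to evaluate in code. A sketch (the function name, test point, and tolerance are ours; it uses the standard library gamma function), checked against the classical identity J_{1/2}(x) = sqrt(2/(πx))·sin x:

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind via its power series
    J_n(x) = sum_{m>=0} (-1)^m (x/2)^(2m+n) / (m! * Γ(n+m+1))."""
    return sum((-1)**m * (x / 2)**(2*m + n)
               / (math.factorial(m) * math.gamma(n + m + 1))
               for m in range(terms))

assert bessel_j(0, 0.0) == 1.0  # J_0(0) = 1, all other terms vanish

# Classical half-integer identity: J_{1/2}(x) = sqrt(2/(pi x)) * sin x
x = 1.3
assert abs(bessel_j(0.5, x) - math.sqrt(2 / (math.pi * x)) * math.sin(x)) < 1e-12
```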

The second particular solution of the Bessel equation, linearly independent of the first, is sought in the form

y_2 = x^{-n} (c_0 + c_1 x + c_2 x^2 + ...).

The equations for determining c_k at λ = -n have the form

((k - n)^2 - n^2) c_k + c_{k-2} = 0,  i.e.  k(k - 2n) c_k + c_{k-2} = 0,  k = 2, 3, ...

Setting c_1 = 0, we find

c_k = - c_{k-2} / (k(k - 2n)),  k = 2, 3, ...

Since by assumption n is not an integer, all the coefficients with even indices are uniquely expressed through c_0:

c_{2m} = (-1)^m c_0 / (2^{2m} m! (1-n)(2-n)···(m-n)).

Thus,

y_2(x) = c_0 x^{-n} Σ_{m=0}^∞ (-1)^m x^{2m} / (2^{2m} m! (1-n)(2-n)···(m-n)).

Setting c_0 = 1 / (2^{-n} Γ(1-n)), we represent y_2(x) in the form

J_{-n}(x) = Σ_{m=0}^∞ (-1)^m (x/2)^{2m-n} / (m! Γ(m - n + 1)).

The function J_{-n}(x) is called a Bessel function of the first kind with a negative index.

Thus, if n is not an integer, then all solutions of the original Bessel equation are linear combinations of the Bessel functions J_n(x) and J_{-n}(x):  y = C_1 J_n(x) + C_2 J_{-n}(x).

3.2. Hypergeometric equation or Gaussian equation.

A hypergeometric equation (or Gauss equation) is an equation of the form

x(1 - x) y'' + [γ - (α + β + 1) x] y' - α β y = 0, (3.2)

where α, β, γ are real numbers.

The points x = 0 and x = 1 are singular points of the equation. Both of them are regular, since in the neighborhood of these points the coefficients of the Gauss equation, written in normal form,

can be represented as a generalized power series.

Let us verify this for the point x = 0. Indeed, noticing that

1 / (x(1 - x)) = 1/x + 1/(1 - x),

equation (3.2) can be written as

x^2 y'' + x · [γ - (α + β + 1) x]/(1 - x) · y' - (α β x / (1 - x)) y = 0.

This equation is a special case of the equation

x^2 y'' + x a(x) y' + b(x) y = 0,

and here a(x) = [γ - (α + β + 1)x]/(1 - x) and b(x) = -αβx/(1 - x) are analytic for |x| < 1, so the point x = 0 is a regular singular point of the Gauss equation.

Let us construct a fundamental system of solutions to the Gauss equation in the vicinity of the singular point x=0.

The defining equation corresponding to the point x = 0 has the form

λ(λ - 1) + γλ = 0.

Its roots are λ_1 = 0 and λ_2 = 1 - γ, and we assume that their difference λ_1 - λ_2 = γ - 1 is not an integer.

Therefore, in the vicinity of the singular point x=0, it is possible to construct a fundamental system of solutions in the form of generalized power series

the first of which corresponds to the zero root of the defining equation and is an ordinary power series, so that the solution is holomorphic in the neighborhood of the singular point x=0. The second solution is obviously non-holomorphic at the point x=0. Let us first construct a particular solution corresponding to the zero root of the defining equation.

So, we will look for a particular solution of equation (3.2) in the form

y_1 = Σ_{k=0}^∞ c_k x^k. (3.3)

Substituting (3.3) into (3.2), we get

Equating the free term to zero, we get γ c_1 - α β c_0 = 0.

Setting c_0 = 1, we obtain c_1 = α β / γ.

Equating the coefficient of x^k to zero, we find:

c_{k+1} = c_k (α + k)(β + k) / ((k + 1)(γ + k)),  k = 0, 1, 2, ...

Therefore, the required particular solution has the form:

y_1 = 1 + Σ_{k=1}^∞ [α(α+1)···(α+k-1) · β(β+1)···(β+k-1)] / [k! · γ(γ+1)···(γ+k-1)] x^k. (3.4)

The series on the right is called a hypergeometric series, since for α = 1, β = γ it turns into the geometric progression

1 + x + x^2 + ... + x^k + ... (3.5)

According to Theorem_2, series (3.4) converges for |x| < 1, as does series (3.5), and therefore represents a solution of equation (3.2) in this interval.
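The hypergeometric series is conveniently summed through the coefficient recurrence c_{k+1} = c_k·(α+k)(β+k)/((k+1)(γ+k)). The sketch below (function name ours) checks the degeneration to a geometric series for α = 1, β = γ, and one more classical special case, F(1, 1, 2; x) = -ln(1-x)/x.

```python
import math

def hypergeom(alpha, beta, gamma, x, terms=200):
    """Gauss hypergeometric series F(α, β, γ; x) for |x| < 1,
    summed via c_{k+1} = c_k (α+k)(β+k) / ((k+1)(γ+k))."""
    c, total = 1.0, 1.0
    for k in range(terms):
        c *= (alpha + k) * (beta + k) / ((k + 1) * (gamma + k))
        total += c * x**(k + 1)
    return total

# With α = 1, β = γ the series degenerates to 1 + x + x^2 + ... = 1/(1-x).
assert abs(hypergeom(1.0, 3.0, 3.0, 0.5) - 2.0) < 1e-12

# Another classical special case: F(1, 1, 2; x) = -ln(1-x)/x.
assert abs(hypergeom(1.0, 1.0, 2.0, 0.5) + math.log(0.5) / 0.5) < 1e-12
```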

The second particular solution has the form:

y_2 = x^{1-γ} (b_0 + b_1 x + b_2 x^2 + ...).

Instead of finding its coefficients by the method of undetermined coefficients, we replace the desired function in the Gauss equation using the formula

y = x^{1-γ} u. (3.6)

We obtain the Gauss equation

x(1 - x) u'' + [γ' - (α' + β' + 1) x] u' - α' β' u = 0,

in which the role of the parameters α, β and γ is played by α' = α - γ + 1, β' = β - γ + 1 and γ' = 2 - γ.

Therefore, constructing the particular solution of this equation corresponding to the zero root of its defining equation and substituting it into (3.6), we obtain the second particular solution of the Gauss equation in the form:

y_2 = x^{1-γ} F(α - γ + 1, β - γ + 1, 2 - γ; x),

where F(α, β, γ; x) denotes the hypergeometric series (3.4).

The general solution of the Gauss equation (3.2) is:

y = C_1 F(α, β, γ; x) + C_2 x^{1-γ} F(α - γ + 1, β - γ + 1, 2 - γ; x).
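As a numerical sanity check of the second solution (a sketch: the parameter values are hypothetical, and the derivatives are approximated by finite differences, so the tolerance is loose), one can verify that y_2 = x^(1-γ)·F(α-γ+1, β-γ+1, 2-γ; x) satisfies the Gauss equation:

```python
def F(a, b, g, x, terms=200):
    """Hypergeometric series via the coefficient recurrence."""
    c, s = 1.0, 1.0
    for k in range(terms):
        c *= (a + k) * (b + k) / ((k + 1) * (g + k))
        s += c * x**(k + 1)
    return s

alpha, beta, gamma = 0.3, 1.1, 0.4   # hypothetical values, γ not an integer

def y2(x):
    return x**(1 - gamma) * F(alpha - gamma + 1, beta - gamma + 1, 2 - gamma, x)

# Residual of x(1-x)y'' + [γ-(α+β+1)x]y' - αβ y at an interior point,
# with derivatives approximated by central finite differences.
x, h = 0.4, 1e-5
d1 = (y2(x + h) - y2(x - h)) / (2 * h)
d2 = (y2(x + h) - 2 * y2(x) + y2(x - h)) / h**2
residual = x * (1 - x) * d2 + (gamma - (alpha + beta + 1) * x) * d1 - alpha * beta * y2(x)
assert abs(residual) < 1e-3
```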

Using the constructed fundamental system of solutions to the Gauss equation in the neighborhood of the singular point x=0, one can easily construct a fundamental system of solutions to this equation in the neighborhood of the singular point x=1, which is also a regular singular point.

To this end, we move the singular point x = 1 of interest to the point t = 0, and with it the singular point x = 0 to the point t = 1, by means of the linear change of independent variable x = 1 - t.

Carrying out this substitution in the Gauss equation, we obtain

t(1 - t) y'' + [α + β - γ + 1 - (α + β + 1) t] y' - α β y = 0.

This is a Gauss equation with parameters α, β and γ' = α + β - γ + 1. In the neighborhood |t| < 1 of the singular point t = 0 it has the fundamental system of solutions

y_1 = F(α, β, α + β - γ + 1; t),  y_2 = t^{γ-α-β} F(γ - β, γ - α, γ - α - β + 1; t).

Returning to the variable x, i.e. setting t = 1 - x, we obtain a fundamental system of solutions of the original Gauss equation in the neighborhood |x - 1| < 1 of the singular point x = 1:

y_1 = F(α, β, α + β - γ + 1; 1 - x),  y_2 = (1 - x)^{γ-α-β} F(γ - β, γ - α, γ - α - β + 1; 1 - x).

The general solution of the Gauss equation (3.2) in this region is y = C_1 y_1 + C_2 y_2, where C_1 and C_2 are arbitrary constants.

  4. Application of the method of integrating ordinary differential equations using series in practice.

Example_1. (No. 691) Calculate the first few coefficients of the series (up to the coefficient of x^4 inclusive) with initial conditions

From the initial conditions we determine the first coefficients. Now let us find the remaining coefficients:

Example_2. (No. 696) Calculate the first few coefficients of the series (up to the coefficient of x^4 inclusive) with initial conditions

Solution: We will look for a solution to the equation in the form

We substitute the resulting expressions into the original equation:

Representing the right-hand side as a power series and equating the coefficients of the same powers of x on both sides of the equation, we obtain:

Since, according to the condition, we need the coefficients of the series up to the coefficient of x^4 inclusive, it is enough to calculate the first few of them.

From the initial conditions the first two coefficients are determined. Now let us find the remaining coefficients:

Consequently, the solution to the equation will be written in the form

Example_3. (No. 700) Find linearly independent solutions of the equation in the form of power series. If possible, express the sums of the resulting series in terms of elementary functions.

Solution. We will look for a solution to the equation in the form of a series

Differentiating this series twice and substituting it into this equation, we have

Let us write out the first few terms of the series in the resulting equation:

Equating the coefficients of equal powers of x to zero, we obtain a system of equations for determining the coefficients:

………………………………….

From these equations we find

Let us assume that then only the coefficients will be different from zero. We get that

One solution of the equation has been constructed

We obtain the second solution, linearly independent of the one found, by assuming. Then only the coefficients will be different from zero:

The series representing y_1 and y_2 converge for any value of x and are analytic functions. Thus, all solutions of the original equation are analytic functions for all values of x. All solutions are expressed by the formula y = C_1 y_1 + C_2 y_2, where C_1, C_2 are arbitrary constants:

Since the sums of the resulting series are easily expressed in terms of elementary functions, the solution is written as:

Example_4. (No. 711) Solve the equation 2x^2 y'' + (3x - 2x^2) y' - (x + 1) y = 0.

Solution. The point x = 0 is a regular singular point of this equation. We compose the defining equation: 2λ(λ - 1) + 3λ - 1 = 0. Its roots are λ_1 = 1/2 and λ_2 = -1. We look for the solution of the original equation corresponding to the root λ = λ_1 in the form

y = x^{1/2} (c_0 + c_1 x + c_2 x^2 + ...).

Substituting y, y' and y'' into the original equation, we have

From here, cancelling x^{1/2}, we get

Equating the coefficients of the same powers of x, we obtain the equations for determining c_1, c_2, ...:

c_k = 2 c_{k-1} / (2k + 3),  k = 1, 2, ...

Setting c_0 = 1, we find c_1 = 2/5, c_2 = 2^2/(5·7), and in general c_k = 2^k / (5·7···(2k+3)).

Thus, the first particular solution is

y_1(x) = x^{1/2} (1 + Σ_{k=1}^∞ 2^k x^k / (5·7···(2k+3))).

We look for the solution of the original equation corresponding to the root λ = λ_2 in the form

y = x^{-1} (c_0 + c_1 x + c_2 x^2 + ...).

Substituting this expression into the original equation and equating the coefficients of the same powers of x, we get c_k = c_{k-1}/k, or c_k = c_0/k!. Setting c_0 = 1, we find

y_2(x) = x^{-1} Σ_{k=0}^∞ x^k / k! = e^x / x.

We write the general solution of the original equation in the form y = C_1 y_1(x) + C_2 y_2(x), where C_1 and C_2 are arbitrary constants.
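Example 4 can also be verified numerically. Substituting the generalized series into 2x²y'' + (3x - 2x²)y' - (x + 1)y = 0 gives the recurrence [2(k+λ)(k+λ-1) + 3(k+λ) - 1]·c_k = [2(k-1+λ) + 1]·c_{k-1}; the sketch below (names are ours) builds the truncated series for both roots λ = 1/2 and λ = -1 and checks that the equation's residual is tiny.

```python
def frobenius_coeffs(lam, n_terms):
    """Coefficients c_k of y = x^λ Σ c_k x^k for
    2x²y'' + (3x - 2x²)y' - (x+1)y = 0, c_0 = 1."""
    c = [1.0]
    for k in range(1, n_terms):
        denom = 2*(k + lam)*(k + lam - 1) + 3*(k + lam) - 1
        c.append((2*(k - 1 + lam) + 1) * c[k - 1] / denom)
    return c

def derivatives(c, lam, x):
    """y, y', y'' of the truncated generalized series at x > 0."""
    y  = sum(ck * x**(k + lam) for k, ck in enumerate(c))
    y1 = sum(ck * (k + lam) * x**(k + lam - 1) for k, ck in enumerate(c))
    y2 = sum(ck * (k + lam) * (k + lam - 1) * x**(k + lam - 2)
             for k, ck in enumerate(c))
    return y, y1, y2

for lam in (0.5, -1.0):                 # the two roots of the defining equation
    c = frobenius_coeffs(lam, 30)
    y, y1, y2 = derivatives(c, lam, 0.3)
    residual = 2*0.3**2*y2 + (3*0.3 - 2*0.3**2)*y1 - (0.3 + 1)*y
    assert abs(residual) < 1e-10
```

For λ = 1/2 the recurrence reduces to c_k = 2c_{k-1}/(2k+3), in agreement with the coefficients found above.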

Conclusion

Equations containing unknown functions and their derivatives raised to powers higher than the first, or entering in some more complicated way, are often very difficult to solve.

In recent years, such differential equations have attracted increasing attention. Since their solutions are often very complex and difficult to represent by simple formulas, a significant part of modern theory is devoted to the qualitative analysis of their behavior, i.e. the development of methods that make it possible, without solving the equation, to say something significant about the nature of the solutions as a whole: for example, that they are all bounded, or periodic, or depend in a certain way on the coefficients.

In this course work, the method of integrating differential equations using power series and generalized power series was analyzed.

Literature:

  1. Matveev N.V. Methods for Integrating Ordinary Differential Equations. 4th ed., revised and enlarged. Minsk: Vysheishaya Shkola, 1974. 768 pp., ill.
  2. Agafonov S.A., German A.D., Muratova T.V. Differential Equations: Textbook for universities / Ed. V.S. Zarubin, A.P. Krishchenko. 3rd ed., stereotyped. Moscow: Bauman MSTU Publishing House, 2004. 352 pp.
  3. Bugrov Ya.S., Nikolsky S.M. Higher Mathematics. Vol. 3: Differential Equations. Multiple Integrals. Series. Functions of a Complex Variable: Textbook for universities: In 3 vols. / Ed. V.A. Sadovnichy. 6th ed., stereotyped. Moscow: Drofa, 2004. 512 pp., ill.
  4. Samoilenko A.M., Krivosheya S.A., Perestyuk N.A. Differential Equations: Examples and Problems: Textbook. 2nd ed., revised. Moscow: Vysshaya Shkola, 1989. 383 pp., ill.
  5. Filippov A.F. Collection of Problems on Differential Equations: Textbook for universities. Moscow: Fizmatgiz, 1961. 100 pp., ill.
