
Differential Equations


1 Differential Equations
Brannan Copyright © 2010 by John Wiley & Sons, Inc. All rights reserved. Chapter 03: Systems of Two First Order Equations

2 Chapter 3 Systems of Two First Order Equations
In this chapter we consider only systems of two first order equations, and we focus most of our attention on systems of the simplest kind: two first order linear equations with constant coefficients. Our goals are to show what kinds of solutions such a system may have and how the solutions can be determined and displayed graphically, so that they can be easily visualized.

3 Chapter 3 Systems of Two First Order Equations
3.1 Systems of Two Linear Algebraic Equations 3.2 Systems of Two First Order Linear Differential Equations 3.3 Homogeneous Linear Systems with Constant Coefficients 3.4 Complex Eigenvalues 3.5 Repeated Eigenvalues 3.6 A Brief Introduction to Nonlinear Systems 3.7 Numerical Methods for Systems of First Order Equations

4 3.1 Systems of Two Linear Algebraic Equations

5 Solutions to a system of equations
There are three distinct possibilities for two straight lines in a plane: they may intersect at a single point, they may be parallel and nonintersecting, or they may be coincident. Examples: 1. 3x1 − x2 = 8, x1 + 2x2 = 5. 2. x1 + 2x2 = 1, x1 + 2x2 = 5. 3. 2x1 + 4x2 = 10, x1 + 2x2 = 5.

6 Cramer’s Rule – THEOREM 3.1.1
The system a11x1 + a12x2 = b1, a21x1 + a22x2 = b2 has a unique solution if and only if the determinant Δ = a11a22 − a12a21 ≠ 0. The solution is given by x1 = (b1a22 − a12b2)/Δ, x2 = (a11b2 − a21b1)/Δ. If Δ = 0, then the system has either no solution or infinitely many.
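
A minimal numerical sketch of Theorem 3.1.1, applied to Example 1 from the previous slide; the NumPy comparison call at the end is included only as a cross-check.

```python
import numpy as np

def cramer_2x2(A, b):
    """Solve a 2x2 system A x = b by Cramer's rule (Theorem 3.1.1)."""
    delta = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # det(A)
    if delta == 0:
        raise ValueError("det(A) = 0: no unique solution")
    x1 = (b[0] * A[1, 1] - A[0, 1] * b[1]) / delta
    x2 = (A[0, 0] * b[1] - A[1, 0] * b[0]) / delta
    return np.array([x1, x2])

# Example 1 from the slide: 3x1 - x2 = 8, x1 + 2x2 = 5
A = np.array([[3.0, -1.0], [1.0, 2.0]])
b = np.array([8.0, 5.0])
print(cramer_2x2(A, b))        # [3. 1.]
print(np.linalg.solve(A, b))   # same answer, for comparison
```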

7 Matrix Method Consider the coefficient matrix A.
If A−1 exists, then A is called nonsingular or invertible. On the other hand, if A−1 does not exist, then A is said to be singular or noninvertible. The solution of Ax = b is then x = A−1b.

8 Homogeneous System THEOREM 3.1.2 The homogeneous system Ax = 0 always has the trivial solution x1 = 0, x2 = 0, and this is the only solution when det(A) ≠ 0. Nontrivial solutions exist if and only if det(A) = 0. In this case, unless A = 0, all solutions are proportional to any nontrivial solution; in other words, they lie on a line through the origin. If A = 0, then every point in the x1x2-plane is a solution of the system. Example: Solve the system 3x1 − x2 = 0, x1 + 2x2 = 0.

9 Eigenvalues and Eigenvectors
Eigenvalues (λ) of the matrix A are the solutions of Ax = λx with x ≠ 0. The eigenvector x corresponding to the eigenvalue λ is obtained by solving Ax = λx for x for that λ. For a 2×2 matrix, Ax = λx reduces to (A − λI)x = 0, which has a nontrivial solution only if det(A − λI) = 0.

10 Characteristic equation
The characteristic equation of the matrix A is λ2 − (a11 + a22)λ + a11a22 − a12a21 = 0. Its solutions are the eigenvalues. The two eigenvalues λ1 and λ2 may be real and different, real and equal, or complex conjugates.
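
A short sketch of the characteristic equation in code. The matrix below is only an illustrative choice (the slide's example matrices are not reproduced in this transcript); np.roots solves the quadratic and np.linalg.eig gives the eigenpairs directly.

```python
import numpy as np

# Illustrative 2x2 matrix (not necessarily one of the slide's examples)
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])

# Characteristic equation: lambda^2 - (a11 + a22)*lambda + (a11*a22 - a12*a21) = 0
tr, det = np.trace(A), np.linalg.det(A)
roots = np.roots([1.0, -tr, det])      # eigenvalues as roots of the quadratic
vals, vecs = np.linalg.eig(A)          # eigenvalues and eigenvectors directly

print(np.sort(roots))                  # [-1.  3.]
print(np.sort(vals))                   # same eigenvalues
print(vecs)                            # columns are the corresponding eigenvectors
```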

11 Examples Find the eigenvalues and eigenvectors of the matrix A for each of the three example matrices shown on the slide (Examples 1–3).

12 THEOREM 3.1.3 Let A have real or complex eigenvalues λ1 and λ2 such that λ1 ≠ λ2, and let the corresponding eigenvectors be x1 and x2. If X is the matrix with first and second columns taken to be x1 and x2, respectively, then det(X) ≠ 0.

13 3.2 Systems of Two First Order Linear Differential Equations
du/dt = Ku + b, where K is a given 2×2 matrix, b is a given 2×1 vector, and u is a 2×1 vector of unknowns whose derivative is du/dt. We solve this system subject to a given initial condition u(0) = u0, a 2×1 vector with given values.

14 Example Here the matrix K and the vector b are as given on the slide.

15 Terminology The components of u are scalar-valued functions of t, so we can plot their graphs. Plots of u1 and u2 versus t are called component plots. The variables u1 and u2 are often called state variables, since their values at any time describe the state of the system. Similarly, the vector u = u1i + u2j is called the state vector of the system. The u1u2-plane itself is called the state space. If there are only two state variables, the u1u2-plane may be called the state plane or, more commonly, the phase plane.

16 Direction Fields The right side of a system of first order equations du/dt = Ku + b defines a vector field that governs the direction and speed of motion of the solution at each point in the phase plane. Because the vectors generated by a vector field for a specific system often vary significantly in length, it is customary to scale each nonzero vector so that they all have the same length. These vectors are then referred to as direction field vectors for the system and the resulting picture is called the direction field.
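
A minimal sketch of a scaled direction field with matplotlib; the matrix K and vector b below are placeholder values, not the ones from the example slides.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative system du/dt = K u + b (K and b are placeholders)
K = np.array([[-1.0, 1.0], [1.0, -2.0]])
b = np.array([2.0, 0.0])

u1, u2 = np.meshgrid(np.linspace(-4, 4, 20), np.linspace(-4, 4, 20))
du1 = K[0, 0] * u1 + K[0, 1] * u2 + b[0]
du2 = K[1, 0] * u1 + K[1, 1] * u2 + b[1]

# Scale every nonzero vector to the same length, as described above
norm = np.hypot(du1, du2)
norm[norm == 0] = 1.0
plt.quiver(u1, u2, du1 / norm, du2 / norm)
plt.xlabel("u1"); plt.ylabel("u2"); plt.title("Direction field")
plt.show()
```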

17 Phase Portraits Using a computer, we can generate solution trajectories. A plot of a representative sample of the trajectories, including any constant solutions, is called a phase portrait of the system of equations.

18 Phase portrait for the example

19 General Solutions of Two First Order Linear Equations
THEOREM 3.2.1 Existence and Uniqueness of Solutions. Let each of the functions p11, p12, p21, p22, g1, and g2 be continuous on an open interval I: α < t < β, let t0 be any point in I, and let x0 and y0 be any given numbers. Then there exists a unique solution of the system x' = p11(t)x + p12(t)y + g1(t), y' = p21(t)x + p22(t)y + g2(t) that also satisfies the initial conditions x(t0) = x0, y(t0) = y0. Further, the solution exists throughout the interval I.

20 First Order Linear Equations
In matrix form, we write dx/dt = P(t)x + g(t). The system above is called a first order linear system of dimension two because it consists of two first order equations and because its state space (the xy-plane) is two-dimensional. Further, if g(t) = 0 for all t, that is, g1(t) = g2(t) = 0 for all t, then the system is said to be homogeneous. Otherwise, it is nonhomogeneous.

21 Linear Autonomous Systems
If the right side of the system does not depend explicitly on the independent variable t, the system is said to be autonomous. Then the coefficient matrix P and the components of the vector g must be constants. We use the notation dx/dt = Ax + b, where A is a constant matrix and b is a constant vector, to denote autonomous linear systems.

22 Critical points of linear autonomous system
For the linear autonomous system, we find the equilibrium solutions, or critical points, by setting dx/dt equal to zero. Hence any solution of Ax = −b is a critical point of the system. If the coefficient matrix A has an inverse, as we usually assume, then Ax = −b has a single solution, namely, x=−A−1b. This is then the only critical point of the system. However, if A is singular, then Ax = −b has either no solution or infinitely many.
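
A small sketch of computing the critical point xeq = −A−1b; the values of A and b are illustrative placeholders.

```python
import numpy as np

# Critical point of dx/dt = A x + b when A is nonsingular: x_eq = -A^{-1} b
A = np.array([[-1.0, 1.0], [1.0, -2.0]])   # illustrative values
b = np.array([2.0, 0.0])

x_eq = -np.linalg.solve(A, b)              # preferred over forming A^{-1} explicitly
print(x_eq)                                # [4. 2.]
print(A @ x_eq + b)                        # ~ [0. 0.], confirming an equilibrium
```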

23 Transformation of a Second Order Equation to a System of First Order Equations
Consider the second order equation y'' + p(t)y' + q(t)y = g(t), where p, q, and g are given functions that we assume to be continuous on an interval I. Substituting x1 = y and x2 = y', this equation is transformed into a system of two first order equations: x1' = x2, x2' = −q(t)x1 − p(t)x2 + g(t).
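
A brief sketch of this substitution as code, packaging the resulting right-hand side in a form suitable for a numerical solver; the coefficient functions at the bottom are placeholders, not the slide's example.

```python
import numpy as np

def second_order_as_system(p, q, g):
    """Return f(t, x) for x' = f(t, x) with x = [y, y'], from y'' + p(t)y' + q(t)y = g(t)."""
    def f(t, x):
        x1, x2 = x
        return np.array([x2, -q(t) * x1 - p(t) * x2 + g(t)])
    return f

# Illustrative coefficients (placeholders only)
f = second_order_as_system(p=lambda t: 0.0,
                           q=lambda t: 2.0,
                           g=lambda t: 3.0 * np.sin(t))
print(f(0.0, np.array([2.0, -2.0])))   # derivative of the state at t = 0: [-2. -4.]
```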

24 Example Consider the differential equation
u'' u' + 2u = 3 sin t, with initial conditions u(0) = 2, u'(0) = −2. Transform this problem into an equivalent one for a system of first order equations, and write this initial value problem in matrix notation. Answer:

25 Component plots of the solution to this initial value problem

26 3.3 Homogeneous Linear Systems with Constant Coefficients
Reducing x' = Ax + b to x' = Ax If A has an inverse, then the only critical, or equilibrium, point of x' = Ax + b is xeq = −A−1b. In such cases it is convenient to shift the origin of the phase plane to the critical point using the coordinate transformation x = xeq + x˜. Substituting, we get dx˜/dt = Ax˜. Therefore, if x = φ(t) is a solution of the homogeneous system x' = Ax, then the solution of the nonhomogeneous system x' = Ax + b is given by x = φ(t) + xeq = φ(t) − A−1b.

27 The Eigenvalue Method for Solving x' = Ax
Consider a general system of two first order linear homogeneous differential equations with constant coefficients, dx/dt = Ax. Then x = eλtv is a solution of dx/dt = Ax provided that λ is an eigenvalue and v is a corresponding eigenvector of the coefficient matrix A. Hence Av = λv, or (A − λI)v = 0.

28 THEOREM 3.3.1 - Principle of Superposition
Suppose that x1(t) = eλ1tv1 and x2(t) = eλ2tv2 are solutions of dx/dt = Ax. Then the expression x = c1x1(t) + c2x2(t), where c1 and c2 are arbitrary constants, is also a solution. We assume that λ1 and λ2 are real and different.

29 Example Consider the system dx/dt = Ax, with the matrix A given on the slide.
Find solutions of the system, and then find the particular solution that satisfies the initial condition x(0) given on the slide. Answer:

30 Wronskian determinant
The determinant W[x1, x2](t) = det[x1(t) x2(t)] is called the Wronskian determinant or, more simply, the Wronskian of the two vectors x1 and x2. If x1(t) = eλ1tv1 and x2(t) = eλ2tv2, then their Wronskian is W[x1, x2](t) = e(λ1+λ2)t det[v1 v2]. Two solutions x1(t) and x2(t) of dx/dt = Ax whose Wronskian is not zero are referred to as a fundamental set of solutions. The linear combination of x1 and x2 with arbitrary coefficients c1 and c2, x = c1x1(t) + c2x2(t), is called the general solution.

31 THEOREM 3.3.2 Suppose that x1(t) and x2(t) are two solutions of dx/dt = Ax and that their Wronskian is not zero. Then x1(t) and x2(t) form a fundamental set of solutions, and the general solution is given by x = c1x1(t) + c2x2(t), where c1 and c2 are arbitrary constants. If there is a given initial condition x(t0) = x0, where x0 is any constant vector, then this condition determines the constants c1 and c2 uniquely. Note: The theorem holds when the coefficient matrix A has eigenvalues that are real and different; it is also valid when the eigenvalues are complex or repeated.
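
A minimal sketch of how the initial condition determines c1 and c2 in practice, for the case of real, distinct eigenvalues; the matrix and initial vector are illustrative assumptions.

```python
import numpy as np

A = np.array([[1.0, 1.0], [4.0, 1.0]])     # illustrative matrix (eigenvalues 3 and -1)
x0 = np.array([2.0, -1.0])                 # illustrative initial condition x(0)

lam, V = np.linalg.eig(A)                  # columns of V are eigenvectors v1, v2
assert abs(np.linalg.det(V)) > 1e-12       # nonzero Wronskian at t = 0 => fundamental set

c = np.linalg.solve(V, x0)                 # x(0) = c1 v1 + c2 v2 determines c1, c2

def x(t):
    """General solution x(t) = c1 e^{lam1 t} v1 + c2 e^{lam2 t} v2 with the fitted c's."""
    return V @ (c * np.exp(lam * t))

print(x(0.0))                              # reproduces x0
```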

32 EXAMPLE - A Rockbed Heat Storage System Revisited
Consider again the greenhouse/rockbed heat storage problem with coordinates centered at the critical point, written as dx/dt = Ax with the matrix A given on the slide. Find the general solution of this system. Then plot a direction field, a phase portrait, and several component plots of the system, and discuss the results.

33 Answer The eigenvalues are λ1 = −7/4 and λ2 = −1/8; the general solution is shown on the slide.
A direction field and phase portrait for the system are shown on the next slide.

34 Nodal Sources and Nodal Sinks
The pattern of trajectories in the figure is typical of all second order systems x' = Ax whose eigenvalues are real, different, and of the same sign. The origin is called a node for such a system. This is the phase portrait for the system in the previous example.

35 Nodal Sources and Nodal Sinks
If the eigenvalues were positive rather than negative, then the trajectories would be similar but traversed in the outward direction. Nodes are asymptotically stable if the eigenvalues are negative and unstable if the eigenvalues are positive. Asymptotically stable nodes and unstable nodes are also referred to as nodal sinks and nodal sources respectively.

36 Example Consider the system dx/dt = Ax, with the matrix A given on the slide.
Find the general solution and draw a phase portrait.

37 Answer The eigenvalues are λ1 = 3 and λ2 = −1; the general solution is shown on the slide.
A direction field and phase portrait for the system are shown on the next slide.

38 Saddle Points The pattern of trajectories in the figure is typical of all second order systems x' = Ax for which the eigenvalues are real and of opposite signs. The origin is called a saddle point in this case. Saddle points are always unstable because almost all trajectories depart from them as t increases.

39 3.4 Complex Eigenvalues Consider a two-dimensional system x' = Ax with complex conjugate eigenvalues λ1,2 = μ ± iν, ν ≠ 0. To solve the system, find the eigenvalues and eigenvectors, observing that they are complex conjugates. Then write down x1(t) and separate it into its real and imaginary parts u(t) and w(t), respectively. Finally, form a linear combination of u(t) and w(t), x = c1u(t) + c2w(t). Of course, if complex-valued solutions are acceptable, you can simply use the solutions x1(t) and x2(t). Thus Theorem 3.3.2 is also valid when the eigenvalues are complex.
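
A small sketch of extracting the real solutions u(t) and w(t) from one complex eigenpair; the matrix below is an illustrative choice with eigenvalues −0.5 ± 2i, not the example on the following slides.

```python
import numpy as np

A = np.array([[-0.5, 2.0], [-2.0, -0.5]])   # illustrative: eigenvalues -0.5 +/- 2i

lam, V = np.linalg.eig(A)
lam1, v1 = lam[0], V[:, 0]                  # one member of the complex-conjugate pair

def u(t):   # real part of e^{lam1 t} v1
    return np.real(np.exp(lam1 * t) * v1)

def w(t):   # imaginary part of e^{lam1 t} v1
    return np.imag(np.exp(lam1 * t) * v1)

# x(t) = c1 u(t) + c2 w(t) is a real general solution; spot-check that u satisfies x' = A x
t, h = 0.3, 1e-6
deriv = (u(t + h) - u(t - h)) / (2 * h)     # numerical derivative of u at t
print(np.allclose(deriv, A @ u(t), atol=1e-5))   # True
```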

40 Example Q: Consider the system
Find a fundamental set of solutions and display them graphically in a phase portrait and component plots. A: The general solution is shown on the slide.

41 Component plots for the solutions u(t) and w(t) of the system

42 A direction field and phase portrait for the system

43 Spiral Points The phase portrait in the previous figure is typical of all two-dimensional systems x' = Ax whose eigenvalues are complex with a negative real part. The origin is called a spiral point and is asymptotically stable because all trajectories approach it as t increases. Such a spiral point is often called a spiral sink. For a system whose eigenvalues have a positive real part, the trajectories are similar, but the direction of motion is away from the origin and the trajectories become unbounded. In this case, the origin is unstable and is often called a spiral source.

44 Centers If the real part of the eigenvalues is zero, then there is no exponential factor in the solution and the trajectories neither approach the origin nor become unbounded. Instead, they repeatedly traverse a closed curve about the origin. An example of this behavior can be seen in the accompanying figure. In this case, the origin is called a center and is said to be stable, but not asymptotically stable. In all three cases, the direction of motion may be either clockwise, as in the previous example, or counterclockwise, depending on the elements of the coefficient matrix A.

45 Summary For two-dimensional systems with real coefficients, we have now completed our description of the three main cases that can occur: 1. Eigenvalues are real and have opposite signs; x = 0 is a saddle point. 2. Eigenvalues are real and have the same sign but are unequal; x = 0 is a node. 3. Eigenvalues are complex with nonzero real part; x = 0 is a spiral point.

46 3.5 Repeated Eigenvalues An Example
Q: Consider the system x' = Ax, where A is the matrix given on the slide. Draw a direction field, a phase portrait, and typical component plots. A: The eigenvalues are λ1 = λ2 = −1. The general solution is x = c1x1(t) + c2x2(t), where x1(t) and x2(t) are shown on the slide.

47 Typical component plots for the system

48 A direction field and phase portrait for the system

49 Proper node or Star Point
It is possible to show that the only 2×2 matrices with a repeated eigenvalue and two independent eigenvectors are the diagonal matrices with the eigenvalue along the diagonal. Such matrices form a rather special class, since each of them is proportional to the identity matrix. The system in the above example is entirely typical of this class of systems. In this case the origin is called a proper node or, sometimes, a star point.

50 Repeated Eigenvalues (in general)
Consider two-dimensional linear homogeneous systems with constant coefficients, x' = Ax. Suppose that λ1 is a repeated eigenvalue of the matrix A and that there is only one independent eigenvector v1. Then one solution is x1(t) = eλ1tv1. A second solution is x2(t) = teλ1tv1 + eλ1tw, where w satisfies (A − λ1I)w = v1.
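
A brief sketch of computing a generalized eigenvector w from (A − λ1I)w = v1; the matrix is an illustrative one with a repeated eigenvalue, and least squares is used only because A − λ1I is singular.

```python
import numpy as np

A = np.array([[1.0, -1.0], [1.0, 3.0]])     # illustrative: repeated eigenvalue 2
lam = 2.0
v = np.array([1.0, -1.0])                   # the single eigenvector: (A - 2I)v = 0

# (A - lam I) is singular, so solve (A - lam I) w = v in the least-squares sense;
# any particular solution serves as a generalized eigenvector.
w, *_ = np.linalg.lstsq(A - lam * np.eye(2), v, rcond=None)
print((A - lam * np.eye(2)) @ w)            # ~ [ 1. -1.], i.e. v

def x2(t):
    """Second solution x2(t) = t e^{lam t} v + e^{lam t} w."""
    return t * np.exp(lam * t) * v + np.exp(lam * t) * w
```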

51 Repeated Eigenvalues (Ctd.)
The vector w is called a generalized eigenvector corresponding to the eigenvalue λ1. In the case where the 2×2 matrix A has a repeated eigenvalue and only one eigenvector, the origin is called an improper or degenerate node.

52 Example Q: Consider the system
Find the eigenvalues and eigenvectors of the coefficient matrix, and then find the general solution of the system. Draw a direction field, phase portrait, and component plots. A: The eigenvalues are λ1 = λ2 = −1/2. The general solution is x = c1x1(t) + c2x2(t), where x1(t) and x2(t) are shown on the slide.

53 A direction field and phase portrait for the system
improper or degenerate node

54 Typical plots of x1 versus t for the system

55 Summary of Results

56 3.6 A Brief Introduction to Nonlinear Systems
In Section 3.2, we introduced the general two-dimensional first order linear system dx/dt = P(t)x + g(t). Of course, two-dimensional systems that are not of this linear form may also occur. Such systems are said to be nonlinear.

57 THEOREM 3.6.1 - Existence and Uniqueness of Solutions.
Let each of the functions f and g and the partial derivatives ∂f /∂x, ∂f /∂y, ∂g/∂x, and ∂g/∂y be continuous in a region R of txy-space defined by α < t < β, α1 < x < β1, α2 < y < β2, and let the point (t0, x0, y0) be in R. Then there is an interval |t − t0| < h in which there exists a unique solution of the system of differential equations that also satisfies the initial conditions x(t0) = x0, y(t0) = y0.

58 Autonomous Systems It is usually impossible to solve nonlinear systems exactly by analytical methods, so for such systems graphical methods and numerical approximations become even more important. In the next section, we will extend our discussion of approximate numerical methods to two-dimensional systems. Here we consider systems for which direction fields and phase portraits are of particular importance: systems that do not depend explicitly on the independent variable t. In other words, the functions f and g depend only on x and y and not on t. Such a system is called autonomous and can be written in the form dx/dt = f(x, y), dy/dt = g(x, y).

59 Equilibrium points or Critical points
To find equilibrium, or constant, solutions of the autonomous system, we set dx/dt and dy/dt equal to zero and solve the resulting equations f(x, y) = 0, g(x, y) = 0 for x and y. Any solution of these equations is a point in the phase plane that is the trajectory of an equilibrium solution. Such points are called equilibrium points or critical points. Depending on the particular forms of f and g, the nonlinear system can have any number of critical points, ranging from none to infinitely many.

60 Example Consider the system dx/dt = x − y, dy/dt = 2x − y − x2.
Find a function H(x, y) such that the trajectories of the system lie on the level curves of H. Find the critical points and draw a phase portrait for the given system. Describe the behavior of its trajectories.
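
As a numerical cross-check on the critical points asked for here, a minimal sketch using scipy.optimize.fsolve (an assumed tool choice, not part of the slides):

```python
import numpy as np
from scipy.optimize import fsolve

# Nonlinear system from this example: dx/dt = x - y, dy/dt = 2x - y - x^2
def F(z):
    x, y = z
    return [x - y, 2 * x - y - x ** 2]

# fsolve returns one root per starting guess; different guesses pick out different critical points
print(fsolve(F, [0.2, 0.2]))   # ~ (0, 0)
print(fsolve(F, [1.5, 1.5]))   # ~ (1, 1)
```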

61 Example (Ctd.) To find the critical points, solve the equations x − y = 0, 2x − y − x2 = 0. The critical points are (0, 0) and (1, 1). To determine the trajectories, note that for this system the equation dy/dx = (dy/dt)/(dx/dt) becomes dy/dx = (2x − y − x2)/(x − y). This equation is exact, and so solutions satisfy H(x, y) = x2 − xy + (1/2)y2 − (1/3)x3 = c, where c is an arbitrary constant.
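
A short sketch that draws the level curves of the function H found above, which is one way to produce the phase portrait shown on the next slide.

```python
import numpy as np
import matplotlib.pyplot as plt

# Trajectories lie on level curves of H(x, y) = x^2 - x y + y^2/2 - x^3/3
x, y = np.meshgrid(np.linspace(-1.5, 2.5, 400), np.linspace(-1.5, 2.5, 400))
H = x ** 2 - x * y + 0.5 * y ** 2 - x ** 3 / 3.0

plt.contour(x, y, H, levels=30)
plt.plot([0, 1], [0, 1], "ko")          # critical points (0, 0) and (1, 1)
plt.xlabel("x"); plt.ylabel("y"); plt.title("Level curves of H (phase portrait)")
plt.show()
```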

62 A phase portrait for the system

63 3.7 Numerical Methods for Systems of First Order Equations
Numerical methods for approximating the solutions of initial value problems for a single first order differential equation were presented in Sections 1.3, 2.7, and 2.8; they can be extended to systems. The algorithms are the same for nonlinear and for linear equations, so we will not restrict ourselves to linear equations in this section. We consider a system of two first order equations x' = f(t, x, y), y' = g(t, x, y), with the initial conditions x(t0) = x0, y(t0) = y0. The functions f and g are assumed to satisfy the conditions of Theorem 3.6.1, so that the initial value problem above has a unique solution in some interval of the t-axis containing the point t0. We wish to determine approximate values x1, x2, ..., xn, and y1, y2, ..., yn of the solution x = φ(t), y = ψ(t) at the points tn = t0 + nh with n = 1, 2, ...

64 Euler formula The scalar Euler formula tn+1 = tn + h, xn+1 = xn + hfn is replaced for a system by tn+1 = tn + h, xn+1 = xn + hf(tn, xn, yn), yn+1 = yn + hg(tn, xn, yn).
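
A minimal sketch of the vector Euler formula above, applied to the initial value problem from the worked example on slide 66.

```python
import numpy as np

def euler_system(f, g, t0, x0, y0, h, n_steps):
    """Vector Euler method: t_{n+1} = t_n + h, x_{n+1} = x_n + h f_n, y_{n+1} = y_n + h g_n."""
    t, x, y = t0, x0, y0
    for _ in range(n_steps):
        fx, gy = f(t, x, y), g(t, x, y)
        x, y = x + h * fx, y + h * gy
        t = t + h
    return t, x, y

# Example on slide 66: x' = -x + 4y, y' = x - y, x(0) = 2, y(0) = -0.5, h = 0.1 up to t = 0.2
f = lambda t, x, y: -x + 4 * y
g = lambda t, x, y: x - y
print(euler_system(f, g, 0.0, 2.0, -0.5, 0.1, 2))
# exact solution at t = 0.2: x ~ 1.4339, y ~ -0.1063
```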

65 Runge–Kutta method The Runge–Kutta method can also be extended to a system; the formulas for the step from tn to tn+1 are shown on the slide.
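
A minimal sketch of one classical fourth-order Runge–Kutta step for a two-dimensional system written in vector form, again using the example system from the next slide.

```python
import numpy as np

def rk4_step(F, t, u, h):
    """One classical Runge-Kutta step for u' = F(t, u), with u a 2-vector (x, y)."""
    k1 = F(t, u)
    k2 = F(t + h / 2, u + h * k1 / 2)
    k3 = F(t + h / 2, u + h * k2 / 2)
    k4 = F(t + h, u + h * k3)
    return u + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Same example system in vector form; a single step with h = 0.2 reaches t = 0.2
F = lambda t, u: np.array([-u[0] + 4 * u[1], u[0] - u[1]])
u = rk4_step(F, 0.0, np.array([2.0, -0.5]), 0.2)
print(u)     # close to the exact values (about 1.4339 and -0.1063)
```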

66 Example Determine approximate values of the solution x = φ(t), y = ψ(t) of the initial value problem x' = −x + 4y, y' = x − y, x(0) = 2, y(0) = −0.5, at the point t = 0.2. Use the Euler method with h = 0.1 and the Runge–Kutta method with h = 0.2. Compare the results with the values of the exact solution: φ(t) = (e^t + 3e^(−3t))/2, ψ(t) = (e^t − 3e^(−3t))/4.

67 Approximations to the solution of the initial value problem using the Euler method (h = 0.1) and the Runge–Kutta method (h = 0.2).

68 Summary

69 Section 3.1 Two-Dimensional Linear Algebra
Matrix notation for a linear algebraic system of two equations in two unknowns is Ax = b. If det A ≠ 0, the unique solution of Ax = b is x = A−1b. If det A = 0, Ax = b may have (i) no solution, or (ii) a straight line of solutions in the plane; in particular, if b = 0 and A ≠ 0, the solution set is a straight line passing through the origin. The eigenvalue problem: (A − λI)x = 0. The eigenvalues of A are solutions of the characteristic equation det(A − λI) = 0. An eigenvector for the eigenvalue λ is a nonzero solution of (A − λI)x = 0. Eigenvalues may be real and different, real and equal, or complex conjugates.

70 Section 3.2 Systems of Two First Order Linear Equations

71 Section 3.3 Homogeneous Systems with Constant Coefficients: x' = Ax

72 Section 3.4 Complex Eigenvalues
If the eigenvalues of A are μ ± iν, ν ≠ 0, with corresponding eigenvectors a ± ib, a fundamental set of real vector solutions of x' = Ax consists of Re{exp[(μ + iν)t](a + ib)} = exp(μt)(a cos νt − b sin νt) and Im{exp[(μ + iν)t](a + ib)} = exp(μt)(a sin νt + b cos νt). If μ ≠ 0, then the critical point (the origin) is a spiral point. If μ = 0, then the critical point is a center.

73 Section 3.5 Repeated Eigenvalues
If A has a single repeated eigenvalue λ, then a general solution of x' = Ax is (i) x = c1eλtv1 + c2eλtv2 if v1 and v2 are independent eigenvectors, or (ii) x = c1eλtv + c2eλt (w + tv), where (A − λI)w = v if v is the only eigenvector of A. The critical point at the origin is a proper node if there are two independent eigenvectors, and an improper or degenerate node if there is only one eigenvector.

74 Section 3.6 Nonlinear Systems
Nonautonomous: x' = f(t, x). Autonomous: x' = f(x). Theorem 3.6.1 provides conditions that guarantee, locally in time, existence and uniqueness of solutions to the initial value problem x' = f(t, x), x(t0) = x0. Examples of two-dimensional nonlinear autonomous systems suggest that locally their solutions behave much like solutions of linear systems.

75 Section 3.7 Numerical Approximation Methods for Systems
The Euler and Runge–Kutta methods described in Chapters 1 and 2 are extended to systems of first order equations, and are illustrated for a typical two-dimensional system.

