Iteration Methods: “Mini-Lecture” on a method to solve problems by iteration. Ch. 4 (Nonlinear Oscillations & Chaos).

2 Iteration Methods
“Mini-Lecture” on a method to solve problems by iteration. Ch. 4 (Nonlinear Oscillations & Chaos). Some nonlinear problems are solved by iteration (Sect. 4.7 & also the Ch. 4 homework!). Direct (linear) iteration might or might not be efficient! In some cases it might not even converge, especially if you make a poor first guess! A method which is often more efficient (converges in fewer iterations) than direct iteration is NEWTON’S METHOD.

3 Linear Iteration
A typical problem (as in Sect. 4.7 or in the Ch. 4 homework) may require finding the roots x of a nonlinear algebraic equation:
f(x) = 0 (1)
where f(x) = some nonlinear function of x.
Linear (direct) Iteration Method:
– First, rewrite f(x) = 0 in the form x = g(x) (2)
– Often there is no one, unique way to do this! However, one can usually do it. (2) is equivalent to (1)! Requiring this equivalence defines g(x).
– Goal: Find the x’s which satisfy (2)

4 f(x) = 0 (1) ⇔ x = g(x) (2)
Choose an initial approximation x₀ which (approximately) satisfies (2).
– A judicious choice of x₀ (first guess) is needed, or the following scheme might not converge!
– How to choose x₀? As an educated guess, DRAW A GRAPH to get an approximate root, ...
– Put x₀ into the right side of (2) to generate a new approximation → x₁. That is: x₁ = g(x₀)
– Repeat to generate a new approximation → x₂. That is: x₂ = g(x₁)

5 f(x) = 0 (1) ⇔ x = g(x) (2)
At the nth iteration we have: xₙ₊₁ = g(xₙ)
Goal: Want the ratio |xₙ₊₁ - xₙ|/|xₙ₊₁| < ε, where ε = some specified small number.
Mathematicians have developed criteria (& proved theorems!) for
– when this linear iteration process will converge,
– how large the error is on the nth iteration,
– etc., etc.
We aren’t concerned with these here! See any numerical analysis book! Take Physics 4301!
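The scheme and stopping rule above translate directly into code. Here is a minimal sketch of linear (fixed-point) iteration in Python; the function name `fixed_point` and the cos(x) test problem are illustrative choices, not from the lecture:

```python
import math

def fixed_point(g, x0, eps=1e-6, max_iter=100):
    """Direct (linear) iteration: x_{n+1} = g(x_n), stopping when the
    relative change |x_{n+1} - x_n| / |x_{n+1}| drops below eps."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) / abs(x_new) < eps:
            return x_new
        x = x_new
    raise RuntimeError("linear iteration did not converge")

# Classic test problem: the fixed point of cos, i.e. the root of x = cos(x)
root = fixed_point(math.cos, 1.0)
print(root)  # ≈ 0.739085
```

Note how a poor choice of g(x) (or of x₀) would make this loop exhaust `max_iter` instead of returning, which is exactly the non-convergence warned about above.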

6 Newton’s Method
Even if the linear iteration method converges, depending on the function f(x) (or g(x)) it might converge very slowly & thus be very inefficient! A better method (more efficient, usually faster convergence!) is NEWTON’S METHOD.
We still want to solve f(x) = 0 (1), and we still do this by first rewriting (1) as x = g(x) (2). Newton’s Method really just makes a very special & judicious choice for the (up to now arbitrary) function g(x)! (Back to this point soon!)

7 We want to solve: f(x) = 0 (1)
Suppose we have very good insight & can make a very good initial approximation (first guess) → x₀ for the x satisfying (1).
Even though x₀ is a good first guess, it still won’t be an exact zero of (1)!
⇒ Let x be the “exact root” of (1) & do a Taylor series expansion of f about the first guess x₀! Assume that x₀ is near enough to the true root x that we can stop at the linear term.

8 That is: f(x) ≈ f(x₀) + (x - x₀)(df/dx)₀ + ... (3)
– Could take the expansion to higher orders if desired. Usually this is not necessary!
We still want to solve: f(x) = 0 (1)
(1) & (3) together ⇒ f(x₀) + (x - x₀)(df/dx)₀ ≈ 0 (4)
Solve (4) for x: x = x₀ - f(x₀)/(df/dx)₀ (5)
– Assumption: (df/dx)₀ ≠ 0
Use (5) to iterate & get a new approximation for x → x₁ as: x₁ = x₀ - f(x₀)/(df/dx)₀

9 So we have:
1st iteration: x₁ = x₀ - f(x₀)/(df/dx)₀
2nd iteration: x₂ = x₁ - f(x₁)/(df/dx)₁
– etc., etc. for the 3rd, 4th, ...
At the nth iteration, we have: xₙ₊₁ = xₙ - f(xₙ)/(df/dx)ₙ
Note: This is of the form x = g(x) (2) with a special choice of the function g(x), namely g(x) ≡ x - f(x)/(df/dx)
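The general Newton update above can be sketched in a few lines of Python; the name `newton` and the √2 test problem (f(x) = x² - 2) are illustrative choices, not part of the lecture:

```python
def newton(f, df, x0, eps=1e-6, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n), with the same
    relative-change stopping rule as in linear iteration."""
    x = x0
    for _ in range(max_iter):
        slope = df(x)
        if slope == 0:
            # The derivation assumed (df/dx) != 0; the step is undefined here.
            raise ZeroDivisionError("df/dx vanished; Newton step undefined")
        x_new = x - f(x) / slope
        if abs(x_new - x) / abs(x_new) < eps:
            return x_new
        x = x_new
    raise RuntimeError("Newton's method did not converge")

# Example: root of f(x) = x^2 - 2, i.e. sqrt(2)
root = newton(lambda x: x*x - 2, lambda x: 2*x, 1.0)
print(root)  # ≈ 1.4142136
```

The explicit check on the derivative mirrors the assumption (df/dx)₀ ≠ 0 made in the derivation above.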

10 Newton’s Method of iteration: xₙ₊₁ = xₙ - f(xₙ)/(df/dx)ₙ
Goal (as in linear iteration): Want the ratio |xₙ₊₁ - xₙ|/|xₙ₊₁| < ε, where ε = some specified small number.
Mathematicians have developed criteria (& proved theorems!) for
– when this iteration process will converge (usually in fewer steps than linear iteration!),
– how large the error is on the nth iteration (assuming a good 1st guess, usually smaller than for linear iteration!),
– etc., etc.
We aren’t concerned with these here! See any numerical analysis book!

11 Iteration Example (hand-held calculator!)
Ch. 4, Prob. #5a: Find the root (to 4 significant figures) of:
x + x² + 1 = tan(x), [0 ≤ x ≤ π/2].
First, ALWAYS make a rough graph! The root we want is where the curve x + x² + 1 crosses the curve tan(x)! From the graph, a reasonable first guess is x₀ ≈ 3π/8.

12 Try Linear Iteration!
x + x² + 1 = tan(x), [0 ≤ x ≤ π/2]; x₀ = 3π/8 = 1.1781 (radians!)
Write: x = tan⁻¹[x + x² + 1]. This is in the form x = g(x) (the choice of g(x) is not unique!):
g(x) = tan⁻¹[x + x² + 1]
The iteration procedure (& results): xₙ₊₁ = g(xₙ)
x₁ = g(x₀) = tan⁻¹[x₀ + x₀² + 1] = 1.2974
x₂ = g(x₁) = tan⁻¹[x₁ + x₁² + 1] = 1.3247
x₃ = g(x₂) = tan⁻¹[x₂ + x₂² + 1] = 1.3304
x₄ = g(x₃) = tan⁻¹[x₃ + x₃² + 1] = 1.3316
x₅ = g(x₄) = tan⁻¹[x₄ + x₄² + 1] = 1.3318
x₆ = g(x₅) = tan⁻¹[x₅ + x₅² + 1] = 1.3319
⇒ x = 1.3319
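The linear-iteration table on slide 12 can be reproduced with a short script (a sketch; the variable names are mine, the numbers are the slide's):

```python
import math

# x = g(x) = tan^-1[x + x^2 + 1], as chosen on slide 12
g = lambda x: math.atan(x + x**2 + 1)

x = 3 * math.pi / 8          # first guess from the graph, = 1.1781 rad
for n in range(1, 7):
    x = g(x)
    print(f"x_{n} = {x:.4f}")
# Slide's values: 1.2974, 1.3247, 1.3304, 1.3316, 1.3318, 1.3319
```

Note that `math.atan` always returns a value in (-π/2, π/2), so each iterate automatically stays inside the interval of interest.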

13 Try Newton’s Method!
x + x² + 1 = tan(x), [0 ≤ x ≤ π/2]; x₀ = 3π/8 = 1.1781 (radians!)
Write: tan(x) - (x + x² + 1) = 0. This is in the form f(x) = 0 with
f(x) = tan(x) - (x + x² + 1) ⇒ (df/dx) = [cos(x)]⁻² - (1 + 2x)
Iteration procedure (& results): xₙ₊₁ = xₙ - f(xₙ)/(df/dx)ₙ
x₁ = x₀ - f(x₀)/(df/dx)₀ = 1.5098
x₂ = x₁ - f(x₁)/(df/dx)₁ = 1.4646
x₃ = x₂ - f(x₂)/(df/dx)₂ = 1.4085
x₄ = x₃ - f(x₃)/(df/dx)₃ = 1.3588
x₅ = x₄ - f(x₄)/(df/dx)₄ = 1.3354
x₆ = x₅ - f(x₅)/(df/dx)₅ = 1.3320
x₇ = x₆ - f(x₆)/(df/dx)₆ = 1.3319
⇒ x = 1.3319
In this case, Newton’s Method takes more iterations to converge than linear iteration!! (The first step overshoots toward the singularity of tan(x) at π/2, and the early iterates recover only slowly.)
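The Newton table on slide 13 can likewise be checked with a short script (a sketch; the variable names are mine, the numbers are the slide's):

```python
import math

# f(x) = tan(x) - (x + x^2 + 1) and its derivative, as on slide 13
f  = lambda x: math.tan(x) - (x + x**2 + 1)
df = lambda x: 1 / math.cos(x)**2 - (1 + 2*x)   # sec^2(x) - (1 + 2x)

x = 3 * math.pi / 8          # same first guess, 1.1781 rad
for n in range(1, 8):
    x = x - f(x) / df(x)
    print(f"x_{n} = {x:.4f}")
# Slide's values: 1.5098, 1.4646, 1.4085, 1.3588, 1.3354, 1.3320, 1.3319
```

Running this makes the slide's point concrete: both methods reach 1.3319, but here Newton needs seven steps to the linear iteration's six because of the overshoot on the first step.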

