Iteration Methods: A “Mini-Lecture” on a Method to Solve Problems by Iteration — Ch. 4 (Nonlinear Oscillations & Chaos)

Iteration Methods: a “Mini-Lecture” on a method to solve problems by iteration, for Ch. 4 (Nonlinear Oscillations & Chaos). Some nonlinear problems are solved by iteration (Sect. 4.7 & also the Ch. 4 homework!). Direct (linear) iteration might or might not be efficient! In some cases it might not even converge, especially if you make a poor first guess! A method which is often more efficient (converges in fewer iterations) than direct iteration is NEWTON'S METHOD.

Linear Iteration. A typical problem (as in Sect. 4.7 or in the Ch. 4 homework) may require solving for the roots x of a nonlinear algebraic equation: f(x) = 0 (1), where f(x) = some nonlinear function of x. Linear (Direct) Iteration Method:
– First, rewrite f(x) = 0 in the form: x = g(x) (2). Often there is no one, unique way to do this! However, one can usually do it. (2) must be equivalent to (1)! Requiring this equivalence defines g(x).
– Goal: find the x's which satisfy (2).

f(x) = 0 (1) ⇔ x = g(x) (2). Choose an initial approximation for x which (approximately) satisfies (2). Call this x₀.
– A judicious choice of x₀ (first guess) is needed, or the following scheme might not converge!
– How to choose x₀? As an educated guess, DRAW A GRAPH to get an approximate root, ...
– Put x₀ into the right side of (2) to generate a new approximation x₁. That is: x₁ = g(x₀).
– Repeat to generate a new approximation x₂. That is: x₂ = g(x₁).

f(x) = 0 (1) ⇔ x = g(x) (2). At the nth iteration, we have: xₙ₊₁ = g(xₙ). Goal: we want the ratio |xₙ₊₁ − xₙ|/|xₙ₊₁| < ε, where ε = some specified small number. Mathematicians have developed criteria (& proved theorems!) for:
– when this linear iteration process will converge,
– how large the error is on the nth iteration,
– etc., etc.
We aren't concerned with these here! See any numerical analysis book! Take Physics 4301!
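The linear iteration scheme with this stopping test can be sketched in a few lines of Python. This is a minimal illustration, not from the lecture: the function names, the tolerance ε, the iteration cap, and the test equation x = cos(x) (a standard textbook example) are all my choices.

```python
import math

def fixed_point(g, x0, eps=1.0e-6, max_iter=200):
    """Linear (direct) iteration: x_{n+1} = g(x_n), stopping when the
    relative change |x_{n+1} - x_n| / |x_{n+1}| drops below eps."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) / abs(x_new) < eps:
            return x_new, n   # converged root and iteration count
        x = x_new
    raise RuntimeError("linear iteration did not converge")

# Standard test case (not from the slides): solve x = cos(x)
root, n_iter = fixed_point(math.cos, x0=0.5)
```

Note that convergence of this scheme depends on the chosen g; a different rearrangement of the same f(x) = 0 may converge faster, slower, or not at all.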

Newton's Method. Even if the linear iteration method converges, depending on the function f(x) (or g(x)) it might converge very slowly & thus be very inefficient! A better (more efficient, usually faster-converging!) method is NEWTON'S METHOD. We still want to solve f(x) = 0 (1), and we still do this by first rewriting (1) as x = g(x) (2). Newton's Method really just makes a very special & judicious choice for the (up to now arbitrary) function g(x)! (Back to this point soon!)

We want to solve: f(x) = 0 (1). Suppose we have very good insight & can make a very good initial approximation (first guess) for the x satisfying (1): call it x₀. Even though x₀ is a good first guess, it still won't be an exact zero of (1)! So let x be the “exact root” of (1) & do a Taylor series expansion of f about the first guess x₀. Assume that x₀ is near enough to the true root x that we can stop at the linear term.

That is: f(x) ≈ f(x₀) + (x − x₀)(df/dx)₀ (3)
– We could take the expansion to higher order if desired. Usually this is not necessary!
We still want to solve: f(x) = 0 (1). (1) & (3) together give: f(x₀) + (x − x₀)(df/dx)₀ ≈ 0 (4)
Solve (4) for x: x = x₀ − f(x₀)/(df/dx)₀ (5)
– Assumption: (df/dx)₀ ≠ 0
Use (5) to iterate & get a new approximation x₁ for x: x₁ = x₀ − f(x₀)/(df/dx)₀

So we have:
1st iteration: x₁ = x₀ − f(x₀)/(df/dx)₀
2nd iteration: x₂ = x₁ − f(x₁)/(df/dx)₁
– etc., etc. for the 3rd, 4th, ...
At the nth iteration, we have: xₙ₊₁ = xₙ − f(xₙ)/(df/dx)ₙ
Note: this is of the form x = g(x) (2), with the special choice of the function g(x) being g(x) ≡ x − f(x)/(df/dx)

Newton's Method of iteration: xₙ₊₁ = xₙ − f(xₙ)/(df/dx)ₙ. Goal (as in linear iteration): we want the ratio |xₙ₊₁ − xₙ|/|xₙ₊₁| < ε, where ε = some specified small number. Mathematicians have developed criteria (& proved theorems!) for:
– when this iteration process will converge (usually in fewer steps than linear iteration!)
– how large the error is on the nth iteration (assuming a good 1st guess, usually smaller than for linear iteration!)
– etc., etc.
We aren't concerned with these here! See any numerical analysis book!
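Newton's scheme with the same stopping test can be sketched similarly. Again this is a minimal illustration, not from the lecture: the names, tolerance, iteration cap, and the test equation x² − 2 = 0 (root √2) are my choices.

```python
def newton(f, dfdx, x0, eps=1.0e-6, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/(df/dx)_n, stopping when
    the relative change |x_{n+1} - x_n| / |x_{n+1}| drops below eps."""
    x = x0
    for n in range(1, max_iter + 1):
        deriv = dfdx(x)
        if deriv == 0.0:
            # the derivation assumed (df/dx) != 0 at each iterate
            raise ZeroDivisionError("(df/dx) vanished at x = {}".format(x))
        x_new = x - f(x) / deriv
        if abs(x_new - x) / abs(x_new) < eps:
            return x_new, n   # converged root and iteration count
        x = x_new
    raise RuntimeError("Newton's method did not converge")

# Simple check (not from the slides): f(x) = x^2 - 2, root = sqrt(2)
root, n_iter = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

The only structural difference from the linear-iteration sketch is the special choice g(x) = x − f(x)/(df/dx), which also requires supplying the derivative.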

Iteration Example (hand-held calculator!). Ch. 4, Prob. #5a: Find the root (to 4 significant figures) of: x + x² + 1 = tan(x), [0 ≤ x ≤ ½π]. First, ALWAYS make a rough graph! The root we want is where the curve x + x² + 1 crosses the curve tan(x)! From the graph, a reasonable first guess is x₀ ≈ 3π/8.

Try Linear Iteration! x + x² + 1 = tan(x), [0 ≤ x ≤ ½π]; x₀ = 3π/8 = 1.1781 (radians!). Write: x = tan⁻¹[x + x² + 1]. This is in the form x = g(x) = tan⁻¹[x + x² + 1] (the choice of g(x) is not unique!). The iteration procedure (& results): xₙ₊₁ = g(xₙ)
x₁ = g(x₀) = tan⁻¹[x₀ + x₀² + 1] = …
x₂ = g(x₁) = tan⁻¹[x₁ + x₁² + 1] = …
x₃ = g(x₂) = tan⁻¹[x₂ + x₂² + 1] = …
x₄ = g(x₃) = tan⁻¹[x₃ + x₃² + 1] = …
x₅ = g(x₄) = tan⁻¹[x₄ + x₄² + 1] = …
x₆ = g(x₅) = tan⁻¹[x₅ + x₅² + 1] = …
⇒ x = …

Try Newton's Method! x + x² + 1 = tan(x), [0 ≤ x ≤ ½π]; x₀ = 3π/8 = 1.1781 (radians!). Write: tan(x) − (x + x² + 1) = 0. This is in the form f(x) = tan(x) − (x + x² + 1) = 0, so (df/dx) = [cos(x)]⁻² − (1 + 2x). Iteration procedure (& results): xₙ₊₁ = xₙ − f(xₙ)/(df/dx)ₙ
x₁ = x₀ − f(x₀)/(df/dx)₀ = …, x₂ = x₁ − f(x₁)/(df/dx)₁ = …
x₃ = x₂ − f(x₂)/(df/dx)₂ = …, x₄ = x₃ − f(x₃)/(df/dx)₃ = …
x₅ = x₄ − f(x₄)/(df/dx)₄ = …, x₆ = x₅ − f(x₅)/(df/dx)₅ = …
x₇ = x₆ − f(x₆)/(df/dx)₆ = …
⇒ x = …
In this case, Newton's Method takes more iterations to converge than linear iteration!!
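The whole example is easy to reproduce in Python instead of on a hand-held calculator. This is a sketch: eps = 1e-4 is my stand-in for the 4-significant-figure criterion, and the iteration cap of 100 is an arbitrary safety limit, neither taken from the problem statement.

```python
import math

eps = 1.0e-4  # relative-change tolerance (stand-in for 4 significant figures)

g = lambda x: math.atan(x + x * x + 1)        # linear-iteration form x = g(x)
f = lambda x: math.tan(x) - (x + x * x + 1)   # Newton form f(x) = 0
dfdx = lambda x: 1.0 / math.cos(x) ** 2 - (1 + 2 * x)  # sec^2(x) - (1 + 2x)

x0 = 3 * math.pi / 8  # first guess, read off the rough graph

# Linear iteration: x_{n+1} = g(x_n)
x, n_lin = x0, 0
for _ in range(100):
    x_new = g(x)
    n_lin += 1
    if abs(x_new - x) / abs(x_new) < eps:
        break
    x = x_new
root_lin = x_new

# Newton's method: x_{n+1} = x_n - f(x_n)/(df/dx)_n
x, n_newt = x0, 0
for _ in range(100):
    x_new = x - f(x) / dfdx(x)
    n_newt += 1
    if abs(x_new - x) / abs(x_new) < eps:
        break
    x = x_new
root_newt = x_new
```

Running both and comparing n_lin with n_newt illustrates the slide's point: Newton's quadratic convergence is only an advantage near the root, and a first guess this far out can cost it extra iterations.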