Open Methods (Part 1) Fixed Point Iteration & Newton-Raphson Methods


Roots of Equations: Open Methods (Part 1), Fixed Point Iteration & Newton-Raphson Methods

The following root finding methods will be introduced:
A. Bracketing Methods
   A.1. Bisection Method
   A.2. Regula Falsi
B. Open Methods
   B.1. Fixed Point Iteration
   B.2. Newton-Raphson Method
   B.3. Secant Method

B. Open Methods
To find a root of f(x) = 0, we construct an iteration formula xi+1 = g(xi) and apply it repeatedly until xi converges to a root. However, the iterates may diverge!
[Figure: comparison of the bisection method with an open method that diverges and an open method that converges]

What you should know about Open Methods
How do we construct the iteration formula g(x)?
How can we ensure convergence?
What makes a method converge quickly or diverge?
How fast does a method converge?

B.1. Fixed Point Iteration
Also known as one-point iteration or successive substitution.
To find a root of f(x) = 0, we reformulate f(x) = 0 so that there is an x alone on one side of the equation: x = g(x). If we can solve g(x) = x, we solve f(x) = 0. Such an x is known as a fixed point of g(x).
We solve g(x) = x by computing xi+1 = g(xi) until xi+1 converges to x.

Fixed Point Iteration – Example
For f(x) = e^-x − x = 0, rearrange to x = e^-x, so g(x) = e^-x.
Reason: if x converges, i.e. xi+1 → xi, then xi ≈ g(xi) = e^-xi, so the limit satisfies x = g(x) and hence f(x) = 0.

Example
Find the root of f(x) = e^-x − x = 0 using g(x) = e^-x, starting from x0 = 0. (Answer: α = 0.56714329)

 i      xi        εa (%)    εt (%)
 0   0.000000      —        100.0
 1   1.000000    100.0       76.3
 2   0.367879    171.8       35.1
 3   0.692201     46.9       22.1
 4   0.500473     38.3       11.8
 5   0.606244     17.4        6.89
 6   0.545396     11.2        3.83
 7   0.579612      5.90       2.20
 8   0.560115      3.48       1.24
 9   0.571143      1.93       0.705
10   0.564879      1.11       0.399

Two Curve Graphical Method
The point x where the two curves f1(x) = x and f2(x) = g(x) intersect is the solution of f(x) = 0.

Fixed Point Iteration
There are infinitely many ways to construct g(x) from f(x). For example, take f(x) = x^2 − 2x − 3 = 0 (ans: x = 3 or −1):
Case a: x = g(x) = sqrt(2x + 3)
Case b: x = g(x) = 3 / (x − 2)
Case c: x = g(x) = (x^2 − 3) / 2
So which one is better?

Starting from x0 = 4:
Case a: converges quickly (x1 = sqrt(11) = 3.31662, x2 = 3.10375, ... → 3). Converge!
Case b: converges, but more slowly, and to the other root x = −1.
Case c: diverges (x1 = 6.5, x2 = 19.625, ...). Diverge!

How to choose g(x)? Can we know which g(x) will converge to the solution before we do the computation?

Convergence of Fixed Point Iteration
By definition, the true error at iteration i is δi = α − xi, where α is the root, so α = g(α).
The fixed-point iteration gives xi+1 = g(xi). Subtracting, α − xi+1 = g(α) − g(xi).

Convergence of Fixed Point Iteration
According to the derivative mean-value theorem, if g(x) and g'(x) are continuous over the interval between xi and α, there exists a value x = c within that interval such that
α − xi+1 = g(α) − g(xi) = g'(c) (α − xi), i.e. δi+1 = g'(c) δi.
Therefore, if |g'(c)| < 1, the error decreases with each iteration; if |g'(c)| > 1, the error increases. If the derivative is positive, the iterative solution is monotonic; if the derivative is negative, the errors oscillate in sign.

Demo: four cases
(a) |g'(x)| < 1, g'(x) positive: converge, monotonic
(b) |g'(x)| < 1, g'(x) negative: converge, oscillating
(c) |g'(x)| > 1, g'(x) positive: diverge, monotonic
(d) |g'(x)| > 1, g'(x) negative: diverge, oscillating

Fixed Point Iteration Impl. (as C function)

// x0:       Initial guess of the root
// es:       Acceptable relative percentage error
// iter_max: Maximum number of iterations allowed
// Requires <math.h> for fabs(); g(x) has to be supplied.
double FixedPt(double x0, double es, int iter_max)
{
    double xr = x0;     // Estimated root
    double xr_old;      // xr from the previous iteration
    double ea = 100.0;  // Approximate relative error (%)
    int iter = 0;       // Number of iterations taken

    do {
        xr_old = xr;
        xr = g(xr_old);
        if (xr != 0)
            ea = fabs((xr - xr_old) / xr) * 100;
        iter++;
    } while (ea > es && iter < iter_max);

    return xr;
}

The following root finding methods will be introduced:
A. Bracketing Methods
   A.1. Bisection Method
   A.2. Regula Falsi
B. Open Methods
   B.1. Fixed Point Iteration
   B.2. Newton-Raphson Method
   B.3. Secant Method

B.2. Newton-Raphson Method
Use the slope of f(x) to predict the location of the root: xi+1 is the point where the tangent to f at xi intersects the x-axis,
xi+1 = xi − f(xi) / f'(xi)

Newton-Raphson Method
What would happen when f'(α) = 0? For example, f(x) = (x − 1)^2 = 0 has a double root at x = 1, where f'(1) = 0; at such a multiple root the method converges only linearly.

Error Analysis of Newton-Raphson Method
By definition, the true error at step i is δi = α − xi. The Newton-Raphson method computes xi+1 = xi − f(xi) / f'(xi).

Error Analysis of Newton-Raphson Method
Suppose α is the true root (i.e., f(α) = 0). Expanding f(α) about xi with Taylor's series,
0 = f(α) = f(xi) + f'(xi)(α − xi) + (f''(c)/2)(α − xi)^2,
where c is between xi and α. Since f(xi) = f'(xi)(xi − xi+1) by the Newton-Raphson formula, this gives
α − xi+1 = −(f''(c) / (2 f'(xi))) (α − xi)^2, i.e. δi+1 ≈ −(f''(α) / (2 f'(α))) δi^2
when xi and α are very close to each other. The error is squared at each step; the iterative process is said to be of second order.

The Order of Iterative Process (Definition)
Using an iterative process we get xk+1 from xk and other information, producing x0, x1, x2, ..., xk+1 as estimates of the root α. Let δk = α − xk. Then we may observe
|δk+1| ≈ C |δk|^p for some constant C > 0.
The process in such a case is said to be of p-th order:
It is called superlinear if p > 1.
It is called quadratic if p = 2.
It is called linear if p = 1.
It is called sublinear if p < 1.

Error of the Newton-Raphson Method
Each error is approximately proportional to the square of the previous error, which means that the number of correct decimal places roughly doubles with each iteration.
Example: find the root of f(x) = e^-x − x = 0. (Answer: α = 0.56714329)

Error Analysis
Estimates use |δi+1| ≈ |f''(α) / (2 f'(α))| δi^2 ≈ 0.18095 δi^2.

 i       xi          εt (%)      |δi|          estimated |δi+1|
 0   0.000000000   100          0.56714329    0.0582
 1   0.500000000    11.8        0.06714329    0.0008158
 2   0.566311003     0.147      0.0008323     0.000000125
 3   0.567143165     0.0000220  0.000000125   2.83x10^-15
 4   0.567143290    <10^-8

Newton-Raphson vs. Fixed Point Iteration
Find the root of f(x) = e^-x − x = 0. (Answer: α = 0.56714329)

Fixed Point Iteration with g(x) = e^-x:
 i      xi        εa (%)    εt (%)
 0   0.000000      —        100.0
 1   1.000000    100.0       76.3
 2   0.367879    171.8       35.1
 3   0.692201     46.9       22.1
 4   0.500473     38.3       11.8
 5   0.606244     17.4        6.89
 6   0.545396     11.2        3.83
 7   0.579612      5.90       2.20
 8   0.560115      3.48       1.24
 9   0.571143      1.93       0.705
10   0.564879      1.11       0.399

Newton-Raphson:
 i       xi          εt (%)      |δi|
 0   0.000000000   100          0.56714329
 1   0.500000000    11.8        0.06714329
 2   0.566311003     0.147      0.0008323
 3   0.567143165     0.0000220  0.000000125
 4   0.567143290    <10^-8

Pitfalls of the Newton-Raphson Method
Sometimes the iteration converges very slowly. Example: f(x) = x^10 − 1 = 0 with x0 = 0.5 (root: x = 1).

 i    xi
 0    0.5
 1   51.65
 2   46.485
 3   41.8365
 4   37.65285
 5   33.8877565
 ...
40    1.002316024
41    1.000023934
42    1.000000003
43    1.000000000

Pitfalls of the Newton-Raphson Method
Figure (a): An inflection point (f''(x) = 0) in the vicinity of a root can cause divergence.
Figure (b): A local maximum or minimum causes oscillations.

Pitfalls of the Newton-Raphson Method
Figure (c): The iteration may jump from a location close to one root to a location several roots away.
Figure (d): A zero slope causes division by zero.

Overcoming the Pitfalls?
There are no general convergence criteria for the Newton-Raphson method; convergence depends on the nature of the function and on the accuracy of the initial guess.
A guess close to the true root is always a better choice. Good knowledge of the function, or graphical analysis, can help you make good guesses.
Good software should recognize slow convergence or divergence. At the end of the computation, the final root estimate should always be substituted into the original function to verify the solution.

Other Facts
The Newton-Raphson method converges quadratically (when it converges), except when the root is a multiple root, in which case convergence is only linear.
When the initial guess is close to the root, the Newton-Raphson method usually converges.
To improve the chance of convergence, we can use a bracketing method to locate a good initial value for the Newton-Raphson method.

Summary
Differences between bracketing methods and open methods for locating roots: guarantee of convergence? performance?
Convergence criterion for the fixed-point iteration method
Rate of convergence: linear, quadratic, superlinear, sublinear
Understand what conditions make the Newton-Raphson method converge quickly or diverge