Chapter 3 Root Finding.


3.1 The Bisection Method Let f be a continuous function. Suppose we know that f(a) f(b) < 0; then there is a root between a and b.

Example 3.1 A formal statement is given in Algorithm 3.1.

Theorem 3.1 Bisection Convergence and Error

Bisection Method Advantage: It is a global method: it always converges, no matter how far the initial bracket is from the actual root. Disadvantages: It cannot be used to find a root where the function is tangent to the axis and does not cross it (for example, at a double root). It converges slowly compared with other methods.
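The halving process can be sketched as follows; this is a minimal illustration in the spirit of Algorithm 3.1, with illustrative names and tolerances:

```python
# Bisection sketch. Assumes f is continuous and f(a)*f(b) < 0.
def bisect(f, a, b, tol=1e-10, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = a + (b - a) / 2          # midpoint; avoids overflow vs (a+b)/2
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        if fa * fc < 0:              # root lies in [a, c]
            b, fb = c, fc
        else:                        # root lies in [c, b]
            a, fa = c, fc
    return a + (b - a) / 2
```

For example, bisect(lambda x: x*x - 2, 0.0, 2.0) converges to the square root of 2, halving the bracket each step.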

3.2 Newton’s Method: Derivation and Examples Newton’s method is the classic algorithm for finding roots of functions. Two good derivations of Newton’s method: Geometric derivation Analytic derivation

Newton’s Method : Geometric Derivation

Newton’s Method : Geometric Derivation The fundamental idea in Newton’s method is to use the tangent line approximation to the function f at the current point x_n. The point-slope formula for the equation of the tangent line gives us y = f(x_n) + f′(x_n)(x − x_n); setting y = 0 and solving for x gives the next iterate, x_{n+1} = x_n − f(x_n)/f′(x_n). Continue the process with another tangent line to get x_{n+2}, and so on.
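The tangent-line step above translates directly into code; a minimal sketch (function names and tolerances are illustrative):

```python
# Newton's method sketch: x_{n+1} = x_n - f(x_n)/f'(x_n).
# Both f and its derivative df must be supplied.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)           # tangent-line step
    return x
```

For instance, newton(lambda x: x*x - 2, lambda x: 2*x, 1.0) reaches the square root of 2 in a handful of iterations.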

Newton’s Method : Analytic Derivation

Example 3.2

Newton’s Method Advantage: Very fast. Disadvantage: Not a global method. For example: Figure 3.3 (root x = 0.5); another example: Figure 3.4 (root x = 0.05). In these examples the initial point must be chosen carefully; otherwise Newton’s method can cycle indefinitely, hopping back and forth between two values. For example: consider the function in the text with root x = 0.

(Figure annotations: the initial value leads to wrong predictions, since the root is positive; later iterates come very close to the actual root.)

3.3 How to Stop Newton’s Method Ideally, we would want to stop when the error is sufficiently small. (p. 12)

To make sure that f(x_n) is also small enough.

3.4 Application: Division using Newton’s Method The purpose is to illustrate the use of Newton’s method and the analysis of the resulting iteration.

Questions: When does this iteration converge, and how fast? What initial guesses x0 will work for us? The answers depend on the way that the computer stores numbers.
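The specific iteration the text analyzes is not reproduced on these slides; a standard choice for division (assumed here) is f(x) = 1/x − a, whose root is 1/a and whose Newton step simplifies to multiplications and subtractions only:

```python
# Division by Newton's method (sketch). With f(x) = 1/x - a,
# the Newton step x - f(x)/f'(x) simplifies to:
#   x_{n+1} = x_n * (2 - a * x_n)
# Note: no division appears in the loop, which is the point of the method.
def reciprocal(a, x0, n_iter=6):
    x = x0
    for _ in range(n_iter):
        x = x * (2.0 - a * x)
    return x
```

Starting from a reasonable guess (e.g. x0 = 0.3 for a = 3), the relative error roughly squares each iteration.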

From (2.11), p. 53. Initial x0: p. 56.

Example 3.3

3.5 The Newton Error Formula

Definition 3.1 The iterates converge to x with order p if lim n→∞ |x − x_{n+1}| / |x − x_n|^p = C for some nonzero, finite constant C. The requirement that C be nonzero and finite actually forces p to be a single unique value. Linear convergence: p = 1. Quadratic convergence: p = 2. Superlinear convergence: p = 1, but C = 0.

Example 3.6

3.6 Newton’s Method: Theory and Convergence Its proof is shown at pp. 106-108.

3.7 Application: Computation of the Square Root

The Newton error formula (3.12), applied to f(x) = x^2 − b, gives (3.25); the relative error satisfies (3.26). Questions: Can we find an initial guess such that Newton’s method will always converge for b on this interval? How rapidly will it converge?


How to find the initial value? One choice is the midpoint of the interval; a better one uses linear interpolation, since b is known.
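The resulting iteration is the classical averaging rule; a sketch is below. The default initial guess here is a simple average, not the text's interpolation scheme from p. 56:

```python
# Square root by Newton's method applied to f(x) = x^2 - b, which gives
# the averaging iteration x_{n+1} = (x_n + b/x_n) / 2.
def newton_sqrt(b, x0=None, n_iter=8):
    x = x0 if x0 is not None else 0.5 * (1.0 + b)   # simple initial guess
    for _ in range(n_iter):
        x = 0.5 * (x + b / x)
    return x
```

For b = 2 the iterates 1.5, 1.41667, 1.4142157, ... double the number of correct digits each step, as the quadratic convergence theory predicts.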

3.8 The Secant Method: Derivation and Examples An obvious drawback of Newton’s method is that it requires a formula for the derivative of f. One obvious way to deal with this problem is to use an approximation to the derivative in the Newton formula, for example a finite-difference quotient. Another method, the secant method, uses a secant line through the two most recent iterates in place of the tangent line, giving x_{n+1} = x_n − f(x_n)(x_n − x_{n−1}) / (f(x_n) − f(x_{n−1})).

The Secant Method

The Secant Method

The Secant Method Its advantages over Newton’s method: It does not require the derivative. It can be coded in a way requiring only a single function evaluation per iteration; Newton’s method requires two, one for the function and one for the derivative.
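The single-evaluation-per-iteration coding mentioned above can be sketched as follows (names and tolerances are illustrative):

```python
# Secant method sketch: only one new function evaluation per iteration,
# since f(x_{n-1}) is carried over from the previous step.
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant-line step
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)                      # the single new evaluation
    return x1
```

Note that no derivative appears anywhere: the difference quotient built from the two stored points plays its role.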

Example 3.7

Error Estimation The error formula for the secant method:

The Convergence The secant method converges superlinearly, with order p = (1 + √5)/2 ≈ 1.618. This is almost the same as Newton’s method.

3.9 Fixed-point Iteration The goal of this section is to use the added understanding of simple iteration to enhance our understanding of, and ability to solve, root-finding problems. A root of f corresponds to a fixed point of g.

Fixed-point Iteration Because g(α) = α, such a point α is called a fixed point of the function g, and an iteration of the form x_{n+1} = g(x_n) (3.33) is called a fixed-point iteration for g.
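The iteration (3.33) is trivially short in code; a sketch, with the convergence condition left to Theorem 3.5:

```python
# Fixed-point iteration sketch: repeatedly apply g until the iterates
# settle down. Theorem 3.5 gives the precise convergence conditions.
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, g(x) = (x + 2/x)/2 has the square root of 2 as its fixed point, and the iteration converges rapidly from x0 = 1.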

(Figure: the fixed point of g corresponds to the root of f.)

Example 3.8

(Figure: graph of g(x).)

Theorem 3.5

Theorem 3.5 (cont.)

3.10 Special Topics in Root-finding Methods 3.10.1 Extrapolation and Acceleration The examples contain some mistakes, so we skip this subsection.

3.10.2 Variants of Newton’s Method Newton’s method vs. the chord method, which evaluates the derivative only once, at the initial point: x_{n+1} = x_n − f(x_n)/f′(x_0). How much do we lose? The chord method is only linearly convergent, and only locally convergent. It is nevertheless useful in solving nonlinear systems of equations. One interesting variant of the chord method updates the point at which the derivative is evaluated, but not at every iteration.
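A sketch of the chord iteration, which differs from Newton's method only in reusing the slope computed at x0:

```python
# Chord method sketch: like Newton's method, but the derivative is
# evaluated once at x0 and reused, trading quadratic for linear convergence.
def chord(f, df, x0, tol=1e-12, max_iter=200):
    slope = df(x0)                   # fixed slope for every step
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / slope
    return x
```

For systems, the appeal is that the (expensive) Jacobian factorization can be reused across iterations; the scalar version above shows the same structure.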

Example 3.12

Other Approximations to the Derivative In Section 3.8 we mentioned a method using a finite-difference approximation to the derivative in Newton’s method. It achieves only linear convergence (shown on pages 133 to 134).

3.10.3 The Secant Method: Theory and Convergence The proof is shown on pages 136 to 139. You can study it on your own.

3.10.4 Multiple Roots So far our study of root-finding methods has assumed that the derivative of the function does not vanish at the root: f′(α) ≠ 0. What happens if the derivative does vanish at the root?

Example 3.13

L’Hôpital’s Rule for forms of type 0/0

Another example: f(x) = 1 − x e^{1−x}

Another example: f(x) = 1 − x e^{1−x} The data (Table 3.10a) suggests that both iterations are converging, but neither one is converging as rapidly as we might have expected. Can we explain this? The fact that f′ vanishes at the root will have an effect on both the Newton and secant methods: the error formulas (3.12) and (3.50) and limits (3.24) and (3.47) all require that f′(α) ≠ 0. Can we find anything more in the way of an explanation?

Discussion: Newton’s Method Assume f has a double root at α, so f(α) = f′(α) = 0 but f″(α) ≠ 0. Note that we no longer have f′(α) ≠ 0; therefore (according to Theorem 3.7, page 124) we no longer have quadratic convergence.

Discussion: Newton’s Method If we change the Newton iteration to x_{n+1} = x_n − 2 f(x_n)/f′(x_n) for a double root, we recover quadratic convergence. More generally, for a root of multiplicity m we use x_{n+1} = x_n − m f(x_n)/f′(x_n). The problem with this technique is that it requires that we know the degree of multiplicity of the root ahead of time.

Discussion: Newton’s Method So an alternative is needed: apply Newton’s method to u(x) = f(x)/f′(x), which has a simple root wherever f has a multiple root. The drawback of this method is that applying Newton’s method to u requires that we have a formula for the second derivative of f.
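The multiplicity-adjusted iteration x_{n+1} = x_n − m f(x_n)/f′(x_n) can be sketched as follows; m is assumed known ahead of time, which is exactly the limitation the text points out:

```python
# Modified Newton for a root of known multiplicity m (sketch):
#   x <- x - m * f(x) / f'(x)
# restores quadratic convergence that plain Newton loses at multiple roots.
def newton_multiplicity(f, df, x0, m, n_iter=30):
    x = x0
    for _ in range(n_iter):
        d = df(x)
        if d == 0:                   # landed exactly on the root
            break
        x = x - m * f(x) / d
    return x
```

For f(x) = (x − 1)^2 with m = 2, a single step from x0 = 2 lands exactly on the root x = 1, whereas plain Newton would creep toward it linearly.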

Discussion—Newton’s Method (3.60) (3.61)

Table 3.10

Discussion From Table 3.10, we can see the accuracy is not as good as before. What is going on? Let’s look at a graph of the polynomial. Figure 3.11 shows a plot of 8000 points from this curve on the interval [0.45, 0.55] (root = 0.5). It shows premature convergence. This is not caused by the root-finding method; it is caused by the use of finite-precision arithmetic.

3.10.5 In Search of Fast Global Convergence: Hybrid Algorithms Bisection method: slow, but steady and reliable. Newton’s method and the secant method: fast, but potentially unreliable. Brent’s algorithm incorporates these basic ideas into a single algorithm (Algorithm 3.6).
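A much-simplified bisection/secant hybrid in the spirit, but not the detail, of Brent's algorithm: try a fast secant step, and fall back to a safe bisection step whenever the candidate leaves the bracket.

```python
# Minimal hybrid sketch (far simpler than Algorithm 3.6): keep a bracket
# [a, b] with f(a)*f(b) < 0, prefer secant steps, bisect when unsafe.
def hybrid_root(f, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("root must be bracketed: f(a)*f(b) < 0")
    for _ in range(max_iter):
        # Secant candidate from the current bracket endpoints.
        x = b - fb * (b - a) / (fb - fa) if fb != fa else a + (b - a) / 2
        # Reject it (and bisect instead) if it falls outside the bracket.
        if not (min(a, b) < x < max(a, b)):
            x = a + (b - a) / 2
        fx = f(x)
        if abs(fx) < tol or abs(b - a) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx            # root in [a, x]
        else:
            a, fa = x, fx            # root in [x, b]
    return x
```

The bracket guarantees the reliability of bisection; the secant steps supply the speed when they behave.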

Example 3.14 (the figure steps through Algorithm 3.6: Step 1; Step 2 (a) and (b); Step 3 (b) and (c))

Another Example