Root Finding COS 323

1-D Root Finding
Given some function, find a location where f(x) = 0
Need:
– Starting position x0, hopefully close to the solution
– Ideally, points that bracket the root: f(x+) > 0, f(x–) < 0
– A well-behaved function

What Goes Wrong?
Tangent point: very difficult to find
Singularity: brackets don't surround the root
Pathological case: infinite number of roots
– e.g. sin(1/x)

Bisection Method
Given points x+ and x– that bracket a root, find x_half = ½ (x+ + x–) and evaluate f(x_half)
If positive, x+ ← x_half; else x– ← x_half
Stop when x+ and x– are close enough
If the function is continuous, this will succeed in finding some root
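
A minimal Python sketch of this loop (not from the original slides; the function name, tolerance, and iteration cap are illustrative assumptions):

```python
import math

def bisect(f, x_neg, x_pos, tol=1e-12, max_iter=200):
    """Bisection: repeatedly halve a bracketing interval [x_neg, x_pos]
    with f(x_neg) < 0 < f(x_pos) until it is shorter than tol."""
    for _ in range(max_iter):
        x_half = 0.5 * (x_neg + x_pos)
        if f(x_half) > 0:
            x_pos = x_half          # root now bracketed by [x_neg, x_half]
        else:
            x_neg = x_half          # root now bracketed by [x_half, x_pos]
        if abs(x_pos - x_neg) < tol:
            break
    return 0.5 * (x_neg + x_pos)

# Example: the root of f(x) = x - cos(x) lies in [0, 1]
print(bisect(lambda x: x - math.cos(x), 0.0, 1.0))   # ~0.7390851
```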

Bisection
Very robust method
Convergence rate:
– Error bounded by the size of the [x+ … x–] interval
– Interval shrinks in half at each iteration
– Therefore, error is cut in half at each iteration: |ε_{n+1}| = ½ |ε_n|
– This is called "linear convergence"
– One extra bit of accuracy in x at each iteration

Faster Root-Finding
Fancier methods get super-linear convergence
– Typical approach: model the function locally by something whose root you can find exactly
– The model doesn't match the function exactly, so iterate
– In many cases, these are less safe than bisection

Secant Method
Simple extension to bisection: interpolate or extrapolate through the two most recent points

Secant Method
Faster than bisection: |ε_{n+1}| = const. |ε_n|^1.6
Faster than linear: number of correct bits multiplied by 1.6
Drawback: the above is only true if sufficiently close to a root of a sufficiently smooth function
– Does not guarantee that the root remains bracketed
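
A sketch of the secant iteration, assuming the two starting guesses are already reasonably close to a root (names and defaults are illustrative, not from the slides):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant update: x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:            # secant line is horizontal: cannot continue
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1         # keep only the two most recent points
        x1, f1 = x2, f(x2)
    return x1
```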

False Position Method
Similar to secant, but guarantees bracketing
Stable, but linear in bad cases
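
A sketch of false position under the same bracketing convention as the bisection sketch above (f(x_neg) < 0 < f(x_pos); the stopping test on |f| is an assumption of mine):

```python
def false_position(f, x_neg, x_pos, tol=1e-12, max_iter=200):
    """Secant-style interpolation, but always keep one point on each side of the root."""
    f_neg, f_pos = f(x_neg), f(x_pos)
    for _ in range(max_iter):
        # interpolate the line through (x_neg, f_neg) and (x_pos, f_pos) to f = 0
        x_new = x_pos - f_pos * (x_pos - x_neg) / (f_pos - f_neg)
        f_new = f(x_new)
        if abs(f_new) < tol:
            return x_new
        if f_new > 0:
            x_pos, f_pos = x_new, f_new   # replace the positive endpoint
        else:
            x_neg, f_neg = x_new, f_new   # replace the negative endpoint
    return x_new
```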

Other Interpolation Strategies
Ridders's method: fit an exponential to f(x+), f(x–), and f(x_half)
Van Wijngaarden–Dekker–Brent method: inverse quadratic fit to the 3 most recent points if within the bracket, else bisection
Both of these are safe if the function is nasty, but fast (super-linear) if the function is nice
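
If SciPy is available, its brentq routine implements the Van Wijngaarden–Dekker–Brent method; a usage sketch with the same illustrative bracketing interval used earlier:

```python
from scipy.optimize import brentq
import math

# brentq requires a bracketing interval [a, b] with a sign change
root = brentq(lambda x: x - math.cos(x), 0.0, 1.0)
print(root)   # ~0.7390851
```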

Newton-Raphson
Best-known algorithm for getting quadratic convergence when the derivative is easy to evaluate
Another variant on the extrapolation theme
[Figure: extrapolate along the tangent line; its slope is the derivative at the current estimate]

Newton-Raphson
Begin with the Taylor series: f(x_k + δ) ≈ f(x_k) + δ f′(x_k) = 0
Divide by the derivative (can't be zero!): δ = −f(x_k) / f′(x_k), so x_{k+1} = x_k − f(x_k) / f′(x_k)
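
A minimal sketch of the resulting iteration x_{k+1} = x_k − f(x_k)/f′(x_k) (names and the derivative-zero check are my own additions):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration for a scalar function with known derivative."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) = 0; Newton step undefined")
        step = f(x) / dfx
        x -= step                 # x_{k+1} = x_k - f(x_k) / f'(x_k)
        if abs(step) < tol:
            break
    return x

# Example: same test function as above, starting near the root
import math
print(newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), 0.5))
```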

Newton-Raphson
Method is fragile: can easily get confused
A good starting point is critical
– Newton is popular for "polishing off" a root found approximately using a more robust method

Newton-Raphson Convergence
Can talk about the "basin of convergence": the range of x0 for which the method finds a root
Can be extremely complex: here's an example in 2-D with 4 roots

Popular Example of Newton: Square Root
Let f(x) = x² – a: a zero of this is the square root of a
f′(x) = 2x, so the Newton iteration is x_{k+1} = x_k − (x_k² − a) / (2 x_k) = ½ (x_k + a / x_k)
"Divide and average" method
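
A sketch of the divide-and-average iteration (the starting guess and tolerance are arbitrary choices for illustration):

```python
def newton_sqrt(a, x=1.0, tol=1e-15):
    """'Divide and average': x <- (x + a/x) / 2 converges to sqrt(a) for a > 0."""
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)
    return x

print(newton_sqrt(2.0))   # 1.4142135623730951
```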

Reciprocal via Newton
Division is the slowest of the basic operations
On some computers, a hardware divide is not available (!): simulate in software
Take f(x) = 1/x − a; the Newton iteration is x_{k+1} = x_k (2 − a x_k)
Need only subtract and multiply
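
A sketch of that division-free iteration; the initial guess must satisfy 0 < x < 2/a for convergence (the fixed iteration count is an assumption for illustration):

```python
def newton_reciprocal(a, x, n_iter=6):
    """Approximate 1/a using only multiplication and subtraction: x <- x * (2 - a*x)."""
    for _ in range(n_iter):
        x = x * (2.0 - a * x)   # quadratic convergence once close to 1/a
    return x

print(newton_reciprocal(7.0, 0.1))   # ~0.14285714285714285
```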

Rootfinding in >1D
Behavior can be complex: e.g. in 2-D

Rootfinding in >1D
Can't bracket and bisect
Result: few general methods

Newton in Higher Dimensions
Start with a system of n equations in n unknowns: f_i(x_1, …, x_n) = 0
Write it as a vector-valued function: f(x) = 0

Newton in Higher Dimensions
Expand in terms of the Taylor series: f(x + δ) ≈ f(x) + f′(x) δ
f′ is a Jacobian: the matrix of partial derivatives ∂f_i / ∂x_j

Newton in Higher Dimensions
Solve for δ: f′(x) δ = −f(x)
Requires matrix inversion (we'll see this later)
Often fragile, must be careful
– Keep track of whether the error decreases
– If not, try a smaller step in direction δ
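
A sketch of the multidimensional iteration with the step-shrinking safeguard mentioned above, using a linear solve rather than an explicit inverse (the safeguard thresholds and the example system are my own illustrative choices):

```python
import numpy as np

def newton_nd(f, jacobian, x0, tol=1e-12, max_iter=50):
    """Multidimensional Newton: solve J(x) * delta = -f(x), then x <- x + delta.
    If the residual does not decrease, retry with a smaller step in direction delta."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        delta = np.linalg.solve(jacobian(x), -fx)   # linear solve, not explicit inverse
        t = 1.0
        while np.linalg.norm(f(x + t * delta)) >= np.linalg.norm(fx) and t > 1e-4:
            t *= 0.5                                # smaller step in direction delta
        x = x + t * delta
    return x

# Example: intersect the circle x^2 + y^2 = 4 with the line y = x
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - v[0]])
J = lambda v: np.array([[2*v[0], 2*v[1]], [-1.0, 1.0]])
print(newton_nd(f, J, [1.0, 2.0]))   # ~[1.41421356, 1.41421356]
```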