
Published by Draven Crosslin. Modified over 2 years ago.

1
Root Finding COS 323

2
1-D Root Finding
Given some function, find the location where f(x) = 0.
Need:
– Starting position x0, hopefully close to the solution
– Ideally, points that bracket the root: f(x+) > 0, f(x–) < 0

3
1-D Root Finding
Given some function, find the location where f(x) = 0.
Need:
– Starting position x0, hopefully close to the solution
– Ideally, points that bracket the root
– Well-behaved function

4
What Goes Wrong?
– Tangent point: very difficult to find
– Singularity: brackets don’t surround root
– Pathological case: infinite number of roots, e.g. sin(1/x)

5
Example: Press et al., Numerical Recipes in C, Equation (3.0.1), p. 105

f := x -> 3*x^2 + ln((Pi - x)^2) / Pi^4 + 1

> evalf(f(Pi-1e-10));    30.1360472472915692...
> evalf(f(Pi-1e-100));   25.8811536623525653...
> evalf(f(Pi-1e-600));    2.24285595777501258...
> evalf(f(Pi-1e-700));   -2.4848035831404979...
> evalf(f(Pi+1e-700));   -2.4848035831404979...
> evalf(f(Pi+1e-600));    2.24285595777501258...
> evalf(f(Pi+1e-100));   25.8811536623525653...
> evalf(f(Pi+1e-10));    30.1360472510614803...

6
Bisection Method
Given points x+ and x– that bracket a root, find x_half = ½(x+ + x–) and evaluate f(x_half).
If f(x_half) is positive, x+ ← x_half; else x– ← x_half.
Stop when x+ and x– are close enough.
If the function is continuous, this will succeed in finding some root.
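The loop just described fits in a few lines. A minimal Python sketch (not from the slides; the function name, tolerance, and iteration cap are illustrative):

```python
def bisect(f, x_neg, x_pos, tol=1e-12, max_iter=200):
    """Bisection: assumes f(x_neg) < 0 < f(x_pos), i.e. the points bracket a root."""
    for _ in range(max_iter):
        x_half = 0.5 * (x_neg + x_pos)
        if f(x_half) > 0:
            x_pos = x_half          # root now lies in [x_neg, x_half]
        else:
            x_neg = x_half          # root now lies in [x_half, x_pos]
        if abs(x_pos - x_neg) < tol:
            break
    return 0.5 * (x_neg + x_pos)

# Example: the root of x^2 - 2 bracketed by [0, 2] is sqrt(2)
root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
```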

7
Bisection
Very robust method.
Convergence rate:
– Error bounded by size of [x– … x+] interval
– Interval shrinks in half at each iteration
– Therefore, error cut in half at each iteration: |ε_(n+1)| = ½ |ε_n|
– This is called “linear convergence”
– One extra bit of accuracy in x at each iteration

8
Faster Root-Finding
Fancier methods get super-linear convergence.
– Typical approach: model the function locally by something whose root you can find exactly
– The model doesn’t match the function exactly, so iterate
– In many cases, these are less safe than bisection

9
Secant Method
Simple extension to bisection: interpolate or extrapolate through the two most recent points.
[Figure: secant iterates labeled 1–4]

10
Secant Method
Faster than bisection: |ε_(n+1)| = const · |ε_n|^1.6.
Faster than linear: number of correct bits multiplied by 1.6 at each iteration.
Drawback: the above is only true if sufficiently close to a root of a sufficiently smooth function.
– Does not guarantee that root remains bracketed
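The secant update can be sketched as follows (a minimal illustration, not from the slides; names and tolerances are mine):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: step to the root of the line through the
    two most recent points (no bracketing guarantee)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                 # horizontal secant line: give up
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```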

11
False Position Method
Similar to secant, but guarantees bracketing.
Stable, but linear in bad cases.
[Figure: false-position iterates labeled 1–4]
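The difference from the secant sketch is only in how points are replaced: the new point overwrites the bracket endpoint with the same sign. A minimal sketch (names are illustrative, not from the slides):

```python
def false_position(f, x_neg, x_pos, tol=1e-12, max_iter=500):
    """False position: secant step across the bracket [x_neg, x_pos];
    the new point replaces the endpoint with the same sign, so the
    root always stays bracketed."""
    f_neg, f_pos = f(x_neg), f(x_pos)   # assumes f_neg < 0 < f_pos
    x_new = x_neg
    for _ in range(max_iter):
        x_old = x_new
        x_new = x_pos - f_pos * (x_pos - x_neg) / (f_pos - f_neg)
        f_new = f(x_new)
        if f_new > 0:
            x_pos, f_pos = x_new, f_new
        else:
            x_neg, f_neg = x_new, f_new
        if abs(x_new - x_old) < tol:
            break
    return x_new

root = false_position(lambda x: x * x - 2.0, 0.0, 2.0)
```

Note the typical bad case: one endpoint can get “stuck”, which is what makes convergence merely linear.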

12
Other Interpolation Strategies
Ridders’s method: fit an exponential to f(x+), f(x–), and f(x_half).
Van Wijngaarden–Dekker–Brent method: inverse quadratic fit to the 3 most recent points if within bracket, else bisection.
Both of these are safe if the function is nasty, but fast (super-linear) if the function is nice.
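A sketch of Ridders’s method, assuming the standard update formula (as given in Numerical Recipes, Section 9.2); the code itself is illustrative, not from the slides:

```python
import math

def ridders(f, x1, x2, tol=1e-12, max_iter=60):
    """Ridders' method: an exponential fit through f(x1), f(x2), and
    the midpoint value yields the x4 update below; the root stays
    bracketed at every step."""
    f1, f2 = f(x1), f(x2)       # assumes f1 and f2 have opposite signs
    x4 = 0.5 * (x1 + x2)
    for _ in range(max_iter):
        x3 = 0.5 * (x1 + x2)
        f3 = f(x3)
        s = math.sqrt(f3 * f3 - f1 * f2)    # > 0 since f1*f2 < 0
        x4 = x3 + (x3 - x1) * (1.0 if f1 > f2 else -1.0) * f3 / s
        f4 = f(x4)
        if f4 == 0.0 or abs(x2 - x1) < tol:
            break
        # re-bracket the root using x4 and whichever old point works
        if f3 * f4 < 0:
            x1, f1, x2, f2 = x3, f3, x4, f4
        elif f1 * f4 < 0:
            x2, f2 = x4, f4
        else:
            x1, f1 = x4, f4
    return x4

root = ridders(lambda x: x * x - 2.0, 0.0, 2.0)
```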

13
Newton-Raphson
Best-known algorithm for getting quadratic convergence when the derivative is easy to evaluate.
Another variant on the extrapolation theme.
[Figure: Newton iterates labeled 1–4; slope = derivative at point 1]

14
Newton-Raphson
Begin with the Taylor series: f(x) ≈ f(x_n) + f′(x_n)(x − x_n).
Set this to zero and divide by the derivative (which can’t be zero!) to get the update: x_(n+1) = x_n − f(x_n) / f′(x_n).
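The Newton update is one line of code. A minimal sketch (names and tolerances are illustrative, not from the slides):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0.0:               # the derivative can't be zero!
            raise ZeroDivisionError("derivative vanished at x = %r" % x)
        step = f(x) / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```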

15
Newton-Raphson
Method is fragile: it can easily get confused.
A good starting point is critical.
– Newton is popular for “polishing off” a root found approximately using a more robust method

16
Newton-Raphson Convergence
Can talk about a “basin of convergence”: the range of x0 for which the method finds a root.
Can be extremely complex: here’s an example in 2-D with 4 roots.
[Image credits: Yale site; D.W. Hyatt]

17
Popular Example of Newton: Square Root
Let f(x) = x² − a: the zero of this is the square root of a.
f′(x) = 2x, so the Newton iteration is x_(n+1) = x_n − (x_n² − a)/(2x_n) = ½(x_n + a/x_n).
The “divide and average” method.
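“Divide and average” in code (an illustrative sketch; the fixed iteration count is mine, chosen large enough for quadratic convergence to finish in double precision):

```python
def sqrt_newton(a, x0=1.0, n_iter=8):
    """Newton on f(x) = x^2 - a: the update
    x <- x - (x^2 - a) / (2x) simplifies to 'divide and average'."""
    x = x0
    for _ in range(n_iter):
        x = 0.5 * (x + a / x)
    return x

root = sqrt_newton(2.0)
```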

18
Reciprocal via Newton
Division is the slowest of the basic operations.
On some computers, hardware divide is not available (!): simulate it in software.
With f(x) = 1/x − a, the Newton iteration is x_(n+1) = x_n(2 − a·x_n): need only subtract and multiply.
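The division-free iteration in code (illustrative sketch; the starting guess 0.3 and iteration count are my choices):

```python
def reciprocal_newton(a, x0, n_iter=8):
    """Newton on f(x) = 1/x - a gives x <- x * (2 - a*x):
    only multiplies and subtracts, no divisions."""
    x = x0
    for _ in range(n_iter):
        x = x * (2.0 - a * x)
    return x

# The starting guess must satisfy 0 < x0 < 2/a for convergence
inv3 = reciprocal_newton(3.0, 0.3)
```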

19
Rootfinding in >1D
Behavior can be complex: e.g. in 2-D.

20
Rootfinding in >1D
Can’t bracket and bisect.
Result: few general methods.

21
Newton in Higher Dimensions
Start with a system of n equations in n unknowns: f_i(x_1, …, x_n) = 0.
Write it as a vector-valued function: f(x) = 0.

22
Newton in Higher Dimensions
Expand in terms of the Taylor series: f(x + δx) ≈ f(x) + f′(x) δx.
Here f′ is a Jacobian: the matrix of partial derivatives ∂f_i/∂x_j.

23
Newton in Higher Dimensions
Solve for the step: δx = −[f′(x)]⁻¹ f(x).
Requires matrix inversion (we’ll see this later).
Often fragile; must be careful.
– Keep track of whether the error decreases
– If not, try a smaller step in the same direction
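For a 2-D system the matrix solve can be written out by hand (Cramer’s rule), which keeps the sketch self-contained; the example system, names, and tolerances below are illustrative, not from the slides:

```python
def newton_2d(F, J, x, y, tol=1e-12, max_iter=50):
    """Newton in 2-D: solve J(x, y) * delta = -F(x, y) for the step.
    For a 2x2 Jacobian the linear solve is done by Cramer's rule;
    in general it is a full matrix solve."""
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c
        if det == 0.0:
            raise ZeroDivisionError("singular Jacobian")
        dx = (-f1 * d + f2 * b) / det      # Cramer's rule for J*delta = -F
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Example system: x^2 + y^2 = 4 and x*y = 1, starting near (2, 0.5)
F = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
J = lambda x, y: ((2.0 * x, 2.0 * y), (y, x))
rx, ry = newton_2d(F, J, 2.0, 0.5)
```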

