

A few words about convergence: we have been looking at ε_a, the approximate relative error, as our measure of convergence. A more technical means of differentiating the speed of convergence looks at asymptotic convergence.

Definition: rate of convergence. If

lim (i→∞) |x_(i+1) − x_true| / |x_i − x_true|^p = λ,

we say that the method converges to x_true with order p > 0. Higher p means faster convergence: p = 1 is linear, p = 2 is quadratic.

λ is the asymptotic error constant. Bisection: p = 1. Regula falsi: p ≈ 1.4 to 1.6.

Another open method is fixed-point iteration. Idea: rewrite the original equation f(x) = 0 in the form x = g(x), then use the iteration x_(i+1) = g(x_i) until the values converge. Example:
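The slide's worked example is not reproduced in this transcript. As an illustrative stand-in (my own choice of function, not necessarily the original's), consider f(x) = x² − 2x − 3 = 0, with roots x = 3 and x = −1. Rearranging as x = √(2x + 3) gives a g(x) whose slope is less than 1 near x = 3, so the iteration converges there:

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_(i+1) = g(x_i) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# f(x) = x^2 - 2x - 3 = 0 rearranged as x = sqrt(2x + 3)
g = lambda x: math.sqrt(2.0 * x + 3.0)
root = fixed_point(g, x0=4.0)
print(root)  # converges to the root x = 3
```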

For our Manning's equation problem, this becomes:

Fortran program performing fixed-point iteration for the Manning's equation example.
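The Fortran listing itself is not captured in this transcript. Below is a sketch of the same computation in Python, under assumed conditions: a rectangular channel of width B, slope S, and roughness n, with Manning's equation Q = (1/n)·A·R^(2/3)·√S, where A = Bh and R = Bh/(B + 2h). The parameter values (Q = 5 m³/s, B = 20 m, n = 0.03, S = 0.0002) are common textbook values, not necessarily the ones used in the original slides. Solving Manning's equation for the h in the area term gives the fixed-point form h = g(h) = (Qn/√S)^(3/5) (B + 2h)^(2/5) / B:

```python
import math

# Assumed channel data (textbook-style values, not necessarily the original's)
Q = 5.0      # discharge, m^3/s
B = 20.0     # channel width, m
n = 0.03     # Manning roughness coefficient
S = 0.0002   # channel slope

def g(h):
    """Fixed-point form h = g(h), obtained by solving Manning's equation
    Q = (1/n) * (B*h) * (B*h/(B + 2*h))**(2/3) * sqrt(S)
    for the depth h that appears in the area term."""
    return (Q * n / math.sqrt(S)) ** 0.6 * (B + 2.0 * h) ** 0.4 / B

h = 1.0  # initial guess for depth (m)
for _ in range(50):
    h_new = g(h)
    if abs(h_new - h) < 1e-10:
        h = h_new
        break
    h = h_new
print(h)  # approx 0.702 m for these assumed parameters
```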

Fixed-point iteration doesn't always work. Basically, if |g′(x)| < 1 near the intersection of y = g(x) with the line y = x, it will converge (see your book for the derivation). Example where it doesn't work:
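The slide's counterexample is not reproduced here, but the failure is easy to demonstrate with the same hypothetical function used above: f(x) = x² − 2x − 3 = 0 can also be rearranged as x = (x² − 3)/2. Near the root x = 3, g′(x) = x, so |g′(3)| = 3 > 1 and the iterates run away even from a starting point very close to the root:

```python
# g(x) = (x**2 - 3) / 2 is an alternative rearrangement of
# x**2 - 2*x - 3 = 0, but |g'(x)| = |x| > 1 near the root x = 3,
# so fixed-point iteration diverges there.
g = lambda x: (x * x - 3.0) / 2.0

x = 3.1  # start very close to the root x = 3
for i in range(6):
    x = g(x)
    print(i, x)
# The iterates move away from 3 instead of converging.
```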

King of the root-finding methods: the Newton-Raphson method. It is based on a Taylor series expansion:

f(x_(i+1)) = f(x_i) + f′(x_i)(x_(i+1) − x_i) + (f″(x_i)/2!)(x_(i+1) − x_i)² + …

Truncate to get f(x_(i+1)) ≈ f(x_i) + f′(x_i)(x_(i+1) − x_i). At the root, f(x_(i+1)) = 0, so 0 = f(x_i) + f′(x_i)(x_(i+1) − x_i), and x_(i+1) = x_i − f(x_i)/f′(x_i).

Note that an evaluation of the derivative is required; you may have to do this numerically. However, the method can converge very quickly.
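A minimal Python sketch of the update x_(i+1) = x_i − f(x_i)/f′(x_i), applied here to f(x) = x² − 2 (an illustrative choice of mine, not from the slides) to find √2:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: repeatedly apply x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("zero slope reached; Newton-Raphson fails")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Find sqrt(2) as the root of f(x) = x**2 - 2, with f'(x) = 2x
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # approx 1.41421356...
```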

Example using our Manning's equation problem. The derivative of this with respect to h is:
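The slide's derivative is not reproduced in the transcript. Assuming the same hedged rectangular-channel setup as before, f(h) = (√S/n)(Bh)^(5/3)(B + 2h)^(−2/3) − Q, the product rule gives f′(h) = (√S/n)(Bh)^(2/3)(B + 2h)^(−2/3)[(5/3)B − (4/3)Bh/(B + 2h)], which plugs straight into the Newton-Raphson update:

```python
import math

# Assumed textbook-style parameters (not necessarily the original's)
Q, B, n, S = 5.0, 20.0, 0.03, 0.0002
C = math.sqrt(S) / n

def f(h):
    # Manning's equation residual for a rectangular channel
    return C * (B * h) ** (5.0 / 3.0) * (B + 2.0 * h) ** (-2.0 / 3.0) - Q

def df(h):
    # df/dh via the product rule
    common = C * (B * h) ** (2.0 / 3.0) * (B + 2.0 * h) ** (-2.0 / 3.0)
    return common * ((5.0 / 3.0) * B - (4.0 / 3.0) * B * h / (B + 2.0 * h))

h = 1.0  # initial guess for depth (m)
for _ in range(20):
    step = f(h) / df(h)
    h -= step
    if abs(step) < 1e-12:
        break
print(h)  # approx 0.702 m for these assumed values
```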

Error analysis and convergence of Newton-Raphson. The error of the Newton-Raphson method can be estimated from E_(i+1) ≈ (|f″(x_r)| / (2 |f′(x_r)|)) E_i², where x_r is the root. Because the error at iteration i+1 is proportional to the square of the previous error, the number of correct decimal places roughly doubles each iteration.
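The digit-doubling behavior is easy to see numerically. Using f(x) = x² − 2 (an illustrative choice, not from the slides) and recording the error at each step, each error is roughly proportional to the square of the one before:

```python
import math

# Newton-Raphson for f(x) = x**2 - 2; the true root is sqrt(2).
x = 1.0
true_root = math.sqrt(2.0)
errors = []
for _ in range(5):
    x = x - (x * x - 2.0) / (2.0 * x)
    errors.append(abs(x - true_root))
print(errors)
# Errors shrink roughly like E_(i+1) ~ C * E_i**2: each iteration
# about doubles the number of correct digits.
```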

Although Newton-Raphson converges very rapidly, it can diverge and fail to find roots:
1) if an inflection point is near the root
2) if there is a local minimum or maximum
3) if there are multiple roots
4) if a zero slope is reached
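One concrete failure, using an illustrative function of my own choosing: for f(x) = x³ − 2x + 2 with starting guess x₀ = 0, the iterates bounce between 0 and 1 forever and never approach the real root near x ≈ −1.77:

```python
# Newton-Raphson can cycle instead of converging.
f = lambda x: x ** 3 - 2.0 * x + 2.0
df = lambda x: 3.0 * x ** 2 - 2.0

x = 0.0
history = []
for _ in range(6):
    x = x - f(x) / df(x)
    history.append(x)
print(history)  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```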

Secant method continued. There is an alternate secant method, the modified secant method, that uses a small perturbation δ to approximate the derivative. Start with

f′(x_i) ≈ (f(x_i + δx_i) − f(x_i)) / (δx_i)

Now plug this approximation for the derivative into the Taylor series approximation used in Newton-Raphson: x_(i+1) = x_i − f(x_i)/f′(x_i) becomes

x_(i+1) = x_i − δx_i f(x_i) / (f(x_i + δx_i) − f(x_i))

No derivative evaluation is needed, like the secant method. Only one initial guess is needed, like the Newton-Raphson method. Matlab example:
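The Matlab listing is not captured in this transcript. A Python sketch of the modified-secant update x_(i+1) = x_i − δx_i·f(x_i)/(f(x_i + δx_i) − f(x_i)), applied to the same hedged Manning setup (assumed values Q = 5, B = 20, n = 0.03, S = 0.0002, not necessarily the original's):

```python
import math

Q, B, n, S = 5.0, 20.0, 0.03, 0.0002  # assumed textbook-style values

def f(h):
    """Manning's equation residual for a rectangular channel."""
    A = B * h                # flow area
    R = A / (B + 2.0 * h)    # hydraulic radius
    return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S) - Q

def modified_secant(f, x0, delta=1e-6, tol=1e-10, max_iter=100):
    """x <- x - delta*x*f(x) / (f(x + delta*x) - f(x)); one guess only."""
    x = x0
    for _ in range(max_iter):
        dx = delta * x
        x_new = x - dx * f(x) / (f(x + dx) - f(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

print(modified_secant(f, x0=1.0))  # approx 0.702 m
```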

