Scientific Computing: Algorithm Convergence and Root Finding Methods
Algorithm Convergence
Definition: A numerical algorithm for solving some problem is convergent if the numerical solution it generates approaches the exact solution as the number of steps in the algorithm increases.
Definition: Stability refers to the ability of a numerical algorithm to reach the correct solution even when small errors are introduced during its execution.

Algorithm Stability
Stable algorithm: small changes in the initial data produce small changes in the final result (also called insensitive or well-conditioned).
Unstable (or conditionally stable) algorithm: small changes in the initial data may produce large changes in the final result (also called sensitive or ill-conditioned).

Algorithm Stability
Condition Number: The condition number of a problem is the ratio of the relative change in the solution to the relative change in the input:
cond = |Δy / y| / |Δx / x|
We want the condition number to be small!
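
The ratio above can be estimated numerically: for a differentiable f, the relative condition number at x is |x f'(x) / f(x)|. The sketch below (in Python, for a self-contained illustration; the deck's own examples are in Matlab) approximates f'(x) with a central finite difference:

```python
import math

def condition_number(f, x, h=1e-6):
    """Estimate the relative condition number |x * f'(x) / f(x)|,
    approximating f'(x) with a central finite difference."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fprime / f(x))

# sqrt is well-conditioned: its condition number is 1/2 everywhere
print(condition_number(math.sqrt, 2.0))               # ~0.5

# (x - 2)**5 is badly conditioned near its root x = 2
print(condition_number(lambda x: (x - 2)**5, 2.001))  # ~1e4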

Algorithm Stability
Example: Consider p(x) = x^5 - 10x^4 + 40x^3 - 80x^2 + 80x - 32. Note that p(x) = (x - 2)^5.
Does the Matlab roots command use a stable algorithm?
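
The instability can be seen without any root-finder at all: perturb the constant coefficient of p(x) = (x - 2)^5 by a tiny delta, and the root moves by delta^(1/5). A short Python sketch (Python is used here only so the example is self-contained):

```python
# p(x) = (x - 2)**5 expanded: x^5 - 10x^4 + 40x^3 - 80x^2 + 80x - 32
coeffs = [1, -10, 40, -80, 80, -32]

def p(x):
    y = 0.0
    for c in coeffs:
        y = y * x + c        # Horner's rule
    return y

print(p(2.0))                # 0.0: x = 2 is a root (of multiplicity 5)

# Perturb the constant coefficient by delta; the perturbed polynomial
# satisfies (x - 2)**5 = delta, so its root moves to 2 + delta**(1/5).
delta = 1e-10
shift = delta ** 0.2
print(shift)                 # ~0.01: a 1e-10 input change moves the root by 1e-2
```

A change of one part in 10^10 in the input moves the answer by one part in 10^2: an amplification factor of about 10^8, which is why `roots` struggles with this polynomial.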

Algorithm Stability
Example: Apply the fzero Matlab function to the same polynomial.
Does the Matlab fzero command use a stable algorithm?

Algorithm Error Analysis
To analyze convergence, we usually estimate the error in the numerical solution relative to the exact solution, and determine how fast this error is decreasing (or increasing!).

Algorithm Error Disaster!! Explosion of the Ariane 5
On June 4, 1996, an unmanned Ariane 5 rocket launched by the European Space Agency exploded just forty seconds after lift-off. The cause of the failure was a software error in the inertial guidance system: a 64-bit floating-point number representing the rocket's horizontal velocity relative to the platform was converted to a 16-bit signed integer. The value exceeded 32,767, the largest value storable in a 16-bit signed integer, and so the conversion failed.
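
The failure mode can be reproduced in miniature: a signed 16-bit integer only holds values in [-32768, 32767], so converting anything larger fails. A small Python illustration (the velocity value below is made up for demonstration):

```python
import struct

velocity = 40000.0   # illustrative value outside the signed 16-bit range

try:
    struct.pack('<h', int(velocity))   # '<h' packs a signed 16-bit integer
    print("conversion succeeded")
except struct.error:
    print("conversion overflow: 40000 does not fit in a signed 16-bit integer")
```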

Algorithm Rate of Convergence
To get an idea of how fast an algorithm converges, we measure the rate at which the sequence of approximations generated by the algorithm converges to some value.
Definition: A sequence α_n → α converges with order p (and asymptotic constant C > 0) if
lim (n→∞) |α_(n+1) - α| / |α_n - α|^p = C.

Algorithm Rate of Convergence
Alternatively, we can consider how fast the error goes to zero from one iteration of the algorithm to the next.
Definition: Let ε_n be the error in the nth iteration of the algorithm. That is, ε_n = |α_n - α|.
Definition: The algorithm has linear convergence if |ε_(n+1)| = C |ε_n| for some constant 0 < C < 1.
Definition: The algorithm has quadratic convergence if |ε_(n+1)| = C |ε_n|^2.
Which convergence type is faster?
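
To make the difference concrete, the loop below counts how many iterations each convergence type needs to push an initial error of 0.5 below 10^-12, taking C = 1/2 in both cases (the specific numbers are illustrative):

```python
def iterations_to_tol(order, e0=0.5, C=0.5, tol=1e-12):
    """Count iterations until the error e drops below tol,
    under the model e_(n+1) = C * e_n ** order."""
    e, n = e0, 0
    while e > tol:
        e = C * e ** order
        n += 1
    return n

linear_steps = iterations_to_tol(1)     # error halves each step
quadratic_steps = iterations_to_tol(2)  # error roughly squares each step
print(linear_steps, quadratic_steps)    # 39 5
```

Quadratic convergence roughly doubles the number of correct digits per iteration, while linear convergence adds them at a fixed rate.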

Finding Roots (Zeroes) of Functions
Given some function f(x), find a location x where f(x) = 0. This is called a root (or zero) of f(x). For success we will need:
- a starting position x_0, hopefully close to the true solution
- a well-behaved function f(x)

Example 1
Problem: Determine the depth of an object in water without submerging it.
[Figure: a sphere of radius r = 1 floating in water, with submerged depth x; figure labels: x, 1 - x, r, 1]

Example 1 (continued)
Problem: Determine the depth of an object in water without submerging it.
Sample material: cork -> ρ/ρ_w = 0.25 (specific gravity).
Solution: Equating the sphere's weight to the buoyant force (with r = 1 and ρ/ρ_w = 0.25) reduces the problem to finding a zero of f(x) = x^3 - 3x^2 + 1.
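
Assuming the cubic from this derivation is f(x) = x^3 - 3x^2 + 1 (the same function used in the Matlab examples later in this deck), a few lines of Python confirm the cork's depth numerically:

```python
def f(x):
    # Floating-cork equation: x**3 - 3*x**2 + 1 = 0, from equating
    # the sphere's weight to the buoyant force with r = 1, s = 0.25.
    return x**3 - 3*x**2 + 1

# Simple bisection on [0, 1]: f(0) = 1 > 0 and f(1) = -1 < 0,
# so the interval brackets a root.
a, b = 0.0, 1.0
for _ in range(60):
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
print((a + b) / 2)   # ~0.6527, the submerged depth of the cork sphere
```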

Finding Roots (Zeroes) of Functions
Notes:
- There may be multiple roots of f(x). That is why we need to specify an initial guess x_0.
- If f(x) is not continuous, we may miss a root no matter what root-finding algorithm we try.
- The roots may be real numbers or complex numbers. In this course we will consider only functions with real roots.

Finding Roots (Zeroes) of Functions
What else can go wrong?
- Tangent point: very difficult to find, since f does not change sign at the root
- Singularity: the brackets don't surround a root
- Pathological case: infinitely many roots, e.g. sin(1/x) near x = 0

Bisection Method
The simplest root-finding method. It is based on the Intermediate Value Theorem: if f(x) is continuous on [a, b], then for any y between f(a) and f(b) there is a c in [a, b] such that f(c) = y.

Bisection Method
Given points a and b that bracket a root (f(a) and f(b) have opposite signs):
- compute the midpoint x = ½(a + b) and evaluate f(x)
- if f(x) and f(b) have the same sign (+ or -), set b ← x; otherwise set a ← x
- stop when a and b are "close enough"
If the function is continuous and the initial interval contains a sign change, this will succeed in finding some root.

Bisection Matlab Function

function v = bisection(f, a, b, tol)
% Find a root of the (continuous) function f(x) within [a, b].
% Assumes f(a) and f(b) bracket a root (opposite signs).
% Returns v = [root, iteration count].
% (Note: the tolerance is named tol to avoid shadowing Matlab's built-in eps.)
k = 0;
x = (a + b)/2;
while abs(b - a) > tol*abs(b)
    x = (a + b)/2;
    if sign(f(x)) == sign(f(b))
        b = x;      % root lies in [a, x]
    else
        a = x;      % root lies in [x, b]
    end
    k = k + 1;
end
root = x;
v = [root k];
end
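
For readers following along outside Matlab, here is a direct Python port of the routine above, with the same relative stopping test, applied to a simple test function (x^2 - 2, whose positive root is sqrt(2)):

```python
def bisection(f, a, b, tol):
    """Bisection: assumes f is continuous and f(a), f(b) have
    opposite signs.  Stops when the bracket is small relative to b.
    Returns the last midpoint and the iteration count."""
    k = 0
    x = (a + b) / 2
    while abs(b - a) > tol * abs(b):
        x = (a + b) / 2
        if f(x) * f(b) > 0:   # f(x) and f(b) share a sign
            b = x             # root lies in [a, x]
        else:
            a = x             # root lies in [x, b]
        k += 1
    return x, k

root, k = bisection(lambda x: x**2 - 2, 1.0, 2.0, 1e-10)
print(root, k)   # root ~ 1.41421356 (sqrt(2))
```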

Bisection Matlab Function

f = @(x) x^3 - 3*x^2 + 1
bisection(f, 1, 2, 0.01)
bisection(f, 1, 2, 0.005)
bisection(f, 1, 2, …)
bisection(f, 1, 2, …)

Is there a pattern?

Bisection Convergence
Convergence rate:
- The error is bounded by the size of the [a, b] interval
- The interval shrinks by half at each iteration
- Therefore, the error is cut in half at each iteration: |ε_(n+1)| = ½ |ε_n|
- This is linear convergence (with C = ½)

Root Finding Termination
Let p_n be the approximation generated by the nth iteration of a root-finding algorithm. How do we know when to stop? Common stopping criteria, for a given tolerance tol, include:
1. |p_n - p_(n-1)| ≤ tol (absolute change)
2. |p_n - p_(n-1)| / |p_n| ≤ tol (relative change)
3. |f(p_n)| ≤ tol (residual size)
Bisection uses a version of the second condition.

Practice
Class Exercise: Determine the number of iterations of the Bisection Method needed to approximate the solution of x^3 - 2x - 1 = 0 lying within the interval [1.5, 2.0] to a given accuracy.
Estimate using theory, then try it in Matlab.
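
For the theory estimate: bisection halves the bracket each step, so after n steps the midpoint is within (b - a) / 2^(n+1) of the root, and solving (b - a) / 2^(n+1) ≤ tol gives n directly. The sketch below uses 10^-4 as an example tolerance (an assumed value) and checks the result against the exact root of x^3 - 2x - 1, which is the golden ratio:

```python
import math

a, b = 1.5, 2.0
tol = 1e-4                       # illustrative tolerance (an assumption)

# Theory: (b - a) / 2**(n + 1) <= tol  =>  n >= log2((b - a) / tol) - 1
n = math.ceil(math.log2((b - a) / tol) - 1)
print(n)                         # 12 iterations suffice

# Verify: x**3 - 2*x - 1 = (x + 1)(x**2 - x - 1), so the root
# in [1.5, 2] is the golden ratio (1 + sqrt(5)) / 2 ~ 1.618034.
f = lambda x: x**3 - 2*x - 1
for _ in range(n):
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
mid = (a + b) / 2
golden = (1 + 5 ** 0.5) / 2
print(abs(mid - golden) <= tol)  # True
```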