Solving Non-Linear Equations (Root Finding)


Root Finding Methods What are root finding methods? They are methods for determining a solution of an equation, essentially finding a root of a function, that is, a zero of the function.

Root finding Methods Where are they used? Some applications for root finding are: systems of equilibrium, elliptical orbits, the van der Waals equation, and natural frequencies of spring systems.

The problem of solving non-linear equations or sets of non-linear equations is a common problem in science and applied mathematics. The goal is to solve f(x) = 0 for a given function f(x). The values of x which make f(x) = 0 are the roots of the equation.

[Figure: graph of f(x) crossing the x-axis; f(a) = 0 and f(b) = 0, so a and b are roots of the function f(x).]

More complicated problems are sets of non-linear equations. f(x, y, z) = 0 h(x, y, z) = 0 g(x, y, z) = 0 For N variables (x, y, z ,…), the equations are solvable if there are N linearly independent equations.

There are many methods for solving non-linear equations. The methods which will be highlighted have the following properties: the function f(x) is expected to be continuous (if not, the method may fail); the use of the algorithm requires that the solution be bounded; once the solution is bounded, it is refined to a specified tolerance.

Five such methods are: Interval Halving (Bisection method), Regula Falsi (False Position), the Secant method, Fixed Point Iteration, and Newton's method.

Bisection Method

Bisection Method It is the simplest root-finding algorithm. It requires previous knowledge of two initial guesses, a and b, chosen so that f(a) and f(b) have opposite signs. [Figure: f(x) with initial points a and b bracketing the root d; c is the midpoint of a and b.]

Bisection Method Two estimates are chosen so that the solution is bracketed, i.e. f(a) and f(b) have opposite signs. In the diagram, f(a) < 0 and f(b) > 0. The root d lies between a and b!

Bisection Method The root d must always be bracketed (be between a and b)! This is an iterative method.

Bisection Method The interval between a and b is halved by calculating the average of a and b. The new point is c = (a+b)/2. This produces two possible intervals: a < x < c and c < x < b.

Bisection Method This produces two possible intervals: a < x < c and c < x < b. If f(c) > 0, then x = d must be to the left of c: interval a < x < c. If f(c) < 0, then x = d must be to the right of c: interval c < x < b.

Bisection Method If f(c) > 0, let a_new = a and b_new = c and repeat the process. If f(c) < 0, let a_new = c and b_new = b and repeat the process. This reassignment ensures the root is always bracketed!

Bisection Method
c_i = (a_i + b_i)/2
If f(c_i) > 0: a_(i+1) = a_i and b_(i+1) = c_i
If f(c_i) < 0: a_(i+1) = c_i and b_(i+1) = b_i

Bisection Method Bisection is an iterative process, where the initial interval is halved until its size falls below some predefined tolerance ε: |a − b| ≤ ε, or until the change in f between successive midpoints falls below a tolerance: |f(c_i) − f(c_(i−1))| ≤ ε.
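The update rule above can be sketched in a few lines of Python (a minimal illustration, not from the slides; the sign test f(a)·f(c) < 0 generalizes the slides' f(c) > 0 test, which assumed f(a) < 0):

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f in [a, b] by interval halving.

    Assumes f is continuous and f(a), f(b) have opposite signs.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root is not bracketed: f(a) and f(b) must differ in sign")
    for _ in range(max_iter):
        c = (a + b) / 2.0       # midpoint of the current bracket
        fc = f(c)
        if abs(b - a) < tol or fc == 0.0:
            return c
        if fa * fc < 0:         # root lies in [a, c]: move the upper end
            b, fb = c, fc
        else:                   # root lies in [c, b]: move the lower end
            a, fa = c, fc
    return (a + b) / 2.0
```

For example, bisect(lambda x: x**3 - 13*x - 12, 3.5, 4.5) refines the bracket down to the root at x = 4.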

Bisection Method Advantages It is guaranteed to work if f(x) is continuous and the root is bracketed. The number of iterations needed to get the root to a specified tolerance is known in advance.
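The second advantage can be made precise (a standard result, not stated explicitly on the slides): each step halves the interval, so starting from a bracket [a, b], the number of iterations n needed to shrink the interval below a tolerance ε satisfies

```latex
\frac{|b-a|}{2^{\,n}} \le \varepsilon
\quad\Longrightarrow\quad
n \ge \log_2 \frac{|b-a|}{\varepsilon}
```

For instance, reducing an interval of width 1 below ε = 10⁻⁸ takes at most 27 iterations.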

Bisection Method Disadvantages Slow convergence. Not applicable when there are multiple roots. Will only find one root at a time.

Linear Interpolation Methods

Linear Interpolation Methods Most functions can be approximated by a straight line over a small interval. Two methods, the secant method and false position, are based on this.

Secant Method

Overview of Secant Method Again two initial guesses are chosen. However, there is no requirement that the root be bracketed! The method proceeds by drawing a line through the points to get a new point closer to the root. This is repeated until the root is found.

Secant Method First we find two points (x0, x1), which are hopefully near the root (we may use the bisection method to find them). A line is then drawn through the two points, and we find where the line intercepts the x-axis, x2.

Secant Method If f(x) were truly linear, the straight line would intercept the x-axis at the root. However, since it is not linear, the intercept is not at the root, but it should be close to it.

Secant Method From similar triangles we can write that f(x1)/(x1 − x2) = (f(x1) − f(x0))/(x1 − x0). Solving for x2 we get: x2 = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0)).

Secant Method Iteratively this is written as: x_(n+1) = x_n − f(x_n)(x_n − x_(n−1))/(f(x_n) − f(x_(n−1))).

Algorithm
Given two guesses x0, x1 near the root,
If |f(x0)| < |f(x1)| then swap x0 and x1.
Repeat
  Set x2 = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0))
  Set x0 = x1
  Set x1 = x2
Until |f(x2)| < tolerance value.
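The algorithm might look like this in Python (a sketch under the slides' conventions; the convergence test on |f(x2)| and the iteration cap are my own choices):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant iteration: each step keeps the two most recent points."""
    if abs(f(x0)) < abs(f(x1)):   # make x1 the better of the two guesses
        x0, x1 = x1, x0
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            raise ZeroDivisionError("flat secant line: f(x0) == f(x1)")
        # x-intercept of the line through (x0, f0) and (x1, f1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, x1 = x1, x2
        if abs(f(x2)) < tol:
            return x2
    return x1
```

Note that nothing here keeps the root bracketed, which is exactly why pathological divergence is possible.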

Because the new point should be closer to the root, after the 2nd iteration we usually choose the last two points. After the first iteration there is only one new point; however, x1 is chosen so that it is closer to the root than x0. This is not a "hard and fast rule"!

Discussion of Secant Method The secant method has better convergence than the bisection method (see pg. 40 of Applied Numerical Analysis). Because the root is not bracketed, there are pathological cases where the algorithm diverges from the root. It may fail if the function is not continuous.

Pathological Case See Fig 1.2 page 40

False Position

False Position The method of false position is seen as an improvement on the secant method. It avoids the problems of the secant method by ensuring that the root is bracketed between the two starting points and remains bracketed between successive pairs of estimates.

False Position This technique is similar to the bisection method except that the next iterate is taken where the line through the pair of points intercepts the x-axis, rather than at the midpoint.

Algorithm
Given two guesses x0, x1 that bracket the root,
Repeat
  Set x2 = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0))
  If f(x2) is of opposite sign to f(x0) then set x1 = x2
  Else set x0 = x2
  End If
Until |f(x2)| < tolerance value.
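A minimal Python sketch of this algorithm (the tolerance test and iteration cap are assumptions, not from the slides):

```python
def false_position(f, x0, x1, tol=1e-10, max_iter=200):
    """Regula falsi: like the secant method, but the root stays bracketed."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    x2 = x0
    for _ in range(max_iter):
        # x-intercept of the line through the bracketing pair
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        f2 = f(x2)
        if abs(f2) < tol:
            return x2
        if f2 * f0 < 0:        # root in [x0, x2]: move the upper end
            x1, f1 = x2, f2
        else:                  # root in [x2, x1]: move the lower end
            x0, f0 = x2, f2
    return x2
```

The extra bookkeeping (the sign test on each new point) is the "more complicated algorithm" that buys guaranteed bracketing.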

Discussion of False Position Method This method achieves better convergence, but with a more complicated algorithm. It may fail if the function is not continuous.

Newton’s Method

Newton’s Method The bisection method is useful only up to a point: in order to get good accuracy, a large number of iterations must be carried out. A second inadequacy occurs when there are multiple roots to be found. Newton’s method is a much better algorithm.

Newton’s Method Newton’s method relies on calculus and uses linear approximation to the function by finding the tangent to the curve.

Newton’s Method The algorithm requires an initial guess, x0, which is close to the root. The point where the tangent line to the function f(x) meets the x-axis is the next approximation, x1. This procedure is repeated until the value of f(x) is sufficiently close to zero.

Newton’s Method The equation for Newton’s Method can be determined graphically! From the diagram, tan θ = ƒ'(x0) = ƒ(x0)/(x0 − x1). Thus, x1 = x0 − ƒ(x0)/ƒ'(x0).

Newton’s Method The general form of Newton’s Method is: x_(n+1) = x_n − f(x_n)/ƒ'(x_n)
Algorithm
Pick a starting value for x
Repeat x := x − f(x)/ƒ'(x) until convergence
Return x
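As a sketch in Python (the derivative is passed in explicitly; the stopping test on |f(x)| and the iteration cap are assumptions, since the slide's loop states no termination condition):

```python
def newton(f, dfdx, x, tol=1e-12, max_iter=50):
    """Newton-Raphson: follow the tangent line to the x-axis at each step."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / dfdx(x)   # x_(n+1) = x_n - f(x_n)/f'(x_n)
    return x
```

For f(x) = x³ − 13x − 12 with f'(x) = 3x² − 13, a starting guess of 5 converges to the root at 4 in a handful of iterations.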

Fixed point iteration

Fixed point iteration

Fixed point iteration Newton’s method is a special case of the final algorithm: fixed point iteration. The method relies on the Fixed Point Theorem: if g(x) and g'(x) are continuous on an interval containing a root of the equation g(x) = x, and if |g'(x)| < 1 for all x in the interval, then the series x_(n+1) = g(x_n) will converge to the root.

For Newton’s method, g(x) = x − f(x)/ƒ'(x). At a root, f(x) = 0, and thus g(x) = x: the root of f is a fixed point of g.

The fixed point iteration method requires us to rewrite the equation f(x) = 0 in the form x = g(x), then find x = a such that a = g(a), which is equivalent to f(a) = 0. The value of x such that x = g(x) is called a fixed point of g(x). Fixed point iteration essentially solves two equations simultaneously: y = x and y = g(x). The point of intersection of these two curves is the solution to x = g(x), and thus to f(x) = 0.

Fixed point iteration Consider the example on page 54.

Fixed point iteration [Figure: cobweb diagram showing the curve y = g(x), the line y = x, and successive iterates.]

Fixed point iteration We get successive iterates as follows: Start at the initial x value on the x-axis (p0). Go vertically to the curve, then horizontally to the line y = x, then to the curve, then to the line. Repeat!

Fixed point iteration Algorithm
Pick a starting value for x.
Repeat x := g(x) until convergence.
Return x

Fixed point iteration The procedure starts from an initial guess of x, which is improved by iteration until convergence is achieved. For convergence to occur, the derivative (dg/dx) must be smaller than 1 in magnitude for the x values that are encountered during the iterations.

Fixed point iteration Convergence is established by requiring that the change in x from one iteration to the next be no greater in magnitude than some small quantity ε.
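A Python sketch of the iteration, using the change-in-x test just described. The example equation x = cos(x) is mine (chosen because |g'(x)| = |sin(x)| < 1 near the root), not the example from page 54:

```python
import math

def fixed_point(g, x, eps=1e-10, max_iter=500):
    """Iterate x := g(x) until successive values change by less than eps."""
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < eps:   # |x_(n+1) - x_n| <= eps
            return x_new
        x = x_new
    return x

# x = cos(x): iteration converges to the fixed point near 0.739085
root = fixed_point(math.cos, 1.0)
```

If |g'(x)| > 1 near the root, the same loop marches away from the fixed point instead, which is why the derivative condition matters.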

Roots of Polynomials The roots of polynomials such as f_n(x) = a_0 + a_1 x + a_2 x² + … + a_n xⁿ = 0 follow these rules: For an nth-order equation, there are n real or complex roots. If n is odd, there is at least one real root. If complex roots exist, they exist in conjugate pairs (that is, l + mi and l − mi), where i = √−1.

Conventional Methods The efficiency of bracketing and open methods depends on whether the problem being solved involves complex roots. If only real roots exist, these methods could be used. However, finding good initial guesses complicates both the open and bracketing methods, and the open methods can be susceptible to divergence.

Special methods have been developed to find the real and complex roots of polynomials: Müller’s method and Bairstow’s method.

Roots of Polynomials: Müller’s Method Müller’s method obtains a root estimate by projecting a parabola through three function values down to the x-axis. [Figure: the secant method uses a line through two points; Müller’s method uses a parabola through three points.]

Muller’s Method The method consists of deriving the coefficients of the parabola that goes through the three points: 1. Write the equation in a convenient form: f(x) = a(x − x2)² + b(x − x2) + c

Muller’s Method 2. The parabola should intersect the three points [x0, f(x0)], [x1, f(x1)], [x2, f(x2)]. The coefficients of the polynomial can be estimated by substituting each point to give: f(x0) = a(x0 − x2)² + b(x0 − x2) + c, f(x1) = a(x1 − x2)² + b(x1 − x2) + c, f(x2) = a(x2 − x2)² + b(x2 − x2) + c

Muller’s Method 3. Three equations can be solved for three unknowns, a, b, c. Since two of the terms in the 3rd equation are zero, it can be immediately solved for c = f(x2).

Muller’s Method Solving the above equations, with h0 = x1 − x0, h1 = x2 − x1, d0 = (f(x1) − f(x0))/h0, and d1 = (f(x2) − f(x1))/h1, gives: a = (d1 − d0)/(h1 + h0), b = a·h1 + d1, c = f(x2)

Muller’s Method Roots can be found by applying an alternative form of the quadratic formula: x3 = x2 − 2c/(b ± √(b² − 4ac)). The error can be calculated as ε_a = |(x3 − x2)/x3| × 100%. The ± term yields two roots; the sign is chosen to agree with the sign of b. This results in the largest denominator and gives the root estimate that is closest to x2.

Once x3 is determined, the process is repeated using the following guidelines: If only real roots are being located, choose the two original points that are nearest the new root estimate, x3. If both real and complex roots are estimated, employ a sequential approach just like in the secant method: x1, x2, and x3 replace x0, x1, and x2. (Credit: Lale Yurttas, Texas A&M University, Chapter 7.)
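Putting steps 1 to 3 and the root update together, a Python sketch might look as follows (using cmath so complex roots can be followed is an implementation choice; variable names mirror the h, d, a, b, c of the slides, and the sequential replacement rule is used):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=50):
    """Fit a parabola through three points each step; works for complex roots."""
    for _ in range(max_iter):
        h0, h1 = x1 - x0, x2 - x1
        d0 = (f(x1) - f(x0)) / h0
        d1 = (f(x2) - f(x1)) / h1
        a = (d1 - d0) / (h1 + h0)
        b = a * h1 + d1
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        # choose the sign that gives the largest denominator (same sign as b)
        denom = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3   # sequential replacement
    return x3
```

Run on the example below (f(x) = x³ − 13x − 12 with guesses 4.5, 5.5, 5.0), the first step reproduces x3 = 3.976487 and the iteration settles on the root at 4.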

Muller’s Method: Example Use Muller’s method to find the roots of f(x) = x³ − 13x − 12, with initial guesses x0, x1, and x2 of 4.5, 5.5, and 5.0 respectively. (The roots are −3, −1, and 4.)
Solution:
- f(x0) = f(4.5) = 20.625, f(x1) = f(5.5) = 82.875, and f(x2) = f(5) = 48.0
- h0 = 5.5 − 4.5 = 1, h1 = 5 − 5.5 = −0.5
- d0 = (82.875 − 20.625)/(5.5 − 4.5) = 62.25
- d1 = (48 − 82.875)/(5 − 5.5) = 69.75

Muller’s Method: Example
- a = (d1 − d0)/(h1 + h0) = (69.75 − 62.25)/(−0.5 + 1) = 15
- b = 15(−0.5) + 69.75 = 62.25
- c = 48
- ±√(b² − 4ac) = ±31.54461
Choose the sign to match the sign of b (+ve):
x3 = 5 + (−2)(48)/(62.25 + 31.54461) = 3.976487
The error estimate is ε_t = |(−1.023513)/(3.976487)| × 100 = 25.7%
The second iteration will have x0 = 5.5, x1 = 5, and x2 = 3.976487

Müller’s Method: Example

Iteration   xr         Error %
0           5          -
1           3.976487   25.7
2           4.001      0.614
3           4.000      0.026
4           4.000      0.000012