FP1: Chapter 2 Numerical Solutions of Equations


FP1: Chapter 2 Numerical Solutions of Equations Dr J Frost (jfrost@tiffin.kingston.sch.uk) Last modified: 2nd January 2014

Iterative Methods
Suppose we wanted to find solutions to x = cos(x). (Picture the graphs of y = x and y = cos(x): the solution is where they intersect.) There is in fact no way to express the solution in an ‘exact’ way, i.e. involving sums, divisions, roots, logs, trigonometric functions, etc. We instead have to use numerical methods to approximate the solution.

Iterative Methods
The principle of iterative methods is that we start with some initial approximation of the solution, and ‘iteratively’ repeat some process to gradually get closer to the true solution. This is a method you will see in C3. Suppose we start (by observing the graph) with an approximation of x0 = 0.5. We could use the iterative formula xn+1 = cos(xn). You could do this on a calculator using: [0.5] [=] [cos] [ANS] [=] [=] [=] [=] [=] ... which gives x = 0.739085...
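The keep-pressing-[=] trick is just fixed-point iteration. A minimal Python sketch of the same process (the function name `fixed_point` and the tolerance are illustrative choices, not from the slides):

```python
import math

def fixed_point(g, x0, tol=1e-9, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x = cos(x) starting from x0 = 0.5, as on the slide.
root = fixed_point(math.cos, 0.5)
print(round(root, 6))  # 0.739085
```

Each loop iteration plays the role of one press of [=] on the calculator.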

Iterative Methods
There are different numerical methods we could use. Why have different methods?
1. Some converge to the solution more quickly than others.
2. Some methods may not converge at all for certain equations or initial choices of x, either diverging or oscillating between values.
For FP1 we will be exploring 3 root-finding algorithms: Interval Bisection, Linear Interpolation, and the Newton-Raphson Process. These will all be used to find the roots of functions, i.e. the x for which f(x) = 0.

f(x) = 0
Root-finding algorithms find the root of a function! Simply put your equation into the form f(x) = 0 if it is not already so.
We want to solve x = cos(x)  →  use f(x) = x – cos(x)
We want to solve x2 = 2  →  use f(x) = x2 – 2
We want to solve x = x3 + 3  →  use f(x) = x3 – x + 3
We want to solve x2 = 2x – 6  →  use f(x) = x2 – 2x + 6

Approach 1: Interval Bisection
This approach starts with an interval in which f(x) changes sign, then halves this interval at each iteration. This is loosely an approach you used at GCSE. Suppose we know the root lies in the interval [a, b], with f(a) negative and f(b) positive. We want to narrow this interval, so try halfway: if we find f((a+b)/2) is positive, we replace b with (a+b)/2; if it is negative, we replace a instead. And repeat...

Example
Using the initial interval [2, 3], find the positive root of the equation x2 = 7 to 2dp.
Use f(x) = x2 – 7.
Bro Tip: Keeping a table like this helps you easily keep track of your bounds.

a            b            (a+b)/2       f((a+b)/2)
2            3            2.5           -0.75
2.5          3            2.75          0.5625
2.5          2.75         2.625         -0.109...
2.625        2.75         2.6875        0.223...
2.625        2.6875       2.65625       0.0557...
...          ...          ...           ...
2.64453125   2.646484375  2.645507813   -0.001...

We can stop at this point because both of the new bounds round to 2.65 to 2dp, so we know the solution must be 2.65 to 2dp.
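The table above can be generated mechanically. A minimal Python sketch of interval bisection (the function name `bisect` and stopping tolerance are illustrative, not from the slides):

```python
def bisect(f, a, b, tol=1e-6):
    """Interval bisection: assumes f is continuous on [a, b]
    and that f(a), f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:     # sign change in [a, m]: root lies there
            b = m
        else:                  # otherwise root lies in [m, b]
            a, fa = m, f(m)
    return (a + b) / 2

# Positive root of x^2 = 7 starting from [2, 3], as in the example.
print(round(bisect(lambda x: x**2 - 7, 2, 3), 2))  # 2.65
```

Each pass of the loop corresponds to one row of the table: halve the interval, keep the half where the sign changes.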

Exam Question
Bro Tip: This is an incredibly common question, so know your mark scheme here!
f(2) = -1, f(2.5) = 3.4062... (M1)
Sign change (and f(x) is continuous), therefore a root α exists between x = 2 and x = 2.5 (A1)
f(2.25) = 0.673828125 (B1)
f(2.125) = -0.27526855... (M1)
Therefore 2.125 ≤ α ≤ 2.25 (A1)

Exercises To Do

Analysis of Interval Bisection (Note that none of this bit will be in an exam)
Failure Analysis: Guaranteed to converge to a root provided that, for the initial interval [a, b], f(a) and f(b) have opposite signs and f(x) is continuous.
Other Comments: Another advantage: it is simple to carry out.
Rate of Convergence: Horribly slow – the method crawls towards the root, with the error merely halving each time. We say this is linear convergence. Therefore in practice Interval Bisection tends not to be used.

Approach 2: Linear Interpolation
Linear Interpolation builds on the method of Interval Bisection by choosing a point that is (generally) better than the midpoint of the bounds. Initially the bound for our root is [a, b]. We establish c using linear interpolation: join (a, f(a)) and (b, f(b)) with a straight line and take the point c where it crosses the x-axis. Seeing that f(c) is +ve, we adjust our interval to [a, c] and repeat.

Approach 2: Linear Interpolation
Show that the equation x3 + 5x – 10 = 0 has a root in the interval [1, 2]. Using linear interpolation, find this root to 1 decimal place. (Hint: use similar triangles.)
f(1) = -4, f(2) = 8, so there is a sign change and hence a root α1 in [1, 2].
By similar triangles: (2 – α1)/(α1 – 1) = 8/4
Solving: α1 = 1.3333...
f(1.3333...) = -0.96296... Since this is negative, make the new interval [1.3333..., 2].
Eventually we find α3 = 1.4196...
We can check that 1.4 is correct to 1dp by looking at f(1.35) and f(1.45): a change of sign means the root lies in [1.35, 1.45], i.e. 1.4 is correct to 1dp.
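The similar-triangles step is equivalent to taking the x-intercept of the chord, c = (a·f(b) – b·f(a))/(f(b) – f(a)). A minimal Python sketch of the method (this is often called "false position"; the function name and tolerance are illustrative, not from the slides):

```python
def false_position(f, a, b, tol=1e-6, max_iter=100):
    """Linear interpolation: join (a, f(a)) and (b, f(b)) with a
    straight line and take its x-intercept as the next estimate."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # x-intercept of the chord
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:       # sign change in [a, c]
            b, fb = c, fc
        else:                 # sign change in [c, b]
            a, fa = c, fc
    return c

# Root of x^3 + 5x - 10 = 0 in [1, 2], as in the example.
print(round(false_position(lambda x: x**3 + 5*x - 10, 1, 2), 1))  # 1.4
```

With a = 1, f(a) = -4, b = 2, f(b) = 8, the first chord intercept is (1·8 – 2·(-4))/12 = 4/3, matching α1 = 1.3333... above.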

Exam Question

Exercises To Do

Analysis of Linear Interpolation (Note that none of this bit will be in an exam)
Failure Analysis: As with Interval Bisection, guaranteed to converge to a root provided that, for the initial interval [a, b], f(a) and f(b) have opposite signs and f(x) is continuous.
Rate of Convergence: The error of the approximation after each iteration depends on how ‘curvy’ the line is: as you might expect, the flatter the curve, the more it approximates a straight line, and hence the better linear interpolation (which assumes a straight line in the region) will be. The rate of convergence is high when the second derivative at the root is small (i.e. the gradient is not changing very much).

Approach 3: The Newton-Raphson Process (also known as Newton’s method)
Suppose we start with an approximation of the root of y = f(x), say x0. Clearly this is well off the mark. A seemingly sensible thing to do is to follow the direction of the line, i.e. use the gradient of the tangent: follow the tangent at x0 down to the x-axis to get a new approximation x1. We can keep repeating this process to (hopefully) get increasingly accurate approximations x1, x2, ... Can you come up with a formula for xn+1 in terms of xn?

Approach 3: The Newton-Raphson Process
Using C1 coordinate geometry, the tangent to y = f(x) at xn is:
y – f(xn) = f’(xn)(x – xn)
But we’re interested in where this tangent crosses the x-axis, i.e. when x = xn+1 and y = 0:
-f(xn) = f’(xn)(xn+1 – xn)
which gives the Newton-Raphson Process:
xn+1 = xn – f(xn)/f’(xn)

Approach 3: The Newton-Raphson Process Demo (Courtesy of Wikipedia)

Example
Returning to our original example x = cos(x), say letting x0 = 0.5:
Let f(x) = x – cos(x), so f’(x) = 1 + sin(x).
f(0.5) = 0.5 – cos(0.5) = -0.3775825...
f’(0.5) = 1 + sin(0.5) = 1.4794255...
x1 = 0.5 – (-0.3775825.../1.4794255...) = 0.7552224...
x2 = 0.7391412...
x3 = 0.7390851...
After merely three iterations, our approximation is accurate to 7 decimal places. Holy smokes Batman!
Bro Tip: To perform iterations quickly, do the following on your calculator: [0.5] [=], then type ANS – (ANS – cos(ANS))/(1 + sin(ANS)) and spam [=].
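The calculator routine above is just the Newton-Raphson update applied repeatedly. A minimal Python sketch (the function name `newton` and stopping rule are illustrative, not from the slides):

```python
import math

def newton(f, fprime, x0, tol=1e-9, max_iter=50):
    """Newton-Raphson: repeatedly apply x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # stop once updates become negligible
            break
    return x

# x = cos(x), i.e. root of f(x) = x - cos(x), starting at x0 = 0.5.
root = newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), 0.5)
print(round(root, 7))  # 0.7390851
```

As on the slide, the iterates settle to 7 decimal places within a handful of iterations.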

Quickfire Questions
Using Newton’s method, state the recurrence relation for the following functions.
f(x) = x3 – 2  →  xn+1 = xn – (xn3 – 2)/(3xn2)
f(x) = √x + x – 2  →  xn+1 = xn – (√xn + xn – 2)/(0.5xn-0.5 + 1)
f(x) = x2 – x – 1  →  xn+1 = xn – (xn2 – xn – 1)/(2xn – 1)

Exam Question (June 2013, Retracted)
f’(x) = 2x3 – 3x2 + 1
f’(-1.5) = -12.5
f(-1.5) = 1.40625
x1 = -1.5 – 1.40625/(-12.5) = -1.3875

Exercises To Do

Really Nice Application #1
Find the square root of 3.
Using the C3 method of putting the equation in the form x = f(x), then using xn+1 = f(xn):
We want solutions to x2 = 3, i.e. x = 3/x. This gives xn+1 = 3/xn. This method fails to converge, because it oscillates between two values regardless of the starting value: if xn = a then xn+1 = 3/a and xn+2 = 3/(3/a) = a again.
Using Newton’s method:
We want solutions to x2 – 3 = 0, so f(x) = x2 – 3 and f’(x) = 2x, using xn+1 = xn – (xn2 – 3)/(2xn):
x0 = 1
x1 = 2
x2 = 7/4 = 1.75
x3 = 97/56 = 1.73214...
x4 = 1.73205...
x5 = 1.73205...
...

Really Nice Application #2
Approximate π.
Identify a function which has π as a root (Hint: think trig functions!)
Note that cos(π) = -1, so π is a root of f(x) = 1 + cos(x). We could have also used f(x) = tan(x), provided we started between π/2 and 3π/2.
Using Newton’s method (letting x0 = 3):
xn+1 = xn + (1 + cos(xn))/sin(xn)
x0 = 3 (a sensible starting place!)
x1 = 3.070914844
x2 = 3.106268467
x3 = 3.123932397
x4 = 3.132762755
x5 = 3.137177733
x6 = 3.139385197
x7 = 3.140488926
x8 = 3.141040790
x9 = 3.141316722
x10 = 3.141454688
x11 = 3.141523671
x12 = 3.141558162
x13 = 3.141575408
x14 = 3.141584031
x15 = 3.141588342
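Notice how slowly this table creeps towards π compared with the square-root example: π is a repeated root of 1 + cos(x) (both f and f’ vanish there), and at a repeated root Newton-Raphson drops back to linear convergence, roughly halving the error each step. A minimal Python sketch of the iteration (the iteration count is an illustrative choice, not from the slides):

```python
import math

# Newton-Raphson for f(x) = 1 + cos(x), whose root is pi.
# Since f'(x) = -sin(x), the update is x + (1 + cos x)/sin x, as above.
x = 3.0
for n in range(60):
    x += (1 + math.cos(x)) / math.sin(x)
print(round(x, 5))  # 3.14159
```

With the error only halving per step, it takes dozens of iterations (rather than three or four) to pin down several decimal places.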

Analysis of Newton-Raphson Method (Note that none of this bit will be in an exam)
Failure Analysis: Approximations may diverge (and hence fail!). For example, if the tangent at x0 is nearly horizontal, the next approximation is flung far away from the root.
Rate of Convergence: The Newton-Raphson rate of convergence is ‘quadratic’: that is, as we converge on the root, on each iteration the difference between the approximation and the root is roughly squared (and squaring a value less than 1 makes it smaller). That’s rather good (and is considerably better than Interval Bisection’s linear convergence).