Differentiation and Richardson Extrapolation

Douglas Wilhelm Harder, M.Math. LEL
Department of Electrical and Computer Engineering
University of Waterloo
Waterloo, Ontario, Canada
ece.uwaterloo.ca
dwharder@alumni.uwaterloo.ca

© 2012 by Douglas Wilhelm Harder. Some rights reserved.

Outline

This topic discusses numerical differentiation:
- The use of interpolation
- The centred divided-difference approximations of the derivative and second derivative
- Error analysis using Taylor series
- The backward divided-difference approximation of the derivative
- Error analysis
- Richardson extrapolation

Outcomes Based Learning Objectives

By the end of this laboratory, you will:
- Understand how to approximate first and second derivatives
- Understand how Taylor series are used to determine the errors of various approximations
- Know how to eliminate higher-order errors using Richardson extrapolation
- Have programmed a Matlab routine with appropriate error checking and exception handling

Approximating the Derivative

Suppose we want to approximate the derivative

    u⁽¹⁾(x) = lim_{h→0} (u(x + h) - u(x))/h

Approximating the Derivative

If the limit exists, this suggests that if we choose a very small h,

    u⁽¹⁾(x) ≈ (u(x + h) - u(x))/h

Unfortunately, this isn't as easy as it first appears:

    >> format long
    >> cos(1)
    ans = 0.540302305868140
    >> for i = 0:20
           h = 10^(-i);
           (sin(1 + h) - sin(1))/h
       end

Approximating the Derivative

At first, the approximations improve:

    h            (sin(1 + h) - sin(1))/h
    1            0.067826442017785
    0.1          0.497363752535389
    0.01         0.536085981011869
    0.001        0.539881480360327
    0.0001       0.540260231418621
    0.00001      0.540298098505865
    0.000001     0.540301885121330
    0.0000001    0.540302264040449
    0.00000001   0.540302302898255

    >> cos(1)
    ans = 0.540302305868140

Approximating the Derivative

Then it seems to get worse:

    h       Approximation
    10⁻⁸    0.540302302898255
    10⁻⁹    0.540302358409406
    10⁻¹⁰   0.540302247387103
    10⁻¹¹   0.540301137164079
    10⁻¹²   0.540345546085064
    10⁻¹³   0.539568389967826
    10⁻¹⁴   0.544009282066327
    10⁻¹⁵   0.555111512312578
    10⁻¹⁶   0
    10⁻¹⁷   0
    10⁻¹⁸   0
    10⁻¹⁹   0
    10⁻²⁰   0

    >> cos(1)
    ans = 0.540302305868140

Approximating the Derivative

There are two things that must be explained:
- Why, to start with, do we appear to get one more digit of accuracy every time we divide h by 10?
- Why, after some point, does the accuracy decrease, ultimately rendering the approximations useless?

Increasing Accuracy

We will start with why the answer appears to improve. Recall Taylor's approximation:

    u(x + h) = u(x) + u⁽¹⁾(x) h + (1/2) u⁽²⁾(ξ) h²

where ξ ∈ [x, x + h], that is, ξ is close to x. Solve this equation for the derivative.

Increasing Accuracy

First we isolate the term u⁽¹⁾(x) h:

    u⁽¹⁾(x) h = u(x + h) - u(x) - (1/2) u⁽²⁾(ξ) h²

Increasing Accuracy

Then, divide each side by h:

    u⁽¹⁾(x) = (u(x + h) - u(x))/h - (1/2) u⁽²⁾(ξ) h

Again, ξ ∈ [x, x + h], that is, ξ is close to x.

Increasing Accuracy

Assuming that u⁽²⁾(ξ) doesn't vary too wildly, the term (1/2) u⁽²⁾(ξ) h is approximately c·h for some constant c.

Increasing Accuracy

We can easily see this is true from our first example, u(x) = sin(x) at x = 1: here u⁽²⁾(x) = -sin(x), so the error coefficient is approximately (1/2)|u⁽²⁾(1)| = (1/2) sin(1) ≈ 0.42074.

Increasing Accuracy

Thus, the absolute error of (u(x + h) - u(x))/h as an approximation of u⁽¹⁾(x) is approximately (1/2)|u⁽²⁾(x)| h, that is, O(h). Therefore:
- If we halve h, the absolute error should drop by approximately half
- If we divide h by 10, the absolute error should drop by a factor of approximately 10
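As a quick Matlab check (a sketch; this loop simply recomputes the comparison tabulated below), the ratio error/h should settle near 0.42074:

    >> err = @(h) abs( (sin(1 + h) - sin(1))/h - cos(1) );
    >> for i = 0:9
           h = 10^(-i);
           fprintf( 'h = 1e-%d: error = %.5g, error/h = %.5g\n', i, err(h), err(h)/h );
       end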

Increasing Accuracy

Recall cos(1) = 0.540302305868140.

    h       Approximation        Absolute Error    0.42074 h
    1       0.067826442017785    0.47248           0.42074
    0.1     0.497363752535389    0.042939          0.042074
    0.01    0.536085981011869    0.0042163         0.0042074
    10⁻³    0.539881480360327    0.00042083        0.00042074
    10⁻⁴    0.540260231418621    0.000042074       0.000042074
    10⁻⁵    0.540298098505865    0.0000042074      0.0000042074
    10⁻⁶    0.540301885121330    0.00000042075     0.00000042074
    10⁻⁷    0.540302264040449    0.0000000418276   0.000000042074
    10⁻⁸    0.540302302898255    0.0000000029699   0.0000000042074
    10⁻⁹    0.540302358409406    0.000000052541    0.00000000042074

Down to h ≈ 10⁻⁷ the absolute error tracks 0.42074 h closely; for the last two rows it does not.

Increasing Accuracy

Let's try this with something less familiar: the Bessel function J₂(x) has the derivative

    d/dx J₂(x) = J₁(x) - (2/x) J₂(x)

These functions are implemented in Matlab as:

    J₂(x)   besselj( 2, x )
    J₁(x)   besselj( 1, x )
    J₀(x)   besselj( 0, x )

Bessel functions appear any time you are dealing with electromagnetic fields in cylindrical coordinates.

Increasing Accuracy

    >> x = 6.568;
    >> besselj( 1, x ) - 2*besselj( 2, x )/x
    ans = -0.039675290223248

    h       Approximation          Absolute Error     0.144008 h
    1        0.067826442017785     0.133992           0.144008
    0.1     -0.025284847088251     0.0143904          0.0144008
    0.01    -0.038235218035143     0.00144007         0.00144008
    10⁻³    -0.039531281976313     0.000144008        0.000144008
    10⁻⁴    -0.039660889397664     0.0000144008       0.0000144008
    10⁻⁵    -0.039673850132926     0.00000144009      0.00000144008
    10⁻⁶    -0.039675146057405     0.000000144166     0.000000144008
    10⁻⁷    -0.039675276397588     0.0000000183257    0.0000000144008
    10⁻⁸    -0.039675285279372     0.00000000494388   0.00000000144008
    10⁻⁹    -0.039675318586063     0.0000000283628    0.000000000144008
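The exact value and the approximations above can be reproduced with a few lines of Matlab (a sketch using the built-in besselj):

    >> x = 6.568;
    >> exact = besselj( 1, x ) - 2*besselj( 2, x )/x;
    >> for i = 0:9
           h = 10^(-i);
           approx = (besselj( 2, x + h ) - besselj( 2, x ))/h;
           fprintf( 'h = 1e-%d: %.15f (error %.5g)\n', i, approx, abs( approx - exact ) );
       end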

Increasing Accuracy

We could use a rule of thumb: use h = 10⁻⁸. It appears to work… Unfortunately:
- It is not always the best approximation
- It may not give us sufficient accuracy
- We still don't understand why our approximation breaks down…

Decreasing Precision

Suppose we want 10 digits of accuracy in our answer:
- If h = 0.01, we need 12 digits when calculating sin(1.01) and sin(1)
- If h = 0.00001, we need 15 digits when calculating sin(1.00001) and sin(1)

The subtraction cancels the leading digits that the two values share, and dividing by h only rescales what is left, so every leading zero of h is one more digit we must carry.

Decreasing Precision

Suppose we want 10 digits of accuracy in our answer: if h = 10⁻¹², we need 22 digits when calculating sin(1 + h) and sin(1). Matlab, however, uses double-precision floating-point numbers, which carry a maximum of about 16 decimal digits:

    >> format long
    >> sin( 1 + 1e-12 )
    ans = 0.841470984808437
    >> sin( 1 )
    ans = 0.841470984807897

Decreasing Precision

Because of the limitations of doubles, our approximation at h = 10⁻¹² is effectively

    (0.841470984808437 - 0.841470984807897)/10⁻¹² = 0.540000000000000

Only a few significant digits survive the subtraction. Note: this is not entirely true because Matlab uses base 2 and not base 10, but the analogy is faithful…
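We can see this directly in Matlab; the result (which appears in the earlier table) has only about four correct digits:

    >> (sin(1 + 1e-12) - sin(1))/1e-12
    ans = 0.540345546085064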

Decreasing Precision

We can view this using the binary representation of doubles:

    >> format hex
    >> cos( 1 )
    ans = 3fe14a280fb5068c

    3    f    e    1    4    a    2    8    0    f    b    5    0    6    8    c
    0011 1111 1110 0001 0100 1010 0010 1000 0000 1111 1011 0101 0000 0110 1000 1100

    = 1.0001010010100010100000001111101101010000011010001100 × 2^(01111111110₂ - 01111111111₂)
    = 1.0001010010100010100000001111101101010000011010001100 × 2⁻¹
    = 0.10001010010100010100000001111101101010000011010001100

Decreasing Precision

From this, we see:

    0.10001010010100010100000001111101101010000011010001100

    >> format long
    >> 1/2 + 1/32 + 1/128 + 1/1024 + 1/4096 + 1/65536 + 1/262144 + 1/33554432
    ans = 0.540302306413651
    >> cos( 1 )
    ans = 0.540302305868140

    >> format hex
    >> 1/2 + 1/32 + 1/128 + 1/1024 + 1/4096 + 1/65536 + 1/262144 + 1/33554432
    ans = 3fe14a2810000000
    >> cos(1)
    ans = 3fe14a280fb5068c

Decreasing Precision

    n    Approximation with h = 2⁻ⁿ (sign, exponent, mantissa bits)
    0    0 0111111101 10001010111010001001011011110010001010011101011000000
    1    0 0111111110 10011111110001001100000110000100011011000001001110100
    2    0 0111111110 11011100001100000001101111000010000011010010011110000
    3    0 0111111110 11111001000001011101110110001001110100000111000000000
    4    0 0111111111 00000011011111110110111010001101110101110100101110000
    5    0 0111111111 00000110111011011110010010001110111011111011010000000
    6    0 0111111111 00001000101000001111110010011011101000110100000000000
    7    0 0111111111 00001001011110010111100111101010111001011000110000000
    8    0 0111111111 00001001111001010111010000100110110111000100000000000
    9    0 0111111111 00001010000110110110000000010010010001101100000000000
    10   0 0111111111 00001010001101100101000110111000011001000110000000000
    11   0 0111111111 00001010010000111100100101110111001011110000000000000
    12   0 0111111111 00001010010010101000010100010001011101111000000000000
    13   0 0111111111 00001010010011011110001011001101010100110000000000000
    14   0 0111111111 00001010010011111001000110100110111011100000000000000
    15   0 0111111111 00001010010100000110100100010010101010000000000000000
    16   0 0111111111 00001010010100001101010011001000010000000000000000000
    17   0 0111111111 00001010010100010000101010100010111100000000000000000
    18   0 0111111111 00001010010100010010010110010000010000000000000000000
    19   0 0111111111 00001010010100010011001100000111000000000000000000000
    20   0 0111111111 00001010010100010011100111000010000000000000000000000
    21   0 0111111111 00001010010100010011110100100000000000000000000000000
    22   0 0111111111 00001010010100010011111011001110000000000000000000000
    23   0 0111111111 00001010010100010011111110100100000000000000000000000
    24   0 0111111111 00001010010100010100000000010000000000000000000000000
    25   0 0111111111 00001010010100010100000001000000000000000000000000000
    26   0 0111111111 00001010010100010100000001100000000000000000000000000

    cos(1):  0 0111111111 00001010010100010100000001111101101010000011010001100

Decreasing Precision

    cos(1):  0 0111111111 00001010010100010100000001111101101010000011010001100

    n    Approximation with h = 2⁻ⁿ (sign, exponent, mantissa bits)
    27   0 0111111111 00001010010100010100000010000000000000000000000000000
    28   0 0111111111 00001010010100010100000010000000000000000000000000000
    29   0 0111111111 00001010010100010100000000000000000000000000000000000
    30   0 0111111111 00001010010100010100000000000000000000000000000000000
    31   0 0111111111 00001010010100010100000000000000000000000000000000000
    32   0 0111111111 00001010010100010100000000000000000000000000000000000
    33   0 0111111111 00001010010100010100000000000000000000000000000000000
    34   0 0111111111 00001010010100010100000000000000000000000000000000000
    35   0 0111111111 00001010010100010100000000000000000000000000000000000
    36   0 0111111111 00001010010100010000000000000000000000000000000000000
    37   0 0111111111 00001010010100010000000000000000000000000000000000000
    38   0 0111111111 00001010010100000000000000000000000000000000000000000
    39   0 0111111111 00001010010100000000000000000000000000000000000000000
    40   0 0111111111 00001010010100000000000000000000000000000000000000000
    41   0 0111111111 00001010010100000000000000000000000000000000000000000
    42   0 0111111111 00001010010000000000000000000000000000000000000000000
    43   0 0111111111 00001010010000000000000000000000000000000000000000000
    44   0 0111111111 00001010000000000000000000000000000000000000000000000
    45   0 0111111111 00001010000000000000000000000000000000000000000000000
    46   0 0111111111 00001010000000000000000000000000000000000000000000000
    47   0 0111111111 00001000000000000000000000000000000000000000000000000
    48   0 0111111111 00001000000000000000000000000000000000000000000000000
    49   0 0111111111 00000000000000000000000000000000000000000000000000000
    50   0 0111111111 00000000000000000000000000000000000000000000000000000
    51   0 0111111111 00000000000000000000000000000000000000000000000000000
    52   0 0111111111 00000000000000000000000000000000000000000000000000000
    53   0 0000000000 00000000000000000000000000000000000000000000000000000
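The successive approximations can be inspected at the bit level; as a sketch, num2hex prints each double's IEEE 754 bit pattern in hexadecimal:

    >> for n = 0:53
           h = 2^(-n);
           fprintf( '%2d  %s\n', n, num2hex( (sin(1 + h) - sin(1))/h ) );
       end
    >> num2hex( cos(1) )
    ans = 3fe14a280fb5068c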

Decreasing Precision

This effect when subtracting two similar numbers is called subtractive cancellation. In industry, it is also referred to as catastrophic cancellation. Ignoring the effects of subtractive cancellation is one of the most significant sources of numerical error.

Decreasing Precision

Consequence: unlike calculus, we cannot make h arbitrarily small.

Possible solutions:
- Find better formulas
- Use completely different approaches

Better Approximations

Idea: find the line that interpolates the two points (x, u(x)) and (x + h, u(x + h)).

Better Approximations

The slope of this interpolating line is our approximation of the derivative:

    u⁽¹⁾(x) ≈ (u(x + h) - u(x))/h

Better Approximations

What happens if we find the interpolating quadratic going through the three points

    (x - h, u(x - h)),  (x, u(x)),  (x + h, u(x + h)) ?

Better Approximations

The interpolating quadratic is clearly a local approximation.

Better Approximations

The slope of the interpolating quadratic at the point x is easy to find:

    u⁽¹⁾(x) ≈ (u(x + h) - u(x - h))/(2h)

Better Approximations

The slope of the interpolating quadratic is also closer to the slope of the original function at x.

Better Approximations

Without going through the process, finding the interpolating quadratic function gives us a similar formula.

Better Approximations

Additionally, we can approximate the concavity (the 2nd derivative) at the point x by finding the concavity of the interpolating quadratic polynomial:

    u⁽²⁾(x) ≈ (u(x + h) - 2u(x) + u(x - h))/h²

Better Approximations

For those interested, Maple can be used to derive these formulas symbolically.

Better Approximations

Question: how much better are these two approximations?

Better Approximations

Using Taylor series, we have approximations for both u(x + h) and u(x - h):

    u(x + h) = u(x) + u⁽¹⁾(x) h + (1/2) u⁽²⁾(x) h² + (1/6) u⁽³⁾(ξ₊) h³
    u(x - h) = u(x) - u⁽¹⁾(x) h + (1/2) u⁽²⁾(x) h² - (1/6) u⁽³⁾(ξ₋) h³

Here, ξ₊ ∈ [x, x + h] and ξ₋ ∈ [x - h, x].

Better Approximations

Subtracting the second approximation from the first, we get

    u(x + h) - u(x - h) = 2 u⁽¹⁾(x) h + (1/6) (u⁽³⁾(ξ₊) + u⁽³⁾(ξ₋)) h³

Better Approximations

Solving the equation for the derivative, we get:

    u⁽¹⁾(x) = (u(x + h) - u(x - h))/(2h) - (1/12) (u⁽³⁾(ξ₊) + u⁽³⁾(ξ₋)) h²

Better Approximations

The critical term is the h². This says:
- If we halve h, the error goes down by a factor of 4
- If we divide h by 10, the error goes down by a factor of 100

Better Approximations

Adding the two approximations (carrying the Taylor series one further term, since the h³ terms cancel) gives

    u(x + h) + u(x - h) = 2 u(x) + u⁽²⁾(x) h² + (1/24) (u⁽⁴⁾(ξ₊) + u⁽⁴⁾(ξ₋)) h⁴

Better Approximations

Solving the equation for the 2nd derivative, we get:

    u⁽²⁾(x) = (u(x + h) - 2u(x) + u(x - h))/h² - (1/24) (u⁽⁴⁾(ξ₊) + u⁽⁴⁾(ξ₋)) h²

Better Approximations

Again, the term in the error is h². Thus, both of these formulas are reasonable approximations for the first and second derivatives.

Example

We will demonstrate this by finding the approximation of both the derivative and the 2nd derivative of u(x) = x³ e⁻⁰·⁵ˣ at x = 0.8. Using Maple, the correct values to 17 decimal digits are:

    u⁽¹⁾(0.8) = 1.1154125566033037
    u⁽²⁾(0.8) = 2.0163226984752030

Example

    h      Forward DD         Error       Centred DD         Error       Centred DD (2nd)     Error
    10⁻¹   1.216270589620254  1.0085e-1   1.115614538793770  2.0200e-4   2.013121016529673    3.2017e-3
    10⁻²   1.125495976919111  1.0083e-2   1.115414523410804  1.9668e-6   2.016290701661316    3.1997e-5
    10⁻³   1.116420737455270  1.0082e-3   1.115412576266073  1.9663e-8   2.016322378395330    3.2008e-7
    10⁻⁴   1.115513372934029  1.0082e-4   1.115412556799700  1.9340e-10  2.016322686593242    1.1882e-8
    10⁻⁵   1.115422638214847  1.0082e-5   1.115412556604301  9.9676e-13  2.016322109277269    5.8920e-7
    10⁻⁶   1.115413564789503  1.0082e-6   1.115412556651485  4.8181e-11  2.016276035021747    4.6663e-5
    10⁻⁷   1.115412656682580  1.0082e-7   1.115412555929840  6.7346e-10  2.015054789694660    1.2679e-3
    10⁻⁸   1.115412562313622  5.7103e-9   1.115412559538065  2.9348e-9   0.555111512312578    1.4612
    10⁻⁹   1.115412484598011  7.2005e-8   1.115412512353586  4.4250e-8   -55.511151231257820  57.5275

    u⁽¹⁾(0.8) = 1.1154125566033037
    u⁽²⁾(0.8) = 2.0163226984752030
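As a sketch, the table can be regenerated by applying the three divided-difference formulas directly (the exact values quoted from Maple are hard-coded for the error columns):

    >> u = @(x) x.^3 .* exp( -0.5*x );
    >> du = 1.1154125566033037;  d2u = 2.0163226984752030;
    >> x = 0.8;
    >> for i = 1:9
           h   = 10^(-i);
           fwd = (u(x + h) - u(x))/h;                 % forward divided difference
           ctr = (u(x + h) - u(x - h))/(2*h);         % centred divided difference
           c2  = (u(x + h) - 2*u(x) + u(x - h))/h^2;  % centred 2nd-derivative formula
           fprintf( '%7.0e  %.15f %.4e  %.15f %.4e  %.15f %.4e\n', ...
                    h, fwd, abs(fwd - du), ctr, abs(ctr - du), c2, abs(c2 - d2u) );
       end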

Better Approximations

To give names to these formulas:

First derivative:
- 1st-order forward divided-difference formula: (u(x + h) - u(x))/h
- 2nd-order centred divided-difference formula: (u(x + h) - u(x - h))/(2h)

Second derivative:
- 2nd-order centred divided-difference formula: (u(x + h) - 2u(x) + u(x - h))/h²

Better Approximations

Suppose, however, you don't have access to both x + h and x - h. This is often the case in a time-dependent system.

Better Approximations

Using the same idea: find the polynomial interpolating the points (t - 2Δt, u(t - 2Δt)), (t - Δt, u(t - Δt)) and (t, u(t)), but now find the slope at the right-hand point t.

Better Approximations

Using Taylor series, we have approximations for both u(t - Δt) and u(t - 2Δt):

    u(t - Δt)  = u(t) - u⁽¹⁾(t) Δt   + (1/2) u⁽²⁾(t) (Δt)²  - (1/6) u⁽³⁾(ξ₁) (Δt)³
    u(t - 2Δt) = u(t) - u⁽¹⁾(t) (2Δt) + (1/2) u⁽²⁾(t) (2Δt)² - (1/6) u⁽³⁾(ξ₂) (2Δt)³

Here, ξ₁ ∈ [t - Δt, t] and ξ₂ ∈ [t - 2Δt, t].

Better Approximations

Expand the terms (2Δt)² = 4(Δt)² and (2Δt)³ = 8(Δt)³. Now, to cancel the order (Δt)² terms, we must subtract the second equation from four times the first equation.

Better Approximations

This leaves us a formula containing the derivative:

    4u(t - Δt) - u(t - 2Δt) = 3u(t) - 2u⁽¹⁾(t) Δt - (2/3) u⁽³⁾(ξ₁) (Δt)³ + (4/3) u⁽³⁾(ξ₂) (Δt)³

Better Approximations

Solving for the derivative yields

    u⁽¹⁾(t) = (3u(t) - 4u(t - Δt) + u(t - 2Δt))/(2Δt) + (1/3) u⁽³⁾(ξ) (Δt)²

(combining the two third-derivative terms into one evaluated at some ξ ∈ [t - 2Δt, t]). This is the backward divided-difference approximation of the derivative at the point t.

Better Approximations

Comparing the error terms, we see that both are second order; the centred divided-difference formula, however, has a smaller coefficient. Question: is it smaller by a factor of ¼ or by a factor of ½?

Better Approximations

You will write four functions:

    function [dy] = D1st( u, x, h )
    function [dy] = Dc( u, x, h )
    function [dy] = D2c( u, x, h )
    function [dy] = Db( u, x, h )

that implement, respectively, the forward, centred (first derivative), centred (second derivative) and backward divided-difference formulas. Yes, they're all one line; a sketch follows.
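One possible set of implementations, consistent with the formulas derived above (each would normally live in its own file, e.g. D1st.m; the point of the lab is to write these yourself):

    function [dy] = D1st( u, x, h )
        % 1st-order forward divided-difference approximation of u'(x)
        dy = (u(x + h) - u(x))/h;
    end

    function [dy] = Dc( u, x, h )
        % 2nd-order centred divided-difference approximation of u'(x)
        dy = (u(x + h) - u(x - h))/(2*h);
    end

    function [dy] = D2c( u, x, h )
        % 2nd-order centred divided-difference approximation of u''(x)
        dy = (u(x + h) - 2*u(x) + u(x - h))/h^2;
    end

    function [dy] = Db( u, x, h )
        % 2nd-order backward divided-difference approximation of u'(x)
        dy = (3*u(x) - 4*u(x - h) + u(x - 2*h))/(2*h);
    end

These reproduce the sample outputs that follow.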

Better Approximations

For example:

    >> format long
    >> D1st( @sin, 1, 0.1 )
    ans = 0.497363752535389
    >> Dc( @sin, 1, 0.1 )
    ans = 0.539402252169760
    >> D2c( @sin, 1, 0.1 )
    ans = -0.840769992687418
    >> Db( @sin, 1, 0.1 )
    ans = 0.542307034066392

    >> D1st( @sin, 1, 0.01 )
    ans = 0.536085981011869
    >> Dc( @sin, 1, 0.01 )
    ans = 0.540293300874733
    >> D2c( @sin, 1, 0.01 )
    ans = -0.841463972572898
    >> Db( @sin, 1, 0.01 )
    ans = 0.540320525678883

Richardson Extrapolation

There is something interesting about the error terms of the centred divided-difference formulas for the 1st and 2nd derivatives. If you calculate it out, we only have every second power of h:

    (u(x + h) - u(x - h))/(2h) = u⁽¹⁾(x) + (1/3!) u⁽³⁾(x) h² + (1/5!) u⁽⁵⁾(x) h⁴ + ⋯

Richardson Extrapolation

Let's see if we can exploit this…. First, define

    D(h) = (u(x + h) - u(x - h))/(2h)

Therefore, we have

    D(h) = u⁽¹⁾(x) + a₂ h² + a₄ h⁴ + a₆ h⁶ + ⋯

Richardson Extrapolation

Let's see if we can exploit this…. A better approximation, with ¼ the leading error:

    D(h/2) = u⁽¹⁾(x) + a₂ h²/4 + a₄ h⁴/16 + ⋯

Richardson Extrapolation

Expanding the products:

    D(h)   = u⁽¹⁾(x) + a₂ h²     + a₄ h⁴      + ⋯
    D(h/2) = u⁽¹⁾(x) + (a₂/4) h² + (a₄/16) h⁴ + ⋯

Richardson Extrapolation

Now, subtract the first equation from four times the second:

    4 D(h/2) - D(h) = 3 u⁽¹⁾(x) - (3/4) a₄ h⁴ + ⋯

Richardson Extrapolation

Solving for the derivative:

    u⁽¹⁾(x) = (4 D(h/2) - D(h))/3 + (1/4) a₄ h⁴ + ⋯

By taking a linear combination of two previous approximations, we have an approximation which has an O(h⁴) error.

Richardson Extrapolation

Let's try this with the sine function at x = 1 with h = 0.01:

    D(0.01)  = 0.540293300874733
    D(0.005) ≈ 0.540300054611

Doing the math, we see neither approximation is amazing: only about five digits of accuracy in the second case (recall cos(1) = 0.540302305868140).

Richardson Extrapolation

If we calculate the linear combination, however, we get:

    (4 D(0.005) - D(0.01))/3 ≈ 0.540302305857

All we did was take a linear combination of two not-so-great approximations, and we get a very good approximation… Let's reduce h by half: since the error of the combination is O(h⁴), reducing h by half should reduce the error to roughly 1/16th.
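As a check in Matlab (a sketch; D here is the centred divided-difference formula at x = 1):

    >> D = @(h) (sin(1 + h) - sin(1 - h))/(2*h);   % centred divided difference at x = 1
    >> (4*D(0.005) - D(0.01))/3                    % extrapolated approximation
    >> cos(1)                                      % exact value, for comparison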

Richardson Extrapolation

Again, we gain more digits of accuracy… How small must h be for the plain centred divided-difference formula to give this accurate an answer? The error of that formula is approximately cos(1) h²/6, and thus we must solve

    cos(1) h²/6 = (the error achieved by the extrapolated approximation)

to get h ≈ 0.00000224.

Richardson Extrapolation

As you may guess, we could repeat this again. Suppose we are approximating some quantity f with a formula F(h), and suppose the error is O(hⁿ). Then we can write

    f = F(h)   + c hⁿ     + ⋯
    f = F(h/2) + c (h/2)ⁿ + ⋯

and now we can subtract the first formula from 2ⁿ times the second:

    2ⁿ F(h/2) - F(h) = (2ⁿ - 1) f + ⋯

Richardson Extrapolation

Solving for f, we get

    f = (2ⁿ F(h/2) - F(h))/(2ⁿ - 1) + ⋯

Note that the approximation is a weighted average of two other approximations:

    (2ⁿ F(h/2) - F(h))/(2ⁿ - 1) = F(h/2) + (F(h/2) - F(h))/(2ⁿ - 1)
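As a one-line sketch (richardson_step is a name introduced here for illustration, not part of the lab):

    >> % one Richardson step for a formula whose leading error term is c*h^n
    >> richardson_step = @(F_h, F_h2, n) (2^n*F_h2 - F_h)/(2^n - 1);
    >> D = @(h) (sin(1 + h) - sin(1 - h))/(2*h);
    >> richardson_step( D(0.01), D(0.005), 2 )   % the same combination as before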

Richardson Extrapolation

Question: Is this formula subject to subtractive cancellation?

Richardson Extrapolation

Therefore, if we know the powers of h that appear in the error of the approximation, we may apply the appropriate Richardson extrapolations… Given an initial value of h, we can define:

    R₁,₁ = D(u, x, h)
    R₂,₁ = D(u, x, h/2)
    R₃,₁ = D(u, x, h/2²)
    R₄,₁ = D(u, x, h/2³)
    R₅,₁ = D(u, x, h/2⁴)

Richardson Extrapolation

If the highest-order error is O(h²), then each subsequent approximation will have an absolute error approximately ¼ that of the previous. This applies to both centred divided-difference formulas for the 1st and 2nd derivatives.

Richardson Extrapolation

Therefore, we could now calculate further approximations according to our Richardson extrapolation formula with n = 2:

    R_{i+1,2} = (2² R_{i+1,1} - R_{i,1})/(2² - 1) = (4 R_{i+1,1} - R_{i,1})/3

Richardson Extrapolation

These values are now dropping according to O(h⁴): whatever the error is for R₂,₂, the error of R₃,₂ is 1/16th that, and the error for R₄,₂ is reduced by a further factor of 16.

Richardson Extrapolation

Replacing n with 4 in our formula, we get

    R_{i+1,3} = (2⁴ R_{i+1,2} - R_{i,2})/(2⁴ - 1) = (16 R_{i+1,2} - R_{i,2})/15

and thus we have approximations whose error is O(h⁶).

Richardson Extrapolation

Again, now the errors are dropping according to O(h⁶): each approximation has 1/64th the error of the previous. Why not give it another go?

Richardson Extrapolation

We could, again, repeat this process with n = 6:

    R_{i+1,4} = (2⁶ R_{i+1,3} - R_{i,3})/(2⁶ - 1) = (64 R_{i+1,3} - R_{i,3})/63

Thus, we would have a lower-triangular matrix of entries, of which R₅,₅ is the most accurate.

Richardson Extrapolation

You will therefore be required to write a Matlab function

    function [du] = richardson22( D, u, x, h, N_max, eps_abs )

that will implement Richardson extrapolation:
- Create an (N_max + 1) × (N_max + 1) matrix of zeros
- Calculate R₁,₁ = D(u, x, h)
- Next, create a loop that iterates a variable i from 1 to N_max and:
  - Calculates the value R_{i+1,1} = D(u, x, h/2ⁱ), and
  - Loops to calculate R_{i+1,j+1}, with j running from 1 to i, using
        R_{i+1,j+1} = (4ʲ R_{i+1,j} - R_{i,j})/(4ʲ - 1)
  - If the diagonal entries agree, |R_{i+1,i+1} - R_{i,i}| < eps_abs, returns the value R_{i+1,i+1}
- If the loop finishes and nothing was returned, throw an exception indicating that Richardson extrapolation did not converge

A sketch of one possible implementation follows.
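A minimal sketch, assuming the convergence test compares the successive diagonal entries R_{i,i} and R_{i+1,i+1}; treat it as a starting point, not the official solution:

    function [du] = richardson22( D, u, x, h, N_max, eps_abs )
        % Richardson extrapolation applied to a divided-difference formula D
        % whose error contains only even powers h^2, h^4, h^6, ...
        R = zeros( N_max + 1, N_max + 1 );
        R(1, 1) = D( u, x, h );

        for i = 1:N_max
            R(i + 1, 1) = D( u, x, h/2^i );

            for j = 1:i
                % Cancel the h^(2j) term of the error
                R(i + 1, j + 1) = ( 4^j*R(i + 1, j) - R(i, j) )/( 4^j - 1 );
            end

            % Assumed test: successive diagonal entries agree to eps_abs
            if abs( R(i + 1, i + 1) - R(i, i) ) < eps_abs
                du = R(i + 1, i + 1);
                return;
            end
        end

        error( 'Richardson extrapolation did not converge' );
    end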

Richardson Extrapolation

The accuracy is actually quite impressive:

    >> richardson22( @Dc, @sin, 1, 0.1, 5, 1e-12 )
    ans = 0.540302305868148
    >> cos( 1 )
    ans = 0.540302305868140

    >> richardson22( @Dc, @sin, 2, 0.1, 5, 1e-12 )
    ans = -0.416146836547144
    >> cos( 2 )
    ans = -0.416146836547142

    >> richardson22( @Dc, @cos, 1, 0.1, 5, 1e-12 )
    ans = -0.841470984807898
    >> -sin( 1 )
    ans = -0.841470984807897

    >> richardson22( @Dc, @cos, 2, 0.1, 5, 1e-12 )
    ans = -0.909297426825698
    >> -sin( 2 )
    ans = -0.909297426825682

Richardson Extrapolation

In reality, expecting an error as small as 10⁻¹² is sometimes too optimistic:

    >> richardson22( @D2c, @sin, 1, 0.1, 5, 1e-12 )
    ans = -0.841470984807975
    >> -sin( 1 )
    ans = -0.841470984807897

    >> richardson22( @D2c, @sin, 2, 0.1, 5, 1e-12 )
    ??? Error using ==> richardson22 at 35
    Richard extrapolation did not converge
    >> richardson22( @D2c, @sin, 2, 0.1, 5, 1e-10 )
    ans = -0.909297426827381
    >> -sin( 2 )
    ans = -0.909297426825682

    >> richardson22( @D2c, @cos, 1, 0.1, 5, 1e-12 )
    ??? Error using ==> richardson22 at 20
    Richard extrapolation did not converge
    >> richardson22( @D2c, @cos, 1, 0.1, 5, 1e-10 )
    ans = -0.540302305869316
    >> -cos( 1 )
    ans = -0.540302305868140

    >> richardson22( @D2c, @cos, 2, 0.1, 5, 1e-10 )
    ans = 0.416146836545719
    >> -cos( 2 )
    ans = 0.416146836547142

Richardson Extrapolation

The Taylor series for the backward divided-difference formula does not drop off so quickly; every power of Δt appears in its error:

    u⁽¹⁾(t) = (3u(t) - 4u(t - Δt) + u(t - 2Δt))/(2Δt) + c₂ (Δt)² + c₃ (Δt)³ + c₄ (Δt)⁴ + ⋯

Once you finish richardson22, it will be trivial to write richardson21, which is identical except that it uses the formula

    R_{i+1,j+1} = (2ʲ⁺¹ R_{i+1,j} - R_{i,j})/(2ʲ⁺¹ - 1)
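A minimal sketch of the corresponding change, assuming richardson21 shares the structure of the richardson22 sketch above:

    % Inner loop of richardson21: the backward divided-difference formula has
    % error terms in every power h^2, h^3, h^4, ..., so level j cancels the
    % h^(j + 1) term using the factor 2^(j + 1)
    for j = 1:i
        R(i + 1, j + 1) = ( 2^(j + 1)*R(i + 1, j) - R(i, j) )/( 2^(j + 1) - 1 );
    end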

Richardson Extrapolation

Question: what happens if an error is larger than that expected by Richardson extrapolation? Will this significantly affect the answer? Fortunately, each step is just a linear combination with significant weight placed on the more accurate answer, so it won't be worse than just calling, for example, Dc( u, x, h/2^N_max ).

Summary

In this topic, we've looked at approximating the derivative:
- We saw the effect of subtractive cancellation
- We found the centred divided-difference formulas by:
  - Finding an interpolating function
  - Differentiating that interpolating function
  - Evaluating it at the point where we wish to approximate the derivative
- We also found one backward divided-difference formula
- We then applied Richardson extrapolation
