Differentiation and Richardson Extrapolation

1 Differentiation and Richardson Extrapolation
Douglas Wilhelm Harder, M.Math. LEL Department of Electrical and Computer Engineering University of Waterloo Waterloo, Ontario, Canada ece.uwaterloo.ca © 2012 by Douglas Wilhelm Harder. Some rights reserved.

2 Outline
This topic discusses numerical differentiation:
- The use of interpolation
- The centred divided-difference approximations of the derivative and second derivative
- Error analysis using Taylor series
- The backward divided-difference approximation of the derivative and its error analysis
- Richardson extrapolation

3 Outcomes Based Learning Objectives
By the end of this laboratory, you will:
- Understand how to approximate first and second derivatives
- Understand how Taylor series are used to determine the errors of various approximations
- Know how to eliminate higher-order errors using Richardson extrapolation
- Have programmed a Matlab routine with appropriate error checking and exception handling

4 Approximating the Derivative
Suppose we want to approximate the derivative
    u(1)(x) = lim[h → 0] (u(x + h) − u(x))/h

5 Approximating the Derivative
If the limit exists, this suggests that if we choose a very small h,
    u(1)(x) ≈ (u(x + h) − u(x))/h
Unfortunately, this isn't as easy as it first appears:
>> format long
>> cos(1)
ans = 0.540302305868140
>> for i = 0:20
h = 10^(-i);
(sin(1 + h) - sin(1))/h
end
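The same experiment can be reproduced outside Matlab. Here is a minimal Python sketch (the name `forward_diff` is ours, not from the lab):

```python
import math

def forward_diff(u, x, h):
    # 1st-order forward divided-difference approximation of u'(x)
    return (u(x + h) - u(x)) / h

# The exact derivative of sin at 1 is cos(1).
exact = math.cos(1.0)
for i in range(0, 21):
    h = 10.0 ** (-i)
    approx = forward_diff(math.sin, 1.0, h)
    print(f"h = 1e-{i:02d}   error = {abs(approx - exact):.3e}")
```

Running this shows the error shrinking until roughly h = 10⁻⁸ and then growing again, which is exactly the behaviour examined next.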

6 Approximating the Derivative
At first, the approximations improve: for h from 1 down to about 10⁻⁸, each division of h by 10 yields roughly one more correct digit of cos(1) = 0.540302305868140.

7 Approximating the Derivative
Then it seems to get worse: for still smaller h, the number of digits agreeing with cos(1) begins to decrease.

8 Approximating the Derivative
There are two things that must be explained:
- Why do we initially appear to gain one more digit of accuracy every time we divide h by 10?
- Why, after some point, does the accuracy decrease, ultimately rendering the approximation useless?

9 Increasing Accuracy
We will start with why the answer appears to improve. Recall Taylor's approximation:
    u(x + h) = u(x) + u(1)(x) h + (1/2) u(2)(ξ) h²
where ξ ∈ [x, x + h], that is, ξ is close to x. We will solve this equation for the derivative.

11 Increasing Accuracy
First we isolate the term u(1)(x) h:
    u(1)(x) h = u(x + h) − u(x) − (1/2) u(2)(ξ) h²

12 Increasing Accuracy
Then, divide each side by h:
    u(1)(x) = (u(x + h) − u(x))/h − (1/2) u(2)(ξ) h
Again, ξ ∈ [x, x + h], that is, ξ is close to x.

13 Increasing Accuracy
Assuming that u(2)(ξ) doesn't vary too wildly, the factor (1/2) u(2)(ξ) is approximately a constant, so the error of the approximation is approximately proportional to h:
    u(1)(x) ≈ (u(x + h) − u(x))/h + O(h)

14 Increasing Accuracy
We can easily see this is true from our first example:
    (sin(1 + h) − sin(1))/h = cos(1) − (1/2) sin(ξ) h
where ξ ∈ [1, 1 + h].

15 Increasing Accuracy
Thus, the absolute error of (u(x + h) − u(x))/h as an approximation of u(1)(x) is approximately (1/2)|u(2)(x)| h. Therefore:
- If we halve h, the absolute error should drop by approximately half
- If we divide h by 10, the absolute error should drop by a factor of approximately 10

16 Increasing Accuracy
Tabulating the absolute error of (sin(1 + h) − sin(1))/h as an approximation of cos(1) = 0.540302305868140, for h = 1, 0.1, 0.01, …, 10⁻⁹, confirms this: for example, h = 1 gives the approximation 0.067826442017785 with absolute error 0.47248, and each subsequent division of h by 10 reduces the absolute error by roughly a factor of 10.

19 Increasing Accuracy
Let's try this with something less familiar. The Bessel function J2(x) has the derivative
    J2′(x) = J1(x) − 2 J2(x)/x
These functions are implemented in Matlab as:
    J2(x)  besselj( 2, x )
    J1(x)  besselj( 1, x )
    J0(x)  besselj( 0, x )
Bessel functions appear any time you are dealing with electromagnetic fields in cylindrical coordinates.

20 Increasing Accuracy
>> x = 6.568;
>> besselj( 1, x ) - 2*besselj( 2, x )/x
Tabulating the absolute errors of the forward-difference approximation of J2′(6.568) against this value, for h = 1 down to 10⁻⁹, shows the same behaviour: each division of h by 10 reduces the absolute error by roughly a factor of 10.

21 Increasing Accuracy
We could use a rule of thumb: use h = 10⁻⁸. It appears to work. Unfortunately:
- It is not always the best approximation
- It may not give us sufficient accuracy
- We still don't understand why our approximation breaks down

22 Decreasing Precision
Suppose we want 10 digits of accuracy in our answer:
- If h = 0.01, we need 12 digits when calculating sin(1.01) and sin(1)
- If h = 10⁻⁵, we need 15 digits when calculating sin(1 + 10⁻⁵) and sin(1)

23 Decreasing Precision
If h = 10⁻¹², we need 22 digits when calculating sin(1 + h) and sin(1). Matlab, however, uses double-precision floating-point numbers, which have a maximum accuracy of roughly 16 decimal digits:
>> format long
>> sin( 1 + 1e-12 )
ans = 0.841470984808437
>> sin( 1 )
ans = 0.841470984807897

24 Decreasing Precision
Because of the limitations of doubles, sin(1 + 10⁻¹²) and sin(1) each carry only about 16 significant digits, so their difference retains only about 4 significant digits, and our approximation of the derivative can be no more accurate than that. Note: this is not entirely true because Matlab uses base 2 and not base 10, but the analogy is faithful.

25 Decreasing Precision
We can view this using the binary representation of doubles:
>> format hex
>> cos( 1 )
ans = 3fe14a280fb5068c
Here the leading 3fe encodes the sign and the exponent 2⁻¹, and 14a280fb5068c is the fractional part of the mantissa, so the stored value is 1.14a280fb5068c₁₆ × 2⁻¹.

26 Decreasing Precision
From this, we see that summing the powers of two indicated by the mantissa bits reproduces cos(1):
>> format long
>> 1/2 + 1/32 + 1/128 + 1/1024 + 1/4096 + ...
ans = 0.540302305868140
>> cos( 1 )
ans = 0.540302305868140
>> format hex
>> cos(1)
ans = 3fe14a280fb5068c

27 Decreasing Precision
Tabulating the approximations (sin(1 + h) − sin(1))/h for h = 2⁻ⁿ as n increases shows the same behaviour in binary: at first each step adds accuracy, and then the approximations degrade.

29 Decreasing Precision
This effect when subtracting two similar numbers is called subtractive cancellation. In industry, it is also referred to as catastrophic cancellation. Ignoring the effects of subtractive cancellation is one of the most significant sources of numerical error.

30 Decreasing Precision
Consequence: unlike calculus, we cannot make h arbitrarily small.
Possible solutions:
- Find better formulas
- Use completely different approaches

31 Better Approximations
Idea: find the line that interpolates the two points (x, u(x)) and (x + h, u(x + h)).

32 Better Approximations
The slope of this interpolating line is our approximation of the derivative:
    u(1)(x) ≈ (u(x + h) − u(x))/h

33 Better Approximations
What happens if we find the interpolating quadratic going through the three points (x − h, u(x − h)), (x, u(x)), and (x + h, u(x + h))?

34 Better Approximations
The interpolating quadratic is clearly a local approximation.

35 Better Approximations
The slope of the interpolating quadratic at x is easy to find:
    u(1)(x) ≈ (u(x + h) − u(x − h))/(2h)

36 Better Approximations
The slope of the interpolating quadratic is also closer to the slope of the original function at x.

37 Better Approximations
Without going through the process, finding the interpolating quadratic function gives us a similar formula.

38 Better Approximations
Additionally, we can approximate the concavity (2nd derivative) at the point x by finding the concavity of the interpolating quadratic polynomial:
    u(2)(x) ≈ (u(x + h) − 2u(x) + u(x − h))/h²

39 Better Approximations
For those interested, Maple code that constructs the interpolating polynomial and differentiates it produces these formulas.

40 Better Approximations
Question: how much better are these two approximations?

41 Better Approximations
Using Taylor series, we have approximations for both u(x + h) and u(x − h):
    u(x + h) = u(x) + u(1)(x) h + (1/2) u(2)(x) h² + (1/6) u(3)(ξ₁) h³
    u(x − h) = u(x) − u(1)(x) h + (1/2) u(2)(x) h² − (1/6) u(3)(ξ₂) h³
Here, ξ₁ ∈ [x, x + h] and ξ₂ ∈ [x − h, x].

42 Better Approximations
Subtracting the second approximation from the first, we get
    u(x + h) − u(x − h) = 2 u(1)(x) h + (1/6)(u(3)(ξ₁) + u(3)(ξ₂)) h³

43 Better Approximations
Solving the equation for the derivative, we get:
    u(1)(x) = (u(x + h) − u(x − h))/(2h) − (1/12)(u(3)(ξ₁) + u(3)(ξ₂)) h²

44 Better Approximations
The critical term is the h². This says:
- If we halve h, the error goes down by a factor of 4
- If we divide h by 10, the error goes down by a factor of 100
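This factor-of-4 behaviour can be checked numerically. A small Python sketch (names ours):

```python
import math

def centred_diff(u, x, h):
    # 2nd-order centred divided-difference approximation of u'(x)
    return (u(x + h) - u(x - h)) / (2.0 * h)

exact = math.cos(1.0)
e1 = abs(centred_diff(math.sin, 1.0, 0.10) - exact)
e2 = abs(centred_diff(math.sin, 1.0, 0.05) - exact)
print(e1 / e2)   # close to 4: halving h divides the error by about 4
```

The ratio of errors is very nearly 4, confirming the h² error term.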

45 Better Approximations
Adding the two approximations instead (carrying the Taylor series one term further), the odd-order terms cancel:
    u(x + h) + u(x − h) = 2 u(x) + u(2)(x) h² + (1/24)(u(4)(ξ₁) + u(4)(ξ₂)) h⁴

46 Better Approximations
Solving the equation for the 2nd derivative, we get:
    u(2)(x) = (u(x + h) − 2u(x) + u(x − h))/h² − (1/24)(u(4)(ξ₁) + u(4)(ξ₂)) h²

47 Better Approximations
Again, the term in the error is h². Thus, both of these formulas are reasonable approximations for the first and second derivatives.
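The second-derivative formula can be checked the same way. A Python sketch (names ours):

```python
import math

def centred_second(u, x, h):
    # 2nd-order centred divided-difference approximation of u''(x)
    return (u(x + h) - 2.0 * u(x) + u(x - h)) / (h * h)

exact = -math.sin(1.0)   # since (sin)'' = -sin
e1 = abs(centred_second(math.sin, 1.0, 0.10) - exact)
e2 = abs(centred_second(math.sin, 1.0, 0.05) - exact)
print(e1 / e2)   # again close to 4, confirming the h^2 error term
```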

48 Example
We will demonstrate this by finding the approximation of both the derivative and the 2nd derivative of u(x) = x³ e⁻⁰·⁵ˣ at x = 0.8. Using Maple, the correct values u(1)(0.8) and u(2)(0.8) can be found to 17 decimal digits.

49 Example
Absolute errors of the three approximations of u(x) = x³ e⁻⁰·⁵ˣ at x = 0.8:
h      1st-order forward   centred, 1st deriv.   centred, 2nd deriv.
10⁻¹   1.0085e-1           2.020e-4              3.2017e-3
10⁻²   1.0083e-2           1.9668e-6             3.1997e-5
10⁻³   1.0082e-3           1.9663e-8             3.2008e-7
10⁻⁴   1.0082e-4           1.9340e-10            1.1882e-8
10⁻⁵   1.0082e-5           9.9676e-13            5.8920e-7
10⁻⁶   1.0082e-6           4.8181e-11            4.6663e-5
10⁻⁷   1.0082e-7           6.7346e-10            1.2679e-3
10⁻⁸   5.7103e-9           2.9348e-9             1.4612
10⁻⁹   7.2005e-8           4.4250e-8             —
Note how the forward-difference errors fall linearly in h, the centred errors fall quadratically, and all three eventually degrade as subtractive cancellation takes over.

50 Better Approximations
To give names to these formulas:
First derivative:
- 1st-order forward divided-difference formula: (u(x + h) − u(x))/h
- 2nd-order centred divided-difference formula: (u(x + h) − u(x − h))/(2h)
Second derivative:
- 2nd-order centred divided-difference formula: (u(x + h) − 2u(x) + u(x − h))/h²

51 Better Approximations
Suppose, however, you don't have access to both x + h and x − h. This is often the case in a time-dependent system, where only the current and past values are available.

52 Better Approximations
Using the same idea: find the interpolating polynomial, but now find the slope at the right-hand point.

53 Better Approximations
Using Taylor series, we have approximations for both u(t − Δt) and u(t − 2Δt):
    u(t − Δt)  = u(t) − u(1)(t) Δt + (1/2) u(2)(t) (Δt)² − (1/6) u(3)(ξ₁) (Δt)³
    u(t − 2Δt) = u(t) − u(1)(t) (2Δt) + (1/2) u(2)(t) (2Δt)² − (1/6) u(3)(ξ₂) (2Δt)³
Here, ξ₁ ∈ [t − Δt, t] and ξ₂ ∈ [t − 2Δt, t].

54 Better Approximations
Expand the terms (2Δt)² and (2Δt)³:
    u(t − 2Δt) = u(t) − 2 u(1)(t) Δt + 2 u(2)(t) (Δt)² − (4/3) u(3)(ξ₂) (Δt)³
Now, to cancel the order (Δt)² terms, we must subtract the second equation from four times the first equation.

55 Better Approximations
This leaves us a formula containing the derivative:
    4 u(t − Δt) − u(t − 2Δt) = 3 u(t) − 2 u(1)(t) Δt + ((4/3) u(3)(ξ₂) − (2/3) u(3)(ξ₁)) (Δt)³

56 Better Approximations
Solving for the derivative yields
    u(1)(t) = (3 u(t) − 4 u(t − Δt) + u(t − 2Δt))/(2Δt) + O((Δt)²)
This is the backward divided-difference approximation of the derivative at the point t.
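This backward formula can also be verified numerically. A Python sketch (names ours):

```python
import math

def backward_diff(u, t, dt):
    # 2nd-order backward divided-difference approximation of u'(t),
    # using only the current value and two past values of u
    return (3.0*u(t) - 4.0*u(t - dt) + u(t - 2.0*dt)) / (2.0 * dt)

exact = math.cos(1.0)
e1 = abs(backward_diff(math.sin, 1.0, 0.10) - exact)
e2 = abs(backward_diff(math.sin, 1.0, 0.05) - exact)
print(e1 / e2)   # roughly 4, confirming second-order behaviour
```

The ratio is close to (though not exactly) 4, since the (Δt)³ term is larger here than for the centred formula.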

57 Better Approximations
Comparing the error terms, we see that both formulas are second order. The centred divided-difference formula, however, has a smaller coefficient: (1/6)|u(3)| h² versus (1/3)|u(3)| (Δt)² for the backward formula. Question: is that a factor of ¼ or a factor of ½?

58 Better Approximations
You will write four functions:
    function [dy] = D1st( u, x, h )
    function [dy] = Dc( u, x, h )
    function [dy] = D2c( u, x, h )
    function [dy] = Db( u, x, h )
that implement, respectively, the 1st-order forward, 2nd-order centred (1st derivative), 2nd-order centred (2nd derivative), and 2nd-order backward divided-difference formulas. Yes, they're all one line…

59 Better Approximations
For example, with format long, each of the four routines can be called with the function sin, the point x = 1, and h = 0.1, and then again with h = 0.01, and the results compared against cos(1) and −sin(1).
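For reference, the four routines can be sketched in Python as follows; the names mirror the lab's Matlab signatures, but this is our illustration, not the official solution:

```python
import math

def D1st(u, x, h):
    # 1st-order forward divided difference for u'(x)
    return (u(x + h) - u(x)) / h

def Dc(u, x, h):
    # 2nd-order centred divided difference for u'(x)
    return (u(x + h) - u(x - h)) / (2.0 * h)

def D2c(u, x, h):
    # 2nd-order centred divided difference for u''(x)
    return (u(x + h) - 2.0*u(x) + u(x - h)) / (h * h)

def Db(u, x, h):
    # 2nd-order backward divided difference for u'(x)
    return (3.0*u(x) - 4.0*u(x - h) + u(x - 2.0*h)) / (2.0 * h)

for h in (0.1, 0.01):
    print(D1st(math.sin, 1.0, h), Dc(math.sin, 1.0, h),
          D2c(math.sin, 1.0, h), Db(math.sin, 1.0, h))
```

In Matlab each body is indeed a single expression, e.g. `dy = (u(x + h) - u(x - h))/(2*h);` for Dc.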

60 Richardson Extrapolation
There is something interesting about the error terms of the centred divided-difference formulas for the 1st and 2nd derivatives: if you calculate them out, only every second power of h appears:
    u(1)(x) = (u(x + h) − u(x − h))/(2h) + c₂ h² + c₄ h⁴ + c₆ h⁶ + ⋯
where the coefficients cₖ depend on higher derivatives of u.

61 Richardson Extrapolation
Let's see if we can exploit this. First, define
    D(h) = (u(x + h) − u(x − h))/(2h)
Therefore, we have
    u(1)(x) = D(h) + c₂ h² + c₄ h⁴ + ⋯

62 Richardson Extrapolation
Replacing h by h/2 gives a better approximation with roughly ¼ the error:
    u(1)(x) = D(h/2) + c₂ (h/2)² + c₄ (h/2)⁴ + ⋯

63 Richardson Extrapolation
Expanding the products:
    u(1)(x) = D(h/2) + (c₂/4) h² + (c₄/16) h⁴ + ⋯

64 Richardson Extrapolation
Now, subtract the first equation from four times the second:
    3 u(1)(x) = 4 D(h/2) − D(h) − (3/4) c₄ h⁴ + ⋯

65 Richardson Extrapolation
Solving for the derivative:
    u(1)(x) = (4 D(h/2) − D(h))/3 − (c₄/4) h⁴ + ⋯
By taking a linear combination of two previous approximations, we have an approximation which has an O(h⁴) error.
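One Richardson step is easy to demonstrate in code. A Python sketch (names ours):

```python
import math

def Dc(u, x, h):
    # 2nd-order centred divided difference for u'(x)
    return (u(x + h) - u(x - h)) / (2.0 * h)

def richardson_step(u, x, h):
    # (4 D(h/2) - D(h)) / 3 cancels the h^2 error term,
    # leaving an O(h^4) approximation of u'(x)
    return (4.0 * Dc(u, x, h / 2.0) - Dc(u, x, h)) / 3.0

exact = math.cos(1.0)
print(abs(Dc(math.sin, 1.0, 0.01) - exact))              # about 9e-6
print(abs(richardson_step(math.sin, 1.0, 0.01) - exact)) # far smaller, O(h^4)
```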

66 Richardson Extrapolation
Let's try this with the sine function at x = 1 with h = 0.01. Doing the math, we see neither approximation D(h) nor D(h/2) is amazing: about five digits of accuracy in the second case.

67 Richardson Extrapolation
If we calculate the linear combination (4 D(h/2) − D(h))/3, however, we get many more correct digits. All we did was take a linear combination of two not-so-great approximations, and we got a very good approximation. Let's reduce h by half: if the error is O(h⁴), reducing h by half should reduce the error by a factor of 16.

68 Richardson Extrapolation
Again, we gain the predicted digits of accuracy. How small would h have to be for the plain centred formula, with its error (1/6)|u(3)(ξ)| h², to reach this accurate an answer? Solving for h shows it would have to be far smaller than the h we actually used.

69 Richardson Extrapolation
As you may guess, we could repeat this again. Suppose we are approximating some quantity f with a formula F whose error is O(hⁿ). Then we can write
    f = F(h) + c hⁿ + ⋯
    f = F(h/2) + c (h/2)ⁿ + ⋯
and now we can subtract the first formula from 2ⁿ times the second.

70 Richardson Extrapolation
Solving for f, we get
    f ≈ (2ⁿ F(h/2) − F(h))/(2ⁿ − 1)
Note that the approximation is a weighted average of two other approximations.

71 Richardson Extrapolation
Question: Is this formula subject to subtractive cancellation?

72 Richardson Extrapolation
Therefore, if we know the powers of h appearing in the error of the approximation, we may apply the appropriate Richardson extrapolations. Given an initial value of h, we can define:
    R1,1 = D(u, x, h)
    R2,1 = D(u, x, h/2)
    R3,1 = D(u, x, h/2²)
    R4,1 = D(u, x, h/2³)
    R5,1 = D(u, x, h/2⁴)

73 Richardson Extrapolation
If the highest-order error is O(h²), then each subsequent approximation in this column will have an absolute error roughly ¼ that of the previous one. This applies to both centred divided-difference formulas, for the 1st and 2nd derivatives.

74 Richardson Extrapolation
Therefore, we could now calculate further approximations according to our Richardson extrapolation formula:
    Ri+1,2 = (4 Ri+1,1 − Ri,1)/3

75 Richardson Extrapolation
These values are now converging according to O(h⁴): whatever the error is for R2,2, the error of R3,2 is 1/16th that, and the error of R4,2 is reduced by a further factor of 16.

76 Richardson Extrapolation
Replacing n with 4 in our formula, we get
    Ri+1,3 = (2⁴ Ri+1,2 − Ri,2)/(2⁴ − 1) = (16 Ri+1,2 − Ri,2)/15

77 Richardson Extrapolation
Again, the errors are now dropping as O(h⁶): each approximation has 1/64th the error of the previous. Why not give it another go?

78 Richardson Extrapolation
We could, again, repeat this process:
    Ri+1,4 = (64 Ri+1,3 − Ri,3)/63
    Ri+1,5 = (256 Ri+1,4 − Ri,4)/255
Thus, we would have a lower-triangular matrix of entries of which R5,5 is the most accurate.

79 Richardson Extrapolation
You will therefore be required to write a Matlab function
    function [du] = richardson22( D, u, x, h, N_max, eps_abs )
that implements Richardson extrapolation:
- Create an (Nmax + 1) × (Nmax + 1) matrix of zeros
- Calculate R1,1 = D(u, x, h)
- Next, create a loop that iterates a variable i from 1 to Nmax and:
  - Calculates the value Ri+1,1 = D(u, x, h/2ⁱ), and
  - Loops to calculate Ri+1,j+1, with j running from 1 to i, using
        Ri+1,j+1 = (4ʲ Ri+1,j − Ri,j)/(4ʲ − 1)
  - If |Ri+1,i+1 − Ri,i| < eps_abs, return the value Ri+1,i+1
- If the loop finishes and nothing was returned, throw an exception indicating that Richardson extrapolation did not converge
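The steps above can be sketched in Python; the names mirror the lab's Matlab signature, but this is our illustration under the stated algorithm, not the official solution (a Python exception stands in for the Matlab error):

```python
import math

def Dc(u, x, h):
    # 2nd-order centred divided difference for u'(x)
    return (u(x + h) - u(x - h)) / (2.0 * h)

def richardson22(D, u, x, h, N_max, eps_abs):
    # Richardson extrapolation for a formula D whose error contains
    # only even powers of h, as the centred formulas do.
    R = [[0.0] * (N_max + 1) for _ in range(N_max + 1)]
    R[0][0] = D(u, x, h)
    for i in range(1, N_max + 1):
        R[i][0] = D(u, x, h / 2**i)
        for j in range(1, i + 1):
            R[i][j] = (4**j * R[i][j - 1] - R[i - 1][j - 1]) / (4**j - 1)
        if abs(R[i][i] - R[i - 1][i - 1]) < eps_abs:
            return R[i][i]
    raise ArithmeticError("Richardson extrapolation did not converge")

print(richardson22(Dc, math.sin, 1.0, 0.1, 5, 1e-10))
print(math.cos(1.0))
```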

80 Richardson Extrapolation
The accuracy is actually quite impressive: with h = 0.1, N_max = 5, and eps_abs = 1e-12, richardson22 applied to the centred formulas at x = 1 and x = 2 reproduces cos(1), cos(2), −sin(1), and −sin(2) essentially to the full precision shown by format long.

81 Richardson Extrapolation
In reality, expecting an absolute error as small as 10⁻¹² is sometimes too ambitious: some of the same calls stop with
??? Error using ==> richardson22
Richardson extrapolation did not converge
yet succeed when eps_abs is relaxed to 1e-10.

82 Richardson Extrapolation
The Taylor series for the backward divided-difference formula does not drop off so quickly: its error contains every power of Δt from (Δt)² upward, not just the even powers. Once you finish richardson22, it will be trivial to write richardson21, which is identical except that it eliminates one power of h at a time rather than two.

83 Richardson Extrapolation
Question: what happens if an error is larger than that expected by Richardson extrapolation? Will this significantly affect the answer? Fortunately, each step is just a linear combination with significant weight placed on the more accurate answer, so the result won't be worse than just calling, for example, Dc( u, x, h/2^N_max ).

84 Summary
In this topic, we've looked at approximating the derivative:
- We saw the effect of subtractive cancellation
- We found the centred divided-difference formulas by finding an interpolating function, differentiating it, and evaluating the result at the point where we wish to approximate the derivative
- We also found one backward divided-difference formula
- We then applied Richardson extrapolation

