Lecture 2: Numerical Differentiation. Derivative as a gradient

Presentation on theme: "Lecture 2: Numerical Differentiation. Derivative as a gradient"— Presentation transcript:

1 Numerical Methods notes by G. Houseman for EARS1160, ENVI2240, CCFD1160
Lecture 2: Numerical Differentiation. Derivative as a gradient
Taylor Series
The forward difference: 1st order accurate
Discretisation error
Round-off error
The centred difference: 2nd order accurate
Convergence
2nd and higher derivatives
The Laplacian

2 Numerical Differentiation
In this lecture we focus on numerical differentiation: given a means of evaluating the function f(x), how do we compute the first derivative f'(x) [and, if necessary, higher derivatives f''(x), f'''(x), etc.]? Differentiation may be required in order to locate function zeroes, for example, or to evaluate a derived quantity such as electric field strength from electric potential. Recall that the derivative of a function at a point is just the slope of the tangent line at that point; [f(x+h) - f(x)]/h gives an approximate value for that slope. [Figure: tangent to f(x), with the chord over the interval h approximating its slope]

3 Taylor Series
The basis of numerical differentiation is the Taylor series. In the vicinity of x0, we can write:

f(x0 + h) = f(x0) + h f'(x0) + (h^2/2!) f''(x0) + (h^3/3!) f'''(x0) + … + (h^n/n!) f^(n)(x0) + …

This is an infinite series of course, but if the derivatives are well behaved, then the factor h^n present in each term of the series causes the higher order terms to diminish rapidly in magnitude, provided h is sufficiently small.
Exercises:
(i) evaluate the Taylor series for cos(x), sin(x), tan(x), and sin(x) - tan(x), in the vicinity of x = 0.
(ii) evaluate the Taylor series for exp(x) and ln(x) in the vicinity of x = 1.
(iii) in each case, how large is the error if h = 0.1 and we truncate the series at n = 2?
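Exercise (iii) can be checked numerically. A minimal sketch for cos(x) about x0 = 0, truncated at n = 2 (the test value h = 0.1 is taken from the exercise; everything else here is illustrative):

```python
import math

# Taylor series of cos about x0 = 0, truncated at n = 2: cos(h) ~ 1 - h**2/2.
# The first neglected term, h**4/4!, predicts the size of the error.
h = 0.1
approx = 1 - h**2 / 2
error = abs(math.cos(h) - approx)
print(f"truncated series: {approx:.10f}")
print(f"math.cos(h):      {math.cos(h):.10f}")
print(f"error: {error:.3e}   h**4/24 = {h**4 / 24:.3e}")
```

The measured error agrees closely with the first neglected term, which is the point of truncation-error analysis.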

4 The Forward Difference
The Taylor Series formula suggests a simple way to compute the first derivative at the point x0. In the representation of the discrete function, h is just the sampling interval. If we neglect those terms that contain the 2nd and higher order derivatives, then we get

f'(x0) ≈ [f(x0 + h) - f(x0)]/h

though we see that there is an error whose largest component is proportional to h. We say that this formula for the forward difference is accurate to first order in h.
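A minimal sketch of the forward difference (the function name and the choice of sin as a test function are illustrative, not from the slides):

```python
import math

def forward_diff(f, x, h):
    """First-order accurate forward-difference estimate of f'(x)."""
    return (f(x + h) - f(x)) / h

# Example: derivative of sin at x = 1.0; the exact value is cos(1.0).
est = forward_diff(math.sin, 1.0, 1e-4)
print(est, math.cos(1.0))
```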

5 Discretisation Error
The difference between the actual value of the derivative and the finite difference representation of it is referred to as the discretisation error. We usually worry only about the leading term of those that have been neglected. For the forward difference:

f'(x0) - [f(x0 + h) - f(x0)]/h = -(h/2) f''(x0) - …

If the discretisation error is too large, we will not get a very accurate estimate for the derivative. One way to decrease this source of error is to decrease h: the discretisation error should decrease proportionally to h. The disadvantage of this technique is that as we decrease h, we (i) require more memory to represent the function, and more processor time to carry out operations, and (ii) run the risk of increasing computer round-off error in taking the difference of two nearly equal numbers.
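The proportionality of the error to h is easy to check empirically. In this sketch (sin is a stand-in test function), halving h roughly halves the error:

```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

# The leading discretisation error is (h/2)*f''(x), so halving h
# should roughly halve the error.
x = 1.0
errors = [abs(forward_diff(math.sin, x, h) - math.cos(x))
          for h in (0.1, 0.05, 0.025)]
print(errors)
print(errors[1] / errors[0], errors[2] / errors[1])  # both close to 0.5
```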

6 Round-off Error
Single precision in Fortran means 32 bits to store a real number: 23 bits for the mantissa, 8 for the exponent, and 1 for the sign, giving about 7 decimal digits of precision. The round-off error is the difference between the actual number and the value stored in the computer. If you have two numbers that agree to 5 significant figures and you subtract one from the other, the result is only accurate to about two significant figures, and round-off error is now at the level of 1% in the answer. At some point, if h is small enough, f(x+h) = f(x) in the computer. Thus h in f(x+h) - f(x) should not be too small, or round-off error may cause unacceptable loss of accuracy. Alternative: double precision uses 64 bits, comprising 52 for the mantissa, 11 for the exponent, and 1 for the sign: about 15-16 decimal digits of precision.
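The competition between discretisation error and round-off error can be seen directly by sweeping h downward; a sketch in double precision (Python floats), with sin as an illustrative test function:

```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

# Shrinking h first reduces the error (discretisation error ~ h), then
# increases it again once round-off in f(x+h) - f(x) dominates
# (near h ~ sqrt(machine epsilon) ~ 1e-8 in double precision).
x = 1.0
for h in (1e-2, 1e-5, 1e-8, 1e-11, 1e-14):
    err = abs(forward_diff(math.sin, x, h) - math.cos(x))
    print(f"h = {h:.0e}   error = {err:.2e}")
```

The printed errors fall and then rise again: the smallest h gives one of the worst answers.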

7 The Centred Difference
The other approach to decreasing discretisation error is to obtain a higher order approximation from the Taylor Series. By using another point from the discrete function we can eliminate the largest term in the error. Substituting -h for h in the expression for the Taylor Series gives:

f(x0 - h) = f(x0) - h f'(x0) + (h^2/2!) f''(x0) - (h^3/3!) f'''(x0) + …

The sign of each odd-order term is changed. Subtracting this expression from the original Taylor Series:

f(x0 + h) - f(x0 - h) = 2h f'(x0) + 2 (h^3/3!) f'''(x0) + …

8 2nd Order Accuracy
This expression for the first derivative:

f'(x0) = [f(x0 + h) - f(x0 - h)]/(2h) - (h^2/6) f'''(x0) - …

is second order accurate, because the leading term in the discretisation error is proportional to h^2. If h is halved, for example, the error is reduced by a factor of 4. Second order accurate approximations are preferred because they have better convergence behaviour for little cost in algorithm complexity.
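The factor-of-4 behaviour can be confirmed with a short sketch (sin is an illustrative test function):

```python
import math

def centred_diff(f, x, h):
    """Second-order accurate centred-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Halving h should cut the error by about a factor of 4.
x = 1.0
e1 = abs(centred_diff(math.sin, x, 0.1) - math.cos(x))
e2 = abs(centred_diff(math.sin, x, 0.05) - math.cos(x))
print(e1, e2, e1 / e2)   # ratio close to 4
```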

9 Higher order approximations
If we can evaluate a 2nd order accurate expression for the first derivative from 3 points (central and one either side), can we do better and evaluate a 3rd or 4th order accurate expression?
Exercise: Using the Taylor Series expressions for f(x+2h) and f(x-2h), together with the expressions for f(x+h) and f(x-h), derive an expression for the first derivative which is 4th order accurate.
The disadvantage of using these higher order expressions is that they start to get a little complicated to derive and evaluate, and special treatment is required at either end of the discrete function to deal with those points outside the range in which the function is defined (e.g., assume the function is symmetric, or antisymmetric).
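For reference, the standard five-point result of this exercise can be sketched and checked numerically (sin is an illustrative test function; deriving the coefficients is still left to the exercise):

```python
import math

def diff4(f, x, h):
    """Fourth-order accurate five-point estimate of f'(x):
    (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, h = 1.0, 0.1
err = abs(diff4(math.sin, x, h) - math.cos(x))
print(err)   # small even at h = 0.1
```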

10 Convergence
We introduced the term "convergence" last week in the context of getting a better and better approximation to the zero of a function. In numerical evaluation of derivatives, the same idea applies: as we decrease h, our estimate of the derivative should converge to the actual value of the derivative. In practice we don't know exactly how large the errors are, so we evaluate them empirically. Plotting the estimated values of the derivative versus the discretisation interval h, we expect to see the derivative values approach the axis along a quadratic curve (for a 2nd order approximation). By extrapolating to the axis we should get an accurate estimate of the derivative. [Figure: f'(x) estimates plotted against h, approaching the h = 0 axis along a quadratic curve]
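The extrapolation to h = 0 can also be done numerically (this technique is known as Richardson extrapolation; the slides describe it graphically). A sketch assuming the 2nd-order centred difference, with sin as an illustrative test function:

```python
import math

def centred_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

# For a 2nd-order formula the error ~ C*h**2, so estimates at h and h/2
# can be combined to cancel the leading error term:
#     D ~ (4*D(h/2) - D(h)) / 3
x, h = 1.0, 0.1
d1 = centred_diff(math.sin, x, h)
d2 = centred_diff(math.sin, x, h / 2)
extrap = (4 * d2 - d1) / 3
print(abs(d1 - math.cos(x)), abs(extrap - math.cos(x)))
```

The extrapolated value is far more accurate than either raw estimate, for only one extra function evaluation pair.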

11 2nd Derivatives
One approach to evaluating higher derivatives is to build on the expressions at the previous level. Another approach is simply to go back to the Taylor Series:

f(x0 + h) = f(x0) + h f'(x0) + (h^2/2!) f''(x0) + (h^3/3!) f'''(x0) + …
f(x0 - h) = f(x0) - h f'(x0) + (h^2/2!) f''(x0) - (h^3/3!) f'''(x0) + …

Add these two expressions:

f(x0 + h) + f(x0 - h) = 2 f(x0) + h^2 f''(x0) + 2 (h^4/4!) f''''(x0) + …

and thus:

f''(x0) = [f(x0 + h) - 2 f(x0) + f(x0 - h)]/h^2 - (h^2/12) f''''(x0) - …

which is also evidently accurate to second order.
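A minimal sketch of this second-derivative formula (sin is an illustrative test function; its second derivative is -sin):

```python
import math

def second_diff(f, x, h):
    """Second-order accurate centred estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x, h = 1.0, 0.01
print(second_diff(math.sin, x, h), -math.sin(x))
```

Note that the h**2 in the denominator makes this formula more sensitive to round-off error than the first-derivative formulas, so h should not be taken too small.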

12 The Laplacian in 2D
In two or more dimensions the expression for the Laplacian operator is obtained by simply generalising the expression for the 1D second derivative:

∇²f ≈ [f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 f(x, y)]/h^2

In diagrammatic form, we can represent the Laplacian operator by the spatial coefficients to be applied to the discretised function:

        1
    1  -4   1      (all divided by h^2)
        1
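This five-point stencil can be sketched in a few lines. The test function x^2 + y^2 is an illustrative choice: its Laplacian is exactly 4 everywhere, and the stencil is exact for quadratics, so the result is 4 up to round-off:

```python
# Five-point stencil for the 2D Laplacian on a square grid of spacing h.

def laplacian(grid, i, j, h):
    """Centred five-point estimate of the Laplacian at interior point (i, j)."""
    return (grid[i + 1][j] + grid[i - 1][j]
            + grid[i][j + 1] + grid[i][j - 1]
            - 4 * grid[i][j]) / h**2

h, n = 0.1, 5
# Sample f(x, y) = x**2 + y**2 on an n-by-n grid (row index i maps to x).
grid = [[(i * h)**2 + (j * h)**2 for j in range(n)] for i in range(n)]
print(laplacian(grid, 2, 2, h))
```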

13 Discrete Functions
f(x) is an arbitrary function of x, represented in the computer by a sequence of values at constant offset Δx. This representation is not ideal because: (i) round-off error means that the point values are not exact, and (ii) even if each of the discretised points is located on the precise value of the function, we have discarded the information about the function between the points. [Figure: f(x) sampled at regular intervals Δx]

14 Discrete Function Operations
The disadvantages of working with discrete functions are easily outweighed when we consider the powerful advantages that computer processing gives. Working with these functions requires, however, procedures for the standard mathematical operations: (i) function evaluation (interpolation), (ii) differentiation, and (iii) integration. In this lecture we focus on differentiation: given a discrete representation of f(x), how do we compute the first derivative f'(x), higher derivatives f''(x), f'''(x), etc., and extensions to higher dimensional problems? Differentiation may be required in order to locate function zeroes, for example, or to evaluate a derived quantity such as electric field strength, given electric potential.

