1 Chapter 3 Root Finding

2 3.1 The Bisection Method Let f be a continuous function. If f(a) f(b) < 0, then there is a root of f between a and b.

3 Example 3.1 A formal statement is given in Algorithm 3.1.
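Algorithm 3.1 itself does not survive in this transcript; the following is a minimal Python sketch of the bisection idea (the function name, tolerance, and iteration cap are illustrative choices, not the book's):

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f in [a, b], assuming f is continuous and f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = 0.5 * (a + b)          # midpoint of the current bracket
        fc = f(c)
        if fc == 0.0 or 0.5 * (b - a) < tol:
            return c               # exact root, or bracket small enough
        if fa * fc < 0:            # sign change in [a, c]
            b = c
        else:                      # sign change in [c, b]
            a, fa = c, fc
    return 0.5 * (a + b)
```

Each iteration halves the bracket, which is exactly the error bound that Theorem 3.1 formalizes.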

4

5 Theorem 3.1 Bisection Convergence and Error

6

7 Bisection Method
Advantage:
A global method: it always converges, no matter how far the starting interval is from the actual root.
Disadvantages:
It cannot be used to find roots where the function is tangent to the axis and does not pass through it (e.g. at a double root, where f does not change sign).
It converges slowly compared with other methods.

8 3.2 Newton’s Method: Derivation and Examples
Newton’s method is the classic algorithm for finding roots of functions. There are two good derivations of Newton’s method: a geometric derivation and an analytic derivation.

9 Newton’s Method: Geometric Derivation

10 Newton’s Method: Geometric Derivation
The fundamental idea in Newton’s method is to use the tangent-line approximation to the function f at the point (x_n, f(x_n)). The point-slope formula for the equation of the straight line gives us y = f(x_n) + f'(x_n)(x - x_n); setting y = 0 and solving for x yields the next iterate, x_{n+1} = x_n - f(x_n)/f'(x_n). Continue the process with another tangent line to get x_{n+2}, and so on.
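In code, the iteration derived above can be sketched as follows (a minimal illustration; the names and tolerances are mine, not the book's):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly follow the tangent line to its x-intercept."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # tangent at (x, f(x)) crosses the axis at x - step
        x -= step
        if abs(step) < tol:       # successive iterates have stopped moving
            return x
    return x
```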

11 Newton’s Method: Analytic Derivation

12 Example 3.2

13

14 Newton’s Method
Advantage: very fast.
Disadvantage: not a global method.
For example: Figure 3.3 (root x = 0.5) and Figure 3.4 (root x = 0.05). In these examples the initial point must be chosen carefully; otherwise Newton’s method can cycle indefinitely, or just hop back and forth between two values. For example: consider the function shown (root x = 0).

15

16 Annotations on the iteration table: the initial value; the early iterates are wrong predictions, because the root is positive; the later iterates are very close to the actual root.

17 3.3 How to Stop Newton’s Method
Ideally, we would want to stop when the error is sufficiently small. (p. 12)

18 To make sure f(x_n) is also small enough
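One way to combine the two ideas on these slides (stop when the step is small relative to the iterate, and also check that the residual f(x_n) is small) is sketched below; this is a hedged illustration, and the exact criterion in the text (p. 12) may differ in detail:

```python
def newton_with_stopping(f, fprime, x0, eps=1e-10, max_iter=50):
    """Newton iteration stopped on a small relative step and a small residual."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        # stop only when the step is small relative to x AND f(x) is small
        if abs(step) <= eps * (abs(x) + eps) and abs(f(x)) <= eps:
            return x
    return x
```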

19 3.4 Application: Division Using Newton’s Method
The purpose is to illustrate the use of Newton’s method and the analysis of the resulting iteration. We apply Newton’s method to f(x) = a - 1/x (the standard choice), whose root is x = 1/a; here f'(x) = 1/x^2.

20 Questions: When does this iteration converge, and how fast? What initial guesses x0 will work for us? (Recall the way that a computer stores numbers.)
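For f(x) = a - 1/x, the Newton step simplifies to a division-free iteration; a minimal sketch (the function name and iteration count are my own):

```python
def reciprocal(a, x0, n=6):
    """Approximate 1/a using the division-free Newton iteration
    x_{k+1} = x_k * (2 - a * x_k); converges quadratically when 0 < x0 < 2/a."""
    x = x0
    for _ in range(n):
        x = x * (2.0 - a * x)  # only multiplications and a subtraction
    return x
```

Each iteration roughly doubles the number of correct digits, which is why a handful of iterations suffices.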

21

22 From (2.11), p. 53; initial x0, p. 56.

23 Example 3.3

24 3.5 The Newton Error Formula

25 Definition 3.1
The requirement that C be nonzero and finite forces p to be a single unique value.
Linear convergence: p = 1.
Quadratic convergence: p = 2.
Superlinear convergence: the error ratios tend to zero, but the convergence is not quadratic.

26 Example 3.6

27 3.6 Newton’s Method: Theory and Convergence
Its proof is given in the text.

28 3.7 Application: Computation of the Square Root

29 Questions: Can we find an initial guess such that Newton’s method will always converge for b on this interval? How rapidly will it converge? The Newton error formula (3.12) applied to f(x) = x^2 - b gives (3.25), and the relative error satisfies (3.26).

30 relative error

31 How to find the initial value?
One choice: the midpoint of the interval containing b.
Another choice: linear interpolation, which is possible because b is known.
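A small sketch of Newton's method for the square root (Heron's iteration), with a crude midpoint-style initial guess; the book's initial-guess analysis is more careful than this:

```python
def heron_sqrt(b, n=8):
    """Approximate sqrt(b) for b > 0 via x_{k+1} = (x_k + b / x_k) / 2."""
    x = 0.5 * (1.0 + b)        # crude initial guess (midpoint of [1, b] style)
    for _ in range(n):
        x = 0.5 * (x + b / x)  # Newton's method applied to f(x) = x^2 - b
    return x
```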

32 3.8 The Secant Method: Derivation and Examples
An obvious drawback of Newton’s method is that it requires a formula for the derivative of f. One obvious way to deal with this problem is to use an approximation to the derivative in the Newton formula, for example a difference quotient. Another method, the secant method, uses a secant line in place of the tangent line.

33 The Secant Method

34 The Secant Method

35 The Secant Method Its advantages over Newton’s method:
It does not require the derivative.
It can be coded in a way that requires only a single function evaluation per iteration; Newton’s method requires two, one for the function and one for the derivative.
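A minimal sketch of the secant iteration, coded so that each pass evaluates f only once (names and tolerances are illustrative, not the book's):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: replace f' in Newton's formula with a secant slope."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                                # flat secant line; cannot proceed
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)     # x-intercept of the secant line
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)                       # the single new evaluation
        if abs(x1 - x0) < tol:
            break
    return x1
```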

36 Example 3.7

37

38 Error Estimation The error formula for the secant method:

39 The Convergence The analysis is almost the same as for Newton’s method.

40 3.9 Fixed-point Iteration
The goal of this section is to use the added understanding of simple iteration to enhance our understanding of, and ability to solve, root-finding problems. A root of f corresponds to a fixed point of g.

41 Fixed-point Iteration
A point x with g(x) = x is called a fixed point of the function g, and an iteration of the form x_{n+1} = g(x_n) (3.33) is called a fixed-point iteration for g.
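A minimal fixed-point iteration sketch; the classic g(x) = cos x example below is my own illustration, not from the slides:

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = cos x satisfies |g'(x)| < 1 near its fixed point, so the
# iteration converges to the x with cos x = x (about 0.739085).
root = fixed_point(math.cos, 1.0)
```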

42 (Figure: the fixed point of g and the corresponding root of f.)

43

44

45

46 Example 3.8

47

48 (Figure: graph of g(x).)

49 Theorem 3.5

50 Theorem 3.5 (cont.)

51

52 3.10 Special Topics in Root-finding Methods
3.10.1 Extrapolation and Acceleration
The examples in this subsection contain some mistakes, so we skip it.

53 3.10.2 Variants of Newton’s Method
Newton’s method vs. the chord method, which freezes the derivative at the initial point: x_{n+1} = x_n - f(x_n)/f'(x_0). How much do we lose? The chord method is only linearly convergent, and only locally convergent. It is nevertheless useful in solving nonlinear systems of equations. One interesting variant of the chord method updates the point at which the derivative is evaluated, but not at every iteration.
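The chord method can be sketched as follows (a hedged illustration; here the frozen derivative value is simply passed in as a number):

```python
def chord(f, fprime_x0, x0, tol=1e-12, max_iter=200):
    """Chord method: Newton's iteration with the derivative frozen at x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime_x0   # the same slope is reused every iteration
        x -= step
        if abs(step) < tol:
            return x
    return x
```

Freezing the slope avoids re-evaluating f', at the price of linear rather than quadratic convergence.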

54 Example 3.12

55

56 Other Approximations to the Derivative
In Section 3.8 we saw a method that uses a finite-difference approximation to the derivative in Newton’s method. Such an approximation can yield only linear convergence (shown on pages 133 to 134).

57 3.10.3 The Secant Method: Theory and Convergence
The proof is shown on pages 136 to 139. You can study it by yourselves.

58 Multiple Roots So far our study of root-finding methods has assumed that the derivative of the function does not vanish at the root. What happens if the derivative does vanish at the root?

59 Example 3.13

60 L’Hopital’s Rule for forms of type 0/0

61 Another example: f(x) = 1 - x e^(1-x)

62 Another example: f(x) = 1 - x e^(1-x)
The data (Table 3.10a) suggest that both iterations are converging, but neither one is converging as rapidly as we might have expected. Can we explain this? The fact that f' vanishes at the root will have an effect on both the Newton and secant methods: the error formulas (3.12) and (3.50) and the limits (3.24) and (3.47) all require that the derivative be nonzero at the root. Can we find anything more in the way of an explanation?

63 Discussion—Newton’s Method
Assume f has a double root, so that both f and f' vanish there. Note that we no longer have a nonzero derivative at the root; therefore (according to Theorem 3.7, page 124) we no longer have quadratic convergence.

64 Discussion—Newton’s Method
If we change the Newton iteration to x_{n+1} = x_n - 2 f(x_n)/f'(x_n), we recover quadratic convergence for a double root. More generally, for a root of multiplicity m we use x_{n+1} = x_n - m f(x_n)/f'(x_n). The problem with this technique is that it requires that we know the degree of multiplicity of the root ahead of time.
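The modified iteration for a root of known multiplicity m can be sketched as follows (a hedged illustration; the guard against a vanishing derivative is my own addition):

```python
def newton_multiplicity(f, fprime, x0, m, tol=1e-12, max_iter=50):
    """Newton's method scaled for a root of multiplicity m:
    x_{n+1} = x_n - m * f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if fp == 0.0:
            return x              # derivative vanished; stop here
        step = m * f(x) / fp
        x -= step
        if abs(step) < tol:
            return x
    return x
```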

65 Discussion—Newton’s Method
So an alternative is needed: apply Newton’s method to u(x) = f(x)/f'(x), which has a simple root wherever f has a root of any multiplicity. The drawback of this method is that applying Newton’s method to u requires a formula for the second derivative of f.

66 Discussion—Newton’s Method
(3.60) (3.61)

67 Table 3.10

68 Discussion From Table 3.10, we can see that the accuracy is not as good as before. What is going on? Let’s look at a graph of the polynomial. The figure shows a plot of 8000 points from this curve on the interval [0.45, 0.55] (root = 0.5). We observe premature convergence. This is not caused by the root-finding method; it is caused by the use of finite-precision arithmetic.

69

70 3.10.5 In Search of Fast Global Convergence: Hybrid Algorithms
The bisection method is slow but steady and reliable. Newton’s method and the secant method are fast but potentially unreliable. Brent’s algorithm incorporates these basic ideas into a single algorithm (Algorithm 3.6).
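Brent's algorithm itself is considerably more refined, but its core idea (try a fast secant-type step, and fall back to bisection whenever the fast step misbehaves) can be sketched as a toy hybrid; everything below is my simplification, not Algorithm 3.6:

```python
def hybrid_root(f, a, b, tol=1e-12, max_iter=100):
    """Toy bisection/secant hybrid: keep a sign-change bracket at all times,
    try a secant step, and fall back to the midpoint if it leaves the bracket."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("need a sign change on [a, b]")
    c = 0.5 * (a + b)
    for _ in range(max_iter):
        if fb != fa:
            c = b - fb * (b - a) / (fb - fa)   # secant step from the endpoints
        else:
            c = 0.5 * (a + b)
        if not (min(a, b) < c < max(a, b)):
            c = 0.5 * (a + b)                  # bisection fallback
        fc = f(c)
        if fc == 0.0 or abs(b - a) < tol:
            return c
        if fa * fc < 0:                        # keep the half with the sign change
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c
```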

71

72 Example 3.14 (figure annotated with Step 1, Step 2(a), Step 2(b), Step 3(b), Step 3(c))

73 Another Example

