Part II: Numerical Computing Methods
Introduction to Numerical Methods
Understanding the MATLAB language is not enough for us to solve engineering problems efficiently: MATLAB only facilitates the computing process; it does not tell us how to compute. We need to learn numerical computing methods that are useful for solving engineering problems. Many of these methods are too complicated to carry out by hand, and so require tools/software like MATLAB. We will study several important numerical computing problems and use MATLAB to implement the methods.
Chapter 9: Finding the Roots of a Function (Solving f(x) = 0)
Preliminaries for root finding
Bracketing
Bisection method
Newton’s method
This covers Ch. 6 of the Numerical Methods with MATLAB: Implementation and Application book.
Root Finding: Preliminaries
Many engineering problems require solving equations that cannot be manipulated and solved analytically. An important sub-class of problems requires finding the values x such that f(x) = 0 for a given function f(x). These x values are called the zeros or roots of the equation. For example, f(x) = cos(x) − x = 0 cannot be solved analytically…
Root Finding: Preliminaries
How can we use MATLAB to solve f(x) = 0? There are several considerations first:
Is this a special function that will be evaluated often?
How much precision is needed?
How fast and robust must the method be?
Is the function a polynomial?
There is no single root-finding method that is best for all situations. We need to select the proper method for the specific problem.
Root Finding: Preliminaries
The basic root-finding strategy:
Plot the function: this provides an initial guess at the roots, reveals the shape of the function curve, and gives indications of potential problems.
Select an initial guess.
Iteratively refine the initial guess with a root-finding algorithm.
In general, we are seeking numerical solutions that are approximations to the actual roots. The iterative algorithms will NEVER produce the exact roots, but they can get arbitrarily close.
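As an illustration of the first step (a short script added here, not part of the original slides), we could plot the earlier example f(x) = cos(x) − x and read off a rough root location:

% Plot f(x) = cos(x) - x to locate the approximate root visually
x = linspace(0, 2, 200);              % search range chosen by inspection
f = cos(x) - x;                       % element-wise evaluation of f(x)
plot(x, f, 'b-', x, zeros(size(x)), 'k--')
xlabel('x'), ylabel('f(x)'), grid on
% The curve crosses zero near x = 0.74, which becomes the initial guess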
Bracketing
(Figure: a plot of f(x) crossing the x-axis at roots x_1, x_2, x_3, with f(x) < 0 on one side of each crossing and f(x) > 0 on the other.)
Before we select an initial guess, it is helpful to plot the function. The plot roughly shows where potential roots are: each root is marked by a change in the sign of f(x). We also need to find the intervals that contain the roots.
Bracketing
Bracketing is a procedure that searches a large interval of the x-axis for roots. The result is a set of subintervals that are likely to contain roots. A root is bracketed on the interval [a, b] if f(a) and f(b) have opposite signs. A sign change also occurs at a singularity, so in that case we need to check the function plot to rule out the singularity.
Bracketing
Bracketing is used to make initial guesses at the roots, not to accurately estimate the values of the roots.
Bracketing algorithm: divide the search range [x_min, x_max] into n subintervals of width dx and test each subinterval for a sign change of f(x).
Bracketing
How do we test whether f(x) changes sign on [a, b]?
One simple implementation is to check whether f(a)*f(b) < 0. A better test uses the built-in sign function, which avoids problems when the product f(a)*f(b) underflows or overflows.
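The toolbox code for this test is not reproduced on the slide; a minimal sketch, written here as a hypothetical helper function signChange, might look like:

function changes = signChange(fun, a, b)
% signChange  True if f(x) changes sign on [a, b]  (hypothetical helper)
%   Uses sign() instead of the product f(a)*f(b), which can underflow.
fa = feval(fun, a);
fb = feval(fun, b);
changes = sign(fa) ~= sign(fb);
end

For example, signChange('sin', 3, 4) returns true because sin(x) changes sign between x = 3 and x = 4.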
Bracketing
The brackPlot function is a toolbox function that automates the search for bracket intervals:
Looks for brackets of a user-defined f(x)
Plots the brackets and f(x)
Returns the brackets in a two-column matrix
Syntax: brackPlot('myFun', xmin, xmax) or brackPlot('myFun', xmin, xmax, nx)
myFun: a string variable holding the name of the function (a built-in or user-defined function)
xmin, xmax: define the search range on the x-axis
nx: (optional) the number of subintervals on [xmin, xmax] used to check for sign changes of f(x). Default: nx = 20
Bracketing
The brackPlot function is provided (download brackPlot.m from the class website). How do we call this function?
For a built-in f(x): brackPlot('sin', -4*pi, 4*pi);
For a user-defined f(x): first create an m-file that evaluates f(x), then pass its name, e.g. brackPlot('fx3', 0, 5);
The array dot operator allows the function to process both vector and scalar inputs.
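The contents of fx3.m are not shown in the slides; assuming it evaluates the example function f(x) = x − x^(1/3) − 2 used in the bisection example below, a minimal version would be:

function f = fx3(x)
% fx3  Evaluate f(x) = x - x^(1/3) - 2  (sketch of the assumed m-file)
%   The element-wise operator .^ lets x be either a scalar or a vector.
f = x - x.^(1/3) - 2;
end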
Bisection Method
The logic: given a bracketed root, repeatedly halve the interval while continuing to bracket the root.
Given an initial bracket [a, b], an improved approximation to the root is the midpoint of the interval: x_m = (a + b)/2 = a + (b − a)/2.
After finding the midpoint, test whether the function changes sign between a and x_m or between x_m and b, and keep the half-interval that still brackets the root. Bisection always converges if the original interval contains a root.
Bisection Method
(Figure: two bisection steps on a bracket [a, b]. The first midpoint is x_m1 = a + (b − a)/2; the root lies between a and x_m1, so the second midpoint x_m2 = a + (x_m1 − a)/2 is closer to the true root.)
Example: Apply Bisection to x − x^(1/3) − 2 = 0
From the brackPlot output, we know that there is ONLY one root between 3 and 4.
Example: Apply Bisection to x − x^(1/3) − 2 = 0
The demoBisect function finds the root of x − x^(1/3) − 2 = 0 using a fixed number of bisection iterations. demoBisect(a, b, n): a and b determine the initial bracket, and n is the number of iterations, e.g. demoBisect(3, 4, 15).
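demoBisect.m itself is not reproduced here; a minimal sketch of such a fixed-iteration bisection demo might be:

function xm = demoBisect(a, b, n)
% demoBisect  Fixed-iteration bisection for f(x) = x - x^(1/3) - 2  (sketch)
%   a, b  initial bracket;   n  number of bisection iterations
f = @(x) x - x.^(1/3) - 2;          % the example function
fa = f(a);
for k = 1:n
    xm = a + (b - a)/2;             % midpoint of the current bracket
    fm = f(xm);
    fprintf('%3d  %12.8f  %12.3e\n', k, xm, fm);
    if sign(fa) ~= sign(fm)         % root is in [a, xm]
        b = xm;
    else                            % root is in [xm, b]
        a = xm;  fa = fm;
    end
end
end

Calling demoBisect(3, 4, 15) prints the midpoint at each iteration as it converges toward the root near x ≈ 3.52.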
Analysis of the Bisection Method
Let δ_n be the size of the bracketing interval at the n-th stage of bisection. The size of the bracketing interval measures the error in the location of the root, since the root must lie within the interval. The bracket size is reduced by a constant factor of 2 at each step. In general, the larger n is, the closer the answer is to the actual root.
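Spelling out the halving relation (a standard consequence, not shown on the slide) gives the number of iterations needed to reach a given bracket tolerance δ_tol:
δ_n = δ_{n−1}/2 = δ_0/2^n, so δ_n ≤ δ_tol requires n ≥ log2(δ_0/δ_tol).
For example, shrinking the bracket [3, 4] (δ_0 = 1) down to δ_tol = 10^(−6) takes about log2(10^6) ≈ 20 iterations.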
Bisection: Convergence Criteria
It is extremely unlikely that a numerical procedure or algorithm will find the precise value of the root; we are “guessing” the root! We need to decide how close to the root the guess should be before stopping the search. Two criteria can be applied to decide when to stop (i.e., when the solution has converged).
Bisection: Convergence Criteria
Criterion 1: the estimate of the root changes little from iteration to iteration (tolerance on x): |x_{m,k} − x_{m,k−1}| < δ_x.
Criterion 2: the function value at the current estimate of the root is close enough to zero (tolerance on f(x)): |f(x)| < δ_f.
Bisection: Convergence Criteria
If 𝑓(𝑥) is “flat” near the root, a tolerance on 𝑓(𝑥) is satisfied for a large range of 𝑥. A tolerance on 𝑥 is more conservative. If 𝑓(𝑥) is “steep” near the root, a tolerance on 𝑥 is satisfied when |𝑓 𝑥 | is still fairly large. A tolerance on 𝑓(𝑥) is more conservative.
Bisection: A General Implementation
The bisect function implements the bisection algorithm with both of the stopping criteria.
Syntax: r = bisect(fun, xb, xtol, ftol, verbose)
fun: (string) function name
xb: initial bracket (vector [a, b])
xtol: (optional) tolerance on x (default: xtol = 5*eps)
ftol: (optional) tolerance on f(x) (default: ftol = 5*eps)
verbose: (optional) print switch (default: verbose = 0, no printing)
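The toolbox bisect.m is not listed on the slide; a minimal sketch consistent with this interface (an assumption, not the actual toolbox code) might look like:

function r = bisect(fun, xb, xtol, ftol, verbose)
% bisect  Root location by bisection with tolerances on x and f(x)  (sketch)
if nargin < 3 || isempty(xtol), xtol = 5*eps;  end
if nargin < 4 || isempty(ftol), ftol = 5*eps;  end
if nargin < 5 || isempty(verbose), verbose = 0;  end
a = xb(1);  b = xb(2);
fa = feval(fun, a);  fb = feval(fun, b);
if sign(fa) == sign(fb)
    error('Root is not bracketed by [%g, %g]', a, b);
end
while (b - a) > xtol                 % tolerance on x (bracket width)
    xm = a + (b - a)/2;              % midpoint of the current bracket
    fm = feval(fun, xm);
    if verbose
        fprintf('%12.8f  %12.3e\n', xm, fm);
    end
    if abs(fm) < ftol, break;  end   % tolerance on f(x)
    if sign(fa) ~= sign(fm)
        b = xm;                      % root is in [a, xm]
    else
        a = xm;  fa = fm;            % root is in [xm, b]
    end
end
r = a + (b - a)/2;                   % best estimate of the root
end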
Bisection: A General Implementation
How do we use the bisect function? For example:
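The call shown on the original slide did not survive extraction; a call consistent with the fx3 example and the bracket [3, 4] found earlier would be:

% Find the root of fx3 (f(x) = x - x.^(1/3) - 2) in the bracket [3, 4]
r = bisect('fx3', [3 4])            % default tolerances, no printing
% r = bisect('fx3', [3 4], 1e-8)    % explicit tolerance on x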
Bisection: A General Implementation
The xb interval in general should contain only one root! To find all roots, we should apply bisect to every bracketing interval (the following is NOT a function, just an ordinary script file).
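The script itself is not preserved in this extraction; a sketch of what it might look like, combining the brackPlot output with bisect, is:

% Script (not a function): find all roots of fx3 on [0, 5]
% Each row of xb is one bracketing interval returned by brackPlot
xb = brackPlot('fx3', 0, 5);
for k = 1:size(xb, 1)
    r = bisect('fx3', xb(k, :));
    fprintf('Root #%d is approximately %12.8f\n', k, r);
end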
Newton’s Method
Newton’s method (also known as the Newton-Raphson method) is another way to find successively better approximations to the roots of f(x) = 0. The basic principle is to approximate the function by its linearization at a point:
x_{k+1} = x_k − f(x_k)/f'(x_k)
Newton’s Method
(Figure: two Newton steps on the curve f(x). The tangent line at x_1 has slope f'(x_1) = f(x_1)/(x_1 − x_2), so the next estimate is x_2 = x_1 − f(x_1)/f'(x_1); similarly, the tangent at x_2 has slope f'(x_2) = f(x_2)/(x_2 − x_3), giving x_3 = x_2 − f(x_2)/f'(x_2).)
Newton’s Method: Algorithm
Why does this method work? For the current guess x_k, the value f(x_k) and the slope f'(x_k) predict where f(x) crosses the x-axis. If the function were a straight line, the prediction would be exact. In practice, we need to repeat the calculation x_{k+1} = x_k − f(x_k)/f'(x_k) a number of times.
Newton’s Method: Example
Solve f(x) = x − x^(1/3) − 2 = 0. The first derivative is f'(x) = 1 − (1/3)x^(−2/3). The iteration formula is:
x_{k+1} = x_k − (x_k − x_k^(1/3) − 2) / (1 − (1/3)x_k^(−2/3))
When do we stop? When the change |x_{k+1} − x_k| = |(x_k − x_k^(1/3) − 2) / (1 − (1/3)x_k^(−2/3))| is smaller than the tolerance. (See demoNewton_fx3.m)
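demoNewton_fx3.m is not reproduced in the slides; a minimal sketch of such a Newton iteration for this f(x) might be:

function x = demoNewton_fx3(x0, xtol)
% demoNewton_fx3  Newton iterations for f(x) = x - x^(1/3) - 2  (sketch)
%   x0   initial guess;   xtol   tolerance on the change in x
if nargin < 2, xtol = 5e-9;  end
x = x0;
for k = 1:25                              % safety limit on iterations
    f  = x - x^(1/3) - 2;                 % f(x_k)
    fp = 1 - (1/3)*x^(-2/3);              % f'(x_k)
    dx = f/fp;                            % Newton step
    x  = x - dx;                          % x_{k+1}
    fprintf('%3d  %16.12f  %12.3e\n', k, x, dx);
    if abs(dx) < xtol, return;  end       % stop when the step is small
end
end

Starting from demoNewton_fx3(3), the iterates converge to the same root near x ≈ 3.52 in only a few steps.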
Newton’s Method A few more issues:
Newton’s method can start with an arbitrary guess (there is no need to identify an interval that contains one root). In general, Newton’s method converges much more quickly than bisection. Newton’s method requires an analytical formula for f'(x). Sometimes the method may not converge (next slide)…
Divergence of Newton’s Method
Since x_{k+1} = x_k − f(x_k)/f'(x_k), the new guess x_{k+1} will be far from the old guess x_k whenever f'(x_k) ≈ 0. Either x_k will eventually get close to the actual root and the method will then converge (rapidly), or the iteration will not approach the root… We need to check the function behavior near the root.