CSE 330: Numerical Methods


1 CSE 330: Numerical Methods
Lecture 2: True and Approximate Error; Finding Roots of Nonlinear Equations; Bisection Method. For the slides, thanks to Dr. S. M. Lutful Kabir, Visiting Professor, BRAC University, and Professor, BUET.

2 What is true error? True error is the difference between the true value (also called the exact value) and the approximate value: True Error = True Value − Approximate Value. Example 1: The derivative of a function f(x) at a particular value of x can be approximated by the forward-difference formula f'(x) ≈ (f(x + h) − f(x))/h. For the function given on the slide and h = 0.3, find at x = 2: a) the approximate value of f'(x), b) the true value of f'(x), and c) the true error.

3 True error for the example
The approximate value is obtained from the forward-difference formula above. The true value is obtained by differentiating the function exactly; it evaluates to 9.514. True Error = True Value − Approximate Value.
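
As a quick check of these steps, here is a minimal Python sketch. The slide's function is not reproduced in this transcript, so the code assumes f(x) = 7e^(0.5x), a choice consistent with the stated true value f'(2) ≈ 9.514.

```python
import math

def f(x):
    # Assumed example function (consistent with the slide's stated true value).
    return 7 * math.exp(0.5 * x)

def f_prime_true(x):
    # Exact derivative of the assumed function: d/dx 7e^{0.5x} = 3.5e^{0.5x}.
    return 3.5 * math.exp(0.5 * x)

x, h = 2.0, 0.3
approx = (f(x + h) - f(x)) / h       # forward-difference approximation
true = f_prime_true(x)               # true (exact) value
true_error = true - approx           # True Error = True Value - Approximate Value
rel_true_error = true_error / true   # Relative True Error

print(f"approximate f'(2)   = {approx:.4f}")
print(f"true f'(2)          = {true:.4f}")
print(f"true error          = {true_error:.4f}")
print(f"relative true error = {rel_true_error:.2%}")
```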

4 Magnitude of the true error
The magnitude of the true error by itself does not show how bad the error is. A true error may appear small: if the function in Example 1 were scaled down by a factor of 10⁻⁶, the true error in calculating f'(2) with h = 0.3 would likewise shrink to the order of 10⁻⁶. That true error is smaller, even though the two problems are similar in that they use the same function argument x = 2 and the same step size h = 0.3. This brings us to the definition of relative true error.

5 Relative True Error Relative true error, denoted by ε_t, is defined as the ratio of the true error to the true value: Relative True Error = True Error / True Value. In both cases above, the relative true error works out to the same percentage.

6 What is approximate error?
In the previous section, we discussed how to calculate true errors. Such errors can be calculated only if the true values are known. This is useful, for example, when checking whether a program is working correctly on cases where the true answer is known. Mostly, however, we will not have the luxury of knowing true values: if we knew the true values, there would be no need to compute approximate ones. So when we solve a problem numerically, we only have access to approximate values, and we need a way to quantify the error in such cases.

7 Definition of Approximate Error
Approximate error is defined as the difference between the present approximation and the previous approximation: Approximate Error = Present Approximation − Previous Approximation. Relative approximate error is defined as the ratio of the approximate error to the present approximation. In the previous example, evaluating the derivative with h = 0.3 and then with h = 0.15 gives two successive approximations, from which the relative approximate error (in percent) follows directly.
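
A similar sketch for the approximate error, again using the assumed illustrative function f(x) = 7e^(0.5x) from above (the slide's own numbers are not reproduced in this transcript):

```python
import math

def f(x):
    return 7 * math.exp(0.5 * x)   # assumed illustrative function

def fwd_diff(x, h):
    # Forward-difference approximation of f'(x) with step size h.
    return (f(x + h) - f(x)) / h

x = 2.0
present = fwd_diff(x, 0.15)    # present approximation  (h = 0.15)
previous = fwd_diff(x, 0.3)    # previous approximation (h = 0.3)

approx_error = present - previous            # Present - Previous
rel_approx_error = approx_error / present    # ratio to the present approximation

print(f"approximate error          = {approx_error:.4f}")
print(f"relative approximate error = {rel_approx_error:.2%}")
```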

8 Use relative approximate errors to minimize the error
In a numerical method that uses an iterative process, the user can calculate the relative approximate error at the end of each iteration. The user may pre-specify a minimum acceptable tolerance, called the pre-specified tolerance. If the absolute relative approximate error is less than or equal to the pre-specified tolerance, the acceptable error has been reached and no more iterations are required.
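
The sketch below illustrates this stopping rule by repeatedly halving the step size h until the absolute relative approximate error falls within a pre-specified tolerance; the function and the 1% tolerance are illustrative assumptions, not taken from the slide.

```python
import math

def f(x):
    return 7 * math.exp(0.5 * x)   # assumed illustrative function

def fwd_diff(x, h):
    return (f(x + h) - f(x)) / h

x, h = 2.0, 0.3
tol = 0.01                 # pre-specified tolerance: 1% (illustrative)
prev = fwd_diff(x, h)

while True:
    h /= 2                                   # refine the step size
    present = fwd_diff(x, h)
    rel_approx_err = abs((present - prev) / present)
    print(f"h = {h:.5f}, estimate = {present:.5f}, |ea| = {rel_approx_err:.2%}")
    if rel_approx_err <= tol:                # acceptable error reached
        break
    prev = present
```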

9 Introduction to finding Roots
Mathematical models for a wide variety of problems in science and engineering can be formulated as equations of the form f(x) = 0, where x and f(x) may be real, complex, or vector quantities. The solution process involves finding the values of x that satisfy the equation; these values are called the roots of the equation.

10 Polynomial Equations Polynomial equations are a simple class of algebraic equations, represented in the general form a_n x^n + a_(n-1) x^(n-1) + … + a_1 x + a_0 = 0. This is called an nth-degree polynomial, and it has n roots. The roots may be real and distinct, real and repeated, or complex numbers.

11 Polynomial equations (continued)
Since complex roots appear in conjugate pairs, if n is odd then the polynomial has at least one real root. For example, a cubic equation will have at least one real root, and the remaining two roots may be real or complex. Some specific examples of polynomial equations are shown on the original slide; an illustrative one is used in the sketch below.
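
For a quick numerical illustration (not taken from the slide), the sketch below applies numpy.roots to an arbitrary cubic and shows that its complex roots come in conjugate pairs while at least one root is real.

```python
import numpy as np

# Illustrative cubic x^3 - 2x^2 + x - 2 = 0 (roots: 2, +i, -i; not from the slide).
coeffs = [1, -2, 1, -2]          # coefficients from highest to lowest degree
roots = np.roots(coeffs)

for r in roots:
    r = complex(r)               # convert the numpy scalar to a built-in complex
    kind = "real" if abs(r.imag) < 1e-12 else "complex"
    print(f"{r.real:+.4f} {r.imag:+.4f}j  ({kind})")
```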

12 Transcendental Equations
A non-algebraic equation is called a transcendental equation. These involve trigonometric, exponential, and logarithmic functions. Examples of transcendental equations are given on the original slide.

13 Iterative Methods An iterative technique usually begins with an approximate value of the root, known as the initial guess, which is then successively corrected iteration by iteration on a certain mathematical basis. The iteration stops when the desired level of accuracy is obtained. Since, in many cases, an iterative method needs a large number of iterations and arithmetic operations to reach a solution, the use of computers has become inevitable to make the task simple and efficient.

14 Iterative Methods (continued)
Based on the number of guesses they use, iterative methods can be divided into two categories: bracketing methods (interpolation methods) and open-end methods (extrapolation methods). Bracketing methods start with two initial guesses that 'bracket' the root and then systematically reduce the width of the bracket until the solution is reached. Two popular methods in the bracketing category are the bisection method and the false position method. These methods rely on the assumption that the function changes sign in the vicinity of a root, as the sketch below illustrates.
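
A tiny helper, sketched below, captures this sign-change assumption; the function names and the sample cubic are illustrative, not from the slides.

```python
def changes_sign(f, a, b):
    """Return True if f takes opposite signs at a and b, i.e. the interval
    [a, b] brackets at least one root of a continuous function f."""
    return f(a) * f(b) < 0

# Example with an arbitrary continuous function (illustrative only):
g = lambda x: x**3 - x - 2
print(changes_sign(g, 1.0, 2.0))   # True: g(1) = -2, g(2) = 4
```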

15 Iterative Methods (continued)
Open-end methods use a single starting value, or two values that do not necessarily bracket the root. The following iterative methods fall under this category: the Newton-Raphson method, the secant method, Muller's method, the fixed-point method, and Bairstow's method.

16 Starting an iterative process
Before an iterative process is initiated, we have to determine either an approximate value of the root or a 'search' interval that contains a root. One simple method is to plot the function: a graphical representation not only provides a rough estimate of the root but also helps in understanding the properties of the function. Largest possible root: for a polynomial written in the general form above, the largest possible root can be estimated from the polynomial's coefficients (the slide's specific formula is not reproduced in this transcript).

17 Starting an iterative process
Search bracket: another relationship useful for determining a search interval that contains the real roots of a polynomial is a bound x_max computed from the coefficients, which gives the maximum possible absolute value of the roots. No root exceeds x_max in absolute magnitude, and thus all real roots lie within the interval (−x_max, x_max).

18 Starting an iterative process (continued)
There is another relationship that suggests an interval for the roots: all roots x satisfy an inequality involving 'max', the maximum of the absolute values |a_0|, |a_1|, |a_2|, …, |a_(n−2)|, |a_(n−1)|.

19 Example #1 Consider the polynomial equation given on the slide.
Estimate possible initial guess values. Using the largest-possible-root estimate, no root can be larger than the value 4. All roots must also satisfy the coefficient-based relation from the previous slide; therefore, all real roots lie within a bounded interval. We can use the two endpoints of that interval as initial guesses for the bracketing methods, and one of them for the open-end methods.
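
The slide's specific bound formulas are not reproduced in this transcript; the sketch below illustrates the same idea with Cauchy's bound, one standard coefficient-based bound, applied to an arbitrary cubic (illustrative only).

```python
def cauchy_bound(coeffs):
    """Cauchy's bound for a polynomial a_n x^n + ... + a_1 x + a_0.
    coeffs are ordered [a_n, ..., a_1, a_0]; all roots satisfy |x| <= bound."""
    an = abs(coeffs[0])
    return 1 + max(abs(c) for c in coeffs[1:]) / an

# Illustrative cubic x^3 - 2x^2 - 5x + 6 = 0 (roots 1, -2, 3; not from the slide).
coeffs = [1, -2, -5, 6]
b = cauchy_bound(coeffs)
print(f"all real roots lie in [{-b:.2f}, {b:.2f}]")   # candidate initial bracket
```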

20 Iteration Stopping Criterion
We must have an objective criterion for deciding when to stop the process. We may use one of the following tests: |x_(i+1) − x_i| < Ea (absolute error in x); |(x_(i+1) − x_i)/x_(i+1)| < Er with x_(i+1) ≠ 0 (relative error in x); |f(x_i)| < E (value of the function at the root). There may be situations where these tests fail. In cases where we do not know whether the process converges, we must also place a limit on the number of iterations, stopping when Iterations > N (limit on iterations).
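
A small sketch of how such tests might be coded; the tolerance names Ea, Er, E and the limit N mirror the slide, but their numeric defaults are illustrative assumptions.

```python
def should_stop(x_new, x_old, fx_new, iterations,
                Ea=1e-6, Er=1e-6, E=1e-9, N=100):
    """Illustrative stopping tests for an iterative root finder."""
    if abs(x_new - x_old) < Ea:                               # absolute error in x
        return True
    if x_new != 0 and abs((x_new - x_old) / x_new) < Er:      # relative error in x
        return True
    if abs(fx_new) < E:                                       # value of f at the root
        return True
    if iterations > N:                                        # safeguard: iteration limit
        return True
    return False
```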

21 Bisection Method One of the first numerical methods developed to find the root of a nonlinear equation f(x) = 0 was the bisection method (also called the binary-search method). The method is based on the following theorem. Theorem: An equation f(x) = 0, where f(x) is a real continuous function, has at least one root between xl and xu if f(xl)·f(xu) < 0.

22 Bisection method (continued)
Since the method is based on finding the root between two points, it falls under the category of bracketing methods. Because the root is bracketed between two points xl and xu, one can find the mid-point xm between xl and xu. This gives two new intervals: [xl, xm] and [xm, xu].

23 There may not be any roots between the two points if the function does not change sign between them.

24 If the function does not change sign between the two points, roots of the equation may still exist

25 More than one root may exist between the two points if the function changes sign between the two points.

26 Decision making process
Is the root now between xl and xm, or between xm and xu? One can check the sign of f(xl)·f(xm): if f(xl)·f(xm) < 0, the new bracket is between xl and xm; otherwise, it is between xm and xu. So you can see that you are literally halving the interval. As this process is repeated, the width of the interval [xl, xu] becomes smaller and smaller, and you close in on the root of the equation f(x) = 0.

27 Complete Algorithm for the Bisection Method
Step #1: Choose xl and xu as two guesses for the root such that f(xl)·f(xu) < 0; in other words, f(x) changes sign between xl and xu. Step #2: Estimate the root xm of the equation as the mid-point between xl and xu, xm = (xl + xu)/2. Step #3: Now check the following. If f(xl)·f(xm) < 0, the root lies between xl and xm; set xl = xl and xu = xm. If f(xl)·f(xm) > 0, the root lies between xm and xu; set xl = xm and xu = xu. If f(xl)·f(xm) = 0, the root is xm; stop the iteration.

28 Complete Algorithm for the Bisection Method (continued)
Step #4: Find the new estimate of the root, xm = (xl + xu)/2. Step #5: Find the absolute relative approximate error as |ea| = |(xm_new − xm_old)/xm_new| × 100, where xm_new is the estimated root from the present iteration and xm_old is the estimated root from the previous iteration.

29 Complete Algorithm for the Bisection Method (continued)
Step #6: Compare the absolute relative approximate error |ea| with the pre-specified relative error tolerance es. Step #7: If |ea| > es, go to Step 3; else stop the algorithm. Note: one should also check whether the number of iterations exceeds the maximum number of iterations allowed. If so, terminate the algorithm and notify the user.
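
Putting Steps 1-7 together, here is a sketch of the bisection algorithm in Python. Parameter names and defaults are illustrative; the slides do not prescribe a particular implementation.

```python
def bisection(f, xl, xu, es=0.05, max_iter=50):
    """Bisection method for f(x) = 0 on a bracket [xl, xu] with f(xl)*f(xu) < 0.
    es is the pre-specified relative error tolerance (e.g. 0.05 = 5%)."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")

    xm_old = xl
    ea = None
    for i in range(1, max_iter + 1):
        xm = (xl + xu) / 2                                   # Steps 2/4: mid-point estimate
        ea = abs((xm - xm_old) / xm) if i > 1 else None      # Step 5: |relative approx. error|

        test = f(xl) * f(xm)
        if test < 0:                     # Step 3: root lies in [xl, xm]
            xu = xm
        elif test > 0:                   # root lies in [xm, xu]
            xl = xm
        else:                            # f(xm) == 0: xm is the root
            return xm, 0.0, i

        if ea is not None and ea <= es:  # Steps 6-7: compare with the tolerance
            return xm, ea, i
        xm_old = xm

    return xm, ea, max_iter              # give up after max_iter iterations

# Example usage on an arbitrary cubic (illustrative only):
root, err, n = bisection(lambda x: x**3 - x - 2, 1.0, 2.0)
print(root, err, n)
```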

30 An Exercise A ceramic company makes floats for commodes. The floating ball has a specific gravity of 0.6 and a radius of 5.5 cm. You are asked to find the depth x to which the ball is submerged when floating in water. The equation that gives the depth x (in metres) to which the ball is submerged is x³ − 0.165x² + 3.993×10⁻⁴ = 0. Use the bisection method of finding roots of equations to find the depth to which the ball is submerged under water. Conduct three iterations to estimate the root of the above equation, and find the absolute relative approximate error at the end of each iteration.

31 Boundary of the Solution
From the physics of the problem, the ball would be submerged between x = 0 and x = 2R, where R is the radius of the ball; that is, 0 < x < 2R, or 0 < x < 0.11 m.

32 Test for the boundaries of the root
Let us assume xl = 0 and xu = 0.11. Check whether the function changes sign between xl and xu: f(0) = 3.993×10⁻⁴ > 0 and f(0.11) = −2.662×10⁻⁴ < 0, hence f(xl)·f(xu) = f(0)·f(0.11) < 0. So there is at least one root between xl and xu, that is, between 0 and 0.11.

33 Iteration 1 The estimate of the root is xm = (0 + 0.11)/2 = 0.055.
Since f(xl)·f(xm) > 0, the root is bracketed between xm and xu, that is, between 0.055 and 0.11. So the lower and upper limits of the new bracket are xl = 0.055 and xu = 0.11. At this point, the absolute relative approximate error cannot be calculated, as we do not have a previous approximation.

34 Iteration 2 The next estimate of the root is xm = (0.055 + 0.11)/2 = 0.0825.
Since f(xl)·f(xm) < 0, the root is bracketed between xl and xm, that is, between 0.055 and 0.0825. So the lower and upper limits of the new bracket are xl = 0.055 and xu = 0.0825.

35 Iteration 2 (continued)
The absolute relative approximate error at the end of Iteration 2 is |ea| = |(0.0825 − 0.055)/0.0825| × 100 = 33.33%. Let us assume that the acceptable error is 5%. Because the absolute relative approximate error after Iteration 2 is greater than 5%, the error is not yet acceptable.

36 Iteration 3 The next estimate of the root is xm = (0.055 + 0.0825)/2 = 0.06875. Since f(xl)·f(xm) < 0, the root is bracketed between xl and xm, that is, between 0.055 and 0.06875. So the lower and upper limits of the new bracket are xl = 0.055 and xu = 0.06875. The absolute relative approximate error at the end of Iteration 3 is |(0.06875 − 0.0825)/0.06875| × 100 = 20%. The absolute relative approximate error is still greater than 5%.

37 Convergence after ten iterations
Table 1: Root of f(x) = 0 as a function of the number of iterations for the bisection method.

Iteration   xl        xu        xm        |ea| %    f(xm)
1           0.00000   0.11      0.055     ----      6.655×10⁻⁵
2           0.055     0.11      0.0825    33.33     -1.622×10⁻⁴
3           0.055     0.0825    0.06875   20.00     -5.563×10⁻⁵
4           0.055     0.06875   0.06188   11.11     4.484×10⁻⁶
5           0.06188   0.06875   0.06531   5.263     -2.593×10⁻⁵
6           0.06188   0.06531   0.06359   2.702     -1.080×10⁻⁵
7           0.06188   0.06359   0.06273   1.370     -3.176×10⁻⁶
8           0.06188   0.06273   0.06230   0.6897    6.497×10⁻⁷
9           0.06230   0.06273   0.06252   0.3436    -1.265×10⁻⁶
10          0.06230   0.06252   0.06241   0.1721    -3.077×10⁻⁷
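
The table can be reproduced with a short script. The sketch below uses the float-depth equation reconstructed earlier from the specific gravity (0.6) and radius (0.055 m), f(x) = x³ − 0.165x² + 3.993×10⁻⁴, and the bracket [0, 0.11].

```python
def f(x):
    # Float-depth equation reconstructed from specific gravity 0.6 and R = 0.055 m.
    return x**3 - 0.165 * x**2 + 3.993e-4

xl, xu = 0.0, 0.11     # bracket from the physics of the problem
xm_old = None
print("iter       xl        xu        xm    |ea| %       f(xm)")
for i in range(1, 11):
    xm = (xl + xu) / 2
    ea = "  ----" if xm_old is None else f"{abs((xm - xm_old) / xm) * 100:6.4g}"
    print(f"{i:4d}  {xl:8.5f}  {xu:8.5f}  {xm:8.5f}  {ea}  {f(xm):10.3e}")
    if f(xl) * f(xm) < 0:   # root lies in [xl, xm]
        xu = xm
    else:                   # root lies in [xm, xu]
        xl = xm
    xm_old = xm
```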

38 Advantages of bisection method
Since the method brackets the root, it is guaranteed to converge. As iterations are conducted, the interval gets halved, so one can guarantee a bound on the error in the solution of the equation.

39 Drawbacks of bisection method
The convergence of the bisection method is slow, as it is simply based on halving the interval. Even if one of the initial guesses is close to the root, it can still take a large number of iterations to reach the root. If a function just touches the x-axis (Figure 6), such as f(x) = x², it will be impossible to find a lower guess xl and an upper guess xu such that f(xl)·f(xu) < 0.

40 Figure 6: The equation f(x) = x² = 0 has a single root that cannot be bracketed.

41 Drawbacks of bisection method
A singularity of a function is a point where the function becomes infinite. For a function that has a singularity and reverses sign at the singularity, the bisection method may falsely converge on the singularity rather than on a root (Figure 7). An example is f(x) = 1/x, for which xl = −2 and xu = 3 are 'valid' initial guesses in the sense that they satisfy f(xl)·f(xu) < 0. However, the function is not continuous, so the theorem guaranteeing that a root exists does not apply.
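
A short sketch of this failure mode, using f(x) = 1/x as the sign-reversing singularity assumed above: the bracketing test passes, yet the interval collapses onto the singularity rather than onto a root.

```python
def f(x):
    return 1.0 / x            # singular at x = 0, where it reverses sign

xl, xu = -2.0, 3.0
print(f(xl) * f(xu) < 0)      # True: the sign-change test is satisfied...

# ...yet bisection closes in on the singularity at x = 0, not on a root.
for _ in range(30):
    xm = (xl + xu) / 2
    if f(xl) * f(xm) < 0:
        xu = xm
    else:
        xl = xm
print(xl, xu)                 # the bracket shrinks around x = 0, where f(x) = 0 has no root
```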

42 Figure 7: The equation f(x) = 1/x = 0 has no root, but the function changes sign.

43 Flow Chart of Bisection Method
Start: define the function f(x); read xl and xu; read Elimit and max_iteration. If f(xl)·f(xu) > 0, the guesses do not bracket a root. Otherwise set iteration_count = 0 and xmold = xl, then repeat: find xm = (xl + xu)/2; if f(xl)·f(xm) > 0 set xl = xm, else set xu = xm; compute Ea = |(xm − xmold)·100/xm| and set xmold = xm; increment iteration_count. If Ea < Elimit, print the iteration count, xm and f(xm), and stop. If iteration_count exceeds max_iteration, report no convergence and stop; otherwise continue the loop.

44 Thanks

