
1 CS-110 Computational Engineering Part 3 A. Avudainayagam Department of Mathematics

2 Root Finding: f(x) = 0. Method 1: The Bisection Method. Theorem: If f(x) is continuous on [a,b] and f(a)f(b) < 0, then f(x) = 0 has at least one root in (a,b). The original slide illustrates two cases: a single root x1 in (a,b), with f(a) < 0 and f(b) > 0, and multiple roots x1, x2, x3, x4, x5 in (a,b).

3 The Method. Find an interval [x0, x1] so that f(x0)f(x1) < 0. (This may not be easy; use your knowledge of the physical phenomenon which the equation represents.) Cut the interval length in half with each iteration by examining the sign of the function at the midpoint x2 = (x0 + x1)/2. If f(x2) = 0, x2 is the root. If f(x2) ≠ 0 and f(x0)f(x2) < 0, the root lies in [x0, x2]; otherwise the root lies in [x2, x1]. Repeat the process until the interval shrinks to the desired length.

4 Number of Iterations and Error Tolerance. The length of the interval containing the root after n iterations is (x1 - x0)/2^n. We can fix the number of iterations so that the root lies within an interval of chosen length ε (the error tolerance): choose n so that (x1 - x0)/2^n ≤ ε, i.e. n ≥ log2((x1 - x0)/ε). If n satisfies this, the midpoint of the final interval lies within ε/2 of the actual root.

5 Though the root lies in a small interval, |f(x)| may not be small there if f(x) has a large slope. Conversely, if |f(x)| is small, x may not be close to the root if f(x) has a small slope. So we use both facts in the termination criterion: we choose an error tolerance ε on f(x), stopping when |f(x)| < ε, together with m, the maximum number of iterations.

6 Pseudo code (Bisection Method)
1. Input ε > 0, m > 0, x1 > x0 so that f(x0)f(x1) < 0. Compute f0 = f(x0). k = 1 (iteration count)
2. Do {
(a) Compute x2 = (x0 + x1)/2 and f2 = f(x2)
(b) If f2 f0 < 0, set x1 = x2; otherwise set x0 = x2 and f0 = f2.
(c) Set k = k + 1.
} While |f2| > ε and k ≤ m
3. Set x = x2, the root.
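The pseudocode above translates almost line for line into Python. The sketch below is illustrative, not from the slides; the function name bisect and the sample equation x^3 - x - 2 = 0 are assumptions for demonstration.

    def bisect(f, x0, x1, eps=1e-6, m=100):
        """Bisection: assumes f is continuous and f(x0)*f(x1) < 0."""
        f0 = f(x0)
        if f0 * f(x1) >= 0:
            raise ValueError("root not bracketed: need f(x0)*f(x1) < 0")
        for _ in range(m):
            x2 = (x0 + x1) / 2.0           # midpoint
            f2 = f(x2)
            if abs(f2) < eps:              # function-value tolerance met
                return x2
            if f0 * f2 < 0:                # root lies in [x0, x2]
                x1 = x2
            else:                          # root lies in [x2, x1]
                x0, f0 = x2, f2
        return x2                          # best estimate after m iterations

    # Example: root of x^3 - x - 2 on [1, 2] (near 1.5213797)
    print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0))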

7 False Position Method (Regula Falsi). Instead of bisecting the interval [x0, x1], we choose as x2 the point where the straight line through the end points meets the x-axis, and bracket the root with [x0, x2] or [x2, x1] depending on the sign of f(x2).

8 False Position Method. The straight line through (x0, f0) and (x1, f1) is y = f1 + (f1 - f0)(x - x1)/(x1 - x0). Setting y = 0 gives the new end point: x2 = x1 - f1 (x1 - x0)/(f1 - f0). The original slide shows this construction graphically.

9 False Position Method (Pseudo Code)
1. Choose ε > 0 (tolerance on |f(x)|), m > 0 (maximum number of iterations), k = 1 (iteration count), and x0, x1 so that f0 f1 < 0.
2. Do {
a. Compute x2 = x1 - f1 (x1 - x0)/(f1 - f0) and f2 = f(x2)
b. If f0 f2 < 0, set x1 = x2, f1 = f2; otherwise set x0 = x2, f0 = f2
c. k = k + 1
}
3. While (|f2| ≥ ε) and (k ≤ m)
4. x = x2, the root.

10 Newton-Raphson Method / Newton's Method. At an approximation xk to the root, the curve is approximated by the tangent to the curve at xk, and the next approximation xk+1 is the point where the tangent meets the x-axis. The original slide sketches the iterates x0, x1, x2 converging to the root of y = f(x).

11 The tangent at (xk, fk) is y = f(xk) + f'(xk)(x - xk). This tangent cuts the x-axis at xk+1 = xk - f(xk)/f'(xk). The method needs two function evaluations per iteration (f and f'). Warning: if f'(xk) is very small, the method fails.

12 Newton’s Method - Pseudo code 4. x = x 1 the root. 1. Choose  > 0 (function tolerance |f(x)| <  ) m > 0 (Maximum number of iterations) x 0 - initial approximation k - iteration count Compute f(x 0 ) 2. Do { q = f (x 0 ) (evaluate derivative at x 0 ) x 1 = x 0 - f 0 /q x 0 = x 1 f 0 = f(x 0 ) k = k+1 } 3. While (|f 0 |   ) and (k  m)

13 Getting caught in a cycle of Newton's Method: alternate iterations fall at the same two points xk and xk+1, so there is no convergence. The original slide sketches this oscillation.

14 Newton’s Method for finding the square root of a number x =  a Example : a = 5, initial approximation x 0 = 2. x 1 = 2.25 x 2 = x 3 = x 4 = f(x) = x 2 - a 2 = 0

15 The Secant Method. Newton's Method requires 2 function evaluations per iteration (f, f'). The Secant Method requires only 1 function evaluation per iteration and converges almost as fast as Newton's Method at a simple root. Start with two points x0, x1 near the root (no need to bracket the root as in the Bisection or Regula Falsi methods). xk-1 is dropped once xk+1 is obtained.

16 The Secant Method (Geometrical Construction). Two initial points x0, x1 are chosen. The next approximation x2 is the point where the straight line joining (x0, f0) and (x1, f1) meets the x-axis: x2 = x1 - f1 (x1 - x0)/(f1 - f0). Take (x1, x2) and repeat.

17 The Secant Method (Pseudo Code)
1. Choose ε > 0 (function tolerance |f(x)| ≤ ε), m > 0 (maximum number of iterations), x0, x1 (two initial points near the root), f0 = f(x0), f1 = f(x1), k = 1 (iteration count)
2. Do {
x2 = x1 - f1 (x1 - x0)/(f1 - f0)
x0 = x1, f0 = f1
x1 = x2, f1 = f(x2)
k = k + 1
}
3. While (|f1| ≥ ε) and (k ≤ m)
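A Python sketch of the secant pseudocode (the function name and the sample equation are illustrative, not from the slides):

    def secant(f, x0, x1, eps=1e-10, m=50):
        """Secant method: one new function evaluation per iteration."""
        f0, f1 = f(x0), f(x1)
        for _ in range(m):
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant line's x-intercept
            x0, f0 = x1, f1                         # drop the oldest point
            x1, f1 = x2, f(x2)
            if abs(f1) < eps:
                return x1
        return x1

    print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))   # ~1.5213797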

18 General remarks on Convergence
# The false position method generally converges faster than the bisection method (but not always; a counterexample follows).
# The bisection method and the false position method are guaranteed to converge.
# The secant method and the Newton-Raphson method are not guaranteed to converge.

19 Order of Convergence
# A measure of how fast an algorithm converges. Let α be the actual root: f(α) = 0. Let xk be the approximate root at the kth iteration, and let the error at the kth iteration be ek = |xk - α|. The algorithm converges with order p if there exists a constant λ such that ek+1 ≈ λ ek^p as k → ∞.

20 Order of Convergence of
# Bisection method: p = 1 (linear convergence)
# False position: generally superlinear (1 < p < 2)
# Secant method: p = (1 + √5)/2 ≈ 1.618 (superlinear)
# Newton-Raphson method: p = 2 (quadratic)

21 Machine Precision
# The smallest positive float εM that can be added to one and produce a sum that is greater than one.

22 Pseudo code to find 'Machine Epsilon'
1. Set εM = 1
2. Do { εM = εM / 2; x = 1 + εM }
3. While ( x > 1 )
4. εM = 2 εM
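The same loop in Python, as a minimal sketch (the final doubling undoes the one halving that overshot):

    eps = 1.0
    while 1.0 + eps > 1.0:   # keep halving while adding eps still changes 1.0
        eps /= 2.0
    eps *= 2.0               # we halved once too far; undo it
    print(eps)               # 2.220446049250313e-16 on IEEE-754 doubles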

23 Solutions of Linear Algebraic Systems. AX = B, where A is an n × n matrix, B an n × 1 vector, and X an n × 1 vector of unknowns. If |A| ≠ 0, the system has a unique solution. If |A| = 0, the matrix A is called a singular matrix. Two families of methods: # Direct Methods # Iterative Methods

24 The Augmented Matrix [A | B]

25 Elementary row operations
1. Interchange rows j and k.
2. Multiply row k by a nonzero scalar λ.
3. Add λ times row j to row k.
Use elementary row operations to transform the system so that the solution becomes apparent.

26 Diagonal Matrix (D): nonzero entries only on the diagonal. Upper Triangular Matrix (U): zero entries below the diagonal. Lower Triangular Matrix (L): zero entries above the diagonal. (The slide shows the three patterns as matrices.)

27 Gauss Elimination Method. Using row operations, the matrix is reduced to an upper triangular matrix and the solution is obtained by back substitution.

28 Partial Pivot Method
Step 1: Look for the element of largest absolute value (called the pivot) in the first column and interchange that row with the first row.
Step 2: Multiply the first row by an appropriate number and subtract it from the second row so that the first element of the second row becomes zero. Do this for the remaining rows, so that all the elements in the first column except the topmost one are zero.

29 Now look for the pivot in the second column below the first row and interchange that row with the second row. Multiply the second row by appropriate numbers and subtract it from the remaining rows (below the second row) so that the second column has zeros below the diagonal.

30 Now look for the pivot in the third column below the second row and interchange that row with the third row. Proceed until the matrix is upper triangular. The solution is then obtained by back substitution.

31 Gauss Elimination (an example). The augmented matrix and the row-reduction steps appear as matrices on the original slides.


33 Solution by back substitution: x3 = 5, x2 = (2 - 5)/1.5 = -2, x1 = (2 - 8(5) + 2)/4 = -9. So x1 = -9, x2 = -2, x3 = 5.

34 Gauss Elimination (Pseudo Code)
1. Input the n × n matrix A and the RHS vector B, and form the augmented matrix [A|B].
2. For j = 1 to n do {
(a) Compute the pivot index j ≤ p ≤ n such that |A_pj| = max over i ≥ j of |A_ij|.
(b) If A_pj = 0, print "singular matrix" and exit.
(c) If p > j, interchange rows p and j.
(d) For each i > j, subtract A_ij/A_jj times row j from row i.
}
3. Now the augmented matrix is [C|D] where C is upper triangular.
4. Solve by back substitution: for j = n down to 1, compute x_j = (D_j - Σ(k=j+1 to n) C_jk x_k) / C_jj.
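A Python sketch of this pseudocode, combining partial pivoting and back substitution (the function name gauss_solve is illustrative):

    def gauss_solve(A, B):
        """Gauss elimination with partial pivoting, then back substitution.
        A: n x n list of lists, B: length-n list. Returns the solution list x."""
        n = len(A)
        M = [row[:] + [b] for row, b in zip(A, B)]            # augmented [A|B]
        for j in range(n):
            p = max(range(j, n), key=lambda i: abs(M[i][j]))  # pivot row index
            if M[p][j] == 0:
                raise ValueError("singular matrix")
            if p != j:
                M[j], M[p] = M[p], M[j]                       # interchange rows
            for i in range(j + 1, n):
                factor = M[i][j] / M[j][j]
                for k in range(j, n + 1):                     # eliminate below pivot
                    M[i][k] -= factor * M[j][k]
        x = [0.0] * n
        for j in range(n - 1, -1, -1):                        # back substitution
            s = sum(M[j][k] * x[k] for k in range(j + 1, n))
            x[j] = (M[j][n] - s) / M[j][j]
        return x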

35 Determinant of a Matrix. Some properties of the determinant:
1. det(A^T) = det(A)
2. det(AB) = det(A) det(B)
3. For a triangular matrix U, det(U) = U11 U22 U33 ... Unn
4. Interchanging two rows, E1(k,j): det → -det A
5. Multiplying row k by λ, E2(k,λ): det → λ det A
6. Adding λ times row j to row k, E3(k,j,λ): det → det A (unchanged)

36 Matrix Determinant. The Gauss Elimination procedure uses two types of row operations: 1. interchange of rows for pivoting, and 2. adding λ times row j to row k. The first operation changes the sign of the determinant while the second has no effect on it. To obtain the determinant, follow Gauss elimination to the upper triangular form U, keeping a count r of the row interchanges. Then det A = (-1)^r U11 U22 ... Unn.

37 Matrix Determinant (Pseudo Code)
1. Set r = 0
2. For j = 1 to n do {
(a) Compute the pivot index j ≤ p ≤ n as in Gauss elimination.
(b) If A_pj = 0, print "Singular Matrix" and exit.
(c) If p > j, interchange rows p and j and set r = r + 1.
(d) For each i > j, subtract A_ij/A_jj times row j from row i.
}
3. det = (-1)^r A11 A22 ... Ann

38 Matrix Determinant (example). After elimination with one row interchange, the diagonal entries are 1, 4 and 5.5, so det = (-1)^1 (1)(4)(5.5) = -22. (The matrix A and its reduced form appear on the slide.)

39 LU Decomposition. When Gauss elimination is used to solve a linear system, the row operations are applied simultaneously to the RHS vector. If the system is solved again for a different RHS vector, the row operations must be repeated. The LU decomposition eliminates the need to repeat the row operations each time a new RHS vector is used.

40 The matrix A (in AX = B) is written as the product of a lower triangular matrix L and an upper triangular matrix U. The diagonal elements of the upper triangular matrix are chosen to be 1 to get a consistent system of equations. For n = 3, A = LU gives 9 equations for the 9 unknowns L11, L21, L22, L31, L32, L33, U12, U13, U23.

41 The first column gives (L11, L21, L31) = (A11, A21, A31). The first row gives (U11, U12, U13) = (1, A12/L11, A13/L11). The second column gives (L12, L22, L32) = (0, A22 - L21 U12, A32 - L31 U12). The second row gives (U21, U22, U23) = (0, 1, (A23 - L21 U13)/L22). The third column gives (L13, L23, L33) = (0, 0, A33 - L31 U13 - L32 U23).

42 LU Decomposition (Pseudo Code)
L_i1 = A_i1, i = 1, 2, ..., n
U_1j = A_1j / L_11, j = 2, 3, ..., n
For j = 2, 3, ..., n:
  L_ij = A_ij - Σ(k=1 to j-1) L_ik U_kj, i = j, j+1, ..., n
  U_jk = (A_jk - Σ(m=1 to j-1) L_jm U_mk) / L_jj, k = j+1, ..., n
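A minimal Python sketch of this (Crout-style) decomposition, without pivoting for clarity; a practical version would pivot as slide 47 notes (the function name crout_lu is illustrative):

    def crout_lu(A):
        """Factor A = L U with the unit diagonal on U (Crout decomposition)."""
        n = len(A)
        L = [[0.0] * n for _ in range(n)]
        U = [[float(i == j) for j in range(n)] for i in range(n)]  # unit diagonal
        for j in range(n):
            for i in range(j, n):          # column j of L
                L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
            for i in range(j + 1, n):      # row j of U (divide by the pivot L[j][j])
                U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
        return L, U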

43 LU Decomposition (Example). (The matrices A = LU appear on the original slide.)

44 The values of U are stored in the zero space of L: Q = [L \ U]. After each element of A is employed, it is not needed again, so Q can be stored in place of A.

45 After the LU decomposition of the matrix A, the system AX = B is solved in 2 steps: (1) forward substitution, (2) backward substitution. AX = LUX = B. Put UX = Y, so LY = B.
Forward substitution:
Y1 = B1 / L11
Y2 = (B2 - L21 Y1) / L22
Y3 = (B3 - L31 Y1 - L32 Y2) / L33

46 UX = Y. Back substitution (U has a unit diagonal):
x3 = Y3
x2 = Y2 - x3 U23
x1 = Y1 - x2 U12 - x3 U13
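Both substitution passes in Python, as a sketch that pairs with crout_lu above (the name lu_solve is illustrative):

    def lu_solve(L, U, B):
        """Solve L U x = B: forward substitution for y, then back substitution."""
        n = len(B)
        y = [0.0] * n
        for i in range(n):                      # L y = B
            s = sum(L[i][k] * y[k] for k in range(i))
            y[i] = (B[i] - s) / L[i][i]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):          # U x = y, with U[i][i] == 1
            s = sum(U[i][k] * x[k] for k in range(i + 1, n))
            x[i] = y[i] - s
        return x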

47  As in Gauss elimination, LU decomposition must employ pivoting to avoid division by zero and to minimize round-off errors. The pivoting is done immediately after computing each column.

48 Matrix Inverse Using the LU Decomposition LU decomposition can be used to obtain the inverse of the original coefficient matrix. Each column j of the inverse is determined by using a unit vector (with 1 in the jth row ) as the RHS vector

49 Matrix inverse using LU decomposition: an example. Write A = LU. The first column of the inverse of A is obtained by solving LUX = (1, 0, 0)^T. The second and third columns are obtained by taking the RHS vector as (0, 1, 0)^T and (0, 0, 1)^T. (The numerical matrices appear on the slide.)
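A sketch of the column-by-column idea in Python, reusing the crout_lu and lu_solve sketches above; the point is that A is factored once and only the cheap substitution passes repeat per column:

    def lu_inverse(A):
        """Inverse column by column: solve A x = e_j for each unit vector e_j."""
        n = len(A)
        L, U = crout_lu(A)                  # factor once, reuse for every column
        cols = [lu_solve(L, U, [float(i == j) for i in range(n)])
                for j in range(n)]
        # cols[j] is column j of the inverse; transpose into row-major form
        return [[cols[j][i] for j in range(n)] for i in range(n)]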

50 Iterative Method for solution of Linear systems Jacobi iteration Gauss - Seidel iteration

51 Jacobi Iteration. AX = B; for n = 3:
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3
Preprocessing: 1. Arrange the equations so that the diagonal terms of the coefficient matrix are not zero. 2. Make row transformations if necessary to make the diagonal elements as large as possible.

52 Rewrite the equations as
x1 = (1/a11)(b1 - a12 x2 - a13 x3)
x2 = (1/a22)(b2 - a21 x1 - a23 x3)
x3 = (1/a33)(b3 - a31 x1 - a32 x2)
Choose an initial approximation vector (x1(0), x2(0), x3(0)); if no prior information is available, (0, 0, 0) will do. Iterate: xi(k+1) = (1/aii)(bi - Σ(j≠i) aij xj(k)), 1 ≤ i ≤ n. Stop when max_i |xi(k+1) - xi(k)| < ε. This is the Jacobi iteration.
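A Python sketch of the Jacobi iteration as just described (the function name and tolerances are illustrative):

    def jacobi(A, B, eps=1e-10, m=500):
        """Jacobi iteration; works best when A is diagonally dominant."""
        n = len(B)
        x = [0.0] * n                          # initial guess: zero vector
        for _ in range(m):
            x_new = [(B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                     / A[i][i] for i in range(n)]
            if max(abs(x_new[i] - x[i]) for i in range(n)) < eps:
                return x_new
            x = x_new                          # whole vector updated at once
        return x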

53 Diagonally Dominant Matrices: |aii| > Σ(j≠i) |aij| for each row i, i.e., the diagonal element of each row is larger than the sum of the absolute values of the other elements in the row. Diagonal dominance is a sufficient but not necessary condition for the convergence of the Jacobi or Gauss-Seidel iteration.

54 The Gauss-Seidel iteration. In the Jacobi iteration, all the components of the new estimate x(k+1) are computed from the current estimate x(k). However, when xi(k+1) is computed, the updated estimates x1(k+1), ..., x(i-1)(k+1) are already available. The Gauss-Seidel iteration takes advantage of the latest information available when updating an estimate.

55 Gauss-Seidel vs. Jacobi (each iteration; primes denote the updated values):
Jacobi:
x1' = (b1 - a12 x2 - a13 x3) / a11
x2' = (b2 - a21 x1 - a23 x3) / a22
x3' = (b3 - a31 x1 - a32 x2) / a33
Gauss-Seidel:
x1' = (b1 - a12 x2 - a13 x3) / a11
x2' = (b2 - a21 x1' - a23 x3) / a22
x3' = (b3 - a31 x1' - a32 x2') / a33
Each subsequent iteration repeats the same pattern, Gauss-Seidel always using the newest values available.
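The corresponding Python sketch; the only change from the Jacobi sketch above is that components are updated in place, so later rows immediately see the new values:

    def gauss_seidel(A, B, eps=1e-10, m=500):
        """Gauss-Seidel: update components in place, using the newest values."""
        n = len(B)
        x = [0.0] * n
        for _ in range(m):
            delta = 0.0
            for i in range(n):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                new = (B[i] - s) / A[i][i]
                delta = max(delta, abs(new - x[i]))
                x[i] = new                 # available immediately to later rows
            if delta < eps:
                return x
        return x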

56 Matrix Condition Number. Vector and matrix norms: a norm is a real-valued function that gives a measure of the size or length of a vector or matrix. Example: the length √(a² + b² + c²) of the vector A = (a, b, c) is a norm. For the n-dimensional vector A = (x1, x2, ..., xn), ||A|| = (Σ(i=1 to n) xi²)^(1/2). For the matrix with entries Aij, ||A|| = (Σi Σj Aij²)^(1/2). These are called Euclidean norms.

57 Other Useful Norms
For vectors:
- the p-norm: ||x||p = (Σi |xi|^p)^(1/p)
- the 1-norm (sum of the absolute values of the elements): ||x||1 = Σi |xi|
- the maximum-magnitude norm: ||x||∞ = maxi |xi|
For matrices:
- the column-sum norm: ||A||1 = maxj Σi |Aij|
- the row-sum norm: ||A||∞ = maxi Σj |Aij|

58 Properties of norms
1. || x || > 0 for x ≠ 0
2. || x || = 0 for x = 0
3. || αx || = |α| · || x ||
4. || x + y || ≤ || x || + || y ||

59 Ill Conditioning. Consider the system shown on the slide, whose exact solution is x1 = 201, x2 = 100. Now consider a slightly perturbed system: A11 is increased by 1 percent and A22 is decreased by about 0.25 percent. The solution of the perturbed system has x1 = 50: nearly a 75% change in the solution from a tiny change in the coefficients.

60  When the solution of a system is highly sensitive to the values of the coefficient matrix or the RHS vector, we say that the system (or the matrix) is ill-conditioned.  Ill-conditioned systems result in large accumulated round-off errors.

61 Condition number of a Matrix: k(A) = || A || || A⁻¹ ||. This is a measure of the relative sensitivity of the solution to small changes in the RHS vector. If k(A) is large, the system is regarded as ill-conditioned. # The row-sum norm is usually employed to find k(A).

62 Condition Number of the Matrix (example; the matrix and its inverse appear on the slide). Row-sum norm of A: || A || = max(300, 601) = 601. Row-sum norm of A⁻¹: || A⁻¹ || = max(6.01, 3) = 6.01. Condition number: k(A) = 601 × 6.01 ≈ 3612 (large), so A is ill-conditioned.
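A short Python sketch of this computation with the row-sum norm, using NumPy for the inverse (the function name is illustrative):

    import numpy as np

    def condition_number(A):
        """k(A) = ||A|| * ||A^-1|| using the row-sum (infinity) norm."""
        A = np.asarray(A, dtype=float)
        row_sum = lambda M: np.abs(M).sum(axis=1).max()   # row-sum norm
        return row_sum(A) * row_sum(np.linalg.inv(A))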

63  Equations that are not properly scaled may give rise to large condition numbers. For the example on the slide, k(A) = || A || || A⁻¹ || = 1001 (large).

64 Equilibration: scale each row of A by its largest element. After equilibration, k(A) = || A || || A⁻¹ || = 2(1) = 2: a great improvement in the condition number.

65 Checking the answer is not enough when the condition number is high. Consider x = -0.11 and y = 0.45. The residuals are tiny even though (x, y) may be far from the true solution:
34(-0.11) + 55(0.45) - 21 = 0.01 ≈ 0
55(-0.11) + 89(0.45) - 34 = 0.00 ≈ 0

66 Interpolation. Given the data (xk, yk), k = 1, 2, 3, ..., n, find a function f passing through each of the data samples: f(xk) = yk, 1 ≤ k ≤ n. Once f is determined, we can use it to predict the value of y at points other than the samples. (The slide plots sample points P1, ..., P4 with a curve through them.)

67 Piecewise Linear Interpolation. A straight-line segment is used between each adjacent pair of data points: for xk ≤ x ≤ xk+1, f(x) = yk + (yk+1 - yk)(x - xk)/(xk+1 - xk), 1 ≤ k ≤ n - 1.

68 Polynomial Interpolation. For the data set (xk, yk), k = 1, ..., n, we find a polynomial of degree (n - 1), which has n constants: f(x) = a1 + a2 x + ... + an x^(n-1). The n interpolation constraints f(xk) = yk give n linear equations for the coefficients. This is not feasible for large data sets, since the condition number of the resulting system increases rapidly with n.

69 Lagrange Interpolating Polynomial. The Lagrange basis polynomial Qk(x) is constructed as Qk(x) = Π(j=1 to n, j≠k) (x - xj) / (xk - xj).

70 Each Qk(x) is a polynomial of degree (n - 1), with Qk(xj) = 1 for j = k and Qk(xj) = 0 for j ≠ k. The polynomial curve that passes through the data set (xk, yk), k = 1, 2, ..., n is f(x) = y1 Q1(x) + y2 Q2(x) + ... + yn Qn(x). The polynomial is written down directly, without having to solve a system of equations.

71 Lagrange interpolation (Pseudo Code)
Choose x, the point where the function value is required
y = 0
Do for i = 1 to n
  product = y(i)
  Do for j = 1 to n
    if (i ≠ j) product = product (x - x(j)) / (x(i) - x(j))
    end if
  end Do
  y = y + product
End Do
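The same loop in Python; the sample data (points on y = x²) are illustrative, not from the slides:

    def lagrange(xs, ys, x):
        """Evaluate the Lagrange interpolating polynomial at x."""
        n = len(xs)
        y = 0.0
        for i in range(n):
            product = ys[i]                    # start from the data value y_i
            for j in range(n):
                if i != j:
                    product *= (x - xs[j]) / (xs[i] - xs[j])
            y += product
        return y

    print(lagrange([1, 2, 3], [1, 4, 9], 2.5))   # 6.25, since the data lie on y = x^2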

72 Least Squares Fit. If the number of samples is large, or if the dependent variable contains measurement noise, it is often better to find a function that minimizes an error criterion such as E = Σ(k=1 to n) [f(xk) - yk]². A function that minimizes E is called the least squares fit.

73 Straight Line Fit. To fit a straight line through the n points (x1, y1), (x2, y2), ..., (xn, yn), assume f(x) = a1 + a2 x. The error is E = Σ(k=1 to n) (a1 + a2 xk - yk)². Find a1, a2 which minimize E.

74 Setting ∂E/∂a1 = 0 and ∂E/∂a2 = 0 gives the normal equations:
n a1 + (Σ xk) a2 = Σ yk
(Σ xk) a1 + (Σ xk²) a2 = Σ xk yk

75 Straight Line Fit (an example). Fit a straight line through the five points (0, 2.10), (1, 2.83), (2, 1.10), (3, 3.20), (4, 3.90). Here a11 = n = 5 and a21 = a12 = Σ xk = 10. Solving the normal equations gives a1 = 1.84, a2 = 0.395, so f(x) = 1.84 + 0.395 x.
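A Python sketch of the straight-line fit via the normal equations (the function name is illustrative; rounding may make the computed coefficients differ slightly in the last digit from the slide's quoted values):

    def line_fit(xs, ys):
        """Least-squares line f(x) = a1 + a2*x from the normal equations."""
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ys))
        a2 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
        a1 = (sy - a2 * sx) / n                          # intercept
        return a1, a2

    print(line_fit([0, 1, 2, 3, 4], [2.10, 2.83, 1.10, 3.20, 3.90]))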

76 Data Representation. Integers: fixed-point numbers. Decimal system (base 10) uses 0, 1, 2, ..., 9: (396)10 = (6 × 10⁰) + (9 × 10¹) + (3 × 10²). Binary system (base 2) uses 0 and 1: (11001)2 = (1 × 2⁰) + (0 × 2¹) + (0 × 2²) + (1 × 2³) + (1 × 2⁴) = (25)10.

77 Decimal to Binary Conversion. Convert (39)10 to binary form (base 2) by dividing repeatedly by 2 and recording the remainders:
39 / 2 = 19, remainder 1
19 / 2 = 9, remainder 1
9 / 2 = 4, remainder 1
4 / 2 = 2, remainder 0
2 / 2 = 1, remainder 0
1 / 2 = 0, remainder 1
Put the remainders in reverse order: (100111)2 = (1 × 2⁰) + (1 × 2¹) + (1 × 2²) + (0 × 2³) + (0 × 2⁴) + (1 × 2⁵) = (39)10.
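The repeated-division procedure in Python, as a small illustrative sketch:

    def to_binary(n):
        """Repeated division by 2; remainders read in reverse order."""
        bits = ""
        while n > 0:
            bits = str(n % 2) + bits   # each remainder becomes the next higher bit
            n //= 2
        return bits or "0"

    print(to_binary(39))               # '100111'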

78 Largest number that can be stored in m digits:
base 10: (999...9) = 10^m - 1
base 2: (111...1) = 2^m - 1
For m = 3: (999)10 = 10³ - 1 and (111)2 = 2³ - 1 = 7.
Limitation: memory cells consist of multiples of 8 bits (1 byte), each position containing 1 binary digit.

79 Common cell lengths for integers: k = 16 bits, k = 32 bits. Sign-Magnitude Notation: the first bit is used for the sign (0 for a non-negative number, 1 for a negative number) and the remaining bits store the binary magnitude of the number. Limit of a 16-bit cell: (32767)10 = 2¹⁵ - 1. Limit of a 32-bit cell: (2147483647)10 = 2³¹ - 1.

80 Two’s Compliment notation Definition : The two’s complement of a negative integer I in a k - bit cell : Two’s Compliment of I = 2 k + I (Eg) : Two’s Complement of (-3) 10 in a 3 - bit cell = ( ) 10 = (5) 10 = (101) 2 (-3) 10 will be stored as 101

81 Storage scheme for storing an integer I in a k-bit cell in two's complement notation. Stored value C = I for I ≥ 0 (first bit = 0), and C = 2^k + I for I < 0.

82 The Two’s Compliment notation admits one more negative number than the sign - magnitude notation.

83 Example: take a 2-bit cell (k = 2). Range in sign-magnitude notation: 1 = 01 and -1 = 11, so the range is -1 to +1. Range in two's complement notation: two's complement of -1 = 2² - 1 = (3)10 = (11)2; two's complement of -2 = 2² - 2 = (2)10 = (10)2; -3 cannot be stored, so the range is -2 to +1 — one more negative number than sign-magnitude.

84 Floating Point Numbers: integer part + fractional part. Decimal system (base 10): a fractional part such as (0.7846)10. Binary system (base 2): fractional parts are written in negative powers of 2.

85 Binary Fraction → Decimal Fraction. (10.11)2: integer part (10)2 = 2; fractional part (0.11)2 = 1 × 2⁻¹ + 1 × 2⁻² = 0.75. Decimal value: (2.75)10.

86 Decimal Fraction → Binary Fraction. Convert (0.9)10 to a binary fraction by repeatedly multiplying the fractional part by 2 and recording the integer part:
0.9 × 2 = 1.8 → integer part 1
0.8 × 2 = 1.6 → integer part 1
0.6 × 2 = 1.2 → integer part 1
0.2 × 2 = 0.4 → integer part 0
0.4 × 2 = 0.8 → integer part 0
0.8 × 2 = 1.6 → repetition
(0.9)10 = (0.111001100 1100 ...)2, with the block 1100 repeating.
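The multiply-by-2 procedure in Python, as an illustrative sketch (only a fixed number of digits is produced, since the expansion repeats forever):

    def frac_to_binary(f, digits=12):
        """Repeated multiplication by 2; integer parts become the bits."""
        bits = "0."
        for _ in range(digits):
            f *= 2
            bits += str(int(f))    # the integer part (0 or 1) is the next bit
            f -= int(f)            # keep only the fractional part
        return bits

    print(frac_to_binary(0.9))     # '0.111001100110' (pattern repeats)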

87 Scientific Notation. Decimal: the number is written as a mantissa times a power of 10, e.g. 9,700,000,000 = 9.7 × 10⁹. Binary: (10.01)2 = (1.001)2 × 2¹ and (0.110)2 = (1.10)2 × 2⁻¹.

88 The computer stores a binary approximation to x: x ≈ ±q × 2ⁿ, where q is the mantissa and n the exponent. Example: (-39.9)10 = -(100111.1110011001100...)2 = -(1.001111110011001100...)2 × (2⁵)10.

89 Decimal value of the stored number. In a 32-bit cell, the first bit holds the sign, the next 8 bits the exponent, and 23 bits the mantissa. Because the repeating binary fraction is cut off at 23 mantissa bits, the value actually stored for (-39.9)10 is approximately -39.900001525878906, not -39.9 exactly.

90 Round-off errors can be reduced by efficient programming practice. # The number of operations (multiplications and additions) must be kept to a minimum (complexity theory).

91 An Example of Efficient Programming. Problem: evaluate the polynomial P(x) = a4 x⁴ + a3 x³ + a2 x² + a1 x + a0 at a given x. Evaluating each term separately requires 13 multiplications and 4 additions. A bad program!

92 An Efficient Method (Horner's Method). P(x) = a4 x⁴ + a3 x³ + a2 x² + a1 x + a0 = a0 + x(a1 + x(a2 + x(a3 + x a4))). Requires only 4 multiplications and 4 additions. Pseudo code for an nth-degree polynomial:
Input a's, n, x
p = 0
For i = n, n-1, ..., 0
  p = x p + a(i)
End
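Horner's scheme in Python; the example polynomial 3x³ - 2x + 1 is illustrative:

    def horner(coeffs, x):
        """Evaluate a_n x^n + ... + a_1 x + a_0 with n mults and n adds.
        coeffs are given highest degree first: [a_n, ..., a_1, a_0]."""
        p = 0.0
        for a in coeffs:
            p = p * x + a          # one multiplication and one addition per term
        return p

    print(horner([3, 0, -2, 1], 2.0))   # 3x^3 - 2x + 1 at x = 2 -> 21.0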

93 Numerical Integration. When do we need numerical integration? When the antiderivative of f(x) is difficult to find, or when f(x) is available only in tabular form.

94 Trapezoidal Rule. With n equal subintervals of width h = (b - a)/n and fk = f(a + kh): ∫[a,b] f(x) dx ≈ (h/2)[f0 + 2f1 + 2f2 + ... + 2f(n-1) + fn].

95 Simpson’s rule

96 EULER’S METHOD- Pseudocode

97 Euler’s Method-Pseudo code

98 A Second-Order Method: the Taylor Series Method. Expanding y(x + h) in a Taylor series and keeping terms through h²: y(k+1) = yk + h y'k + (h²/2) y''k, where y' = f(x, y) and y'' is obtained by differentiating f.

99 Taylor’s method

100 Taylor Series Method: An example

101 Taylor Series solution at x = 0.1

102 Finite Difference Method for Boundary Value Problems

