
E. T. S. I. Caminos, Canales y Puertos
Engineering Computation, Part 3

Learning Objectives for Lecture
1. Motivate the study of systems of equations, particularly systems of linear equations.
2. Review the steps of Gaussian elimination.
3. Examine how roundoff error can enter and be magnified in Gaussian elimination.
4. Introduce pivoting and scaling as defenses against roundoff.
5. Consider what an engineer can do to generate well-formulated problems.

Systems of Equations
In Part 2 we tried to determine the value x satisfying f(x) = 0. In this part we try to obtain the values x1, x2, ..., xn satisfying a system of equations. These systems can be linear or nonlinear, but in this part we deal with linear systems:

Systems of Equations
where the aij and bi are constant coefficients and n is the number of equations. Many fundamental engineering equations are based on conservation laws. In mathematical terms, these principles lead to balance or continuity equations relating the system behavior to the amount of the quantity being modelled and the external stimuli acting on the system.

Systems of Equations
Matrices are rectangular sets of elements represented by a single symbol. If the set is horizontal it is called a row; if it is vertical, it is called a column. (The figure illustrates row 2, column 3, a row vector, and a column vector.)

Systems of Equations
There are some special types of matrices: the symmetric matrix, the identity matrix, the diagonal matrix, and the upper triangular matrix.

Systems of Equations
Banded matrix: all elements are null except those in a band centered on the main diagonal. The matrix shown has a band width of 3 and is called tridiagonal. (The figure also indicates the half band width and a lower triangular matrix.)

Systems of Equations
Linear Algebraic Equations:
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn
where all the aij's and bi's are constants. In matrix form the dimensions are (n x n)(n x 1) = (n x 1), or simply [A]{x} = {b}.

Systems of Equations
Matrix representation of a system: the matrix product and the resulting dimensions are illustrated in the figure.

Systems of Equations
Graphical solution: each equation of the system defines a hyperplane (a straight line, a plane, etc.). The solution of the system is the intersection of these hyperplanes.
Compatible and determined system: the vectors are linearly independent, the determinant of A is non-null, and the solution is unique.

Systems of Equations
Incompatible system: linearly dependent vectors, null determinant of A; there is no solution.
Compatible but undetermined system: linearly dependent vectors, null determinant of A; there exist infinitely many solutions.

Systems of Equations
Compatible and determined system with linearly independent vectors and a determinant of A that is nonzero but close to zero: a solution exists, but it is difficult to find precisely. This is an ill-conditioned system, leading to numerical errors.

Gauss elimination
Naive Gauss elimination method: the Gauss method has two phases, forward elimination and back substitution. In the first, the system is reduced to an upper triangular system. First, the unknown x1 is eliminated: the first (pivot) row is multiplied by -a21/a11 and added to the second row, i.e., a multiple of the pivot equation is subtracted from the row below it. The same is done with all other successive rows (n-1 times) until only the first equation contains the unknown x1.

Gauss elimination
This operation is repeated with each variable xi until an upper triangular matrix is obtained. Next, the system is solved by back substitution. The number of operations (FLOPs) used in the Gauss method is tallied pass by pass (pass 1, pass 2, ...).

Gauss elimination
1. Forward Elimination (row manipulation):
a. Form the augmented matrix [A|b].
b. By elementary row manipulations, reduce [A|b] to [U|b'], where U is an upper triangular matrix:
DO i = 1 to n-1
  DO k = i+1 to n
    Row(k) = Row(k) - (a_ki / a_ii) * Row(i)
  ENDDO
ENDDO

Gauss elimination
2. Back Substitution: solve the upper triangular system [U]{x} = {b'}:
x_n = b'_n / u_nn
DO i = n-1 to 1 by (-1)
  x_i = (b'_i - SUM of u_ij * x_j for j = i+1 to n) / u_ii
ENDDO
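The two phases above can be sketched in Python; this is a minimal naive version with no pivoting, and the function name is illustrative:

```python
def naive_gauss(A, b):
    """Solve A x = b by naive Gauss elimination (no pivoting)."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # Forward elimination: reduce A to upper triangular form.
    for i in range(n - 1):
        for k in range(i + 1, n):
            r = A[k][i] / A[i][i]          # assumes a nonzero pivot
            for j in range(i, n):
                A[k][j] -= r * A[i][j]
            b[k] -= r * b[i]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

The division r = A[k][i]/A[i][i] is exactly the multiplier -a_ki/a_ii of the pivot equation discussed above; it fails when a pivot is zero, which motivates pivoting later.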

Gauss elimination (example)
Consider the system of equations shown. To 2 significant figures, the exact solution is given. We will use 2-decimal-digit arithmetic with rounding.

Gauss elimination (example)
Start with the augmented matrix. Multiply the first row by -1/50 and add it to the second row; multiply the first row by -2/50 and add it to the third row. Then multiply the second row by -6/40 and add it to the third row.

Gauss elimination (example)
Now backsolve. The three computed unknowns, versus the exact values, have relative errors of et = 2.2%, 2.5%, and 0%.

Gauss elimination (example)
Consider an alternative solution, interchanging rows. After forward elimination, we obtain the system shown. Now backsolve:
x3 (vs exact, et = 4.4%)
x2 (vs exact, et = 50%)
x1 (vs exact, et = 100%)
Apparently, the order of the equations matters!

Gauss elimination (example)
WHAT HAPPENED? When we used the equation 50x1 + ... = 1 to solve for x1, there was little change in the other equations. When we used the equation 2x1 + ... = 3 to solve for x1, it made BIG changes in the other equations: some coefficients of the other equations were lost. The second equation has little to do with x1; it has mainly to do with x3. As a result we obtained LARGE numbers in the table, significant roundoff error occurred, and information was lost. Things didn't go well! If the scaling factors |a_ji / a_ii| are <= 1, the effect of roundoff errors is diminished.

Gauss elimination (example)
Effect of diagonal dominance: as a first approximation, the roots are xi ~ bi / aii. Consider the previous examples.

Gauss elimination (example)
Goals:
1. Best accuracy (i.e. minimize error)
2. Parsimony (i.e. minimize effort)
Possible problems:
A. A zero diagonal term causes division by zero.
B. Many floating point operations (flops) cause numerical precision problems and propagation of errors.
C. The system may be ill-conditioned: det[A] ~ 0.
D. No solution or an infinite number of solutions: det[A] = 0.
Possible remedies:
A. Carry more significant figures (double precision).
B. Pivot when the diagonal is close to zero.
C. Scale to reduce round-off error.

Gauss elimination (pivoting)
PIVOTING
A. Row pivoting (partial pivoting): at each step i, find max_k |a_ki| for k = i, i+1, i+2, ..., n and move the corresponding row to the pivot position. This
(i) avoids a zero a_ii,
(ii) keeps numbers small and minimizes round-off,
(iii) uses an equation with large |a_ki| to find x_i, maintaining diagonal dominance.
Row pivoting does not affect the order of the variables, and it is included in any good Gaussian elimination routine.

Gauss elimination (pivoting)
B. Column pivoting: reorder the remaining variables x_j, j = i, ..., n, to get the largest |a_ji|. Column pivoting changes the order of the unknowns x_i and thus complicates the algorithm; it is not usually done.
C. Complete or full pivoting: perform both row pivoting and column pivoting. (If [A] is symmetric, full pivoting is needed to preserve symmetry.)
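Partial (row) pivoting amounts to one extra row swap before each elimination step; a minimal Python sketch (the function name is illustrative):

```python
def gauss_partial_pivot(A, b):
    """Gauss elimination with partial (row) pivoting, then back substitution."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(n - 1):
        # Pivot: pick the row k >= i with the largest |A[k][i]| and swap it up.
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        if p != i:
            A[i], A[p] = A[p], A[i]
            b[i], b[p] = b[p], b[i]
        for k in range(i + 1, n):
            r = A[k][i] / A[i][i]   # |r| <= 1 thanks to the pivot choice
            for j in range(i, n):
                A[k][j] -= r * A[i][j]
            b[k] -= r * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Swapping whole rows (of both A and b) leaves the solution unchanged but keeps every multiplier |r| <= 1, which is exactly the scaling-factor condition noted earlier.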

Gauss elimination (pivoting)
How to fool pivoting: multiply the third equation by 100 and then perform pivoting. Forward elimination (with 2-digit arithmetic) then yields the system shown. Back substitution yields:
x3 (vs exact, et = 4.4%)
x2 (vs exact, et = 50.0%)
x1 (vs exact, et = 100%)
The order of the rows is still poor!

Gauss elimination (scaling)
SCALING
A. Express all equations (and variables) in comparable units so that all elements of [A] are about the same size.
B. If that fails, and max_j |a_ij| varies widely across the rows, divide each row i by max_j |a_ij|. This makes the largest coefficient of each equation equal to 1, and the largest element of [A] equal to 1 or -1.
NOTE: routines generally do not scale automatically; scaling can cause round-off error too!
SOLUTIONS: don't actually scale, but use hypothetical scaling factors to determine what pivoting is necessary; or scale only by powers of 2, which requires no roundoff or division.

Gauss elimination (scaling)
How to fool scaling: a poor choice of units can undermine the value of scaling. Begin with our original example; if the units of x1 were expressed in µg instead of mg, the matrix might read as shown. Scaling then yields the second matrix. Which equation is used to determine x1? Why bother to scale?

Gauss elimination (operation counting)
OPERATION COUNTING
In numerical scientific calculations, the number of multiplies and divides often determines CPU time; it represents the numerical effort. One floating point multiply or divide (plus any associated adds or subtracts) is called a FLOP (FLoating point OPeration); the adds and subtracts use little time compared to the multiplies and divides. Examples: a*x + b, a/x - b.

Gauss elimination (operation counting)
Useful identities in counting FLOPs:
sum_{i=1..n} 1 = n
sum_{i=1..n} i = n(n+1)/2 = n^2/2 + O(n)
sum_{i=1..n} i^2 = n(n+1)(2n+1)/6 = n^3/3 + O(n^2)
O(m^n) means that there are terms of order m^n and lower.

Gauss elimination (operation counting)
Simple example of operation counting:
DO i = 1 to n
  Y(i) = X(i)/i - 1
ENDDO
X(i) and Y(i) are arrays whose values change as i changes. In each iteration, X(i)/i - 1 represents one FLOP, because it requires one division (and one subtraction). The DO loop extends over i from 1 to n, so the total is n FLOPs.

Gauss elimination (operation counting)
Another example of operation counting:
DO i = 1 to n
  Y(i) = X(i)*X(i) + 1
  DO j = i to n
    Z(j) = [Y(j)/X(i)]*Y(j) + X(i)
  ENDDO
ENDDO
With nested loops, always start from the innermost loop. [Y(j)/X(i)]*Y(j) + X(i) represents 2 FLOPs.

Gauss elimination (operation counting)
For the outer i-loop, X(i)*X(i) + 1 represents 1 FLOP, so the total count is
sum_{i=1..n} [1 + 2(n - i + 1)] = 3n + 2n^2 - n^2 - n = n^2 + 2n = n^2 + O(n)
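A quick empirical check of this count, tallying each multiply/divide per the FLOP definition above (the counter function is illustrative):

```python
def count_flops(n):
    """Count FLOPs in the doubly nested loop above."""
    flops = 0
    for i in range(1, n + 1):
        flops += 1                  # Y(i) = X(i)*X(i) + 1 : one multiply
        for _j in range(i, n + 1):
            flops += 2              # one divide and one multiply per Z(j)
    return flops

# For any n, the count equals n**2 + 2*n.
```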

Gauss elimination (operation counting)
Operation counting for Gaussian elimination. Forward elimination:
DO k = 1 to n-1
  DO i = k+1 to n
    r = A(i,k)/A(k,k)
    DO j = k+1 to n
      A(i,j) = A(i,j) - r*A(k,j)
    ENDDO
    B(i) = B(i) - r*B(k)
  ENDDO
ENDDO

Gauss elimination (operation counting)
Operation counting for Gaussian elimination. Back substitution:
X(n) = B(n)/A(n,n)
DO i = n-1 to 1 by -1
  SUM = 0
  DO j = i+1 to n
    SUM = SUM + A(i,j)*X(j)
  ENDDO
  X(i) = [B(i) - SUM]/A(i,i)
ENDDO

Gauss elimination (operation counting)
Operation counting for Gaussian elimination. Forward elimination. Inner j-loop: n - k FLOPs; with the computation of r and the update of B(i), each i-iteration costs n - k + 2 FLOPs. Second loop (over i):
(n - k)(n - k + 2) = (n^2 + 2n) - 2(n + 1)k + k^2

Gauss elimination (operation counting)
Operation counting for Gaussian elimination. Forward elimination (cont'd). Outer loop:
sum_{k=1..n-1} [(n^2 + 2n) - 2(n + 1)k + k^2] = n^3/3 + O(n^2)

Gauss elimination (operation counting)
Operation counting for Gaussian elimination. Back substitution. Inner loop (over j): n - i FLOPs. Outer loop (over i), plus the initial division:
1 + sum_{i=1..n-1} (n - i + 1) = n(n + 1)/2 = n^2/2 + O(n)

Gauss elimination (operation counting)
Total flops = forward elimination + back substitution
= n^3/3 + O(n^2) + n^2/2 + O(n) ~ n^3/3 + O(n^2)
To convert (A, b) to (U, b') requires n^3/3 flops, plus terms of order n^2 and smaller. To back solve requires 1 + 2 + ... + n = n(n+1)/2 flops. Grand total: the entire effort requires n^3/3 + O(n^2) flops altogether.

Gauss-Jordan Elimination
Diagonalization by both forward and backward elimination in each column: perform elimination both backwards and forwards until the matrix is diagonal. The operation count for Gauss-Jordan is n^3/2 + O(n^2), slower than Gauss elimination.

Gauss-Jordan Elimination
Example (two-digit arithmetic):
x1 (vs exact, et = 6.3%)
x2 (vs exact, et = 0%)
x3 (vs exact, et = 2.2%)

Gauss-Jordan Matrix Inversion
The solution of [A]{x} = {b} is {x} = [A]^-1 {b}, where [A]^-1 is the inverse matrix of [A]. Consider [A][A]^-1 = [I]:
1) Create the augmented matrix [A | I].
2) Apply Gauss-Jordan elimination to obtain [I | A^-1].
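The two steps can be sketched in Python; this is a minimal Gauss-Jordan inversion without pivoting, assuming nonzero pivots (the function name is illustrative):

```python
def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I]."""
    n = len(A)
    # Step 1: build [A | I].
    M = [A[i][:] + [1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    # Step 2: eliminate both above and below each pivot (full diagonalization).
    for i in range(n):
        p = M[i][i]                      # assumes a nonzero pivot
        M[i] = [v / p for v in M[i]]     # normalize the pivot row
        for k in range(n):
            if k != i:
                r = M[k][i]
                M[k] = [vk - r * vi for vk, vi in zip(M[k], M[i])]
    return [row[n:] for row in M]        # right half is now A^-1
```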

Gauss-Jordan Matrix Inversion
Gauss-Jordan matrix inversion (with 2-digit arithmetic) yields the matrix inverse [A^-1] shown.

Gauss-Jordan Matrix Inversion
CHECK: [A][A]^-1 = [I], and [A]^-1 {b} = {x} reproduces the Gaussian elimination solution.

LU decomposition
The LU decomposition is a method that uses the elimination techniques to transform the matrix A into a product of triangular matrices. This is especially useful for solving systems with different vectors b, because the same decomposition of A can be used to evaluate all cases efficiently by forward and backward substitution.

LU decomposition
Decomposition: the initial system is split into two transformed systems, solved in turn by forward substitution and backward substitution.

LU decomposition
LU decomposition is closely related to the Gauss method, because the same upper triangular matrix U is sought; thus only the lower triangular matrix L is still needed. Surprisingly, the Gauss elimination procedure already produces this matrix L, though one may not be aware of the fact: the factors we use to get zeros below the main diagonal (the multipliers that are subtracted) are the elements of L.

LU decomposition (continued example)

LU decomposition (complexity)
Basic approach: consider [A]{x} = {b}.
a) Gauss-type "decomposition" of [A] into [L][U]: n^3/3 flops. [A]{x} = {b} becomes [L][U]{x} = {b}; let [U]{x} = {d}.
b) First solve [L]{d} = {b} for {d} by forward substitution: n^2/2 flops.
c) Then solve [U]{x} = {d} for {x} by back substitution: n^2/2 flops.
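Steps a-c can be sketched in Python; this is a minimal Doolittle-style factorization (1's on the diagonal of L, no pivoting; function names are illustrative):

```python
def lu_decompose(A):
    """Doolittle LU: return (L, U) with unit diagonal on L, so A = L U."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            r = U[i][k] / U[k][k]   # Gauss multiplier = element of L
            L[i][k] = r
            for j in range(k, n):
                U[i][j] -= r * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve L d = b (forward substitution), then U x = d (back substitution)."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / U[i][i]
    return x
```

Once A is factored (the n^3/3 part), each additional right-hand side b costs only the two triangular solves, about n^2 flops.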

LU Decomposition: notation
[A] = [L] + [U0]
[A] = [L0] + [U]
[A] = [L0] + [U0] + [D]
[A] = [L1][U]
[A] = [L][U1]

LU decomposition
LU decomposition variations:
Doolittle: [L1][U], general [A]
Crout: [L][U1], general [A]
Cholesky: [L][L]^T, positive definite symmetric [A]
Cholesky works only for positive definite symmetric matrices. Doolittle versus Crout: Doolittle just stores the Gaussian elimination factors, whereas Crout uses a different series of calculations (see C&C). Both decompose [A] into [L] and [U] in n^3/3 FLOPs; they differ in the location of the diagonal of 1's. Crout uses each element of [A] only once, so the same array can be used for [A] and [L\U], saving computer memory.

LU decomposition
Matrix inversion. Definition of a matrix inverse: [A][A]^-1 = [I], so [A]{x} = {b} gives [A]^-1 {b} = {x}.
First rule: don't do it (it is a numerically unstable calculation).

LU decomposition
Matrix inversion: if you really must --
1) Gaussian elimination: [A | I] -> [U | B'] ==> A^-1
2) Gauss-Jordan: [A | I] ==> [I | A^-1]
Inversion will take n^3 + O(n^2) flops if one is careful about where the zeros are (taking advantage of the sparseness of the matrix). Naive applications (without optimization) take 4n^3/3 + O(n^2) flops: LU decomposition requires n^3/3 + O(n^2) flops, and back solving twice with the n unit vectors e_i costs 2n(n^2/2) = n^3 flops. Altogether: n^3/3 + n^3 = 4n^3/3 + O(n^2) flops.

FLOP Counts for Linear Algebraic Equations
Summary of FLOP counts for [A]{x} = {b}:
Gaussian elimination (1 r.h.s.): n^3/3 + O(n^2)
Gauss-Jordan (1 r.h.s.): n^3/2 + O(n^2)
LU decomposition: n^3/3 + O(n^2)
Each extra LU right-hand side: n^2
Cholesky decomposition (symmetric A): n^3/6 + O(n^2)
Inversion (naive Gauss-Jordan): 4n^3/3 + O(n^2)
Inversion (optimal Gauss-Jordan): n^3 + O(n^2)
Solution by Cramer's rule: n!

Errors in Solutions to Systems of Linear Equations
Objective: solve [A]{x} = {b}. Problem: round-off errors may accumulate and even be exaggerated by the solution procedure; errors are often exaggerated if the system is ill-conditioned. Possible remedies to minimize this effect:
1. Partial or complete pivoting.
2. Work in double precision.
3. Transform the problem into an equivalent system of linear equations by scaling or equilibrating.

Errors in Solutions to Systems of Linear Equations
Ill-conditioning: a system of equations is singular if det[A] = 0. If a system of equations is nearly singular, it is ill-conditioned. Ill-conditioned systems are extremely sensitive to small changes in the coefficients of [A] and {b}; they are inherently sensitive to round-off errors. Question: can we develop a means for detecting these situations?

Errors in Solutions to Systems of Linear Equations
Ill-conditioning of [A]{x} = {b}: consider the graphical interpretation for a 2-equation system. We can plot the two linear equations
a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2
as straight lines on a graph of x2 versus x1 (with intercepts b1/a11, b1/a12, b2/a21, and b2/a22).

Errors in Solutions to Systems of Linear Equations
In the well-conditioned case the two lines intersect at a clear angle and the uncertainty in x2 is small; in the ill-conditioned case the lines are nearly parallel, and a small uncertainty in the data produces a large uncertainty in x2.

Errors in Solutions to Systems of Linear Equations
Ways to detect ill-conditioning:
1. Calculate {x}, make a small change in [A] or {b}, and determine the change in the solution {x}.
2. After forward elimination, examine the diagonal of the upper triangular matrix. If a_ii << a_jj, i.e. there is a relatively small value on the diagonal, this may indicate ill-conditioning.
3. Compare {x} computed in single precision with {x} computed in double precision.
4. Estimate the "condition number" of A.
Substituting the calculated {x} into [A]{x} and checking the result against {b} will not always work!

Errors in Solutions to Systems of Linear Equations
Ways to detect ill-conditioning: if det[A] = 0 the matrix is singular, so the determinant may be an indicator of conditioning. But if det[A] is near zero, is the matrix ill-conditioned? Consider the example shown: after scaling, the determinant changes completely. Hence det[A] will provide an estimate of conditioning only if it is normalized by the "magnitude" of the matrix.

Norms
Norms and the Condition Number. We need a quantitative measure of ill-conditioning; this measure will then directly reflect the possible magnitude of roundoff effects. To do this we need to understand norms. A norm is a scalar measure of the magnitude of a matrix or vector ("how big" a vector is), not to be confused with the dimension of the matrix.

Vector Norms
A vector norm is a scalar measure of the magnitude of a vector. Here are some vector norms for n x 1 vectors {x} with typical elements x_i. Each is a special case of the general p-norm, defined by
||x||_p = ( sum_i |x_i|^p )^(1/p)
1. Sum of the magnitudes (p = 1): ||x||_1 = sum_i |x_i|
2. Magnitude of the largest element (infinity norm): ||x||_inf = max_i |x_i|
3. Length or Euclidean norm (p = 2): ||x||_2 = ( sum_i x_i^2 )^(1/2)

Norms
Required properties of a vector norm:
1. ||x|| >= 0, and ||x|| = 0 if and only if {x} = 0
2. ||kx|| = |k| ||x|| for any scalar k
3. ||x + y|| <= ||x|| + ||y|| (triangle inequality)
For the Euclidean vector norm we also have
4. |x.y| <= ||x|| ||y||, because the dot (inner) product satisfies |x.y| = ||x|| ||y|| |cos(theta)| <= ||x|| ||y||.
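The three vector norms above are one line each in Python (function names are illustrative):

```python
def norm_1(x):
    """Sum of the magnitudes (p = 1)."""
    return sum(abs(v) for v in x)

def norm_inf(x):
    """Magnitude of the largest element (infinity norm)."""
    return max(abs(v) for v in x)

def norm_2(x):
    """Length or Euclidean norm (p = 2)."""
    return sum(v * v for v in x) ** 0.5
```

For x = (3, -4, 0) these give 7, 4, and 5 respectively, illustrating that ||x||_inf <= ||x||_2 <= ||x||_1.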

Matrix Norms
A matrix norm is a scalar measure of the magnitude of a matrix. Matrix norms corresponding to the vector norms above are defined by the general relationship ||A|| = max over {x} != 0 of ||Ax|| / ||x||:
1. Largest column sum (column-sum norm): ||A||_1 = max_j sum_i |a_ij|
2. Largest row sum (row-sum norm, infinity norm): ||A||_inf = max_i sum_j |a_ij|

Matrix norms
3. Spectral norm: ||A||_2 = (mu_max)^(1/2), where mu_max is the largest eigenvalue of [A]^T [A]. If [A] is symmetric, (mu_max)^(1/2) = |lambda|_max, the largest eigenvalue of [A] in magnitude. (Note: this is not the same as the seldom-used Euclidean or Frobenius norm, ||A||_F = (sum_i sum_j a_ij^2)^(1/2).)

Matrix norms
For matrix norms to be useful we require that
0. ||Ax|| <= ||A|| ||x||
General properties of any matrix norm:
1. ||A|| >= 0, and ||A|| = 0 iff [A] = 0
2. ||kA|| = |k| ||A|| for any scalar k
3. ||A + B|| <= ||A|| + ||B|| (triangle inequality)
4. ||AB|| <= ||A|| ||B||
Why are norms important? Norms permit us to express the accuracy of the solution {x} in terms of ||dx||, and they allow us to bound the magnitude of the product [A]{x} and the associated errors.

Error Analysis
Forward and backward error analysis can estimate the effect of truncation and roundoff errors on the precision of a result. The two approaches are alternative views:
1. Forward (a priori) error analysis tries to trace the accumulation of error through each step of the algorithm, comparing the calculated and exact values at every stage.
2. Backward (a posteriori) error analysis views the final solution as the exact solution to a perturbed problem; one can then consider how different the perturbed problem is from the original problem.
Here we use the condition number of a matrix [A] to specify the amount by which relative errors in [A] and/or {b} due to input, truncation, and rounding can be amplified by the linear system in the computation of {x}.

Error Analysis
Backward error analysis of [A]{x} = {b} for errors in {b}: suppose the coefficients {b} are not precisely represented. What might be the effect on the calculated value {x + dx}?
Lemma: [A]{x} = {b} yields ||A|| ||x|| >= ||b||, i.e. ||x|| >= ||b|| / ||A||.
Now an error in {b} yields a corresponding error in {x}:
[A]{x + dx} = {b + db}
[A]{x} + [A]{dx} = {b} + {db}
Subtracting [A]{x} = {b} yields [A]{dx} = {db}, so {dx} = [A]^-1 {db}.

Error Analysis
Backward error analysis of [A]{x} = {b} for errors in {b} (cont'd). Taking norms, ||dx|| <= ||A^-1|| ||db||, and using the lemma ||x|| >= ||b|| / ||A|| we then have:
||dx|| / ||x|| <= ||A^-1|| ||A|| ||db|| / ||b||
Define the condition number as k = cond[A] = ||A^-1|| ||A|| >= 1. If k ~ 1 or k is small, the system is well-conditioned; if k >> 1, the system is ill-conditioned. Note that
1 = ||I|| = ||A^-1 A|| <= ||A^-1|| ||A|| = k = cond(A).
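With the row-sum (infinity) norm, cond[A] is easy to compute for a small example; a sketch using a closed-form 2x2 inverse (the helper names and the nearly singular demonstration matrix are illustrative):

```python
def norm_inf_mat(A):
    """Row-sum (infinity) matrix norm."""
    return max(sum(abs(v) for v in row) for row in A)

def inverse_2x2(A):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def cond_inf(A):
    """Condition number k = ||A^-1|| * ||A|| in the infinity norm."""
    return norm_inf_mat(A) * norm_inf_mat(inverse_2x2(A))

well = [[2.0, 1.0], [1.0, 2.0]]       # lines crossing at a clear angle
ill  = [[1.0, 1.0], [1.0, 1.0001]]    # nearly parallel lines
```

For the well-conditioned matrix k is about 3; for the nearly parallel pair k exceeds 10^4, so relative errors in {b} can be amplified by four orders of magnitude.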

Error Analysis
Backward error analysis of [A]{x} = {b} for errors in [A]: if the coefficients in [A] are not precisely represented, what might be the effect on the calculated value {x + dx}?
[A + dA]{x + dx} = {b}
[A]{x} + [A]{dx} + [dA]{x + dx} = {b}
Subtracting [A]{x} = {b} yields [A]{dx} = -[dA]{x + dx}, or {dx} = -[A]^-1 [dA]{x + dx}. Taking norms and multiplying by ||A|| / ||A|| yields:
||dx|| / ||x + dx|| <= cond[A] ||dA|| / ||A||

(Figures: Linfield & Penny, 1999)

Error Analysis
Estimate of loss of significance: consider the possible impact of errors [dA] on the precision of {x}. The error analysis implies that if the relative data error is about 10^-p and the relative solution error is about 10^-s, then, taking the log of both sides of the bound above,
s > p - log10(cond[A])
log10(cond[A]) is the loss in decimal precision; i.e., we start with p decimal figures and end up with s decimal figures. It is not always necessary to find [A]^-1 to estimate k = cond[A]; instead, use an estimate based on iteration with the inverse matrix using the LU decomposition.

Iterative Solution Methods
Impetus for iterative schemes:
1. May be more rapid if the coefficient matrix is "sparse".
2. May be more economical with respect to memory.
3. May also be applied to solve nonlinear systems.
Disadvantages:
1. May not converge, or may converge slowly.
2. Not appropriate for all systems.
The error bounds above apply to solutions obtained by both direct and iterative methods, because they address the specification of [dA] and {db}.

Iterative Solution Methods
Basic mechanics: starting with
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + ... + a3n xn = b3
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn
solve each equation for one variable:
x1 = [b1 - (a12 x2 + a13 x3 + ... + a1n xn)] / a11
x2 = [b2 - (a21 x1 + a23 x3 + ... + a2n xn)] / a22
x3 = [b3 - (a31 x1 + a32 x2 + ... + a3n xn)] / a33
...
xn = [bn - (an1 x1 + an2 x2 + ... + a_n,n-1 x_n-1)] / ann

Iterative Solution Methods
Start with an initial estimate {x}^0. Substitute it into the right-hand side of all the equations and generate a new approximation {x}^1. This is a multivariate one-point iteration: {x}^(j+1) = {g({x}^j)}. Repeat the process until the maximum number of iterations is reached, or until
||x^(j+1) - x^j|| <= delta + epsilon ||x^(j+1)||

Convergence
To solve [A]{x} = {b}, separate [A] into [A] = [Lo] + [D] + [Uo], where
[D] = diagonal (a_ii),
[Lo] = lower triangular with 0's on the diagonal,
[Uo] = upper triangular with 0's on the diagonal.
Rewrite the system:
[A]{x} = ([Lo] + [D] + [Uo]){x} = {b}
[D]{x} + ([Lo] + [Uo]){x} = {b}
Iterate:
[D]{x}^(j+1) = {b} - ([Lo] + [Uo]){x}^j
{x}^(j+1) = [D]^-1 {b} - [D]^-1 ([Lo] + [Uo]){x}^j
The iterations converge if ||[D]^-1 ([Lo] + [Uo])|| < 1 (sufficient if the equations are diagonally dominant).

Iterative Solution Methods: the Jacobi Method
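The [D]^-1 update above is the Jacobi method; a minimal Python sketch (the tolerance and iteration limit are illustrative defaults):

```python
def jacobi(A, b, x0=None, tol=1e-10, max_iter=200):
    """Jacobi iteration: x_i^(j+1) = (b_i - sum_{k != i} a_ik x_k^(j)) / a_ii."""
    n = len(b)
    x = x0[:] if x0 else [0.0] * n
    for _ in range(max_iter):
        # Every component is updated from the OLD vector x: that is Jacobi.
        x_new = [(b[i] - sum(A[i][k] * x[k] for k in range(n) if k != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x
```

Because each component of x_new depends only on the previous iterate, the n updates are independent tasks, which is why Jacobi parallelizes so naturally.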

Iterative Solution Methods: Gauss-Seidel
In most cases, using the newest values on the right-hand side will provide better estimates of the next value. If this is done, we are using the Gauss-Seidel method:
([Lo] + [D]){x}^(j+1) = {b} - [Uo]{x}^j
or, explicitly, each x_i^(j+1) is computed from the already-updated components x_1, ..., x_(i-1) and the old components x_(i+1), ..., x_n.
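A matching Gauss-Seidel sketch differs from Jacobi only in updating x in place, so new values are used as soon as they are available (names are illustrative):

```python
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=200):
    """Gauss-Seidel iteration: update x in place, using the newest values."""
    n = len(b)
    x = x0[:] if x0 else [0.0] * n
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][k] * x[k] for k in range(n) if k != i)
            new = (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new          # reused immediately by the rows below
        if delta < tol:
            break
    return x
```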

Iterative Solution Methods: Gauss-Seidel
If either method is going to converge, Gauss-Seidel will converge faster than Jacobi. Why use Jacobi at all? Because the n equations can be separated into n independent tasks, Jacobi is very well suited to computers with parallel processors.

Convergence of Iterative Solution Methods
Rewrite the given system as [A]{x} = ([B] + [E]){x} = {b}, where [B] is diagonal or triangular, so that we can solve [B]{y} = {g} quickly. Thus
[B]{x}^(j+1) = {b} - [E]{x}^j, which is effectively {x}^(j+1) = [B]^-1 ({b} - [E]{x}^j).
The true solution {x}_c satisfies {x}_c = [B]^-1 ({b} - [E]{x}_c). Subtracting yields
{x}_c - {x}^(j+1) = -[B]^-1 [E] ({x}_c - {x}^j)
so ||{x}_c - {x}^(j+1)|| <= ||[B]^-1 [E]|| ||{x}_c - {x}^j||.
The iterations converge linearly if ||[B]^-1 [E]|| < 1:
for Gauss-Seidel, ||([D] + [Lo])^-1 [Uo]|| < 1;
for Jacobi, ||[D]^-1 ([Lo] + [Uo])|| < 1.

Convergence of Iterative Solution Methods
Iterative methods will not converge for all systems of equations, nor for all possible rearrangements. If the system is diagonally dominant, i.e.
|a_ii| > sum_{j != i} |a_ij|
then the slopes of the iteration functions are all < 1.0 (small slopes) and the iteration converges.

Convergence of Iterative Solution Methods
A sufficient condition for convergence exists: diagonal dominance, |a_ii| > sum_{j != i} |a_ij| for every row i.
Notes:
1. If the above does not hold, the method may still converge.
2. This condition looks similar to the infinity norm of [A].

Improving Rate of Convergence of G-S Iteration
Relaxation schemes:
x_i^(new) = lambda * x_i^(j+1) + (1 - lambda) * x_i^(j), where 0.0 < lambda <= 2.0 (usually the value of lambda is close to 1).
Underrelaxation (0.0 < lambda < 1.0): more weight is placed on the previous value. Often used to make a non-convergent system convergent, or to expedite convergence by damping out oscillations.
Overrelaxation (1.0 < lambda <= 2.0): more weight is placed on the new value. This assumes that the new value is heading in the right direction, and hence pushes the new value closer to the true solution.
The choice of lambda is highly problem-dependent and empirical, so relaxation is usually used only for often-repeated calculations of a particular class.
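Relaxation is one extra blending line inside the Gauss-Seidel sweep; a sketch with an illustrative relaxation factor lam (lambda):

```python
def gauss_seidel_relaxed(A, b, lam=1.0, tol=1e-10, max_iter=500):
    """Gauss-Seidel with relaxation: x_i <- lam*x_gs + (1 - lam)*x_old."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][k] * x[k] for k in range(n) if k != i)
            x_gs = (b[i] - s) / A[i][i]             # plain Gauss-Seidel value
            new = lam * x_gs + (1.0 - lam) * x[i]   # blend with previous value
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            break
    return x
```

With lam = 1.0 this reduces to plain Gauss-Seidel; lam < 1 damps oscillations, lam > 1 extrapolates toward the solution.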

Why Iterative Solutions?
We often need to solve [A]{x} = {b} where n is in the thousands: the description of a building or airframe, or finite-difference approximations to PDEs. Most of A's elements will be zero; a finite-difference approximation to Laplace's equation has only five a_ij != 0 in each row of A.
Direct method (Gaussian elimination): requires n^3/3 flops (say n = 5000; n^3/3 ~ 4 x 10^10 flops) and fills in many of the n^2 - 5n zero elements of A.
Iterative methods (Jacobi or Gauss-Seidel): never store [A] (say n = 5000; a dense [A] would need 4n^2 bytes = 100 MB); only need to compute [A - B]{x} and to solve [B]{x}^(j+1) = {b}.

Why Iterative Solutions?
Effort: suppose [B] is diagonal. Solving [B]{v} = {b} costs n flops; computing [A - B]{x} costs 4n flops; for m iterations, 5mn flops. For n = m = 5000, 5mn = 1.25 x 10^8 flops: at worst O(n^2).
