Slide 1: Engineering Computation, Lecture 7
E. T. S. I. Caminos, Canales y Puertos

Slide 2: Errors in Solutions to Systems of Linear Equations

Objective: solve [A]{x} = {b}.
Problem: round-off errors may accumulate and even be amplified by the solution procedure. Amplification is often severe when the system is ill-conditioned.
Possible remedies to minimize this effect:
1. Partial or complete pivoting
2. Working in double precision
3. Transforming the problem into an equivalent system of linear equations by scaling or equilibration

Slide 3: Ill-conditioning

A system of equations is singular if det(A) = 0. If a system of equations is nearly singular, it is ill-conditioned. Ill-conditioned systems are extremely sensitive to small changes in the coefficients of [A] and {b}, and are therefore inherently sensitive to round-off errors.
Question: can we develop a means of detecting these situations?
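This sensitivity is easy to demonstrate numerically. Below is a minimal sketch (the 2x2 system and the perturbation are hypothetical illustrative values, not taken from the slides) that solves a nearly singular system by Cramer's rule and shows how a tiny change in {b} produces a large change in {x}:

```python
# Solve a 2x2 system [A]{x} = {b} by Cramer's rule.
def solve2x2(a11, a12, a21, a22, b1, b2):
    det = a11 * a22 - a12 * a21      # nearly zero for an ill-conditioned system
    x1 = (b1 * a22 - b2 * a12) / det
    x2 = (a11 * b2 - a21 * b1) / det
    return x1, x2

# Nearly singular system: det(A) = 1e-4
x = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)       # x is about (1, 1)
# Perturb b2 by only 1e-4 ...
x_pert = solve2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0002)  # x jumps to about (0, 2)
```

A change of 1 part in 20000 in {b} has moved the solution by order 1 in both components, exactly the behavior the slide warns about.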

Slide 4: Ill-conditioning of [A]{x} = {b}: graphical interpretation

Consider the graphical interpretation for a 2-equation system. The two linear equations
  a11 x1 + a12 x2 = b1
  a21 x1 + a22 x2 = b2
can be plotted as straight lines on a graph of x1 vs. x2. Their intercepts are b1/a11 and b2/a21 on the x1 axis, and b1/a12 and b2/a22 on the x2 axis; the solution is the intersection of the two lines.
[Figure: the two lines plotted in the (x1, x2) plane.]

Slide 5: Ill-conditioning of [A]{x} = {b}: graphical interpretation (continued)

[Figure: two x1 vs. x2 plots. Well-conditioned: the lines cross at a wide angle, so the uncertainty in x2 is small. Ill-conditioned: the lines are nearly parallel, so a small uncertainty in either line produces a large uncertainty in x2.]

Slide 6: Ways to detect ill-conditioning

1. Calculate {x}, make a small change in [A] or {b}, and determine the resulting change in the solution {x}.
2. After forward elimination, examine the diagonal of the upper triangular matrix. If aii << ajj, i.e., there is a relatively small value on the diagonal, this may indicate ill-conditioning.
3. Compare {x} computed in single precision with {x} computed in double precision.
4. Estimate the "condition number" of [A].
Note: substituting the calculated {x} into [A]{x} and checking the result against {b} will not always work!

Slide 7: Ways to detect ill-conditioning (continued)

If det(A) = 0 the matrix is singular, so the determinant may seem to be an indicator of conditioning. But if det(A) is merely near zero, is the matrix necessarily ill-conditioned? Not by itself: the raw determinant depends on the scale of the coefficients. After scaling the matrix, det(A) does provide an estimate of conditioning; that is, det(A) is useful only when normalized by the "magnitude" of the matrix.
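A sketch of this point (the matrices are illustrative choices, not from the slides): the raw determinant confuses scale with conditioning, while the determinant of the row-scaled matrix does not.

```python
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def scaled_det2(A):
    # Scale each row so its largest-magnitude entry is 1, then take the det.
    rows = [[v / max(abs(e) for e in r) for v in r] for r in A]
    return det2(rows)

well = [[1e-5, 0.0], [0.0, 1e-5]]   # well-conditioned, just small in scale
ill  = [[1.0, 1.0], [1.0, 1.0001]]  # nearly singular

# det2(well) is about 1e-10 (tiny, yet the system is perfectly conditioned)
# det2(ill)  is about 1e-4
# scaled_det2(well) -> 1.0          (well-conditioned)
# scaled_det2(ill)  -> about 1e-4   (genuinely ill-conditioned)
```

The raw determinants would rank `well` as far worse than `ill`; the scaled determinants rank them correctly.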

Slide 8: Norms and the Condition Number

We need a quantitative measure of ill-conditioning; this measure will then directly reflect the possible magnitude of round-off effects. To build it we first need to understand norms.
Norm: a scalar measure of the magnitude of a matrix or vector ("how big" a vector is). Not to be confused with the dimension of a matrix.

Slide 9: Vector Norms

A vector norm is a scalar measure of the magnitude of an n x 1 vector {x} with typical elements xi. Each norm below is an instance of the general p-norm:
  ||x||p = ( sum_i |xi|^p )^(1/p)
1. Sum of the magnitudes (p = 1):  ||x||1 = sum_i |xi|
2. Magnitude of the largest element (infinity norm):  ||x||inf = max_i |xi|
3. Length or Euclidean norm (p = 2):  ||x||2 = ( sum_i xi^2 )^(1/2)
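The three norms above can be sketched in a few lines of pure Python:

```python
def p_norm(x, p):
    """General p-norm: (sum |xi|^p)^(1/p)."""
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

def inf_norm(x):
    """Infinity norm: magnitude of the largest element."""
    return max(abs(v) for v in x)

x = [3.0, -4.0]
# p_norm(x, 1) -> 7.0   (sum of magnitudes)
# p_norm(x, 2) -> 5.0   (Euclidean length)
# inf_norm(x)  -> 4.0   (largest element)
```

Note that the infinity norm is the limit of the p-norm as p grows, which is why it is written ||x||inf.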

Slide 10: Vector Norms (properties)

Required properties of a vector norm:
1. ||x|| >= 0, and ||x|| = 0 if and only if {x} = 0
2. ||kx|| = k ||x||, where k is any positive scalar
3. ||x + y|| <= ||x|| + ||y||  (triangle inequality)
For the Euclidean vector norm we also have
4. ||x.y|| <= ||x|| ||y||, because the dot (inner) product satisfies |x.y| = ||x|| ||y|| |cos(theta)| <= ||x|| ||y||, where theta is the angle between x and y.

Slide 11: Matrix Norms

A matrix norm is a scalar measure of the magnitude of a matrix. Matrix norms corresponding to the vector norms above are defined by the general relationship:
  ||A||p = max over x != 0 of ||Ax||p / ||x||p
1. Largest column sum (column-sum norm):  ||A||1 = max_j sum_i |aij|
2. Largest row sum (row-sum norm, or infinity norm):  ||A||inf = max_i sum_j |aij|

Slide 12: Matrix Norms (continued)

3. Spectral norm: ||A||2 = (mu_max)^(1/2), where mu_max is the largest eigenvalue of [A]^T [A]. If [A] is symmetric, (mu_max)^(1/2) = |lambda_max|, the magnitude of the largest eigenvalue of [A].
Note: this is not the same as the Euclidean or Frobenius norm, which is seldom used:
  ||A||F = ( sum_i sum_j aij^2 )^(1/2)

Slide 13: Matrix Norms (properties)

For matrix norms to be useful we require compatibility with vector norms:
0. ||Ax|| <= ||A|| ||x||
General properties of any matrix norm:
1. ||A|| >= 0, and ||A|| = 0 if and only if [A] = 0
2. ||kA|| = k ||A||, where k is any positive scalar
3. ||A + B|| <= ||A|| + ||B||  (triangle inequality)
4. ||AB|| <= ||A|| ||B||
Why are norms important? Norms permit us to express the accuracy of the solution {x} in terms of ||dx||, and they allow us to bound the magnitude of the product [A]{x} and the associated errors.
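The column-sum and row-sum norms defined above are straightforward to compute; a small sketch:

```python
def col_sum_norm(A):
    """||A||_1: largest column sum of magnitudes."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def row_sum_norm(A):
    """||A||_inf: largest row sum of magnitudes."""
    return max(sum(abs(v) for v in row) for row in A)

A = [[1.0, -2.0], [3.0, 4.0]]
# Columns sum to 4 and 6; rows sum to 3 and 7, so:
# col_sum_norm(A) -> 6.0
# row_sum_norm(A) -> 7.0
```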

Slide 14: Error Analysis

Forward and backward error analysis can estimate the effect of truncation and round-off errors on the precision of a result. The two approaches are alternative views:
1. Forward (a priori) error analysis tries to trace the accumulation of error through each step of the algorithm, comparing the calculated and exact values at every stage.
2. Backward (a posteriori) error analysis views the final solution as the exact solution to a perturbed problem, and asks how different the perturbed problem is from the original one.
Here we use the condition number of a matrix [A] to specify the amount by which relative errors in [A] and/or {b} due to input, truncation, and rounding can be amplified by the linear system in the computation of {x}.

Slide 15: Backward Error Analysis of [A]{x} = {b} for errors in {b}

Suppose the coefficients {b} are not precisely represented. What might be the effect on the calculated value {x + dx}?
Lemma: [A]{x} = {b} yields ||A|| ||x|| >= ||b||, or ||x|| >= ||b|| / ||A||.
Now an error in {b} yields a corresponding error in {x}:
  [A]{x + dx} = {b + db}
  [A]{x} + [A]{dx} = {b} + {db}
Subtracting [A]{x} = {b} yields:
  [A]{dx} = {db}   ==>   {dx} = [A]^(-1) {db}

Slide 16: Backward Error Analysis of [A]{x} = {b} for errors in {b} (continued)

Taking norms of {dx} = [A]^(-1){db} gives ||dx|| <= ||A^(-1)|| ||db||. Using the lemma ||x|| >= ||b|| / ||A||, we then have:
  ||dx|| / ||x|| <= ||A|| ||A^(-1)|| ||db|| / ||b||
Define the condition number as k = cond(A) = ||A^(-1)|| ||A|| >= 1. Indeed,
  1 = ||I|| = ||A^(-1) A|| <= ||A^(-1)|| ||A|| = k = cond(A)
If k is close to 1, the system is well-conditioned; if k >> 1, the system is ill-conditioned.
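For a 2x2 matrix, the condition number in the infinity norm can be sketched directly using the explicit inverse (the test matrices are illustrative, not from the slides):

```python
def row_sum_norm(A):
    """||A||_inf: largest row sum of magnitudes."""
    return max(sum(abs(v) for v in row) for row in A)

def inv2(A):
    """Explicit inverse of a 2x2 matrix."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

def cond_inf(A):
    """k = ||A|| * ||A^-1|| in the infinity (row-sum) norm."""
    return row_sum_norm(A) * row_sum_norm(inv2(A))

k_good = cond_inf([[1.0, 0.0], [0.0, 1.0]])     # identity: k = 1.0
k_bad  = cond_inf([[1.0, 1.0], [1.0, 1.0001]])  # nearly singular: k is about 4 x 10^4
```

The large k for the nearly singular matrix quantifies the sensitivity already seen graphically: relative errors in {b} can be amplified by a factor of about 40000.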

Slide 17: Backward Error Analysis of [A]{x} = {b} for errors in [A]

If the coefficients in [A] are not precisely represented, what might be the effect on the calculated value {x + dx}?
  [A + dA]{x + dx} = {b}
  [A]{x} + [A]{dx} + [dA]{x + dx} = {b}
Subtracting [A]{x} = {b} yields:
  [A]{dx} = -[dA]{x + dx}   or   {dx} = -[A]^(-1) [dA] {x + dx}
Taking norms and multiplying by ||A|| / ||A|| yields:
  ||dx|| / ||x + dx|| <= ||A^(-1)|| ||A|| ||dA|| / ||A|| = cond(A) ||dA|| / ||A||

Slide 18: Estimate of Loss of Significance

Consider the possible impact of errors [dA] on the precision of {x}. The error analysis implies that if ||dA|| / ||A|| is about 10^(-p) (p decimal figures of precision in the data) and ||dx|| / ||x + dx|| is about 10^(-s) (s correct decimal figures in the result), then
  10^(-s) <= cond(A) x 10^(-p)
Or, taking log10 of both sides:
  s >= p - log10( cond(A) )
so log10( cond(A) ) is the loss in decimal precision; i.e., we start with p decimal figures and end up with s decimal figures.
It is not always necessary to find [A]^(-1) to estimate k = cond(A). Instead, use an estimate based on iteration of the inverse matrix using LU decomposition.
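A quick numeric sketch of this rule of thumb (the condition number 4 x 10^4 is a hypothetical value, e.g. for a nearly singular 2x2 system; p = 16 approximates double precision):

```python
import math

p = 16                 # decimal figures carried in double precision (approx.)
k = 4.0e4              # hypothetical condition number of [A]
loss = math.log10(k)   # decimal figures lost: about 4.6
s = p - loss           # figures we can still trust: about 11.4
```

So even with a condition number of tens of thousands, double precision leaves roughly eleven trustworthy decimal figures; the same matrix in single precision (p of about 7) would leave only two or three.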

Slide 19: Iterative Solution Methods

Impetus for iterative schemes:
1. May be more rapid if the coefficient matrix is "sparse"
2. May be more economical with respect to memory
3. May also be applied to solve nonlinear systems
Disadvantages:
1. May not converge, or may converge slowly
2. Not appropriate for all systems
The error bounds above apply to solutions obtained by direct and iterative methods alike, because they address the specification of [dA] and {db}.

Slide 20: Iterative Solution Methods (basic mechanics)

Starting with:
  a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
  a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
  a31 x1 + a32 x2 + a33 x3 + ... + a3n xn = b3
  ...
  an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn
solve each equation for one variable:
  x1 = [b1 - (a12 x2 + a13 x3 + ... + a1n xn)] / a11
  x2 = [b2 - (a21 x1 + a23 x3 + ... + a2n xn)] / a22
  x3 = [b3 - (a31 x1 + a32 x2 + ... + a3n xn)] / a33
  ...
  xn = [bn - (an1 x1 + an2 x2 + ... + a(n,n-1) x(n-1))] / ann

Slide 21: Iterative Solution Methods (iteration)

Start with an initial estimate {x}^0. Substitute it into the right-hand sides of all the equations to generate a new approximation {x}^1. This is a multivariate one-point iteration:
  {x}^(j+1) = {g({x}^j)}
Repeat the process until the maximum number of iterations is reached, or until:
  ||x^(j+1) - x^j|| <= d + e ||x^(j+1)||
(an absolute tolerance d plus a relative tolerance e).

Slide 22: Convergence (matrix form of the iteration)

To solve [A]{x} = {b}, separate [A] into [A] = [Lo] + [D] + [Uo], where
  [D]  = diagonal (aii)
  [Lo] = lower triangular part with 0's on the diagonal
  [Uo] = upper triangular part with 0's on the diagonal
Rewrite the system:
  [A]{x} = ([Lo] + [D] + [Uo]){x} = {b}
  [D]{x} + ([Lo] + [Uo]){x} = {b}
and iterate:
  [D]{x}^(j+1) = {b} - ([Lo] + [Uo]){x}^j
  {x}^(j+1) = [D]^(-1){b} - [D]^(-1)([Lo] + [Uo]){x}^j
The iterations converge if ||[D]^(-1)([Lo] + [Uo])|| < 1 (sufficient if the equations are diagonally dominant).
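The matrix splitting above is equivalent to the elementwise sketch below: a minimal Jacobi iteration on a small diagonally dominant system (the example matrix is an illustrative choice):

```python
def jacobi(A, b, x0, iters=50):
    """Jacobi iteration: every component of the new x uses only the old x."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant system with exact solution x = (1, 1):
A = [[4.0, 1.0], [1.0, 3.0]]
b = [5.0, 4.0]
x = jacobi(A, b, [0.0, 0.0])   # converges to about (1.0, 1.0)
```

Building the whole new vector from the old one (the list comprehension) is exactly the [D]^(-1) update written componentwise.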

Slide 23: Iterative Solution Methods - the Jacobi Method

Slide 24: Iterative Solution Methods - Gauss-Seidel

In most cases, using the newest values on the right-hand side as soon as they are available provides better estimates of the next value. If this is done, we are using the Gauss-Seidel method:
  ([Lo] + [D]){x}^(j+1) = {b} - [Uo]{x}^j
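A matching sketch of Gauss-Seidel: identical to Jacobi except that each updated component overwrites the old one immediately (same illustrative test system as assumed for the Jacobi sketch):

```python
def gauss_seidel(A, b, x0, iters=25):
    """Gauss-Seidel: new values are used as soon as they are computed."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # overwrite in place
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [5.0, 4.0]
x = gauss_seidel(A, b, [0.0, 0.0])   # converges to about (1.0, 1.0)
```

The only structural difference from the Jacobi sketch is updating `x[i]` inside the inner loop rather than building a whole new vector, which is what makes the newest values available immediately.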

Slide 25: Iterative Solution Methods - Gauss-Seidel (continued)

If either method is going to converge, Gauss-Seidel will converge faster than Jacobi. Why use Jacobi at all? Because the n equations can be separated into n independent tasks, Jacobi is very well suited to computers with parallel processors.

Slide 26: Convergence of Iterative Solution Methods

Rewrite the given system as [A]{x} = ([B] + [E]){x} = {b}, where [B] is diagonal or triangular, so that [B]{y} = {g} can be solved quickly. Then
  [B]{x}^(j+1) = {b} - [E]{x}^j
which is effectively
  {x}^(j+1) = [B]^(-1)({b} - [E]{x}^j)
The true solution {x}_c satisfies
  {x}_c = [B]^(-1)({b} - [E]{x}_c)
Subtracting yields
  {x}_c - {x}^(j+1) = -[B]^(-1)[E]({x}_c - {x}^j)
so
  ||{x}_c - {x}^(j+1)|| <= ||[B]^(-1)[E]|| ||{x}_c - {x}^j||
The iterations converge linearly if ||[B]^(-1)[E]|| < 1:
  for Gauss-Seidel, ||([D] + [Lo])^(-1)[Uo]|| < 1
  for Jacobi, ||[D]^(-1)([Lo] + [Uo])|| < 1

Slide 27: Convergence of Iterative Solution Methods (continued)

Iterative methods will not converge for all systems of equations, nor for all possible rearrangements. If the system is diagonally dominant, i.e.,
  |aii| > sum over j != i of |aij|
then every row satisfies sum over j != i of |aij / aii| < 1.0, i.e., the rearranged equations have small slopes, and convergence is assured.

Slide 28: Convergence of Iterative Solution Methods (continued)

A sufficient condition for convergence exists: strict diagonal dominance,
  |aii| > sum over j != i of |aij|, for all i
Notes:
1. If the above does not hold, the iteration may still converge.
2. This condition looks similar to the infinity (row-sum) norm of [A].
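The sufficient condition can be checked row by row; a short sketch (the two test matrices are illustrative):

```python
def is_diagonally_dominant(A):
    """Strict diagonal dominance: |aii| > sum of |aij| over j != i, every row."""
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

dominant = is_diagonally_dominant([[4.0, 1.0], [1.0, 3.0]])  # True: converges
weak     = is_diagonally_dominant([[1.0, 2.0], [3.0, 1.0]])  # False: no guarantee
```

Note that a False result only means the guarantee is lost; as the slide says, the iteration may still converge.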

Slide 29: Improving the Rate of Convergence of Gauss-Seidel Iteration

Relaxation schemes: replace the Gauss-Seidel update by a weighted average
  xi_new = lambda * xi_GS + (1 - lambda) * xi_old,   with 0.0 < lambda < 2.0
(usually the value of lambda is close to 1).
Underrelaxation (0.0 < lambda < 1.0): more weight is placed on the previous value. Often used to make a non-convergent system convergent, or to expedite convergence by damping out oscillations.
Overrelaxation (1.0 < lambda < 2.0): more weight is placed on the new value. This assumes the new value is heading in the right direction, and hence pushes the new value closer to the true solution.
The choice of lambda is highly problem-dependent and empirical, so relaxation is usually only used for often-repeated calculations of a particular class.
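Relaxation slots into the Gauss-Seidel update as a weighted average; a sketch (the relaxation factor 1.1 and the test system are illustrative choices, not prescribed by the slides):

```python
def gauss_seidel_sor(A, b, x0, lam=1.1, iters=50):
    """Gauss-Seidel with relaxation factor lam (0 < lam < 2)."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / A[i][i]              # plain Gauss-Seidel value
            x[i] = lam * x_gs + (1.0 - lam) * x[i]   # weighted with old value
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [5.0, 4.0]
x = gauss_seidel_sor(A, b, [0.0, 0.0])   # converges to about (1.0, 1.0)
```

With lam = 1.0 this reduces exactly to plain Gauss-Seidel, which makes the relaxed version a convenient drop-in replacement when experimenting with lam.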

Slide 30: Why Iterative Solutions?

We often need to solve [A]{x} = {b} where n is in the thousands: the description of a building or airframe, or finite-difference approximations to PDEs. Most of the elements of [A] will be zero; a finite-difference approximation to Laplace's equation has only five aij != 0 in each row of [A].
Direct method (Gaussian elimination):
  requires n^3/3 flops (say n = 5000; n^3/3 is about 4 x 10^10 flops);
  fills in many of the n^2 - 5n zero elements of [A].
Iterative methods (Jacobi or Gauss-Seidel):
  never store [A] in full (say n = 5000; [A] would need 4n^2 bytes = 100 MB);
  only need to compute [A - B]{x} and to solve [B]{x}^(t+1) = {b} - [A - B]{x}^t.

Slide 31: Why Iterative Solutions? (effort)

Suppose [B] is diagonal:
  solving [B]{v} = {b}: n flops
  computing [A - B]{x}: 4n flops (four off-diagonal nonzeros per row)
  for m iterations: 5mn flops
For n = m = 5000, 5mn = 1.25 x 10^8 flops; at worst O(n^2).
