Methods for Solution of the System of Equations Ax = b (Recap):
Direct methods: the exact solution (ignoring round-off errors) is obtained in a finite number of steps. This group of methods is more efficient for dense and banded matrices.
- Gauss Elimination; Gauss-Jordan Elimination
- LU Decomposition
- Thomas Algorithm (for tri-diagonal banded matrices)
Iterative methods: the solution is obtained through successive approximation. The number of computations depends on the desired accuracy/precision of the solution and is not known a priori. These methods are more efficient for sparse matrices.
- Jacobi Iteration
- Gauss-Seidel Iteration with Successive Over/Under-Relaxation
Gauss Elimination for the matrix equation Ax = b:
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_j \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_i \\ \vdots \\ b_n \end{bmatrix}$$

Approach in two steps:
1. Operating on the rows of matrix A and vector b, transform A into an upper-triangular matrix.
2. Solve the resulting system using the back-substitution algorithm.

Indices: i — row index; j — column index; k — step index.
Matrix after the kth Step:
We only need to perform steps up to k = n - 1 in order to make the matrix upper triangular
Gauss Elimination Algorithm
Forward elimination: for k = 1, 2, …, (n − 1):

Define the multiplication factors: $m_{ik} = \dfrac{a_{ik}}{a_{kk}}$

Compute: $a_{ij} = a_{ij} - m_{ik}\, a_{kj}$; $\quad b_i = b_i - m_{ik}\, b_k$, for i = k+1, k+2, …, n and j = k+1, k+2, …, n.

The resulting system of equations is upper triangular. Solve it using the back-substitution algorithm:

$$x_n = \frac{b_n}{a_{nn}}; \qquad x_i = \frac{b_i - \sum_{j=i+1}^{n} a_{ij} x_j}{a_{ii}}, \qquad i = n-1, n-2, \ldots, 3, 2, 1$$
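As a concrete illustration, a minimal Python/NumPy sketch of the two phases above (forward elimination, then back substitution); it assumes all pivots $a_{kk}$ are non-zero, and the example system is illustrative only:

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve Ax = b by forward elimination followed by back substitution.
    Sketch only: no pivoting, so all pivots a_kk are assumed non-zero."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: for k = 1 .. n-1 zero out column k below the pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]            # multiplication factor m_ik
            A[i, k+1:] -= m * A[k, k+1:]     # a_ij = a_ij - m_ik * a_kj, j = k+1..n
            A[i, k] = 0.0
            b[i] -= m * b[k]                 # b_i = b_i - m_ik * b_k

    # Back substitution on the upper-triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, -2.0, 1.0],
              [-2.0, 4.0, -2.0],
              [1.0, -2.0, 4.0]])
b = np.array([11.0, -16.0, 17.0])
print(gauss_eliminate(A, b))   # [ 1. -2.  3.], same as np.linalg.solve(A, b)
```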
For large n, the number of floating-point operations required to solve a system of equations using Gauss elimination is of the order of 2n³/3.

When is the Gauss elimination algorithm going to fail?

For k = 1, 2, …, (n − 1): $\; m_{ik} = \dfrac{a_{ik}}{a_{kk}}$; $\; a_{ij} = a_{ij} - m_{ik}\, a_{kj}$; $\; b_i = b_i - m_{ik}\, b_k$, for i = k+1, k+2, …, n and j = k+1, k+2, …, n.

It fails if $a_{kk}$ is zero at any step! The $a_{kk}$'s are called "pivots" or "pivotal elements". If this happens at some step, solve the system by exchanging rows so that $a_{kk}$ is non-zero.
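A possible sketch of this row-exchange remedy (partial pivoting): before eliminating column k, bring up the row with the largest $|a_{ik}|$ so the pivot is non-zero (and round-off growth is reduced). The helper function `pivot` below is hypothetical; it would be called at the start of each elimination step k in the sketch above.

```python
import numpy as np

def pivot(A, b, k):
    """Partial pivoting for step k: exchange row k with the row below it that has
    the largest |a_ik| in column k, so that the pivot a_kk is non-zero."""
    p = k + int(np.argmax(np.abs(A[k:, k])))   # row of the largest pivot candidate
    if A[p, k] == 0.0:
        raise ValueError(f"Matrix is singular: no non-zero pivot in column {k}")
    if p != k:
        A[[k, p]] = A[[p, k]]   # exchange rows k and p of A
        b[[k, p]] = b[[p, k]]   # and the corresponding entries of b
```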
Gauss-Jordan Elimination for the matrix equation Ax = b:
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_j \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_i \\ \vdots \\ b_n \end{bmatrix}$$

Approach: operating on the rows of matrix A and vector b, transform A into an identity matrix. The vector b then transforms into the solution vector.

Indices: i — row index; j — column index; k — step index.
Gauss-Jordan Algorithm: the system is transformed into

$$\begin{bmatrix} 1 & 0 & \cdots & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 & \cdots & 0 \\ \vdots & & \ddots & & & \vdots \\ 0 & 0 & \cdots & 1 & \cdots & 0 \\ \vdots & & & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & \cdots & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_j \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_i \\ \vdots \\ b_n \end{bmatrix}$$

For k = 1, 2, …, n:

$$a_{kj} = \frac{a_{kj}}{a_{kk}}; \qquad b_k = \frac{b_k}{a_{kk}}; \qquad j = k, \ldots, n$$

$$a_{ij} = a_{ij} - a_{ik}\, a_{kj}; \qquad b_i = b_i - a_{ik}\, b_k, \qquad \text{for } i = 1, 2, 3, \ldots, n \; (i \neq k) \text{ and } j = k, \ldots, n$$

The final b vector is the solution.

If we work with the augmented matrix [A | b]:

$$a_{kj} = \frac{a_{kj}}{a_{kk}}, \qquad j = k, \ldots, n+1$$

$$a_{ij} = a_{ij} - a_{ik}\, a_{kj}, \qquad i = 1, 2, 3, \ldots, n \; (i \neq k) \text{ and } j = k, \ldots, n+1$$

The (n+1)th column is the solution vector.
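A minimal Python/NumPy sketch of this procedure on the augmented matrix [A | b], assuming non-zero pivots (the example system is illustrative only):

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Solve Ax = b by Gauss-Jordan elimination on the augmented matrix [A | b].
    Sketch only: no row exchanges, so all pivots are assumed non-zero."""
    n = len(b)
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])  # n x (n+1)

    for k in range(n):
        M[k, k:] /= M[k, k]                      # a_kj = a_kj / a_kk,  j = k..n+1
        for i in range(n):
            if i != k:
                M[i, k:] -= M[i, k] * M[k, k:]   # a_ij = a_ij - a_ik * a_kj

    return M[:, n]                               # the (n+1)-th column is the solution

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_jordan_solve(A, b))   # [ 2.  3. -1.]
```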
Homework: calculate the number of floating-point operations required for the solution using the Gauss-Jordan algorithm!
When is the Gauss-Jordan algorithm going to fail?
The inverse of an (n × n) matrix can be computed using the Gauss-Jordan algorithm:
- Augment an identity matrix of order n to the matrix to be inverted. The resulting matrix will be (n × 2n).
- Carry out the operations of the Gauss-Jordan algorithm.
- The original matrix becomes an identity matrix, and the augmented identity matrix becomes its inverse!
$$\left[\begin{array}{cccccc|cccccc} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} & 1 & 0 & \cdots & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} & 0 & 1 & \cdots & 0 & \cdots & 0 \\ \vdots & & & & & \vdots & \vdots & & & & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} & 0 & 0 & \cdots & 1 & \cdots & 0 \\ \vdots & & & & \ddots & \vdots & \vdots & & & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nj} & \cdots & a_{nn} & 0 & 0 & \cdots & 0 & \cdots & 1 \end{array}\right]$$

Gauss-Jordan Algorithm: for k = 1, 2, …, n:

$$a_{kj} = \frac{a_{kj}}{a_{kk}}, \qquad j = k, \ldots, 2n$$

$$a_{ij} = a_{ij} - a_{ik}\, a_{kj}, \qquad i = 1, 2, 3, \ldots, n \; (i \neq k) \text{ and } j = k, \ldots, 2n$$

Can you see why this inversion algorithm works?
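A short sketch of this inversion procedure, again assuming non-zero pivots (the 2×2 example is illustrative only):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I].
    Sketch only: assumes non-zero pivots throughout."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # n x 2n augmented matrix

    for k in range(n):
        M[k, k:] /= M[k, k]                        # a_kj = a_kj / a_kk,  j = k..2n
        for i in range(n):
            if i != k:
                M[i, k:] -= M[i, k] * M[k, k:]     # a_ij = a_ij - a_ik * a_kj

    return M[:, n:]                                # right half is the inverse of A

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
print(gauss_jordan_inverse(A))   # [[ 0.6 -0.7] [-0.2  0.4]], same as np.linalg.inv(A)
```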
LU-Decomposition: a general method for Ax = b
In most engineering problems, the matrix A remains constant while the vector b changes with time. The matrix A describes the system and the vector b describes the external forcing, e.g., all network problems (pipes, electrical, canals, roads, reactors, etc.), structural frames, and many financial analyses. If all the b's were available together, one could solve the system using an augmented matrix, but in practice they are not! Instead of performing ≈ n³ floating-point operations every time a new b becomes available, it is possible to solve the system with ≈ n² floating-point operations if an LU decomposition of matrix A is available. The LU decomposition itself requires ≈ n³ floating-point operations!
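As an illustration of this reuse, one can factor A once and then solve cheaply for each new b; a small sketch using SciPy's `lu_factor`/`lu_solve` (the matrix and forcing vectors are illustrative only):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])

lu, piv = lu_factor(A)    # ~n^3 work, done once for the system matrix A

# Each new forcing vector b costs only ~n^2 work (two triangular solves)
for b in (np.array([7.0, 6.0, 3.0]), np.array([1.0, 0.0, 0.0])):
    x = lu_solve((lu, piv), b)
    print(x)
```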
Consider the system Ax = b (where b changes!):
Perform a decomposition of the form A = LU, where L is a lower-triangular and U is an upper-triangular matrix. The LU decomposition requires ≈ n³ floating-point operations. For any given b, solve Ax = LUx = b. This is equivalent to solving two triangular systems:
- Solve Ly = b using forward substitution to obtain y (≈ n² operations).
- Solve Ux = y using back substitution to obtain x (≈ n² operations).
This is the most frequently used method for engineering applications! We will derive the LU decomposition from Gauss elimination.
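A minimal sketch of the two triangular sweeps, assuming the factors L and U (with non-zero diagonals) are already available, e.g. from the algorithms derived below:

```python
import numpy as np

def solve_lu(L, U, b):
    """Given A = L U, solve A x = b with two triangular sweeps (~n^2 operations)."""
    n = len(b)

    # Forward substitution: solve L y = b
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]

    # Back substitution: solve U x = y
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

    return x
```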
An example of Gauss elimination (four decimal places shown): a 3×3 system Ax = b is reduced to upper-triangular form in two elimination steps, using the multipliers l21 = −2/3 and l31 = 1/3 at step 1, and l32 = a32(2)/5.6667 at step 2. At this point, you may solve the system using back substitution to obtain x1 = 1, x2 = 3 and x3 = 2. Check the following matrix identity: the unit-lower-triangular matrix of multipliers times the resulting upper-triangular matrix reproduces the original matrix A. One can derive the general algorithm of LU decomposition by carefully studying Gauss elimination!
Matrix after the kth Step:
(Diagram of the matrix after the kth step: below the diagonal, the entries of columns 1, 2, …, k have been changed 1, 2, …, k times and have become zero; the remaining entries of rows 1, 2, …, k have been changed 0, 1, 2, 3, …, (k − 1) times.)
Gauss-Elimination Steps (example 4×4 matrix):

$$\begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & a_{14}^{(1)} \\ a_{21}^{(1)} & a_{22}^{(1)} & a_{23}^{(1)} & a_{24}^{(1)} \\ a_{31}^{(1)} & a_{32}^{(1)} & a_{33}^{(1)} & a_{34}^{(1)} \\ a_{41}^{(1)} & a_{42}^{(1)} & a_{43}^{(1)} & a_{44}^{(1)} \end{bmatrix} \xrightarrow{\text{Step 1}} \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & a_{14}^{(1)} \\ 0 & a_{22}^{(2)} & a_{23}^{(2)} & a_{24}^{(2)} \\ 0 & a_{32}^{(2)} & a_{33}^{(2)} & a_{34}^{(2)} \\ 0 & a_{42}^{(2)} & a_{43}^{(2)} & a_{44}^{(2)} \end{bmatrix} \xrightarrow{\text{Step 2}} \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & a_{14}^{(1)} \\ 0 & a_{22}^{(2)} & a_{23}^{(2)} & a_{24}^{(2)} \\ 0 & 0 & a_{33}^{(3)} & a_{34}^{(3)} \\ 0 & 0 & a_{43}^{(3)} & a_{44}^{(3)} \end{bmatrix} \xrightarrow{\text{Step 3}} \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & a_{14}^{(1)} \\ 0 & a_{22}^{(2)} & a_{23}^{(2)} & a_{24}^{(2)} \\ 0 & 0 & a_{33}^{(3)} & a_{34}^{(3)} \\ 0 & 0 & 0 & a_{44}^{(4)} \end{bmatrix}$$
For elements on and above the diagonal (i ≤ j): $a_{ij}$ is actively modified during the first (i − 1) steps and remains constant for the remaining (n − i) steps.

For elements below the diagonal (j < i): $a_{ij}$ is actively modified during the first j steps and remains zero for the remaining (n − 1 − j) steps.

Combined statement: any element $a_{ij}$ is actively modified during the first p steps, where p = min{(i − 1), j}.
Any element $a_{ij}$ is actively modified for p steps, where p = min{(i − 1), j}.

Modification formula: $a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}$

Summing over the p steps:
$$\sum_{k=1}^{p} \left( a_{ij}^{(k+1)} - a_{ij}^{(k)} \right) = -\sum_{k=1}^{p} m_{ik}\, a_{kj}^{(k)}, \qquad p = \min\{(i-1),\, j\}$$

For i ≤ j, p = i − 1:
$$a_{ij}^{(i)} - a_{ij}^{(1)} = -\sum_{k=1}^{i-1} m_{ik}\, a_{kj}^{(k)} \quad\Longrightarrow\quad a_{ij}^{(1)} = a_{ij} = a_{ij}^{(i)} + \sum_{k=1}^{i-1} m_{ik}\, a_{kj}^{(k)}$$

Define $m_{ii} = l_{ii} = 1$:
$$\Longrightarrow\quad a_{ij} = \sum_{k=1}^{i} m_{ik}\, a_{kj}^{(k)}$$
For i ≤ j, p = i − 1: $\quad a_{ij} = \sum_{k=1}^{i} m_{ik}\, a_{kj}^{(k)}$

For j < i, p = j:
$$a_{ij}^{(j+1)} - a_{ij}^{(1)} = -\sum_{k=1}^{j} m_{ik}\, a_{kj}^{(k)}$$

But $a_{ij}^{(j+1)} = 0$, therefore
$$a_{ij}^{(1)} = a_{ij} = \sum_{k=1}^{j} m_{ik}\, a_{kj}^{(k)}$$

Combining the two statements:
$$a_{ij} = \sum_{k=1}^{p} m_{ik}\, a_{kj}^{(k)}, \qquad p = \min\{i,\, j\}$$
$$a_{ij} = \sum_{k=1}^{p} m_{ik}\, a_{kj}^{(k)}, \qquad p = \min\{i,\, j\}$$

Define the elements of matrix L as: $l_{ik} = m_{ik}$
Define the elements of matrix U as: $u_{kj} = a_{kj}^{(k)}$

$$\Longrightarrow\quad a_{ij} = \sum_{k=1}^{p} l_{ik}\, u_{kj}, \qquad p = \min\{i,\, j\}$$

This is equivalent to the matrix multiplication A = LU. Can you see it?
$$a_{ij} = \sum_{k=1}^{p} l_{ik}\, u_{kj}, \qquad p = \min\{i,\, j\}$$

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$$

$a_{11} = l_{11} u_{11}$, $\quad a_{12} = l_{11} u_{12}$, $\quad a_{13} = l_{11} u_{13}$
$a_{21} = l_{21} u_{11}$, $\quad a_{22} = l_{21} u_{12} + l_{22} u_{22}$, $\quad a_{23} = l_{21} u_{13} + l_{22} u_{23}$
$a_{31} = l_{31} u_{11}$, $\quad a_{32} = l_{31} u_{12} + l_{32} u_{22}$, $\quad a_{33} = l_{31} u_{13} + l_{32} u_{23} + l_{33} u_{33}$

12 unknowns and 9 equations: 3 free entries! In general, n² equations and n² + n unknowns: n free entries!
Doolittle's Algorithm:
Define: $m_{kk} = l_{kk} = 1$

The U matrix (i ≤ j):
$$a_{ij} = a_{ij}^{(i)} + \sum_{k=1}^{i-1} m_{ik}\, a_{kj}^{(k)} \;\Longrightarrow\; a_{ij} = u_{ij} + \sum_{k=1}^{i-1} l_{ik}\, u_{kj}$$
$$u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik}\, u_{kj}, \qquad i = 1, 2, \ldots, n; \quad j = i, i+1, \ldots, n$$

The L matrix (j < i):
$$a_{ij} = \sum_{k=1}^{j} m_{ik}\, a_{kj}^{(k)} \;\Longrightarrow\; a_{ij} = \sum_{k=1}^{j} l_{ik}\, u_{kj}$$
$$l_{ij} = \frac{a_{ij} - \sum_{k=1}^{j-1} l_{ik}\, u_{kj}}{u_{jj}}, \qquad i = j+1, \ldots, n; \quad j = 1, 2, \ldots, n$$
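A minimal Python/NumPy sketch of Doolittle's formulas above (no pivoting, so the pivots $u_{ii}$ are assumed to stay non-zero; the test matrix is illustrative only):

```python
import numpy as np

def doolittle_lu(A):
    """Doolittle LU decomposition (l_ii = 1).
    Sketch only: no pivoting, so u_ii is assumed non-zero at every stage."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))

    for i in range(n):
        # Row i of U:  u_ij = a_ij - sum_{k<i} l_ik u_kj,   j = i..n
        for j in range(i, n):
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        # Column i of L:  l_ji = (a_ji - sum_{k<i} l_jk u_ki) / u_ii,   j = i+1..n
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]

    return L, U

A = np.array([[2.0, -1.0, -2.0],
              [-4.0, 6.0, 3.0],
              [-4.0, -2.0, 8.0]])
L, U = doolittle_lu(A)
print(np.allclose(L @ U, A))   # True: the factors reproduce A
```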
Crout's Algorithm: $\; a_{ij} = \sum_{k=1}^{p} l_{ik}\, u_{kj}$, $\; p = \min\{i, j\}$. Define: $u_{kk} = 1$.

The L matrix (j ≤ i):
$$a_{ij} = \sum_{k=1}^{j} l_{ik}\, u_{kj} \;\Longrightarrow\; l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}\, u_{kj}, \qquad j = 1, 2, \ldots, n; \quad i = j, j+1, \ldots, n$$

The U matrix (i < j):
$$a_{ij} = \sum_{k=1}^{i} l_{ik}\, u_{kj} \;\Longrightarrow\; u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} l_{ik}\, u_{kj}}{l_{ii}}, \qquad i = 1, 2, \ldots, n; \quad j = i+1, \ldots, n$$
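A corresponding sketch of Crout's formulas, mirroring the Doolittle version above (the unit diagonal now sits in U; the pivots $l_{jj}$ are assumed non-zero):

```python
import numpy as np

def crout_lu(A):
    """Crout LU decomposition (u_ii = 1).
    Sketch only: assumes the pivots l_jj never become zero."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)

    for j in range(n):
        # Column j of L:  l_ij = a_ij - sum_{k<j} l_ik u_kj,   i = j..n
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # Row j of U:  u_ji = (a_ji - sum_{k<j} l_jk u_ki) / l_jj,   i = j+1..n
        for i in range(j + 1, n):
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]

    return L, U

A = np.array([[2.0, -1.0, -2.0],
              [-4.0, 6.0, 3.0],
              [-4.0, -2.0, 8.0]])
L, U = crout_lu(A)
print(np.allclose(L @ U, A))   # True: the factors reproduce A
```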
Doolittle's Algorithm (3×3 example):

Define: $l_{11} = l_{22} = l_{33} = 1$

The L matrix (j < i): $\; l_{ij} = \dfrac{a_{ij} - \sum_{k=1}^{j-1} l_{ik}\, u_{kj}}{u_{jj}}$, $\quad i = j+1, \ldots, n$; $\; j = 1, 2, \ldots, n-1$

j = 1, i = 2, 3: $\; l_{21} = \dfrac{a_{21}}{u_{11}}$, $\; l_{31} = \dfrac{a_{31}}{u_{11}}$
j = 2, i = 3: $\; l_{32} = \dfrac{a_{32} - l_{31} u_{12}}{u_{22}}$

The U matrix (i ≤ j): $\; u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik}\, u_{kj}$, $\quad i = 1, 2, \ldots, n$; $\; j = i, i+1, \ldots, n$

i = 1, j = 1, 2, 3: $\; u_{11} = a_{11}$, $\; u_{12} = a_{12}$, $\; u_{13} = a_{13}$
i = 2, j = 2, 3: $\; u_{22} = a_{22} - l_{21} u_{12}$, $\; u_{23} = a_{23} - l_{21} u_{13}$
i = 3, j = 3: $\; u_{33} = a_{33} - l_{31} u_{13} - l_{32} u_{23}$

Computation sequence (alternating rows of U and columns of L): (1) first row of U, (2) first column of L, (3) second row of U, (4) second column of L, (5) third row of U.
Crout's Algorithm (3×3 example):

Define: $u_{11} = u_{22} = u_{33} = 1$

The L matrix (j ≤ i): $\; l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}\, u_{kj}$, $\quad j = 1, 2, \ldots, n$; $\; i = j, j+1, \ldots, n$

The U matrix (i < j): $\; u_{ij} = \dfrac{a_{ij} - \sum_{k=1}^{i-1} l_{ik}\, u_{kj}}{l_{ii}}$, $\quad i = 1, 2, \ldots, n-1$; $\; j = i+1, \ldots, n$

Computation sequence (alternating columns of L and rows of U): (1) first column of L, (2) first row of U, (3) second column of L, (4) second row of U, (5) third column of L. Verify this computation sequence!