Systems of Linear Equations Iterative Methods

1 Systems of Linear Equations Iterative Methods
Daniel Baur, ETH Zurich, Institut für Chemie- und Bioingenieurwissenschaften, ETH Hönggerberg / HCI F128, Zürich
Numerical Methods for Chemical Engineers / Iterative Methods

2 Iterative Methods
The main idea behind iterative methods is to reformulate the problem by splitting the matrix: instead of solving A·x = b directly, write A = S - (S - A) and solve S·x = (S - A)·x + b.
Now we can proceed iteratively, solving S·xk+1 = (S - A)·xk + b at each step (see the sketch below).
This is advantageous if S has a structure that makes it particularly easy to factorize or solve (e.g. diagonal, tridiagonal, triangular) and if S can be considered constant.
Even if b or A (and S) change slightly, the solution will be similar, so the iteration converges quickly if the previous solution is used as the starting point.
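A minimal MATLAB sketch of such a splitting iteration, assuming A, b, a splitting matrix S, a starting point x0, a tolerance tol and an iteration limit maxit are already defined (all names here are placeholders, not the lecture's own code):

% Generic splitting iteration: A = S - (S - A), so S*x_new = (S - A)*x_old + b.
x = x0;
for k = 1:maxit
    x_new = S \ ((S - A)*x + b);   % cheap if S is diagonal or triangular
    if norm(x_new - x) < tol       % stop when the iterates no longer change
        break
    end
    x = x_new;
end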

3 Convergence of Iterative Methods
Let us rewrite the iteration as a fixed-point loop, xk+1 = S-1((S - A)xk + b).
As we have seen with the convergence loop of a predictor-corrector ODE solver scheme, such a loop only converges if the spectral radius of the iteration matrix is smaller than 1, i.e. ρ(I - S-1A) < 1.
This is the case if S and A are similar in some sense, so that S-1A is close to the identity.
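A quick numerical check of this condition (sketch; assumes A and S are small dense matrices):

% The splitting iteration converges iff the spectral radius of the
% iteration matrix I - inv(S)*A is strictly smaller than 1.
rho = max(abs(eig(eye(size(A)) - S\A)));
converges = rho < 1;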

4 Solution Procedure
If we look at the iteration equation S·xk+1 = (S - A)·xk + b, we can formulate it in a more convenient way by treating the right hand side as a constant vector during each step.
Since we can choose S to have a structure that is easy to solve, there is no need to calculate S-1 or do Gauss elimination to solve this equation system; we can use more direct (even analytical) approaches instead.

5 Jacobi’s Method
The Jacobi method chooses S to be the diagonal matrix D containing the diagonal of A.
The procedure is therefore given by xk+1 = D-1(b - (A - D)xk), as sketched below.
This method is guaranteed to converge if A is diagonally dominant, i.e. if in every row the absolute value of the diagonal element is larger than the sum of the absolute values of the off-diagonal elements.
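A minimal MATLAB sketch of the Jacobi iteration, assuming A, b, x0, tol and maxit are defined (placeholder names, not the lecture's own code):

% Jacobi iteration: S = D = diag(diag(A)).
D = diag(diag(A));                 % diagonal part of A
R = A - D;                         % off-diagonal remainder
x = x0;
for k = 1:maxit
    x_new = D \ (b - R*x);         % trivial solve, D is diagonal
    if norm(x_new - x) < tol, break, end
    x = x_new;
end
% Sufficient condition for convergence: strict diagonal dominance of A.
is_diag_dom = all(abs(diag(A)) > sum(abs(A),2) - abs(diag(A)));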

6 The Gauss-Seidel Method
In this case, S is chosen to be the lower triangular part of A (including the diagonal).
The procedure is therefore given by xk+1 = S-1(b - (A - S)xk), which only requires forward substitution; see the sketch below.
This method is guaranteed to converge if A is symmetric positive-definite or diagonally dominant.
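The corresponding MATLAB sketch, under the same assumptions as before (A, b, x0, tol and maxit defined):

% Gauss-Seidel iteration: S = lower triangular part of A (including the diagonal).
S = tril(A);                       % solved by forward substitution
R = A - S;                         % strictly upper triangular remainder
x = x0;
for k = 1:maxit
    x_new = S \ (b - R*x);
    if norm(x_new - x) < tol, break, end
    x = x_new;
end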

7 Another Way of Implementation
At every iteration, a linear system of equations is solved: S·xk+1 = (S - A)·xk + b.
This is equivalent to xk+1 = M·xk + c with M = S-1(S - A) and c = S-1b.
It is therefore possible to compute M and c once in the beginning and then do only simple matrix multiplications and vector additions during the iteration (see the sketch below).
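A sketch of this variant in MATLAB, again assuming A, b, S, x0 and maxit are given:

% Precompute the iteration matrix and constant vector once.
M = S \ (S - A);                   % iteration matrix, computed once
c = S \ b;                         % constant vector, computed once
x = x0;
for k = 1:maxit
    x = M*x + c;                   % only a matrix-vector product per step
end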

8 The Conjugate Gradient Method
We say that two non-zero vectors u and v are conjugate (with respect to A) if uTAv = 0.
If A is symmetric and positive definite, uTAv defines an inner product of u and v.
If we now find a series of N mutually conjugate vectors pi, the solution of the linear system Ax = b must be contained in the space that these vectors span, i.e. x is a linear combination of the pi with the coefficients αi = piTb / (piTApi).
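A small MATLAB sketch of this expansion, assuming A (symmetric positive definite), b and a matrix P whose columns are N mutually A-conjugate vectors are given (hypothetical names):

% Assemble the solution of A*x = b from mutually A-conjugate directions.
x = zeros(size(b));
for i = 1:size(P,2)
    p     = P(:,i);
    alpha = (p'*b) / (p'*A*p);     % expansion coefficient for this direction
    x     = x + alpha*p;
end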

9 The Conjugate Gradient Method (Continued)
If we find the vectors pk one after the other, we can proceed iteratively.
The main idea for the transformation into an iterative method is that the unique minimizer of the quadratic cost function f(x) = 0.5 xTAx - bTx is the same as the solution of the equation Ax = b.
The negative gradient of f at the point xk, which is the steepest descent direction, is the residual rk = b - Axk.
However, to ensure that the next step is conjugate to the previous ones, we have to correct this search direction.
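In MATLAB, the cost function and its negative gradient could be written as follows (sketch, assuming A and b are defined):

% Quadratic cost whose unique minimizer solves A*x = b (A symmetric positive definite),
% and its negative gradient, which equals the residual r = b - A*x.
f     = @(x) 0.5*(x'*A*x) - b'*x;
ngrad = @(x) b - A*x;              % steepest descent direction at x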

10 Stability and Convergence of the CG-Method
The CG-Method is only stable if A is symmetric and positive definite. However, if we have an asymmetric matrix, we can use the equivalent normal equations ATAx = ATb, where C = ATA is symmetric positive definite for any non-singular matrix A.
The convergence speed of the CG-Method is determined by the condition number of A; the larger it is, the slower the method converges, and unfortunately κ(ATA) = κ(A)^2.
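A sketch of the normal-equations workaround in MATLAB (assumes A and b are given; note the squared condition number):

% For non-symmetric A, solve C*x = d with C = A'*A (SPD if A is non-singular).
C = A'*A;
d = A'*b;
% cond(C) = cond(A)^2, so CG on the normal equations can converge slowly.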

11 Preconditioned Conjugate Gradient
We can improve the convergence speed by preconditioning the equation system, i.e. replacing Ax = b by an equivalent system M-1Ax = M-1b so that κ(M-1A) < κ(A).
Some choices of M are:
Jacobi preconditioning, where M = D and D is the diagonal matrix of A
Incomplete Cholesky factorization, where M = KKT and KKT ≈ A
SSOR preconditioning with a relaxation parameter ω ∈ (0, 2), where L is the strictly lower triangular matrix of A (a common form is sketched below)
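A MATLAB sketch of how these preconditioners could be built, assuming A is given; the SSOR expression shown is one commonly used form and is stated here as an assumption:

% Three preconditioner choices (sketch).
D      = diag(diag(A));                      % diagonal part of A
M_jac  = D;                                  % Jacobi: M = D
K      = ichol(sparse(A));                   % incomplete Cholesky factor, K*K' ~ A
M_ic   = K*K';
L      = tril(A,-1);                         % strictly lower triangular part of A
omega  = 1.5;                                % relaxation parameter in (0,2)
M_ssor = (omega/(2-omega)) * (D/omega + L) * (D \ (D/omega + L)');  % assumed SSOR form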

12 Algorithm for the PCG-Method
1. Choose a starting point x0 and calculate the initial residual r0 = b - Ax0 and, from the preconditioned residual M-1r0, the first search direction p0.
2. Proceed from xk to xk+1 using pk as the search direction.
3. Correct the search direction so that the new direction is conjugate to the previous ones.
4. Iterate steps 2 and 3 until norm(rk+1) or norm(A*xk - b) is sufficiently small, or 5*length(A) steps have been taken.
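A minimal MATLAB implementation sketch of this algorithm; the function name pcg_sketch and its signature are placeholders chosen here, not part of the lecture, and A is assumed symmetric positive definite with M an SPD preconditioner:

function [x, k] = pcg_sketch(A, b, M, x0, tol, maxit)
% Preconditioned conjugate gradient sketch following the steps on this slide.
x = x0;
r = b - A*x;                       % initial residual
z = M \ r;                         % preconditioned residual
p = z;                             % first search direction
for k = 1:maxit
    Ap    = A*p;
    alpha = (r'*z) / (p'*Ap);      % step length along p
    x     = x + alpha*p;           % step 2: proceed from xk to xk+1
    r_new = r - alpha*Ap;          % updated residual
    if norm(r_new) < tol           % stop when the residual is small enough
        break
    end
    z_new = M \ r_new;
    beta  = (r_new'*z_new) / (r'*z);
    p     = z_new + beta*p;        % step 3: correct the search direction
    r = r_new;
    z = z_new;
end
end

With M set to the identity matrix, this reduces to the unpreconditioned CG method.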

13 Exercise: 2-D Heat Transport
Consider a spherical catalyst particle in which a chemical reaction takes place.
Assumptions:
The reaction rate is independent of concentration and temperature.
Thermal diffusivity is independent of temperature.
There is no convective heat transport.
There is a perfect heat sink at the boundary, i.e. T = 0.

14 Exercise (Continued)
The heat transfer is described by a PDE in which ∇² is the Laplace operator, k is the heat produced by the reaction and r is the particle radius.

15 Exercise (Continued)
After substituting the temperature, the steady-state balance reduces to a Poisson-type equation.
This equation can be solved by using a discretized Laplace operator (the five-point stencil sketched below).
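The discrete Laplace operator on a uniform grid can be written with the standard five-point stencil; the snippet below is an illustrative sketch for one interior grid point (i, j), assuming a temperature array T and grid spacing h, with the T = 0 boundary handled separately:

% Five-point stencil approximation of the 2-D Laplacian at interior point (i,j).
lapT = ( T(i+1,j) + T(i-1,j) + T(i,j+1) + T(i,j-1) - 4*T(i,j) ) / h^2;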

16 Assignment
Implement the PCG algorithm as a function (see slide 12).
Use it to solve the discrete steady-state Laplace equation for the catalyst particle, discretizing with N = 50 points.
Assume the parameter values given in the exercise.
Create a disk-shaped grid using G = numgrid('D',N); and check the grid with spy(G).
Use D = delsq(G); to create the negative five-point discrete Laplace operator; take a look at the operator with spy(D).
Create the right hand side using b = ones(size(D,1),1);
Choose an initial guess, e.g. x0 = zeros(size(D,1),1);
Solve the system without preconditioning, i.e. set M = eye(size(D));
A possible assembly of these steps is sketched below.
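A sketch of these setup steps assembled into a script; pcg_sketch is the placeholder function from the slide-12 sketch, and the tolerance 1e-6 is an assumption:

N  = 50;
G  = numgrid('D', N);                  % disk-shaped grid
spy(G)                                 % inspect the grid
D  = delsq(G);                         % negative five-point discrete Laplacian (sparse, SPD)
spy(D)                                 % inspect the operator
b  = ones(size(D,1), 1);               % right hand side
x0 = zeros(size(D,1), 1);              % initial guess
M  = speye(size(D));                   % identity, i.e. no preconditioning
[x, iters] = pcg_sketch(D, b, M, x0, 1e-6, 5*length(D));

Here speye is used instead of eye simply to keep M sparse; with the identity as preconditioner this is the unpreconditioned case.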

17 Assignment (Continued)
Plot the solution using U = G; U(G>0) = full(x(G(G>0))); clabel(contour(U))
Try out different preconditioning methods:
Jacobi preconditioning, i.e. M = diag(diag(D));
SSOR preconditioning with ω = 1.5; use L = tril(D,-1); to get the strictly lower triangular part and E = diag(diag(D)); to get the diagonal matrix of D
Incomplete Cholesky factorization; this can be done using M = C*C'; with C = ichol(D); or C = cholinc(D,'0'); depending on the MATLAB version
How many iterations are needed in each case? A comparison is sketched below.
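A sketch of this comparison, reusing the variables from the previous sketch and the placeholder function pcg_sketch; the SSOR preconditioner expression is a commonly used form assumed here, with ω = 1.5:

% Plot the solution on the grid.
U = G;  U(G>0) = full(x(G(G>0)));      % map the solution vector back onto the grid
clabel(contour(U))                     % labelled contour plot

% Build the three preconditioners for D.
M_jac  = diag(diag(D));                % Jacobi
L      = tril(D,-1);                   % strictly lower triangular part of D
E      = diag(diag(D));                % diagonal part of D
omega  = 1.5;
M_ssor = (omega/(2-omega)) * (E/omega + L) * (E \ (E/omega + L)');  % assumed SSOR form
C      = ichol(D);                     % incomplete Cholesky factor
M_ic   = C*C';

% Compare the number of iterations.
[~, it_jac ] = pcg_sketch(D, b, M_jac,  x0, 1e-6, 5*length(D));
[~, it_ssor] = pcg_sketch(D, b, M_ssor, x0, 1e-6, 5*length(D));
[~, it_ic  ] = pcg_sketch(D, b, M_ic,   x0, 1e-6, 5*length(D));
fprintf('Jacobi: %d, SSOR: %d, IC: %d iterations\n', it_jac, it_ssor, it_ic);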

