CSE 245: Computer Aided Circuit Simulation and Verification

1 CSE 245: Computer Aided Circuit Simulation and Verification
Matrix Computations: Iterative Methods (II) Chung-Kuan Cheng

2 Outline
Introduction
Direct Methods
Iterative Methods:
  Formulations
  Projection Methods
  Krylov Space Methods
  Preconditioned Iterations
  Multigrid Methods
  Domain Decomposition Methods

3 Introduction
Direct methods (LU decomposition, domain decomposition): general and robust, but can become complicated when N >= 1M.
Iterative methods (Jacobi, Gauss-Seidel, conjugate gradient, GMRES, multigrid, preconditioning): an excellent choice for SPD matrices; remain an art for arbitrary matrices.

4 Formulation
Error in A norm: matrix A is SPD (symmetric and positive definite).
Minimal residual: matrix A can be arbitrary.

5 Formulation: Error in A norm
Minimize (x - x*)^T A (x - x*), where A x* = b and A is SPD; equivalently, minimize E(x) = (1/2) x^T A x - b^T x.
Search space: x = x0 + V y, where x0 is an initial solution, the n-by-m matrix V holds m basis vectors of the subspace K, and the m-vector y contains the m variables.

6 Solution: Error in A norm
Minimize E(x) = (1/2) x^T A x - b^T x over the search space x = x0 + V y, with residual r = b - A x.
V^T A V is nonsingular if A is SPD and V has full rank.
y = (V^T A V)^{-1} V^T r0
x = x0 + V (V^T A V)^{-1} V^T r0
At the solution, V^T r = 0 and E(x) = E(x0) - (1/2) r0^T V (V^T A V)^{-1} V^T r0.
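A minimal numpy sketch of this A-norm projection step. The matrix A, basis V, starting point x0, and right-hand side b below are illustrative assumptions, not data from the lecture.

```python
import numpy as np

def project_A_norm(A, b, x0, V):
    """Minimize E(x) = 1/2 x^T A x - b^T x over x = x0 + V y (A SPD, V full rank)."""
    r0 = b - A @ x0                                # initial residual
    y = np.linalg.solve(V.T @ A @ V, V.T @ r0)     # y = (V^T A V)^{-1} V^T r0
    return x0 + V @ y

# Small example: an SPD A and a 2-dimensional search space inside R^4.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)                        # SPD by construction
b = rng.standard_normal(4)
x0 = np.zeros(4)
V = rng.standard_normal((4, 2))
x = project_A_norm(A, b, x0, V)
print(np.linalg.norm(V.T @ (b - A @ x)))           # V^T r should be ~0
```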

7 Solution: Error in A norm
For any V' = V W, where V is n-by-m and W is a nonsingular m-by-m matrix, the solution x remains the same.
Search space: x = x0 + V' y, r = b - A x.
V'^T A V' is nonsingular if A is SPD and V has full rank.
x = x0 + V' (V'^T A V')^{-1} V'^T r0 = x0 + V (V^T A V)^{-1} V^T r0
V'^T r = W^T V^T r = 0
E(x) = E(x0) - (1/2) r0^T V (V^T A V)^{-1} V^T r0

8 Steepest Descent: Error in A norm
Minimize E(x) = (1/2) x^T A x - b^T x. Gradient: A x - b = -r.
Set x = x0 + y r0 (search along the residual direction).
The scalar r0^T A r0 is nonzero if A is SPD.
y = (r0^T A r0)^{-1} r0^T r0
x = x0 + r0 (r0^T A r0)^{-1} r0^T r0
r0^T r = 0
E(x) = E(x0) - (1/2) (r0^T r0)^2 / (r0^T A r0)
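A hedged sketch of the repeated steepest-descent iteration in this A-norm setting (A assumed SPD). The step length alpha = (r^T r)/(r^T A r) follows the slide; names and tolerances are our own.

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    x = x0.copy()
    r = b - A @ x
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)   # exact minimizer of E along the residual direction
        x = x + alpha * r
        r = r - alpha * Ar           # cheaper than recomputing b - A x
    return x
```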

9 Lanczos: Error in A norm
Minimize E(x) = (1/2) x^T A x - b^T x. Set x = x0 + V y.
v1 = r0, and vi lies in the Krylov subspace K(r0, A, i).
V = [v1, v2, ..., vm] is orthogonal.
A V = V Hm + v_{m+1} e_m^T, so V^T A V = Hm.
Note: since A is SPD, Hm is a tridiagonal matrix Tm.

10 Conjugate Gradient: Error in A norm
Minimize E(x) = (1/2) x^T A x - b^T x. Set x = x0 + V y.
v1 = r0, and vi lies in the Krylov subspace K(r0, A, i).
V = [v1, v2, ..., vm] is orthogonal in the A norm, i.e. V^T A V = diag(vi^T A vi).
Hence yi = (vi^T A vi)^{-1} vi^T r0.

11 Formulation: Residual
Minimize |r|^2 = |b - A x|^2 for an arbitrary square matrix A, i.e. minimize R(x) = (b - A x)^T (b - A x).
Search space: x = x0 + V y, where x0 is an initial solution, the n-by-m matrix V holds m basis vectors of the subspace K, and the m-vector y contains the m variables.

12 Solution: Residual
Minimize R(x) = (b - A x)^T (b - A x) over the search space x = x0 + V y.
V^T A^T A V is nonsingular if A is nonsingular and V has full rank.
y = (V^T A^T A V)^{-1} V^T A^T r0
x = x0 + V (V^T A^T A V)^{-1} V^T A^T r0
At the solution, V^T A^T r = 0 and R(x) = R(x0) - r0^T A V (V^T A^T A V)^{-1} V^T A^T r0.
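A short sketch of this minimal-residual projection: solving the normal equations (V^T A^T A V) y = V^T A^T r0 is the same as a least-squares fit of A V y to r0. Variable names are illustrative assumptions.

```python
import numpy as np

def project_min_residual(A, b, x0, V):
    r0 = b - A @ x0
    AV = A @ V
    y, *_ = np.linalg.lstsq(AV, r0, rcond=None)   # solves min_y ||r0 - A V y||_2
    return x0 + V @ y
```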

13 Steepest Descent: Residual
Minimize R(x) = (b - A x)^T (b - A x). Gradient: -2 A^T (b - A x) = -2 A^T r.
Let x = x0 + y A^T r0, i.e. V = A^T r0.
The scalar V^T A^T A V is nonzero if A is nonsingular.
y = (V^T A^T A V)^{-1} V^T A^T r0
x = x0 + V (V^T A^T A V)^{-1} V^T A^T r0
V^T A^T r = 0
R(x) = R(x0) - r0^T A V (V^T A^T A V)^{-1} V^T A^T r0

14 GMRES: Residual
Minimize R(x) = (b - A x)^T (b - A x). Gradient: -2 A^T (b - A x) = -2 A^T r.
Set x = x0 + V y, with v1 = r0 and vi in the Krylov subspace K(r0, A, i).
V = [v1, v2, ..., vm] is orthogonal, and A V = V Hm + v_{m+1} e_m^T = V_{m+1} Hm (Hm extended to (m+1)-by-m).
x = x0 + V (V^T A^T A V)^{-1} V^T A^T r0 = x0 + V (Hm^T Hm)^{-1} Hm^T e1 |r0|_2
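In practice one rarely forms the GMRES projection by hand; a library routine such as scipy.sparse.linalg.gmres does the Arnoldi process and least-squares solve internally. A hedged usage sketch with a made-up nonsymmetric test matrix (not from the lecture):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# Nonsymmetric tridiagonal test matrix (illustrative only).
A = sp.diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = spla.gmres(A, b, restart=30)   # info == 0 means the solver converged
print(info, np.linalg.norm(b - A @ x))
```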

15 Conjugate Residual: Residual
Minimize R(x) = (b - A x)^T (b - A x). Gradient: -2 A^T (b - A x) = -2 A^T r.
Set x = x0 + V y, with v1 = r0 and vi in the Krylov subspace K(r0, A, i).
(A V)^T A V = D, a diagonal matrix.
x = x0 + V (V^T A^T A V)^{-1} V^T A^T r0 = x0 + V D^{-1} V^T A^T r0

16 Conjugate Gradient Method
Steepest descent may repeat search directions. Why take exactly one step for each direction?
(Figure: search directions of the steepest descent method.)

17 Orthogonal Directions
Pick orthogonal search directions: we would like the error of x_{i+1} to be orthogonal to the search direction d_i, i.e. d_i^T e_{i+1} = 0.
But we don't know the error e_{i+1} (it involves the unknown solution x*)!

18 Orthogonal -> A-orthogonal
Instead of orthogonal search directions, we make the search directions A-orthogonal (conjugate): d_i^T A d_j = 0 for i != j.

19 Search Step Size
The step size along each search direction d_i is chosen to minimize E(x) exactly along that direction: alpha_i = (d_i^T r_i) / (d_i^T A d_i).

20 Iteration finishes in n steps
Expand the initial error in the A-orthogonal directions: e_0 = sum_j delta_j d_j.
The error component along direction d_j is eliminated at step j. After n steps, all error components are eliminated.

21 Conjugate Search Directions
How do we construct A-orthogonal search directions, given a set of n linearly independent vectors?
Since the residual vectors in the steepest descent method are mutually orthogonal, the residuals are a good candidate set to start with.

22 Construct Search Direction - 1
In the steepest descent method, the new residual is a linear combination of the previous residual and A times the previous residual.
Letting the search space collect these vectors, we obtain a Krylov subspace: the space built by repeatedly applying a matrix to a vector.

23 Construct Search Direction - 2
Let d_0 = r_0. For i > 0, obtain d_i by A-orthogonalizing r_i against the previous directions (conjugate Gram-Schmidt).

24 Construct Search Direction - 3
The next direction can be obtained from the previous one alone, without saving them all:
let beta_i = (r_i^T r_i) / (r_{i-1}^T r_{i-1}), then d_i = r_i + beta_i d_{i-1}.

25 Conjugate Gradient Algorithm
Given x0, iterate until the residual is smaller than the error tolerance (see the sketch below).
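A minimal sketch of the conjugate gradient algorithm, assuming A is SPD. This is the textbook recurrence (cf. Saad 2003), not necessarily the lecture's exact pseudocode, and the variable names are our own.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = max_iter or n
    x = x0.copy()
    r = b - A @ x
    d = r.copy()                       # first search direction = residual
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) < tol:
            break
        Ad = A @ d
        alpha = rs_old / (d @ Ad)      # exact step length along d
        x = x + alpha * d
        r = r - alpha * Ad
        rs_new = r @ r
        d = r + (rs_new / rs_old) * d  # new A-orthogonal direction
        rs_old = rs_new
    return x
```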

26 Conjugate Gradient: Convergence
In exact arithmetic, CG converges in n steps (completely unrealistic in floating point!).
Accuracy after k steps of CG is related to a polynomial question: among polynomials of degree k that equal 1 at 0, how small can such a polynomial be at all the eigenvalues of A? Eigenvalues clustered close together are good.
Condition number: kappa(A) = ||A||_2 ||A^{-1}||_2 = lambda_max(A) / lambda_min(A) for SPD A.
The residual is reduced by a constant factor every O(kappa(A)^{1/2}) iterations of CG.
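A small illustration of the role of the condition number, using the classical A-norm error bound ||e_k||_A <= 2 ((sqrt(kappa)-1)/(sqrt(kappa)+1))^k ||e_0||_A. The SPD test matrix below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
eigs = np.linspace(1.0, 100.0, 50)        # kappa = 100
A = Q @ np.diag(eigs) @ Q.T

kappa = np.linalg.cond(A)                  # = lambda_max / lambda_min for SPD A
rho = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
for k in (10, 50, 100):
    print(k, 2 * rho**k)                   # bound on A-norm error reduction after k steps
```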

27 Other Krylov subspace methods
Nonsymmetric linear systems:
GMRES: for i = 1, 2, 3, ..., find xi in Ki(A, b) such that ri = (A xi - b) is orthogonal to Ki(A, b).
But there is no short recurrence, so old vectors must be saved, which takes much more space. (Usually "restarted" every k iterations to use less space.)
BiCGStab, QMR, etc.: two spaces Ki(A, b) and Ki(A^T, b) with mutually orthogonal bases.
Short recurrences give O(n) space, but the methods are less robust; convergence and preconditioning are more delicate than for CG. Active area of current research.
Eigenvalues: Lanczos (symmetric), Arnoldi (nonsymmetric).

28 Preconditioners
Suppose you had a matrix B such that:
(1) the condition number kappa(B^{-1}A) is small, and
(2) By = z is easy to solve.
Then you could solve (B^{-1}A)x = B^{-1}b instead of Ax = b.
B = A is great for (1), not for (2). B = I is great for (2), not for (1).
Domain-specific approximations sometimes work; B = diagonal of A sometimes works.
Better: blend in some direct-method ideas...

29 Preconditioned conjugate gradient iteration
x0 = 0, r0 = b, d0 = B^{-1} r0, y0 = B^{-1} r0
for k = 1, 2, 3, ...
  alpha_k = (y_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1})   (step length)
  x_k = x_{k-1} + alpha_k d_{k-1}                          (approximate solution)
  r_k = r_{k-1} - alpha_k A d_{k-1}                        (residual)
  y_k = B^{-1} r_k                                          (preconditioner solve)
  beta_k = (y_k^T r_k) / (y_{k-1}^T r_{k-1})               (improvement)
  d_k = y_k + beta_k d_{k-1}                               (search direction)
One matrix-vector multiplication per iteration; one solve with the preconditioner per iteration.
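A hedged sketch of this preconditioned CG iteration, using the simplest preconditioner mentioned on the previous slide, B = diag(A) (Jacobi). A is assumed SPD; names and tolerances are illustrative.

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=1000):
    Bdiag = np.diag(A)               # preconditioner B = diagonal of A
    x = np.zeros_like(b)
    r = b.copy()                     # r0 = b since x0 = 0
    y = r / Bdiag                    # y = B^{-1} r  (the preconditioner solve)
    d = y.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (y @ r) / (d @ Ad)   # step length
        x = x + alpha * d            # approximate solution
        r_new = r - alpha * Ad       # residual
        y_new = r_new / Bdiag        # preconditioning solve
        beta = (y_new @ r_new) / (y @ r)
        d = y_new + beta * d         # new search direction
        r, y = r_new, y_new
    return x
```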

30 Outline
Iterative methods:
Stationary iterative methods (SOR, Gauss-Seidel, Jacobi)
Krylov methods (CG, GMRES)
Multigrid methods

31 What is multigrid?
A multilevel iterative method to solve Ax = b.
Originated in PDEs on geometric grids.
Extending the multigrid idea to unstructured problems gives algebraic multigrid (AMG).
We use geometric multigrid to present the basic ideas of the multigrid method.

32 The model problem
(Figure: a ladder circuit with node voltages v1 ... v8 and source vs.) The circuit leads to a linear system Ax = b.

33 Simple iterative methods
x(0) -> x(1) -> ... -> x(k)
Jacobi iteration, matrix form: x(k) = R_J x(k-1) + C_J
General form: x(k) = R x(k-1) + C   (1)
Stationary: x* = R x* + C           (2)
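A small sketch of the Jacobi iteration in this matrix form: split A with D = diag(A), so that R_J = -D^{-1}(A - D) and C_J = D^{-1} b. The function and names are illustrative, not the lecture's code.

```python
import numpy as np

def jacobi(A, b, x0, num_iter=100):
    D = np.diag(A)
    R = -(A - np.diag(D)) / D[:, None]   # R_J = -D^{-1} (off-diagonal part of A)
    C = b / D                            # C_J = D^{-1} b
    x = x0.copy()
    for _ in range(num_iter):
        x = R @ x + C                    # x(k) = R_J x(k-1) + C_J
    return x
```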

34 Error and Convergence
Definitions: error e = x* - x   (3); residual r = b - Ax   (4).
Relation between e and r: Ae = r   (5)   (from (3) and (4)).
e(1) = x* - x(1) = (R x* + C) - (R x(0) + C) = R e(0)
Error equation: e(k) = R^k e(0)   (6)   (from (1), (2), (3)).
Convergence: the iteration converges iff the spectral radius of R is less than 1.

35 Error of different frequencies
Wavenumber k and frequency theta = k*pi/n.
High-frequency error is more oscillatory between grid points.
(Figure: error modes for k = 1, k = 2, and k = 4.)

36 Smoothing reduces high-frequency error efficiently
Smoothing iterations reduce high-frequency error efficiently, but not low-frequency error.
(Figure: error versus iterations for k = 1, 2, 4.)

37 Multigrid - a first glance
Two levels: a coarse grid and a fine grid.
Coarse grid (spacing 2h, points 1..4): A_2h x_2h = b_2h.
Fine grid (spacing h, points 1..8): A_h x_h = b_h, i.e. the original Ax = b.

38 Idea 1: the V-cycle iteration
Also called nested iteration.
Start on the coarse grid (2h): iterate on A_2h x_2h = b_2h.
Prolongation maps coarse-grid data to the fine grid (h); restriction maps fine-grid data back to the coarse grid.
Iterate on the fine grid to get A_h x_h = b_h.
Question 1: why do we need the coarse grid?

39 Prolongation
Prolongation (interpolation) operator: x_h = I_{2h}^h x_2h, mapping coarse-grid values (points 1..4) to fine-grid values (points 1..8), e.g. by linear interpolation.

40 Restriction
Restriction operator: x_2h = I_h^{2h} x_h, mapping fine-grid values (points 1..8) down to coarse-grid values (points 1..4), e.g. by injection or full weighting.
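A hedged sketch of 1D prolongation (linear interpolation) and restriction (full weighting) between a coarse grid of m interior points and a fine grid of 2m+1 interior points with zero Dirichlet boundaries. These are the standard textbook stencils, not necessarily the exact operators on these slides.

```python
import numpy as np

def prolong(x2h):
    """Linear interpolation: coarse values copied to coincident fine points,
    in-between fine points averaged from their two coarse neighbors."""
    m = len(x2h)
    xh = np.zeros(2 * m + 1)
    xh[1:-1:2] = x2h                          # coincident points
    xh[2:-1:2] = 0.5 * (x2h[:-1] + x2h[1:])   # interior midpoints
    xh[0] = 0.5 * x2h[0]                      # midpoints next to the zero boundary
    xh[-1] = 0.5 * x2h[-1]
    return xh

def restrict(xh):
    """Full weighting: weighted average (1/4, 1/2, 1/4) around each coarse point."""
    return 0.25 * xh[:-2:2] + 0.5 * xh[1:-1:2] + 0.25 * xh[2::2]
```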

41 Smoothing
The basic iterations at each level: on grid h, x_h(old) -> x_h(new).
The iteration reduces the error and makes the error geometrically smooth, so the iteration is called smoothing.

42 Why multilevel?
Coarse-level iteration is cheap. More than this:
coarse-level smoothing reduces the error more efficiently than fine-level smoothing, in a certain sense. Why? (Question 2)

43 Error restriction
Mapping the error to the coarse grid makes the error more oscillatory: a mode with k = 4 and theta = pi/2 on the fine grid becomes k = 4 with theta = pi on the coarse grid.

44 Idea 2: Residual correction
Given the current solution x, solving Ax = b is equivalent to solving the residual equation Ae = r.
MG does NOT map x directly between levels; it maps the residual equation to the coarse level:
calculate r_h; b_2h = I_h^{2h} r_h (restriction); solve on the coarse grid; e_h = I_{2h}^h x_2h (prolongation); correct x_h = x_h + e_h.

45 Why residual correction?
The error is smooth at the fine level, but the actual solution may not be.
Prolongation of the coarse-grid correction produces a smooth function on the fine level, which is supposed to be a good approximation of the fine-level error.
If the solution is not smooth on the fine level, prolongating the solution itself would introduce more high-frequency error.

46 Revised V-cycle with idea 2
On the fine grid (h): smooth x_h; calculate r_h; restrict b_2h = I_h^{2h} r_h.
On the coarse grid (2h): smooth x_2h.
Prolongate e_h = I_{2h}^h x_2h and correct: x_h = x_h + e_h.
(See the sketch below.)

47 What is A_2h? The Galerkin condition
Galerkin condition: A_2h = I_h^{2h} A_h I_{2h}^h (restriction times A_h times prolongation); see the sketch below.
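A minimal numpy sketch of the Galerkin coarse-grid operator. P and R below are the standard 1D linear-interpolation and full-weighting matrices, and A_h is a 1D Poisson stand-in; the lecture may use different operators.

```python
import numpy as np

def interpolation_matrix(m):
    """Prolongation P: maps m coarse points to 2m+1 fine points (linear interpolation)."""
    P = np.zeros((2 * m + 1, m))
    for j in range(m):
        P[2 * j, j] = 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] = 0.5
    return P

m = 3
P = interpolation_matrix(m)
R = 0.5 * P.T                                            # full weighting: R = (1/2) P^T in 1D
n = 2 * m + 1
Ah = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Poisson matrix (h = 1)
A2h = R @ Ah @ P                                         # Galerkin coarse operator
print(A2h)
```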

48 Going to multiple levels
V-cycle, W-cycle, and the Full Multigrid V-cycle (FMG).
(Figure: cycling between grid levels h, 2h, ...)

49 Performance of Multigrid
Complexity comparison (model problem):
Gaussian elimination: O(N^2)
Jacobi iteration: O(N^2 log)
Gauss-Seidel: O(N^2 log)
SOR: O(N^{3/2} log)
Conjugate gradient: O(N^{3/2} log)
Multigrid (iterative): O(N log)
Multigrid (FMG): O(N)

50 Summary of MG ideas
Important ideas of MG:
Hierarchical iteration
Residual correction
Galerkin condition
Smoothing the error: high frequency on the fine grid, low frequency on the coarse grid

51 AMG: for unstructured grids
Ax = b with no regular grid structure.
The fine grid is defined from A. (Figure: a small graph with nodes 1-6.)

52 Three questions for AMG
How to choose the coarse grid?
How to define the smoothness of errors?
How are prolongation and restriction done?

53 How to choose the coarse grid
Idea: C/F splitting.
Use as few coarse-grid points as possible.
For each F-node, at least one of its neighbors must be a C-node.
Choose nodes with strong coupling to other nodes as C-nodes.
(Figure: example C/F splitting on a small graph with nodes 1-6.)

54 How to define the smoothness of error
AMG's fundamental concept: smooth error = small residuals, ||r|| << ||e||.

55 How are prolongation and restriction done?
Prolongation is based on smooth error and strong connections.
Common practice: an interpolation operator I (formula given on the slide).

56 AMG Prolongation (2)

57 AMG Prolongation (3)
Restriction: commonly taken as the transpose of the prolongation operator.

58 Summary
Multigrid is a multilevel iterative method.
Advantage: scalable.
If no geometric grid is available, try the algebraic multigrid (AMG) method.

59 The landscape of solvers
(Figure: solver landscape chart. One axis ranges from direct methods (A = LU) to iterative methods (y' = Ay); the other from nonsymmetric matrices to symmetric positive definite matrices. Moving toward direct methods and toward SPD matrices is more robust; moving toward nonsymmetric is more general; iterative methods need less storage if the matrix is sparse.)

60 References
G.H. Golub and C.F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins, 1996.
Y. Saad, Iterative Methods for Sparse Linear Systems, Second Edition, SIAM, 2003.

