Krylov-Subspace Methods - I


1 Krylov-Subspace Methods - I
Lecture 6 – Alessandra Nardi
Thanks to Prof. Jacob White, Deepak Ramaswamy, Michal Rewienski, and Karen Veroy

2 Last lecture review: Iterative Methods Overview
- Stationary
- Non-stationary
- QR factorization to solve Mx = b
- Modified Gram-Schmidt algorithm
- QR pivoting
- Minimization view of QR
- Basic minimization approach
- Orthogonalized search directions
- Pointer to Krylov-subspace methods

3 Last lecture reminder: QR Factorization – by picture
[Figure slide: pictorial view of the QR factorization; no text content survives.]

4 QR Factorization – Minimization View: Minimization Algorithm

For i = 1 to N            "For each Target Column"
    For j = 1 to i        "For each Source Column left of target"
        Orthogonalize Search Direction
    end
    Normalize
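To make the loop concrete, here is a minimal NumPy sketch of QR by modified Gram-Schmidt (this code is my illustration, not from the lecture; it assumes A has full column rank):

import numpy as np

def mgs_qr(A):
    """Factor A = Q R by modified Gram-Schmidt (assumes full column rank)."""
    n = A.shape[1]
    Q = A.astype(float)           # target columns, orthogonalized in place
    R = np.zeros((n, n))
    for i in range(n):            # for each target column
        for j in range(i):        # for each source column left of target
            R[j, i] = Q[:, j] @ Q[:, i]    # projection coefficient onto q_j
            Q[:, i] -= R[j, i] * Q[:, j]   # orthogonalize
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]        # normalize
    return Q, R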

5 Iterative Methods
Solve Mx = b by minimizing the residual r = b - Mx.
- Stationary: x^(k+1) = G x^(k) + c
  - Jacobi
  - Gauss-Seidel
  - Successive Overrelaxation
- Non-stationary: x^(k+1) = x^(k) + α_k p_k
  - CG (Conjugate Gradient) – requires M symmetric and positive definite
  - GCR (Generalized Conjugate Residual)
  - GMRES, etc.
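As a concrete instance of the stationary form x^(k+1) = G x^(k) + c, a minimal Jacobi sketch in NumPy (my illustration, not from the slides; Jacobi needs a nonzero diagonal and converges, e.g., for strictly diagonally dominant M):

import numpy as np

def jacobi(M, b, x0, iters=100):
    """Stationary iteration: x(k+1) = D^(-1) (b - (M - D) x(k))."""
    d = np.diag(M)                 # diagonal of M (must be nonzero)
    R = M - np.diag(d)             # off-diagonal part of M
    x = x0.astype(float)
    for _ in range(iters):
        x = (b - R @ x) / d        # one Jacobi sweep
    return x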

6 Iterative Methods - CG
Convergence is related to:
- the number of distinct eigenvalues
- the ratio between the max and min eigenvalues
Why? How?

7 Outline
- General subspace minimization algorithm
- Review of orthogonalization and projection formulas
- Generalized Conjugate Residual algorithm
- Krylov subspace
- Simplification in the symmetric case
- Convergence properties
- Eigenvalue and eigenvector review
- Norms and spectral radius
- Spectral mapping theorem

8 Arbitrary Subspace Methods: Residual Minimization
[Formula slide: minimize ||r||_2 = ||b - Mx||_2 over x in span{w_0, ..., w_k}.]

9 Arbitrary Subspace Methods: Residual Minimization
Minimizing the residual is easy if the M w_i's are orthogonal – so use Gram-Schmidt on the M w_i's!

10 Arbitrary Subspace Methods: Orthogonalization

11 Arbitrary Subspace Solution Algorithm
Given M, b, and a set of search directions {w_0, ..., w_k}:
1. Make the w_i's M^T M-orthogonal (so that the M p_i's are mutually orthogonal) and get new search directions {p_0, ..., p_k}
2. Minimize the residual ||r||_2 = ||b - M Σ_i α_i p_i||_2 over the coefficients α_i

12 Arbitrary Subspace Solution Algorithm

For i = 0 to k
    For j = 1 to i-1
        Orthogonalize Search Direction
    end
    Normalize
    Update Solution
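A hedged NumPy sketch of this loop (names and structure are my choices): orthonormalize the images M w_i, then read off the residual-minimizing coefficients.

import numpy as np

def subspace_minimize(M, b, W):
    """Minimize ||b - M x||_2 over x in span of the columns of W."""
    P, MP = [], []
    for i in range(W.shape[1]):
        p, Mp = W[:, i].astype(float), M @ W[:, i]
        for pj, Mpj in zip(P, MP):           # orthogonalize against
            beta = Mpj @ Mp                  # previous directions
            p, Mp = p - beta * pj, Mp - beta * Mpj
        nrm = np.linalg.norm(Mp)             # normalize so ||M p_i|| = 1
        P.append(p / nrm); MP.append(Mp / nrm)
    # with the M p_i orthonormal, the optimal coefficients decouple:
    return sum((Mp @ b) * p for p, Mp in zip(P, MP))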

13 Krylov Subspace
How about the initial set of search directions {w_0, ..., w_k}? A particular choice that is commonly used is {w_0, ..., w_k} ≡ {b, Mb, M^2 b, ...}.
K_m(A, v) ≡ span{v, Av, A^2 v, ..., A^(m-1) v} is called a Krylov subspace.

14 Krylov Subspace Methods
Since the kth-step solution lies in the Krylov subspace, it can be written as a kth-order polynomial in M applied to b: x^k = p_k(M) b.

15 Krylov Subspace Methods: Subspace Generation
The set of residuals can also be used as a representation of the Krylov subspace.
Generalized Conjugate Residual Algorithm: nice, because the residuals generate the next search directions.

16 Krylov-Subspace Methods: Generalized Conjugate Residual Method (kth step)
1. Determine the optimal stepsize in the kth search direction
2. Update the solution (trying to minimize the residual) and the residual
3. Compute the new orthogonalized search direction (by using the most recent residual)
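Putting the three steps together, a minimal GCR sketch in NumPy (my illustration of the method described above; the tolerance and names are my choices):

import numpy as np

def gcr(M, b, tol=1e-10, max_iter=None):
    """Generalized Conjugate Residual for Mx = b."""
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float)
    P, MP = [], []
    for k in range(max_iter or b.size):
        p, Mp = r.copy(), M @ r                  # new direction from latest residual
        for pj, Mpj in zip(P, MP):               # orthogonalize M p against
            beta = Mpj @ Mp                      # all previous M p_j
            p, Mp = p - beta * pj, Mp - beta * Mpj
        nrm = np.linalg.norm(Mp)
        p, Mp = p / nrm, Mp / nrm
        alpha = r @ Mp                           # optimal stepsize in direction p
        x, r = x + alpha * p, r - alpha * Mp     # update solution and residual
        if np.linalg.norm(r) < tol:
            break
        P.append(p); MP.append(Mp)
    return x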

17 Krylov-Subspace Methods: Generalized Conjugate Residual Method (computational complexity of the kth step)
- Vector inner products: O(n); matrix-vector product: O(n) if M is sparse; vector adds: O(n)
- Orthogonalization against all k previous directions: O(k) inner products, so the kth step costs O(nk)
- If M is sparse and k (# of iterations) approaches n, the total cost approaches O(n^3) – better converge fast!

18 Krylov-Subspace Methods: Generalized Conjugate Residual Method (symmetric case – Conjugate Gradient Method)
An amazing fact that will not be derived: for symmetric M, orthogonalization is done in one step – only the most recent search direction needs to be kept.
If k (# of iterations) approaches n, then for symmetric, sparse M, GCR is O(n^2) – better converge fast!
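For reference, the textbook conjugate gradient loop the slide points to, as a minimal sketch (mine; assumes M symmetric positive definite). Note that only the most recent direction is kept – the one-step orthogonalization:

import numpy as np

def cg(M, b, tol=1e-10, max_iter=None):
    """Conjugate Gradient for symmetric positive definite M."""
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or b.size):
        Mp = M @ p
        alpha = rs / (p @ Mp)          # optimal step along p
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # one-step orthogonalization
        rs = rs_new
    return x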

19 Summary
- What an iterative non-stationary method is: x^(k+1) = x^(k) + α_k p_k
- How to calculate:
  - the search directions (p_k)
  - the step along the search directions (α_k)
- Krylov subspace → GCR
- GCR is O(k^2 n) – better converge fast!
- Next: the convergence properties of GCR

20 Krylov Methods Convergence Analysis: Basic Properties

21 Krylov Methods Convergence Analysis: Optimality of the GCR polynomial
GCR optimality property (key property of the algorithm): GCR picks the best (k+1)th-order polynomial ρ_(k+1), i.e., the one minimizing ||r^(k+1)|| = ||ρ_(k+1)(M) r^0|| subject to the constraint ρ_(k+1)(0) = 1.

22 Krylov Methods Convergence Analysis: Optimality of the GCR polynomial
GCR optimality property: ||r^(k+1)|| = ||ρ_(k+1)(M) r^0|| ≤ ||p(M) r^0|| for any (k+1)th-order polynomial p with p(0) = 1.
Therefore, any polynomial which satisfies the constraint can be used to get an upper bound on ||r^(k+1)|| / ||r^0||.

23 Eigenvalues and eigenvectors review: Basic definitions
Eigenvalues and eigenvectors of a matrix M satisfy M u_i = λ_i u_i, with λ_i the eigenvalue and u_i ≠ 0 the corresponding eigenvector.
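A short numerical check of the definition (the example matrix is mine):

import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lams, U = np.linalg.eig(M)            # eigenvalues, eigenvectors (as columns)
for lam, u in zip(lams, U.T):
    assert np.allclose(M @ u, lam * u)   # M u = lambda u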

24 Eigenvalues and eigenvectors review: A simplifying assumption
Almost all N×N matrices have N linearly independent eigenvectors.
The set of all eigenvalues of M is known as the spectrum of M.

25 Eigenvalues and eigenvectors review: A simplifying assumption
Almost all N×N matrices have N linearly independent eigenvectors, and such a matrix can be decomposed as M = U Λ U^(-1), with the eigenvectors as the columns of U and the eigenvalues λ_i on the diagonal of Λ.

26 Eigenvalues and eigenvectors review: Spectral radius
The spectral radius ρ(M) of M is the radius of the smallest circle, centered at the origin, that encloses all of M's eigenvalues.

27 Eigenvalues and eigenvectors review: Vector norms
- L2 (Euclidean) norm: ||x||_2 = (Σ_i |x_i|^2)^(1/2); its unit ball is the unit circle
- L1 norm: ||x||_1 = Σ_i |x_i|; its unit ball is the diamond with vertices at ±1 on each axis
- L∞ norm: ||x||_∞ = max_i |x_i|; its unit ball is the unit square

28 Eigenvalues and eigenvectors review: Matrix norms
Vector-induced norm: ||A|| = max_{x≠0} ||Ax|| / ||x||, i.e., the induced norm of A is the maximum "magnification" of x by A.
- ||A||_1 = max abs column sum
- ||A||_∞ = max abs row sum
- ||A||_2 = (largest eigenvalue of A^T A)^(1/2)
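A quick NumPy check of the three formulas against the library's induced norms (the example matrix is mine):

import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
assert np.isclose(np.abs(A).sum(axis=0).max(), np.linalg.norm(A, 1))       # max abs column sum
assert np.isclose(np.abs(A).sum(axis=1).max(), np.linalg.norm(A, np.inf))  # max abs row sum
assert np.isclose(np.sqrt(np.linalg.eigvalsh(A.T @ A).max()),
                  np.linalg.norm(A, 2))                                    # sqrt of largest eig of A^T A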

29 Eigenvalues and eigenvectors review: Induced norms
Theorem: Any induced norm is a bound on the spectral radius: ρ(M) ≤ ||M||.
Proof: Let (λ, u) be an eigenpair of M. Then |λ| ||u|| = ||λ u|| = ||M u|| ≤ ||M|| ||u||, and dividing by ||u|| ≠ 0 gives |λ| ≤ ||M||. Taking the maximum over all eigenvalues yields ρ(M) ≤ ||M||.

30 Useful Eigenproperties: Spectral Mapping Theorem
Given a polynomial f(x) = a_0 + a_1 x + ... + a_p x^p, apply it to a matrix: f(M) = a_0 I + a_1 M + ... + a_p M^p. Then the spectrum maps accordingly: λ(f(M)) = f(λ(M)), i.e., the eigenvalues of f(M) are exactly f(λ_i) for the eigenvalues λ_i of M.
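A numerical sanity check of the theorem (the example polynomial and matrix are mine):

import numpy as np

def f_scalar(lam):
    return 3 * lam**2 - 2 * lam + 5

def f_matrix(X):
    return 3 * X @ X - 2 * X + 5 * np.eye(X.shape[0])

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
# eigenvalues of f(M) are f applied to the eigenvalues of M
assert np.allclose(np.sort(np.linalg.eigvals(f_matrix(M))),
                   np.sort(f_scalar(np.linalg.eigvals(M))))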

31 Krylov Methods Convergence Analysis: Overview
Matrix norm property: ||Ax|| ≤ ||A|| ||x||.
GCR optimality property: ||r^(k+1)|| ≤ ||ρ(M) r^0||, where ρ is any (k+1)th-order polynomial subject to ρ(0) = 1.
Together, ||r^(k+1)|| ≤ ||ρ(M)|| ||r^0|| may be used to get an upper bound on the residual.
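Spelling out the chain, assuming M is diagonalizable as M = U Λ U^(-1) (the simplifying assumption of slide 25); here κ_2(U) = ||U||_2 ||U^(-1)||_2 is the condition number of the eigenvector matrix, equal to 1 when M is normal:

\|r^{k+1}\|_2 = \|\rho_{k+1}(M)\, r^0\|_2
\le \|\rho_{k+1}(M)\|_2 \, \|r^0\|_2
= \|U \rho_{k+1}(\Lambda) U^{-1}\|_2 \, \|r^0\|_2
\le \kappa_2(U)\, \max_i |\rho_{k+1}(\lambda_i)| \, \|r^0\|_2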

32 Krylov Methods Convergence Analysis: Overview
- Review of eigenvalues and eigenvectors
- Induced norms: relate matrix eigenvalues to matrix norms
- Spectral mapping theorem: relates matrix eigenvalues to matrix polynomials
- Now ready to relate the convergence properties of Krylov-subspace methods to the eigenvalues of M

33 Summary
- Generalized Conjugate Residual algorithm
- Krylov subspace
- Simplification in the symmetric case
- Convergence properties
- Eigenvalue and eigenvector review
- Norms and spectral radius
- Spectral mapping theorem

