Krylov-Subspace Methods - I


Krylov-Subspace Methods - I
Lecture 6
Alessandra Nardi
Thanks to Prof. Jacob White, Deepak Ramaswamy, Michal Rewienski, and Karen Veroy.

Last lecture review
- Iterative Methods Overview
  - Stationary
  - Non-Stationary
- QR factorization to solve Mx=b
  - Modified Gram-Schmidt Algorithm
  - QR Pivoting
  - Minimization View of QR
    - Basic minimization approach
    - Orthogonalized search directions
    - Pointer to Krylov Subspace Methods

Last lecture reminder: QR Factorization – by picture.

QR Factorization – Minimization View
Minimization Algorithm:
For i = 1 to N                       "For each Target Column"
    For j = 1 to i-1                 "For each Source Column left of target"
        Orthogonalize Search Direction
    end
    Normalize
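As a concrete illustration of this loop, here is a minimal modified Gram-Schmidt QR sketch in Python/NumPy; the function name mgs_qr and the library choice are assumptions, not part of the original slides:

import numpy as np

def mgs_qr(A):
    # Modified Gram-Schmidt QR: A (n x m, full column rank) -> Q, R with A = Q R
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    Q = A.copy()
    R = np.zeros((m, m))
    for i in range(m):                      # for each target column
        for j in range(i):                  # for each source column left of the target
            R[j, i] = Q[:, j] @ Q[:, i]     # orthogonalize against the finished column q_j
            Q[:, i] -= R[j, i] * Q[:, j]
        R[i, i] = np.linalg.norm(Q[:, i])   # normalize
        Q[:, i] /= R[i, i]
    return Q, R

For example, Q, R = mgs_qr(np.random.rand(5, 3)) returns Q with orthonormal columns and A = Q @ R up to rounding.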

Iterative Methods
Solve Mx=b by minimizing the residual r = b - Mx.
- Stationary: x(k+1) = G x(k) + c
  - Jacobi
  - Gauss-Seidel
  - Successive Overrelaxation (SOR)
- Non-Stationary: x(k+1) = x(k) + ak pk
  - CG (Conjugate Gradient) – requires M symmetric and positive definite
  - GCR (Generalized Conjugate Residual)
  - GMRES, etc.
A stationary-iteration sketch (Jacobi) is given below.
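A minimal sketch of a stationary method of the form x(k+1) = G x(k) + c, using the Jacobi splitting M = D + R; the function name jacobi, the fixed iteration count, and the NumPy usage are assumptions:

import numpy as np

def jacobi(M, b, x0=None, iters=100):
    # Jacobi iteration: x(k+1) = D^{-1} (b - R x(k)), i.e. G = -D^{-1} R, c = D^{-1} b
    M = np.asarray(M, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.diag(M)                    # diagonal part D (stored as a vector)
    R = M - np.diag(d)                # off-diagonal part
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / d           # one stationary update
    return x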

Iterative Methods – CG
Convergence is related to:
- the number of distinct eigenvalues
- the ratio between the largest and smallest eigenvalues
Why? How?

Outline
- General Subspace Minimization Algorithm
  - Review orthogonalization and projection formulas
- Generalized Conjugate Residual Algorithm
  - Krylov-subspace
  - Simplification in the symmetric case
  - Convergence properties
- Eigenvalue and Eigenvector Review
  - Norms and Spectral Radius
  - Spectral Mapping Theorem

Arbitrary Subspace Methods – Residual Minimization
Pick the approximate solution from the subspace spanned by the search directions {w0,…,wk}: x(k) = a0 w0 + … + ak wk, and choose the coefficients to minimize the residual norm ||r(k)||2 = ||b - M x(k)||2.

Arbitrary Subspace Methods – Residual Minimization
Minimizing ||b - M(a0 w0 + … + ak wk)||2 couples all the coefficients through the cross terms (M wi)T(M wj). If the M wi's are made orthogonal, the minimization decouples and each ai can be computed independently. Use Gram-Schmidt on the M wi's!

Arbitrary Subspace Methods – Orthogonalization
Generate new directions pi from the wi's so that (M pi)T(M pj) = 0 for i != j, i.e., the pi's are MTM-orthogonal.

Arbitrary Subspace Solution Algorithm
1. Given M, b, and a set of search directions {w0,…,wk}.
2. Make the wi's MTM-orthogonal (so the M pi's are mutually orthogonal) and get new search directions {p0,…,pk}.
3. Minimize the residual over span{p0,…,pk}; with the M pi's orthonormal the optimal coefficients decouple: ai = (M pi)T r, where r is the current residual.

Arbitrary Subspace Solution Algorithm
For i = 0 to k
    For j = 0 to i-1
        Orthogonalize Search Direction    (remove the M pj component from M wi)
    end
    Normalize                             (scale pi so that ||M pi||2 = 1)
    Update Solution                       (x = x + ai pi,  r = r - ai M pi)
A minimal Python sketch of this loop is given below.
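This sketch assumes M and b are NumPy arrays and the directions {w0,…,wk} are the columns of a matrix W; the function name arbitrary_subspace_solve is an assumption, not from the slides:

import numpy as np

def arbitrary_subspace_solve(M, b, W):
    # Minimize ||b - Mx||2 over span of the columns of W by orthogonalizing the M w_i's
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float)
    P, MP = [], []                           # search directions p_i and their images M p_i
    for i in range(W.shape[1]):
        p, Mp = W[:, i].astype(float), M @ W[:, i]
        for pj, Mpj in zip(P, MP):           # orthogonalize M p against earlier M p_j's
            beta = Mpj @ Mp
            p, Mp = p - beta * pj, Mp - beta * Mpj
        nrm = np.linalg.norm(Mp)             # normalize so ||M p_i||2 = 1
        p, Mp = p / nrm, Mp / nrm
        alpha = Mp @ r                       # optimal coefficient along p_i
        x, r = x + alpha * p, r - alpha * Mp
        P.append(p); MP.append(Mp)
    return x, r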

Krylov Subspace
How should the initial set of search directions {w0,…,wk} be chosen? A particular choice that is commonly used is {w0,…,wk} = {b, Mb, M^2 b, …, M^k b}.
Km(A,v) = span{v, Av, A^2 v, …, A^(m-1) v} is called the Krylov Subspace.
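A small sketch of forming the (unnormalized) Krylov vectors {b, Mb, M^2 b, …}; in practice they are orthogonalized as in the algorithm above rather than used directly, and the function name krylov_basis is an assumption:

import numpy as np

def krylov_basis(M, b, k):
    # Return [b, Mb, M^2 b, ..., M^(k-1) b] as the columns of a matrix
    V = [np.asarray(b, dtype=float)]
    for _ in range(k - 1):
        V.append(M @ V[-1])          # next power of M applied to b
    return np.column_stack(V)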

Krylov Subspace Methods
If x(k) lies in the Krylov subspace span{b, Mb, …, M^k b}, then x(k) = f(M) b for some kth order polynomial f, and the residual r(k) = b - M f(M) b is a polynomial in M of order k+1 applied to b.

Krylov Subspace Methods – Subspace Generation
The set of residuals can also be used as a representation of the Krylov subspace; this is the Generalized Conjugate Residual Algorithm. It is convenient because the residuals generate the next search directions.

Krylov-Subspace Methods – Generalized Conjugate Residual Method (k-th step)
1. Determine the optimal stepsize in the kth search direction.
2. Update the solution (trying to minimize the residual) and the residual.
3. Compute the new orthogonalized search direction (using the most recent residual).
A sketch of these three steps appears below.
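A minimal sketch of these three steps in Python/NumPy; the function name gcr, the zero initial guess, and the absence of restarting or breakdown checks are assumptions:

import numpy as np

def gcr(M, b, iters):
    # Generalized Conjugate Residual: minimize ||b - Mx||2 over the Krylov subspace
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float)
    P, MP = [], []                               # search directions and their images under M
    p, Mp = r.copy(), M @ r                      # first direction is the residual
    for k in range(iters):
        nrm = np.linalg.norm(Mp)
        p, Mp = p / nrm, Mp / nrm                # normalize so ||M p_k||2 = 1
        alpha = Mp @ r                           # 1. optimal stepsize in the k-th direction
        x, r = x + alpha * p, r - alpha * Mp     # 2. update solution and residual
        P.append(p); MP.append(Mp)
        p, Mp = r.copy(), M @ r                  # 3. new direction from the latest residual...
        for pj, Mpj in zip(P, MP):               #    ...orthogonalized against earlier M p_j's
            beta = Mpj @ Mp
            p, Mp = p - beta * pj, Mp - beta * Mpj
    return x, r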

Krylov-Subspace Methods – Generalized Conjugate Residual Method (Computational Complexity of the k-th step)
- Vector inner products: O(n)
- Matrix-vector product: O(n) if M is sparse
- Vector adds: O(n)
- Orthogonalization against the k previous directions: O(k) inner products, so the k-th step costs O(nk)
If M is sparse, then as k (the number of iterations) approaches n, the total cost over all iterations approaches O(n^3). Better converge fast!

Krylov-Subspace Methods – Generalized Conjugate Residual Method (Symmetric Case: Conjugate Gradient Method)
An amazing fact that will not be derived here: when M is symmetric, the new search direction only needs to be orthogonalized against the most recent one, so orthogonalization is done in one step.
If k (the number of iterations) approaches n, then for symmetric, sparse M, GCR costs O(n^2). Better converge fast!
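The symmetric simplification itself is not derived on the slides; for reference, here is a minimal standard conjugate gradient sketch for symmetric positive definite M (note that classical CG minimizes the M-norm of the error rather than the residual 2-norm; the function name cg is an assumption):

import numpy as np

def cg(M, b, iters):
    # Conjugate Gradient for symmetric positive definite M: short recurrences, O(n) extra work per step
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Mp = M @ p
        alpha = rs / (p @ Mp)                 # optimal step along p
        x, r = x + alpha * p, r - alpha * Mp
        rs_new = r @ r
        p = r + (rs_new / rs) * p             # orthogonalize against the previous direction only
        rs = rs_new
    return x, r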

Summary
- What an iterative non-stationary method is: x(k+1) = x(k) + ak pk
- How to calculate:
  - the search directions (pk)
  - the step along the search directions (ak)
- Krylov Subspace -> GCR
- GCR is O(k^2 n). Better converge fast!
- Now look at the convergence properties of GCR.

Krylov Methods Convergence Analysis – Basic properties
The GCR residual after k steps can be written as r(k) = (I - M fk(M)) b for some kth order polynomial fk; that is, r(k) = p(k+1)(M) b, where p(k+1) is a (k+1)-th order polynomial with p(k+1)(0) = 1.

Krylov Methods Convergence Analysis – Optimality of the GCR polynomial
GCR optimality property (the key property of the algorithm): GCR picks the best (k+1)-th order polynomial, minimizing ||r(k)||2 = ||p(k+1)(M) b||2 subject to p(k+1)(0) = 1.

Krylov Methods Convergence Analysis – Optimality of the GCR polynomial
GCR Optimality Property: therefore, any polynomial that satisfies the constraint (value 1 at zero) can be used to get an upper bound on ||r(k)||2.

Eigenvalues and eigenvectors review – Basic definitions
Eigenvalues and eigenvectors of a matrix M satisfy M ui = li ui, where li is the eigenvalue and ui is the (nonzero) eigenvector.

Eigenvalues and eigenvectors review – A simplifying assumption
Almost all NxN matrices have N linearly independent eigenvectors. The set of all eigenvalues of M is known as the spectrum of M.

Eigenvalues and eigenvectors review – A simplifying assumption (continued)
Almost all NxN matrices have N linearly independent eigenvectors, so M can be decomposed as M = U L U^(-1), where the columns of U are the eigenvectors and L is the diagonal matrix of eigenvalues.

Eigenvalues and eigenvectors review – Spectral radius
The spectral radius of M is the radius of the smallest circle, centered at the origin, that encloses all of M's eigenvalues: rho(M) = max_i |li|.

Eigenvalues and eigenvectors review – Vector norms
- L2 (Euclidean) norm: ||x||2 = (sum_i |xi|^2)^(1/2); unit ball is the unit circle.
- L1 norm: ||x||1 = sum_i |xi|; unit ball is a diamond.
- L-infinity norm: ||x||inf = max_i |xi|; unit ball is the unit square.

Eigenvalues and eigenvectors review – Matrix norms
Vector-induced norm: the induced norm of A is the maximum "magnification" of x by A, ||A|| = max over x != 0 of ||Ax|| / ||x||.
- ||A||1 = max absolute column sum
- ||A||inf = max absolute row sum
- ||A||2 = (largest eigenvalue of ATA)^(1/2)
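A quick numerical check of these vector and matrix norm definitions with NumPy; the particular A and x are arbitrary examples:

import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
x = np.array([1.0, -1.0])

print(np.linalg.norm(x, 2), np.linalg.norm(x, 1), np.linalg.norm(x, np.inf))  # vector norms
print(np.linalg.norm(A, 1))        # max absolute column sum
print(np.linalg.norm(A, np.inf))   # max absolute row sum
print(np.linalg.norm(A, 2))        # sqrt of largest eigenvalue of A^T A (largest singular value)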

Eigenvalues and eigenvectors review – Induced norms
Theorem: Any induced norm is a bound on the spectral radius, i.e., rho(M) <= ||M||.
Proof: if M u = l u with u != 0, then |l| ||u|| = ||M u|| <= ||M|| ||u||, so |l| <= ||M|| for every eigenvalue, and hence rho(M) <= ||M||.

Useful Eigenproperties – Spectral Mapping Theorem
Given a polynomial f(x) = a0 + a1 x + … + ak x^k, apply the polynomial to a matrix: f(M) = a0 I + a1 M + … + ak M^k. Then the eigenvalues of f(M) are f(li), where li are the eigenvalues of M (and the eigenvectors are unchanged).
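A small numerical check of the spectral mapping theorem; the polynomial f(x) = 1 - 2x + x^2 and the 2x2 matrix are arbitrary examples:

import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
f = lambda X: np.eye(2) - 2 * X + X @ X      # f(x) = 1 - 2x + x^2 applied to a matrix

eigs_M = np.linalg.eigvals(M)
print(np.sort(np.linalg.eigvals(f(M))))      # eigenvalues of f(M)...
print(np.sort(1 - 2 * eigs_M + eigs_M**2))   # ...match f applied to each eigenvalue of M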

Krylov Methods Convergence Analysis – Overview
- Matrix norm property: ||p(M) b||2 <= ||p(M)||2 ||b||2.
- GCR optimality property: ||r(k)||2 <= ||p(M) b||2, where p is any (k+1)-th order polynomial subject to p(0) = 1.
Together, these may be used to get an upper bound on ||r(k)||2.

Krylov Methods Convergence Analysis – Overview
Review of eigenvalues and eigenvectors:
- Induced norms: relate matrix eigenvalues to matrix norms.
- Spectral mapping theorem: relates matrix eigenvalues to matrix polynomials.
We are now ready to relate the convergence properties of Krylov subspace methods to the eigenvalues of M.

Summary
- Generalized Conjugate Residual Algorithm
  - Krylov-subspace
  - Simplification in the symmetric case
  - Convergence properties
- Eigenvalue and Eigenvector Review
  - Norms and Spectral Radius
  - Spectral Mapping Theorem