Krylov-Subspace Methods - II
Lecture 7
Alessandra Nardi
Thanks to Prof. Jacob White, Deepak Ramaswamy, Michal Rewienski, and Karen Veroy.

Last lectures review
Overview of iterative methods to solve Mx = b
– Stationary
– Non-stationary
QR factorization
– Modified Gram-Schmidt algorithm
– Minimization view of QR
General subspace minimization algorithm
Generalized Conjugate Residual (GCR) algorithm
– Krylov subspace
– Simplification in the symmetric case
– Convergence properties
Eigenvalue and eigenvector review
– Norms and spectral radius
– Spectral Mapping Theorem

Arbitrary Subspace Methods - Residual Minimization
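The equations on this slide did not survive the transcript; as a hedged sketch of the standard arbitrary-subspace formulation (the names x^k, w_i, alpha_i are mine):

x^k = \sum_{i=1}^{k} \alpha_i w_i, \qquad r^k = b - M x^k = b - \sum_{i=1}^{k} \alpha_i M w_i, \qquad \min_{\alpha_1,\dots,\alpha_k} \| r^k \|_2

That is, pick the member of span{w_1, ..., w_k} whose image under M best matches b in the least-squares sense.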

Arbitrary Subspace Methods - Residual Minimization: use Gram-Schmidt on the M w_i's!
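Why orthogonalize the M w_i's? A hedged sketch (calling the orthogonalized directions p_i, my notation): if the vectors M p_i are orthonormal, the least-squares problem decouples:

x^k = \sum_{i=1}^{k} \alpha_i p_i, \qquad \alpha_i = (M p_i)^T b, \qquad \| r^k \|_2^2 = \| b \|_2^2 - \sum_{i=1}^{k} \left( (M p_i)^T b \right)^2

so each new direction adds a single coefficient instead of forcing a re-solve of a k-by-k system.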

Krylov Subspace Methods - The Krylov Subspace (and the k-th order polynomial connection)
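The definition on this slide was lost in the transcript; a hedged reconstruction of the standard one (with x^0 = 0, so r^0 = b):

K_k(M, b) = \mathrm{span}\{\, b,\; M b,\; M^2 b,\; \dots,\; M^{k-1} b \,\}

An iterate drawn from this subspace can be written as x^k = q_{k-1}(M)\, b for some polynomial q_{k-1} of order k-1; this is the k-th order polynomial connection made precise in the basic properties below.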

Krylov Subspace Methods - Subspace Generation: the set of residuals can also be used as a representation of the Krylov subspace (this is what the Generalized Conjugate Residual algorithm does). Nice, because the residuals generate the next search directions.

Krylov-Subspace Methods - Generalized Conjugate Residual Method (k-th step)
– Determine the optimal step size in the k-th search direction
– Update the solution (trying to minimize the residual) and the residual
– Compute the new orthogonalized search direction (by using the most recent residual)
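A minimal Python sketch of these three steps, assuming a dense NumPy matrix and x^0 = 0; this is an illustrative reconstruction, not the original course code (the names gcr, tol, and max_iter are mine):

import numpy as np

def gcr(M, b, tol=1e-10, max_iter=None):
    # Generalized Conjugate Residual: minimize ||b - M x|| over the growing Krylov subspace.
    n = b.size
    if max_iter is None:
        max_iter = n
    x = np.zeros(n)
    r = b.copy()                      # initial residual r^0 = b - M x^0 with x^0 = 0
    P, MP = [], []                    # search directions p_j and their images M p_j
    for k in range(max_iter):
        p = r.copy()                  # new direction starts from the most recent residual
        Mp = M @ p
        for pj, Mpj in zip(P, MP):    # Gram-Schmidt on the M p_j's
            beta = Mp @ Mpj
            p -= beta * pj
            Mp -= beta * Mpj
        nrm = np.linalg.norm(Mp)
        p /= nrm                      # normalize so that ||M p_k|| = 1
        Mp /= nrm
        alpha = r @ Mp                # optimal step size in the k-th search direction
        x += alpha * p                # update the solution (minimizing the residual) ...
        r -= alpha * Mp               # ... and the residual
        P.append(p)
        MP.append(Mp)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x

For example, gcr(np.array([[4.0, 1.0], [1.0, 3.0]]), np.array([1.0, 2.0])) matches np.linalg.solve on this small symmetric positive definite system after at most two steps.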

Krylov-Subspace Methods - Generalized Conjugate Residual Method (computational complexity of the k-th step)
– Vector inner products: O(n)
– Matrix-vector product: O(n) if M is sparse
– Vector adds: O(n)
– O(k) inner products, so the total cost of the k-th step is O(nk)
If M is sparse, the per-step cost grows with k, and the total over k iterations is O(k^2 n) (see the summary below); as k, the number of iterations, approaches n, the advantage of sparsity is lost. Better converge fast!

Summary
What an iterative non-stationary method is: x^(k+1) = x^(k) + a_k p_k
How to calculate:
– the search directions p_k
– the steps a_k along the search directions
Krylov subspace → GCR
GCR is O(k^2 n)
– Better converge fast!
→ Now look at the convergence properties of GCR.

Krylov Methods Convergence Analysis - Basic Properties
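The properties themselves were equations that did not survive the transcript; a hedged reconstruction of the standard ones (with x^0 = 0, so r^0 = b):

r^k = b - M x^k = \big( I - M\, q_{k-1}(M) \big)\, r^0 = p_k(M)\, r^0,
\qquad p_k(\lambda) = 1 - \lambda\, q_{k-1}(\lambda),

so the GCR residual after k steps is a k-th order polynomial in M applied to r^0, with the constraint p_k(0) = 1.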

Krylov Methods Convergence Analysis - Optimality of the GCR Polynomial
GCR optimality property: GCR picks the residual polynomial that minimizes the residual norm. Therefore any polynomial which satisfies the constraints can be used to get an upper bound on the GCR residual.
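A hedged reconstruction of the optimality statement and of the bound it yields:

\| r^k_{GCR} \|_2 = \min_{p_k:\; p_k(0) = 1} \| p_k(M)\, r^0 \|_2
\;\le\; \| q_k(M)\, r^0 \|_2 \;\le\; \| q_k(M) \|_2\, \| r^0 \|_2
\quad \text{for any } k\text{-th order polynomial } q_k \text{ with } q_k(0) = 1.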

Eigenvalues and Eigenvectors Review - Induced Norms
Theorem: any induced norm is a bound on the spectral radius.
Proof:
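The proof on the slide was an equation chain that did not survive; a hedged reconstruction of the standard argument:

M u = \lambda u,\ u \ne 0 \;\Rightarrow\; |\lambda|\,\| u \| = \| M u \| \le \| M \|\,\| u \|
\;\Rightarrow\; |\lambda| \le \| M \| \;\Rightarrow\; \rho(M) \le \| M \|.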

Useful Eigenproperties - Spectral Mapping Theorem
Given a polynomial, apply the polynomial to a matrix; then the eigenvalues of the resulting matrix are the polynomial applied to the eigenvalues of the original matrix.
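In symbols (a hedged reconstruction; f denotes the polynomial):

f(x) = a_0 + a_1 x + \dots + a_k x^k, \qquad f(M) = a_0 I + a_1 M + \dots + a_k M^k,
\qquad M u = \lambda u \;\Rightarrow\; f(M)\, u = f(\lambda)\, u,

so the spectrum of f(M) is f applied to the spectrum of M, with the same eigenvectors.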

Krylov Methods Convergence Analysis - Overview
Combining the matrix norm property with the GCR optimality property: any (k+1)-th order polynomial subject to the constraint p(0) = 1 may be used to get an upper bound on the GCR residual.

Krylov Methods Convergence Analysis - Overview
Review of eigenvalues and eigenvectors:
– Induced norms: relate matrix eigenvalues to matrix norms
– Spectral mapping theorem: relates matrix eigenvalues to matrix polynomials
Now we are ready to relate the convergence properties of Krylov subspace methods to the eigenvalues of M.

Krylov Methods Convergence Analysis - Norm of Matrix Polynomials
The bound involves cond(V), the condition number of the eigenvector matrix.
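The slide's derivation was lost; a hedged reconstruction of the standard bound, assuming M is diagonalizable as M = V \Lambda V^{-1}:

\| p(M) \| = \| V\, p(\Lambda)\, V^{-1} \|
\le \| V \|\, \| V^{-1} \|\, \max_i | p(\lambda_i) |
= \mathrm{cond}(V)\, \max_i | p(\lambda_i) |.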

Krylov Methods Convergence Analysis - Important Observations
1) The GCR algorithm converges to the exact solution in at most n steps.
2) If M has only q distinct eigenvalues, the GCR algorithm converges in at most q steps.
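A hedged one-line justification, assuming M is diagonalizable as above: place the polynomial zeros on the eigenvalues,

\hat{p}(x) = \prod_{i=1}^{q} \left( 1 - \frac{x}{\lambda_i} \right), \qquad \hat{p}(0) = 1, \qquad \hat{p}(\lambda_i) = 0 \ \text{for every eigenvalue of } M,

so by the matrix-polynomial bound, \| r^q \| \le \mathrm{cond}(V)\, \max_i |\hat{p}(\lambda_i)|\, \| r^0 \| = 0. Taking all distinct eigenvalues (at most n of them) gives the n-step result as well.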

Krylov Methods Convergence Analysis - Convergence for M^T = M – Residual Polynomial
If M = M^T then:
1) M has orthonormal eigenvectors
2) M has real eigenvalues
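Consequence (a hedged sketch, using the 2-norm): the eigenvector matrix V is orthogonal, so \| V \|_2 = \| V^{-1} \|_2 = 1 and \mathrm{cond}_2(V) = 1, leaving a bound that depends only on the eigenvalues:

\frac{\| r^k \|_2}{\| r^0 \|_2} \le \min_{p_k:\; p_k(0)=1} \ \max_i | p_k(\lambda_i) |.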

Krylov Methods Convergence Analysis - Residual Polynomial Picture (n = 10)
[Figure: residual polynomials plotted over the eigenvalues of M; * = eigenvalues of M, one curve = 5th-order polynomial, one curve = 8th-order polynomial, with the value 1 marked on the axis.]

Krylov Methods Convergence Analysis - Residual Polynomial Picture (n = 10)
Strategically place the zeros of the polynomial.

Krylov Methods Convergence Analysis - Convergence for M^T = M – Polynomial Min-Max Problem
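The statement of the min-max problem was lost in the transcript; a hedged reconstruction (relaxing the discrete set of eigenvalues to the containing interval):

\frac{\| r^k \|_2}{\| r^0 \|_2}
\le \min_{p_k:\; p_k(0)=1} \ \max_{\lambda \in [\lambda_{\min},\, \lambda_{\max}]} | p_k(\lambda) |.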

Krylov Methods Convergence Analysis - Convergence for M^T = M – Chebyshev Solves the Min-Max Problem
The minimizing polynomial is the (shifted and scaled) Chebyshev polynomial.
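A hedged reconstruction of the solution, written in terms of the Chebyshev polynomial T_k and assuming 0 < \lambda_{\min} \le \lambda_{\max}:

p_k(\lambda) = \frac{ T_k\!\left( \dfrac{ \lambda_{\max} + \lambda_{\min} - 2\lambda }{ \lambda_{\max} - \lambda_{\min} } \right) }
                    { T_k\!\left( \dfrac{ \lambda_{\max} + \lambda_{\min} }{ \lambda_{\max} - \lambda_{\min} } \right) },
\qquad T_k(\cos\theta) = \cos(k\theta).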

Chebyshev polynomials minimizing over [1, 10]
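A small Python sketch (my own illustration, not from the slides; the names scaled_chebyshev, lam_min, and lam_max are mine) that evaluates the scaled Chebyshev residual polynomial on [1, 10] and checks that its maximum over the interval shrinks as the order k grows:

import numpy as np

def scaled_chebyshev(lam, k, lam_min=1.0, lam_max=10.0):
    # Shifted/scaled Chebyshev residual polynomial: equals 1 at lambda = 0
    # and is min-max optimal on [lam_min, lam_max].
    t = (lam_max + lam_min - 2.0 * lam) / (lam_max - lam_min)
    t0 = (lam_max + lam_min) / (lam_max - lam_min)
    c = np.zeros(k + 1)
    c[k] = 1.0                      # coefficient vector selecting T_k
    cheb = np.polynomial.chebyshev.chebval
    return cheb(t, c) / cheb(t0, c)

lams = np.linspace(1.0, 10.0, 1001)
for k in (2, 5, 8):
    print(k, np.max(np.abs(scaled_chebyshev(lams, k))))
# The printed maxima decrease with k, matching the picture of higher-order
# residual polynomials hugging zero over [1, 10].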

Krylov Methods Convergence Analysis - Convergence for M^T = M – Chebyshev Bounds

Krylov Methods Convergence Analysis - Convergence for M^T = M – Chebyshev Result
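The result itself was an equation and did not survive; a hedged reconstruction of the standard Chebyshev convergence bound for symmetric M with positive eigenvalues, writing \kappa = \lambda_{\max} / \lambda_{\min}:

\frac{\| r^k \|_2}{\| r^0 \|_2}
\le \frac{1}{ T_k\!\left( \dfrac{\lambda_{\max} + \lambda_{\min}}{\lambda_{\max} - \lambda_{\min}} \right) }
\le 2 \left( \frac{ \sqrt{\kappa} - 1 }{ \sqrt{\kappa} + 1 } \right)^{\!k}.

So convergence is governed by the ratio between the largest and smallest eigenvalues, as anticipated in the CG summary below.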

Krylov Methods Convergence Analysis - Examples
For which problem will GCR converge faster?

Which convergence curve is GCR?
[Figure: convergence curves plotted against iteration count.]

Krylov Methods Convergence Analysis - Chebyshev Is a Bound
The GCR algorithm can eliminate outlying eigenvalues by placing polynomial zeros directly on them.

Iterative Methods - CG
Convergence is related to:
– the number of distinct eigenvalues
– the ratio between the max and min eigenvalues
Why? How? Now we know.

Summary
Reminder about GCR:
– Residual-minimizing solution
– Krylov subspace
– Polynomial connection
Review of eigenvalues:
– Induced norms bound the spectral radius
– Spectral mapping theorem
Estimating the convergence rate:
– Chebyshev polynomials