Iterative Methods and QR Factorization
Lecture 5
Alessandra Nardi
Thanks to Prof. Jacob White, Suvranu De, Deepak Ramaswamy, Michal Rewienski, and Karen Veroy

Last lecture review
Solution of systems of linear equations Mx = b
– Gaussian Elimination basics
  – LU factorization (M = LU)
  – Pivoting for accuracy enhancement
  – Error mechanisms: round-off, ill-conditioning, numerical stability
  – Complexity: O(N^3)
– Gaussian Elimination for sparse matrices
  – Improved computational cost: factor in O(N^1.5)
  – Data structures
  – Pivoting for sparsity (Markowitz reordering)
  – Graph-based approach

Solving Linear Systems
Direct methods: find the exact solution in a finite number of steps
– Gaussian Elimination
Iterative methods: produce a sequence of approximate solutions, hopefully converging to the exact solution
– Stationary: Jacobi, Gauss-Seidel, SOR (Successive Overrelaxation)
– Non-stationary: GCR, CG, GMRES, …

Iterative Methods
Iterative methods can be expressed in the general form: x^(k) = F(x^(k-1))
A point s such that F(s) = s is called a fixed point.
Hopefully x^(k) → s (the solution of my problem).
Will it converge? How rapidly?

Iterative Methods
Stationary: x^(k+1) = G x^(k) + c, where G and c do not depend on the iteration count k.
Non-stationary: x^(k+1) = x^(k) + a_k p^(k), where the computation involves information that changes at each iteration.

Iterative – Stationary: Jacobi
In the i-th equation, solve for the value of x_i while assuming the other entries of x remain fixed:

x_i^(k) = (b_i - Σ_{j≠i} M_ij x_j^(k-1)) / M_ii

In matrix terms the method becomes:

x^(k) = D^{-1} (L + U) x^(k-1) + D^{-1} b

where D, -L and -U represent the diagonal, the strictly lower-triangular, and the strictly upper-triangular parts of M (so M = D - L - U).
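A minimal sketch of this iteration in Python/NumPy (the function name, tolerance, and iteration cap are illustrative, not from the slides; convergence is only guaranteed for suitable M, e.g. strictly diagonally dominant):

```python
import numpy as np

def jacobi(M, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi: x_new = D^{-1} ((L + U) x + b), with M = D - L - U."""
    x = np.zeros(len(b)) if x0 is None else x0.astype(float)
    D = np.diag(M)                 # diagonal entries of M
    R = M - np.diagflat(D)         # off-diagonal part, i.e. -(L + U)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D    # each equation solved with others fixed
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new
```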

Iterative – Stationary: Gauss-Seidel
Like Jacobi, but now assume that previously computed results are used as soon as they are available:

x_i^(k) = (b_i - Σ_{j<i} M_ij x_j^(k) - Σ_{j>i} M_ij x_j^(k-1)) / M_ii

In matrix terms the method becomes:

x^(k) = (D - L)^{-1} (U x^(k-1) + b)

where D, -L and -U represent the diagonal, the strictly lower-triangular, and the strictly upper-triangular parts of M.
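The same sweep with in-place updates gives Gauss-Seidel; a sketch under the same illustrative conventions as above:

```python
import numpy as np

def gauss_seidel(M, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel: use updated entries of x as soon as they are available."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds iteration-k values, x[i+1:] still holds k-1
            s = M[i, :i] @ x[:i] + M[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / M[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x
```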

Iterative – Stationary: Successive Overrelaxation (SOR)
Devised by applying extrapolation to Gauss-Seidel in the form of a weighted average between the Gauss-Seidel update and the previous iterate:

x_i^(k) = w x̄_i^(k) + (1 - w) x_i^(k-1)    (x̄^(k): Gauss-Seidel iterate)

In matrix terms the method becomes:

x^(k) = (D - wL)^{-1} (wU + (1 - w)D) x^(k-1) + w (D - wL)^{-1} b

where D, -L and -U represent the diagonal, the strictly lower-triangular, and the strictly upper-triangular parts of M; the weight w is chosen to accelerate convergence.
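SOR is then a one-line change to the Gauss-Seidel sweep: blend the Gauss-Seidel value with the old iterate (the default w = 1.5 below is just a placeholder; w = 1 recovers Gauss-Seidel):

```python
import numpy as np

def sor(M, b, w=1.5, x0=None, tol=1e-10, max_iter=500):
    """SOR: weighted average of the Gauss-Seidel update and the old value."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = M[i, :i] @ x[:i] + M[i, i+1:] @ x[i+1:]
            gs = (b[i] - s) / M[i, i]        # plain Gauss-Seidel value
            x[i] = w * gs + (1 - w) * x_old[i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x
```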

Iterative – Non-Stationary
The iterates x^(k) are updated in each iteration by a multiple a_k of the search direction vector p^(k):

x^(k+1) = x^(k) + a_k p^(k)

Convergence depends on the spectral properties of the matrix M.
Where does all this come from? What are the search directions? How do I choose a_k? We will explore this in detail in the next lectures.

Outline
QR Factorization
– Direct method to solve linear systems
– Problems that generate singular matrices
– Modified Gram-Schmidt algorithm
– QR pivoting: the matrix must be singular; move the zero column to the end
– Minimization view point → link to iterative non-stationary methods (Krylov subspace)

LU Factorization Fails – Singular Example
[Circuit figure: nodes v1–v4 with unit sources]
The resulting nodal matrix is SINGULAR, but a solution exists!

LU Factorization Fails – Singular Example (one step of GE)
The resulting nodal matrix is SINGULAR, but a solution exists!
Solution (from the picture): v4 = -1, v3 = -2, v2 = anything you want (→ ∞ many solutions), v1 = v…

QR Factorization – Singular Example
Recall the weighted-sum-of-columns view of systems of equations: Mx = b means b = x_1 M_1 + … + x_N M_N, where M_i is the i-th column of M.
M is singular, but b is in the span of the columns of M.

QR Factorization – Key Idea
If M has orthogonal columns, then M_i^T M_j = 0 for i ≠ j.
Multiplying the weighted-columns equation b = Σ_j x_j M_j by the i-th column gives M_i^T b = Σ_j x_j M_i^T M_j.
Simplifying using orthogonality: x_i = (M_i^T b) / (M_i^T M_i), so each unknown is found independently.
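A small numeric check of this decoupling (the 2 × 2 matrix and right-hand side are invented for the illustration):

```python
import numpy as np

# Columns [2, -1] and [1, 2] are orthogonal: 2*1 + (-1)*2 = 0.
M = np.array([[2.0, 1.0],
              [-1.0, 2.0]])
b = np.array([3.0, 4.0])

# Each unknown decouples: x_i = (M_i^T b) / (M_i^T M_i)
x = np.array([M[:, i] @ b / (M[:, i] @ M[:, i]) for i in range(2)])

print(x)                        # [0.4, 2.2]
print(np.linalg.solve(M, b))    # same answer from a direct solve
```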

QR Factorization – M Orthonormal
M is orthonormal if M_i^T M_j = 0 for i ≠ j and M_i^T M_i = 1 (equivalently, M^T M = I).
[Picture for the two-dimensional case: non-orthogonal case vs. orthogonal case]

QR Factorization – Key Idea
How do we perform the conversion from general columns to orthogonal columns?

QR Factorization – Projection Formula
To orthogonalize column M_2 against column M_1, subtract its projection onto M_1:

M̂_2 = M_2 - r_12 M_1, with r_12 = (M_1^T M_2) / (M_1^T M_1), chosen so that M_1^T M̂_2 = 0.

QR Factorization – Normalization
The formulas simplify if we normalize: Q_i = M̂_i / ||M̂_i||, so each coefficient becomes a simple inner product, r_ij = Q_i^T M_j.

QR Factorization – 2 × 2 Case
Mx = b: first solve Qy = b (where the columns of Q are the orthonormalized columns of M), then solve Mx = Qy for x.

QR Factorization – 2 × 2 Case
Two-step solve, given M = QR:
1) Solve Qy = b: since Q is orthonormal, y = Q^T b.
2) Solve Rx = y: R is upper triangular, so back substitution gives x.

QR Factorization – General Case
To ensure the third column is orthogonal to the first two, subtract its components along both:

M̂_3 = M_3 - r_13 M_1 - r_23 M_2

In general, if the coefficients are computed against the original (non-orthogonal) columns, one must solve an N × N dense linear system for them.

QR Factorization – General Case
To orthogonalize the N-th vector, subtract its components along all previously orthogonalized columns:

M̂_N = M_N - Σ_{j<N} r_jN Q_j, with r_jN = Q_j^T M_N

QR Factorization – General Case: Modified Gram-Schmidt Algorithm
To ensure the third column is orthogonal, use the previously orthogonalized columns (not the original ones) as sources; then each coefficient is computed independently and no dense solve is needed.

QR Factorization – Modified Gram-Schmidt Algorithm (source-column oriented approach)
For i = 1 to N  "For each source column"
  Normalize: r_ii = ||M_i||, Q_i = M_i / r_ii
  For j = i+1 to N  "For each target column right of the source"
    r_ij = Q_i^T M_j
    M_j = M_j - r_ij Q_i
  end
end
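A runnable sketch of this source-column oriented MGS (assuming M has full column rank, so no diagonal entry of R vanishes):

```python
import numpy as np

def mgs_qr(M):
    """Modified Gram-Schmidt QR, source-column oriented: M = Q R."""
    A = np.array(M, dtype=float)
    n = A.shape[1]
    Q = np.zeros_like(A)
    R = np.zeros((n, n))
    for i in range(n):                      # for each source column
        R[i, i] = np.linalg.norm(A[:, i])   # normalize the source
        Q[:, i] = A[:, i] / R[i, i]
        for j in range(i + 1, n):           # each target column to its right
            R[i, j] = Q[:, i] @ A[:, j]
            A[:, j] -= R[i, j] * Q[:, i]    # remove Q_i component immediately
    return Q, R
```

With `Q, R = mgs_qr(M)`, `np.allclose(Q @ R, M)` should hold; updating the remaining columns immediately is what distinguishes modified from classical Gram-Schmidt numerically.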

QR Factorization – By picture

QR Factorization – Matrix-Vector Product View
Suppose only matrix-vector products were available? Then it is more convenient to use another approach, orthogonalizing each new column as it arrives.

QR Factorization – Modified Gram-Schmidt Algorithm (target-column oriented approach)
For i = 1 to N  "For each target column"
  For j = 1 to i-1  "For each source column left of the target"
    r_ji = Q_j^T M_i
    M_i = M_i - r_ji Q_j
  end
  Normalize: r_ii = ||M_i||, Q_i = M_i / r_ii
end
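The target-column variant touches one column of M at a time, which is why it suits the matrix-vector product setting above (same assumptions as the previous sketch):

```python
import numpy as np

def mgs_qr_target(M):
    """Modified Gram-Schmidt QR, target-column oriented: M = Q R."""
    A = np.array(M, dtype=float)
    n = A.shape[1]
    Q = np.zeros_like(A)
    R = np.zeros((n, n))
    for i in range(n):                      # for each target column
        for j in range(i):                  # each source column to its left
            R[j, i] = Q[:, j] @ A[:, i]
            A[:, i] -= R[j, i] * Q[:, j]
        R[i, i] = np.linalg.norm(A[:, i])   # normalize the finished target
        Q[:, i] = A[:, i] / R[i, i]
    return Q, R
```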

QR Factorization
Both orderings compute the same R; they just fill in its entries in a different order:
– Target-oriented, column by column: r_11; r_12, r_22; r_13, r_23, r_33; r_14, r_24, r_34, r_44
– Source-oriented, row by row: r_11, r_12, r_13, r_14; r_22, r_23, r_24; r_33, r_34; r_44

QR Factorization – Zero Column
What if a column becomes zero? The matrix MUST BE singular!
1) Do not try to normalize the column.
2) Do not use the column as a source for orthogonalization.
3) Perform backward substitution as well as possible.

QR Factorization – Zero Column
Resulting QR factorization: [figure of the factored Q and R; the zeroed column appears as a zero on the diagonal of R]

QR Factorization – Zero Column
Recall the weighted-sum-of-columns view of systems of equations: M is singular, but b is in the span of the columns of M, so a solution can still be recovered by back substitution.

QR Factorization – Reasons for QR
QR factorization to solve Mx = b:
– Mx = b → QRx = b → Rx = Q^T b, where Q is orthogonal and R is upper triangular
– O(N^3), the same order as Gaussian Elimination
– Nice for singular matrices
– Least-squares problems Mx = b, where M is m × n with m > n
Pointer to Krylov-subspace methods through the minimization point of view.
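A sketch of the two-step solve, using NumPy's built-in QR and writing out the back substitution for clarity (a library triangular solver would do equally well):

```python
import numpy as np

def solve_via_qr(M, b):
    """Solve Mx = b as Rx = Q^T b, with R upper triangular."""
    Q, R = np.linalg.qr(M)          # mgs_qr(M) from above would also work
    y = Q.T @ b
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):  # back substitution, bottom row up
        x[i] = (y[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x
```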

QR Factorization – Minimization View
Minimization is more general! Instead of solving Mx = b directly, minimize the residual norm ||b - Mx||; for nonsingular M the minimizer is the solution, and the same view also covers singular and rectangular systems.

QR Factorization – Minimization View: One-Dimensional Minimization
One-dimensional minimization: minimizing ||b - y M_1||^2 over the scalar y gives y = (M_1^T b) / (M_1^T M_1).
Normalization: if the column has unit length, this reduces to y = M_1^T b.

QR Factorization – Minimization View: One-Dimensional Minimization (Picture)
One-dimensional minimization yields the same result as projection onto the column!

QR Factorization – Minimization View: Two-Dimensional Minimization
Residual minimization: minimize ||b - x_1 M_1 - x_2 M_2||^2 over x_1 and x_2.
Expanding the square produces a coupling term 2 x_1 x_2 M_1^T M_2, so the two one-dimensional minimizations do not decouple unless M_1^T M_2 = 0.

QR Factorization – Minimization View: Two-Dimensional Minimization
To eliminate the coupling term in the residual minimization, we change the search directions!

QR Factorization – Minimization View: Two-Dimensional Minimization
More general search directions: write x = y_1 p^(1) + y_2 p^(2) and minimize ||b - y_1 M p^(1) - y_2 M p^(2)||^2.
The coupling term is now 2 y_1 y_2 (M p^(1))^T (M p^(2)) = 2 y_1 y_2 p^(1)T M^T M p^(2).

QR Factorization – Minimization View: Two-Dimensional Minimization
Goal: find a set of search directions such that p^(i)T M^T M p^(j) = 0 for i ≠ j.
In this case the minimization decouples! Such p^(i) and p^(j) are called M^T M-orthogonal.

QR Factorization – Minimization View: Forming M^T M-Orthogonal Minimization Directions
The i-th search direction equals the i-th unit vector, M^T M-orthogonalized against the previously orthogonalized search directions:

p^(i) = e_i - Σ_{j<i} (p^(j)T M^T M e_i) p^(j)   (assuming the p^(j) are M^T M-normalized)

QR Factorization – Minimization View: Minimizing in the Search Direction
When the search directions p^(j) are M^T M-orthogonal, the residual minimization decouples into independent one-dimensional problems, with step lengths

a_j = (M p^(j))^T b / (M p^(j))^T (M p^(j))

QR Factorization – Minimization View: Minimization Algorithm
For i = 1 to N  "For each target column"
  For j = 1 to i-1  "For each source column left of the target"
    orthogonalize the search direction: remove the M^T M component along p^(j) from p^(i)
  end
  Normalize
  Minimize the residual along p^(i) and update x
end
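A minimal sketch of this algorithm, assuming M is nonsingular and that the unit vectors serve as starting directions (as the slide on forming the directions suggests); note that each direction only needs products with M, which is the bridge to the Krylov-subspace methods of the next lectures:

```python
import numpy as np

def solve_by_minimization(M, b):
    """Minimize ||b - Mx|| along M^T M-orthonormal search directions."""
    n = len(b)
    x = np.zeros(n)
    P, W = [], []                      # directions p_j and their images M p_j
    for i in range(n):
        p = np.eye(n)[:, i]            # target: the i-th unit vector
        w = M @ p
        for pj, wj in zip(P, W):       # make (M p)^T (M p_j) vanish
            beta = wj @ w              # ||M p_j|| = 1 after normalization
            p -= beta * pj
            w -= beta * wj             # preserves the invariant w = M p
        nrm = np.linalg.norm(w)
        p, w = p / nrm, w / nrm        # normalize so ||M p|| = 1
        x += (w @ b) * p               # decoupled 1-D residual minimization
        P.append(p); W.append(w)
    return x
```

Since w is orthogonal to all earlier M p_j, the step length (M p)^T r equals (M p)^T b, which is what the update uses.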

Intuitive summary
QR factorization ↔ minimization view (direct ↔ iterative).
Compose the vector x along search directions:
– Direct: composition along the Q_i (orthonormalized columns of M) → need to factorize M
– Iterative: composition along certain search directions → you can stop halfway
About the search directions:
– Chosen so that it is easy to do the minimization (decoupling) → the p_j are M^T M-orthogonal
– Each step: try to minimize the residual

Compare Minimization and QR
[Figure: side-by-side comparison: QR orthonormalizes the columns of M, while the minimization view makes the search directions M^T M-orthonormal]

Summary
Iterative methods overview
– Stationary
– Non-stationary
QR factorization to solve Mx = b
– Modified Gram-Schmidt algorithm
– QR pivoting
– Minimization view of QR
  – Basic minimization approach
  – Orthogonalized search directions
  – Pointer to Krylov-subspace methods