Solving Linear Systems (Numerical Recipes, Chap 2)

Solving Linear Systems (Numerical Recipes, Chap 2)
Jehee Lee, Seoul National University

A System of Linear Equations
Consider a matrix equation Ax = b.
- There are many different methods for solving this problem.
- We will not discuss each solution method in detail.
- We will discuss which method to choose for a given problem.

What to Consider: Size
- Most PCs are now sold with 1 GB to 4 GB of memory.
- A 1 GB memory can accommodate only a single 10,000 x 10,000 matrix: 10,000 * 10,000 * 8 bytes = 800 MB.
- Computing the inverse of A requires an additional 800 MB.
- In real applications, we have to deal with much bigger matrices.

What to Consider: Sparse vs. Dense
- Many linear systems have a matrix A in which almost all of the elements are zero.
- Sparsity affects both the matrix representation and the solution method.
- A sparse matrix can be represented by linked lists of its nonzero elements.
- There exist special algorithms designed for sparse matrices: matrix multiplication, linear system solvers.

What to Consider: Special Properties
- Symmetric
- Tridiagonal
- Banded
- Positive definite

What to Consider: Singularity (det A = 0)
- Homogeneous systems (Ax = 0): non-zero solutions exist.
- Non-homogeneous systems (Ax = b): either multiple solutions or no solution.

What to Consider: Underdetermined vs. Overdetermined
- Underdetermined: effectively fewer equations than unknowns, i.e. a square singular matrix, or # of rows < # of columns.
- Overdetermined: more equations than unknowns, i.e. # of rows > # of columns.

Solution Methods
- Direct methods terminate within a finite number of steps and end with the exact solution, provided that all arithmetic operations are exact. Ex) Gauss elimination.
- Iterative methods produce a sequence of approximations which hopefully converge to the solution. Commonly used with large sparse systems.

Gauss Elimination
- Reduce the matrix A to a triangular matrix, then back-substitute.
- O(N^3) computation.
- All right-hand sides must be known in advance.
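
A minimal sketch of the algorithm in Python/NumPy (illustrative only; the helper name gauss_solve is ours, and partial pivoting is included for stability even though the slides do not show it):

import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gauss elimination with partial pivoting."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # Forward elimination: reduce A to upper triangular form.
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]  # swap rows k and p
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x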

What do the right-hand sides mean?
For example, in interpolation with a polynomial of degree two, the matrix encodes the interpolation nodes and each right-hand side is one set of values to interpolate, as sketched below.
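
A concrete sketch (the nodes and data values are made up): fitting p(t) = c0 + c1*t + c2*t^2 through three nodes gives a 3x3 Vandermonde system, and each data set sampled at the same nodes is one more right-hand-side column.

import numpy as np

t = np.array([0.0, 1.0, 2.0])            # interpolation nodes
A = np.vander(t, 3, increasing=True)     # rows are [1, t, t^2]
B = np.array([[1.0, 2.0],                # two data sets = two RHS columns
              [3.0, 0.0],
              [7.0, 1.0]])
C = np.linalg.solve(A, B)                # one coefficient column per RHS
# First column gives p(t) = 1 + t + t^2, which interpolates (0,1), (1,3), (2,7).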

LU Decomposition
Decompose a matrix A as a product of lower and upper triangular matrices: A = LU.

LU Decomposition
Forward and backward substitution: with A = LU, solve Ly = b by forward substitution, then Ux = y by backward substitution.

LU Decomposition
- It is straightforward to find the determinant and the inverse of A from its LU decomposition.
- LU decomposition is very efficient with tridiagonal and band-diagonal systems: the decomposition and the forward/backward substitutions take O(k^2 N) operations, where k is the bandwidth.
- A factor-once, solve-many pattern is sketched below.
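
A sketch of reusing one factorization for many right-hand sides, assuming SciPy's lu_factor/lu_solve are available (the matrix and vectors are made up):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
lu, piv = lu_factor(A)                             # O(N^3) factorization, done once
x1 = lu_solve((lu, piv), np.array([10.0, 12.0]))   # O(N^2) per new right-hand side
x2 = lu_solve((lu, piv), np.array([1.0, 0.0]))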

Cholesky Decomposition
- Suppose a matrix A is symmetric and positive definite.
- Then we can find a factorization A = L L^T with L lower triangular.
- Extremely stable numerically. A solve using this factorization is sketched below.
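
A sketch of a Cholesky-based solve (assuming NumPy, plus SciPy for the triangular substitutions; the example system is made up):

import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])                  # symmetric positive definite
b = np.array([6.0, 5.0])
L = np.linalg.cholesky(A)                   # lower triangular, A = L @ L.T
y = solve_triangular(L, b, lower=True)      # forward substitution: L y = b
x = solve_triangular(L.T, y, lower=False)   # backward substitution: L^T x = y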

QR Decomposition
- Decompose A = QR, where R is upper triangular and Q is orthogonal.
- Generally, QR decomposition is slower than LU decomposition.
- But QR decomposition is useful for solving the least squares problem, and for a series of problems where A changes only slightly from one problem to the next, since the factors can be updated cheaply.

Jacobi Iteration
- Decompose a matrix A into three matrices: after scaling by the diagonal D of A, write D^{-1} A = L + I + U, where L is lower triangular with zero diagonal, I is the identity matrix, and U is upper triangular with zero diagonal.
- Fixed-point iteration: x^(k+1) = D^{-1} b - (L + U) x^(k).
- It converges if A is strictly diagonally dominant.
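
A minimal sketch of the iteration in Python/NumPy (our own illustrative code, written in the equivalent form x_new = D^{-1}(b - (A - D)x); no safeguards beyond a tolerance and an iteration cap):

import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=1000):
    D = np.diag(A)                   # diagonal entries of A
    R = A - np.diagflat(D)           # off-diagonal part (L + U)
    x = np.zeros(len(b))
    for _ in range(max_iter):
        x_new = (b - R @ x) / D      # x_new = D^{-1} (b - (L + U) x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x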

Gauss-Seidel Iteration
- Makes use of "up-to-date" information: each component update immediately uses the components already updated in the current sweep.
- It converges if A is strictly diagonally dominant.
- Usually faster than Jacobi iteration, but there exist systems for which Jacobi converges yet Gauss-Seidel does not.
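
The same sketch adapted to Gauss-Seidel (again illustrative code): the inner loop overwrites x in place, so each update sees the freshest values.

import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] is already updated this sweep; x[i+1:] is not yet.
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x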

Sparse Linear Systems

Conjugate Gradient Method
- Can be viewed as function minimization: solving Ax = b for symmetric positive definite A minimizes f(x) = (1/2) x^T A x - b^T x.
- If A is symmetric and positive definite, it converges in at most N steps in exact arithmetic, so it can be regarded as a direct method.
- In practice (with rounding errors, or when stopped early), it is used as an iterative method.
- Well suited for large sparse systems.
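
A sketch of the unpreconditioned iteration for symmetric positive definite A (illustrative code; note that A appears only in matrix-vector products, which is what makes the method attractive for large sparse systems):

import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros(len(b))
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(len(b)):          # at most N steps in exact arithmetic
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # next direction, A-conjugate to the others
        rs = rs_new
    return x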

Singular Value Decomposition
Decompose a matrix A with M rows and N columns (M >= N) as A = U W V^T, where
- U is M x N and column-orthogonal,
- W is N x N and diagonal (its entries are the singular values w_j), and
- V is N x N and orthogonal.

SVD for a Square Matrix
- If A is non-singular, its inverse is A^{-1} = V diag(1/w_j) U^T.
- If A is singular, some of the singular values are zero.

Linear Transform: a nonsingular matrix A.

Null Space and Range
- Consider A as a linear mapping from the vector space of x to the vector space of b.
- The range of A is the subspace of b-vectors that can be "reached" by A.
- The null space of A is the subspace of x-vectors that A maps to zero.
- The rank of A = the dimension of the range of A.
- The nullity of A = the dimension of the null space of A.
- Rank + Nullity = N.

Null Space and Range
A singular matrix A (nullity >= 1).

Null Space and Range
- The columns of U whose corresponding singular values are nonzero are an orthonormal set of basis vectors of the range.
- The columns of V whose corresponding singular values are zero are an orthonormal set of basis vectors of the null space.
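
A sketch of reading these bases off the SVD with NumPy (numpy.linalg.svd returns singular values in descending order; the rank-1 example matrix and the tolerance rule are ours):

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])                       # rank-1 example
U, w, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * w.max()
r = int(np.sum(w > tol))                         # numerical rank
range_basis = U[:, :r]                           # orthonormal basis of the range
null_basis = Vt[r:].T                            # orthonormal basis of the null space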

SVD for Underdetermined Problems
- If A is singular and b is in the range, the linear system has multiple solutions.
- We might want to pick the one with the smallest length |x|.
- It is given by x = V diag(1/w_j) U^T b, where we replace 1/w_j by zero if w_j = 0.

SVD for Overdetermined Problems
- If A is singular and b is not in the range, the linear system has no solution.
- We can get the least squares solution that minimizes the residual |Ax - b|.
- Again x = V diag(1/w_j) U^T b, where we replace 1/w_j by zero if w_j = 0.
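
A sketch of this recipe in NumPy, covering both the underdetermined and the overdetermined case (the helper name svd_solve and the rcond threshold are ours):

import numpy as np

def svd_solve(A, b, rcond=1e-12):
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert nonzero singular values; zero out (near-)zero ones.
    w_inv = np.where(w > rcond * w.max(), 1.0 / w, 0.0)
    # x = V diag(1/w_j) U^T b: the minimum-norm least-squares solution.
    return Vt.T @ (w_inv * (U.T @ b))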

SVD Summary
- SVD is very robust when A is singular or near-singular.
- Underdetermined and overdetermined systems can be handled uniformly.
- But SVD is not an efficient solution method.

(Moore-Penrose) Pseudoinverse
- Overdetermined systems: x = (A^T A)^{-1} A^T b (for full-rank A).
- Underdetermined systems: x = A^T (A A^T)^{-1} b (for full-rank A).

(Moore-Penrose) Pseudoinverse
- Sometimes A^T A is singular or near-singular.
- SVD is a powerful tool, but slow for computing the pseudoinverse.
- QR decomposition is the standard approach in libraries: apply Q^T (a rotation) to b, then back-substitute with R.
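
A sketch of the QR route for a full-rank least squares problem (assuming NumPy plus SciPy's triangular solver; the data are made up):

import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])               # overdetermined, full rank
b = np.array([1.0, 2.0, 2.0])
Q, R = np.linalg.qr(A)                   # reduced QR: Q is 3x2, R is 2x2
x = solve_triangular(R, Q.T @ b)         # back substitution solves R x = Q^T b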

Summary
The structure of linear systems is well understood. If you can formulate your problem as a linear system, you can easily anticipate the performance and stability of the solution.