Singular Value Decomposition and Numerical Rank

The SVD was established for real square matrices in the 1870's by Beltrami and Jordan, for complex square matrices by Autonne, and for general rectangular matrices by Eckart and Young (the Autonne-Eckart-Young theorem).

Theorem: Let $A \in \mathbb{R}^{m \times n}$ [$A \in \mathbb{C}^{m \times n}$] have rank $r$. Then there exist orthogonal [unitary] matrices $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ [$U \in \mathbb{C}^{m \times m}$, $V \in \mathbb{C}^{n \times n}$] such that
$$A = U \Sigma V^T \qquad [\,A = U \Sigma V^H\,],$$
where $\Sigma = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{m \times n}$ and $S = \operatorname{diag}(\sigma_1, \ldots, \sigma_r)$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$.
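A minimal NumPy sketch of the theorem (not part of the original slides): compute the factorization of a small rectangular matrix and confirm that $U$ and $V$ are orthogonal and that $A = U \Sigma V^T$.

```python
# Illustration of the SVD theorem for a small rectangular matrix.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 1.0]])          # a 2x3 matrix of rank 2

U, s, Vt = np.linalg.svd(A)              # full SVD: U is 2x2, Vt is 3x3
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)     # embed the singular values in an m x n Sigma

print("singular values:", s)
print("orthogonality:", np.allclose(U.T @ U, np.eye(2)),
      np.allclose(Vt @ Vt.T, np.eye(3)))
print("reconstruction A = U Sigma V^T:", np.allclose(U @ Sigma @ Vt, A))
```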

Since $A^T A \ge 0$, we have $\Lambda(A^T A) \subset [0, \infty)$. Denoting the eigenvalues of $A^T A$ by $\sigma_1^2, \ldots, \sigma_n^2$, we can arrange that $\sigma_1 \ge \cdots \ge \sigma_r > 0 = \sigma_{r+1} = \cdots = \sigma_n$. Let $\{v_1, \ldots, v_n\}$ be a corresponding set of orthonormal eigenvectors and let $V_1 = [v_1, \ldots, v_r]$, $V_2 = [v_{r+1}, \ldots, v_n]$, $V = [V_1 \; V_2]$. Then if $S = \operatorname{diag}(\sigma_1, \ldots, \sigma_r)$ we have $A^T A V_1 = V_1 S^2$, where $V_1^T V_1 = I_r$, so that $V_1^T A^T A V_1 = S^2$. Also $A^T A V_2 = 0$, so that $V_2^T A^T A V_2 = (A V_2)^T (A V_2) = 0$ and thus $A V_2 = 0$. Let $U_1 = A V_1 S^{-1}$. Then from $V_1^T A^T A V_1 = S^2$ we have $U_1^T U_1 = I_r$. Choose any $U_2$ such that $U = [U_1 \; U_2]$ is orthogonal. Then
$$U^T A V = \begin{bmatrix} U_1^T A V_1 & U_1^T A V_2 \\ U_2^T A V_1 & U_2^T A V_2 \end{bmatrix} = \begin{bmatrix} S & 0 \\ 0 & 0 \end{bmatrix} = \Sigma$$
(using $A V_2 = 0$, $A V_1 = U_1 S$, and $U_2^T U_1 = 0$), and so $A = U \Sigma V^T$ as desired.
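The construction can be followed step by step in NumPy. The sketch below is for illustration only (the helper name `svd_from_ata` is hypothetical, and the next slide explains why forming $A^T A$ is a poor idea in finite precision): it builds $V$ and $S$ from the eigendecomposition of $A^T A$, sets $U_1 = A V_1 S^{-1}$, and completes $U_1$ to an orthogonal $U$.

```python
# Mirrors the proof: SVD built from the eigendecomposition of A^T A (illustration only).
import numpy as np

def svd_from_ata(A, tol=1e-12):
    m, n = A.shape
    lam, V = np.linalg.eigh(A.T @ A)             # eigenvalues in ascending order
    order = np.argsort(lam)[::-1]                # reorder so sigma_1 >= ... >= sigma_n
    lam, V = lam[order], V[:, order]
    sigma = np.sqrt(np.clip(lam, 0.0, None))
    r = int(np.sum(sigma > tol))                 # rank = number of nonzero sigma_i
    S = np.diag(sigma[:r])
    V1 = V[:, :r]
    U1 = A @ V1 @ np.linalg.inv(S)               # U1^T U1 = I_r, as in the proof
    # Complete U1 to an orthogonal U = [U1 U2] by Gram-Schmidt against e_1, ..., e_m.
    U = U1
    for e in np.eye(m):
        w = e - U @ (U.T @ e)
        if np.linalg.norm(w) > 1e-8:
            U = np.column_stack([U, w / np.linalg.norm(w)])
    Sigma = np.zeros((m, n))
    Sigma[:r, :r] = S
    return U, Sigma, V

A = np.array([[1.0, 1.0],
              [0.5, 0.0],
              [0.0, 0.5]])
U, Sigma, V = svd_from_ata(A)
print(np.allclose(U.T @ U, np.eye(3)), np.allclose(U @ Sigma @ V.T, A))
```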

The numbers $\sigma_1, \ldots, \sigma_r$, together with $\sigma_{r+1} = \cdots = \sigma_n = 0$, are called the singular values of $A$, and they are the positive square roots of the eigenvalues (which are nonnegative) of $A^T A$. The columns of $U$ are called the left singular vectors of $A$ (the orthonormal eigenvectors of $A A^T$), while the columns of $V$ are called the right singular vectors of $A$ (the orthonormal eigenvectors of $A^T A$). The matrix $A^T$ has $m$ singular values, the positive square roots of the eigenvalues of $A A^T$. The nonzero singular values of $A$ and $A^T$ are the same.
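These statements are easy to check numerically; a small NumPy sketch (illustrative, using a random $4 \times 3$ matrix):

```python
# Check: sigma_i(A) = sqrt(lambda_i(A^T A)), and A and A^T share their nonzero singular values.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

s = np.linalg.svd(A, compute_uv=False)
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]    # eigenvalues of A^T A, descending

print(np.allclose(s, np.sqrt(eig_AtA)))                      # square-root relation
print(np.allclose(s, np.linalg.svd(A.T, compute_uv=False)))  # same nonzero singular values
```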

It is not generally a good idea to compute the singular values of $A$ by first finding the eigenvalues of $A^T A$, tempting as that is.

Ex: Let $\mu$ be a real number with $|\mu| < \sqrt{\varepsilon}$ (so that $\mathrm{fl}(1 + \mu^2) = 1$, where $\varepsilon$ is the machine precision). Let
$$A = \begin{bmatrix} 1 & 1 \\ \mu & 0 \\ 0 & \mu \end{bmatrix}.$$
Then
$$\mathrm{fl}(A^T A) = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix},$$
so we compute $\hat{\sigma}_1 = \sqrt{2}$, $\hat{\sigma}_2 = 0$, leading to the (erroneous) conclusion that the rank of $A$ is 1. If we could compute in infinite precision, we would have
$$A^T A = \begin{bmatrix} 1 + \mu^2 & 1 \\ 1 & 1 + \mu^2 \end{bmatrix}$$
with $\Lambda(A^T A) = \{2 + \mu^2, \ \mu^2\}$, and thus $\sigma_1 = \sqrt{2 + \mu^2}$, $\sigma_2 = |\mu|$, so $\operatorname{rank}(A) = 2$. The point is that by working with $A^T A$ we have unnecessarily introduced $\mu^2$ into the computation.
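A numerical demonstration of this example (a sketch in double precision; $\mu$ is chosen so that $\mu^2$ is below the machine epsilon):

```python
# Eigenvalues of fl(A^T A) lose the small singular value; the SVD of A does not.
import numpy as np

mu = 1e-9                                   # mu**2 = 1e-18 < eps ~ 2.2e-16
A = np.array([[1.0, 1.0],
              [ mu, 0.0],
              [0.0,  mu]])

# Eigenvalue route: fl(A^T A) is exactly [[1, 1], [1, 1]], which has rank 1.
lam = np.linalg.eigvalsh(A.T @ A)
print("sqrt(eig(A^T A)):", np.sqrt(np.clip(lam, 0, None)))      # ~ [0, sqrt(2)]

# Direct SVD of A recovers the small singular value |mu| correctly.
print("svd(A):          ", np.linalg.svd(A, compute_uv=False))  # ~ [sqrt(2), 1e-9]
```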

It is clear from the definition that the number of nonzero singular values of $A$ determines its rank. While the question is not nearly as clear-cut in the context of computation on a digital computer, it is now generally acknowledged that the singular value decomposition is the only generally reliable method of determining rank numerically: look at the "smallest nonzero singular value" of a matrix $A$. Since that computed value is exact for a matrix $\hat{A}$ near $A$, it makes sense to consider the rank of all matrices in some $\delta$-ball (w.r.t. the spectral norm $\|\cdot\|_2$, say) around $A$. The choice of $\delta$ may also be based on measurement errors incurred in estimating the coefficients of $A$, or the coefficients may be uncertain because of roundoff errors incurred in a previous computation to get them. The key quantity in rank determination is $\sigma_r$.
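A minimal numerical-rank routine in the spirit of this discussion: count the singular values exceeding a tolerance $\delta$. The default $\delta$ below (tied to machine precision and the largest singular value) is one common convention and is an assumption of this sketch; $\delta$ may equally come from known measurement error.

```python
# Numerical rank = number of singular values above a tolerance delta.
import numpy as np

def numerical_rank(A, delta=None):
    s = np.linalg.svd(A, compute_uv=False)
    if delta is None:
        delta = max(A.shape) * np.finfo(A.dtype).eps * (s[0] if s.size else 0.0)
    return int(np.sum(s > delta))

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],      # multiple of row 1
              [1.0, 0.0, 1.0]])
print(numerical_rank(A))            # 2: one singular value is at roundoff level
```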

The smallest nonzero singular value $\sigma_r$ gives a dependable measure of how far (in the $\|\cdot\|_2$ sense) a matrix is from matrices of lesser rank. But $\sigma_r$ alone is clearly sensitive to scale, so a better measure is $\sigma_r / \|A\|_2$. But $\|A\|_2 = \sigma_1$, so the important quantity is $\sigma_r / \sigma_1$, which turns out to be the reciprocal of the number $\kappa(A) = \|A\|_2 \, \|A^+\|_2 = \sigma_1 / \sigma_r$, the so-called condition number of $A$ w.r.t. pseudoinversion. In the case when $A$ is invertible, $\kappa(A) = \|A\|_2 \, \|A^{-1}\|_2$ is the usual spectral condition number w.r.t. inversion.

Ref: G. W. Stewart, "On the perturbation of pseudo-inverses, projections, and linear least squares problems," SIAM Review, vol. 19, no. 4, pp. 634-662, 1977.

In solving the linear system $Ax = b$, the condition number $\kappa(A)$ gives a measure of how much errors in $A$ and/or $b$ may be magnified in the computed solution. Moreover, if $A$ is nonsingular, $1/\kappa(A)$ gives a measure of the "nearness" of $A$ to singularity.
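A short sketch: the 2-norm condition number read off from the singular values, compared with NumPy's built-in `cond`, together with $1/\kappa(A)$ as a relative distance to singularity.

```python
# kappa(A) = sigma_1 / sigma_r in the 2-norm; 1/kappa(A) = relative distance to singularity.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
s = np.linalg.svd(A, compute_uv=False)

kappa = s[0] / s[-1]                        # sigma_1 / sigma_r
print(kappa, np.linalg.cond(A, 2))          # the two agree
print("relative distance to singularity:", 1.0 / kappa)
```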

In fact, $1/\kappa(A)$ measures the nearness of $A$ to singularity in any matrix norm, and for certain norms it is easy to construct explicitly a matrix $E$ with $\|E\| / \|A\| = 1/\kappa(A)$ and $A + E$ singular.

Ex: Consider the $n \times n$ matrix
$$A = \begin{bmatrix} 1 & -1 & \cdots & -1 \\ 0 & 1 & \ddots & \vdots \\ \vdots & & \ddots & -1 \\ 0 & \cdots & 0 & 1 \end{bmatrix},$$
with 1's on the diagonal and $-1$'s above it. This matrix is, in fact, very near singular and gets more nearly so as $n$ increases. Adding $-2^{-(n-1)}$ to every element in the first column of $A$ gives an exactly singular matrix. Rank determination, in the presence of roundoff error, is a highly nontrivial problem.
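The example above can be checked numerically; a sketch in double-precision NumPy that builds the matrix, watches $\sigma_{\min}$ shrink as $n$ grows, and applies the first-column perturbation:

```python
# Near-singularity of the triangular matrix with 1's on the diagonal and -1's above it.
import numpy as np

for n in (4, 8, 12, 16):
    A = np.triu(-np.ones((n, n)), k=1) + np.eye(n)
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]

    E = np.zeros((n, n))
    E[:, 0] = -2.0 ** -(n - 1)              # perturb every element of the first column
    sigma_min_pert = np.linalg.svd(A + E, compute_uv=False)[-1]

    print(f"n={n:2d}  sigma_min(A)={sigma_min:.2e}  "
          f"sigma_min(A+E)={sigma_min_pert:.2e}  det(A+E)={np.linalg.det(A + E):+.1e}")
```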