SVD: Singular Value Decomposition


Motivation
Assume A is full rank. Among the factorizations compared, SVD is clearly the winner.

Ideas Behind SVD
Goal: for A (m×n), find orthonormal bases for C(Aᵀ) and C(A). There are many choices of basis in C(Aᵀ) and C(A), but we want the orthonormal ones.
The four subspaces: row space C(Aᵀ) and nullspace N(A) (Ax = 0) in Rⁿ; column space C(A) and left nullspace N(Aᵀ) (Aᵀy = 0) in Rᵐ.
Review: A = [[1, 1, 2], [2, 2, 3]].
0. A as a mapping from R³ to R².
1. Find the four subspaces of A: which Rᵏ each lives in, its dimension, and a basis. (A numerical sketch follows below.)
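The review exercise can be checked numerically. Here is a minimal NumPy sketch (mine, not from the slides) that reads orthonormal bases for all four subspaces of the slide's example matrix directly off its SVD:

```python
import numpy as np

A = np.array([[1., 1., 2.],
              [2., 2., 3.]])          # the slide's example: maps R^3 -> R^2

U, s, Vt = np.linalg.svd(A)           # full SVD: A = U @ Sigma @ Vt
r = int(np.sum(s > 1e-10))            # numerical rank

row_space  = Vt[:r].T                 # v_1..v_r    : orthonormal basis of C(A^T), in R^3
null_space = Vt[r:].T                 # v_{r+1}..v_n: orthonormal basis of N(A)
col_space  = U[:, :r]                 # u_1..u_r    : orthonormal basis of C(A), in R^2
left_null  = U[:, r:]                 # u_{r+1}..u_m: orthonormal basis of N(A^T)

print("rank:", r)                                          # 2 here
print("A @ null_space ~ 0:", np.allclose(A @ null_space, 0))  # True
```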

SVD (2×2)
(I haven’t told you how to find the vᵢ’s yet; see p. 9.)
The σ’s represent the lengths of the images Avᵢ; hence they are non-negative.

SVD 2×2 (cont.)
Compare: when A has a complete set of eigenvectors, AS = SΛ, i.e. A = SΛS⁻¹, but S in general is not orthogonal. When A is symmetric, A = QΛQᵀ.
The SVD is another diagonalization, using two sets of orthonormal bases.

Why are orthonormal bases good? Implication: matrix inversion and solving Ax = b become trivial, since the inverse of an orthogonal matrix is its transpose (sketched below).
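A small sketch of that implication (my example, not the slides'): for an orthogonal Q, Qx = b is solved by a single matrix-vector product instead of elimination.

```python
import numpy as np

# Any orthogonal matrix will do; here a 2D rotation
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = np.array([1., 2.])

x = Q.T @ b                      # Q^{-1} = Q^T, so no elimination needed
print(np.allclose(Q @ x, b))     # True
```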

More on U and V
V: eigenvectors of AᵀA. U: eigenvectors of AAᵀ.
Find the vᵢ first, then use the Avᵢ to find the uᵢ. This is the key to solving for the SVD.
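A quick numerical check of these eigenvector relations (my sketch, with an arbitrary example matrix): the columns of V are eigenvectors of AᵀA and the columns of U are eigenvectors of AAᵀ, both with eigenvalues σᵢ².

```python
import numpy as np

A = np.array([[3., 0.],
              [4., 5.]])
U, s, Vt = np.linalg.svd(A)
V = Vt.T

# A^T A v_i = sigma_i^2 v_i   and   A A^T u_i = sigma_i^2 u_i
print(np.allclose(A.T @ A @ V, V @ np.diag(s**2)))   # True
print(np.allclose(A @ A.T @ U, U @ np.diag(s**2)))   # True
```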

SVD: A = UΣVᵀ
The singular values are the diagonal entries of the Σ matrix and are arranged in descending order.
The singular values are always real, non-negative numbers.
If A is a real matrix, U and V are also real.

Example (2×2, full rank)
Steps:
1. Find the eigenvectors of AᵀA; normalize them to get the orthonormal basis vᵢ.
2. Compute Avᵢ to get σᵢ.
3. If σᵢ ≠ 0, get uᵢ = Avᵢ / σᵢ; else find uᵢ from N(Aᵀ).
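The steps above can be carried out directly in NumPy; a minimal sketch (my example matrix, assuming full rank so every σᵢ ≠ 0 and step 3's nullspace branch is never needed):

```python
import numpy as np

A = np.array([[2., 2.],
              [-1., 1.]])

# Step 1: eigenvectors of A^T A (symmetric, so eigh gives orthonormal columns)
lam, V = np.linalg.eigh(A.T @ A)
idx = np.argsort(lam)[::-1]            # sort eigenvalues in descending order
lam, V = lam[idx], V[:, idx]

# Step 2: sigma_i = |A v_i|  (equals sqrt of the eigenvalue)
sigma = np.linalg.norm(A @ V, axis=0)

# Step 3: sigma_i != 0 here, so u_i = A v_i / sigma_i
U = (A @ V) / sigma

print(np.allclose(A, U @ np.diag(sigma) @ V.T))   # True: A = U Sigma V^T
```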

SVD Theory
If σⱼ = 0: Avⱼ = 0, so vⱼ is in N(A); the corresponding uⱼ is in N(Aᵀ) (since UᵀA = ΣVᵀ, the rows with σⱼ = 0 give uⱼᵀA = 0).
Else: vⱼ is in C(Aᵀ), and the corresponding uⱼ is in C(A).
The number of nonzero σⱼ = rank.
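The "number of nonzero σⱼ = rank" statement is exactly how numerical rank is computed in practice. A sketch (the function name and threshold choice are mine):

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Rank = number of singular values above a small threshold."""
    s = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        tol = max(A.shape) * np.finfo(float).eps * s[0]  # LAPACK-style default
    return int(np.sum(s > tol))

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])     # row 2 = 2 * row 1  ->  rank 2
print(numerical_rank(A))         # 2
```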

Example (2×2, rank deficient)
(Can also be obtained from the eigenvectors of AᵀA.)

Example (cont.)
The bases of N(A) and N(Aᵀ) (u₂ and v₂ here) do not contribute to the final result. They are computed only to make U and V orthogonal.

Extend to A (m×n)
Fill out V with a basis of N(A) and U with a basis of N(Aᵀ).
Dimension check: V is n×n, Σ is m×n, U is m×m.

Extend to A (m×n) (cont.)
A = UΣVᵀ = σ₁u₁v₁ᵀ + … + σᵣuᵣvᵣᵀ: a summation of r rank-one matrices!
The bases of N(A) and N(Aᵀ) do not contribute; they are useful only for nullspace solutions.
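The rank-one expansion can be verified directly; note that the nullspace vectors vᵣ₊₁…vₙ and uᵣ₊₁…uₘ never appear in the sum. A minimal sketch, reusing the earlier 2×3 example:

```python
import numpy as np

A = np.array([[1., 1., 2.],
              [2., 2., 3.]])                 # rank 2, so r = 2 terms suffice
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

# A = sum of r rank-one matrices sigma_i * u_i * v_i^T
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
print(np.allclose(A, A_sum))                 # True: nullspace bases contribute nothing
```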

[Figure: the four fundamental subspaces: C(Aᵀ) and N(A) in Rⁿ, C(A) and N(Aᵀ) in Rᵐ.]

Summary
SVD chooses the right bases for the 4 subspaces: AV = UΣ.
v₁…vᵣ: orthonormal basis in Rⁿ for C(Aᵀ); vᵣ₊₁…vₙ: for N(A).
u₁…uᵣ: orthonormal basis in Rᵐ for C(A); uᵣ₊₁…uₘ: for N(Aᵀ).
These bases are not only orthonormal, but also satisfy Avᵢ = σᵢuᵢ.
High points of linear algebra: dimension, rank, orthogonality, basis, diagonalization, …

SVD Applications
Using the SVD in computation, rather than A itself, has the advantage of being more robust to numerical error.
Many applications: inverse of a matrix A; condition number of a matrix; image compression; solving Ax = b in all cases (unique, many, or no solutions; least-squares solutions); rank determination; matrix approximation; …
The SVD is usually found by iterative methods (see Numerical Recipes, Chap. 2).

SVD and Ax = b (A m×n)
Check for the existence of a solution: b must lie in C(A).

Ax = b (inconsistent)
No solution!

Ax = b (underdetermined)
Infinitely many solutions; the SVD picks one. (A sketch covering all three cases follows.)
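All three Ax = b cases can be handled uniformly with the SVD: project b onto the column-space basis u₁…uᵣ to test existence, then back-substitute through Σ. A sketch (the function `svd_solve` and its test data are mine, not from the slides):

```python
import numpy as np

def svd_solve(A, b, tol=1e-10):
    """Solve Ax = b via the SVD: reports inconsistency and returns the
    minimum-norm solution (exact if consistent, least-squares if not)."""
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))            # numerical rank
    c = U.T @ b                         # coordinates of b in the u-basis
    consistent = np.allclose(c[r:], 0)  # components outside C(A) must vanish
    x = Vt[:r].T @ (c[:r] / s[:r])      # back-substitute through Sigma
    return x, consistent

A = np.array([[1., 2.],
              [2., 4.]])                # rank 1
x, ok = svd_solve(A, np.array([1., 2.]))  # b in C(A): exact solution exists
print(ok, np.allclose(A @ x, [1., 2.]))   # True True
x, ok = svd_solve(A, np.array([1., 0.]))  # b not in C(A): least squares only
print(ok)                                 # False
```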

Pseudo Inverse (Sec. 7.4, p. 395)
The role of A: takes a vector vᵢ from the row space to σᵢuᵢ in the column space.
The role of A⁻¹ (if it exists): does the opposite: takes a vector uᵢ from the column space back to the row-space vector vᵢ/σᵢ.

Pseudo Inverse (cont.)
While A⁻¹ may not exist, a matrix that takes uᵢ back to vᵢ/σᵢ does exist. It is denoted A⁺, the pseudoinverse: A⁺ = VΣ⁺Uᵀ, of dimension n by m.
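A minimal sketch of building A⁺ = VΣ⁺Uᵀ explicitly (Σ⁺ inverts the nonzero σ's and has the transposed shape), checked against NumPy's pinv:

```python
import numpy as np

A = np.array([[1., 1., 2.],
              [2., 2., 3.]])
U, s, Vt = np.linalg.svd(A)                   # U: 2x2, Vt: 3x3

S_plus = np.zeros((A.shape[1], A.shape[0]))   # Sigma^+ is n x m
for i, sigma in enumerate(s):
    if sigma > 1e-10:                         # invert only the nonzero sigmas
        S_plus[i, i] = 1.0 / sigma

A_plus = Vt.T @ S_plus @ U.T                  # A^+ = V Sigma^+ U^T
print(A_plus.shape)                           # (3, 2): n by m
print(np.allclose(A_plus, np.linalg.pinv(A))) # True
```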

Pseudo Inverse and Ax = b
A panacea for Ax = b.
Overdetermined case: find the solution that minimizes the error r = |Ax − b|, the least-squares solution.

Ex: full rank

Ex: overdetermined
(Will show this need not be computed…)

Overdetermined (cont.)
Same result!! (The pseudoinverse solution A⁺b matches the least-squares solution from the normal equations.)
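"Same result" can be checked numerically; a sketch with made-up data, comparing the normal-equations solution (AᵀA)⁻¹Aᵀb against A⁺b for a full-column-rank overdetermined system:

```python
import numpy as np

# Overdetermined: 4 equations, 2 unknowns (made-up data)
A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.],
              [1., 4.]])
b = np.array([1., 2., 2., 4.])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations
x_pinv   = np.linalg.pinv(A) @ b               # x = A^+ b via SVD
print(np.allclose(x_normal, x_pinv))           # True: same least-squares solution
```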

Ex: general case, no solution

Matrix Approximation
Set the small σ’s to zero and substitute back (see the next page for the application to image compression).
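Zeroing the small σ's gives the best rank-k approximation; in the 2-norm the error equals the first dropped singular value. A minimal sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(A)

k = 2                                          # keep only the k largest sigmas
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]       # small sigmas zeroed out

# 2-norm error of the best rank-k approximation is sigma_{k+1}
print(np.isclose(np.linalg.norm(A - A_k, 2), s[k]))   # True
```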

Image Compression
As described in the text, p. 352. For grayscale images: mn bytes; only need to store r(m + n + 1) values.
Original 64×64; r = 1, 3, 5, 10, 16 (no perceivable difference afterwards).
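A sketch of the compression arithmetic on a synthetic 64×64 array (synthetic, since the slide's image isn't in the transcript): storing rank r needs r(m + n + 1) numbers instead of mn.

```python
import numpy as np

rng = np.random.default_rng(1)
m = n = 64
img = rng.random((m, n))                     # stand-in for a 64x64 grayscale image
U, s, Vt = np.linalg.svd(img)

for r in (1, 3, 5, 10, 16):
    approx = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
    stored = r * (m + n + 1)                 # r u-vectors, r v-vectors, r sigmas
    err = np.linalg.norm(img - approx) / np.linalg.norm(img)
    print(f"r={r:2d}: store {stored:5d} vs {m*n} values, rel. error {err:.3f}")
```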