cs542g-term1-2007 (presentation transcript)

Slide 1: Notes
- Extra class next week (Oct 12, not this Friday)
- To submit your assignment: email me the URL of a page containing (links to) the answers and code
- Compiling the example assignment code on Solaris apparently doesn't work: try the Linux cs-grad machines instead. Also: there is an installation of ATLAS (for an AMD architecture) at /ubc/cs/research/scl/sclpublic/public/atlas-3.6.0

Slide 2: High Dimensional Data
- So far we've considered scalar data values f_i (or interpolated/approximated each component of vector values individually)
- In many applications the data is itself in a high-dimensional space, or there's no real distinction between dependent (f) and independent (x) variables: we just have data points
- Assumption: the data is actually organized along a lower-dimensional manifold, generated from a smaller set of parameters than the number of output variables
- This is a huge topic: machine learning
- Simplest approach: Principal Components Analysis (PCA)

Slide 3: PCA
- We have n data points from m dimensions: store them as the columns of an m×n matrix A
- We're looking for linear correlations between dimensions: roughly speaking, fitting lines, planes, or hyperplanes through the origin to the data
- May want to subtract off the mean value along each dimension for this to make sense
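
A minimal sketch of this setup in NumPy (the array names, sizes, and random data here are illustrative, not from the course code):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 200                      # m dimensions, n data points
A = rng.standard_normal((m, n))    # data points stored as the columns of an m x n matrix

# Subtract the mean along each dimension (each row of A) so that fitting
# lines/planes/hyperplanes through the origin makes sense.
A_centered = A - A.mean(axis=1, keepdims=True)
```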

Slide 4: Reduction to 1D
- Assume the data points lie on a line through the origin (a 1D subspace)
- In this case, say the line is along a unit vector u (an m-dimensional vector)
- Each data point (the i-th column of A) should then be a multiple of u; call the scalar multiples w_i, so column i equals w_i u
- That is, A would be rank-1: A = u w^T
- Problem in general: find the rank-1 matrix that best approximates A
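
As a small illustration of this structure (synthetic data, names hypothetical): if every data point is a multiple of the same unit vector u, the data matrix has rank 1.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 200
u = rng.standard_normal(m)
u /= np.linalg.norm(u)             # unit direction of the 1D subspace
w = rng.standard_normal(n)         # scalar multiples w_i, one per data point

A = np.outer(u, w)                 # A = u w^T: column i is w_i * u
print(np.linalg.matrix_rank(A))    # 1
```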

Slide 5: The rank-1 problem
- Use the least-squares formulation again (written out below)
- Clean it up: take w = σv with σ ≥ 0 and |v| = 1
- u and v are the first principal components of A
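
Written out (a reconstruction from the surrounding definitions, since the slide's formula was an image):

```latex
\min_{u,\,w}\ \| A - u\,w^{\top} \|_F^{2}
\quad \text{subject to } \|u\| = 1,
\qquad \text{then write } w = \sigma v \text{ with } \sigma \ge 0,\ \|v\| = 1,
```

so the objective becomes ||A - σ u v^T||_F^2.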

Slide 6: Solving the rank-1 problem
- Remember the trace version of the Frobenius norm: ||B||_F^2 = tr(B^T B)
- Minimize with respect to w first
- Then plug in to get a problem for u (the steps are written out below)
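
The intermediate algebra, written out (standard steps consistent with the slide's outline):

```latex
\|A - u w^{\top}\|_F^{2}
  = \operatorname{tr}\!\big( (A - u w^{\top})^{\top} (A - u w^{\top}) \big)
  = \|A\|_F^{2} - 2\, w^{\top} A^{\top} u + \|w\|^{2}
  \qquad (\text{using } \|u\| = 1)

\nabla_{w} = -2 A^{\top} u + 2 w = 0
  \;\Longrightarrow\; w = A^{\top} u

\text{Substituting back:}\quad
\min_{\|u\|=1}\ \|A\|_F^{2} - u^{\top} A A^{\top} u
  \;\Longleftrightarrow\;
  \max_{\|u\|=1}\ u^{\top} A A^{\top} u .
```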

Slide 7: Finding u
- AA^T is symmetric, and thus has a complete set of orthonormal eigenvectors (the columns of X) and eigenvalues λ_i
- Write u in this basis: u = Xz
- Then see (below) that the objective is a weighted sum of the eigenvalues
- Obviously: pick u to be the eigenvector with the largest eigenvalue
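
In that basis (with AA^T = XΛX^T and u = Xz, so ||z|| = ||u|| = 1):

```latex
u^{\top} A A^{\top} u
  = z^{\top} X^{\top} (X \Lambda X^{\top}) X z
  = z^{\top} \Lambda z
  = \sum_{i} \lambda_i z_i^{2}
  \;\le\; \lambda_{\max},
```

with equality when z selects the largest eigenvalue, i.e. when u is the corresponding eigenvector of AA^T.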

Slide 8: Finding v and sigma
- A similar argument gives v: the eigenvector corresponding to the maximum eigenvalue of A^T A
- Finally, knowing u and v, we can find the σ that minimizes the error with the same approach (sketched below)
- We also know that σ² equals that maximum eigenvalue
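
Sketching the last two bullets (same quadratic argument as before):

```latex
\min_{\sigma}\ \|A - \sigma\, u v^{\top}\|_F^{2}
  \;\Longrightarrow\; \sigma = u^{\top} A v \quad (\text{choosing signs so that } \sigma \ge 0),
\qquad
\sigma^{2} = \lambda_{\max}(A A^{\top}) = \lambda_{\max}(A^{\top} A).
```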

Slide 9: Generalizing
- In general, if we expect the problem to have subspace dimension k, we want the closest rank-k matrix to A
- That is, express the data points as linear combinations of a set of k basis vectors (plus error)
- We want the optimal set of basis vectors and the optimal linear combinations (the formulation is written out below)
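
The rank-k formulation (a reconstruction: U is m×k with the basis vectors as columns, W is n×k with the coefficients):

```latex
\min_{U,\,W}\ \| A - U W^{\top} \|_F^{2}
\quad \text{subject to} \quad U^{\top} U = I_k .
```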

Slide 10: Finding W
- Take the same approach as before: minimize the Frobenius-norm error over W for fixed U
- Set the gradient with respect to W equal to zero: this gives W = A^T U (written out below)
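
Written out (same steps as the rank-1 case):

```latex
\|A - U W^{\top}\|_F^{2}
  = \|A\|_F^{2} - 2 \operatorname{tr}(W^{\top} A^{\top} U) + \|W\|_F^{2}
  \qquad (\text{using } U^{\top} U = I_k)

\nabla_{W} = -2 A^{\top} U + 2 W = 0
  \;\Longrightarrow\; W = A^{\top} U .
```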

Slide 11: Finding U
- Plugging in W = A^T U, we get a maximization problem for U (below)
- AA^T is symmetric, hence has a complete set of orthonormal eigenvectors, say the columns of X, and eigenvalues along the diagonal of M (sorted in decreasing order): AA^T = X M X^T
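
Substituting W = A^T U, the objective becomes (a reconstruction of the missing formula):

```latex
\min_{U^{\top} U = I_k}\ \|A\|_F^{2} - \operatorname{tr}(U^{\top} A A^{\top} U)
\;\Longleftrightarrow\;
\max_{U^{\top} U = I_k}\ \operatorname{tr}(U^{\top} A A^{\top} U),
\qquad
A A^{\top} = X M X^{\top} .
```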

Slide 12: Finding U, continued
- Our problem is now a trace maximization (below)
- X is orthogonal and U has orthonormal columns, so X^T U, which we can call Z, also has orthonormal columns
- Simplest solution: set Z = (I 0)^T, which means that U is the first k columns of X (the first k eigenvectors of AA^T)
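
With Z = X^T U (which, like U, has k orthonormal columns), the objective is:

```latex
\operatorname{tr}(U^{\top} X M X^{\top} U)
  = \operatorname{tr}(Z^{\top} M Z)
  = \sum_{i=1}^{m} M_{ii} \sum_{j=1}^{k} Z_{ij}^{2}
  \;\le\; M_{11} + \dots + M_{kk},
```

which is attained by Z = (I 0)^T, i.e. by taking U to be the first k columns of X.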

Slide 13: Back to W
- We can write W = VΣ^T for a V with orthonormal columns and a square k×k Σ
- The same argument as for U gives that V should be the first k eigenvectors of A^T A
- What is Σ? One can derive that it is diagonal, containing the square roots of the eigenvalues of AA^T or A^T A (their nonzero eigenvalues are identical)
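
Putting the pieces together (consistent with the previous slides):

```latex
W = A^{\top} U = V \Sigma^{\top},
\qquad
\Sigma = \operatorname{diag}(\sigma_1, \dots, \sigma_k),
\qquad
\sigma_i = \sqrt{\lambda_i(A A^{\top})} = \sqrt{\lambda_i(A^{\top} A)} .
```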

Slide 14: The Singular Value Decomposition
- Going all the way to k = m (or n) we get the Singular Value Decomposition (SVD) of A: A = UΣV^T
- The diagonal entries of Σ are called the singular values
- The columns of U (eigenvectors of AA^T) are the left singular vectors
- The columns of V (eigenvectors of A^T A) are the right singular vectors
- This gives a formula for A as a sum of rank-1 matrices: A = sum_i σ_i u_i v_i^T
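
A quick numerical check of this with NumPy (illustrative matrix; note that numpy.linalg.svd returns V^T rather than V):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U @ diag(s) @ Vt
A_rebuilt = (U * s) @ Vt                            # same as sum_i s[i] * outer(U[:, i], Vt[i, :])

print(np.allclose(A, A_rebuilt))                    # True: the rank-1 sum reproduces A
print(s)                                            # singular values, in decreasing order
```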

Slide 15: Cool things about the SVD
- 2-norm: ||A||_2 = σ_1 (the largest singular value)
- Frobenius norm: ||A||_F = sqrt(σ_1² + ... + σ_r²)
- rank(A) = the number of nonzero singular values, and this gives a sensible numerical estimate (count the singular values above a small tolerance)
- Null(A) is spanned by the columns of V (right singular vectors) for the zero singular values
- Range(A) is spanned by the columns of U (left singular vectors) for the nonzero singular values
- For invertible A: A^{-1} = VΣ^{-1}U^T
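
For example, a numerical rank estimate from the singular values (the test matrix and tolerance choice here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
# A 6 x 6 matrix that is exactly rank 3 in exact arithmetic.
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 6))

s = np.linalg.svd(A, compute_uv=False)
print(s)                                           # 3 values of order 1, 3 near roundoff level
tol = max(A.shape) * np.finfo(float).eps * s[0]    # a common relative tolerance
print(np.sum(s > tol), np.linalg.matrix_rank(A))   # typically both report 3
```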

Slide 16: Least Squares with SVD
- Define the pseudo-inverse for a general A: A^+ = VΣ^+U^T, where Σ^+ inverts the nonzero singular values and leaves the zeros alone
- Note: if A^T A is invertible, A^+ = (A^T A)^{-1} A^T, i.e. A^+ b solves the least squares problem
- If A^T A is singular, the pseudo-inverse is still defined: A^+ b is the x that minimizes ||b - Ax||_2 and, of all those that do so, has the smallest ||x||_2
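
A sketch of solving a least-squares problem through the SVD/pseudo-inverse in NumPy (data and tolerance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 4))       # overdetermined: 20 equations, 4 unknowns
b = rng.standard_normal(20)

# Pseudo-inverse through the SVD: invert the nonzero singular values, zero the rest.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]
s_inv = np.array([1.0 / si if si > tol else 0.0 for si in s])
x = Vt.T @ (s_inv * (U.T @ b))         # x = A+ b

# Agrees with NumPy's built-in routines.
print(np.allclose(x, np.linalg.pinv(A) @ b))
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```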

Slide 17: Solving Eigenproblems
- Computing the SVD is another matter!
- We can get U and V by solving the symmetric eigenproblem for AA^T or A^T A, but more specialized methods are more accurate
- The unsymmetric eigenproblem is another related computation, with complications:
  - It may involve complex numbers even if A is real
  - If A is not normal (AA^T ≠ A^T A), it may not have a full basis of eigenvectors
  - The eigenvectors may not be orthogonal... the Schur decomposition is the robust alternative
- Generalized problem: Ax = λBx
- LAPACK provides routines for all of these
- We'll examine the symmetric problem in more detail
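
For illustration, the NumPy front-ends to these LAPACK-backed computations (my own routine choices, not the course's; the generalized problem Ax = λBx is handled by SciPy's scipy.linalg.eig/eigh with a second matrix argument):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
A = B + B.T                        # a real symmetric matrix

lam, V = np.linalg.eigh(A)         # symmetric eigenproblem: real eigenvalues,
print(np.allclose(A @ V, V * lam)) #   orthonormal eigenvectors; prints True

lam_u, V_u = np.linalg.eig(B)      # unsymmetric eigenproblem: eigenvalues and
                                   #   eigenvectors may be complex even though B is real

U, s, Vt = np.linalg.svd(B)        # SVD: always real for real B, with orthogonal U and V
```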

Slide 18: The Symmetric Eigenproblem
- Assume A is symmetric and real
- Find an orthogonal matrix V and a diagonal matrix D such that AV = VD: the diagonal entries of D are the eigenvalues, and the corresponding columns of V are the eigenvectors
- Equivalently: A = VDV^T, or V^T A V = D
- There are a few strategies, and more if you only care about a few eigenpairs rather than the complete set...
- Also: finding the eigenvalues of an n×n matrix is equivalent to solving a degree-n polynomial; there is no "analytic" solution in radicals in general for n ≥ 5, thus general algorithms are iterative
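
The equivalence behind the last point, written out (a standard fact, not taken from the slide):

```latex
A v = \lambda v,\ v \neq 0
\quad\Longleftrightarrow\quad
\det(A - \lambda I) = 0,
```

and det(A - λI) is a polynomial of degree n in λ, so by Abel-Ruffini there is no general closed-form solution in radicals once n ≥ 5; practical eigensolvers (such as QR iteration) are therefore iterative.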