Dimensionality Reduction Shannon Quinn (with thanks to William Cohen of Carnegie Mellon University, and J. Leskovec, A. Rajaraman, and J. Ullman of Stanford University)

Dimensionality Reduction. Assumption: the data lies on or near a low d-dimensional subspace. The axes of this subspace are an effective representation of the data.

Rank of a Matrix. Q: What is the rank of a matrix A? A: The number of linearly independent columns of A. For example, the matrix A with rows [1 2 1], [-2 -3 1], [3 5 0] has rank r = 2. Why? The first two rows are linearly independent, so the rank is at least 2, but all three rows are linearly dependent (the first is equal to the sum of the second and third), so the rank must be less than 3. Why do we care about low rank? We can write A in terms of two "basis" vectors, [1 2 1] and [-2 -3 1], and give each row new coordinates with respect to that basis: [1 0], [0 1], [1 -1].

Rank is "Dimensionality". Cloud of points in 3D space: think of the point positions as a matrix, one row per point (A, B, C). We can rewrite the coordinates more efficiently! Old basis vectors: [1 0 0], [0 1 0], [0 0 1]. New basis vectors: [1 2 1], [-2 -3 1]. Then A has new coordinates [1 0], B: [0 1], C: [1 -1]. Notice: we reduced the number of coordinates per point from three to two!
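
To make the rank example concrete, here is a small NumPy sketch using the 3x3 matrix from the example above (treat the exact entries as illustrative): it checks the rank and re-expresses each point in the two-vector basis.

    import numpy as np

    # Points A, B, C as rows of a matrix (the example above).
    M = np.array([[ 1,  2, 1],    # A
                  [-2, -3, 1],    # B
                  [ 3,  5, 0]])   # C  (= A - B, so the rows are linearly dependent)
    print(np.linalg.matrix_rank(M))          # 2

    basis = M[:2]                             # new basis: the first two rows, shape (2, 3)
    # Coordinates of each point in the new basis: solve basis.T @ X = M.T.
    coeffs, *_ = np.linalg.lstsq(basis.T, M.T, rcond=None)
    print(np.round(coeffs.T))                 # rows are the new coordinates: [1 0], [0 1], [1 -1]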

Dimensionality Reduction. The goal of dimensionality reduction is to discover the true axes of the data! Rather than representing every point with 2 coordinates, we represent each point with 1 coordinate, corresponding to the position of the point on the red line (the best-fit axis). By doing this we incur a bit of error, since the points do not lie exactly on the line.

Why Reduce Dimensions? Discover hidden correlations/topics – words that commonly occur together. Remove redundant and noisy features – not all words are useful. Interpretation and visualization. Easier storage and processing of the data.

SVD - Definition. A [m x n] = U [m x r] Σ [r x r] (V [n x r])^T. A: input data matrix – m x n matrix (e.g., m documents, n terms). U: left singular vectors – m x r matrix (m documents, r concepts). Σ: singular values – r x r diagonal matrix (strength of each 'concept'); r is the rank of the matrix A. V: right singular vectors – n x r matrix (n terms, r concepts).
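
A minimal NumPy sketch of the definition, using a random matrix just to show the shapes (full_matrices=False gives the reduced form with r = min(m, n); for a rank-deficient A the trailing singular values are zero).

    import numpy as np

    m, n = 6, 4                        # e.g., 6 documents, 4 terms
    A = np.random.rand(m, n)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    print(U.shape, s.shape, Vt.shape)  # (6, 4) (4,) (4, 4): U is m x r, s holds the
                                       # diagonal of Sigma, Vt is V transposed (r x n)
    print(np.allclose(A, U @ np.diag(s) @ Vt))   # True: the factors reconstruct A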

SVD (diagram of the matrix shapes): the m x n matrix A is factored as A ≈ U Σ V^T.

SVD (diagram): equivalently, A (m x n) ≈ σ1 u1 v1^T + σ2 u2 v2^T + …, where each σi is a scalar and each ui and vi is a vector.

SVD - Properties. It is always possible to decompose a real matrix A into A = U Σ V^T, where: U, Σ, V are unique; U, V are column orthonormal – U^T U = I, V^T V = I (I: identity matrix), i.e., their columns are orthogonal unit vectors; Σ is diagonal – its entries (the singular values) are positive and sorted in decreasing order (σ1 ≥ σ2 ≥ ... ≥ 0).
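
In the same vein, the stated properties can be checked numerically: column-orthonormal U and V, and non-negative singular values sorted in decreasing order.

    import numpy as np

    A = np.random.rand(6, 4)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    print(np.allclose(U.T @ U, np.eye(s.size)))     # True: U is column-orthonormal
    print(np.allclose(Vt @ Vt.T, np.eye(s.size)))   # True: V is column-orthonormal
    print(np.all(s >= 0), np.all(np.diff(s) <= 0))  # True True: non-negative, sorted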

SVD – Example: Users-to-Movies. A = U Σ V^T. Example: A is a users-to-movies rating matrix; the columns are the movies Matrix, Alien, Serenity, Casablanca, and Amelie, one group of users rates the sci-fi movies and another rates the romance movies. The "concepts" recovered by SVD are also known as latent dimensions or latent factors.

SVD – Example: Users-to-Movies. Writing A = U Σ V^T out numerically for this rating matrix, two concepts emerge: a SciFi-concept and a Romance-concept.

SVD – Example: Users-to-Movies. U is the "user-to-concept" similarity matrix: each row says how strongly a user is associated with each concept (e.g., the SciFi-concept).

SVD – Example: Users-to-Movies. Σ gives the "strength" of each concept, e.g., of the SciFi-concept.

SVD – Example: Users-to-Movies. V is the "movie-to-concept" similarity matrix: each row says how strongly a movie is associated with each concept.

SVD - Interpretation #1. 'Movies', 'users' and 'concepts': U is the user-to-concept similarity matrix; V is the movie-to-concept similarity matrix; the diagonal elements of Σ give the 'strength' of each concept.
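
The numeric rating matrix did not survive the transcript, so here is a hypothetical users-to-movies matrix in the same spirit (sci-fi fans rate the first three movies, romance fans the last two); the SVD factors can then be read as user-to-concept, concept-strength, and movie-to-concept matrices.

    import numpy as np

    #             Matrix Alien Serenity Casablanca Amelie
    A = np.array([[1, 1, 1, 0, 0],      # sci-fi fans
                  [3, 3, 3, 0, 0],
                  [4, 4, 4, 0, 0],
                  [5, 5, 5, 0, 0],
                  [0, 0, 0, 4, 4],      # romance fans
                  [0, 0, 0, 5, 5],
                  [0, 0, 0, 2, 2]])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    print(np.round(s, 2))          # two large values (SciFi- and Romance-concepts), the rest ~0
    print(np.round(U[:, :2], 2))   # user-to-concept similarities
    print(np.round(Vt[:2], 2))     # concept-to-movie similarities (rows of V^T)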

SVD - Interpretation #2 More details. Q: How exactly is dimensionality reduction done? A: Set the smallest singular values to zero and reconstruct the matrix from the remaining factors.

SVD - Interpretation #2 More details. The reconstruction B obtained this way is close to A in the Frobenius norm: ǁMǁF = √(Σij Mij²), and ǁA-BǁF = √(Σij (Aij-Bij)²) is "small".
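
A small sketch of the reduction step itself, on an illustrative matrix: zero out the smallest singular values, rebuild B, and measure the Frobenius error.

    import numpy as np

    A = np.random.rand(7, 5)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                    # number of singular values to keep
    s_trunc = s.copy()
    s_trunc[k:] = 0.0                        # set the smallest singular values to zero
    B = U @ np.diag(s_trunc) @ Vt            # rank-k approximation of A
    err = np.linalg.norm(A - B, 'fro')       # Frobenius norm of the error
    print(err, np.sqrt(np.sum(s[k:]**2)))    # equal: the error is the square root of the
                                             # sum of squared dropped singular values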

SVD – Best Low Rank Approx. A = U Σ V^T; B = U S V^T, where S keeps only the largest singular values and sets the rest to zero. B is the best approximation of A: among all matrices of that rank, it minimizes ǁA-BǁF.

SVD - Interpretation #2. Equivalent: the 'spectral decomposition' of the matrix: A = [u1 u2 ...] diag(σ1, σ2, ...) [v1 v2 ...]^T.

SVD - Interpretation #2. Equivalently, write the 'spectral decomposition' as a sum of k rank-1 terms: A = σ1 u1 v1^T + σ2 u2 v2^T + …, where each term σi ui vi^T is an m x n matrix. Assume σ1 ≥ σ2 ≥ σ3 ≥ ... ≥ 0. Why is setting the small σi to 0 the right thing to do? The vectors ui and vi are unit length, so only σi scales each term; zeroing the small σi therefore introduces the least error.
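
The same truncation viewed as a sum of rank-1 terms, as on the slide: adding up σi ui vi^T for the top k terms reproduces the rank-k approximation.

    import numpy as np

    A = np.random.rand(6, 4)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    B = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(k))   # sum of k rank-1 terms
    print(np.allclose(B, U[:, :k] @ np.diag(s[:k]) @ Vt[:k]))    # True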


Computing SVD (especially in a MapReduce setting…) 1. Stochastic Gradient Descent (today) 2. Stochastic SVD (not today) 3. Other methods (not today)

Talk pilfered from ….. KDD 2011

(Figure: an example application to image denoising.)

Matrix factorization as SGD. (Figure: the stochastic gradient update for one observed entry, with the step size annotated.)

Matrix factorization as SGD - why does this work? (Figure: the same update, with the step size annotated.)

Matrix factorization as SGD - why does this work? Here’s the key claim:

Checking the claim. Think of SGD for logistic regression: the LR loss compares y and ŷ = dot(w, x). Matrix factorization is similar, but now we update both w (the user weights) and x (the movie weights).
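
A minimal sketch of the SGD update being described, assuming a squared-error loss on the observed entries of the rating matrix; the names (W for user factors, H for item factors, lrate for the step size) and the tiny example data are illustrative, not taken from the talk.

    import numpy as np

    def sgd_mf(ratings, n_users, n_items, rank=2, lrate=0.02, reg=0.01, epochs=500):
        """ratings: iterable of (user, item, value) triples (the observed entries)."""
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(n_users, rank))   # user-to-concept weights
        H = rng.normal(scale=0.1, size=(n_items, rank))   # item-to-concept weights
        for _ in range(epochs):
            for u, i, v in ratings:
                w_u = W[u].copy()
                err = v - w_u @ H[i]                       # compare y and y-hat
                W[u] += lrate * (err * H[i] - reg * w_u)   # update the user weights
                H[i] += lrate * (err * w_u - reg * H[i])   # update the item weights
        return W, H

    # Tiny illustrative example: 3 users x 3 movies, 5 observed ratings.
    obs = [(0, 0, 5), (0, 1, 4), (1, 1, 3), (2, 2, 4), (1, 2, 2)]
    W, H = sgd_mf(obs, n_users=3, n_items=3)
    print(np.round(W @ H.T, 1))    # approximate reconstruction of the ratings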

ALS = alternating least squares
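
For contrast, a sketch of ALS on a fully observed matrix: hold H fixed and solve a regularized least-squares problem for W, then swap. This is an assumption-laden toy; a real recommender-style ALS handles missing entries.

    import numpy as np

    def als(V, rank=2, reg=0.1, iters=20):
        m, n = V.shape
        rng = np.random.default_rng(0)
        W = rng.normal(size=(m, rank))
        H = rng.normal(size=(n, rank))
        I = reg * np.eye(rank)
        for _ in range(iters):
            W = V @ H @ np.linalg.inv(H.T @ H + I)    # fix H, ridge-regress each row of W
            H = V.T @ W @ np.linalg.inv(W.T @ W + I)  # fix W, ridge-regress each row of H
        return W, H

    V = np.random.rand(6, 5)
    W, H = als(V)
    print(round(float(np.linalg.norm(V - W @ H.T, 'fro')), 3))   # reconstruction error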

Similar to McDonnell et al. with perceptron learning.

Slow convergence…..

More detail…. Randomly permute the rows/columns of the matrix. Chop V, W, H into blocks of size d x d – m/d blocks in W, n/d blocks in H. Group the data: pick a set of blocks with no overlapping rows or columns (a stratum), and repeat until all blocks in V are covered. Train with SGD: process the strata in series, and process the blocks within a stratum in parallel (see the sketch below).
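
A sketch of the stratum construction described above, assuming the row and column blocks are indexed 0..b-1: each stratum pairs row-block i with column-block (π(i)+shift) mod b for a fixed permutation π, so no two blocks in a stratum share rows or columns, and cycling the shift covers every block. This is one illustrative scheme, not the exact scheduling from the talk.

    import numpy as np

    def strata(num_blocks, seed=0):
        """Yield lists of (row_block, col_block) pairs; each list is one stratum."""
        rng = np.random.default_rng(seed)
        perm = rng.permutation(num_blocks)       # random pairing of row and column blocks
        for shift in range(num_blocks):          # repeat until every block is covered
            yield [(i, int((perm[i] + shift) % num_blocks)) for i in range(num_blocks)]

    for stratum in strata(3):
        print(stratum)    # e.g. [(0, 2), (1, 0), (2, 1)], then shifted versions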

More detail…. (Notation: the data matrix called V above is written Z in this figure.)

More detail…. Initialize W, H randomly – not at zero. Choose a random ordering (random sort) of the points within a stratum in each "sub-epoch". Pick the strata sequence by permuting the rows and columns of M, and using M'[k,i] as the column index of row i in sub-epoch k. Use the "bold driver" heuristic to set the step size: increase the step size when the loss decreases (over an epoch), and decrease it when the loss increases. Implemented in Hadoop and R/Snowfall.
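
A sketch of the "bold driver" step-size rule just described; the growth and shrink factors (1.05 and 0.5) are conventional choices, not taken from the talk.

    def bold_driver(step, prev_loss, loss, grow=1.05, shrink=0.5):
        """Increase the step size after an epoch where the loss fell; cut it otherwise."""
        return step * grow if loss < prev_loss else step * shrink

    step, prev_loss = 0.01, float('inf')
    for epoch_loss in [10.0, 8.0, 9.5, 7.0]:   # illustrative per-epoch losses
        step = bold_driver(step, prev_loss, epoch_loss)
        prev_loss = epoch_loss
        print(round(step, 5))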

Varying rank (figure): 100 epochs in all cases.

Hadoop scalability Hadoop process setup time starts to dominate

SVD - Conclusions so far. SVD: A = U Σ V^T, unique – U: user-to-concept similarities – V: movie-to-concept similarities – Σ: strength of each concept. Dimensionality reduction: keep the few largest singular values (80-90% of the 'energy'); SVD picks up linear correlations.
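
A sketch of the "keep 80-90% of the energy" rule: choose the smallest k whose leading singular values account for that fraction of the total squared energy.

    import numpy as np

    A = np.random.rand(20, 10)
    s = np.linalg.svd(A, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 0.90)) + 1   # smallest k retaining 90% of the energy
    print(k, np.round(energy[:k], 3))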

Relation to Eigen-decomposition. SVD gives us: A = U Σ V^T. Eigen-decomposition (of a symmetric matrix A): A = X Λ X^T. U, V, X are orthonormal (U^T U = I); Σ and Λ are diagonal. Now let's calculate A A^T and A^T A.

Relation to Eigen-decomposition. A A^T = (U Σ V^T)(V Σ U^T) = U Σ² U^T, which has the form X Λ X^T; similarly A^T A = V Σ² V^T. This shows how to compute the SVD using an eigenvalue decomposition: the eigenvectors of A A^T give U, the eigenvectors of A^T A give V, and the eigenvalues are the squared singular values.
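
A quick numerical check of this relationship on a random matrix: the eigenvalues of A^T A are the squared singular values of A, and its eigenvectors give V (up to sign).

    import numpy as np

    A = np.random.rand(6, 4)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    evals, evecs = np.linalg.eigh(A.T @ A)       # eigh: A^T A is symmetric
    evals, evecs = evals[::-1], evecs[:, ::-1]   # sort in decreasing order
    print(np.allclose(np.sqrt(np.clip(evals, 0, None)), s))    # True: sigma_i^2 = lambda_i
    print(np.allclose(np.abs(evecs), np.abs(Vt.T)))            # True: columns of V, up to sign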

SVD: Drawbacks. + Optimal low-rank approximation in terms of the Frobenius norm. - Interpretability problem: a singular vector specifies a linear combination of all input columns or rows. - Lack of sparsity: singular vectors are dense!

CUR Decomposition. Goal: express A as a product of three matrices C, U, R so that ǁA-C·U·RǁF is small. "Constraints" on C and R: C is made up of actual columns of A and R of actual rows of A; U is computed from the pseudo-inverse of the "intersection" of C and R. (Frobenius norm: ǁXǁF = √(Σij Xij²).)

CUR: Provably good approx. to SVD. Let A_k be the "best" rank-k approximation to A (that is, A_k is the rank-k truncated SVD of A). Theorem [Drineas et al.]: CUR in O(m·n) time achieves ǁA-CURǁF ≤ ǁA-A_kǁF + ε ǁAǁF with probability at least 1-δ, by picking O(k log(1/δ)/ε²) columns and O(k² log³(1/δ)/ε⁶) rows. In practice: pick 4k columns/rows.

CUR: How it Works. Sampling columns (and, similarly, rows): columns are drawn at random with probability proportional to their squared norm, then rescaled. Note this is a randomized algorithm; the same column can be sampled more than once.

Computing U. Let W be the "intersection" of the sampled columns C and rows R, and let the SVD of W be W = X Z Y^T. Then U = W⁺ = Y Z⁺ X^T, where Z⁺ holds the reciprocals of the non-zero singular values (Z⁺_ii = 1/Z_ii); W⁺ is the "pseudoinverse" of W. Why does the pseudoinverse work? If W = X Z Y^T, then W⁻¹ = (Y^T)⁻¹ Z⁻¹ X⁻¹; by orthonormality X⁻¹ = X^T and (Y^T)⁻¹ = Y, and since Z is diagonal, Z⁻¹_ii = 1/Z_ii. Thus, if W is nonsingular, the pseudoinverse is the true inverse.
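
A compact sketch of the whole construction under the description above: sample columns and rows with probability proportional to their squared norms, take the intersection W, and set U to its pseudoinverse. This is an illustration of the scheme, not the exact algorithm of Drineas et al.; in particular, the full method also rescales each sampled column/row by 1/√(k·p).

    import numpy as np

    def cur(A, c, r, seed=0):
        rng = np.random.default_rng(seed)
        # Sampling probabilities proportional to squared column / row norms.
        pc = (A**2).sum(axis=0) / (A**2).sum()
        pr = (A**2).sum(axis=1) / (A**2).sum()
        cols = rng.choice(A.shape[1], size=c, p=pc)   # the same column may repeat
        rows = rng.choice(A.shape[0], size=r, p=pr)
        C = A[:, cols]                                # actual columns of A
        R = A[rows, :]                                # actual rows of A
        W = A[np.ix_(rows, cols)]                     # "intersection" of C and R
        U = np.linalg.pinv(W)                         # U = W+, the pseudoinverse
        return C, U, R

    rng = np.random.default_rng(1)
    A = rng.random((30, 3)) @ rng.random((3, 20))     # a low-rank matrix
    C, U, R = cur(A, c=12, r=12)
    print(np.linalg.norm(A - C @ U @ R, 'fro'))       # ~0 here, since W has the same rank as A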

CUR: Pros & Cons. + Easy interpretation: the basis vectors are actual columns and rows of A. + Sparse basis: since the basis vectors are actual columns and rows, they are as sparse as the data. - Duplicate columns and rows: columns with large norms will be sampled many times.

Solution. If we want to get rid of the duplicates: throw the duplicates away, and scale (multiply) the remaining columns/rows by the square root of the number of duplicates; then construct a small U from the deduplicated C and R.

SVD vs. CUR. SVD: A = U Σ V^T – A is huge but sparse, U and V^T are big and dense, and Σ is sparse and small (diagonal). CUR: A = C U R – A is huge but sparse, C and R are big but sparse, and U is dense but small.

Stochastic SVD. "Randomized methods for computing low-rank approximations of matrices", Nathan Halko, 2012. "Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions", Halko, Martinsson, and Tropp, 2011. "An Algorithm for the Principal Component Analysis of Large Data Sets", Halko, Martinsson, Shkolnisky, and Tygert, 2010.
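
Finally, a sketch of the randomized-SVD idea behind the Halko et al. papers: sketch the range of A with a random matrix, orthonormalize, compute the SVD of the small projected matrix, and lift back. The parameter names and the oversampling choice are illustrative.

    import numpy as np

    def randomized_svd(A, k, oversample=10, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        Omega = rng.normal(size=(n, k + oversample))   # random Gaussian test matrix
        Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal basis for the sampled range
        B = Q.T @ A                                    # small (k+oversample) x n matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        U = Q @ Ub                                     # lift back to the original space
        return U[:, :k], s[:k], Vt[:k]

    A = np.random.rand(200, 50)
    U, s, Vt = randomized_svd(A, k=5)
    print(np.round(s, 2))
    print(np.round(np.linalg.svd(A, compute_uv=False)[:5], 2))   # close to the exact values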