 # Lecture 21 SVD and Latent Semantic Indexing and Dimensional Reduction

Shang-Hua Teng

Singular Value Decomposition
A = σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ + … + σᵣuᵣvᵣᵀ

where u₁, …, uᵣ are r orthonormal vectors forming a basis of C(A), and v₁, …, vᵣ are r orthonormal vectors forming a basis of C(Aᵀ).
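The decomposition above can be checked numerically. A minimal NumPy sketch (the matrix A below is made up for illustration) verifying that the columns of U and V are orthonormal and that the product reconstructs A:

```python
import numpy as np

# Small made-up example: a 3x2 matrix of rank 2.
A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # reduced SVD

# Columns of U are orthonormal: U^T U = I_r
print(np.allclose(U.T @ U, np.eye(2)))            # True
# Columns of V (rows of Vt) are orthonormal as well
print(np.allclose(Vt @ Vt.T, np.eye(2)))          # True
# A is recovered as sigma_1 u1 v1^T + sigma_2 u2 v2^T
print(np.allclose(A, U @ np.diag(s) @ Vt))        # True
```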

Low Rank Approximation and Reduction

The Singular Value Decomposition
Full SVD:  A = U S Vᵀ, where A is m × n, U is m × m, S is m × n, Vᵀ is n × n
Reduced SVD:  A = U S Vᵀ, where A is m × n, U is m × r, S is r × r, Vᵀ is r × n

The Singular Value Reduction
Reduced SVD:  A = U S Vᵀ, where A is m × n, U is m × r, S is r × r, Vᵀ is r × n
Rank-k reduction:  Aₖ = Uₖ Sₖ Vₖᵀ, where Aₖ is m × n, Uₖ is m × k, Sₖ is k × k, Vₖᵀ is k × n
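The rank-k reduction can be sketched in a few lines of NumPy (the helper name `rank_k_approx` and the test matrix are ours, not from the slides): keep the k largest singular values and the corresponding columns of U and V.

```python
import numpy as np

def rank_k_approx(A, k):
    # Keep the k largest singular values: A_k = U_k S_k V_k^T
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.arange(12, dtype=float).reshape(3, 4)  # a rank-2 test matrix
A1 = rank_k_approx(A, 1)
print(np.linalg.matrix_rank(A1))  # 1
print(A1.shape)                   # (3, 4) -- same shape as A
```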

How Much Information Lost?

Distance between Two Matrices
Frobenius norm of a matrix A:  ‖A‖F = √(Σᵢⱼ aᵢⱼ²)
Distance between two matrices A and B:  ‖A − B‖F
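A quick sketch of both quantities (the matrices A and B are made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 2.0], [3.0, 6.0]])

# Frobenius norm: square root of the sum of squared entries.
fro = np.sqrt((A ** 2).sum())
print(np.isclose(fro, np.linalg.norm(A, 'fro')))  # True

# Distance between two matrices: Frobenius norm of the difference.
print(np.linalg.norm(A - B, 'fro'))               # 2.0
```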

How Much Information Lost?
The error of the rank-k reduction is ‖A − Aₖ‖F = √(σₖ₊₁² + … + σᵣ²): the energy in the discarded singular values.

Approximation Theorem
[Schmidt 1907; Eckart and Young 1936] Among all m × n matrices B of rank at most k, Aₖ is the one that minimizes ‖A − B‖F.

Application: Image Compression
Uncompressed m × n pixel image: m × n numbers.
A rank-k approximation of the image stores:
- k singular values
- the first k columns of U (m-vectors)
- the first k columns of V (n-vectors)
Total: k × (m + n + 1) numbers
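The count above can be worked through with the Yogi image dimensions used on the next slides:

```python
# Storage count for SVD image compression:
# uncompressed needs m*n numbers, rank-k needs k*(m + n + 1).
m, n, k = 256, 264, 81

uncompressed = m * n          # every pixel stored
compressed = k * (m + n + 1)  # k singular values + k m-vectors + k n-vectors

print(uncompressed)  # 67584
print(compressed)    # 42201
```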

Example: Yogi (Uncompressed)
Source: [Will] Yogi: rock photographed by the Sojourner Mars mission. 256 × 264 grayscale bitmap → 256 × 264 matrix M. Pixel values in [0, 1]: 256 × 264 = 67,584 numbers.

Example: Yogi (Compressed)
M has 256 singular values. Rank-81 approximation of M: 81 × (256 + 264 + 1) = 42,201 numbers.

Example: Yogi (Both)

Eigenface (Patented by MIT)
- Utilizes two-dimensional, global grayscale images
- Each face is mapped to a vector of numbers
- Creates an image subspace (face space) that best discriminates between faces
- Works only on properly lit, frontal images

The Face Database
- Set of normalized face images
- Used the ORL Face DB

Two-dimensional Embedding
We also applied locality preserving projections to the face manifold, based on the face images of one person under different poses and expressions. The horizontal axis shows expression variation; the vertical axis shows pose variation.

Eigenfaces (PCA)
In this slide, the small images are the first 10 Eigenfaces, Fisherfaces, and Laplacianfaces calculated from the face images in the YALE database. The eigenfaces look somewhat like faces, since they are derived for reconstruction; the Fisherfaces and Laplacianfaces are not as face-like as the eigenfaces.

We conducted experiments to evaluate our algorithm against the traditional linear subspace algorithms, Eigenfaces and Fisherfaces. Three databases were evaluated: YALE, PIE, and a database collected at Microsoft Research Asia (MSRA). For the PIE database, we used only the 5 poses nearest the frontal view. On the YALE database, we conducted two types of experiments: leave-one-out, and training on 6 images per person with the remaining 5 for testing. We used the nearest-neighbor classifier for classification, since the Laplacianface method preserves local neighborhood structure.

In all the experiments, Fisherfaces outperform Eigenfaces, and Laplacianfaces perform better than Fisherfaces, especially on the MSRA database. We believe this is because there are sufficient samples for each person in that database, so the manifold structure can be well represented by those samples.
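The eigenface computation described above is PCA on flattened images. A hedged sketch on synthetic data (real systems would use a face database such as ORL or YALE; the random "images" and variable names here are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((40, 32 * 32))   # 40 flattened 32x32 stand-in "images"

# PCA: center the data, then take principal directions via SVD.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Rows of Vt are the principal directions -- the "eigenfaces".
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = 10
eigenfaces = Vt[:k]                 # first 10 eigenfaces

# Project one face into the 10-dimensional face space and reconstruct.
coords = (faces[0] - mean_face) @ eigenfaces.T
reconstruction = mean_face + coords @ eigenfaces
print(coords.shape)                 # (10,)
```

Recognition then compares faces by their low-dimensional coordinates (e.g. with a nearest-neighbor classifier, as on the slide).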

Latent Semantic Analysis (LSA)
Latent Semantic Indexing (LSI)
Principal Component Analysis (PCA)

Term-Document Matrix
- Index each document (by human or by computer)
- fᵢⱼ: counts, frequencies, weights, etc.
- Each document can be regarded as a point in m dimensions

Document-Term Matrix
- Index each document (by human or by computer)
- fᵢⱼ: counts, frequencies, weights, etc.
- Each document can be regarded as a point in n dimensions

Term Occurrence Matrix

|           | c1 | c2 | c3 | c4 | c5 | m1 | m2 | m3 | m4 |
|-----------|----|----|----|----|----|----|----|----|----|
| human     | 1  | 0  | 0  | 1  | 0  | 0  | 0  | 0  | 0  |
| interface | 1  | 0  | 1  | 0  | 0  | 0  | 0  | 0  | 0  |
| computer  | 1  | 1  | 0  | 0  | 0  | 0  | 0  | 0  | 0  |
| user      | 0  | 1  | 1  | 0  | 1  | 0  | 0  | 0  | 0  |
| system    | 0  | 1  | 1  | 2  | 0  | 0  | 0  | 0  | 0  |
| response  | 0  | 1  | 0  | 0  | 1  | 0  | 0  | 0  | 0  |
| time      | 0  | 1  | 0  | 0  | 1  | 0  | 0  | 0  | 0  |
| EPS       | 0  | 0  | 1  | 1  | 0  | 0  | 0  | 0  | 0  |
| survey    | 0  | 1  | 0  | 0  | 0  | 0  | 0  | 0  | 1  |
| trees     | 0  | 0  | 0  | 0  | 0  | 1  | 1  | 1  | 0  |
| graph     | 0  | 0  | 0  | 0  | 0  | 0  | 1  | 1  | 1  |
| minors    | 0  | 0  | 0  | 0  | 0  | 0  | 0  | 1  | 1  |

Another Example

Term Document Matrix

LSI using k=2
[2-D plot: terms (T) and documents (D) plotted against LSI Factor 1 (horizontal) and LSI Factor 2 (vertical); one cluster labeled "applications & algorithms", another labeled "differential equations".]
Each term's coordinates are the first k values of its row; each document's coordinates are the first k values of its column.
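The projection onto two LSI factors can be sketched with NumPy (the tiny 4 × 3 term-document matrix below is made up for illustration, not the plotted data):

```python
import numpy as np

# Tiny made-up term-document matrix: rows = terms, columns = documents.
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2

# Scale by the singular values so terms and docs share one 2-D space.
term_coords = U[:, :k] * s[:k]     # first k values of each term's row
doc_coords = Vt[:k, :].T * s[:k]   # first k values of each doc's column

print(term_coords.shape, doc_coords.shape)  # (4, 2) (3, 2)
```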

Positive Definite Matrices and Quadratic Shapes

Positive Definite Matrices and Quadratic Shapes
- For any m × n matrix A, all eigenvalues of AAᵀ and AᵀA are non-negative
- Symmetric matrices with positive eigenvalues are called positive definite matrices
- Symmetric matrices with non-negative eigenvalues are called positive semi-definite matrices
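The first claim is easy to check numerically: for any matrix A, the Gram matrices AAᵀ and AᵀA are symmetric with non-negative eigenvalues. A sketch on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))  # any rectangular matrix will do

# eigvalsh is appropriate here because both products are symmetric.
eig_AAt = np.linalg.eigvalsh(A @ A.T)
eig_AtA = np.linalg.eigvalsh(A.T @ A)

# Non-negative up to floating-point round-off.
print(np.all(eig_AAt >= -1e-12))  # True
print(np.all(eig_AtA >= -1e-12))  # True
```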
