Lecture 13: Singular Value Decomposition (SVD)

1 Lecture 13: Singular Value Decomposition (SVD)
Junghoo โ€œJohnโ€ Cho UCLA

2 Summary: Two Worlds
Basis vectors give a 1:1 mapping (= isomorphism) between the two worlds:
World of vectors ↔ World of numbers
- Vector v ↔ 1D array of numbers, e.g. (1, 2, 0)
- Linear transformation T(ax + by) = aT(x) + bT(y) ↔ 2D array of numbers (matrix)
- Orthogonal stretching ↔ symmetric matrix
- Stretching factor ↔ eigenvalue
- Stretching direction ↔ eigenvector
- Rotation ↔ orthonormal matrix
- Stretching + rotation ↔ ?

3 Singular Value Decomposition (SVD)
Any matrix T can be decomposed as T = Q1 D Q2^T, where D is a diagonal matrix and Q1 and Q2 are orthonormal matrices.
Singular values: the diagonal entries of D.
Example (components as stated on the next slide):
T = Q1 D Q2^T =
[1/√2  −1/√2] [3  0] [ 4/5  3/5]
[1/√2   1/√2] [0  2] [−3/5  4/5]
Q: What is this transformation? What does SVD mean?

4 Singular Value Decomposition (SVD)
T = Q1 D Q2^T
Q: What does Q2^T mean?
Change of coordinates! The new basis vectors are (4/5, 3/5) and (−3/5, 4/5).
Q: What does D mean?
Orthogonal stretching! Stretch ×3 along the first basis vector (4/5, 3/5) and ×2 along the second basis vector (−3/5, 4/5).
Q: What does Q1 mean?
Rotation! Rotate the first basis vector (4/5, 3/5) to (1/√2, 1/√2) and the second basis vector (−3/5, 4/5) to (−1/√2, 1/√2).
SVD shows that any matrix (= linear transformation) is essentially an orthogonal stretching followed by a rotation.

5 What about a Non-Square Matrix T?
Q: When T is an n×m matrix, what are the dimensions of Q1, D, and Q2^T?
T (n×m) = Q1 (n×n) · D (n×m) · Q2^T (m×m)
For a non-square T, D becomes a non-square diagonal matrix.
When n > m ("dimension padding"):
D = [d1  0]
    [0  d2]
    [0   0]
The extra rows are zero: convert 2D to 3D by adding a third dimension (set to 0), for example.
When n < m ("dimension reduction"):
D = [d1  0  0 ⋯ 0]
    [0  d2  0 ⋯ 0]
The extra columns are zero: convert 3D to 2D by discarding the third dimension, for example.

6 Computing SVD
Q: How can we perform SVD? T = Q1 D Q2^T
T^T T = (Q1 D Q2^T)^T (Q1 D Q2^T) = Q2 D^T Q1^T Q1 D Q2^T = Q2 (D^T D) Q2^T
Q: What kind of matrix is T^T T?
T^T T is a symmetric matrix, i.e., an orthogonal stretching:
- The diagonal entries of D^T D (~ D^2) are its eigenvalues (the stretching factors)
- The columns of Q2 are its eigenvectors (the stretching directions)
So we can compute Q2 of T = Q1 D Q2^T by computing the eigenvectors of T^T T.
Similarly, the columns of Q1 are the eigenvectors of T T^T, and D comes from D^T D (or D D^T).
SVD can be done by computing the eigenvalues and eigenvectors of T^T T and T T^T.

7 Example: SVD
Q: What kind of linear transformation is T?
[The slide works through the SVD of a 2×2 matrix T; its numeric entries are not recoverable from this transcript.]

8 Summary: Two Worlds
Basis vectors give a 1:1 mapping (= isomorphism) between the two worlds:
World of vectors ↔ World of numbers
- Vector v ↔ 1D array of numbers, e.g. (1, 2, 0)
- Linear transformation T(ax + by) = aT(x) + bT(y) ↔ 2D array of numbers (matrix)
- Orthogonal stretching ↔ symmetric matrix
- Stretching factor ↔ eigenvalue
- Stretching direction ↔ eigenvector
- Rotation ↔ orthonormal matrix
- Stretching + rotation ↔ singular value decomposition (SVD)

9 SVD: Application
Sometimes we may want to "approximate" a large matrix as the product of two smaller matrices:
(n×m matrix) ≈ (n×k matrix) × (k×m matrix)
This is a rank-k approximation. Q: Why?

10 Rank-k Approximation
Q: How can we "decompose" a matrix into a product of two rank-k matrices in the best possible way?
Minimize the "L2 difference" (= Frobenius norm) between the original matrix and the approximation.

11 SVD as Matrix Approximation
Q: If we want to reduce the rank of ๐‘‡ to 2, what will be a good choice? The best rank-๐‘˜ approximation of any matrix ๐‘‡ is to keep the first-๐‘˜ entries of its SVD. Minimizes L2 difference between the original and the rank-๐‘˜ approximation
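A sketch of truncated-SVD approximation with NumPy (the random matrix and the helper `rank_k_approx` are illustrative, not from the slides). The Frobenius error of the rank-k truncation equals the square root of the sum of the squared discarded singular values, which is the best achievable (Eckart–Young theorem):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def rank_k_approx(U, s, Vt, k):
    # Keep only the first k singular values and singular vectors.
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A10 = rank_k_approx(U, s, Vt, 10)
err = np.linalg.norm(A - A10, 'fro')

# The error is exactly the L2 norm of the discarded singular values.
print(np.isclose(err, np.sqrt(np.sum(s[10:]**2))))  # True
```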

12 SVD Approximation Example: 1000×1000 matrix with values in (0…255)
[The slide shows a sample of the matrix's entries; the exact values are not recoverable from this transcript.]

13 Image of Original Matrix (1000×1000)

14 SVD: Rank-1 Approximation

15 SVD: Rank-10 Approximation

16 SVD: Rank-100 Approximation

17 Original vs. Rank-100 Approximation
Q: How many numbers do we keep for each?

18 Dimensionality Reduction
Data with a large number of dimensions. Example: 1M users and 10M items, i.e., a 1M×10M matrix.
Q: Can we represent each user with far fewer dimensions, say 1000, without losing too much information?
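The SVD gives one answer: keep the top-k singular directions and represent each user by k coordinates. A toy-sized sketch (50 users × 200 items here; the sizes, variable names, and random data are illustrative, not the slide's 1M×10M setting):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.random((50, 200))   # toy user-item matrix: 50 users, 200 items
k = 10                      # reduced dimensionality

U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Each user is now represented by k numbers instead of 200.
user_embeddings = U[:, :k] * s[:k]   # shape (50, k)
item_embeddings = Vt[:k, :]          # shape (k, 200)

# Approximate ratings are recovered from the compact representations.
R_approx = user_embeddings @ item_embeddings
print(user_embeddings.shape)  # (50, 10)
```

For matrices as large as 1M×10M, a full SVD is infeasible; in practice one would compute only the top-k factors with an iterative or randomized method.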

