1 Text Categorization Moshe Koppel Lecture 12: Latent Semantic Indexing Adapted from slides by Prabhaker Raghavan, Chris Manning and TK Prasad

2 Clustering documents (and terms): Latent Semantic Indexing. Term-document matrices are very large, but the number of topics that people talk about is small (in some sense): clothes, movies, politics, … Can we represent the term-document space by a lower-dimensional latent space?

3 Term-Document Matrix Represent each document as a numerical vector in the usual way. Align the vectors to form a matrix. Note that this is not a square matrix.
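
A minimal NumPy sketch of such a matrix (the vocabulary, documents, and counts are invented for illustration):

import numpy as np

# Hypothetical vocabulary and documents, invented for illustration.
terms = ["car", "automobile", "tire", "movie", "actor"]
docs = ["d1", "d2", "d3", "d4"]

# Term-document matrix: entry (i, j) = count of term i in document j.
A = np.array([
    [2, 0, 1, 0],  # car
    [1, 1, 0, 0],  # automobile
    [0, 2, 1, 0],  # tire
    [0, 0, 0, 3],  # movie
    [0, 0, 1, 2],  # actor
])
print(A.shape)  # (5, 4): M = 5 terms by N = 4 documents -- not square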

4 Term-Document Matrix Represent each document as a numerical vector in the usual way. Align the vectors to form a matrix. Note that this is not a square matrix. In a perfect world, the term-doc matrix would look like the block matrix sketched on the next slides.

5 Intuition from block matrices. [Figure: an M-terms-by-N-documents matrix whose diagonal consists of homogeneous non-zero blocks Block 1, Block 2, …, Block k, with 0's everywhere else.] What's the rank of this matrix?

6 Intuition from block matrices. [Figure: the same block matrix.] The vocabulary is partitioned into k topics (clusters); each doc discusses only one topic.

7 Intuition from block matrices. [Figure: the same matrix, now with a few nonzero entries outside the blocks; rows labeled wiper, tire, V6, car, automobile.] Likely there's a good rank-k approximation to this matrix.
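
A small NumPy sketch of this intuition (block sizes and values are made up): a matrix built from k homogeneous blocks has rank k, and a few stray entries outside the blocks leave it close to a rank-k matrix.

import numpy as np

# Two "topics": terms 0-2 occur only in docs 0-2, terms 3-4 only in docs 3-5.
block1 = np.ones((3, 3))  # homogeneous non-zero block for topic 1
block2 = np.ones((2, 3))  # homogeneous non-zero block for topic 2
A = np.block([
    [block1, np.zeros((3, 3))],
    [np.zeros((2, 3)), block2],
])
print(np.linalg.matrix_rank(A))  # 2 == k: one per topic

# A few off-block entries make it only *approximately* rank k.
A_noisy = A.copy()
A_noisy[0, 4] = 1  # e.g. a topic-1 term also appears in a topic-2 doc
print(np.linalg.matrix_rank(A_noisy))  # 3, but A is a good rank-2 approximation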

8 Dimension Reduction and Synonymy. Dimensionality reduction forces us to omit "details". We have to map different words (= different dimensions of the full space) to the same dimension in the reduced space. The "cost" of mapping synonyms to the same dimension is much less than the cost of collapsing unrelated words. We'll select the "least costly" mapping. Thus, we will map synonyms to the same dimension, but avoid doing that for unrelated words.

9 Formal Objectives Given a term-doc matrix, M, we want to find a matrix M’ that is “similar” to M but of rank k (where k is much smaller than the rank of M). So we need some formal measure of “similarity” between two matrices. And we need an algorithm for finding the matrix M’. Conveniently, there are some neat linear algebra tricks for this. So, let’s review a bit of linear algebra.

10 Eigenvalues & Eigenvectors. For a square m × m matrix S, a (right) eigenvector v ≠ 0 and its eigenvalue λ satisfy Sv = λv. How many eigenvalues are there at most? The equation Sv = λv has a non-zero solution only if det(S − λI) = 0. This is an m-th order equation in λ, which can have at most m distinct solutions (roots of the characteristic polynomial); they can be complex even though S is real.

11 Useful Facts about Eigenvalues & Eigenvectors. For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal. All eigenvalues of a real symmetric matrix are real.

12 Example. For a real, symmetric matrix S, solve det(S − λI) = 0 for the eigenvalues; in the slide's example the eigenvalues are 1 and 3 (nonnegative, real). Plug these values into (S − λI)v = 0 and solve for the eigenvectors, which are orthogonal (and real).
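
The transcript omits the example matrix itself; here is a sketch with one symmetric matrix chosen (as an assumption, not necessarily the matrix on the original slide) to have eigenvalues 1 and 3:

import numpy as np

# Assumed example matrix: real and symmetric, picked to have eigenvalues 1 and 3.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eigh(S)  # eigh: eigen-solver for symmetric/Hermitian matrices
print(vals)                     # [1. 3.] -- real and nonnegative
print(vecs[:, 0] @ vecs[:, 1])  # ~0.0 -- the eigenvectors are orthogonal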

13 Eigen/Diagonal Decomposition. Let S be a square m × m matrix with m linearly independent eigenvectors (a "non-defective" matrix). Theorem: there exists an eigen decomposition S = UΛU^-1 (cf. the matrix diagonalization theorem), where Λ is diagonal. The columns of U are the eigenvectors of S, and the diagonal elements of Λ are the eigenvalues of S. The decomposition is unique for distinct eigenvalues.

14 Diagonal decomposition: why/how. Let U have the eigenvectors v_1, …, v_m as columns. Then SU can be written SU = S[v_1 … v_m] = [λ_1 v_1 … λ_m v_m] = UΛ. Thus SU = UΛ, or U^-1 S U = Λ, and S = UΛU^-1.
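
A quick NumPy check of this identity on the same assumed matrix (any non-defective square matrix would do):

import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, U = np.linalg.eig(S)  # columns of U are eigenvectors of S
Lam = np.diag(vals)         # Lambda: diagonal matrix of eigenvalues

print(np.allclose(S @ U, U @ Lam))                  # SU = U Lambda
print(np.allclose(S, U @ Lam @ np.linalg.inv(U)))   # S = U Lambda U^-1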

15 Key Point So Far We can decompose a square matrix into a product of matrices one of which is an eigenvalue diagonal matrix. But we’d like to say more: when the square matrix is also symmetric, we have a better theorem. Note that even that isn’t our ultimate destination, since the term-doc matrices we deal with aren’t even square matrices. One step at a time…

16 Symmetric Eigen Decomposition. If S is a symmetric matrix, Theorem: there exists a (unique) eigen decomposition S = QΛQ^T, where Q^-1 = Q^T. The columns of Q are normalized eigenvectors, the columns are orthogonal, and everything is real.
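
For a symmetric matrix, np.linalg.eigh returns exactly such a Q; a sketch, reusing the assumed matrix from above:

import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, Q = np.linalg.eigh(S)  # symmetric case: eigenvectors come out orthonormal
print(np.allclose(Q.T @ Q, np.eye(2)))          # Q^T = Q^-1
print(np.allclose(S, Q @ np.diag(vals) @ Q.T))  # S = Q Lambda Q^T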

17 Now… Let’s find some analogous theorem for non-square matrices.

18 Singular Value Decomposition. For an M × N matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD) A = UΣV^T, where U is M × M and V is N × N. The columns of U are orthogonal eigenvectors of AA^T. The columns of V are orthogonal eigenvectors of A^T A. The eigenvalues λ_1 … λ_r of AA^T are also the eigenvalues of A^T A, and the singular values on the diagonal of Σ are σ_i = sqrt(λ_i).
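
A NumPy sketch on the assumed toy term-document matrix from earlier, checking the stated relationship between the singular values and the eigenvalues of A^T A:

import numpy as np

A = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 2., 1., 0.],
              [0., 0., 0., 3.],
              [0., 0., 1., 2.]])  # M = 5 terms by N = 4 docs

U, s, Vt = np.linalg.svd(A)  # U is M x M, Vt is N x N, s holds the singular values
# The squared singular values equal the (nonzero) eigenvalues shared by A A^T and A^T A.
print(np.allclose(s**2, np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]))  # True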

19 Eigen Decomposition and SVD. Note that AA^T and A^T A are symmetric square matrices: AA^T = UΣV^T VΣU^T = UΣ^2 U^T. That's just the usual eigen decomposition for a symmetric square matrix. AA^T and A^T A have special relevance for us: the (i, j) entry represents the dot-product similarity of row i with row j (for AA^T) or of column i with column j (for A^T A). (For docs, it's the number of common terms; for terms, the number of common docs.)

20 Singular Value Decomposition. [Figure: illustration of SVD dimensions and sparseness.]

21 Example of A = UΣV^T: the matrix A. This is a standard term-document matrix. Actually, we use an unweighted matrix here to simplify the example.

22 Example of A = UΣV^T: the matrix U. One row per term, and min(M, N) columns, where M is the number of terms and N is the number of documents. This is an orthonormal matrix: (i) row vectors have unit length; (ii) any two distinct row vectors are orthogonal to each other. Think of the dimensions (columns) as "semantic" dimensions that capture distinct topics like politics, sports, economics. Each number u_ij in the matrix indicates how strongly related term i is to the topic represented by semantic dimension j.

23 Example of A = UΣV^T: the matrix Σ. This is a square, diagonal matrix of dimensionality min(M, N) × min(M, N). The diagonal consists of the singular values of A. The magnitude of a singular value measures the importance of the corresponding semantic dimension. We'll make use of this by omitting unimportant dimensions.

24 Example of A = UΣV^T: the matrix V^T. One column per document, and min(M, N) rows, where M is the number of terms and N is the number of documents. Again, this is an orthonormal matrix: (i) column vectors have unit length; (ii) any two distinct column vectors are orthogonal to each other. These are again the semantic dimensions from the term matrix U that capture distinct topics like politics, sports, economics. Each number v_ij in V indicates how strongly related document i is to the topic represented by semantic dimension j.

25 Example of A = UΣV^T: all four matrices. [Figure: the matrices A, U, Σ, and V^T shown together.]

26 LSI: Summary. We've decomposed the term-document matrix A into a product of three matrices: the term matrix U, which consists of one (row) vector for each term; the document matrix V^T, which consists of one (column) vector for each document; and the singular value matrix Σ, a diagonal matrix of singular values reflecting the importance of each dimension.

27 Low-rank Approximation. SVD can be used to compute optimal low-rank approximations. Approximation problem: find A_k of rank k that minimizes ||A − X||_F over all rank-k matrices X, where A_k and X are both M × N matrices and ||X||_F = sqrt(Σ_i Σ_j x_ij^2) is the Frobenius norm. Typically, we want k << r.

28 Low-rank Approximation: solution via SVD. Set the smallest r − k singular values to zero: A_k = U diag(σ_1, …, σ_k, 0, …, 0) V^T.
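
A NumPy sketch of this truncation on the assumed toy matrix: zero out all but the k largest singular values and re-multiply the factors.

import numpy as np

A = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 2., 1., 0.],
              [0., 0., 0., 3.],
              [0., 0., 1., 2.]])

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_trunc = s.copy()
s_trunc[k:] = 0.0                 # set the smallest r - k singular values to zero
A_k = U @ np.diag(s_trunc) @ Vt   # best rank-k approximation of A
print(np.linalg.matrix_rank(A_k))  # 2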

29 Reduced SVD. If we retain only k singular values and set the rest to 0, then we don't need the corresponding columns of U and rows of V^T (shown in red on the original slide). Then Σ is k × k, U is M × k, V^T is k × N, and A_k is M × N. This is referred to as the reduced SVD. It is the convenient (space-saving) and usual form for computational applications.

30 Approximation error. How good (bad) is this approximation? It's the best possible, as measured by the Frobenius norm of the error: min over rank-k matrices X of ||A − X||_F = ||A − A_k||_F = sqrt(σ_(k+1)^2 + … + σ_r^2), where the σ_i are ordered such that σ_i ≥ σ_(i+1). This suggests why the Frobenius error drops as k increases.
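
A quick numerical check of this identity on the toy matrix: the Frobenius error of the rank-k truncation equals the root-sum-square of the discarded singular values.

import numpy as np

A = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 2., 1., 0.],
              [0., 0., 0., 3.],
              [0., 0., 1., 2.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
for k in range(1, len(s)):
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    frob_err = np.linalg.norm(A - A_k, 'fro')
    predicted = np.sqrt(np.sum(s[k:] ** 2))
    print(k, frob_err, predicted)  # the two error values match, and both fall as k grows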

31 SVD low-rank approximation of term-doc matrices. The term-doc matrix A may have M = 50,000 terms and N = 10 million documents (and rank close to 50,000). We can nonetheless construct an approximation A_100 with rank 100. Of all rank-100 matrices, it has the lowest Frobenius error. We can think of it as clustering our docs (or our terms) into 100 clusters. The low-dimensional space reflects semantic associations (latent semantic space). Similar terms map to similar locations in the low-dimensional space.

32 Latent Semantic Indexing (LSI). Perform a low-rank approximation of the document-term matrix (typical rank 100-300). General idea: map documents (and terms) to a low-dimensional representation. The low-dimensional space reflects semantic associations (latent semantic space). Similar terms map to similar locations in the low-dimensional space.
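
A sketch of the mapping itself on the toy matrix, using one common convention (an assumption, not prescribed by the slides): take the columns of Σ_k V_k^T as the k-dimensional document representations and the rows of U_k Σ_k as the term representations.

import numpy as np

A = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 2., 1., 0.],
              [0., 0., 0., 3.],
              [0., 0., 1., 2.]])

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

doc_vecs = (np.diag(s_k) @ Vt_k).T   # one k-dimensional row per document
term_vecs = U_k @ np.diag(s_k)       # one k-dimensional row per term
print(doc_vecs.shape, term_vecs.shape)  # (4, 2) (5, 2)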

33 Some wild extrapolation. The "dimensionality" of a corpus is the number of distinct topics represented in it. More mathematical wild extrapolation: if A has a rank-k approximation of low Frobenius error, then there are no more than k distinct topics in the corpus.

34 Recall the unreduced decomposition A = UΣV^T.

35 Reducing the dimensionality to 2.

36 Reducing the dimensionality to 2. Actually, we only zero out singular values in Σ. This has the effect of setting the corresponding dimensions in U and V^T to zero when computing the product A_2 = UΣ_2 V^T.

37 Original matrix A vs. reduced A_2 = UΣ_2 V^T. We can view A_2 as a two-dimensional representation of the matrix. We have performed a dimensionality reduction to two dimensions.

38 Why is the reduced matrix "better"? Similarity of d2 and d3 in the original space: 0. Similarity of d2 and d3 in the reduced space: 0.52 · 0.28 + 0.36 · 0.16 + 0.72 · 0.36 + 0.12 · 0.20 + (−0.39) · (−0.08) ≈ 0.52.
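
The same dot product, spelled out in code using the columns of the reduced matrix A_2 for d2 and d3 as quoted above:

import numpy as np

# Columns of the reduced matrix A_2 for d2 and d3, taken from the slide.
d2 = np.array([0.52, 0.36, 0.72, 0.12, -0.39])
d3 = np.array([0.28, 0.16, 0.36, 0.20, -0.08])
print(np.dot(d2, d3))  # about 0.52, versus similarity 0 in the original space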

39 Why the reduced matrix is "better": "boat" and "ship" are semantically similar, and the "reduced" similarity measure reflects this.

40 Toy Illustration. [Figure: latent semantic space; illustrative example courtesy of Susan Dumais.]

41 LSI has many applications. The general idea is quite standard linear algebra. Its original application in computational linguistics was information retrieval (Deerwester, Dumais et al.). In IR it overcomes two problems: polysemy and synonymy. In fact, it is rarely used in IR because most IR problems involve huge corpora, and SVD algorithms aren't efficient enough for use on such large corpora.

42 Extensions. Subsequent work (Hofmann) extended LSI to probabilistic LSI (pLSI). That was further extended (Blei, Ng & Jordan) to Latent Dirichlet Allocation (LDA).

