Latent Semantic Indexing
Adapted from lectures by Prabhakar Raghavan, Christopher Manning, and Thomas Hofmann

Today's topic: Latent Semantic Indexing
Term-document matrices are very large. But the number of topics that people talk about is small (in some sense): clothes, movies, politics, … Can we represent the term-document space by a lower-dimensional latent space?

Linear Algebra Background

Eigenvalues & Eigenvectors
Eigenvectors (for a square m × m matrix S): a (right) eigenvector v with eigenvalue λ satisfies Sv = λv, v ≠ 0.
How many eigenvalues are there at most? The equation Sv = λv has a non-zero solution only if det(S − λI) = 0. This is an m-th order equation in λ, which can have at most m distinct solutions (the roots of the characteristic polynomial) – and these can be complex even though S is real.

Matrix-vector multiplication
Suppose S has eigenvalues 30, 20, 1 with corresponding eigenvectors v_1, v_2, v_3. On each eigenvector, S acts as a multiple of the identity matrix, but as a different multiple on each. Any vector (say the x shown on the slide) can be viewed as a combination of the eigenvectors: here x = 2v_1 + 4v_2 + 6v_3.

Matrix-vector multiplication
Thus a matrix-vector multiplication such as Sx (S, x as on the previous slide) can be rewritten in terms of the eigenvalues/eigenvectors:
Sx = S(2v_1 + 4v_2 + 6v_3) = 2Sv_1 + 4Sv_2 + 6Sv_3 = 2λ_1v_1 + 4λ_2v_2 + 6λ_3v_3 = 60v_1 + 80v_2 + 6v_3.
Even though x is an arbitrary vector, the action of S on x is determined by the eigenvalues/eigenvectors.

Matrix-vector multiplication
Observation: the effect of "small" eigenvalues is small. If we ignored the smallest eigenvalue (1), then instead of 60v_1 + 80v_2 + 6v_3 we would get 60v_1 + 80v_2. These vectors are similar (in terms of cosine similarity), or close (in terms of Euclidean distance).
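
A minimal numpy sketch of this idea (not part of the original slides): the diagonal matrix S below is an assumed stand-in with the eigenvalues 30, 20, 1 quoted above, and x = (2, 4, 6) matches the stated combination x = 2v_1 + 4v_2 + 6v_3.

```python
import numpy as np

S = np.diag([30.0, 20.0, 1.0])            # acts as a different multiple on each eigenvector
x = np.array([2.0, 4.0, 6.0])             # x = 2*v1 + 4*v2 + 6*v3 in the eigenbasis

eigvals, eigvecs = np.linalg.eig(S)       # columns of eigvecs are v1, v2, v3
coeffs = np.linalg.solve(eigvecs, x)      # coordinates of x in the eigenbasis

Sx_direct = S @ x
Sx_eigen = eigvecs @ (eigvals * coeffs)   # sum_i c_i * lambda_i * v_i
print(np.allclose(Sx_direct, Sx_eigen))   # True

# Dropping the smallest eigenvalue barely changes the result:
eigvals_small = eigvals.copy()
eigvals_small[np.argmin(eigvals)] = 0.0
Sx_approx = eigvecs @ (eigvals_small * coeffs)
print(Sx_direct, Sx_approx)               # (60, 80, 6) vs (60, 80, 0): close in cosine/Euclidean terms
```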

Eigenvalues & Eigenvectors
For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal. All eigenvalues of a real symmetric matrix are real. All eigenvalues of a positive semi-definite matrix are non-negative.

Example
Let S be the real, symmetric 2 × 2 matrix shown on the slide. Solving det(S − λI) = 0 gives eigenvalues 1 and 3 (non-negative, real). Plugging these values in and solving for the eigenvectors shows that the eigenvectors are orthogonal (and real).

Eigen/diagonal decomposition
Let S be a square m × m matrix with m linearly independent eigenvectors (a "non-defective" matrix). Theorem: there exists an eigen decomposition S = UΛU⁻¹ (cf. the matrix diagonalization theorem), where Λ is diagonal. The columns of U are eigenvectors of S, and the diagonal elements of Λ are the eigenvalues of S. The decomposition is unique for distinct eigenvalues.

Diagonal decomposition: why/how
Let U have the eigenvectors as its columns: U = [v_1 … v_m]. Then SU = S[v_1 … v_m] = [λ_1v_1 … λ_mv_m] = UΛ, where Λ = diag(λ_1, …, λ_m).
Thus SU = UΛ, or U⁻¹SU = Λ, and S = UΛU⁻¹.
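
A hedged numpy sketch of the diagonalization S = UΛU⁻¹; the 2 × 2 matrix used here is illustrative, not necessarily the one pictured on the slide.

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # small non-defective example matrix (assumed)
lam, U = np.linalg.eig(S)           # columns of U are eigenvectors, lam the eigenvalues
Lambda = np.diag(lam)

print(np.allclose(S @ U, U @ Lambda))                  # SU = U Lambda
print(np.allclose(S, U @ Lambda @ np.linalg.inv(U)))   # S = U Lambda U^{-1}
print(np.allclose(np.linalg.inv(U) @ S @ U, Lambda))   # U^{-1} S U = Lambda
```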

Diagonal decomposition - example
Recall S from the earlier example. Its eigenvectors form the columns of U; inverting U gives U⁻¹. Then S = UΛU⁻¹ (recall that UU⁻¹ = I).

Example continued
Let's divide U (and multiply U⁻¹) by the norm of the eigenvectors, so that the columns have unit length. Then S = QΛQᵀ, where Q⁻¹ = Qᵀ. Why? Stay tuned …

Symmetric eigen decomposition
If S is a symmetric matrix, then (Theorem) there exists a (unique) eigen decomposition S = QΛQᵀ, where Q is orthogonal: Q⁻¹ = Qᵀ. The columns of Q are normalized eigenvectors and are mutually orthogonal. (Everything is real.)
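
A small sketch of the symmetric case, assuming an arbitrary real symmetric matrix; numpy's eigh returns the orthonormal eigenvectors directly.

```python
import numpy as np

S = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 2.0, 5.0]])     # real, symmetric (assumed example)
lam, Q = np.linalg.eigh(S)          # eigenvectors come back orthonormal

print(np.allclose(Q.T @ Q, np.eye(3)))         # Q^{-1} = Q^T
print(np.allclose(S, Q @ np.diag(lam) @ Q.T))  # S = Q Lambda Q^T
print(np.isreal(lam).all())                    # eigenvalues are real
```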

Exercise
Examine the symmetric eigen decomposition, if any, for each of the matrices shown on the slide.

Time out!
"I came to this class to learn about text retrieval and mining, not to have my linear algebra past dredged up again …" But if you want to dredge, Strang's Applied Mathematics is a good place to start. What do these matrices have to do with text? Recall M × N term-document matrices … But everything so far needs square matrices – so …

Singular Value Decomposition MMMMMNMN V is N  N For an M  N matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD) as follows: The columns of U are orthogonal eigenvectors of AA T. The columns of V are orthogonal eigenvectors of A T A. Singular values. Eigenvalues 1 … r of AA T are the eigenvalues of A T A. Prasad

Singular Value Decomposition
[Illustration of SVD dimensions and sparseness]

SVD example
Let A be the 3 × 2 matrix shown on the slide; thus M = 3, N = 2. Its SVD is shown alongside. Typically, the singular values are arranged in decreasing order.

Low-rank approximation
SVD can be used to compute optimal low-rank approximations. Approximation problem: find the matrix A_k of rank k that minimizes the Frobenius norm of the difference, A_k = argmin_{X : rank(X) = k} ||A - X||_F, where A_k and X are both M × N matrices. Typically, we want k ≪ r.

Low-rank approximation: solution via SVD
Set the smallest r - k singular values to zero: A_k = U diag(σ_1, …, σ_k, 0, …, 0) Vᵀ. In column notation, this is a sum of k rank-1 matrices: A_k = Σ_{i=1..k} σ_i u_i v_iᵀ.
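
A sketch of the truncation, assuming a random example matrix and an arbitrary choice of k.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# Zero out the smallest r-k singular values...
s_k = np.concatenate([s[:k], np.zeros(len(s) - k)])
A_k = U @ np.diag(s_k) @ Vt

# ...which is the same as summing k rank-1 matrices sigma_i * u_i * v_i^T
A_k_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))
print(np.allclose(A_k, A_k_sum))    # True
print(np.linalg.matrix_rank(A_k))   # k (here 2)
```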

Reduced SVD
If we retain only k singular values and set the rest to 0, then we don't need the matrix parts shown in red on the slide. Then Σ is k × k, U is M × k, Vᵀ is k × N, and A_k is M × N. This is referred to as the reduced SVD. It is the convenient (space-saving) and usual form for computational applications.

Approximation error
How good (bad) is this approximation? It's the best possible, measured by the Frobenius norm of the error: min_{X : rank(X) = k} ||A - X||_F = ||A - A_k||_F = sqrt(σ_{k+1}² + … + σ_r²), where the σ_i are ordered such that σ_i ≥ σ_{i+1}. This suggests why the Frobenius error drops as k increases.
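
A sketch checking the error formula numerically on a random matrix (an assumption, not data from the slides).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

for k in range(1, len(s)):
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    err = np.linalg.norm(A - A_k, "fro")
    # the two error columns agree, and the error drops as k grows
    print(k, err, np.sqrt(np.sum(s[k:] ** 2)))
```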

SVD low-rank approximation
The term-document matrix A may have M = 50,000 and N = 10 million (and rank close to 50,000). We can construct an approximation A_100 with rank 100. Of all rank-100 matrices, it has the lowest Frobenius error. Great … but why would we want to? Answer: Latent Semantic Indexing.
C. Eckart, G. Young, The approximation of a matrix by another of lower rank. Psychometrika, 1, 1936.

Latent Semantic Indexing via the SVD

What it is
From the term-document matrix A, we compute the approximation A_k. There is a row for each term and a column for each doc in A_k. Thus docs live in a space of k ≪ r dimensions. These dimensions are not the original axes. But why?

Vector Space Model: Pros
- Automatic selection of index terms
- Partial matching of queries and documents (dealing with the case where no document contains all search terms)
- Ranking according to similarity score (dealing with large result sets)
- Term weighting schemes (improve retrieval performance)
- Various extensions: document clustering, relevance feedback (modifying the query vector)
- Geometric foundation

Problems with Lexical Semantics
Ambiguity and association in natural language. Polysemy: words often have a multitude of meanings and different types of usage (more severe in very heterogeneous collections). The vector space model is unable to discriminate between different meanings of the same word.

Problems with Lexical Semantics
Synonymy: different terms may have identical or similar meanings (weaker: words indicating the same topic). No associations between words are made in the vector space representation.

Polysemy and Context
Document similarity at the single-word level is affected by polysemy and context. [Diagram: the word "saturn" contributes to similarity under meaning 1 (planet: ring, jupiter, space, voyager) and under meaning 2 (car: company, dodge, ford); the contribution to similarity holds if the word is used in the 1st meaning, but not if in the 2nd.]

Latent Semantic Indexing (LSI)
Perform a low-rank approximation of the document-term matrix (typical rank: a few hundred). General idea: map documents (and terms) to a low-dimensional representation; design the mapping such that the low-dimensional space reflects semantic associations (the latent semantic space); compute document similarity based on the inner product in this latent semantic space.

Goals of LSI
Similar terms map to similar locations in the low-dimensional space. Noise reduction by dimension reduction.

Latent Semantic Analysis
[Latent semantic space: illustrating example, courtesy of Susan Dumais]

Performing the maps
Each row and column of A gets mapped into the k-dimensional LSI space by the SVD. Claim: this is not only the mapping with the best (Frobenius error) approximation to A, but it in fact improves retrieval. A query q is also mapped into this space, by q_k = Σ_k⁻¹ U_kᵀ q. The mapped query is NOT a sparse vector.

Performing the maps
AᵀA is the matrix of dot products of pairs of documents: AᵀA ≈ A_kᵀA_k = (U_kΣ_kV_kᵀ)ᵀ(U_kΣ_kV_kᵀ) = V_kΣ_kU_kᵀU_kΣ_kV_kᵀ = (V_kΣ_k)(V_kΣ_k)ᵀ. Since V_k = A_kᵀU_kΣ_k⁻¹, we should transform a query q to q_k = Σ_k⁻¹U_kᵀq.
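
A sketch of these identities on an assumed toy term-document matrix.

```python
import numpy as np

A = np.array([[1, 0, 1, 0],     # toy term-document matrix: rows = terms, cols = docs
              [1, 1, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, S_k, V_k = U[:, :k], np.diag(s[:k]), Vt[:k, :].T
A_k = U_k @ S_k @ V_k.T

# A_k^T A_k = (V_k S_k)(V_k S_k)^T
print(np.allclose(A_k.T @ A_k, (V_k @ S_k) @ (V_k @ S_k).T))   # True
# V_k = A_k^T U_k S_k^{-1}, so a query q folds in as q_k = S_k^{-1} U_k^T q
print(np.allclose(V_k, A_k.T @ U_k @ np.linalg.inv(S_k)))      # True
q = np.array([1, 0, 0, 0, 1], dtype=float)
q_k = np.linalg.inv(S_k) @ U_k.T @ q
print(q_k)                                                     # query in the latent space
```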

Empirical evidence
Experiments on TREC 1/2/3 – Dumais. The Lanczos SVD code (available on netlib) due to Berry was used in these experiments. Running times of about one day on tens of thousands of docs [still an obstacle to use]. Dimensions – various values reported; reducing k improves recall (under 200 was reported unsatisfactory). We generally expect recall to improve – what about precision?

Empirical evidence
Precision at or above median TREC precision; top scorer on almost 20% of TREC topics. Slightly better on average than straight vector spaces. Effect of dimensionality: see the table of dimensions vs. precision on the slide.

Failure modes
Negated phrases: TREC topics sometimes negate certain query terms/phrases – automatic conversion of topics to queries is problematic. Boolean queries: as usual, the freetext/vector space syntax of LSI queries precludes (say) "Find any doc having to do with the following 5 companies". See Dumais for more.

But why is this clustering?
We've talked about docs, queries, retrieval and precision here. What does this have to do with clustering? Intuition: dimension reduction through LSI brings together "related" axes in the vector space.

Intuition from block matrices
[Diagram: an M-terms × N-documents matrix made of homogeneous non-zero blocks Block 1, Block 2, …, Block k on the diagonal, with 0's elsewhere.] What's the rank of this matrix?

Intuition from block matrices
[Same block-diagonal picture.] The vocabulary is partitioned into k topics (clusters); each doc discusses only one topic.

Intuition from block matrices
[Now the off-diagonal regions contain a few nonzero entries – e.g. terms like wiper, tire, V6 co-occurring with car/automobile across blocks.] Likely there's a good rank-k approximation to this matrix.

Simplistic picture
[Diagram: documents clustered around three latent directions, Topic 1, Topic 2, Topic 3.]

Some wild extrapolation
The "dimensionality" of a corpus is the number of distinct topics represented in it. More mathematical wild extrapolation: if A has a rank-k approximation of low Frobenius error, then there are no more than k distinct topics in the corpus.

LSI has many other applications
In many settings in pattern recognition and retrieval, we have a feature-object matrix. For text, the terms are features and the docs are objects. It could also be opinions and users … This matrix may be redundant in dimensionality; we can work with a low-rank approximation. If entries are missing (e.g., users' opinions), we can recover them if the dimensionality is low. It is a powerful general analytical technique, and a close, principled analog to clustering methods.

Latent Semantic Indexing
Slides by Hinrich Schütze and Christina Lioma

Overview: ❶ Latent semantic indexing ❷ Dimensionality reduction ❸ LSI in information retrieval

Outline: ❶ Latent semantic indexing ❷ Dimensionality reduction ❸ LSI in information retrieval

Recall: term-document matrix
This matrix is the basis for computing the similarity between documents and queries. Today: can we transform this matrix so that we get a better measure of similarity between documents and queries?
[Term-document matrix with rows anthony, brutus, caesar, calpurnia, cleopatra, mercy, worser and columns Anthony and Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, Macbeth; the counts are shown on the slide.]

Latent semantic indexing: Overview
- We decompose the term-document matrix into a product of matrices.
- The particular decomposition we'll use is the singular value decomposition (SVD): C = U Σ Vᵀ (where C = term-document matrix).
- We will then use the SVD to compute a new, improved term-document matrix C′.
- We'll get better similarity values out of C′ (compared to C).
- Using SVD for this purpose is called latent semantic indexing or LSI.
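
A minimal sketch of the decomposition, assuming a small 0/1 term-document matrix rather than the book's exact counts.

```python
import numpy as np

C = np.array([[1, 1, 0, 0],      # rows = terms, columns = documents (assumed counts)
              [0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 1]], dtype=float)

U, sigma, Vt = np.linalg.svd(C, full_matrices=False)
print(U.shape, sigma.shape, Vt.shape)             # (4, 4) (4,) (4, 4): min(M, N) semantic dimensions
print(np.allclose(C, U @ np.diag(sigma) @ Vt))    # the product recovers C
```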

Example of C = U Σ Vᵀ: the matrix C
This is a standard term-document matrix. Actually, we use a non-weighted matrix here to simplify the example.

Example of C = U Σ Vᵀ: the matrix U
One row per term, one column per min(M, N), where M is the number of terms and N is the number of documents. This matrix has orthonormal columns: (i) the column vectors have unit length, and (ii) any two distinct column vectors are orthogonal to each other. Think of the dimensions (columns) as "semantic" dimensions that capture distinct topics like politics, sports, economics. Each number u_ij in the matrix indicates how strongly related term i is to the topic represented by semantic dimension j.

Example of C = U Σ Vᵀ: the matrix Σ
This is a square, diagonal matrix of dimensionality min(M, N) × min(M, N). The diagonal consists of the singular values of C. The magnitude of a singular value measures the importance of the corresponding semantic dimension. We'll make use of this by omitting unimportant dimensions.

Example of C = U Σ Vᵀ: the matrix Vᵀ
One column per document, one row per min(M, N), where M is the number of terms and N is the number of documents. Again, this is an orthonormal matrix: (i) the column vectors have unit length, and (ii) any two distinct column vectors are orthogonal to each other. These are again the semantic dimensions from the term matrix U that capture distinct topics like politics, sports, economics. Each number v_ij in the matrix indicates how strongly related document i is to the topic represented by semantic dimension j.

Example of C = U Σ Vᵀ: all four matrices
[The slide shows C, U, Σ and Vᵀ side by side.]

LSI: Summary
- We've decomposed the term-document matrix C into a product of three matrices.
- The term matrix U consists of one (row) vector for each term.
- The document matrix Vᵀ consists of one (column) vector for each document.
- The singular value matrix Σ is a diagonal matrix with singular values, reflecting the importance of each dimension.
- Next: why are we doing this?

Outline: ❶ Latent semantic indexing ❷ Dimensionality reduction ❸ LSI in information retrieval

How we use the SVD in LSI
- Key property: each singular value tells us how important its dimension is.
- By setting less important dimensions to zero, we keep the important information but get rid of the "details".
- These details may be noise – in that case, reduced LSI is a better representation because it is less noisy – or they may make things dissimilar that should be similar – again, reduced LSI is a better representation because it represents similarity better.

How we use the SVD in LSI
Analogy for "fewer details is better": compare an image of a bright red flower with an image of a black-and-white flower – omitting color makes it easier to see the similarity.

Recall the unreduced decomposition C = U Σ Vᵀ
[The slide shows the full matrices of the example.]

Reducing the dimensionality to 2
[The slide highlights the two largest singular values and the corresponding parts of U and Vᵀ.]

Reducing the dimensionality to 2
Actually, we only zero out singular values in Σ. This has the effect of setting the corresponding dimensions in U and Vᵀ to zero when computing the product C = U Σ Vᵀ.

Original matrix C vs. reduced C_2 = U Σ_2 Vᵀ
We can view C_2 as a two-dimensional representation of the matrix: we have performed a dimensionality reduction to two dimensions.

Why is the reduced matrix "better"?
Similarity of d2 and d3 in the original space: 0. Similarity of d2 and d3 in the reduced space: the sum of the products of their reduced coordinates ≈ 0.52.
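
A hedged sketch of this effect: two documents with no terms in common (dot product 0 in the original space) become similar after a rank-2 reduction. The 0/1 matrix below is modelled on the ship/boat/ocean style of example, but its exact counts are an assumption.

```python
import numpy as np

terms = ["ship", "boat", "ocean", "wood", "tree"]
C = np.array([[1, 0, 1, 0, 0, 0],    # ship
              [0, 1, 0, 0, 0, 0],    # boat
              [1, 1, 0, 0, 0, 0],    # ocean
              [1, 0, 0, 1, 1, 0],    # wood
              [0, 0, 0, 1, 0, 1]],   # tree
             dtype=float)

U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = 2
C2 = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-2 approximation

d2, d3 = C[:, 1], C[:, 2]
d2_red, d3_red = C2[:, 1], C2[:, 2]
print(d2 @ d3)          # 0.0 -- no shared terms in the original space
print(d2_red @ d3_red)  # positive in the reduced space: both docs load on the "water" dimension
```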

Why the reduced matrix is "better"
"boat" and "ship" are semantically similar. The "reduced" similarity measure reflects this. What property of the SVD reduction is responsible for the improved similarity?

Outline: ❶ Latent semantic indexing ❷ Dimensionality reduction ❸ LSI in information retrieval

Why we use LSI in information retrieval
- LSI takes documents that are semantically similar (= talk about the same topics), but are not similar in the vector space (because they use different words), and re-represents them in a reduced vector space in which they have higher similarity.
- Thus, LSI addresses the problems of synonymy and semantic relatedness.
- Standard vector space: synonyms contribute nothing to document similarity.
- Desired effect of LSI: synonyms contribute strongly to document similarity.

How LSI addresses synonymy and semantic relatedness
- The dimensionality reduction forces us to omit "details".
- We have to map different words (= different dimensions of the full space) to the same dimension in the reduced space.
- The "cost" of mapping synonyms to the same dimension is much less than the cost of collapsing unrelated words.
- SVD selects the "least costly" mapping (see below). Thus, it will map synonyms to the same dimension, but it will avoid doing that for unrelated words.

LSI: Comparison to other approaches
- Recap: relevance feedback and query expansion are used to increase recall in IR – if query and documents have (in the extreme case) no terms in common.
- LSI increases recall and can hurt precision.
- Thus, it addresses the same problems as (pseudo) relevance feedback and query expansion, and it has the same problems.

Implementation
- Compute the SVD of the term-document matrix.
- Reduce the space and compute reduced document representations.
- Map the query into the reduced space: q_2 = Σ_2⁻¹ U_2ᵀ q. This follows from C_2 = U_2 Σ_2 V_2ᵀ ⇒ Σ_2⁻¹ U_2ᵀ C_2 = V_2ᵀ.
- Compute the similarity of q_2 with all reduced documents in V_2.
- Output a ranked list of documents as usual.
- Exercise: what is the fundamental problem with this approach?
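
A minimal end-to-end sketch of these steps (toy data; names such as lsi_rank, C, q and k are assumptions for illustration).

```python
import numpy as np

def lsi_rank(C, q, k):
    """Rank the documents (columns of C) against query vector q in a k-dimensional LSI space."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

    docs_k = Vt_k.T                          # reduced documents: one row per document (rows of V_k)
    q_k = np.linalg.inv(S_k) @ U_k.T @ q     # map the query into the reduced space

    # cosine similarity of q_k with every reduced document
    sims = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q_k) + 1e-12)
    return np.argsort(-sims), sims

C = np.array([[1, 0, 1, 0, 0, 0],            # toy term-document matrix (rows = terms)
              [0, 1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1]], dtype=float)
q = np.array([0, 1, 1, 0, 0], dtype=float)   # query containing the 2nd and 3rd terms
ranking, sims = lsi_rank(C, q, k=2)
print(ranking, np.round(sims, 2))            # documents ordered by reduced-space similarity
```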

Optimality
- SVD is optimal in the following sense: keeping the k largest singular values and setting all others to zero gives you the optimal approximation of the original matrix C (the Eckart-Young theorem).
- Optimal: no other matrix of the same rank (= with the same underlying dimensionality) approximates C better.
- The measure of approximation is the Frobenius norm: ||C||_F = sqrt(Σ_i Σ_j c_ij²).
- So LSI uses the "best possible" matrix.
- Caveat: there is only a tenuous relationship between the Frobenius norm and cosine similarity between documents.

Example from Dumais et al.

Latent Semantic Indexing (LSI)


Reduced Model (k = 2)


LSI, SVD, & Eigenvectors
SVD decomposes the term × document matrix X as X = UΣVᵀ, where U and V are the left and right singular vector matrices and Σ is a diagonal matrix of singular values. This corresponds to the eigenvector-eigenvalue decomposition Y = VLVᵀ, where V is orthonormal and L is diagonal:
- U: matrix of eigenvectors of Y = XXᵀ
- V: matrix of eigenvectors of Y = XᵀX
- Σ: diagonal matrix whose squared entries give the diagonal matrix L of eigenvalues (Σ² = L)

Computing Similarity in LSI
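
A hedged sketch of how document-document and term-term similarities can be computed in the reduced space, reusing the X = UΣVᵀ notation from the previous slide; the toy matrix and the helper cosine_matrix are assumptions.

```python
import numpy as np

X = np.array([[1, 0, 1, 0, 0, 0],            # toy term-document matrix (assumed counts)
              [0, 1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1]], dtype=float)
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_k, S_k, V_k = U[:, :k], np.diag(s[:k]), Vt[:k, :].T

doc_vecs = V_k @ S_k        # one row per document in the latent space
term_vecs = U_k @ S_k       # one row per term in the latent space

def cosine_matrix(M):
    """Pairwise cosine similarities between the rows of M."""
    norms = np.linalg.norm(M, axis=1, keepdims=True) + 1e-12
    return (M / norms) @ (M / norms).T

print(np.round(cosine_matrix(doc_vecs), 2))    # document-document similarities
print(np.round(cosine_matrix(term_vecs), 2))   # term-term similarities
```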