From last time: What's the real point of using vector spaces? A user's query can be viewed as a (very) short document. The query becomes a vector in the same space as the docs, so we can measure each doc's proximity to it. This gives a natural measure of scores/ranking – no longer Boolean. Is it the only thing people use? No; if you know that some words have special meaning in your own corpus, not captured by tf.idf, then weights may be edited by hand. E.g., "benefits" might be re-weighted manually in the corpus of an HR department.

Issues to consider: How would you augment the inverted index to support cosine ranking computations? Walk through the steps of serving a query. The math of the vector space model is quite straightforward, but doing cosine ranking efficiently at runtime is nontrivial.

Efficient cosine ranking: Ranking consists of computing the k docs in the corpus "nearest" to the query, i.e., finding the k largest query-doc cosines. Efficient ranking means: 1. Computing a single cosine efficiently. 2. Choosing the k largest cosine values efficiently. What an IR system does is in effect solve the k-nearest-neighbor problem for each query (we try to solve the 100-near-neighbors problem in 100 dimensions!). In general this cannot be done efficiently for high-dimensional spaces, but it is solvable for short queries, and standard indexes are optimized to do this.

Computing a single cosine: For every term i and each doc j, store the term frequency tf_{i,j}. There are some tradeoffs on whether to store the raw term count, the term weight, or the weight scaled by idf_i. Accumulate the component-wise sum. More on speeding up a single cosine later on. If you're indexing 2,469,940,685 documents (web search), an array of accumulators is infeasible. Ideas?
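One common answer to the "Ideas?" question is to keep an accumulator only for documents that actually occur in some query term's postings, e.g. in a hash map rather than a dense array. Below is a minimal term-at-a-time sketch in Python, assuming an in-memory index that maps each term to a list of (doc_id, tf) postings and precomputed document vector lengths; the names (cosine_scores, index, doc_lengths) are illustrative, not part of any particular system.

```python
import math
from collections import defaultdict

def cosine_scores(query_terms, index, doc_lengths, n_docs):
    """Term-at-a-time scoring: accumulate tf-idf contributions per document.

    index: dict mapping term -> list of (doc_id, tf) postings
    doc_lengths: dict mapping doc_id -> Euclidean length of its tf-idf vector
    """
    accumulators = defaultdict(float)          # only docs we actually touch
    for term in query_terms:
        postings = index.get(term, [])
        if not postings:
            continue
        idf = math.log(n_docs / len(postings))
        w_tq = idf                             # query tf taken as 1 for a short query
        for doc_id, tf in postings:
            w_td = (1.0 + math.log(tf)) * idf  # log-scaled tf times idf
            accumulators[doc_id] += w_td * w_tq
    # Divide by document length; the query length is the same for every doc,
    # so it does not affect the ranking and is omitted.
    return {d: s / doc_lengths[d] for d, s in accumulators.items()}
```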

Encoding document frequencies: Add tf_{d,t} to the postings lists. Almost always store the raw frequency and scale at runtime. Always store the term and its freq in main memory. Unary code is very effective here (why?); the γ code (as in the 1st class) is an even better choice. Overall, this requires little additional space. [Figure: postings lists annotated with term frequencies – e.g., abacus 8: (1,2) (7,3) (83,1) (87,2) …; aargh 2: (1,1) (5,1) (13,1) (17,1) …; acacia 35: (7,1) (8,2) (40,1) (97,3) …, where each entry is (docID, tf); column labels: Freq's, Postings.]
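As a concrete illustration of the γ code mentioned above (a sketch of the textbook Elias γ scheme; the slide itself gives no code): the code for an integer x ≥ 1 is the unary code of the offset length followed by the offset, where the offset is x in binary with its leading 1 dropped.

```python
def unary(n):
    """Unary code for n >= 0: n ones followed by a terminating zero."""
    return "1" * n + "0"

def gamma(x):
    """Elias gamma code for x >= 1: unary(length of offset) + offset,
    where the offset is the binary form of x with the leading 1 removed."""
    assert x >= 1
    offset = bin(x)[3:]          # bin(9) == '0b1001' -> offset '001'
    return unary(len(offset)) + offset

# Examples: gamma(1) == '0', gamma(2) == '100', gamma(9) == '1110001'
```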

Computing the k largest cosines: selection vs. sorting. Typically we want to retrieve the top k docs (in the cosine ranking for the query), not totally order all docs in the corpus – just pick off the docs with the k highest cosines. Use a heap for selecting the top k: a binary tree in which each node's value > the values of its children. It takes 2n operations to construct; then each of the k "winners" is read off in 2 log n steps. For n = 1M, k = 100, this is about 10% of the cost of sorting.
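A sketch of heap-based selection using Python's heapq. Note that heapq.nlargest maintains a min-heap of size k rather than building the max-heap described on the slide, but the point – selecting the top k without fully sorting all n scores – is the same. The scores argument is the illustrative accumulator dictionary from the earlier sketch.

```python
import heapq

def top_k(scores, k):
    """scores: dict doc_id -> cosine score.
    Returns the k highest-scoring (score, doc_id) pairs, best first,
    without sorting all n documents."""
    return heapq.nlargest(k, ((s, d) for d, s in scores.items()))

# e.g. best = top_k(cosine_scores(query_terms, index, doc_lengths, n_docs), k=10)
```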

Bottleneck: we still need to first compute cosines from the query to each of the n docs – several seconds for n = 1M. We can select from only the non-zero cosines: accumulators are needed only for the union of the query terms' postings lists (<< 1M). We can further limit to documents with non-zero cosines on rare (high-idf) words, or enforce conjunctive search (à la Google): non-zero cosines on all words in the query. Then the number of accumulators is at most the min of the postings list sizes. But this is still potentially expensive.
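A sketch of the conjunctive restriction: intersect the query terms' postings, starting from the shortest list, and keep accumulators only for documents containing every term. The index layout is the same illustrative dict of (doc_id, tf) postings assumed above.

```python
def conjunctive_candidates(query_terms, index):
    """Doc ids that contain every query term (intersection of postings)."""
    postings_lists = [index.get(t, []) for t in query_terms]
    if not postings_lists or any(not p for p in postings_lists):
        return set()
    postings_lists.sort(key=len)                 # rarest (shortest) list first
    candidates = {doc_id for doc_id, _ in postings_lists[0]}
    for plist in postings_lists[1:]:
        candidates &= {doc_id for doc_id, _ in plist}
        if not candidates:
            break                                # no doc contains all terms
    return candidates
```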

Can we avoid this? Yes, but we may occasionally get an answer wrong: a doc not in the true top k may creep into the answer.

1st heuristic: term-wise candidates. Preprocess: treat each term as a 1-term query and pre-compute, for each term/"query", its k nearest docs. This is a lot of preprocessing. Result: a "preferred list" for each term. Search: for an n-term query, take the union of the n preferred lists – call this set S. Compute cosines from the query to only the docs in S, and choose the top k.
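A sketch of the preferred-list heuristic, reusing the illustrative cosine_scores and top_k helpers above. For simplicity the search step scores candidates with the full index and then filters to S; a production system would instead restrict the postings traversal to S.

```python
def build_preferred_lists(index, doc_lengths, n_docs, k):
    """Preprocess: for each term, the k docs scoring highest on that term alone."""
    return {term: [doc for _, doc in
                   top_k(cosine_scores([term], index, doc_lengths, n_docs), k)]
            for term in index}

def search_preferred(query_terms, preferred, index, doc_lengths, n_docs, k):
    """Search: union the query terms' preferred lists into S, rank only docs in S."""
    S = set()
    for term in query_terms:
        S.update(preferred.get(term, []))
    scores = cosine_scores(query_terms, index, doc_lengths, n_docs)
    return top_k({d: scores.get(d, 0.0) for d in S}, k)
```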

Exercises: Fill in the details of the calculation: which docs go into the preferred list for a term? Devise a small example where this method gives an incorrect ranking: when might a good candidate be eliminated?

2nd heuristic: sampling and pre-grouping. First run a pre-processing phase: pick √n docs at random and call these the leaders. For each other doc, pre-compute its nearest leader; the docs attached to a leader are its followers. Likely, each leader has ~√n followers, due to the random sampling. Process a query as follows: given query Q, find its nearest leader L, then seek the k nearest docs from among L's followers.
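A sketch of the sampling/pre-grouping heuristic on dense, length-normalized document vectors (a numpy matrix with one row per doc). The matrix layout, the seed, and the function names are assumptions made for illustration only.

```python
import numpy as np

def build_clusters(doc_vectors, seed=0):
    """Pick sqrt(n) random leaders and attach every doc to its nearest leader."""
    n = doc_vectors.shape[0]
    rng = np.random.default_rng(seed)
    leaders = rng.choice(n, size=int(np.sqrt(n)), replace=False)
    sims = doc_vectors @ doc_vectors[leaders].T     # cosine sims (rows are unit length)
    nearest = sims.argmax(axis=1)                   # per doc, an index into `leaders`
    followers = {int(l): [] for l in leaders}
    for doc, li in enumerate(nearest):
        followers[int(leaders[li])].append(doc)
    return leaders, followers

def query_clusters(q, doc_vectors, leaders, followers, k):
    """Find the query's nearest leader, then the k nearest docs among its followers."""
    best_leader = int(leaders[np.argmax(doc_vectors[leaders] @ q)])
    cand = followers[best_leader]
    scores = doc_vectors[cand] @ q
    order = np.argsort(-scores)[:k]
    return [(float(scores[i]), cand[i]) for i in order]
```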

Visualization. [Figure: a query point shown together with the leaders and their attached followers in the document space.]

Why use random sampling? It is fast, and the leaders reflect the data distribution.

General variants: have each follower attached to a = 3 (say) nearest leaders. From the query, find b = 4 (say) nearest leaders and their followers. Can recurse on the leader/follower construction.

Exercises: To find the nearest leader in step 1, how many cosine computations do we do? What is the effect of the constants a, b on the previous slide? Devise an example where this is likely to fail – i.e., where we miss one of the k nearest docs. Likely under random sampling.

Dimensionality reduction: What if we could take our vectors and "pack" them into fewer dimensions (say 50,000 → 100) while preserving distances? (Well, almost.) This speeds up cosine computations. Two methods: "latent semantic indexing" and random projection.

Random projection onto k << m axes: Choose a random direction x_1 in the vector space. For i = 2 to k, choose a random direction x_i that is orthogonal to x_1, x_2, …, x_{i-1}. Project each document vector into the subspace spanned by {x_1, x_2, …, x_k}.

E.g., from 3 to 2 dimensions. [Figure: documents d_1, d_2 drawn in the 3-D term space (t_1, t_2, t_3) and again in the projected 2-D space (x_1, x_2).] x_1 is a random direction in (t_1, t_2, t_3) space; x_2 is chosen randomly but orthogonal to x_1.

Guarantee With high probability, relative distances are (approximately) preserved by projection. Pointer to precise theorem in Resources. (Using random projection is a newer idea: it’s somewhat surprising, but it works very well.)

Computing the random projection: projecting n vectors from m dimensions down to k dimensions. Start with the m × n terms × docs matrix, A. Find a random k × m orthogonal projection matrix R. Compute the matrix product W = R × A. The j-th column of W is the vector corresponding to doc j, but now in k << m dimensions.
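A minimal numpy sketch of the matrix form described above. One way (an assumption, not the only construction) to obtain a random k × m matrix with orthonormal rows is to QR-factorize a random Gaussian matrix:

```python
import numpy as np

def random_projection(A, k, seed=0):
    """Project the m x n term-document matrix A down to k dimensions.

    Returns W = R @ A, where R is k x m with orthonormal rows, so the j-th
    column of W is document j expressed in the k-dimensional subspace.
    """
    m, _ = A.shape
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((m, k))
    Q, _ = np.linalg.qr(G)       # Q is m x k with orthonormal columns
    R = Q.T                      # the k x m orthogonal projection matrix
    return R @ A
```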

Cost of computation This takes a total of kmn multiplications. Expensive – see Resources for ways to do essentially the same thing, quicker. Exercise: by projecting from 50,000 dimensions down to 100, are we really going to make each cosine computation faster? Why?

Latent semantic indexing (LSI): another technique for dimensionality reduction. Random projection was data-independent; LSI, on the other hand, is data-dependent: it eliminates redundant axes and pulls together "related" axes – hopefully, e.g., car and automobile. More in the lectures on Clustering.
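For contrast with the random projection sketch, here is a minimal LSI-style reduction via truncated SVD of the term-document matrix (a sketch of the standard construction; the clustering lectures cover the details).

```python
import numpy as np

def lsi(A, k):
    """Data-dependent reduction: truncated SVD of the m x n term-document matrix A.

    Keep only the k largest singular values; the columns of the returned k x n
    matrix are the documents expressed in the k-dimensional latent concept space.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return np.diag(s[:k]) @ Vt[:k, :]
```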