CS276 Information Retrieval and Web Search Lecture 7: Vector space retrieval.

Recap of the last lecture: parametric and field searches; zones in documents; scoring documents with zone weighting; index support for scoring; tf.idf and vector spaces.

This lecture: vector space scoring; efficiency considerations; nearest neighbors and approximations.

Documents as vectors. At the end of Lecture 6 we said: each doc d can now be viewed as a vector of wf.idf values, one component for each term. So we have a vector space: terms are the axes, and docs live in this space. Even with stemming, the space may have 50,000+ dimensions.

Why turn docs into vectors? First application: Query-by-example Given a doc d, find others “like” it. Now that d is a vector, find vectors (docs) “near” it.

Intuition. Postulate: documents that are "close together" in the vector space talk about the same things. [Figure: docs d1 through d5 plotted on term axes t1, t2, t3, with angles θ and φ between document vectors.]

Desiderata for proximity: if d1 is near d2, then d2 is near d1. If d1 is near d2, and d2 is near d3, then d1 is not far from d3. No doc is closer to d than d itself.

First cut. Idea: the distance between d1 and d2 is the length of the vector |d1 - d2|, i.e., Euclidean distance. Why is this not a great idea? We still haven't dealt with the issue of length normalization: short documents would be more similar to each other by virtue of length, not topic. However, we can implicitly normalize by looking at angles instead.

Cosine similarity. The closeness of vectors d1 and d2 is captured by the cosine of the angle θ between them. Note: this is a similarity, not a distance; there is no triangle inequality for similarity. [Figure: d1 and d2 plotted on term axes t1, t2, t3, with angle θ between them.]

Cosine similarity. A vector can be normalized (given a length of 1) by dividing each of its components by its length; here we use the L2 norm, |d|_2 = sqrt(Σ_i d_i^2). This maps vectors onto the unit sphere: |d|_2 = 1. Then longer documents don't get more weight.

Cosine similarity. The cosine of the angle between two vectors: sim(d1, d2) = (d1 · d2) / (|d1| |d2|). The denominator involves the lengths of the vectors; this is the normalization.

Normalized vectors. For normalized vectors, the cosine is simply the dot product: sim(d1, d2) = d1 · d2.

Example. Docs: Austen's Sense and Sensibility (SAS) and Pride and Prejudice (PAP); Brontë's Wuthering Heights (WH). With length-normalized tf weights over three shared terms, SAS ≈ (.996, .087, .017), PAP ≈ (.993, .120, 0.0), WH ≈ (.847, .466, .254): cos(SAS, PAP) = .996 x .993 + .087 x .120 + .017 x 0.0 ≈ 0.999; cos(SAS, WH) = .996 x .847 + .087 x .466 + .017 x .254 ≈ 0.889.
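A minimal sketch (not from the slides) tying the last few slides together: normalize with the L2 norm, then the cosine of two normalized vectors is just their dot product. The three-component vectors below are the length-normalized tf weights reconstructed in the example above.

```python
import numpy as np

def normalize(v):
    """Map a vector onto the unit sphere using the L2 norm."""
    return v / np.linalg.norm(v)

def cosine(u, v):
    """Cosine similarity; for already-normalized vectors this reduces to a dot product."""
    return float(np.dot(normalize(u), normalize(v)))

# Length-normalized tf weights over the three shared terms (see the example above).
sas = np.array([0.996, 0.087, 0.017])   # Sense and Sensibility
pap = np.array([0.993, 0.120, 0.0])     # Pride and Prejudice
wh  = np.array([0.847, 0.466, 0.254])   # Wuthering Heights

print(round(cosine(sas, pap), 3))   # 0.999
print(round(cosine(sas, wh), 3))    # 0.889
```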

Cosine similarity exercises. Exercise: rank the following by decreasing cosine similarity, assuming tf-idf weighting: (a) two docs that have only frequent words (the, a, an, of) in common; (b) two docs that have no words in common; (c) two docs that have many rare words in common (wingspan, tailfin).

Exercise. Euclidean distance between vectors: |d1 - d2| = sqrt(Σ_i (d_{1,i} - d_{2,i})^2). Show that, for normalized vectors, Euclidean distance gives the same proximity ordering as the cosine measure.
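A short derivation (not on the slide) for this exercise, assuming unit-length vectors:

```latex
% For normalized vectors, \|d_1\| = \|d_2\| = 1:
\|d_1 - d_2\|^2
  = \|d_1\|^2 + \|d_2\|^2 - 2\, d_1 \cdot d_2
  = 2\,\bigl(1 - \cos\theta\bigr).
% Euclidean distance is therefore a monotonically decreasing function of the
% cosine, so ranking by increasing distance and by decreasing cosine gives the
% same proximity ordering.
```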

Queries in the vector space model. Central idea: the query as a vector. We regard the query as a short document, and we return the documents ranked by the closeness of their vectors to the query, also represented as a vector. Note that d_q, the query's vector, is very sparse!

Summary: What's the point of using vector spaces? A well-formed algebraic space for retrieval. Key: a user's query can be viewed as a (very) short document. The query becomes a vector in the same space as the docs, so we can measure each doc's proximity to it. This gives a natural measure of scores/ranking; it is no longer Boolean. Queries are expressed as bags of words. Other similarity measures exist; see the Resources for a survey.

Digression: spamming indices. This was all invented before the days when people were in the business of spamming web search engines. Consider: indexing a sensible, passive document collection vs. an active document collection, where people (and indeed, service companies) are shaping documents in order to maximize scores. Vector space similarity may not be as useful in this context.

Interaction: vectors and phrases. Scoring phrases doesn't fit naturally into the vector space world: consider "tangerine trees" and "marmalade skies". Positional indexes don't calculate or store tf.idf information for "tangerine trees". Biword indexes treat certain phrases as terms; for these, we can pre-compute tf.idf, though there is a theoretical problem of correlated dimensions. Problem: we cannot expect end users formulating queries to know which phrases are indexed. We can, however, use a positional index to boost or ensure phrase occurrence.

Vectors and Boolean queries. Vectors and Boolean queries really don't work together very well. In the space of terms, vector proximity selects by spheres: e.g., all docs having cosine similarity ≥ 0.5 to the query. Boolean queries, on the other hand, select by (hyper-)rectangles and their unions/intersections. Round peg, square hole.

Vectors and wild cards How about the query tan* marm*? Can we view this as a bag of words? Thought: expand each wild-card into the matching set of dictionary terms. Danger – unlike the Boolean case, we now have tfs and idfs to deal with. Net – not a good idea.

Vector spaces and other operators. Vector space queries are apt for no-syntax, bag-of-words queries: a clean metaphor for similar-document queries. They are not a good combination with Boolean, wild-card, or positional query operators. But …

Query language vs. scoring. We may allow the user a certain query language, say free-text basic queries, with phrase, wildcard, etc. in Advanced Queries. For scoring (oblivious to the user) we may use all of the above; e.g., for a free-text query: highest-ranked hits have the query as a phrase; next come docs that have all query terms near each other; then docs that have some query terms, or all of them spread out, with tf x idf weights for scoring.

Exercises How would you augment the inverted index built in lectures 1–3 to support cosine ranking computations? Walk through the steps of serving a query. The math of the vector space model is quite straightforward, but being able to do cosine ranking efficiently at runtime is nontrivial

Efficient cosine ranking. Find the k docs in the corpus "nearest" to the query, i.e., the k largest query-doc cosines. Efficient ranking means: computing a single cosine efficiently, and choosing the k largest cosine values efficiently. Can we do this without computing all n cosines (n = number of documents in the collection)?

Efficient cosine ranking. What we're doing in effect: solving the k-nearest neighbor problem for a query vector. In general, we do not know how to do this efficiently for high-dimensional spaces. But it is solvable for short queries, and standard indexes are optimized to do this.

Computing a single cosine. For every term i and each doc j, store the term frequency tf_ij. There are some tradeoffs on whether to store the term count, the term weight, or the weight multiplied by idf_i. At query time, use an array of accumulators A_j to accumulate the component-wise sum. If you're indexing 5 billion documents (web search), an array of accumulators is infeasible. Ideas?
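A rough sketch of term-at-a-time scoring with one accumulator per candidate document. The index structures (a postings dict of term → [(doc_id, tf), …], precomputed idf, and document lengths) are assumed names for illustration, not from the slides.

```python
import math
from collections import defaultdict

def cosine_scores(query_terms, postings, idf, doc_length):
    """Term-at-a-time cosine scoring with accumulators A_j.
    postings: term -> list of (doc_id, tf); doc_length: doc_id -> L2 norm of the doc vector."""
    acc = defaultdict(float)                       # accumulators, one per doc touched
    for t in query_terms:
        w_tq = idf.get(t, 0.0)                     # query term weight (tf in the query assumed 1)
        for doc_id, tf in postings.get(t, []):
            w_td = (1 + math.log10(tf)) * idf[t]   # tf.idf weight of term t in the doc
            acc[doc_id] += w_td * w_tq
    # Dividing by the query length is the same constant for every doc and does not
    # change the ranking, so we normalize only by document length.
    return {d: score / doc_length[d] for d, score in acc.items()}
```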

Encoding document frequencies. Add tf_{t,d} to the postings lists, now almost always as a raw frequency, scaled at runtime. Unary code is quite effective here; the γ code (Lecture 3) is an even better choice. Overall, this requires little additional space. Why? [Figure: postings lists with (docID, tf) pairs: abacus 8: (1,2) (7,3) (83,1) (87,2) …; aargh 10: (1,1) (5,1) (13,1) (17,1) …; acacia 35: (7,1) (8,2) (40,1) (97,3) …]
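As a reminder of how compact small tf values are, here is a sketch of the standard Elias γ code from Lecture 3 (a generic construction, not specific to these slides):

```python
def gamma_encode(x: int) -> str:
    """Elias gamma code of a positive integer: the offset is the binary
    representation of x without its leading 1 bit, preceded by the length
    of that offset written in unary (1s terminated by a 0)."""
    assert x >= 1
    offset = bin(x)[3:]               # drop '0b' and the leading 1 bit
    return "1" * len(offset) + "0" + offset

# Small tf values get very short codes:
print(gamma_encode(1), gamma_encode(2), gamma_encode(3))   # 0 100 101
```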

Computing the k largest cosines: selection vs. sorting. Typically we want to retrieve the top k docs (in the cosine ranking for the query), not to totally order all docs in the corpus. Can we pick off the docs with the k highest cosines?

Use a heap for selecting the top k. A heap is a binary tree in which each node's value is greater than the values of its children. It takes 2n operations to construct; then each of the k "winners" is read off in 2 log n steps. For n = 1M and k = 100, this is about 10% of the cost of sorting.
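In Python this selection step can be sketched with the heapq module, which internally keeps a size-k heap (a variant of the max-heap construction described above rather than the exact same procedure):

```python
import heapq

def top_k(cosines, k):
    """Return the k (score, doc_id) pairs with the largest cosines without
    fully sorting all n documents: roughly O(n log k) instead of O(n log n)."""
    return heapq.nlargest(k, ((score, doc_id) for doc_id, score in cosines.items()))

# Example use with the earlier sketch:
# top_k(cosine_scores(query_terms, postings, idf, doc_length), k=100)
```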

Bottleneck. We still need to first compute cosines from the query to each of the n docs: several seconds for n = 1M. We can select from only the non-zero cosines, so we need accumulators only for the union of the query terms' postings lists (<< 1M): on the query aargh abacus we would only use accumulators 1, 5, 7, 13, 17, 83, 87 (see the postings on the previous slide). This is better iff this set is < 20% of n.

Removing bottlenecks. We can further limit to documents with non-zero cosines on rare (high-idf) words, or enforce conjunctive search (a la Google): non-zero cosines on all words in the query. This gets the number of accumulators down to at most the minimum of the postings-list sizes. But in general it is still potentially expensive, and sometimes we have to fall back to (expensive) soft-conjunctive search: if no docs match a 4-term query, look for 3-term subsets, etc.
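A sketch of both candidate-limiting filters (the conjunctive one and the high-idf one); the idf threshold is an illustrative choice, not a value from the slides:

```python
def candidate_docs(query_terms, postings, idf, conjunctive=False, idf_threshold=2.0):
    """Limit accumulators: either to docs containing every query term (conjunctive
    search, a la Google) or to docs containing at least one rare, high-idf term."""
    if conjunctive:
        doc_sets = [set(d for d, _ in postings.get(t, [])) for t in query_terms]
        return set.intersection(*doc_sets) if doc_sets else set()
    rare_terms = [t for t in query_terms if idf.get(t, 0.0) >= idf_threshold]
    return set(d for t in rare_terms for d, _ in postings.get(t, []))
```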

Can we avoid all this computation? Yes, but we may occasionally get an answer wrong: a doc not in the top k may creep into the answer.

Limiting the accumulators: best m candidates. Preprocess: pre-compute, for each term, its m nearest docs (treat each term as a 1-term query). This is a lot of preprocessing. Result: a "preferred list" for each term. Search: for a t-term query, take the union of the t preferred lists; call this set S, where |S| ≤ mt. Compute cosines from the query only to the docs in S, and choose the top k. Empirically, we need to pick m > k for this to work well.
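A sketch under the same assumed index structures as before (postings, idf, doc_length, and the scoring helper are illustrative names, not from the slides):

```python
import heapq
import math

def build_preferred_lists(postings, idf, doc_length, m):
    """Preprocess: for each term, keep the m docs that score highest on the
    corresponding 1-term query (the term's 'preferred list')."""
    preferred = {}
    for t, plist in postings.items():
        scored = (((1 + math.log10(tf)) * idf[t] / doc_length[d], d) for d, tf in plist)
        preferred[t] = [d for _, d in heapq.nlargest(m, scored)]
    return preferred

def search_preferred(query_terms, preferred, score_of_doc, k):
    """Search: score only the union S of the query terms' preferred lists, |S| <= m*t."""
    S = set(d for t in query_terms for d in preferred.get(t, []))
    return heapq.nlargest(k, ((score_of_doc(d), d) for d in S))
```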

Exercises Fill in the details of the calculation: Which docs go into the preferred list for a term? Devise a small example where this method gives an incorrect ranking.

Limiting the accumulators: frequency/impact-ordered postings. Idea: we only want accumulators for documents whose wf_{t,d} is high enough. We sort postings lists by this quantity. We retrieve terms in order of idf, and then retrieve only one block of the postings list for each term. We continue to process more blocks of postings until we have enough accumulators; we can continue with the list that ended with the highest wf_{t,d}. The number of accumulators is bounded. (Anh et al. 2001)
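A simplified sketch of the idea; real impact-ordered schemes interleave blocks across terms more carefully. Here postings_by_impact is an assumed structure mapping each term to blocks of (doc_id, wf) pairs sorted by decreasing wf.

```python
def impact_ordered_scores(query_terms, postings_by_impact, idf, accumulator_budget):
    """Process one block of impact-sorted postings at a time, highest-idf terms
    first, and stop creating accumulators once the budget is reached."""
    acc = {}
    for t in sorted(query_terms, key=lambda term: -idf.get(term, 0.0)):
        for block in postings_by_impact.get(t, []):     # blocks ordered by decreasing wf
            for doc_id, wf in block:
                acc[doc_id] = acc.get(doc_id, 0.0) + wf * idf[t]
            if len(acc) >= accumulator_budget:          # bounded number of accumulators
                return acc
    return acc
```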

Cluster pruning: preprocessing. Pick √n docs at random: call these leaders. For each other doc, pre-compute its nearest leader. Docs attached to a leader are its followers; likely, each leader has ~ √n followers.

Cluster pruning: query processing. Process a query as follows: given query Q, find its nearest leader L; then seek the k nearest docs from among L's followers.
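A compact sketch of both phases, with a = b = 1 as on these slides; the leaders/followers structures and the cosine helper are illustrative, not from the slides.

```python
import random
import numpy as np

def cos(u, v):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def build_clusters(doc_vectors, seed=0):
    """Preprocessing: pick ~sqrt(n) random leaders and attach every doc to its nearest leader."""
    n = len(doc_vectors)
    random.seed(seed)
    leaders = random.sample(range(n), max(1, int(n ** 0.5)))
    followers = {L: [] for L in leaders}
    for d in range(n):
        nearest = max(leaders, key=lambda L: cos(doc_vectors[d], doc_vectors[L]))
        followers[nearest].append(d)
    return leaders, followers

def query_clusters(q, doc_vectors, leaders, followers, k):
    """Query processing: find the nearest leader L, then rank only L's followers."""
    L = max(leaders, key=lambda lead: cos(q, doc_vectors[lead]))
    return sorted(followers[L], key=lambda d: -cos(q, doc_vectors[d]))[:k]
```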

Visualization. [Figure: a query point, the leaders, and their followers in the vector space.]

Why use random sampling? It is fast, and the leaders reflect the data distribution.

General variants. Have each follower attached to its a = 3 (say) nearest leaders. From the query, find the b = 4 (say) nearest leaders and their followers. We can recur on the leader/follower construction.

Exercises. To find the nearest leader in step 1, how many cosine computations do we do? Why did we use √n leaders in the first place? What is the effect of the constants a, b on the previous slide? Devise an example where this is likely to fail, i.e., where we miss one of the k nearest docs. Is this likely under random sampling?

Dimensionality reduction. What if we could take our vectors and "pack" them into fewer dimensions (say 50,000 → 100) while preserving distances? (Well, almost.) This speeds up cosine computations. Two methods: random projection and "latent semantic indexing".

Random projection onto k << m axes. Choose a random direction x_1 in the vector space. For i = 2 to k, choose a random direction x_i that is orthogonal to x_1, x_2, …, x_{i-1}. Project each document vector into the subspace spanned by {x_1, x_2, …, x_k}.

E.g., from 3 to 2 dimensions. [Figure: docs d1, d2 in (t1, t2, t3) space projected onto the plane spanned by x1 and x2.] x_1 is a random direction in (t_1, t_2, t_3) space; x_2 is chosen randomly but orthogonal to x_1, so the dot product of x_1 and x_2 is zero.

Guarantee With high probability, relative distances are (approximately) preserved by projection. Pointer to precise theorem in Resources.

Computing the random projection. Projecting n vectors from m dimensions down to k dimensions: start with the m × n term-document matrix A. Find a random k × m orthogonal projection matrix R. Compute the matrix product W = R × A. The j-th column of W is the vector corresponding to doc j, but now in k << m dimensions.
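A sketch of this computation with NumPy; taking the k orthogonal random directions from the QR decomposition of a random Gaussian matrix is one standard way to do it, not something prescribed by the slide.

```python
import numpy as np

def random_projection(A, k, seed=0):
    """Project the m x n term-document matrix A down to k dimensions.
    The rows of R are k mutually orthogonal random directions x_1..x_k,
    obtained here via QR decomposition of a random Gaussian matrix;
    the j-th column of W = R @ A is doc j's vector in k dimensions."""
    m, _n = A.shape
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((m, k)))   # m x k, orthonormal columns
    R = Q.T                                            # k x m projection matrix
    return R @ A                                       # k x n projected doc vectors
```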

Cost of computation This takes a total of kmn multiplications. Expensive – see Resources for ways to do essentially the same thing, quicker. Question: by projecting from 50,000 dimensions down to 100, are we really going to make each cosine computation faster? Why?

Latent semantic indexing (LSI). Another technique for dimension reduction. Random projection was data-independent; LSI, on the other hand, is data-dependent: it eliminates redundant axes and pulls together "related" axes, hopefully car and automobile. More on LSI when studying clustering, later in this course.

Resources.
IIR 7; MG Ch.; MIR 2.5, 2.7.2; FSNLP 15.4.
Anh, V.N., O. de Kretser, and A. Moffat. Vector-Space Ranking with Effective Early Termination. Proc. 24th Annual International ACM SIGIR Conference, 2001.
Anh, V.N. and A. Moffat. Pruned query evaluation using pre-computed impacts. SIGIR 2006.
Random projection theorem: S. Dasgupta and A. Gupta. An elementary proof of the Johnson-Lindenstrauss Lemma (1999).
Faster random projection: A.M. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo Algorithms for Finding Low-Rank Approximations. IEEE Symposium on Foundations of Computer Science.