Information Retrieval and Web Search


Information Retrieval and Web Search
IR models: Vector Space Model
Instructor: Rada Mihalcea [Edited by J. Wiebe]
[Note: Some slides in this set were adapted from an IR course taught by Ray Mooney at UT Austin, who in turn adapted them from Joydeep Ghosh, who in turn adapted them …]

Vector-Space Model
After preprocessing, t distinct terms remain; these unique terms form the VOCABULARY. These "orthogonal" terms form a vector space of dimension t = |vocabulary| (2 terms → two-dimensional; …; n terms → n-dimensional). Each term i in a document or query j is given a real-valued weight w_ij. Both documents and queries are expressed as t-dimensional vectors: d_j = (w_1j, w_2j, …, w_tj).

Vector-Space Model
Query as vector: we regard the query as a short document, and we return the documents ranked by the closeness of their vectors to the query vector. The vector model was developed in the SMART system (Salton, c. 1970) and is standard among TREC participants and web IR systems.

Graphic Representation
Example: D1 = 2T1 + 3T2 + 5T3; D2 = 3T1 + 7T2 + T3; Q = 0T1 + 0T2 + 2T3.
[Figure: D1, D2, and Q drawn as vectors in the three-dimensional space with axes T1, T2, T3.]
Is D1 or D2 more similar to Q?

Document Collection Representation
A collection of n documents can be represented in the vector space model by a term-document matrix. An entry in the matrix corresponds to the "weight" of a term in the document; zero means the term has no significance in the document or simply doesn't occur in it.

          T1     T2    …    Tt
    D1    w_11   w_21  …    w_t1
    D2    w_12   w_22  …    w_t2
    :      :      :          :
    Dn    w_1n   w_2n  …    w_tn
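To make the matrix concrete, here is a minimal Python sketch with hypothetical toy documents, using raw term counts as weights (real systems use tf-idf weights and sparse storage, as discussed below):

    # Build a term-document matrix from a toy preprocessed collection.
    # Rows are documents, columns are vocabulary terms; an entry is the
    # weight of that term in that document (raw counts in this sketch).
    docs = {"D1": ["web", "search", "search"],
            "D2": ["information", "retrieval", "web"]}

    vocabulary = sorted({term for words in docs.values() for term in words})
    matrix = {name: [words.count(term) for term in vocabulary]
              for name, words in docs.items()}

    # vocabulary == ['information', 'retrieval', 'search', 'web']
    # matrix == {'D1': [0, 0, 2, 1], 'D2': [1, 1, 0, 1]}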

Term Weights: Term Frequency More frequent terms in a document are more important, i.e. more indicative of the topic. fij = frequency of term i in document j

Term Weights: Inverse Document Frequency
Terms that appear in many different documents are less indicative of overall topic.
df_i = document frequency of term i = number of documents containing term i
idf_i = inverse document frequency of term i = log2(N / df_i), where N is the total number of documents.
This is an indication of a term's discrimination power. The log is used to dampen the effect relative to tf.

TF-IDF Weighting
A typical weighting is tf-idf weighting:
w_ij = tf_ij · idf_i = tf_ij · log2(N / df_i)
A term occurring frequently in the document but rarely in the rest of the collection is given high weight. Experimentally, tf-idf has been found to work well, and it also has a theoretical justification (Papineni, NAACL 2001).

Computing TF-IDF: An Example
Given a document containing terms with the following frequencies: A(3), B(2), C(1). Assume the collection contains 10,000 documents, and the document frequencies of these terms are: A(50), B(1300), C(250). Then (using the natural log here; the base only rescales all weights by a constant factor):
A: tf = 3; idf = ln(10000/50) ≈ 5.3; tf-idf ≈ 15.9
B: tf = 2; idf = ln(10000/1300) ≈ 2.0; tf-idf ≈ 4.1
C: tf = 1; idf = ln(10000/250) ≈ 3.7; tf-idf ≈ 3.7
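This arithmetic is easy to check; a quick sketch reproducing the numbers above (natural log, as in the example):

    import math

    N = 10_000                            # documents in the collection
    tf = {"A": 3, "B": 2, "C": 1}         # term frequencies in the document
    df = {"A": 50, "B": 1300, "C": 250}   # document frequencies

    for term in tf:
        idf = math.log(N / df[term])      # natural log, matching the example
        print(term, round(idf, 1), round(tf[term] * idf, 1))
    # A 5.3 15.9
    # B 2.0 4.1
    # C 3.7 3.7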

Query Vector
The query vector is typically treated as a document and is also tf-idf weighted. An alternative is for the user to supply weights for the given query terms.

Similarity Measure
We now have vectors for all documents in the collection and a vector for the query; how do we compute similarity? A similarity measure is a function that computes the degree of similarity between two vectors. Using a similarity measure between the query and each document, it is possible to rank the retrieved documents in order of presumed relevance, and to enforce a threshold so that the size of the retrieved set can be controlled.

Desiderata for proximity
If d1 is near d2, then d2 is near d1. If d1 is near d2, and d2 is near d3, then d1 is not far from d3. No document is closer to d than d itself (it is sometimes useful to take the similarity between a document d and itself as the maximum possible similarity).

First cut: Euclidean distance
The distance between vectors d1 and d2 is the length of the vector d1 − d2, the Euclidean distance:
|d1 − d2| = sqrt(Σ_i (w_i1 − w_i2)²)
Why is this not a great idea? We still haven't dealt with the issue of length normalization: long documents would be more similar to each other by virtue of length, not topic. However, we can implicitly normalize by looking at angles instead.

Second cut: Manhattan Distance
Also called the "city block" measure, based on the idea that in American cities you generally cannot follow a straight line between two points. It uses the formula:
d(d1, d2) = Σ_i |w_i1 − w_i2|
[Figure: a city-block path between two points in the x-y plane.]
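A minimal sketch of both distance measures, using Q and D1 from the earlier example plus a hypothetical document with the same topic mix as D1 but ten times longer, shows how raw distances penalize length alone:

    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def manhattan(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    q       = [0, 0, 2]       # Q from the earlier example
    d1      = [2, 3, 5]       # D1 from the earlier example
    d1_long = [20, 30, 50]    # same topic mix as D1, ten times longer

    # d1_long points in the same direction as d1, yet both distances
    # penalize it heavily for its length alone:
    print(euclidean(q, d1), euclidean(q, d1_long))  # ~4.69 vs ~60.03
    print(manhattan(q, d1), manhattan(q, d1_long))  # 8 vs 98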

Third cut: Inner Product
Similarity between the vectors for document dj and query q can be computed as the vector inner product:
sim(dj, q) = dj · q = Σ_(i=1..t) w_ij · w_iq
where w_ij is the weight of term i in document j and w_iq is the weight of term i in the query. For binary vectors, the inner product is the number of matched query terms in the document (the size of the intersection). For weighted term vectors, it is the sum of the products of the weights of the matched terms.

Properties of Inner Product
The inner product favors long documents with a large number of unique terms (again, the issue of normalization). It measures how many terms are matched, but not how many terms are not matched.
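A sketch of inner-product similarity on hypothetical toy vectors; with 0/1 weights it counts matched query terms, with real-valued weights it sums the products of matched weights:

    def inner_product(d, q):
        # only terms with nonzero weight in BOTH vectors contribute
        return sum(wd * wq for wd, wq in zip(d, q))

    # binary vectors: the score is the number of matched query terms
    print(inner_product([1, 1, 0, 1], [1, 0, 0, 1]))        # 2

    # weighted (e.g. tf-idf) vectors: sum of products of matched weights
    print(inner_product([0.5, 1.2, 0.0], [2.0, 0.0, 0.3]))  # 1.0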

Cosine similarity
The closeness of vectors d1 and d2 is captured by the cosine of the angle θ between them. Note: this is a similarity, not a distance.
[Figure: vectors d1 and d2 in the term space spanned by t1, t2, t3, with angle θ between them.]

Cosine similarity
The cosine of the angle between two vectors:
cosSim(dj, q) = (dj · q) / (|dj| |q|) = Σ_i w_ij · w_iq / (sqrt(Σ_i w_ij²) · sqrt(Σ_i w_iq²))
The denominator involves the lengths of the vectors, so the cosine measure is also known as the normalized inner product. http://en.wikipedia.org/wiki/Euclidean_distance (not required; fyi)
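In code, the cosine is just the inner product divided by the two vector lengths. The sketch below (toy vectors) shows that scaling a document by 10 no longer changes its similarity, unlike the distance measures above:

    import math

    def cosine_sim(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    q       = [0, 0, 2]
    d1      = [2, 3, 5]
    d1_long = [20, 30, 50]   # d1 scaled by 10
    print(cosine_sim(q, d1), cosine_sim(q, d1_long))  # both ~0.81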

Example
Documents: Austen's Sense and Sensibility (SAS) and Pride and Prejudice (PAP); Bronte's Wuthering Heights (WH). The term vectors are length-normalized, so the cosine reduces to the plain inner product:
cos(SAS, PAP) = .996 × .993 + .087 × .120 + .017 × 0.0 ≈ 0.999
cos(SAS, WH) = .996 × .847 + .087 × .466 + .017 × .254 ≈ 0.888

Comments on Vector Space Models
Simple, mathematically based approach. Considers both local (tf) and global (idf) word occurrence frequencies. Provides partial matching and ranked results. Tends to work quite well in practice despite obvious weaknesses. Allows efficient implementation for large document collections.

Naïve Implementation
1. Convert all documents in collection D to tf-idf weighted vectors dj over the keyword vocabulary V.
2. Convert the query to a tf-idf weighted vector q.
3. For each dj in D, compute the score sj = cosSim(dj, q).
4. Sort documents by decreasing score and present the top-ranked documents to the user.
Time complexity: O(|V|·|D|), which is bad for large V and D! With |V| = 10,000 and |D| = 100,000, |V|·|D| = 1,000,000,000.
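A sketch of this naive loop, reusing the cosine from the earlier slide on a toy two-document collection. It also answers the question from the graphic-representation slide: D1 scores higher against Q than D2 does:

    import math

    def cos_sim(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))

    def retrieve(doc_vectors, q_vector, top_k=10):
        # naive O(|V|·|D|) retrieval: score EVERY document against the query
        scores = [(name, cos_sim(vec, q_vector))
                  for name, vec in doc_vectors.items()]
        scores.sort(key=lambda pair: pair[1], reverse=True)  # decreasing score
        return scores[:top_k]

    doc_vectors = {"D1": [2, 3, 5], "D2": [3, 7, 1]}  # from the earlier example
    print(retrieve(doc_vectors, [0, 0, 2]))
    # [('D1', 0.811...), ('D2', 0.130...)] -- D1 is more similar to Q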

Practical Implementation
Based on the observation that documents containing none of the query keywords do not affect the final ranking, we try to identify only those documents that contain at least one query keyword. The actual implementation uses an inverted index. We are not covering this in this class, so the rest of the presentation has been omitted.
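Though the deck stops here, the observation itself fits in a few lines: an inverted index maps each term to the set of documents containing it, so only documents sharing at least one query term are ever scored. A minimal sketch with hypothetical toy data:

    # inverted index: term -> set of documents containing that term
    index = {"web":    {"D1", "D3"},
             "search": {"D1"},
             "vector": {"D2"}}

    query_terms = ["web", "search"]
    # candidate set: documents containing at least one query term; all
    # other documents cannot affect the ranking and are never scored
    candidates = set().union(*(index.get(t, set()) for t in query_terms))
    # candidates == {"D1", "D3"}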