Web search basics (Recap)
[diagram: the Web → web crawler → indexer → indexes; user → search interface → query engine over the indexes (including ad indexes)]

Process query:
- Look up the index
- Retrieve the list of matching documents
- Order the documents: content relevance, link analysis, popularity
- Prepare the results page

Today's question: given a large list of documents that match a query, how do we order them according to their relevance?

Answer: scoring documents
- Given a document d and a query q, calculate score(q, d).
- Rank documents in decreasing order of score(q, d).

Generic model: a document is a bag of [unordered] words (in set theory, a bag is a multiset). A document is composed of terms, a query is composed of terms, and score(q, d) will depend on terms.

Method 1: Assign weights to terms
Assign to each term t a weight tf_{t,d} - term frequency (how often term t occurs in document d).

query = 'who wrote wild boys'
doc1 = 'Duran Duran sang Wild Boys in 1984.'
doc2 = 'Wild boys don't remain forever wild.'
doc3 = 'Who brought wild flowers?'
doc4 = 'It was John Krakauer who wrote In to the wild.'

query = {boys: 1, who: 1, wild: 1, wrote: 1}
doc1 = {1984: 1, boys: 1, duran: 2, in: 1, sang: 1, wild: 1}
doc2 = {boys: 1, don't: 1, forever: 1, remain: 1, wild: 2} …

score(q, doc1) = 1 + 1 = 2        score(q, doc2) = 1 + 2 = 3
score(q, doc3) = 1 + 1 = 2        score(q, doc4) = 1 + 1 + 1 = 3
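As a minimal sketch, Method 1 can be implemented directly over bags of words. The corpus is the toy example above; the tokenizer and variable names are illustrative, not part of the lecture:

```python
from collections import Counter

def tokenize(text):
    # Lowercase and split on whitespace, stripping punctuation at word edges.
    return [w.strip(".,?!").lower() for w in text.split()]

docs = {
    "doc1": "Duran Duran sang Wild Boys in 1984.",
    "doc2": "Wild boys don't remain forever wild.",
    "doc3": "Who brought wild flowers?",
    "doc4": "It was John Krakauer who wrote In to the wild.",
}
query = tokenize("who wrote wild boys")

# Bag of words: a multiset (Counter) of terms per document.
bags = {name: Counter(tokenize(text)) for name, text in docs.items()}

# Method 1: score(q, d) = sum of raw term frequencies of the query terms.
for name, bag in bags.items():
    score = sum(bag[t] for t in query)
    print(name, score)   # doc1: 2, doc2: 3, doc3: 2, doc4: 3
```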

Why is Method 1 not good?
- All terms have equal importance.
- Bigger documents have more terms, so their scores are larger.
- It ignores term order.

Postulate: if a word appears in every document, it is probably not that important (it has no discriminatory power).

Method 2: New weights
- df_t - document frequency: the number of documents in which term t occurs
- idf_t - inverse document frequency: idf_t = log10(N / df_t), where N is the total number of documents
- tf-idf_{t,d} - a combined weight for term t in document d: tf-idf_{t,d} = tf_{t,d} × idf_t
  - increases with the number of occurrences within a doc
  - increases with the rarity of the term across the whole corpus

Example: idf values (N = 4 documents)

term      df   idf        term       df   idf
boys       2   0.301      krakauer    1   0.602
brought    1   0.602      remain      1   0.602
don't      1   0.602      sang        1   0.602
duran      1   0.602      the         1   0.602
flowers    1   0.602      to          1   0.602
forever    1   0.602      was         1   0.602
in         2   0.301      who         2   0.301
it         1   0.602      wild        4   0.0
john       1   0.602      wrote       1   0.602
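Continuing the sketch above, the df and idf values in the table can be recomputed in a few lines (log base 10, as the surviving table values imply):

```python
import math
from collections import Counter

# Reuses bags (term counts per document) from the previous sketch.
N = len(bags)  # 4 documents

# df_t: in how many documents term t occurs.
df = Counter()
for bag in bags.values():
    df.update(bag.keys())

# idf_t = log10(N / df_t)
idf = {t: math.log10(N / n) for t, n in df.items()}
print(round(idf["wild"], 3), round(idf["wrote"], 3))  # 0.0 0.602
```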

Example: calculating scores (1)
query = 'who wrote wild boys'

document                                         Σ tf-idf   Σ tf
duran duran sang wild boys in                      0.301      2
wild boys don't remain forever wild                0.301      3
who brought wild flowers                           0.301      2
it was john krakauer who wrote in to the wild      0.903      3

Example: calculating scores (2)
query = 'who wrote wild boys'

Insert 'who' into doc1 (tf-idf computed with the idf values from the original corpus):

document                                         Σ tf-idf   Σ tf
duran duran who sang wild boys in                  0.602      3
wild boys don't remain forever wild                0.301      3
who brought wild flowers                           0.301      2
it was john krakauer who wrote in to the wild      0.903      3

Insert 'wrote' into doc1 instead:

document                                         Σ tf-idf   Σ tf
duran duran sang wrote wild boys in                0.903      3
wild boys don't remain forever wild                0.301      3
who brought wild flowers                           0.301      2
it was john krakauer who wrote in to the wild      0.903      3

The tf score treats the two edits identically (both raise doc1 to 3), while tf-idf rewards the rare term 'wrote' more than the common term 'who'.
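Continuing the same sketch, Method 2's scoring is one extra function (reusing bags, idf, and query from the earlier snippets):

```python
# Method 2: score(q, d) = sum over query terms t of tf_{t,d} * idf_t.
def tfidf_score(query_terms, bag, idf):
    return sum(bag[t] * idf.get(t, 0.0) for t in query_terms)

for name, bag in bags.items():
    print(name, round(tfidf_score(query, bag, idf), 3))
# doc1 0.301, doc2 0.301, doc3 0.301, doc4 0.903 -- only doc4 stands out,
# because 'wrote' is rare (high idf) while 'wild' contributes nothing.
```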

The Vector Space Model
Formalizing the "bag-of-words" model:
- Each term from the collection becomes a dimension in an n-dimensional space.
- A document is a vector in this space, where term weights serve as coordinates.
It is important for:
- scoring documents for answering queries
- query by example
- document classification
- document clustering

Term-document matrix (revision)

term         Anthony & Cleopatra   Julius Caesar   Hamlet   Othello
Anthony              …                    …            …        …
Brutus               …                    …            …        …
Caesar               …                    …            …        …
Calphurnia           0                   10            0        0
Cleopatra           48                    0            0        0

The counts in each column represent term frequency (tf).

Documents as vectors

[table: tf-idf weights for the terms combat, courage, enemy, fierce, peace, war, … in HenryVI parts 1-3, Othello, Rom.&Jul., Taming of the Shrew, …]

Calculation example: N = 44 (works in the Shakespeare collection)
war: df = 21, idf = log10(44/21) ≈ 0.321
HenryVI-1: tf-idf_war = tf_war × idf_war = 12 × 0.321 ≈ 3.85
HenryVI-3: tf-idf_war = 50 × 0.321 ≈ 16.06
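The same arithmetic in a few lines, as a quick check of the slide's numbers (assuming log base 10, consistent with the rest of the lecture):

```python
import math

N, df_war = 44, 21                    # 44 works; 21 of them contain 'war'
idf_war = math.log10(N / df_war)      # ≈ 0.321
print(round(12 * idf_war, 2))         # HenryVI-1: tf=12 -> tf-idf ≈ 3.85
print(round(50 * idf_war, 2))         # HenryVI-3: tf=50 -> tf-idf ≈ 16.06
```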

Why turn docs into vectors?
Query-by-example: given a doc D, find others "like" it. Now that D is a vector, the task becomes: given a doc, find vectors (docs) "near" it.
[diagram: documents d1-d5 as vectors in a space with term axes t1, t2, t3; θ and φ are angles between vectors]
Postulate: documents that are "close together" in vector space talk about the same things.

Some geometry
[diagram: vectors d1 and d2 in the plane spanned by term axes t1 and t2]
Given two vectors x and y, the cosine of the angle between them,
cos(θ) = (x · y) / (|x| · |y|),
can be used as a measure of similarity between the two vectors.

Cosine Similarity
For any two given documents d_j and d_k, their similarity is:

sim(d_j, d_k) = (d_j · d_k) / (|d_j| · |d_k|) = Σ_i (w_{i,j} · w_{i,k}) / ( √(Σ_i w_{i,j}²) · √(Σ_i w_{i,k}²) )

where w_{i,j} is a weight, e.g., tf-idf, of term i in document j. We can regard a query q as a document d_q and use the same formula: sim(q, d) = sim(d_q, d).
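A sketch of cosine similarity over sparse vectors; the query and document weights below are the toy tf-idf values computed earlier in the lecture (tf × idf for doc4 and for the query):

```python
import math

def cosine(a, b):
    # a, b: sparse vectors as dicts mapping term -> weight (e.g., tf-idf).
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A query is treated as a (very short) document:
q  = {"who": 0.301, "wrote": 0.602, "wild": 0.0, "boys": 0.301}
d4 = {"it": 0.602, "was": 0.602, "john": 0.602, "krakauer": 0.602,
      "who": 0.301, "wrote": 0.602, "in": 0.301, "to": 0.602,
      "the": 0.602, "wild": 0.0}
print(round(cosine(q, d4), 3))   # ≈ 0.373 with these toy weights
```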

Example
Given the Shakespeare play "Hamlet", find the plays most similar to it:
1. Taming of the Shrew
2. Winter's Tale
3. Richard III

[table: tf and tf-idf values for the terms 'hor' and 'haue' in Hamlet and The Taming of the Shrew]

The word 'hor' appears only in these two plays. It is an abbreviation ('Hor.') for the names Horatio and Hortensio. The product of the tf-idf values for this word amounts to 82% of the similarity value between the two documents.

Digression: spamming indices
This method was invented before the days when people were in the business of spamming web search engines. Consider:
- indexing a sensible, passive document collection, vs.
- an active document collection, where people (and indeed, service companies) shape documents in order to maximize scores.
Vector space similarity may not be as useful in this context.

Issues to consider
- How would you augment the inverted index to support cosine ranking computations?
- Walk through the steps of serving a query.
The math of the vector space model is quite straightforward, but doing cosine ranking efficiently at query time is nontrivial. One possible approach is sketched below.
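One common answer, as a sketch: store idf alongside each term's postings list and a tf per posting, precompute each document's vector length, and score term-at-a-time with per-document accumulators. The index layout, norms, and names below are hypothetical, chosen only to make the shape concrete:

```python
from collections import defaultdict

# Hypothetical index layout: term -> (idf, [(doc_id, tf), ...]).
# doc_norms holds each document's precomputed vector length |d|.
def rank(query_terms, index, doc_norms, k=10):
    acc = defaultdict(float)                  # doc_id -> partial dot product
    for t in set(query_terms):
        if t not in index:
            continue
        idf_t, postings = index[t]
        w_q = query_terms.count(t) * idf_t    # query-side tf-idf weight
        for doc_id, tf in postings:
            acc[doc_id] += w_q * (tf * idf_t)
    # Divide by |d| to get cosine; the query norm only rescales all scores.
    scores = {d: s / doc_norms[d] for d, s in acc.items()}
    return sorted(scores.items(), key=lambda x: -x[1])[:k]

# Tiny illustrative index over the toy corpus (doc ids 1-4):
index = {
    "wild":  (0.0,   [(1, 1), (2, 2), (3, 1), (4, 1)]),
    "wrote": (0.602, [(4, 1)]),
}
doc_norms = {1: 1.0, 2: 1.2, 3: 0.9, 4: 1.65}   # illustrative |d| values
print(rank(["wild", "wrote"], index, doc_norms))  # doc 4 ranks first
```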