Boolean, Vector Space, Probabilistic

Retrieval Models I: Boolean, Vector Space, Probabilistic

What is a retrieval model?
- An idealization or abstraction of the actual retrieval process; it results in a measure of similarity between a query and a document.
- May describe the computational process, e.g. how documents are ranked (note that an inverted file is an implementation, not a model).
- May attempt to describe the human process, e.g. the information need, search strategy, etc.
- Retrieval variables: queries, documents, terms, relevance judgements, users, information needs.
- Has an explicit or implicit definition of relevance.

Mathematical Models
- Study the properties of the process.
- Draw conclusions or make predictions.
- The conclusions drawn depend on whether the model is a good approximation of the actual situation.
Statistical models:
- Represent repetitive processes.
- Predict frequencies of interesting events.
- Use probability as the fundamental tool.

Exact-Match Retrieval
Retrieving the documents that satisfy a Boolean expression constitutes the Boolean exact-match retrieval model:
- The query specifies precise retrieval criteria.
- Every document either matches or fails to match the query.
- The result is a set of documents (no ordering).
Advantages:
- Efficient.
- Predictable, easy to explain.
- Structured queries.
- Works well when you know exactly which documents you want.

Exact-Match Retrieval Model
Disadvantages:
- Query formulation is difficult for most users, and the difficulty increases with collection size (why?).
- The indexing vocabulary is the same as the query vocabulary.
- Acceptable precision generally means unacceptable recall.
- Ranking models are consistently better.
Best-match, or ranking, models are now more common; we will see these later.

Boolean Retrieval
The most common exact-match model:
- Queries are logic expressions with document features as operands.
- Retrieved documents are generally not ranked.
- Query formulation is difficult for novice users.
Boolean queries:
- Used by the Boolean retrieval model and in other models as well; a Boolean query does not imply the Boolean model.
- “Pure” Boolean operators: AND, OR, and NOT.
- Most systems have proximity operators.
- Most systems support simple regular expressions as search terms to match spelling variants.
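As a concrete illustration (not from the slides), here is a minimal Python sketch that evaluates a Boolean query over a toy inverted index with set operations; the collection, terms, and query are hypothetical.

    # Toy collection; in practice the index is built by the indexing component.
    docs = {
        1: "boolean retrieval uses exact match",
        2: "vector space retrieval ranks documents",
        3: "probabilistic retrieval estimates relevance",
    }

    # Inverted index: term -> set of ids of documents containing the term.
    index = {}
    for doc_id, text in docs.items():
        for term in text.split():
            index.setdefault(term, set()).add(doc_id)

    all_ids = set(docs)

    # Evaluate "retrieval AND (vector OR probabilistic) AND NOT boolean".
    # AND -> intersection, OR -> union, NOT -> complement; the result is an
    # unordered set of documents that either match or do not match.
    result = (index.get("retrieval", set())
              & (index.get("vector", set()) | index.get("probabilistic", set()))
              & (all_ids - index.get("boolean", set())))

    print(sorted(result))   # [2, 3]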

WESTLAW: A Large Commercial System
- Serves the legal and professional market: legal materials (court opinions, statutes, regulations, …), news (newspapers, magazines, journals, …), and financial information (stock quotes, SEC materials, financial analyses, …).
- Total collection size: 5-7 terabytes; 700,000 users.
- Claims 47% of legal searchers; 120K daily on-line users (54% of on-line users).
- In operation since 1974 (51 countries as of 2002).
- Best-match retrieval (and free-text queries) added in 1992.

WESTLAW Query Language Features
- Boolean operators.
- Proximity operators:
  - Phrases: “West Publishing”
  - Word proximity: West /5 Publishing
  - Same sentence: Massachusetts /s technology
  - Same paragraph: “information retrieval” /p exact-match
- Restrictions, e.g. DATE(AFTER 1992 & BEFORE 1995).

WESTLAW Query Language Features (continued)
- Term expansion:
  - Universal characters: THOM*SON
  - Truncation: THOM!
  - Automatic expansion of plurals and possessives
- Document structure (fields): a query expression can be restricted to a named document component (title, abstract, …), e.g. title(“inference network”) & cites(Croft) & date(after 1990)

Features to Note about WESTLAW Queries
- Queries are developed incrementally: query components are added until a reasonable number of documents is retrieved (“query by numbers”, a form of implicit relevance feedback).
- Queries are complex: proximity operators are used very frequently, synonyms are combined with an implicit OR, and NOT is rare.
- Queries are long (on average 9-10 words), unlike typical Internet queries (1-2 words).

WESTLAW Query Processing
- Stopwords: one indexing stopword, about 50 query stopwords; stopwords are retained in some contexts.
- A thesaurus to aid user expansion of queries.
- Document display: sort order specified by the user (e.g. by date); query term highlighting.

WESTLAW Query Examples
- What is the statute of limitations in cases involving the federal tort claims act?
  LIMIT! /3 STATUTE ACTION /S FEDERAL /2 TORT /3 CLAIM
- Are there any cases which discuss negligent maintenance or failure to maintain aids to navigation such as lights, buoys, or channel markers?
  NOT NEGLECT! FAIL! NEGLIG! /5 MAINT! REPAIR! /P NAVIGAT! /5 AID EQUIP! LIGHT BUOY “CHANNEL MARKER”

Exact-Match vs. Best-Match Retrieval
Exact match:
- The query specifies precise retrieval criteria.
- Every document either matches or fails to match the query.
- The result is a set of documents.
Best match:
- The most common retrieval approach; generally better retrieval performance.
- The query describes good or “best” matching documents.
- The result is a ranked list of documents: good documents appear at the top of the ranking, and the result may include an estimate of quality.
It is hard to compare best match and exact match in a principled way.

Best-Match Retrieval
Advantages of best match:
- Significantly more effective than exact match.
- Uncertainty is a better model than certainty.
- Easier to use (supports full-text queries).
- Similar efficiency (based on inverted file implementations).

Best-Match Retrieval
Disadvantages:
- More difficult to convey an appropriate cognitive model (“control”).
- Full-text search is not natural language understanding (no “magic”).
- Efficiency is always lower than exact match (documents cannot be rejected early).
Note that Boolean or structured queries can be part of a best-match retrieval model.

Why Did Commercial Services Ignore Best Match (for 20 Years)?
- Inertia / installed software base
- “Cultural” differences
- Not clear that tests on small collections would scale
- Operating costs
- Training
- Few convincing user studies
- Risk

Retrieval Models
A retrieval model specifies the details of:
- Document representation
- Query representation
- Retrieval function
It determines a notion of relevance, either implicitly or explicitly; the notion of relevance can be binary or continuous (i.e. ranked retrieval). A minimal interface sketch is given below.
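As an illustration of these three parts, here is a minimal, hypothetical Python interface sketch (the class and method names are not from the slides):

    from abc import ABC, abstractmethod

    class RetrievalModel(ABC):
        """A retrieval model fixes a document representation, a query
        representation, and a retrieval (scoring) function."""

        @abstractmethod
        def represent_document(self, text: str):
            """Map raw document text to the model's document representation."""

        @abstractmethod
        def represent_query(self, text: str):
            """Map a user query to the model's query representation."""

        @abstractmethod
        def score(self, doc_repr, query_repr) -> float:
            """Return a relevance value: e.g. 0/1 for exact match,
            a real-valued score for ranked (best-match) retrieval."""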

Classes of Retrieval Models
- Boolean models (set theoretic)
  - Extended Boolean
- Vector space models (statistical/algebraic)
  - Generalized vector space
  - Latent Semantic Indexing
- Probabilistic models

Other Model Dimensions
- Logical view of documents: index terms; full text; full text + structure (e.g. hypertext)
- User task: retrieval; browsing

Retrieval Tasks
- Ad hoc retrieval: a fixed document corpus, varied queries.
- Filtering: a fixed query, a continuous document stream. A user profile models relatively static preferences; a binary relevant/not-relevant decision is made for each incoming document.
- Routing: the same as filtering, but ranked lists are continuously supplied rather than binary filtering decisions.

Issues for the Vector Space Model
- How do we determine the important words in a document? Word senses? Word n-grams (and phrases, idioms, …) → terms.
- How do we determine the degree of importance of a term within a document and within the entire collection?
- How do we determine the degree of similarity between a document and the query?
- In the case of the web, what is a collection, and what are the effects of links, formatting information, etc.?

The Vector-Space Model
- Assume t distinct terms remain after pre-processing (the index terms, or the vocabulary). These “orthogonal” terms form a vector space with dimension t = |vocabulary|.
- Each term i in a document or query j is given a real-valued weight w_ij.
- Both documents and queries are expressed as t-dimensional vectors: d_j = (w_1j, w_2j, …, w_tj).

Document Collection
A collection of n documents can be represented in the vector space model by a term-document matrix. An entry in the matrix corresponds to the “weight” of a term in a document; zero means the term has no significance in the document or simply does not occur in it. tf-idf weighting is typical:

    w_ij = tf_ij * idf_i = tf_ij * log2(N / df_i)

          T1     T2     ...   Tt
    D1    w_11   w_21   ...   w_t1
    D2    w_12   w_22   ...   w_t2
    :     :      :            :
    Dn    w_1n   w_2n   ...   w_tn
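A minimal sketch of building such a matrix with the tf-idf weighting above, assuming a hypothetical toy collection and raw term counts for tf:

    import math

    docs = {
        "D1": ["boolean", "retrieval", "retrieval"],
        "D2": ["vector", "space", "retrieval"],
        "D3": ["probabilistic", "retrieval", "model"],
    }
    N = len(docs)

    # Document frequency df_i: the number of documents containing term i.
    df = {}
    for terms in docs.values():
        for term in set(terms):
            df[term] = df.get(term, 0) + 1

    # Term-document matrix as a dict of dicts; absent entries mean weight 0.
    weights = {}
    for doc_id, terms in docs.items():
        for term in set(terms):
            tf = terms.count(term)             # raw term frequency f_ij
            idf = math.log2(N / df[term])      # idf_i = log2(N / df_i)
            weights.setdefault(doc_id, {})[term] = tf * idf

    # "retrieval" occurs in every document, so its idf (and weight) is 0.
    print(weights["D1"])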

Graphic Representation
Example:

    D1 = 2T1 + 3T2 + 5T3
    D2 = 3T1 + 7T2 + T3
    Q  = 0T1 + 0T2 + 2T3

[Figure: the two document vectors and the query vector plotted in the three-dimensional term space T1, T2, T3.]
Is D1 or D2 more similar to Q? How do we measure the degree of similarity: distance, angle, projection?

Term Weights: Term Frequency
More frequent terms in a document are more important, i.e. more indicative of its topic.

    f_ij = frequency of term i in document j

The raw frequency is usually normalized, e.g. by the most frequent term in the document:

    tf_ij = f_ij / max_i{f_ij}
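A short sketch of this normalization (the counts are hypothetical):

    # Raw term counts f_ij for one document j.
    f = {"retrieval": 3, "boolean": 1, "model": 1}

    # Normalize by the most frequent term in the document: tf_ij = f_ij / max_i f_ij.
    max_f = max(f.values())
    tf = {term: count / max_f for term, count in f.items()}

    print(tf)   # {'retrieval': 1.0, 'boolean': 0.333..., 'model': 0.333...}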

Term Weights: Inverse Document Frequency (IDF)
Terms that appear in many different documents are less indicative of the overall topic.

    df_i  = document frequency of term i = number of documents containing term i
    idf_i = inverse document frequency of term i = log2(N / df_i)    (N: total number of documents)

Recall that idf is an indication of a term’s discrimination power; the log is used to dampen the effect relative to tf.
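A quick numerical illustration of the dampening effect of the log (the collection size and document frequencies are made up):

    import math

    N = 1_000_000                      # total number of documents (hypothetical)
    for df in (10, 1_000, 500_000):    # rare, moderate, and very common terms
        print(df, round(math.log2(N / df), 2))
    # A term in 10 documents gets idf ~ 16.6; one in half the collection gets 1.0.
    # Document frequency varies by a factor of 50,000, but idf only by ~17x.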

Query Vector
The query vector is typically treated as a document and is also tf-idf weighted. An alternative is for the user to supply weights for the query terms directly.

Similarity Measure
A similarity measure is a function that computes the degree of similarity between two vectors. Using a similarity measure between the query and each document:
- It is possible to rank the retrieved documents in order of presumed relevance.
- It is possible to enforce a threshold so that the size of the retrieved set can be controlled.

Similarity Measure: Inner Product
The similarity between the vector for document d_j and the query q can be computed as the vector inner product:

    sim(d_j, q) = d_j · q = Σ_i (w_ij · w_iq)

where w_ij is the weight of term i in document j and w_iq is the weight of term i in the query.
- For binary vectors, the inner product is the number of matched query terms in the document (the size of the intersection).
- For weighted term vectors, it is the sum of the products of the weights of the matched terms.
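A small sketch of the inner product applied to the example vectors D1, D2, and Q from the earlier “Graphic Representation” slide:

    def inner_product(d, q):
        """sim(d, q) = sum over terms i of w_id * w_iq."""
        return sum(w_d * w_q for w_d, w_q in zip(d, q))

    # Weights over terms (T1, T2, T3), taken from the earlier example.
    D1 = (2, 3, 5)
    D2 = (3, 7, 1)
    Q  = (0, 0, 2)

    print(inner_product(D1, Q))   # 2*0 + 3*0 + 5*2 = 10
    print(inner_product(D2, Q))   # 3*0 + 7*0 + 1*2 = 2  -> D1 ranks above D2

    # For binary vectors the same function counts the matched query terms:
    print(inner_product((1, 0, 1), (0, 1, 1)))   # 1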