Information Retrieval CSE 8337 (Part IV) Spring 2011 Some Material for these slides obtained from: Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto Data Mining Introductory and Advanced Topics by Margaret H. Dunham  Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze

CSE 8337 Spring CSE 8337 Outline Introduction Text Processing Indexes Boolean Queries Web Searching/Crawling Vector Space Model Matching Evaluation Feedback/Expansion

CSE 8337 Spring Why System Evaluation? There are many retrieval models/ algorithms/ systems, which one is the best? What does best mean? IR evaluation may not actually look at traditional CS metrics of space/time. What is the best component for: Ranking function (dot-product, cosine, … ) Term selection (stopword removal, stemming … ) Term weighting (TF, TF-IDF, … ) How far down the ranked list will a user need to look to find some/all relevant documents?

CSE 8337 Spring Measures for a search engine How fast does it index Number of documents/hour (Average document size) How fast does it search Latency as a function of index size Expressiveness of query language Ability to express complex information needs Speed on complex queries Uncluttered UI Is it free?

CSE 8337 Spring Measures for a search engine All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise The key measure: user happiness What is this? Speed of response/size of index are factors But blindingly fast, useless answers won’t make a user happy Need a way of quantifying user happiness

CSE 8337 Spring Happiness: elusive to measure Most common proxy: relevance of search results. But how do you measure relevance? We will detail a methodology here, then examine its issues. Relevance measurement requires 3 elements: 1. A benchmark document collection. 2. A benchmark suite of queries. 3. A usually binary assessment of either Relevant or Nonrelevant for each query and each document.

CSE 8337 Spring Difficulties in Evaluating IR Systems Effectiveness is related to the relevancy of retrieved items. Relevancy is not typically binary but continuous. Even if relevancy is binary, it can be a difficult judgment to make. Relevancy, from a human standpoint, is: Subjective: Depends upon a specific user's judgment. Situational: Relates to user's current needs. Cognitive: Depends on human perception and behavior. Dynamic: Changes over time.

CSE 8337 Spring How to perform evaluation Start with a corpus of documents. Collect a set of queries for this corpus. Have one or more human experts exhaustively label the relevant documents for each query. Typically assumes binary relevance judgments. Requires considerable human effort for large document/query corpora.

CSE 8337 Spring IR Evaluation Metrics Precision/Recall P/R graph Regular Smoothing Interpolating Averaging ROC Curve MAP R-Precision P/R points F-Measure E-Measure Fallout Novelty Coverage Utility ….

CSE 8337 Spring Precision and Recall For a given query, the entire document collection is partitioned into four sets: retrieved & relevant, retrieved & irrelevant, not retrieved but relevant, and not retrieved & irrelevant. Precision = |retrieved & relevant| / |retrieved|; Recall = |retrieved & relevant| / |relevant|.
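
A minimal sketch (not from the slides) of how these two quantities are computed from a retrieved set and a relevance judgment set; the document ids are illustrative:

def precision_recall(retrieved, relevant):
    """Compute precision and recall given sets of document ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant                       # retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 4 docs retrieved, 6 relevant overall, 3 of the retrieved are relevant.
p, r = precision_recall({"d1", "d2", "d3", "d7"}, {"d1", "d2", "d3", "d4", "d5", "d6"})
print(p, r)   # 0.75 0.5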

CSE 8337 Spring Determining Recall is Difficult Total number of relevant items is sometimes not available: Sample across the database and perform relevance judgment on these items. Apply different retrieval algorithms to the same database for the same query. The aggregate of relevant items is taken as the total relevant set.

CSE 8337 Spring Trade-off between Recall and Precision [Figure: precision (y-axis) vs. recall (x-axis); the ideal is the upper right corner, and the desired areas lie near it. The high-precision/low-recall corner returns relevant documents but misses many useful ones; the high-recall/low-precision corner returns most relevant documents but includes lots of junk.]

CSE 8337 Spring Recall-Precision Graph Example

CSE 8337 Spring A precision-recall curve

CSE 8337 Spring Recall-Precision Graph Smoothing Avoid sawtooth lines by smoothing Interpolate for one query Average across queries

CSE 8337 Spring Interpolating a Recall/Precision Curve Interpolate a precision value for each standard recall level: r_j in {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}, i.e., r_0 = 0.0, r_1 = 0.1, ..., r_10 = 1.0. The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level between the j-th and (j+1)-th level: P(r_j) = max of P(r) over r_j <= r <= r_(j+1).

CSE 8337 Spring Interpolated precision Idea: If locally precision increases with increasing recall, then you should get to count that. So you take the max of the precisions to the right of the value. (This need not be done only at the standard levels.)
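
A small sketch of interpolation over a ranked result list, following the "max of precisions to the right" rule above; the function and variable names are illustrative, not from the slides:

def recall_precision_points(ranking, relevant):
    """(recall, precision) at each rank where a relevant document appears."""
    relevant = set(relevant)
    points, hits = [], 0
    for i, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / i))
    return points

def interpolated_11pt(points):
    """Interpolated precision at recall 0.0, 0.1, ..., 1.0:
    P_interp(r_j) = max precision over all observed points with recall >= r_j."""
    levels = [j / 10 for j in range(11)]
    return [max((p for r, p in points if r >= level), default=0.0) for level in levels]

points = recall_precision_points(["d1", "d9", "d2", "d8"], {"d1", "d2", "d3"})
print(interpolated_11pt(points))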

CSE 8337 Spring Precision across queries Recall and precision are calculated for a specific query; generally we want a value over many queries, so calculate average precision/recall over a set of queries. Average precision at recall level r: P(r) = (1/N_q) * sum over i of P_i(r), where N_q is the number of queries and P_i(r) is the precision at recall level r for the i-th query.

CSE 8337 Spring Average Recall/Precision Curve Typically average performance over a large set of queries. Compute average precision at each standard recall level across all queries. Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.

CSE 8337 Spring Compare Two or More Systems The curve closest to the upper right-hand corner of the graph indicates the best performance

CSE 8337 Spring Receiver Operating Characteristic Curve (ROC Curve)

CSE 8337 Spring ROC Curve Data An ROC curve plots the true positive rate against the false positive rate. True positive rate = tp/(tp+fn): the sensitivity, i.e., the proportion of relevant (positive) items that are retrieved; this is recall. False positive rate = fp/(fp+tn): equal to 1 - specificity, where specificity = tn/(fp+tn) is the proportion of nonrelevant (negative) items that are not retrieved.

CSE 8337 Spring Yet more evaluation measures… Mean average precision (MAP) Average of the precision value obtained for the top k documents, each time a relevant doc is retrieved Avoids interpolation, use of fixed recall levels MAP for query collection is arithmetic ave. Macro-averaging: each query counts equally R-precision If have known (though perhaps incomplete) set of relevant documents of size Rel, then calculate precision of top Rel docs returned Perfect system could score 1.0.
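
A sketch of MAP and R-precision over ranked lists, assuming (as is standard) that average precision divides by the total number of relevant documents so that unretrieved relevant documents contribute zero; the names are illustrative:

def average_precision(ranking, relevant):
    """Average of the precision values at the ranks where relevant docs are retrieved."""
    relevant = set(relevant)
    hits, precisions = 0, []
    for i, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """Arithmetic mean of average precision over queries (macro-averaging)."""
    return sum(average_precision(rk, rel) for rk, rel in runs) / len(runs)

def r_precision(ranking, relevant):
    """Precision of the top |Rel| documents returned."""
    rel = set(relevant)
    top = ranking[: len(rel)]
    return sum(1 for d in top if d in rel) / len(rel) if rel else 0.0

runs = [(["d1", "d9", "d2"], {"d1", "d2"}), (["d5", "d6"], {"d6"})]
print(mean_average_precision(runs))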

CSE 8337 Spring Variance For a test collection, it is usual that a system does poorly on some information needs (e.g., MAP = 0.1) and well on others (e.g., MAP = 0.7). Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query. That is, there are easy information needs and hard ones!

CSE 8337 Spring Evaluation Graphs are good, but people want summary measures! Precision at fixed retrieval level: Precision-at-k, the precision of the top k results. Perhaps appropriate for most of web search: all people want are good matches on the first one or two results pages. But it averages badly and has an arbitrary parameter k. 11-point interpolated average precision: the standard measure in the early TREC competitions; you take the precision at 11 recall levels varying from 0 to 1 by tenths, using interpolation (the value for recall 0 is always interpolated!), and average them. Evaluates performance at all recall levels.

CSE 8337 Spring Typical (good) 11 point precisions SabIR/Cornell 8A1 11pt precision from TREC 8 (1999)

CSE 8337 Spring Computing Recall/Precision Points For a given query, produce the ranked list of retrievals. Adjusting a threshold on this ranked list produces different sets of retrieved documents, and therefore different recall/precision measures. Mark each document in the ranked list that is relevant according to the gold standard. Compute a recall/precision pair for each position in the ranked list that contains a relevant document.

CSE 8337 Spring Computing Recall/Precision Points: An Example (modified from [Salton83]) Let the total # of relevant docs = 6. Check each new recall point: R=1/6=0.167, P=1/1=1; R=2/6=0.333, P=2/2=1; R=3/6=0.5, P=3/4=0.75; R=4/6=0.667, P=4/6=0.667; R=5/6=0.833, P=5/13=0.38.

CSE 8337 Spring F-Measure One measure of performance that takes into account both recall and precision. Harmonic mean of recall and precision: F = 2PR / (P + R). Calculated at a specific document in the ranking. Compared to the arithmetic mean, both P and R need to be high for the harmonic mean to be high. A compromise between precision and recall.

CSE 8337 Spring A combined measure: F The combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean): F = 1 / (alpha/P + (1 - alpha)/R) = (beta^2 + 1)PR / (beta^2 P + R), where beta^2 = (1 - alpha)/alpha. People usually use the balanced F_1 measure, i.e., with beta = 1 or alpha = 1/2.

CSE 8337 Spring E Measure (parameterized F Measure) A variant of the F measure that allows weighting emphasis on precision or recall: E = 1 - (beta^2 + 1)PR / (beta^2 P + R). The value of beta controls the trade-off: beta = 1 weights precision and recall equally (E = 1 - F); beta > 1 weights recall more; beta < 1 weights precision more.

CSE 8337 Spring Fallout Rate Problems with both precision and recall: Number of irrelevant documents in the collection is not taken into account. Recall is undefined when there is no relevant document in the collection. Precision is undefined when no document is retrieved.

CSE 8337 Spring Fallout Fallout = (# of nonrelevant documents retrieved) / (total # of nonrelevant documents in the collection). We want fallout to be close to 0. In general we want to maximize recall and minimize fallout. Examine the fallout-recall graph. More systems oriented than recall-precision.

CSE 8337 Spring Subjective Relevance Measures Novelty Ratio: The proportion of items retrieved and judged relevant by the user and of which they were previously unaware. Ability to find new information on a topic. Coverage Ratio: The proportion of relevant items retrieved out of the total relevant documents known to a user prior to the search. Relevant when the user wants to locate documents which they have seen before (e.g., the budget report for Year 2000).

CSE 8337 Spring Utility A subjective measure: a cost-benefit analysis for retrieved documents, Utility = Cr*Nr - Cnr*Nnr - Crn*Nrn, where Cr is the benefit of retrieving a relevant document, Cnr the cost of retrieving a nonrelevant document, Crn the cost of not retrieving a relevant document, Nr the number of relevant documents retrieved, Nnr the number of nonrelevant documents retrieved, and Nrn the number of relevant documents not retrieved.

CSE 8337 Spring Other Factors to Consider User effort: work required from the user in formulating queries, conducting the search, and screening the output. Response time: time interval between receipt of a user query and the presentation of system responses. Form of presentation: influence of the search output format on the user's ability to utilize the retrieved materials. Collection coverage: extent to which any/all relevant items are included in the document corpus.

CSE 8337 Spring Experimental Setup for Benchmarking Analytical performance evaluation is difficult for document retrieval systems because many characteristics such as relevance, distribution of words, etc., are difficult to describe with mathematical precision. Performance is measured by benchmarking. That is, the retrieval effectiveness of a system is evaluated on a given set of documents, queries, and relevance judgments. Performance data is valid only for the environment under which the system is evaluated.

CSE 8337 Spring Benchmarks A benchmark collection contains: a set of standard documents and queries/topics, and a list of relevant documents for each query. Standard collections for traditional IR: TREC. [Diagram: the algorithm under test retrieves results for the standard queries over the standard document collection; evaluation compares the retrieved result against the standard result to produce precision and recall.]

CSE 8337 Spring Benchmarking: The Problems Performance data is valid only for a particular benchmark. Building a benchmark corpus is a difficult task. Benchmark web corpora are just starting to be developed. Benchmark foreign-language corpora are just starting to be developed.

CSE 8337 Spring The TREC Benchmark TREC: Text REtrieval Conference. Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA). Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA. Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing. Participants submit the P/R values for the final document and query corpus and present their results at the conference.

CSE 8337 Spring The TREC Objectives Provide a common ground for comparing different IR techniques. –Same set of documents and queries, and same evaluation method. Sharing of resources and experiences in developing the benchmark. –With major sponsorship from government to develop large benchmark collections. Encourage participation from industry and academia. Development of new evaluation techniques, particularly for new applications. –Retrieval, routing/filtering, non-English collection, web-based collection, question answering.

CSE 8337 Spring From document collections to test collections Still need Test queries Relevance assessments Test queries Must be germane to docs available Best designed by domain experts Random query terms generally not a good idea Relevance assessments Human judges, time-consuming Are human panels perfect?

CSE 8337 Spring Kappa measure for inter-judge (dis)agreement Kappa measure Agreement measure among judges Designed for categorical judgments Corrects for chance agreement Kappa = [ P(A) – P(E) ] / [ 1 – P(E) ] P(A) – proportion of time judges agree P(E) – what agreement would be by chance Kappa = 0 for chance agreement, 1 for total agreement.

CSE 8337 Spring Kappa Measure: Example Judgments of two judges over 400 documents: 300 docs judged Relevant by both; 70 docs judged Nonrelevant by both; 20 docs judged Relevant by Judge 1 and Nonrelevant by Judge 2; 10 docs judged Nonrelevant by Judge 1 and Relevant by Judge 2. P(A)? P(E)?

CSE 8337 Spring Kappa Example P(A) = 370/400 = 0.925. Pooled marginals: P(nonrelevant) = (70 + 10 + 70 + 20)/800 = 0.2125; P(relevant) = (300 + 20 + 300 + 10)/800 = 0.7875. P(E) = 0.2125^2 + 0.7875^2 = 0.665. Kappa = (0.925 - 0.665)/(1 - 0.665) = 0.776. Kappa > 0.8: good agreement; 0.67 < Kappa < 0.8: "tentative conclusions" (Carletta '96); depends on the purpose of the study. For > 2 judges: average pairwise kappas.
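
The same computation as a small script (using pooled marginals for the chance agreement, as in the example above):

def kappa(n_rel_rel, n_non_non, n_rel_non, n_non_rel):
    """Kappa for two judges with binary relevance judgments.
    Arguments are counts for (judge1, judge2) = (R,R), (N,N), (R,N), (N,R)."""
    n = n_rel_rel + n_non_non + n_rel_non + n_non_rel
    p_agree = (n_rel_rel + n_non_non) / n
    # Pooled marginals: probability that a random judgment is Relevant / Nonrelevant
    p_rel = (2 * n_rel_rel + n_rel_non + n_non_rel) / (2 * n)
    p_non = 1 - p_rel
    p_chance = p_rel ** 2 + p_non ** 2
    return (p_agree - p_chance) / (1 - p_chance)

print(round(kappa(300, 70, 20, 10), 3))   # about 0.776 for the table above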

CSE 8337 Spring Interjudge Agreement: TREC 3

CSE 8337 Spring Impact of Inter-judge Agreement Impact on absolute performance measure can be significant (0.32 vs 0.39) Little impact on ranking of different systems or relative performance Suppose we want to know if algorithm A is better than algorithm B A standard information retrieval experiment will give us a reliable answer to this question.

CSE 8337 Spring Critique of pure relevance Relevance vs Marginal Relevance A document can be redundant even if it is highly relevant Duplicates The same information from different sources Marginal relevance is a better measure of utility for the user. Using facts/entities as evaluation units more directly measures true relevance. But harder to create evaluation set

CSE 8337 Spring Can we avoid human judgment? No Makes experimental work hard Especially on a large scale In some very specific settings, can use proxies E.g.: for approximate vector space retrieval, we can compare the cosine distance closeness of the closest docs to those found by an approximate retrieval algorithm But once we have test collections, we can reuse them (so long as we don’t overtrain too badly)

CSE 8337 Spring Evaluation at large search engines Search engines have test collections of queries and hand-ranked results. Recall is difficult to measure on the web. Search engines often use precision at top k (e.g., k = 10), or measures that reward you more for getting rank 1 right than for getting rank 10 right, such as NDCG (Normalized Discounted Cumulative Gain). Search engines also use non-relevance-based measures: clickthrough on the first result (not very reliable if you look at a single clickthrough, but pretty reliable in the aggregate), studies of user behavior in the lab, and A/B testing.

CSE 8337 Spring A/B testing Purpose: Test a single innovation Prerequisite: You have a large search engine up and running. Have most users use old system Divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation Evaluate with an “automatic” measure like clickthrough on first result Now we can directly see if the innovation does improve user happiness. Probably the evaluation methodology that large search engines trust most In principle less powerful than doing a multivariate regression analysis, but easier to understand Problems with A/B Testing: UR-Kohavi.pdf

CSE 8337 Spring CSE 8337 Outline Introduction Text Processing Indexes Boolean Queries Web Searching/Crawling Vector Space Model Matching Evaluation Feedback/Expansion

CSE 8337 Spring Query Operations Introduction IR queries as stated by the user may not be precise or effective. There are many techniques to improve a stated query and then process that query instead.

CSE 8337 Spring How can results be improved? Options for improving results: Local methods: personalization; relevance feedback; pseudo relevance feedback. Query expansion: local analysis; thesauri; automatic thesaurus generation. Query assist.

CSE 8337 Spring Relevance Feedback Relevance feedback: user feedback on relevance of docs in initial set of results User issues a (short, simple) query The user marks some results as relevant or non- relevant. The system computes a better representation of the information need based on feedback. Relevance feedback can go through one or more iterations. Idea: it may be difficult to formulate a good query when you don’t know the collection well, so iterate

CSE 8337 Spring Relevance Feedback After initial retrieval results are presented, allow the user to provide feedback on the relevance of one or more of the retrieved documents. Use this feedback information to reformulate the query. Produce new results based on reformulated query. Allows more interactive, multi-pass process.

CSE 8337 Spring Relevance feedback We will use ad hoc retrieval to refer to regular retrieval without relevance feedback. We now look at four examples of relevance feedback that highlight different aspects.

CSE 8337 Spring Similar pages

CSE 8337 Spring Relevance Feedback: Example Image search engine

CSE 8337 Spring Results for Initial Query

CSE 8337 Spring Relevance Feedback

CSE 8337 Spring Results after Relevance Feedback

CSE 8337 Spring Initial query/results Initial query: New space satellite applications , 08/13/91, NASA Hasn’t Scrapped Imaging Spectrometer , 07/09/91, NASA Scratches Environment Gear From Satellite Plan , 04/04/90, Science Panel Backs NASA Satellite Plan, But Urges Launches of Smaller Probes , 09/09/91, A NASA Satellite Project Accomplishes Incredible Feat: Staying Within Budget , 07/24/90, Scientist Who Exposed Global Warming Proposes Satellites for Climate Research , 08/22/90, Report Provides Support for the Critics Of Using Big Satellites to Study Climate , 04/13/87, Arianespace Receives Satellite Launch Pact From Telesat Canada , 12/02/87, Telecommunications Tale of Two Companies User then marks relevant documents with “+”

CSE 8337 Spring Expanded query after relevance feedback new space satellite application nasa eos launch aster instrument arianespace bundespost ss rocket scientist broadcast earth oil measure

CSE 8337 Spring Results for expanded query , 07/09/91, NASA Scratches Environment Gear From Satellite Plan , 08/13/91, NASA Hasn’t Scrapped Imaging Spectrometer , 08/07/89, When the Pentagon Launches a Secret Satellite, Space Sleuths Do Some Spy Work of Their Own , 07/31/89, NASA Uses ‘Warm’ Superconductors For Fast Circuit , 12/02/87, Telecommunications Tale of Two Companies , 07/09/91, Soviets May Adapt Parts of SS-20 Missile For Commercial Use , 07/12/88, Gaping Gap: Pentagon Lags in Race To Match the Soviets In Rocket Launchers , 06/14/90, Rescue of Satellite By Space Agency To Cost $90 Million 2 1 8

CSE 8337 Spring Relevance Feedback Use assessments by users as to the relevance of previously returned documents to create new (modify old) queries. Technique: 1. Increase weights of terms from relevant documents. 2. Decrease weight of terms from nonrelevant documents.

CSE 8337 Spring Relevance Feedback Architecture [Diagram: the user's query string goes to the IR system, which searches the document corpus and returns ranked documents (Doc1, Doc2, Doc3, ...). The user marks each ranked document as relevant or not; this feedback drives query reformulation, producing a revised query that is run again to give re-ranked documents (e.g., Doc2, Doc4, Doc5, ...).]

CSE 8337 Spring Query Reformulation Revise query to account for feedback: Query Expansion: Add new terms to query from relevant documents. Term Reweighting: Increase weight of terms in relevant documents and decrease weight of terms in irrelevant documents. Several algorithms for query reformulation.

CSE 8337 Spring Relevance Feedback in vector spaces We can modify the query based on relevance feedback and apply standard vector space model. Use only the docs that were marked. Relevance feedback can improve recall and precision Relevance feedback is most useful for increasing recall in situations where recall is important Users can be expected to review results and to take time to iterate

CSE 8337 Spring The Theoretically Best Query [Figure: documents plotted in vector space, with o = relevant documents and x = non-relevant documents. The optimal query points toward the cluster of relevant documents and away from the non-relevant ones.]

CSE 8337 Spring Query Reformulation for Vectors Change the query vector using vector algebra: add the vectors for the relevant documents to the query vector, and subtract the vectors for the irrelevant docs from the query vector. This both adds positively and negatively weighted terms to the query and reweights the initial terms.

CSE 8337 Spring Optimal Query Assume that the relevant set of documents C_r is known. Then the best query, the one that ranks all and only the relevant documents at the top, is: q_opt = (1/|C_r|) * sum of d_j over d_j in C_r - (1/(N - |C_r|)) * sum of d_j over d_j not in C_r, where N is the total number of documents.

CSE 8337 Spring Standard Rocchio Method Since the full set of relevant documents is unknown, just use the known relevant (D_r) and irrelevant (D_n) sets of documents and include the initial query q: q_m = alpha*q + (beta/|D_r|) * sum of d_j over d_j in D_r - (gamma/|D_n|) * sum of d_j over d_j in D_n. alpha: tunable weight for the initial query. beta: tunable weight for relevant documents. gamma: tunable weight for irrelevant documents.
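
A minimal sketch of the Rocchio update on sparse term-weight vectors; the default alpha/beta/gamma values and the clipping of negative weights are common practice, not prescribed by the slides:

from collections import defaultdict

def rocchio(query, relevant_docs, nonrelevant_docs, alpha=1.0, beta=0.75, gamma=0.25):
    """Standard Rocchio reformulation on term-weight dictionaries (sparse vectors):
    q_m = alpha*q + beta*centroid(relevant) - gamma*centroid(nonrelevant)."""
    new_q = defaultdict(float)
    for term, w in query.items():
        new_q[term] += alpha * w
    for docs, weight in ((relevant_docs, beta), (nonrelevant_docs, -gamma)):
        if not docs:
            continue
        for doc in docs:
            for term, w in doc.items():
                new_q[term] += weight * w / len(docs)   # centroid contribution
    return {t: w for t, w in new_q.items() if w > 0}    # drop negative weights

# Illustrative toy vectors (term -> weight), not taken from the slides.
q = {"space": 1.0, "satellite": 1.0}
rel = [{"space": 0.8, "satellite": 0.6, "nasa": 0.9}]
nonrel = [{"telecom": 0.7, "satellite": 0.2}]
print(rocchio(q, rel, nonrel))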

CSE 8337 Spring Relevance feedback on initial query [Figure: documents in vector space with o = known relevant documents and x = known non-relevant documents. The revised query moves from the initial query toward the centroid of the known relevant documents and away from the known non-relevant ones.]

CSE 8337 Spring Positive vs Negative Feedback Positive feedback is more valuable than negative feedback (so set gamma < beta; e.g., gamma = 0.25, beta = 0.75). Many systems only allow positive feedback (gamma = 0). Why?

CSE 8337 Spring Ide Regular Method Since more feedback should perhaps increase the degree of reformulation, do not normalize for the amount of feedback: q_m = alpha*q + beta * sum of d_j over d_j in D_r - gamma * sum of d_j over d_j in D_n. alpha: tunable weight for the initial query. beta: tunable weight for relevant documents. gamma: tunable weight for irrelevant documents.

CSE 8337 Spring Ide "Dec Hi" Method Bias towards rejecting just the highest ranked of the irrelevant documents: q_m = alpha*q + beta * sum of d_j over d_j in D_r - gamma * max_rank(D_n), where max_rank(D_n) is the highest-ranked irrelevant document. alpha: tunable weight for the initial query. beta: tunable weight for relevant documents. gamma: tunable weight for the irrelevant document.

CSE 8337 Spring Comparison of Methods Overall, experimental results indicate no clear preference for any one of the specific methods. All methods generally improve retrieval performance (recall & precision) with feedback. Generally just let tunable constants equal 1.

CSE 8337 Spring Relevance Feedback: Assumptions A1: User has sufficient knowledge for initial query. A2: Relevance prototypes are “well-behaved”. Term distribution in relevant documents will be similar Term distribution in non-relevant documents will be different from those in relevant documents Either: All relevant documents are tightly clustered around a single prototype. Or: There are different prototypes, but they have significant vocabulary overlap. Similarities between relevant and irrelevant documents are small

CSE 8337 Spring Violation of A1 User does not have sufficient initial knowledge. Examples: Misspellings (Brittany Speers). Cross-language information retrieval. Mismatch of searcher’s vocabulary vs. collection vocabulary Cosmonaut/astronaut

CSE 8337 Spring Violation of A2 There are several relevance prototypes. Examples: Burma/Myanmar Contradictory government policies Pop stars that worked at Burger King Often: instances of a general concept Good editorial content can address problem Report on contradictory government policies

CSE 8337 Spring Relevance Feedback: Problems Long queries are inefficient for typical IR engine. Long response times for user. High cost for retrieval system. Partial solution: Only reweight certain prominent terms Perhaps top 20 by term frequency Users are often reluctant to provide explicit feedback It’s often harder to understand why a particular document was retrieved after applying relevance feedback Why?

CSE 8337 Spring Evaluation of relevance feedback strategies Use q 0 and compute precision and recall graph Use q m and compute precision recall graph Assess on all documents in the collection Spectacular improvements, but … it’s cheating! Partly due to known relevant documents ranked higher Must evaluate with respect to documents not seen by user Use documents in residual collection (set of documents minus those assessed relevant) Measures usually lower than for original query But a more realistic evaluation Relative performance can be validly compared Empirically, one round of relevance feedback is often very useful. Two rounds is sometimes marginally useful.

CSE 8337 Spring Evaluation of relevance feedback Second method – assess only the docs not rated by the user in the first round Could make relevance feedback look worse than it really is Can still assess relative performance of algorithms Most satisfactory – use two collections each with their own relevance assessments q 0 and user feedback from first collection q m run on second collection and measured

CSE 8337 Spring Why is Feedback Not Widely Used? Users sometimes reluctant to provide explicit feedback. Results in long queries that require more computation to retrieve, and search engines process lots of queries and allow little time for each one. Makes it harder to understand why a particular document was retrieved.

CSE 8337 Spring Evaluation: Caveat True evaluation of usefulness must compare to other methods taking the same amount of time. Alternative to relevance feedback: User revises and resubmits query. Users may prefer revision/resubmission to having to judge relevance of documents. There is no clear evidence that relevance feedback is the “best use” of the user’s time.

CSE 8337 Spring Pseudo relevance feedback Pseudo-relevance feedback automates the “manual” part of true relevance feedback. Pseudo-relevance algorithm: Retrieve a ranked list of hits for the user’s query Assume that the top k documents are relevant. Do relevance feedback (e.g., Rocchio) Works very well on average But can go horribly wrong for some queries. Several iterations can cause query drift. Why?
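
A sketch of the loop, reusing the rocchio function from the earlier sketch; search_fn and doc_vectors are assumed interfaces (a ranked-retrieval function and a doc-id to term-vector map), not part of the slides:

def pseudo_relevance_feedback(query_vec, search_fn, doc_vectors, k=10, rounds=1):
    """Pseudo-relevance feedback: assume the top-k hits are relevant and
    apply Rocchio with gamma = 0 (no documents are marked nonrelevant).
    search_fn(query_vec) is assumed to return a ranked list of doc ids."""
    for _ in range(rounds):            # several rounds risk query drift
        ranking = search_fn(query_vec)
        top_docs = [doc_vectors[d] for d in ranking[:k]]
        query_vec = rocchio(query_vec, top_docs, [], alpha=1.0, beta=0.75, gamma=0.0)
    return query_vec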

CSE 8337 Spring PseudoFeedback Results Found to improve performance on TREC competition ad-hoc retrieval task. Works even better if top documents must also satisfy additional boolean constraints in order to be used in feedback.

CSE 8337 Spring Relevance Feedback on the Web Some search engines offer a similar/related pages feature (this is a trivial form of relevance feedback) Google Altavista But some don’t because it’s hard to explain to average user: Yahoo Excite initially had true relevance feedback, but abandoned it due to lack of use. α/β/γ ??

CSE 8337 Spring Excite Relevance Feedback Spink et al Only about 4% of query sessions from a user used relevance feedback option Expressed as “More like this” link next to each result But about 70% of users only looked at first page of results and didn’t pursue things further So 4% is about 1/8 of people extending search Relevance feedback improved results about 2/3 of the time

CSE 8337 Spring Query Expansion In relevance feedback, users give additional input (relevant/non-relevant) on documents, which is used to reweight terms in the documents In query expansion, users give additional input (good/bad search term) on words or phrases

CSE 8337 Spring How do we augment the user query? Manual thesaurus E.g. MedLine: physician, syn: doc, doctor, MD, medico Can be query rather than just synonyms Global Analysis: (static; of all documents in collection) Automatically derived thesaurus (co-occurrence statistics) Refinements based on query log mining Common on the web Local Analysis: (dynamic) Analysis of documents in result set

CSE 8337 Spring Local vs. Global Automatic Analysis Local – Documents retrieved are examined to automatically determine query expansion. No relevance feedback needed. Global – Thesaurus used to help select terms for expansion.

CSE 8337 Spring Automatic Local Analysis At query time, dynamically determine similar terms based on analysis of top-ranked retrieved documents. Base correlation analysis on only the “local” set of retrieved documents for a specific query. Avoids ambiguity by determining similar (correlated) terms only within relevant documents. “Apple computer”  “Apple computer Powerbook laptop”

CSE 8337 Spring Automatic Local Analysis Expand query with terms found in local clusters. D l – set of documents retrieved for query q. V l – Set of words used in D l. S l – Set of distinct stems in V l. f si,j –Frequency of stem s i in document d j found in D l. Construct stem-stem association matrix.

CSE 8337 Spring Association Matrix An n x n stem-stem matrix with entries c_ij, the correlation factor between stem s_i and stem s_j: c_ij = sum over documents d_k in D_l of f_ik * f_jk, where f_ik is the frequency of term i in document k.

CSE 8337 Spring Normalized Association Matrix The frequency-based correlation factor favors more frequent terms, so normalize the association scores: s_ij = c_ij / (c_ii + c_jj - c_ij). The normalized score is 1 if two stems have the same frequency in all documents.
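
A small sketch of building the association and normalized association matrices from a local document set (each document given as a list of stems) and using them to expand a query; the names are illustrative:

from collections import Counter

def association_matrices(local_docs):
    """Stem-stem association for a local document set D_l.
    c[i][j] = sum over docs of f_ik * f_jk; s[i][j] = c_ij / (c_ii + c_jj - c_ij)."""
    counts = [Counter(doc) for doc in local_docs]        # doc = list of stems
    stems = sorted({stem for ck in counts for stem in ck})
    c = {si: {sj: sum(ck[si] * ck[sj] for ck in counts) for sj in stems} for si in stems}
    s = {si: {sj: (c[si][sj] / (c[si][si] + c[sj][sj] - c[si][sj])
                   if (c[si][si] + c[sj][sj] - c[si][sj]) else 0.0)
              for sj in stems} for si in stems}
    return c, s

def expand_query(query_stems, s, n=2):
    """Add, for each query stem, the n stems with the highest normalized association."""
    extra = set()
    for qi in query_stems:
        if qi in s:
            neighbours = sorted((v, sj) for sj, v in s[qi].items() if sj != qi)
            extra.update(sj for v, sj in neighbours[-n:])
    return list(query_stems) + sorted(extra - set(query_stems))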

CSE 8337 Spring Metric Correlation Matrix Association correlation does not account for the proximity of terms in documents, just co-occurrence frequencies within documents. Metric correlations account for term proximity: c_ij = sum over k_u in V_i and k_v in V_j of 1/r(k_u, k_v), where V_i is the set of all occurrences of term i in any document and r(k_u, k_v) is the distance in words between word occurrences k_u and k_v (infinity if k_u and k_v are occurrences in different documents).

CSE 8337 Spring Normalized Metric Correlation Matrix Normalize the scores to account for term frequencies: s_ij = c_ij / (|V_i| * |V_j|).

CSE 8337 Spring Query Expansion with Correlation Matrix For each term i in query, expand query with the n terms, j, with the highest value of c ij (s ij ). This adds semantically related terms in the “neighborhood” of the query terms.

CSE 8337 Spring Problems with Local Analysis Term ambiguity may introduce irrelevant statistically correlated terms. “Apple computer”  “Apple red fruit computer” Since terms are highly correlated anyway, expansion may not retrieve many additional documents.

CSE 8337 Spring Automatic Global Analysis Determine term similarity through a pre-computed statistical analysis of the complete corpus. Compute association matrices which quantify term correlations in terms of how frequently they co-occur. Expand queries with statistically most similar terms.

CSE 8337 Spring Automatic Global Analysis There are two modern variants based on a thesaurus-like structure built using all documents in collection Query Expansion based on a Similarity Thesaurus Query Expansion based on a Statistical Thesaurus

CSE 8337 Spring Thesaurus A thesaurus provides information on synonyms and semantically related words and phrases. Example: physician syn: ||croaker, doc, doctor, MD, medical, mediciner, medico, ||sawbones rel: medic, general practitioner, surgeon,

CSE 8337 Spring Thesaurus-based Query Expansion For each term, t, in a query, expand the query with synonyms and related words of t from the thesaurus. May weight added terms less than original query terms. Generally increases recall. May significantly decrease precision, particularly with ambiguous terms. “interest rate”  “interest rate fascinate evaluate”

CSE 8337 Spring Similarity Thesaurus The similarity thesaurus is based on term-to-term relationships rather than on a matrix of co-occurrence. These relationships are not derived directly from co-occurrence of terms inside documents. They are obtained by considering that the terms are concepts in a concept space. In this concept space, each term is indexed by the documents in which it appears. Terms assume the original role of documents, while documents are interpreted as indexing elements.

CSE 8337 Spring Similarity Thesaurus The following definitions establish the proper framework: t: number of terms in the collection; N: number of documents in the collection; f_i,j: frequency of occurrence of the term k_i in the document d_j; t_j: vocabulary of document d_j (number of distinct terms); itf_j: inverse term frequency for document d_j.

CSE 8337 Spring Similarity Thesaurus The inverse term frequency for document d_j is itf_j = log(t / t_j). To each term k_i is associated a vector in the concept space, k_i = (w_i,1, w_i,2, ..., w_i,N),

CSE 8337 Spring Similarity Thesaurus where w_i,j is a weight associated to the index-document pair [k_i, d_j]. These weights combine a normalized within-document frequency with itf_j: w_i,j = ((0.5 + 0.5 * f_i,j / maxf_i) * itf_j) / sqrt(sum over all documents d_l of ((0.5 + 0.5 * f_i,l / maxf_i) * itf_l)^2), where maxf_i is the maximum frequency of term k_i over all documents; the denominator normalizes each term vector to unit length.

CSE 8337 Spring Similarity Thesaurus The relationship between two terms k_u and k_v is computed as a correlation factor c_u,v given by the scalar product of their vectors: c_u,v = k_u . k_v = sum over documents d_j of w_u,j * w_v,j. The global similarity thesaurus is built through the computation of the correlation factor c_u,v for each pair of indexing terms [k_u, k_v] in the collection.

CSE 8337 Spring Similarity Thesaurus This computation is expensive Global similarity thesaurus has to be computed only once and can be updated incrementally

CSE 8337 Spring Query Expansion based on a Similarity Thesaurus Query expansion is done in three steps as follows: 1. Represent the query in the concept space used for representation of the index terms. 2. Based on the global similarity thesaurus, compute a similarity sim(q, k_v) between each term k_v correlated to the query terms and the whole query q. 3. Expand the query with the top r ranked terms according to sim(q, k_v).

CSE 8337 Spring Query Expansion - step one To the query q is associated a vector q in the term-concept space given by q = sum over k_i in q of w_i,q * k_i, where w_i,q is a weight associated to the index-query pair [k_i, q].

CSE 8337 Spring Query Expansion - step two Compute a similarity sim(q, k_v) between each term k_v and the user query q: sim(q, k_v) = q . k_v = sum over k_u in q of w_u,q * c_u,v, where c_u,v is the correlation factor.

CSE 8337 Spring Query Expansion - step three Add the top r ranked terms according to sim(q, k_v) to the original query q to form the expanded query q'. To each expansion term k_v in the query q' is assigned a weight w_v,q' given by w_v,q' = sim(q, k_v) / (sum over k_u in q of w_u,q). The expanded query q' is then used to retrieve new documents for the user.
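
A simplified sketch of the whole pipeline. It uses a plain tf * itf weight with L2 normalization for the term-concept vectors rather than the exact w_i,j normalization above, and it scores candidate terms by summing correlations with the query terms (an unweighted query, i.e., w_u,q = 1); treat it as an approximation of the three steps above:

import math
from collections import Counter

def term_vectors(docs):
    """Represent each term as a vector over documents (the 'concept space'),
    using a simplified tf * itf weight, then L2-normalize each term vector."""
    t = len({w for d in docs for w in d})
    itf = [math.log(t / len(set(d))) if len(set(d)) else 0.0 for d in docs]
    vecs = {}
    for j, d in enumerate(docs):
        for term, f in Counter(d).items():
            vecs.setdefault(term, [0.0] * len(docs))[j] = f * itf[j]
    for term, v in vecs.items():
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        vecs[term] = [x / norm for x in v]
    return vecs

def expand_with_similarity_thesaurus(query_terms, vecs, r=2):
    """sim(q, k_v) = sum over query terms k_u of c_{u,v}; add the top r new terms."""
    def c(u, v):
        return sum(a * b for a, b in zip(vecs[u], vecs[v]))
    scores = {kv: sum(c(ku, kv) for ku in query_terms if ku in vecs)
              for kv in vecs if kv not in query_terms}
    top = sorted(scores, key=scores.get, reverse=True)[:r]
    return list(query_terms) + top

docs = [["d", "d", "a", "b", "c"], ["e", "c", "e", "a"], ["d", "c", "b", "a"]]
print(expand_with_similarity_thesaurus(["a", "e"], term_vectors(docs)))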

CSE 8337 Spring Query Expansion Sample Doc1 = D, D, A, B, C, A, B, C Doc2 = E, C, E, A, A, D Doc3 = D, C, B, B, D, A, B, C, A Doc4 = A c(A,A) = c(A,C) = c(A,D) = c(D,E) = c(B,E) = c(E,E) =

CSE 8337 Spring Query Expansion Sample Query: q = A E E sim(q,A) = sim(q,C) = sim(q,D) = sim(q,B) = sim(q,E) = New query: q’ = A C D E E w(A,q')= 6.88 w(C,q')= 6.75 w(D,q')= 6.75 w(E,q')= 6.64

CSE 8337 Spring WordNet A more detailed database of semantic relationships between English words. Developed by famous cognitive psychologist George Miller and a team at Princeton University. About 144,000 English words. Nouns, adjectives, verbs, and adverbs grouped into about 109,000 synonym sets called synsets.

CSE 8337 Spring WordNet Synset Relationships Antonym: front  back Attribute: benevolence  good (noun to adjective) Pertainym: alphabetical  alphabet (adjective to noun) Similar: unquestioning  absolute Cause: kill  die Entailment: breathe  inhale Holonym: chapter  text (part-of) Meronym: computer  cpu (whole-of) Hyponym: tree  plant (specialization) Hypernym: fruit  apple (generalization)

CSE 8337 Spring WordNet Query Expansion Add synonyms in the same synset. Add hyponyms to add specialized terms. Add hypernyms to generalize a query. Add other related terms to expand query.
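
A short sketch using NLTK's WordNet interface (assuming nltk is installed and the wordnet corpus has been downloaded); the term "tree" is just an example:

# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def wordnet_expand(term, max_terms=10):
    """Collect synonyms plus hyponym/hypernym lemmas for a query term."""
    expanded = set()
    for synset in wn.synsets(term):
        expanded.update(synset.lemma_names())                    # synonyms (same synset)
        for related in synset.hyponyms() + synset.hypernyms():   # specializations / generalizations
            expanded.update(related.lemma_names())
    expanded.discard(term)
    return sorted(expanded)[:max_terms]

print(wordnet_expand("tree"))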

CSE 8337 Spring Statistical Thesaurus Existing human-developed thesauri are not easily available in all languages. Human thesauri are limited in the type and range of synonymy and semantic relations they represent. Semantically related terms can be discovered from statistical analysis of corpora.

CSE 8337 Spring Query Expansion Based on a Statistical Thesaurus The global thesaurus is composed of classes which group correlated terms in the context of the whole collection. Such correlated terms can then be used to expand the original user query. These terms must be low-frequency terms. However, it is difficult to cluster low-frequency terms. To circumvent this problem, we cluster documents into classes instead and use the low-frequency terms in these documents to define our thesaurus classes. This algorithm must produce small and tight clusters.

CSE 8337 Spring Query Expansion based on a Statistical Thesaurus Use the thesaurus classes for query expansion. Compute an average term weight wt_C for each thesaurus class C.

CSE 8337 Spring Query Expansion based on a Statistical Thesaurus wt c can be used to compute a thesaurus class weight w c as

CSE 8337 Spring Query Expansion Sample TC = 0.90 NDC = 2.00 MIDF = 0.2 sim(1,3) = 0.99 sim(1,2) = 0.40 sim(2,3) = 0.29 sim(4,1) = 0.00 sim(4,2) = 0.00 sim(4,3) = 0.00 Doc1 = D, D, A, B, C, A, B, C Doc2 = E, C, E, A, A, D Doc3 = D, C, B, B, D, A, B, C, A Doc4 = A idf A = 0.0 idf B = 0.3 idf C = 0.12 idf D = 0.12 idf E = 0.60 q'=A B E E q= A E E

CSE 8337 Spring Query Expansion based on a Statistical Thesaurus Problems with this approach: initialization of the parameters TC, NDC, and MIDF. TC depends on the collection. Inspection of the cluster hierarchy is almost always necessary for assisting with the setting of TC. A high value of TC might yield classes with too few terms.

CSE 8337 Spring Complete link algorithm This is a document clustering algorithm which produces small and tight clusters: 1. Place each document in a distinct cluster. 2. Compute the similarity between all pairs of clusters. 3. Determine the pair of clusters [C_u, C_v] with the highest inter-cluster similarity. 4. Merge the clusters C_u and C_v. 5. Verify a stop criterion; if this criterion is not met, go back to step 2. 6. Return a hierarchy of clusters. The similarity between two clusters is defined as the minimum of the similarities between all pairs of inter-cluster documents.
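
A sketch of these steps, with an assumed pairwise document similarity function sim(x, y) (e.g., cosine similarity) and a similarity threshold standing in for the stop criterion; it returns the flat clustering rather than the full hierarchy described above:

def complete_link(doc_ids, sim, threshold):
    """Agglomerative clustering with complete-link similarity: the similarity of two
    clusters is the MINIMUM similarity over all inter-cluster document pairs.
    Merging stops when no pair of clusters exceeds the threshold."""
    clusters = [{d} for d in doc_ids]

    def cluster_sim(a, b):
        return min(sim(x, y) for x in a for y in b)

    while len(clusters) > 1:
        best = max(((cluster_sim(a, b), i, j)
                    for i, a in enumerate(clusters)
                    for j, b in enumerate(clusters) if i < j), key=lambda t: t[0])
        s, i, j = best
        if s < threshold:              # stop criterion: no tight-enough pair remains
            break
        clusters[i] = clusters[i] | clusters[j]
        del clusters[j]
    return clusters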

CSE 8337 Spring Selecting the terms that compose each class Given the document cluster hierarchy for the whole collection, the terms that compose each class of the global thesaurus are selected as follows Obtain from the user three parameters TC: Threshold class NDC: Number of documents in class MIDF: Minimum inverse document frequency

CSE 8337 Spring Selecting the terms that compose each class Use the parameter TC as threshold value for determining the document clusters that will be used to generate thesaurus classes This threshold has to be surpassed by sim(C u,C v ) if the documents in the clusters C u and C v are to be selected as sources of terms for a thesaurus class

CSE 8337 Spring Selecting the terms that compose each class Use the parameter NDC as a limit on the size of clusters (number of documents) to be considered. A low value of NDC might restrict the selection to the smaller cluster Cu+v

CSE 8337 Spring Selecting the terms that compose each class Consider the set of documents in each document cluster pre-selected above. Only the lower-frequency terms are used as sources of terms for the thesaurus classes. The parameter MIDF defines the minimum value of inverse document frequency for any term which is selected to participate in a thesaurus class.

CSE 8337 Spring Global vs. Local Analysis Global analysis requires intensive term correlation computation only once at system development time. Local analysis requires intensive term correlation computation for every query at run time (although number of terms and documents is less than in global analysis). But local analysis gives better results.

CSE 8337 Spring Example of manual thesaurus

CSE 8337 Spring Thesaurus-based query expansion For each term, t, in a query, expand the query with synonyms and related words of t from the thesaurus feline → feline cat May weight added terms less than original query terms. Generally increases recall Widely used in many science/engineering fields May significantly decrease precision, particularly with ambiguous terms. “interest rate”  “interest rate fascinate evaluate” There is a high cost of manually producing a thesaurus And for updating it for scientific changes

CSE 8337 Spring Automatic Thesaurus Generation Attempt to generate a thesaurus automatically by analyzing the collection of documents Fundamental notion: similarity between two words Definition 1: Two words are similar if they co-occur with similar words. Definition 2: Two words are similar if they occur in a given grammatical relation with the same words. You can harvest, peel, eat, prepare, etc. apples and pears, so apples and pears must be similar. Co-occurrence based is more robust, grammatical relations are more accurate.

CSE 8337 Spring Co-occurrence Thesaurus The simplest way to compute one is based on term-term similarities in C = A A^T, where A is the M x N term-document matrix (M terms t_i, N documents d_j) and w_i,j is the (normalized) weight for (t_i, d_j). For each term t_i, pick the terms with the highest values in row i of C. What does C contain if A is a term-doc incidence (0/1) matrix?
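
A tiny numpy sketch with a toy 0/1 incidence matrix (so C[i][j] is simply the number of documents in which terms i and j co-occur, and C[i][i] is term i's document frequency); the matrix values are made up for illustration:

import numpy as np

# A: term-document matrix (rows = terms, columns = documents), here 0/1 incidence.
A = np.array([[1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1]], dtype=float)     # toy 3-term x 4-doc example
C = A @ A.T                                   # term-term co-occurrence matrix

def similar_terms(C, term_index, k=2):
    """Return the indices of the k terms most similar to term_index."""
    scores = C[term_index].copy()
    scores[term_index] = -np.inf              # exclude the term itself
    return list(np.argsort(scores)[::-1][:k])

print(C)
print(similar_terms(C, 0))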

CSE 8337 Spring Automatic Thesaurus Generation Example

CSE 8337 Spring Automatic Thesaurus Generation Discussion Quality of associations is usually a problem. Term ambiguity may introduce irrelevant statistically correlated terms. “Apple computer”  “Apple red fruit computer” Problems: False positives: Words deemed similar that are not False negatives: Words deemed dissimilar that are similar Since terms are highly correlated anyway, expansion may not retrieve many additional documents.

CSE 8337 Spring Query Expansion Conclusions Expansion of queries with related terms can improve performance, particularly recall. However, must select similar terms very carefully to avoid problems, such as loss of precision.

CSE 8337 Spring Conclusion A thesaurus is an efficient method to expand queries. The computation is expensive, but it is executed only once. Query expansion based on a similarity thesaurus may use high-frequency terms to expand the query. Query expansion based on a statistical thesaurus needs well-defined parameters.

CSE 8337 Spring Query assist Would you expect such a feature to increase the query volume at a search engine?

CSE 8337 Spring Query assist Generally done by query log mining Recommend frequent recent queries that contain partial string typed by user A ranking problem! View each prior query as a doc – Rank-order those matching partial string …
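
A toy sketch of prefix-based suggestion over a query log, ranking matches by frequency; a production system would also weight recency, as noted above:

from collections import Counter

def suggest(prefix, query_log, k=5):
    """Rank-order prior queries from the log that start with the typed prefix,
    most frequent first (a minimal view of 'each prior query as a doc')."""
    matching = Counter(q for q in query_log if q.startswith(prefix))
    return [q for q, _ in matching.most_common(k)]

log = ["cheap flights", "cheap flights to rome", "chess openings", "cheap flights to rome"]
print(suggest("cheap f", log))   # ['cheap flights to rome', 'cheap flights']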