Presentation on theme: "Information Retrieval and Data Mining (AT71.07) Comp. Sc. and Inf. Mgmt." — Presentation transcript:
1 Information Retrieval and Data Mining (AT71.07)
Comp. Sc. and Inf. Mgmt., Asian Institute of Technology
Instructor: Dr. Sumanta Guha
Slide Sources: Introduction to Information Retrieval book slides from Stanford University, adapted and supplemented
Chapter 9: Relevance feedback and query expansion
2 CS276 Information Retrieval and Web Search
Christopher Manning and Prabhakar Raghavan
Lecture 9: Relevance feedback and query expansion
3 Recap: Unranked retrieval evaluation: Precision and Recall
Precision: fraction of retrieved docs that are relevant = P(relevant|retrieved)
Recall: fraction of relevant docs that are retrieved = P(retrieved|relevant)

                 Relevant    Nonrelevant
Retrieved          tp           fp
Not retrieved      fn           tn

Precision P = tp/(tp + fp)
Recall R = tp/(tp + fn)
4 Recap: A combined measure: F
A combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean):

F = 1 / (α(1/P) + (1 − α)(1/R)) = (β² + 1)PR / (β²P + R),  where β² = (1 − α)/α

People usually use the balanced F1 measure, i.e., with β = 1 (equivalently α = ½).
The harmonic mean is a conservative average.
See C. J. van Rijsbergen, Information Retrieval.
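A minimal sketch of these measures in Python (tp, fp, fn follow the contingency table above; the example numbers are illustrative, not from the slides):

    def precision(tp, fp):
        # Fraction of retrieved docs that are relevant: P(relevant | retrieved).
        return tp / (tp + fp)

    def recall(tp, fn):
        # Fraction of relevant docs that are retrieved: P(retrieved | relevant).
        return tp / (tp + fn)

    def f_measure(p, r, beta=1.0):
        # Weighted harmonic mean of precision and recall;
        # beta = 1 (i.e., alpha = 1/2) gives the balanced F1 measure.
        b2 = beta * beta
        return (b2 + 1) * p * r / (b2 * p + r)

    # Example: 40 relevant retrieved, 10 non-relevant retrieved, 20 relevant missed.
    p, r = precision(40, 10), recall(40, 20)
    print(p, r, f_measure(p, r))  # 0.8 0.666... 0.727...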
5 This lecture
Options for improving results, e.g., for high recall: searching for aircraft should also match plane; thermodynamics should match heat.
Options for improving results:
  Global methods:
    Query expansion
    Thesauri
    Automatic thesaurus generation
  Local methods:
    Relevance feedback
    Pseudo relevance feedback
The aircraft/plane and thermodynamics/heat examples refer to the Cranfield data set, which we have used as a programming exercise.
6 Relevance Feedback (Sec. 9.1)
Relevance feedback: user feedback on the relevance of docs in the initial set of results.
1. User issues a (short, simple) query.
2. System returns results.
3. The user marks some results as relevant or non-relevant.
4. The system computes a better representation of the information need based on the feedback and returns results again.
Repeat: relevance feedback can go through one or more iterations (see the loop sketch below).
Principle: it may be difficult to formulate a good query when you don’t know the collection well, so iterate.
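The iteration as a hedged sketch in Python; search, get_user_judgments, and refine_query are hypothetical placeholders for the engine's retrieval call, the user's marking step, and the query-update rule (e.g., Rocchio, later in this lecture):

    def relevance_feedback_loop(query, search, get_user_judgments, refine_query, rounds=2):
        # One or more iterations: retrieve, collect relevant/non-relevant marks,
        # recompute the query representation, then retrieve again.
        for _ in range(rounds):
            results = search(query)
            relevant, nonrelevant = get_user_judgments(results)
            query = refine_query(query, relevant, nonrelevant)
        return search(query)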
7 Relevance feedback (Sec. 9.1)
We will use "ad hoc retrieval" to refer to regular retrieval without relevance feedback.
We now look at four examples of relevance feedback that highlight different aspects.
8 Similar pages
Clicking this gives feedback to the search engine that the doc is relevant!
9 Relevance Feedback: Example (Sec. 9.1)
Image search engine. Site doesn’t exist any more!
10 Results for Initial Query (Sec. 9.1)
12 Results after Relevance Feedback (Sec. 9.1)
13 Initial query/results (Sec. 9.1)
Initial query: new space satellite applications
08/13/91, NASA Hasn’t Scrapped Imaging Spectrometer
07/09/91, NASA Scratches Environment Gear From Satellite Plan
04/04/90, Science Panel Backs NASA Satellite Plan, But Urges Launches of Smaller Probes
09/09/91, A NASA Satellite Project Accomplishes Incredible Feat: Staying Within Budget
07/24/90, Scientist Who Exposed Global Warming Proposes Satellites for Climate Research
08/22/90, Report Provides Support for the Critics Of Using Big Satellites to Study Climate
04/13/87, Arianespace Receives Satellite Launch Pact From Telesat Canada
12/02/87, Telecommunications Tale of Two Companies
User then marks relevant documents with “+”.
14 Expanded query after relevance feedback (Sec. 9.1)
 2.074 new          15.106 space
30.816 satellite     5.660 application
 5.991 nasa          5.196 eos
 4.196 launch        3.972 aster
 3.516 instrument    3.446 arianespace
 3.004 bundespost    2.806 ss
 2.790 rocket        2.053 scientist
 2.003 broadcast     1.172 earth
 0.836 oil           0.646 measure
15 Results for expanded query (Sec. 9.1)
* 07/09/91, NASA Scratches Environment Gear From Satellite Plan
08/13/91, NASA Hasn’t Scrapped Imaging Spectrometer
08/07/89, When the Pentagon Launches a Secret Satellite, Space Sleuths Do Some Spy Work of Their Own
07/31/89, NASA Uses ‘Warm’ Superconductors For Fast Circuit
12/02/87, Telecommunications Tale of Two Companies
07/09/91, Soviets May Adapt Parts of SS-20 Missile For Commercial Use
07/12/88, Gaping Gap: Pentagon Lags in Race To Match the Soviets In Rocket Launchers
06/14/90, Rescue of Satellite By Space Agency To Cost $90 Million
16 Key concept: Centroid (Sec. 9.1)
The centroid is the center of mass of a set of points.
Recall that we represent documents as points in a high-dimensional space.
Definition: μ(C) = (1/|C|) Σ_{d ∈ C} v(d), where C is a set of documents and v(d) is the vector for document d.
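A sketch with NumPy, assuming each document vector is a row of a matrix:

    import numpy as np

    def centroid(doc_vectors):
        # Center of mass of a set C of document vectors:
        # mu(C) = (1/|C|) * sum over d in C of v(d).
        return np.asarray(doc_vectors, dtype=float).mean(axis=0)

    C = np.array([[1.0, 0.0, 2.0],
                  [3.0, 1.0, 0.0]])
    print(centroid(C))  # [2.  0.5 1. ]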
17 Rocchio Algorithm (Sec. 9.1)
The Rocchio algorithm uses the vector space model to pick a relevance-feedback query.
Given an initial query q0, Rocchio seeks to manufacture the optimal query q_opt that maximizes similarity with the docs relevant to q0 and minimizes similarity with the docs non-relevant to q0, i.e.,

q_opt = argmax_q [ sim(q, C_r) − sim(q, C_nr) ]

Solution (under cosine similarity): the difference of the two centroids,

q_opt = (1/|C_r|) Σ_{d_j ∈ C_r} d_j − (1/|C_nr|) Σ_{d_j ∈ C_nr} d_j

Problem: we don’t know the truly relevant docs!
18 The Theoretically Best Query (Sec. 9.1)
[Figure: x = non-relevant documents, o = relevant documents; the optimal query vector points toward the cluster of relevant documents and away from the non-relevant ones.]
19 Rocchio 1971 Algorithm (SMART) (Sec. 9.1)
Used in practice:

q_m = α q0 + β (1/|D_r|) Σ_{d_j ∈ D_r} d_j − γ (1/|D_nr|) Σ_{d_j ∈ D_nr} d_j

D_r = set of known relevant doc vectors
D_nr = set of known non-relevant doc vectors
(Different from C_r and C_nr, the sets of truly relevant/non-relevant docs.)
q_m = modified query vector; q0 = original query vector
α, β, γ: weights (hand-chosen or set empirically)
The new query moves toward relevant documents and away from irrelevant documents!
SMART: Cornell (Salton) IR system of the 1970s to 1990s.
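A sketch of the update in NumPy; the default weights are illustrative, and the clipping of negative weights anticipates the next slide:

    import numpy as np

    def rocchio(q0, Dr, Dnr, alpha=1.0, beta=0.75, gamma=0.25):
        # q_m = alpha*q0 + beta*centroid(Dr) - gamma*centroid(Dnr)
        qm = alpha * np.asarray(q0, dtype=float)
        if len(Dr) > 0:
            qm = qm + beta * np.asarray(Dr, dtype=float).mean(axis=0)
        if len(Dnr) > 0:
            qm = qm - gamma * np.asarray(Dnr, dtype=float).mean(axis=0)
        # Negative components can appear because of the subtracted term;
        # they are ignored (set to 0).
        return np.maximum(qm, 0.0)

    q0 = np.array([1.0, 0.0, 0.0])
    Dr = [[0.0, 1.0, 0.0]]       # one known relevant doc vector
    Dnr = [[0.0, 0.0, 1.0]]      # one known non-relevant doc vector
    print(rocchio(q0, Dr, Dnr))  # [1.   0.75 0.  ]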
20 Subtleties to note (Sec. 9.1)
Tradeoff α vs. β/γ: if we have a lot of judged documents, we want a higher β/γ.
Some weights (components) of the new query vector q_m can become negative because of the subtracted term in the formula.
Negative term weights are ignored (set to 0).
21 Relevance feedback on initial query (Sec. 9.1)
[Figure: x = known non-relevant documents, o = known relevant documents; the revised query moves from the initial query toward the known relevant documents and away from the known non-relevant ones.]
22 Relevance Feedback in vector spaces (Sec. 9.1)
We can modify the query based on relevance feedback and apply the standard vector space model.
Use only the docs that were marked.
Relevance feedback can improve recall and precision.
Relevance feedback is most useful for increasing recall in situations where recall is important.
The assumption is that users will make the effort to review results and take the time to iterate.
Just as we modified the query in the vector space model, we could also modify it in a language-model-based approach; I’m not aware of work that uses language-model-based IR this way.
23 Positive vs Negative Feedback (Sec. 9.1)
Positive feedback is more valuable than negative feedback (so, set γ < β; e.g., γ = 0.25, β = 0.75).
Many systems allow only positive feedback (γ = 0).
Exercise 9.1: Under what conditions would the query q_m in the Rocchio formula above be the same as the query q0? In all other cases, is q_m closer than q0 to the centroid of the relevant docs?
Exercise 9.2: Why is positive feedback likely to be more useful than negative feedback to an IR system?
It’s harder for the user to give negative feedback (absence of evidence is not evidence of absence). It’s also harder to use, since relevant documents often form a tight cluster, but non-relevant documents rarely do.
24 Probabilistic Relevance Feedback
Build a Naive Bayes classifier using known relevant and known non-relevant docs as a training set.
Recall Naive Bayes classifiers from DM and the example of a training set from a computer shop:
  Possible values for hypothesis H are C1: buys_computer = ‘yes’ and C2: buys_computer = ‘no’.
  Attributes X = (age, income, student, credit_rating).
  Select Ci s.t. P(Ci|X) is maximum.
In the case of doc classification:
  Possible values for hypothesis H are C1: doc_relevant = ‘yes’ and C2: doc_relevant = ‘no’.
  Attributes X = (term1, term2, …, termk); Boolean attributes: present/not present in doc.
  Q = query, Cr = relevant documents, Cnr = non-relevant documents.
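A hedged sketch of such a classifier with Boolean term attributes; the add-one smoothing is an assumption, not something the slide specifies:

    import math
    from collections import Counter

    def train_nb(rel_docs, nonrel_docs, vocab):
        # rel_docs / nonrel_docs: lists of documents, each given as a set of terms.
        # Estimate P(term present | class) with add-one smoothing (assumption).
        def cond_probs(docs):
            df = Counter(t for d in docs for t in d & vocab)
            return {t: (df[t] + 1) / (len(docs) + 2) for t in vocab}
        prior_rel = len(rel_docs) / (len(rel_docs) + len(nonrel_docs))
        return prior_rel, cond_probs(rel_docs), cond_probs(nonrel_docs)

    def classify_relevant(doc, prior_rel, p_rel, p_nonrel):
        # Select the class Ci maximizing P(Ci) * product over terms of P(Xt | Ci),
        # computed in log space for numerical stability.
        s_rel, s_nonrel = math.log(prior_rel), math.log(1 - prior_rel)
        for t in p_rel:
            s_rel += math.log(p_rel[t] if t in doc else 1 - p_rel[t])
            s_nonrel += math.log(p_nonrel[t] if t in doc else 1 - p_nonrel[t])
        return s_rel > s_nonrel  # True => doc_relevant = 'yes'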
25 Rocchio Relevance Feedback: Assumptions (Sec. 9.1)
A1: User has sufficient knowledge for the initial query.
A2: Relevance prototypes are “well-behaved”:
  Term distribution in relevant documents will be similar.
  Term distribution in non-relevant documents will be different from those in relevant documents.
In other words, relevant and non-relevant docs are bunched in separate clusters.
26 Relevance Feedback: Problems
Long queries are inefficient for a typical IR engine:
  Long response times for the user.
  High cost for the retrieval system.
  Partial solution: only reweight certain prominent terms, e.g., the top 20 by frequency.
Users in many domains are often reluctant to provide explicit feedback.
Empirically, one round of relevance feedback is often very useful. Two rounds are sometimes marginally useful.
Domains where relevance feedback might be effective are recommender systems, where a user is looking for a very specific answer, e.g., hotel, flight, etc.
Why long queries are inefficient: a long vector space query is like a disjunction, so you have to consider documents with any of the words and sum partial cosine similarities over the various terms.
27 Relevance Feedback on the Web (Sec. 9.1)
Some search engines offer a similar/related pages feature (this is a trivial form of relevance feedback):
  Google (link-based)
  Altavista
  Stanford WebBase
But some don’t, because it’s hard to explain to the average user:
  Alltheweb
  Bing
  Yahoo
Excite initially had true relevance feedback, but abandoned it due to lack of use.
28 Excite Relevance Feedback (Sec. 9.1)
Spink et al. 2000:
Only about 4% of query sessions used the relevance feedback option, expressed as a “More like this” link next to each result.
But about 70% of users only looked at the first page of results and didn’t pursue things further.
So 4% is about 1/8 of the people who extended their search (4% of all sessions out of the roughly 30% that went beyond the first page).
Relevance feedback improved results about 2/3 of the time.
Modern search engines have become extremely efficient at ad hoc retrieval (i.e., without relevance feedback), so the importance of relevance feedback in search has diminished.
29 Pseudo relevance feedback (Sec. 9.1)
Pseudo-relevance feedback automates the “manual” part of true relevance feedback.
Pseudo-relevance algorithm:
1. Retrieve a ranked list of hits for the user’s query.
2. Assume that the top k documents are relevant.
3. Do relevance feedback (e.g., Rocchio).
Works very well on average, but can go horribly wrong for some queries.
Several iterations can cause query drift.
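A sketch reusing the rocchio function above; search and doc_vector are hypothetical hooks into the retrieval engine:

    def pseudo_relevance_feedback(q0, search, doc_vector, k=10, iterations=1):
        # Automate the feedback step: assume the top k hits are relevant,
        # so no user input is needed.
        q = q0
        for _ in range(iterations):
            top_k = search(q)[:k]
            q = rocchio(q, Dr=[doc_vector(d) for d in top_k], Dnr=[])
            # More iterations risk query drift: the query can wander away
            # from the original information need.
        return search(q)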
30 Query Expansion (Sec. 9.2)
In relevance feedback, users give additional input (relevant/non-relevant) on documents, which is used to reweight the terms in the query.
In query expansion, users give additional input (good/bad search term) on words or phrases.
31 Query assist (assisted expansion)
Would you expect such a feature to increase the query volume at a search engine?
32 Global method: Query Reformulation (Sec. 9.2)
Manual thesaurus, e.g., MedLine: physician, syn: doc, doctor, MD, medico.
Example expansion: feline → feline cat.
Global analysis (static; over all documents in the collection):
  Automatically derived thesaurus (co-occurrence statistics)
  Refinements based on query-log mining (common on the web)
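A minimal sketch of thesaurus-based expansion; the tiny thesaurus dict below is illustrative, not MedLine's actual data:

    THESAURUS = {
        "physician": ["doc", "doctor", "md", "medico"],  # manual synonyms
        "feline": ["cat"],
    }

    def expand_query(terms):
        # Keep each original query term and append its thesaurus entries.
        expanded = []
        for t in terms:
            expanded.append(t)
            expanded.extend(THESAURUS.get(t, []))
        return expanded

    print(expand_query(["feline"]))  # ['feline', 'cat']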
33 Co-occurrence Thesaurus (Sec. 9.2)
The simplest way to compute one is based on term-term similarities in C = A Aᵀ, where A is the M × N term-document matrix (M terms t_i as rows, N documents d_j as columns).
w_{i,j} = (normalized) weight for (t_i, d_j).
For each t_i, pick the terms with the highest values in row i of C.
What does C contain if A is a term-document incidence (0/1) matrix?
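A sketch in NumPy; the comments also answer the incidence-matrix question on this slide:

    import numpy as np

    # A: M x N term-document matrix (rows = terms, columns = docs), here 0/1.
    A = np.array([[1, 1, 0],   # t0 occurs in d0, d1
                  [1, 0, 1],   # t1 occurs in d0, d2
                  [0, 1, 0]])  # t2 occurs in d1

    C = A @ A.T  # term-term similarity matrix
    # For a 0/1 incidence matrix, C[i][j] is the number of documents that
    # contain both t_i and t_j; the diagonal C[i][i] is t_i's document frequency.
    print(C)

    # For each t_i, pick the other terms with the highest values in row i.
    i = 0
    print(np.argsort(-C[i]))  # term indices by co-occurrence with t0 (t0 itself first)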
34 Automatic Thesaurus Generation Example (Sec. 9.2)