
1 Information Retrieval CSE 8337 (Part IV) Spring 2011 Some material for these slides obtained from: Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto http://www.sims.berkeley.edu/~hearst/irbook/ Data Mining Introductory and Advanced Topics by Margaret H. Dunham http://www.engr.smu.edu/~mhd/book Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze http://informationretrieval.org

2 CSE 8337 Spring 2011 2 CSE 8337 Outline Introduction Text Processing Indexes Boolean Queries Web Searching/Crawling Vector Space Model Matching Evaluation Feedback/Expansion

3 CSE 8337 Spring 2011 3 Why System Evaluation? There are many retrieval models/algorithms/systems; which one is the best? What does best mean? IR evaluation may not actually look at traditional CS metrics of space/time. What is the best component for: Ranking function (dot-product, cosine, …) Term selection (stopword removal, stemming, …) Term weighting (TF, TF-IDF, …) How far down the ranked list will a user need to look to find some/all relevant documents?

4 CSE 8337 Spring 2011 4 Measures for a search engine How fast does it index? Number of documents/hour (average document size) How fast does it search? Latency as a function of index size Expressiveness of query language Ability to express complex information needs Speed on complex queries Uncluttered UI Is it free?

5 CSE 8337 Spring 2011 5 Measures for a search engine All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise The key measure: user happiness What is this? Speed of response/size of index are factors But blindingly fast, useless answers won’t make a user happy Need a way of quantifying user happiness

6 CSE 8337 Spring 2011 6 Happiness: elusive to measure Most common proxy: relevance of search results But how do you measure relevance? We will detail a methodology here, then examine its issues Relevance measurement requires 3 elements: 1. A benchmark document collection 2. A benchmark suite of queries 3. A usually binary assessment of either Relevant or Nonrelevant for each query and each document

7 CSE 8337 Spring 2011 7 Difficulties in Evaluating IR Systems Effectiveness is related to the relevancy of retrieved items. Relevancy is not typically binary but continuous. Even if relevancy is binary, it can be a difficult judgment to make. Relevancy, from a human standpoint, is: Subjective: Depends upon a specific user's judgment. Situational: Relates to user's current needs. Cognitive: Depends on human perception and behavior. Dynamic: Changes over time.

8 CSE 8337 Spring 2011 8 How to perform evaluation Start with a corpus of documents. Collect a set of queries for this corpus. Have one or more human experts exhaustively label the relevant documents for each query. Typically assumes binary relevance judgments. Requires considerable human effort for large document/query corpora.

9 CSE 8337 Spring 2011 9 IR Evaluation Metrics Precision/Recall P/R graph Regular Smoothing Interpolating Averaging ROC Curve MAP R-Precision P/R points F-Measure E-Measure Fallout Novelty Coverage Utility ….

10 CSE 8337 Spring 2011 10 Precision and Recall [Diagram: the entire document collection split into four regions by the relevant set and the retrieved set: retrieved & relevant, not retrieved but relevant, retrieved & irrelevant, not retrieved & irrelevant.] Precision = |retrieved & relevant| / |retrieved|; Recall = |retrieved & relevant| / |relevant|.
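
Below is a minimal Python sketch of these two definitions; the document-id sets are illustrative and not taken from the slides.

```python
# Minimal sketch: precision and recall from the retrieved and relevant sets.
# 'retrieved' and 'relevant' are illustrative document-id sets.

def precision_recall(retrieved, relevant):
    """Return (precision, recall) for one query."""
    hit = retrieved & relevant                      # retrieved & relevant
    precision = len(hit) / len(retrieved) if retrieved else 0.0
    recall = len(hit) / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d1", "d3", "d5", "d7", "d9", "d11"}
print(precision_recall(retrieved, relevant))        # (0.5, 0.333...)
```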

11 CSE 8337 Spring 2011 11 Determining Recall is Difficult Total number of relevant items is sometimes not available: Sample across the database and perform relevance judgment on these items. Apply different retrieval algorithms to the same database for the same query. The aggregate of relevant items is taken as the total relevant set.

12 CSE 8337 Spring 2011 12 Trade-off between Recall and Precision [Plot: precision (vertical axis) vs. recall (horizontal axis), both from 0 to 1. The ideal is the upper-right corner. High precision/low recall returns relevant documents but misses many useful ones; high recall/low precision returns most relevant documents but includes lots of junk.]

13 CSE 8337 Spring 2011 13 Recall-Precision Graph Example

14 CSE 8337 Spring 2011 14 A precision-recall curve

15 CSE 8337 Spring 2011 15 Recall-Precision Graph Smoothing Avoid sawtooth lines by smoothing Interpolate for one query Average across queries

16 CSE 8337 Spring 2011 16 Interpolating a Recall/Precision Curve Interpolate a precision value for each standard recall level: r_j ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}; r_0 = 0.0, r_1 = 0.1, …, r_10 = 1.0. The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level between the j-th and (j+1)-th level: P(r_j) = max { P(r) : r_j ≤ r ≤ r_(j+1) }

17 CSE 8337 Spring 2011 17 Interpolated precision Idea: if locally precision increases with increasing recall, then you should get to count that… So you take the max of precisions to the right of the value (need not be only at standard levels).
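
A small Python sketch of this interpolation, using the common convention that the interpolated precision at recall level r is the maximum precision at any observed recall at or above r; the (recall, precision) pairs are illustrative (they match the worked example on a later slide).

```python
# Hedged sketch of 11-point interpolated precision.
# 'pr_points' is a hypothetical list of (recall, precision) pairs from one ranked list.

def interpolate_11pt(pr_points):
    levels = [j / 10 for j in range(11)]            # 0.0, 0.1, ..., 1.0
    interpolated = []
    for r in levels:
        candidates = [p for (rec, p) in pr_points if rec >= r]
        interpolated.append(max(candidates) if candidates else 0.0)
    return list(zip(levels, interpolated))

pr_points = [(0.167, 1.0), (0.333, 1.0), (0.5, 0.75), (0.667, 0.667), (0.833, 0.38)]
for r, p in interpolate_11pt(pr_points):
    print(f"recall {r:.1f}: interpolated precision {p:.3f}")
```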

18 CSE 8337 Spring 2011 18 Precision across queries Recall and Precision are calculated for a specific query. Generally want a value for many queries. Calculate average precision/recall over a set of queries. Average precision at recall level r: P(r) = (1/N_q) Σ_i P_i(r), where N_q is the number of queries and P_i(r) is the precision at recall level r for the i-th query.

19 CSE 8337 Spring 2011 19 Average Recall/Precision Curve Typically average performance over a large set of queries. Compute average precision at each standard recall level across all queries. Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.

20 CSE 8337 Spring 2011 20 Compare Two or More Systems The curve closest to the upper right-hand corner of the graph indicates the best performance

21 CSE 8337 Spring 2011 21 Receiver Operating Characteristic Curve (ROC Curve)

22 CSE 8337 Spring 2011 22 ROC Curve Data False Positive Rate vs True Positive Rate True Positive Rate = tp/(tp+fn) = Recall = Sensitivity (proportion of positive items correctly retrieved) False Positive Rate = fp/(fp+tn) = 1 - Specificity, where Specificity = tn/(fp+tn) (proportion of negative items correctly not retrieved)
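
A minimal sketch of the two rates from a 2x2 confusion table; the counts are made up for illustration.

```python
# Sketch: one ROC point (FPR, TPR) from illustrative tp/fp/fn/tn counts.

def roc_point(tp, fp, fn, tn):
    tpr = tp / (tp + fn)        # sensitivity = recall
    fpr = fp / (fp + tn)        # 1 - specificity
    return fpr, tpr

print(roc_point(tp=30, fp=20, fn=10, tn=940))   # (~0.021, 0.75)
```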

23 CSE 8337 Spring 2011 23 Yet more evaluation measures… Mean average precision (MAP) Average of the precision values obtained for the top k documents, each time a relevant doc is retrieved Avoids interpolation and use of fixed recall levels MAP for a query collection is the arithmetic average Macro-averaging: each query counts equally R-precision If we have a known (though perhaps incomplete) set of relevant documents of size Rel, then calculate precision of the top Rel docs returned A perfect system could score 1.0.
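
A hedged sketch of average precision (per query), MAP across queries, and R-precision; the ranked list and relevance judgments are hypothetical.

```python
# Sketch of AP, MAP (macro-averaged), and R-precision for ranked retrieval.

def average_precision(ranking, relevant):
    hits, precisions = 0, []
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)             # precision at each relevant doc
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranking, relevant) pairs, one per query (macro-average)."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

def r_precision(ranking, relevant):
    r = len(relevant)
    return len(set(ranking[:r]) & relevant) / r if r else 0.0

ranking = ["d3", "d7", "d1", "d9", "d2"]
relevant = {"d3", "d1", "d8"}
print(average_precision(ranking, relevant), r_precision(ranking, relevant))
```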

24 CSE 8337 Spring 2011 24 Variance For a test collection, it is usual that a system does poorly on some information needs (e.g., MAP = 0.1) and well on others (e.g., MAP = 0.7) Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query. That is, there are easy information needs and hard ones!

25 CSE 8337 Spring 2011 25 Evaluation Graphs are good, but people want summary measures! Precision at fixed retrieval level Precision-at-k: precision of the top k results Perhaps appropriate for most of web search: all people want are good matches on the first one or two results pages But: averages badly and has an arbitrary parameter k 11-point interpolated average precision The standard measure in the early TREC competitions: take the precision at 11 recall levels varying from 0 to 1 in tenths, using interpolation (the value for 0 is always interpolated!), and average them Evaluates performance at all recall levels

26 CSE 8337 Spring 2011 26 Typical (good) 11 point precisions SabIR/Cornell 8A1 11pt precision from TREC 8 (1999)

27 CSE 8337 Spring 2011 27 Computing Recall/Precision Points For a given query, produce the ranked list of retrievals. Adjusting a threshold on this ranked list produces different sets of retrieved documents, and therefore different recall/precision measures. Mark each document in the ranked list that is relevant according to the gold standard. Compute a recall/precision pair for each position in the ranked list that contains a relevant document.

28 CSE 8337 Spring 2011 28 Computing Recall/Precision Points: An Example (modified from [Salton83]) Let total # of relevant docs = 6. Check each new recall point: R=1/6=0.167; P=1/1=1 R=2/6=0.333; P=2/2=1 R=3/6=0.5; P=3/4=0.75 R=4/6=0.667; P=4/6=0.667 R=5/6=0.833; P=5/13=0.38
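
The same computation as a short Python sketch; the ranked list below is a hypothetical one chosen so that the relevant documents fall at ranks 1, 2, 4, 6, and 13, which reproduces the numbers above (one relevant document is never retrieved).

```python
# Sketch: walk down the ranked list and emit an (R, P) point at every position
# that holds a relevant document.

def recall_precision_points(ranking, relevant):
    points, hits = [], 0
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / k))
    return points

# 6 relevant docs in total; relevant ones appear at ranks 1, 2, 4, 6, 13 here.
ranking = [f"d{i}" for i in range(1, 14)]
relevant = {"d1", "d2", "d4", "d6", "d13", "d99"}   # d99 is never retrieved
for r, p in recall_precision_points(ranking, relevant):
    print(f"R={r:.3f}  P={p:.3f}")
```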

29 CSE 8337 Spring 2011 29 F-Measure One measure of performance that takes into account both recall and precision. Harmonic mean of recall and precision: F = 2PR / (P + R) Calculated at a specific document in the ranking. Compared to the arithmetic mean, both precision and recall need to be high for the harmonic mean to be high. Compromise between precision and recall.

30 CSE 8337 Spring 2011 30 A combined measure: F Combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean): F = 1 / (α·(1/P) + (1−α)·(1/R)) = (β²+1)PR / (β²P + R), where β² = (1−α)/α People usually use the balanced F1 measure, i.e., with β = 1 or α = ½

31 CSE 8337 Spring 2011 31 E Measure (parameterized F Measure) A variant of the F measure that allows weighting emphasis on precision or recall: The value of β controls the trade-off: β = 1: Equally weight precision and recall (E=F). β > 1: Weight precision more. β < 1: Weight recall more.
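
A sketch of the weighted F measure, plus the E measure taken as 1 − F (a common definition); the β convention noted in the comments applies to the F_β formula used here, while the slide's own β parameterization of E is kept as stated above.

```python
# Sketch of F_beta and an E measure defined as 1 - F.
# For this F_beta formula: beta = 1 is balanced F1, beta = 2 favors recall,
# beta = 0.5 favors precision.

def f_measure(p, r, beta=1.0):
    if p == 0 and r == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r)

def e_measure(p, r, beta=1.0):
    return 1.0 - f_measure(p, r, beta)      # E = 1 - F (van Rijsbergen-style)

print(f_measure(0.75, 0.5))                 # balanced F1 = 0.6
print(e_measure(0.75, 0.5))                 # 0.4
```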

32 CSE 8337 Spring 2011 32 Fallout Rate Problems with both precision and recall: Number of irrelevant documents in the collection is not taken into account. Recall is undefined when there is no relevant document in the collection. Precision is undefined when no document is retrieved.

33 CSE 8337 Spring 2011 33 Fallout Fallout = (# of nonrelevant documents retrieved) / (total # of nonrelevant documents in the collection). Want fallout to be close to 0. In general want to maximize recall and minimize fallout. Examine the fallout-recall graph. More systems oriented than recall-precision.

34 CSE 8337 Spring 2011 34 Subjective Relevance Measures Novelty Ratio: The proportion of items retrieved and judged relevant by the user and of which they were previously unaware. Ability to find new information on a topic. Coverage Ratio: The proportion of relevant items retrieved out of the total relevant documents known to a user prior to the search. Relevant when the user wants to locate documents which they have seen before (e.g., the budget report for Year 2000).

35 CSE 8337 Spring 2011 35 Utility Subjective measure Cost-Benefit Analysis for retrieved documents Cr – Benefit of retrieving relevant document Cnr – Cost of retrieving a nonrelevant document Crn – Cost of not retrieving a relevant document Nr – Number of relevant documents retrieved Nnr – Number of nonrelevant documents retrieved Nrn – Number of relevant documents not retrieved

36 CSE 8337 Spring 2011 36 Other Factors to Consider User effort: Work required from the user in formulating queries, conducting the search, and screening the output. Response time: Time interval between receipt of a user query and the presentation of system responses. Form of presentation: Influence of search output format on the user's ability to utilize the retrieved materials. Collection coverage: Extent to which any/all relevant items are included in the document corpus.

37 CSE 8337 Spring 2011 37 Experimental Setup for Benchmarking Analytical performance evaluation is difficult for document retrieval systems because many characteristics such as relevance, distribution of words, etc., are difficult to describe with mathematical precision. Performance is measured by benchmarking. That is, the retrieval effectiveness of a system is evaluated on a given set of documents, queries, and relevance judgments. Performance data is valid only for the environment under which the system is evaluated.

38 CSE 8337 Spring 2011 38 Benchmarks A benchmark collection contains: A set of standard documents and queries/topics. A list of relevant documents for each query. Standard collections for traditional IR: TREC: http://trec.nist.gov/ [Diagram: the algorithm under test runs the standard queries against the standard document collection; its retrieved result is evaluated against the standard result to produce precision and recall.]

39 CSE 8337 Spring 2011 39 Benchmarking: The Problems Performance data is valid only for a particular benchmark. Building a benchmark corpus is a difficult task. Benchmark web corpora are just starting to be developed. Benchmark foreign-language corpora are just starting to be developed.

40 CSE 8337 Spring 2011 40 The TREC Benchmark TREC: Text REtrieval Conference (http://trec.nist.gov/)http://trec.nist.gov/ Originated from the TIPSTER program sponsored by Defense Advanced Research Projects Agency (DARPA). Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA. Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing. Participants submit the P/R values for the final document and query corpus and present their results at the conference.

41 CSE 8337 Spring 2011 41 The TREC Objectives Provide a common ground for comparing different IR techniques. –Same set of documents and queries, and same evaluation method. Sharing of resources and experiences in developing the benchmark. –With major sponsorship from government to develop large benchmark collections. Encourage participation from industry and academia. Development of new evaluation techniques, particularly for new applications. –Retrieval, routing/filtering, non-English collection, web-based collection, question answering.

42 CSE 8337 Spring 2011 42 From document collections to test collections Still need Test queries Relevance assessments Test queries Must be germane to docs available Best designed by domain experts Random query terms generally not a good idea Relevance assessments Human judges, time-consuming Are human panels perfect?

43 CSE 8337 Spring 2011 43 Kappa measure for inter-judge (dis)agreement Kappa measure Agreement measure among judges Designed for categorical judgments Corrects for chance agreement Kappa = [ P(A) – P(E) ] / [ 1 – P(E) ] P(A) – proportion of time judges agree P(E) – what agreement would be by chance Kappa = 0 for chance agreement, 1 for total agreement.

44 CSE 8337 Spring 2011 44 Kappa Measure: Example
Number of docs   Judge 1        Judge 2
300              Relevant       Relevant
70               Nonrelevant    Nonrelevant
20               Relevant       Nonrelevant
10               Nonrelevant    Relevant
P(A)? P(E)?

45 CSE 8337 Spring 2011 45 Kappa Example P(A) = 370/400 = 0.925 P(nonrelevant) = (10+20+70+70)/800 = 0.2125 P(relevant) = (10+20+300+300)/800 = 0.7875 P(E) = 0.2125^2 + 0.7875^2 = 0.665 Kappa = (0.925 – 0.665)/(1 – 0.665) = 0.776 Kappa > 0.8: good agreement; 0.67 < Kappa < 0.8: "tentative conclusions" (Carletta '96) Depends on purpose of study For >2 judges: average pairwise kappas
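
The worked example as a short function; the counts come from the table on the previous slide, and the marginals are pooled over both judges as in the slide's arithmetic.

```python
# Sketch: Cohen's kappa for two judges from the 2x2 agreement table.
# Counts: both relevant = 300, both nonrelevant = 70, judge-1-only relevant = 20,
# judge-2-only relevant = 10 (400 documents in total).

def cohen_kappa(both_rel, both_non, j1_only, j2_only):
    n = both_rel + both_non + j1_only + j2_only
    p_agree = (both_rel + both_non) / n
    # Marginal probabilities, pooled over both judges as on the slide.
    p_rel = (2 * both_rel + j1_only + j2_only) / (2 * n)
    p_non = 1 - p_rel
    p_chance = p_rel ** 2 + p_non ** 2
    return (p_agree - p_chance) / (1 - p_chance)

print(cohen_kappa(300, 70, 20, 10))   # ~0.776
```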

46 CSE 8337 Spring 2011 46 Interjudge Agreement: TREC 3

47 CSE 8337 Spring 2011 47 Impact of Inter-judge Agreement Impact on absolute performance measure can be significant (0.32 vs 0.39) Little impact on ranking of different systems or relative performance Suppose we want to know if algorithm A is better than algorithm B A standard information retrieval experiment will give us a reliable answer to this question.

48 CSE 8337 Spring 2011 48 Critique of pure relevance Relevance vs Marginal Relevance A document can be redundant even if it is highly relevant Duplicates The same information from different sources Marginal relevance is a better measure of utility for the user. Using facts/entities as evaluation units more directly measures true relevance. But harder to create evaluation set

49 CSE 8337 Spring 2011 49 Can we avoid human judgment? No Makes experimental work hard Especially on a large scale In some very specific settings, can use proxies E.g.: for approximate vector space retrieval, we can compare the cosine distance closeness of the closest docs to those found by an approximate retrieval algorithm But once we have test collections, we can reuse them (so long as we don’t overtrain too badly)

50 CSE 8337 Spring 2011 50 Evaluation at large search engines Search engines have test collections of queries and hand-ranked results Recall is difficult to measure on the web Search engines often use precision at top k, e.g., k = 10... or measures that reward you more for getting rank 1 right than for getting rank 10 right. NDCG (Normalized Discounted Cumulative Gain) Search engines also use non-relevance-based measures. Clickthrough on first result Not very reliable if you look at a single clickthrough … but pretty reliable in the aggregate. Studies of user behavior in the lab A/B testing
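
A hedged sketch of NDCG@k using one common gain/discount choice (rel_i / log2(i+1)); other variants (e.g., 2^rel − 1 gains) are also in use. The graded relevance values are illustrative.

```python
# Sketch of DCG@k and NDCG@k over graded relevance judgments in ranked order.
import math

def dcg(gains, k):
    return sum(g / math.log2(i + 1) for i, g in enumerate(gains[:k], start=1))

def ndcg(gains, k):
    ideal = dcg(sorted(gains, reverse=True), k)    # best possible ordering
    return dcg(gains, k) / ideal if ideal > 0 else 0.0

# Hypothetical graded relevance of the top results as ranked by the system.
print(ndcg([3, 2, 3, 0, 1, 2], k=6))
```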

51 CSE 8337 Spring 2011 51 A/B testing Purpose: Test a single innovation Prerequisite: You have a large search engine up and running. Have most users use the old system Divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation Evaluate with an "automatic" measure like clickthrough on first result Now we can directly see if the innovation does improve user happiness. Probably the evaluation methodology that large search engines trust most In principle less powerful than doing a multivariate regression analysis, but easier to understand Problems with A/B Testing: http://www.sigkdd.org/explorations/issues/12-2-2010-12/v12-02-8-UR-Kohavi.pdf

52 CSE 8337 Spring 2011 52 CSE 8337 Outline Introduction Text Processing Indexes Boolean Queries Web Searching/Crawling Vector Space Model Matching Evaluation Feedback/Expansion

53 CSE 8337 Spring 2011 53 Query Operations Introduction IR queries as stated by the user may not be precise or effective. There are many techniques to improve a stated query and then process that query instead.

54 CSE 8337 Spring 2011 54 How can results be improved? Options for improving result Local methods Personalization Relevance feedback Pseudo relevance feedback Query expansion Local Analysis Thesauri Automatic thesaurus generation Query assist

55 CSE 8337 Spring 2011 55 Relevance Feedback Relevance feedback: user feedback on relevance of docs in initial set of results User issues a (short, simple) query The user marks some results as relevant or non- relevant. The system computes a better representation of the information need based on feedback. Relevance feedback can go through one or more iterations. Idea: it may be difficult to formulate a good query when you don’t know the collection well, so iterate

56 CSE 8337 Spring 2011 56 Relevance Feedback After initial retrieval results are presented, allow the user to provide feedback on the relevance of one or more of the retrieved documents. Use this feedback information to reformulate the query. Produce new results based on reformulated query. Allows more interactive, multi-pass process.

57 CSE 8337 Spring 2011 57 Relevance feedback We will use ad hoc retrieval to refer to regular retrieval without relevance feedback. We now look at four examples of relevance feedback that highlight different aspects.

58 CSE 8337 Spring 2011 58 Similar pages

59 CSE 8337 Spring 2011 59 Relevance Feedback: Example Image search engine

60 CSE 8337 Spring 2011 60 Results for Initial Query

61 CSE 8337 Spring 2011 61 Relevance Feedback

62 CSE 8337 Spring 2011 62 Results after Relevance Feedback

63 CSE 8337 Spring 2011 63 Initial query/results Initial query: New space satellite applications
1. 0.539, 08/13/91, NASA Hasn't Scrapped Imaging Spectrometer
2. 0.533, 07/09/91, NASA Scratches Environment Gear From Satellite Plan
3. 0.528, 04/04/90, Science Panel Backs NASA Satellite Plan, But Urges Launches of Smaller Probes
4. 0.526, 09/09/91, A NASA Satellite Project Accomplishes Incredible Feat: Staying Within Budget
5. 0.525, 07/24/90, Scientist Who Exposed Global Warming Proposes Satellites for Climate Research
6. 0.524, 08/22/90, Report Provides Support for the Critics Of Using Big Satellites to Study Climate
7. 0.516, 04/13/87, Arianespace Receives Satellite Launch Pact From Telesat Canada
8. 0.509, 12/02/87, Telecommunications Tale of Two Companies
The user then marks three of the results as relevant with "+".

64 CSE 8337 Spring 2011 64 Expanded query after relevance feedback 2.074 new 15.106 space 30.816 satellite 5.660 application 5.991 nasa 5.196 eos 4.196 launch 3.972 aster 3.516 instrument 3.446 arianespace 3.004 bundespost 2.806 ss 2.790 rocket 2.053 scientist 2.003 broadcast 1.172 earth 0.836 oil 0.646 measure

65 CSE 8337 Spring 2011 65 Results for expanded query
1. 0.513, 07/09/91, NASA Scratches Environment Gear From Satellite Plan
2. 0.500, 08/13/91, NASA Hasn't Scrapped Imaging Spectrometer
3. 0.493, 08/07/89, When the Pentagon Launches a Secret Satellite, Space Sleuths Do Some Spy Work of Their Own
4. 0.493, 07/31/89, NASA Uses 'Warm' Superconductors For Fast Circuit
5. 0.492, 12/02/87, Telecommunications Tale of Two Companies
6. 0.491, 07/09/91, Soviets May Adapt Parts of SS-20 Missile For Commercial Use
7. 0.490, 07/12/88, Gaping Gap: Pentagon Lags in Race To Match the Soviets In Rocket Launchers
8. 0.490, 06/14/90, Rescue of Satellite By Space Agency To Cost $90 Million

66 CSE 8337 Spring 2011 66 Relevance Feedback Use assessments by users as to the relevance of previously returned documents to create new (modify old) queries. Technique: 1. Increase weights of terms from relevant documents. 2. Decrease weight of terms from nonrelevant documents.

67 CSE 8337 Spring 2011 67 Relevance Feedback Architecture [Diagram: the query string goes to the IR system, which ranks documents from the document corpus; the user marks ranked documents as relevant/nonrelevant; this feedback drives query reformulation, and the revised query produces a re-ranked document list.]

68 CSE 8337 Spring 2011 68 Query Reformulation Revise query to account for feedback: Query Expansion: Add new terms to query from relevant documents. Term Reweighting: Increase weight of terms in relevant documents and decrease weight of terms in irrelevant documents. Several algorithms for query reformulation.

69 CSE 8337 Spring 2011 69 Relevance Feedback in vector spaces We can modify the query based on relevance feedback and apply standard vector space model. Use only the docs that were marked. Relevance feedback can improve recall and precision Relevance feedback is most useful for increasing recall in situations where recall is important Users can be expected to review results and to take time to iterate

70 CSE 8337 Spring 2011 70 The Theoretically Best Query [Diagram: in the vector space, relevant documents (o) and non-relevant documents (x); the optimal query vector points toward the relevant documents and away from the non-relevant ones.]

71 CSE 8337 Spring 2011 71 Query Reformulation for Vectors Change the query vector using vector algebra. Add the vectors for the relevant documents to the query vector. Subtract the vectors for the irrelevant docs from the query vector. This adds both positively and negatively weighted terms to the query, as well as reweighting the initial terms.

72 CSE 8337 Spring 2011 72 Optimal Query Assume that the relevant set of documents C_r is known. Then the best query, which ranks all and only the relevant documents at the top, is: q_opt = (1/|C_r|) Σ_(d_j ∈ C_r) d_j − (1/(N − |C_r|)) Σ_(d_j ∉ C_r) d_j where N is the total number of documents.

73 CSE 8337 Spring 2011 73 Standard Rocchio Method Since the full set of relevant documents is unknown, just use the known relevant (D_r) and irrelevant (D_n) sets of documents and include the initial query q: q_m = α·q + (β/|D_r|) Σ_(d_j ∈ D_r) d_j − (γ/|D_n|) Σ_(d_j ∈ D_n) d_j α: Tunable weight for initial query. β: Tunable weight for relevant documents. γ: Tunable weight for irrelevant documents.
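
A minimal numpy sketch of this update; the α/β/γ defaults and the toy term-weight vectors are illustrative, and negative weights are clipped to zero, which is a common (but not the only) choice.

```python
# Sketch of the standard Rocchio reformulation:
# q_m = alpha*q + (beta/|Dr|)*sum(relevant) - (gamma/|Dn|)*sum(nonrelevant).
import numpy as np

def rocchio(q, rel_docs, nonrel_docs, alpha=1.0, beta=0.75, gamma=0.25):
    q_m = alpha * q
    if len(rel_docs):
        q_m = q_m + beta * np.mean(rel_docs, axis=0)
    if len(nonrel_docs):
        q_m = q_m - gamma * np.mean(nonrel_docs, axis=0)
    return np.maximum(q_m, 0.0)        # clip negative term weights to 0

q = np.array([1.0, 0.0, 0.5, 0.0])                         # illustrative query vector
rel = np.array([[0.8, 0.2, 0.9, 0.0], [0.6, 0.0, 0.7, 0.1]])
nonrel = np.array([[0.0, 0.9, 0.1, 0.8]])
print(rocchio(q, rel, nonrel))
```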

74 CSE 8337 Spring 2011 74 Relevance feedback on initial query [Diagram: starting from the initial query vector, the revised query moves toward the known relevant documents (o) and away from the known non-relevant documents (x).]

75 CSE 8337 Spring 2011 75 Positive vs Negative Feedback Positive feedback is more valuable than negative feedback (so set γ < β; e.g., γ = 0.25, β = 0.75). Many systems only allow positive feedback (γ = 0). Why?

76 CSE 8337 Spring 2011 76 Ide Regular Method Since more feedback should perhaps increase the degree of reformulation, do not normalize for the amount of feedback: q_m = α·q + β Σ_(d_j ∈ D_r) d_j − γ Σ_(d_j ∈ D_n) d_j α: Tunable weight for initial query. β: Tunable weight for relevant documents. γ: Tunable weight for irrelevant documents.

77 CSE 8337 Spring 2011 77 Ide "Dec Hi" Method Bias towards rejecting just the highest ranked of the irrelevant documents: q_m = α·q + β Σ_(d_j ∈ D_r) d_j − γ·max_rank(D_n), where max_rank(D_n) is the highest-ranked irrelevant document. α: Tunable weight for initial query. β: Tunable weight for relevant documents. γ: Tunable weight for the irrelevant document.

78 CSE 8337 Spring 2011 78 Comparison of Methods Overall, experimental results indicate no clear preference for any one of the specific methods. All methods generally improve retrieval performance (recall & precision) with feedback. Generally just let tunable constants equal 1.

79 CSE 8337 Spring 2011 79 Relevance Feedback: Assumptions A1: User has sufficient knowledge for initial query. A2: Relevance prototypes are “well-behaved”. Term distribution in relevant documents will be similar Term distribution in non-relevant documents will be different from those in relevant documents Either: All relevant documents are tightly clustered around a single prototype. Or: There are different prototypes, but they have significant vocabulary overlap. Similarities between relevant and irrelevant documents are small

80 CSE 8337 Spring 2011 80 Violation of A1 User does not have sufficient initial knowledge. Examples: Misspellings (Brittany Speers). Cross-language information retrieval. Mismatch of searcher’s vocabulary vs. collection vocabulary Cosmonaut/astronaut

81 CSE 8337 Spring 2011 81 Violation of A2 There are several relevance prototypes. Examples: Burma/Myanmar Contradictory government policies Pop stars that worked at Burger King Often: instances of a general concept Good editorial content can address problem Report on contradictory government policies

82 CSE 8337 Spring 2011 82 Relevance Feedback: Problems Long queries are inefficient for typical IR engine. Long response times for user. High cost for retrieval system. Partial solution: Only reweight certain prominent terms Perhaps top 20 by term frequency Users are often reluctant to provide explicit feedback It’s often harder to understand why a particular document was retrieved after applying relevance feedback Why?

83 CSE 8337 Spring 2011 83 Evaluation of relevance feedback strategies Use q_0 and compute a precision/recall graph Use q_m and compute a precision/recall graph Assess on all documents in the collection Spectacular improvements, but … it's cheating! Partly due to known relevant documents being ranked higher Must evaluate with respect to documents not seen by the user Use documents in the residual collection (set of documents minus those assessed relevant) Measures are usually lower than for the original query But a more realistic evaluation Relative performance can be validly compared Empirically, one round of relevance feedback is often very useful. Two rounds is sometimes marginally useful.

84 CSE 8337 Spring 2011 84 Evaluation of relevance feedback Second method: assess only the docs not rated by the user in the first round Could make relevance feedback look worse than it really is Can still assess relative performance of algorithms Most satisfactory: use two collections, each with their own relevance assessments q_0 and user feedback from the first collection q_m run on the second collection and measured

85 CSE 8337 Spring 2011 85 Why is Feedback Not Widely Used? Users sometimes reluctant to provide explicit feedback. Results in long queries that require more computation to retrieve, and search engines process lots of queries and allow little time for each one. Makes it harder to understand why a particular document was retrieved.

86 CSE 8337 Spring 2011 86 Evaluation: Caveat True evaluation of usefulness must compare to other methods taking the same amount of time. Alternative to relevance feedback: User revises and resubmits query. Users may prefer revision/resubmission to having to judge relevance of documents. There is no clear evidence that relevance feedback is the “best use” of the user’s time.

87 CSE 8337 Spring 2011 87 Pseudo relevance feedback Pseudo-relevance feedback automates the “manual” part of true relevance feedback. Pseudo-relevance algorithm: Retrieve a ranked list of hits for the user’s query Assume that the top k documents are relevant. Do relevance feedback (e.g., Rocchio) Works very well on average But can go horribly wrong for some queries. Several iterations can cause query drift. Why?
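
A hedged sketch of the loop just described, on top of a toy cosine-similarity retriever; the document matrix, k, and the Rocchio weights are all illustrative, and only the positive (relevant-centroid) term is used.

```python
# Sketch of pseudo relevance feedback: retrieve, assume the top k are relevant,
# then do a Rocchio-style update with no negative term.
import numpy as np

def retrieve(q, doc_matrix, k):
    scores = doc_matrix @ q / (
        np.linalg.norm(doc_matrix, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-scores)[:k]                 # top-k doc indices by cosine

def pseudo_feedback(q, doc_matrix, k=3, alpha=1.0, beta=0.75):
    top_k = retrieve(q, doc_matrix, k)             # assume the top k are relevant
    centroid = doc_matrix[top_k].mean(axis=0)
    return alpha * q + beta * centroid             # expanded query vector

doc_matrix = np.random.default_rng(0).random((20, 8))   # hypothetical doc vectors
q = np.zeros(8); q[2] = 1.0
q_expanded = pseudo_feedback(q, doc_matrix)
print(retrieve(q_expanded, doc_matrix, k=5))
```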

88 CSE 8337 Spring 2011 88 PseudoFeedback Results Found to improve performance on TREC competition ad-hoc retrieval task. Works even better if top documents must also satisfy additional boolean constraints in order to be used in feedback.

89 CSE 8337 Spring 2011 89 Relevance Feedback on the Web Some search engines offer a similar/related pages feature (this is a trivial form of relevance feedback) Google Altavista But some don’t because it’s hard to explain to average user: Yahoo Excite initially had true relevance feedback, but abandoned it due to lack of use. α/β/γ ??

90 CSE 8337 Spring 2011 90 Excite Relevance Feedback Spink et al. 2000 Only about 4% of query sessions from a user used relevance feedback option Expressed as “More like this” link next to each result But about 70% of users only looked at first page of results and didn’t pursue things further So 4% is about 1/8 of people extending search Relevance feedback improved results about 2/3 of the time

91 CSE 8337 Spring 2011 91 Query Expansion In relevance feedback, users give additional input (relevant/non-relevant) on documents, which is used to reweight terms in the documents In query expansion, users give additional input (good/bad search term) on words or phrases

92 CSE 8337 Spring 2011 92 How do we augment the user query? Manual thesaurus E.g. MedLine: physician, syn: doc, doctor, MD, medico Can be query rather than just synonyms Global Analysis: (static; of all documents in collection) Automatically derived thesaurus (co-occurrence statistics) Refinements based on query log mining Common on the web Local Analysis: (dynamic) Analysis of documents in result set

93 CSE 8337 Spring 2011 93 Local vs. Global Automatic Analysis Local – Documents retrieved are examined to automatically determine query expansion. No relevance feedback needed. Global – Thesaurus used to help select terms for expansion.

94 CSE 8337 Spring 2011 94 Automatic Local Analysis At query time, dynamically determine similar terms based on analysis of top-ranked retrieved documents. Base correlation analysis on only the "local" set of retrieved documents for a specific query. Avoids ambiguity by determining similar (correlated) terms only within relevant documents. "Apple computer" → "Apple computer Powerbook laptop"

95 CSE 8337 Spring 2011 95 Automatic Local Analysis Expand query with terms found in local clusters. D_l – set of documents retrieved for query q. V_l – set of words used in D_l. S_l – set of distinct stems in V_l. f_(si,j) – frequency of stem s_i in document d_j found in D_l. Construct a stem-stem association matrix.

96 CSE 8337 Spring 2011 96 Association Matrix The association matrix has a row and column for each stem in S_l; entry c_ij is the correlation factor between stem s_i and stem s_j: c_ij = Σ_(d_k ∈ D_l) f_(i,k) · f_(j,k) where f_(i,k) is the frequency of term i in document k.

97 CSE 8337 Spring 2011 97 Normalized Association Matrix The frequency-based correlation factor favors more frequent terms. Normalize association scores: s_ij = c_ij / (c_ii + c_jj − c_ij) The normalized score is 1 if two stems have the same frequency in all documents.
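
A small numpy sketch of building and normalizing the local association matrix; the stem-by-document frequency matrix F is a made-up example for the local document set D_l.

```python
# Sketch: c_ij = sum_k f_ik * f_jk over the retrieved documents, then
# s_ij = c_ij / (c_ii + c_jj - c_ij).
import numpy as np

F = np.array([[2, 0, 3],      # stem 0: frequency in docs d0, d1, d2
              [1, 1, 0],      # stem 1
              [0, 2, 2]])     # stem 2

C = F @ F.T                                     # unnormalized association matrix
diag = np.diag(C)
S = C / (diag[:, None] + diag[None, :] - C)     # normalized scores in [0, 1]
print(S.round(3))
```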

98 CSE 8337 Spring 2011 98 Metric Correlation Matrix Association correlation does not account for the proximity of terms in documents, just co-occurrence frequencies within documents. Metric correlations account for term proximity: c_ij = Σ_(k_u ∈ V_i) Σ_(k_v ∈ V_j) 1 / r(k_u, k_v) V_i: set of all occurrences of term i in any document. r(k_u, k_v): distance in words between word occurrences k_u and k_v (∞ if k_u and k_v are occurrences in different documents).

99 CSE 8337 Spring 2011 99 Normalized Metric Correlation Matrix Normalize scores to account for term frequencies: s_ij = c_ij / (|V_i| × |V_j|)

100 CSE 8337 Spring 2011 100 Query Expansion with Correlation Matrix For each term i in query, expand query with the n terms, j, with the highest value of c ij (s ij ). This adds semantically related terms in the “neighborhood” of the query terms.

101 CSE 8337 Spring 2011 101 Problems with Local Analysis Term ambiguity may introduce irrelevant statistically correlated terms. "Apple computer" → "Apple red fruit computer" Since terms are highly correlated anyway, expansion may not retrieve many additional documents.

102 CSE 8337 Spring 2011 102 Automatic Global Analysis Determine term similarity through a pre-computed statistical analysis of the complete corpus. Compute association matrices which quantify term correlations in terms of how frequently they co-occur. Expand queries with statistically most similar terms.

103 CSE 8337 Spring 2011 103 Automatic Global Analysis There are two modern variants based on a thesaurus-like structure built using all documents in collection Query Expansion based on a Similarity Thesaurus Query Expansion based on a Statistical Thesaurus

104 CSE 8337 Spring 2011 104 Thesaurus A thesaurus provides information on synonyms and semantically related words and phrases. Example: physician syn: ||croaker, doc, doctor, MD, medical, mediciner, medico, ||sawbones rel: medic, general practitioner, surgeon,

105 CSE 8337 Spring 2011 105 Thesaurus-based Query Expansion For each term, t, in a query, expand the query with synonyms and related words of t from the thesaurus. May weight added terms less than original query terms. Generally increases recall. May significantly decrease precision, particularly with ambiguous terms. "interest rate" → "interest rate fascinate evaluate"

106 CSE 8337 Spring 2011 106 Similarity Thesaurus The similarity thesaurus is based on term-to-term relationships rather than on a matrix of co-occurrence. These relationships are not derived directly from co-occurrence of terms inside documents. They are obtained by considering that the terms are concepts in a concept space. In this concept space, each term is indexed by the documents in which it appears. Terms assume the original role of documents, while documents are interpreted as indexing elements.

107 CSE 8337 Spring 2011 107 Similarity Thesaurus The following definitions establish the proper framework: t: number of terms in the collection N: number of documents in the collection f_(i,j): frequency of occurrence of the term k_i in the document d_j t_j: vocabulary of document d_j itf_j: inverse term frequency for document d_j

108 CSE 8337 Spring 2011 108 Similarity Thesaurus Inverse term frequency for document d_j: itf_j = log(t / t_j) To each term k_i is associated a vector k_i = (w_(i,1), w_(i,2), …, w_(i,N))

109 CSE 8337 Spring 2011 109 Similarity Thesaurus where w_(i,j) is a weight associated with the index-document pair [k_i, d_j]. These weights are computed from the term frequency f_(i,j) and the inverse term frequency itf_j.

110 CSE 8337 Spring 2011 110 Similarity Thesaurus The relationship between two terms k_u and k_v is computed as a correlation factor c_(u,v) given by c_(u,v) = k_u · k_v = Σ_(d_j) w_(u,j) × w_(v,j) The global similarity thesaurus is built through the computation of the correlation factor c_(u,v) for each pair of indexing terms [k_u, k_v] in the collection.

111 CSE 8337 Spring 2011 111 Similarity Thesaurus This computation is expensive Global similarity thesaurus has to be computed only once and can be updated incrementally

112 CSE 8337 Spring 2011 112 Query Expansion based on a Similarity Thesaurus Query expansion is done in three steps as follows: 1. Represent the query in the concept space used for representation of the index terms 2. Based on the global similarity thesaurus, compute a similarity sim(q, k_v) between each term k_v correlated to the query terms and the whole query q 3. Expand the query with the top r ranked terms according to sim(q, k_v)

113 CSE 8337 Spring 2011 113 Query Expansion - step one To the query q is associated a vector q in the term-concept space given by q = Σ_(k_i ∈ q) w_(i,q) k_i where w_(i,q) is a weight associated with the index-query pair [k_i, q]

114 CSE 8337 Spring 2011 114 Query Expansion - step two Compute a similarity sim(q, k_v) between each term k_v and the user query q: sim(q, k_v) = q · k_v = Σ_(k_u ∈ q) w_(u,q) × c_(u,v) where c_(u,v) is the correlation factor

115 CSE 8337 Spring 2011 115 Query Expansion - step three Add the top r ranked terms according to sim(q, k_v) to the original query q to form the expanded query q'. To each expansion term k_v in the query q' is assigned a weight w_(v,q') given by w_(v,q') = sim(q, k_v) / Σ_(k_u ∈ q) w_(u,q) The expanded query q' is then used to retrieve new documents for the user.
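
A sketch of the three steps, assuming the term-term correlation matrix C (the global similarity thesaurus) and the query weight vector are already available; the matrix values, r, and the re-weighting of added terms follow the formulation above and are otherwise illustrative.

```python
# Sketch: expand a query using a precomputed term-term correlation matrix C.
import numpy as np

def expand_query(w_q, C, r=2):
    sim = w_q @ C                                   # step 2: sim(q, k_v) for every term
    expanded = {i: float(w) for i, w in enumerate(w_q) if w > 0}
    candidates = [v for v in np.argsort(-sim) if w_q[v] == 0]
    for v in candidates[:r]:                        # step 3: add the top r new terms
        expanded[int(v)] = float(sim[v] / w_q.sum())
    return expanded                                 # term index -> weight in q'

C = np.array([[1.0, 0.6, 0.2, 0.1],                 # hypothetical correlation factors
              [0.6, 1.0, 0.3, 0.0],
              [0.2, 0.3, 1.0, 0.7],
              [0.1, 0.0, 0.7, 1.0]])
w_q = np.array([1.0, 0.0, 0.5, 0.0])                # query uses terms 0 and 2
print(expand_query(w_q, C, r=2))
```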

116 CSE 8337 Spring 2011 116 Query Expansion Sample Doc1 = D, D, A, B, C, A, B, C Doc2 = E, C, E, A, A, D Doc3 = D, C, B, B, D, A, B, C, A Doc4 = A c(A,A) = 10.991 c(A,C) = 10.781 c(A,D) = 10.781... c(D,E) = 10.398 c(B,E) = 10.396 c(E,E) = 10.224

117 CSE 8337 Spring 2011 117 Query Expansion Sample Query: q = A E E sim(q,A) = 24.298 sim(q,C) = 23.833 sim(q,D) = 23.833 sim(q,B) = 23.830 sim(q,E) = 23.435 New query: q’ = A C D E E w(A,q')= 6.88 w(C,q')= 6.75 w(D,q')= 6.75 w(E,q')= 6.64

118 CSE 8337 Spring 2011 118 WordNet A more detailed database of semantic relationships between English words. Developed by famous cognitive psychologist George Miller and a team at Princeton University. About 144,000 English words. Nouns, adjectives, verbs, and adverbs grouped into about 109,000 synonym sets called synsets.

119 CSE 8337 Spring 2011 119 WordNet Synset Relationships Antonym: front → back Attribute: benevolence → good (noun to adjective) Pertainym: alphabetical → alphabet (adjective to noun) Similar: unquestioning → absolute Cause: kill → die Entailment: breathe → inhale Holonym: chapter → text (part-of) Meronym: computer → cpu (whole-of) Hyponym: tree → plant (specialization) Hypernym: fruit → apple (generalization)

120 CSE 8337 Spring 2011 120 WordNet Query Expansion Add synonyms in the same synset. Add hyponyms to add specialized terms. Add hypernyms to generalize a query. Add other related terms to expand query.

121 CSE 8337 Spring 2011 121 Statistical Thesaurus Existing human-developed thesauri are not easily available in all languages. Human thesauri are limited in the type and range of synonymy and semantic relations they represent. Semantically related terms can be discovered from statistical analysis of corpora.

122 CSE 8337 Spring 2011 122 Query Expansion Based on a Statistical Thesaurus The global thesaurus is composed of classes which group correlated terms in the context of the whole collection Such correlated terms can then be used to expand the original user query These terms must be low-frequency terms However, it is difficult to cluster low-frequency terms To circumvent this problem, we cluster documents into classes instead and use the low-frequency terms in these documents to define our thesaurus classes. This algorithm must produce small and tight clusters.

123 CSE 8337 Spring 2011 123 Query Expansion based on a Statistical Thesaurus Use the thesaurus classes for query expansion. Compute an average term weight wt_C for each thesaurus class C.

124 CSE 8337 Spring 2011 124 Query Expansion based on a Statistical Thesaurus wt_C can be used to compute a thesaurus class weight w_C.

125 CSE 8337 Spring 2011 125 Query Expansion Sample TC = 0.90 NDC = 2.00 MIDF = 0.2 sim(1,3) = 0.99 sim(1,2) = 0.40 sim(2,3) = 0.29 sim(4,1) = 0.00 sim(4,2) = 0.00 sim(4,3) = 0.00 Doc1 = D, D, A, B, C, A, B, C Doc2 = E, C, E, A, A, D Doc3 = D, C, B, B, D, A, B, C, A Doc4 = A idf A = 0.0 idf B = 0.3 idf C = 0.12 idf D = 0.12 idf E = 0.60 q'=A B E E q= A E E

126 CSE 8337 Spring 2011 126 Query Expansion based on a Statistical Thesaurus Problems with this approach: initialization of the parameters TC, NDC and MIDF TC depends on the collection Inspection of the cluster hierarchy is almost always necessary for assisting with the setting of TC A high value of TC might yield classes with too few terms

127 CSE 8337 Spring 2011 127 Complete link algorithm This is a document clustering algorithm which produces small and tight clusters: 1. Place each document in a distinct cluster. 2. Compute the similarity between all pairs of clusters. 3. Determine the pair of clusters [C_u, C_v] with the highest inter-cluster similarity. 4. Merge the clusters C_u and C_v. 5. Verify a stop criterion. If this criterion is not met, then go back to step 2. 6. Return a hierarchy of clusters. Similarity between two clusters is defined as the minimum of the similarities between all pairs of inter-cluster documents.
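
A sketch of complete-link document clustering using scipy; the document vectors and the stopping threshold are illustrative. scipy's 'complete' method merges the pair of clusters whose farthest members are closest, which corresponds to the minimum-similarity criterion above.

```python
# Sketch: complete-link clustering of document vectors with scipy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

docs = np.random.default_rng(1).random((10, 5))     # 10 hypothetical doc vectors
dist = pdist(docs, metric="cosine")                 # cosine distance = 1 - similarity
Z = linkage(dist, method="complete")                # complete-link hierarchy
labels = fcluster(Z, t=0.4, criterion="distance")   # cut = stop criterion (threshold)
print(labels)
```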

128 CSE 8337 Spring 2011 128 Selecting the terms that compose each class Given the document cluster hierarchy for the whole collection, the terms that compose each class of the global thesaurus are selected as follows Obtain from the user three parameters TC: Threshold class NDC: Number of documents in class MIDF: Minimum inverse document frequency

129 CSE 8337 Spring 2011 129 Selecting the terms that compose each class Use the parameter TC as threshold value for determining the document clusters that will be used to generate thesaurus classes This threshold has to be surpassed by sim(C u,C v ) if the documents in the clusters C u and C v are to be selected as sources of terms for a thesaurus class

130 CSE 8337 Spring 2011 130 Selecting the terms that compose each class Use the parameter NDC as a limit on the size of clusters (number of documents) to be considered. A low value of NDC might restrict the selection to the smaller cluster Cu+v

131 CSE 8337 Spring 2011 131 Selecting the terms that compose each class Consider the set of documents in each document cluster pre-selected above. Only the lower-frequency terms are used as sources of terms for the thesaurus classes. The parameter MIDF defines the minimum value of inverse document frequency for any term which is selected to participate in a thesaurus class.

132 CSE 8337 Spring 2011 132 Global vs. Local Analysis Global analysis requires intensive term correlation computation only once at system development time. Local analysis requires intensive term correlation computation for every query at run time (although number of terms and documents is less than in global analysis). But local analysis gives better results.

133 CSE 8337 Spring 2011 133 Example of manual thesaurus

134 CSE 8337 Spring 2011 134 Thesaurus-based query expansion For each term, t, in a query, expand the query with synonyms and related words of t from the thesaurus feline → feline cat May weight added terms less than original query terms. Generally increases recall Widely used in many science/engineering fields May significantly decrease precision, particularly with ambiguous terms. "interest rate" → "interest rate fascinate evaluate" There is a high cost of manually producing a thesaurus And for updating it for scientific changes

135 CSE 8337 Spring 2011 135 Automatic Thesaurus Generation Attempt to generate a thesaurus automatically by analyzing the collection of documents Fundamental notion: similarity between two words Definition 1: Two words are similar if they co-occur with similar words. Definition 2: Two words are similar if they occur in a given grammatical relation with the same words. You can harvest, peel, eat, prepare, etc. apples and pears, so apples and pears must be similar. Co-occurrence based is more robust, grammatical relations are more accurate.

136 CSE 8337 Spring 2011 136 Co-occurrence Thesaurus The simplest way to compute one is based on term-term similarities in C = AA^T, where A is the M × N term-document matrix and w_(i,j) is the (normalized) weight for (t_i, d_j). For each t_i, pick terms with high values in C. What does C contain if A is a term-doc incidence (0/1) matrix?
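
A tiny numpy sketch of C = A·Aᵀ for a 0/1 term-document incidence matrix; here C[i, j] counts the documents containing both term i and term j, which answers the question on the slide. The matrix itself is illustrative.

```python
# Sketch: term-term co-occurrence matrix C = A @ A.T from a term-document matrix A.
import numpy as np

A = np.array([[1, 0, 1, 1],    # term 0 occurs in docs 0, 2, 3
              [1, 1, 0, 0],    # term 1
              [0, 1, 1, 0]])   # term 2
C = A @ A.T
print(C)
# For each term, the highest off-diagonal entries in its row are its
# co-occurrence "thesaurus" neighbors.
```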

137 CSE 8337 Spring 2011 137 Automatic Thesaurus Generation Example

138 CSE 8337 Spring 2011 138 Automatic Thesaurus Generation Discussion Quality of associations is usually a problem. Term ambiguity may introduce irrelevant statistically correlated terms. "Apple computer" → "Apple red fruit computer" Problems: False positives: Words deemed similar that are not False negatives: Words deemed dissimilar that are similar Since terms are highly correlated anyway, expansion may not retrieve many additional documents.

139 CSE 8337 Spring 2011 139 Query Expansion Conclusions Expansion of queries with related terms can improve performance, particularly recall. However, must select similar terms very carefully to avoid problems, such as loss of precision.

140 CSE 8337 Spring 2011 140 Conclusion A thesaurus is an efficient method to expand queries The computation is expensive but it is executed only once Query expansion based on a similarity thesaurus may use high term frequency to expand the query Query expansion based on a statistical thesaurus needs well-defined parameters

141 CSE 8337 Spring 2011 141 Query assist Would you expect such a feature to increase the query volume at a search engine?

142 CSE 8337 Spring 2011 142 Query assist Generally done by query log mining Recommend frequent recent queries that contain the partial string typed by the user A ranking problem! View each prior query as a doc – rank-order those matching the partial string …

