1 Evaluation

2 © Allan, Ballesteros, Croft, and/or Turtle
Types of Evaluation
Might evaluate several aspects:
–Assistance in formulating queries
–Speed of retrieval
–Resources required
–Presentation of documents
–Ability to find relevant documents
Evaluation generally comparative:
–System A vs. B
–System A vs. A′
Most common evaluation: retrieval effectiveness

3 © Allan, Ballesteros, Croft, and/or Turtle
The Concept of Relevance
Relevance of a document D to a query Q is subjective:
–Different users will have different judgments
–The same user may judge differently at different times
–The degree of relevance of different documents may vary

4 © Allan, Ballesteros, Croft, and/or Turtle
The Concept of Relevance
In evaluating IR systems it is assumed that:
–A subset of the documents in the database (DB) is relevant
–A document is either relevant or not

5 © Allan, Ballesteros, Croft, and/or Turtle
Relevance
In a small collection, the relevance of each document can be checked
With real collections, the full set of relevant documents is never known
Any retrieval model includes an implicit definition of relevance:
–Satisfiability of a FOL expression
–Distance
–P(Relevance | query, document)
–P(query | document)

6 © Allan, Ballesteros, Croft, and/or Turtle
Evaluation
Set of queries
Collection of documents (corpus)
Relevance judgments: which documents are correct and incorrect for each query
Example: the query "Potato farming and nutritional value of potatoes." is judged against documents such as "Mr. Potato Head …", "nutritional info for spuds", "potato blight …", and "growing potatoes …", each marked relevant or not relevant
If the collection is small, all documents can be reviewed
Not practical for large collections
Any ideas about how we might approach collecting relevance judgments for very large collections?

7 © Allan, Ballesteros, Croft, and/or Turtle
Finding Relevant Documents
Pooling:
–Retrieve documents using several automatic techniques
–Judge top n documents for each technique
–Relevant set is the union
–A subset of the true relevant set
Possible to estimate the size of the relevant set by sampling
When testing:
–How should un-judged documents be treated?
–How might this affect results?
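A minimal Python sketch of how a judgment pool might be assembled (the runs, document IDs, and build_pool helper below are hypothetical illustrations, not part of the original slides):

```python
# Pooling sketch: merge the top-n documents from several runs into one pool.
# Each "run" is a ranked list of document IDs produced by one retrieval technique.

def build_pool(runs, n=100):
    """Union of the top-n documents from each run; assessors judge this pool."""
    pool = set()
    for ranking in runs:
        pool.update(ranking[:n])
    return pool

runs = [
    ["d12", "d7", "d3", "d99"],   # hypothetical system A
    ["d7", "d45", "d12", "d8"],   # hypothetical system B
]
print(sorted(build_pool(runs, n=3)))
# Documents outside the pool are often treated as non-relevant when scoring,
# which is one way the treatment of un-judged documents can affect results.
```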

8 © Allan, Ballesteros, Croft, and/or Turtle
Test Collections
To compare the performance of two techniques:
–each technique is used to evaluate the same queries
–results (set or ranked list) are compared using a metric
–most common measures: precision and recall
Usually use multiple measures to get different views of performance
Usually test with multiple collections, since performance is collection dependent

9 © Allan, Ballesteros, Croft, and/or Turtle

10 Evaluation
(figure: Venn diagram of retrieved documents, relevant documents, and the relevant & retrieved overlap)
Recall: ability to return ALL relevant items
Precision: ability to return ONLY relevant items
Let retrieved = 100, relevant = 25, rel & ret = 10
Recall = 10/25 = .40
Precision = 10/100 = .10
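The same arithmetic as a short Python sketch, using the counts from the slide:

```python
# Set-based precision and recall with the numbers above.
retrieved = 100      # documents returned by the system
relevant = 25        # relevant documents in the collection for this query
rel_and_ret = 10     # relevant documents that were actually returned

recall = rel_and_ret / relevant       # 10 / 25 = 0.40
precision = rel_and_ret / retrieved   # 10 / 100 = 0.10
print(recall, precision)
```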

11 © Allan, Ballesteros, Croft, and/or Turtle
Precision and Recall
Precision and recall well-defined for sets
For ranked retrieval:
–Compute value at fixed recall points (e.g. precision at 20% recall)
–Compute a P/R point for each relevant document, interpolate
–Compute value at fixed rank cutoffs (e.g. precision at rank 20)

12 © Allan, Ballesteros, Croft, and/or Turtle
Average Precision for a Query
Often want a single-number effectiveness measure
Average precision is widely used in IR
Calculate by averaging the precision values obtained each time recall increases (i.e., each time another relevant document is retrieved)
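A minimal sketch of that computation; the ranked list and relevance judgments below are made up for illustration:

```python
# Average precision for one query: average the precision observed at the rank
# of each relevant document (i.e., each time recall increases), divided by the
# total number of relevant documents so unretrieved ones contribute zero.

def average_precision(ranking, relevant):
    hits, precisions = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

ranking = ["d4", "d1", "d9", "d2", "d7"]
relevant = {"d1", "d7", "d3"}
print(average_precision(ranking, relevant))  # (1/2 + 2/5) / 3 = 0.30
```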

13 © Allan, Ballesteros, Croft, and/or Turtle

14

15

16 Averaging Across Queries
Hard to compare P/R graphs or tables for individual queries (too much data)
–Need to average over many queries
Two main types of averaging:
–Micro-average: each relevant document is a point in the average (most common)
–Macro-average: each query is a point in the average
Also done with the average precision value:
–Average of many queries' average precision values
–Called mean average precision (MAP)
–("Average average precision" sounds weird)
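A sketch of MAP as a macro-average of per-query average precision values; the queries, rankings, and judgments are invented for illustration:

```python
# Mean average precision (MAP): each query contributes its average precision,
# and MAP is the mean of those values over all queries.

def average_precision(ranking, relevant):
    hits, precisions = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(results, judgments):
    """results: {query_id: ranked doc list}; judgments: {query_id: set of relevant docs}."""
    ap_values = [average_precision(results[q], judgments[q]) for q in results]
    return sum(ap_values) / len(ap_values)

results = {"q1": ["d1", "d2", "d3"], "q2": ["d9", "d4", "d5"]}
judgments = {"q1": {"d1", "d3"}, "q2": {"d5"}}
print(mean_average_precision(results, judgments))
# q1: (1/1 + 2/3)/2 = 0.833..., q2: (1/3)/1 = 0.333..., MAP ≈ 0.583
```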

17 © Allan, Ballesteros, Croft, and/or Turtle

18

19 Averaging and Interpolation
Interpolation:
–actual recall levels of individual queries are seldom equal to standard levels
–interpolation estimates the best possible performance value between two known values
e.g.) assume 3 relevant docs retrieved at ranks 4, 9, 20
–their precision at those actual recall points is .25, .22, and .15
On average, as recall increases, precision decreases

20 © Allan, Ballesteros, Croft, and/or Turtle
Averaging and Interpolation
Actual recall levels of individual queries are seldom equal to standard levels
Interpolated precision at the ith recall level, Ri, is the maximum precision at all points p such that Ri ≤ p ≤ Ri+1
–assume only 3 relevant docs retrieved, at ranks 4, 9, 20
–their actual recall points are: .33, .67, and 1.0
–their precision is .25, .22, and .15
–what is the interpolated precision at the standard recall points?

Recall level          Interpolated precision
0.0, 0.1, 0.2, 0.3    0.25
0.4, 0.5, 0.6         0.22
0.7, 0.8, 0.9, 1.0    0.15
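A small sketch that reproduces the table above; it uses the common "ceiling" interpolation rule (at each standard recall level, take the maximum precision at any observed recall at or above that level), which gives the same values as the definition on the slide for this example:

```python
# Observed (recall, precision) points for relevant docs at ranks 4, 9, 20.
observed = [(1/3, 0.25), (2/3, 0.22), (1.0, 0.15)]
standard_levels = [i / 10 for i in range(11)]   # 0.0, 0.1, ..., 1.0

for level in standard_levels:
    # Maximum precision at any observed recall at or above this standard level.
    candidates = [p for r, p in observed if r >= level]
    interpolated = max(candidates) if candidates else 0.0
    print(f"recall {level:.1f}: interpolated precision {interpolated:.2f}")
# 0.0-0.3 -> 0.25, 0.4-0.6 -> 0.22, 0.7-1.0 -> 0.15, matching the table above.
```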

21 © Allan, Ballesteros, Croft, and/or Turtle
Interpolated Average Precision
Average precision at standard recall points
For a given query, compute a P/R point for every relevant doc
Interpolate precision at standard recall levels:
–11-pt is usually 100%, 90, 80, …, 10, 0% (yes, 0% recall)
–3-pt is usually 75%, 50%, 25%
Average over all queries to get average precision at each recall level
Average the interpolated recall levels to get a single result
–Called "interpolated average precision"
Not used much anymore; "mean average precision" is more common
Values at specific interpolated points are still commonly used

22 © Allan, Ballesteros, Croft, and/or Turtle
Micro-averaging: 1 query
Find precision given the total number of docs retrieved at a given recall value.
Let Rq = {d3, d5, d9, d25, d39, d44, d56, d71, d89, d123}
|Rq| = 10, the number of relevant docs for q
Ranking of retrieved docs in the answer set of q (* = relevant):
 1. d123 (*)    6. d9 (*)     11. d38
 2. d84         7. d511       12. d48
 3. d56 (*)     8. d129       13. d250
 4. d6          9. d187       14. d113
 5. d8         10. d25 (*)    15. d3 (*)
10% recall => .1 * 10 rel docs = 1 rel doc retrieved
One doc retrieved to get 1 rel doc: precision = 1/1 = 100%

23 © Allan, Ballesteros, Croft, and/or Turtle
Micro-averaging: 1 query (continued; same Rq and ranking as slide 22)
10% recall => .1 * 10 rel docs = 1 rel doc retrieved
One doc retrieved to get 1 rel doc: precision = 1/1 = 100%
20% recall => .2 * 10 rel docs = 2 rel docs retrieved
3 docs retrieved to get 2 rel docs: precision = 2/3 = 0.667

24 © Allan, Ballesteros, Croft, and/or Turtle
Micro-averaging: 1 query (continued; same Rq and ranking as slide 22)
30% recall => .3 * 10 rel docs = 3 rel docs retrieved
6 docs retrieved to get 3 rel docs: precision = 3/6 = 0.5
What is precision at recall values from 40-100%?

25 © Allan, Ballesteros, Croft, and/or Turtle
Recall/Precision Curve
|Rq| = 10, the number of relevant docs for q
Ranking of retrieved docs in the answer set of q: as in slide 22
(figure: recall/precision curve plotting the table below)

Recall      Precision
0.1         1/1 = 1.00
0.2         2/3 = 0.67
0.3         3/6 = 0.50
0.4         4/10 = 0.40
0.5         5/15 = 0.33
0.6 … 1.0   0 (no further relevant docs retrieved)
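A short sketch that recomputes this table from the relevance set and ranking given on the preceding slides:

```python
# Relevant set R_q and ranked answer set, as given on slides 22-25.
relevant = {"d3", "d5", "d9", "d25", "d39", "d44", "d56", "d71", "d89", "d123"}
ranking = ["d123", "d84", "d56", "d6", "d8", "d9", "d511", "d129",
           "d187", "d25", "d38", "d48", "d250", "d113", "d3"]

hits = 0
for rank, doc in enumerate(ranking, start=1):
    if doc in relevant:
        hits += 1
        recall = hits / len(relevant)     # each hit raises recall by 0.1 here
        precision = hits / rank
        print(f"recall {recall:.1f}  precision {precision:.2f}")
# 0.1 -> 1.00, 0.2 -> 0.67, 0.3 -> 0.50, 0.4 -> 0.40, 0.5 -> 0.33;
# recall levels 0.6-1.0 are never reached, so precision there is 0.
```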

26 © Allan, Ballesteros, Croft, and/or Turtle
Averaging and Interpolation
Macro-average: each query is a point in the average
–can be independent of any parameter
–average of precision values across several queries at standard recall levels
e.g.) assume 3 relevant docs retrieved at ranks 4, 9, 20
–their actual recall points are: .33, .67, and 1.0 (why?)
–their precision is .25, .22, and .15 (why?)
Average over all relevant docs:
–rewards systems that retrieve relevant docs at the top
–(.25 + .22 + .15)/3 = 0.21

27 © Allan, Ballesteros, Croft, and/or Turtle Recall-Precision Tables & Graphs

28 © Allan, Ballesteros, Croft, and/or Turtle
Document Level Averages
Precision after a given number of docs retrieved
–e.g.) 5, 10, 15, 20, 30, 100, 200, 500, & 1000 documents
Reflects the actual system performance as a user might see it
Each precision average is computed by summing the precisions at the specified doc cut-off and dividing by the number of queries
–e.g. average precision for all queries at the point where n docs have been retrieved
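A minimal sketch of document-level averaging (precision at a fixed cutoff k, averaged over queries); the runs and judgments are invented for illustration:

```python
# Precision@k for one query, then averaged across queries.

def precision_at_k(ranking, relevant, k):
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

def average_precision_at_k(results, judgments, k):
    values = [precision_at_k(results[q], judgments[q], k) for q in results]
    return sum(values) / len(values)

results = {"q1": ["d1", "d2", "d3", "d4", "d5"], "q2": ["d8", "d9", "d1", "d2", "d3"]}
judgments = {"q1": {"d1", "d4"}, "q2": {"d9"}}
print(average_precision_at_k(results, judgments, k=5))  # (2/5 + 1/5) / 2 = 0.3
```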

29 © Allan, Ballesteros, Croft, and/or Turtle
R-Precision
Precision after R documents are retrieved
–R = number of relevant docs for the query
Average R-Precision:
–mean of the R-Precisions across all queries
e.g.) Assume 2 queries having 50 & 10 relevant docs; the system retrieves 17 and 7 relevant docs in the top 50 and 10 documents retrieved, respectively
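Carrying the slide's example through as a sketch (the arithmetic follows directly from the numbers given):

```python
# Query 1: R = 50 relevant docs, 17 relevant found in its top 50  -> 17/50 = 0.34
# Query 2: R = 10 relevant docs,  7 relevant found in its top 10  ->  7/10 = 0.70
r_precisions = [17 / 50, 7 / 10]
average_r_precision = sum(r_precisions) / len(r_precisions)
print(average_r_precision)   # (0.34 + 0.70) / 2 = 0.52
```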

30 © Allan, Ballesteros, Croft, and/or Turtle
Evaluation
Recall-precision value pairs may co-vary in ways that are hard to understand
Would like to find composite measures:
–a single-number measure of effectiveness
–primarily ad hoc and not theoretically justifiable
Some attempts invent measures that combine parts of the contingency table into a single-number measure

31 © Allan, Ballesteros, Croft, and/or Turtle
Contingency Table
Miss = C/(A+C)
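The table itself is not reproduced in the transcript. The sketch below assumes the usual labelling, consistent with the Miss formula on the slide: A = relevant and retrieved, B = non-relevant and retrieved, C = relevant and not retrieved, D = non-relevant and not retrieved; the counts are made up for illustration:

```python
# Assumed contingency-table cells (not from the slide): A, B, C, D as above.
A, B, C, D = 10, 90, 15, 885     # illustrative counts

recall = A / (A + C)             # relevant docs found / all relevant docs
precision = A / (A + B)          # relevant docs found / all retrieved docs
miss = C / (A + C)               # the formula on the slide: relevant docs missed
fallout = B / (B + D)            # non-relevant docs retrieved / all non-relevant docs
print(recall, precision, miss, fallout)
```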

32 © Allan, Ballesteros, Croft, and/or Turtle
Symmetric Difference
A is the retrieved set of documents
B is the relevant set of documents
A Δ B (the symmetric difference) is the shaded area

33 © Allan, Ballesteros, Croft, and/or Turtle
E measure (van Rijsbergen)
Used to emphasize precision or recall
–like a weighted average of precision and recall
–large α increases the importance of precision
–can transform by α = 1/(β² + 1), β = P/R
–when α = 1/2, β = 1; precision and recall are equally important
E = normalized symmetric difference of the retrieved and relevant sets
E(β=1) = |A Δ B| / (|A| + |B|)
F = 1 - E is typical (good results mean larger values of F)
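A sketch of E and F (β = 1) computed from a retrieved set A and relevant set B, checking the symmetric-difference form on the slide against the equivalent harmonic-mean form 2PR/(P+R); the document sets are illustrative:

```python
# E and F at beta = 1 from retrieved set A and relevant set B.
retrieved = {"d1", "d2", "d3", "d4"}   # A (illustrative)
relevant = {"d2", "d4", "d7"}          # B (illustrative)

sym_diff = retrieved ^ relevant        # A Δ B, the symmetric difference
e_measure = len(sym_diff) / (len(retrieved) + len(relevant))
f_measure = 1 - e_measure

precision = len(retrieved & relevant) / len(retrieved)
recall = len(retrieved & relevant) / len(relevant)
f_harmonic = 2 * precision * recall / (precision + recall)

print(e_measure, f_measure, f_harmonic)   # F from both forms: 4/7 ≈ 0.571
```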

