SLIDE 1 IS 240 – Spring 2010 Prof. Ray Larson University of California, Berkeley School of Information Principles of Information Retrieval Lecture 11: Evaluation Intro

SLIDE 2 IS 240 – Spring 2010 Mini-TREC Proposed Schedule –February 10 – Database and previous Queries –March 1 – Report on system acquisition and setup –March 9 – New Queries for testing… –April 19 – Results due –April 21 – Results and system rankings –April 28 (May 10?) – Group reports and discussion

SLIDE 3 IS 240 – Spring 2010 Today Announcement Evaluation of IR Systems –Precision vs. Recall –Cutoff Points –Test Collections/TREC –Blair & Maron Study

SLIDE 4 IS 240 – Spring 2010 Be an IR Evaluator! I am one of the organizers for the NTCIR-8/GeoTime evaluation looking at searching time and place questions. We would like to get volunteers to help with evaluating topics. This involves looking at the questions and then deciding relevance for various documents returned by different systems. Want to help?

SLIDE 5 IS 240 – Spring 2010 Today Evaluation of IR Systems –Precision vs. Recall –Cutoff Points –Test Collections/TREC –Blair & Maron Study

SLIDE 6 IS 240 – Spring 2010 Evaluation Why Evaluate? What to Evaluate? How to Evaluate?

SLIDE 7 IS 240 – Spring 2010 Why Evaluate? Determine if the system is desirable Make comparative assessments Test and improve IR algorithms

SLIDE 8 IS 240 – Spring 2010 What to Evaluate? How much of the information need is satisfied. How much was learned about a topic. Incidental learning: –How much was learned about the collection. –How much was learned about other topics. How inviting the system is.

SLIDE 9 IS 240 – Spring 2010 Relevance In what ways can a document be relevant to a query? –Answer precise question precisely. –Partially answer question. –Suggest a source for more information. –Give background information. –Remind the user of other knowledge. –Others...

SLIDE 10 IS 240 – Spring 2010 Relevance How relevant is the document –for this user, for this information need? Subjective, but measurable to some extent –How often do people agree a document is relevant to a query? How well does it answer the question? –Complete answer? Partial? –Background information? –Hints for further exploration?

SLIDE 11 IS 240 – Spring 2010 What to Evaluate? What can be measured that reflects users’ ability to use the system? (Cleverdon 66) –Coverage of Information –Form of Presentation –Effort required/Ease of Use –Time and Space Efficiency –Recall: proportion of relevant material actually retrieved –Precision: proportion of retrieved material actually relevant (Recall and Precision together measure effectiveness)
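
Since recall and precision recur throughout the following slides, here is a minimal sketch (in Python, with hypothetical document ids) of how both are computed for a single query from the set of retrieved documents and the set of judged-relevant documents:

# Precision and recall for one query, given retrieved and judged-relevant doc sets.
retrieved = {"d1", "d2", "d3", "d4", "d5"}     # what the system returned
relevant  = {"d1", "d3", "d7", "d9"}           # what the judges marked relevant

hits = retrieved & relevant                    # relevant documents actually retrieved
precision = len(hits) / len(retrieved)         # 2 / 5 = 0.4
recall    = len(hits) / len(relevant)          # 2 / 4 = 0.5
print(precision, recall)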

SLIDE 12 IS 240 – Spring 2010 Relevant vs. Retrieved [Venn diagram: the set of Relevant documents and the set of Retrieved documents within the set of All docs]

SLIDE 13 IS 240 – Spring 2010 Precision vs. Recall [Venn diagram: the overlap of the Relevant and Retrieved sets within All docs illustrates precision and recall]

SLIDE 14 IS 240 – Spring 2010 Why Precision and Recall? Get as much of the good stuff as possible while at the same time getting as little junk as possible.

SLIDE 15 IS 240 – Spring 2010 Retrieved vs. Relevant Documents [Venn diagram] Very high precision, very low recall

SLIDE 16 IS 240 – Spring 2010 Retrieved vs. Relevant Documents [Venn diagram] Very low precision, very low recall (0 in fact)

SLIDE 17 IS 240 – Spring 2010 Retrieved vs. Relevant Documents [Venn diagram] High recall, but low precision

SLIDE 18 IS 240 – Spring 2010 Retrieved vs. Relevant Documents [Venn diagram] High precision, high recall (at last!)

SLIDE 19 IS 240 – Spring 2010 Precision/Recall Curves There is a tradeoff between Precision and Recall So measure Precision at different levels of Recall Note: this is an AVERAGE over MANY queries [plot of precision vs. recall]
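
To make the averaging concrete, the sketch below (hypothetical ids again) walks one ranked list, records a (recall, precision) point at each relevant document, and interpolates precision at the 11 standard recall levels; averaging these per-query values over many queries gives the points plotted on a precision/recall curve. This is the usual TREC-style recipe, sketched here rather than taken from the slide itself.

# Interpolated precision at the 11 standard recall levels (0.0, 0.1, ..., 1.0)
# for one ranked result list.
def interpolated_precision(ranking, relevant, levels=tuple(i / 10 for i in range(11))):
    points, hits = [], 0                        # (recall, precision) after each relevant hit
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / rank))
    # interpolated precision at level r = best precision achieved at any recall >= r
    return [max((p for r, p in points if r >= level), default=0.0) for level in levels]

print(interpolated_precision(["d3", "d8", "d1", "d5", "d7"], {"d1", "d3", "d7"}))
# eleven values, here starting at 1.0 (low recall) and falling to 0.6 (recall 1.0)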

SLIDE 20 IS 240 – Spring 2010 Precision/Recall Curves Difficult to determine which of these two hypothetical results is better: [plot of two precision/recall curves]

SLIDE 21 IS 240 – Spring 2010 Precision/Recall Curves [figure-only slide; image not included in the transcript]

SLIDE 22 IS 240 – Spring 2010 Document Cutoff Levels Another way to evaluate: –Fix the number of documents retrieved at several levels: top 5, top 10, top 20, top 50, top 100, top 500 –Measure precision at each of these levels –Take (weighted) average over results (this is sometimes done with just the number of docs) This is a way to focus on how well the system ranks the first k documents.
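
A minimal sketch of the cutoff measure itself (hypothetical ids): precision at cutoff k counts how many of the first k ranked documents are relevant, treating anything beyond the end of a short result list as non-relevant.

# Precision at fixed document cutoffs for one ranked result list.
def precision_at(k, ranking, relevant):
    return sum(doc in relevant for doc in ranking[:k]) / k

ranking  = ["d4", "d1", "d9", "d2", "d7", "d6"]
relevant = {"d1", "d2", "d6"}
for k in (5, 10, 20):
    print(f"P@{k} = {precision_at(k, ranking, relevant):.2f}")   # 0.40, 0.30, 0.15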

SLIDE 23 IS 240 – Spring 2010 Problems with Precision/Recall Can’t know true recall value –except in small collections Precision/Recall are related –A combined measure sometimes more appropriate Assumes batch mode –Interactive IR is important and has different criteria for successful searches –We will touch on this in the UI section Assumes a strict rank ordering matters.

SLIDE 24 IS 240 – Spring 2010 Relation to Contingency Table Accuracy: (a+d) / (a+b+c+d) Precision: a/(a+b) Recall: ? Why don’t we use Accuracy for IR? –(Assuming a large collection) –Most docs aren’t relevant –Most docs aren’t retrieved –Inflates the accuracy value

                          Doc is relevant    Doc is NOT relevant
    Doc is retrieved             a                    b
    Doc is NOT retrieved         c                    d
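
A quick worked example with assumed counts shows why: in a large collection the d cell swamps everything else, so accuracy looks excellent even when most relevant documents are missed.

a, b, c, d = 10, 40, 90, 999_860        # assumed contingency counts for a 1,000,000-doc collection
accuracy  = (a + d) / (a + b + c + d)   # 0.99987 - looks almost perfect
precision = a / (a + b)                 # 0.2
recall    = a / (a + c)                 # 0.1  - 90% of the relevant documents were missed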

SLIDE 25 IS 240 – Spring 2010 The E-Measure Combine Precision and Recall into one number (van Rijsbergen 79) P = precision, R = recall, b = measure of relative importance of P or R. For example, b = 0.5 means user is twice as interested in precision as recall
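
The formula itself did not survive in the transcript; the standard van Rijsbergen form of the E-measure (with F = 1 - E being the familiar F-measure) is:

E = 1 - \frac{(1 + b^2)\,P\,R}{b^2\,P + R}

With b = 1 this is one minus the harmonic mean of precision and recall; values of b below 1 weight precision more heavily, which matches the b = 0.5 example above.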

SLIDE 26 IS 240 – Spring 2010 Old Test Collections Used 5 test collections (sizes in documents) –CACM (3204) –CISI (1460) –CRAN (1397) –INSPEC (12684) –MED (1033)

SLIDE 27 IS 240 – Spring 2010 TREC Text REtrieval Conference/Competition –Run by NIST (National Institute of Standards & Technology) –2001 was the 10th year – 11th TREC in November Collection: 5 Gigabytes (5 CD-ROMs), >1.5 Million Docs –Newswire & full-text news (AP, WSJ, Ziff, FT, San Jose Mercury, LA Times) –Government documents (Federal Register, Congressional Record) –FBIS (Foreign Broadcast Information Service) –US Patents

SLIDE 28 IS 240 – Spring 2010 TREC (cont.) Queries + Relevance Judgments –Queries devised and judged by “Information Specialists” –Relevance judgments done only for those documents retrieved -- not the entire collection! Competition –Various research and commercial groups compete (TREC 6 had 51, TREC 7 had 56, TREC 8 had 66) –Results judged on precision and recall, going up to a recall level of 1000 documents Following slides from TREC overviews by Ellen Voorhees of NIST.

SLIDES 29–34 IS 240 – Spring 2010 [figure-only slides from Ellen Voorhees’ TREC overviews; images not included in the transcript]

SLIDE 35 IS 240 – Spring 2010 Sample TREC queries (topics) Number: 168 Topic: Financing AMTRAK Description: A document will address the role of the Federal Government in financing the operation of the National Railroad Transportation Corporation (AMTRAK) Narrative: A relevant document must provide information on the government’s responsibility to make AMTRAK an economically viable entity. It could also discuss the privatization of AMTRAK as an alternative to continuing government subsidies. Documents comparing government subsidies given to air and bus transportation with those provided to AMTRAK would also be relevant.

SLIDES 36–46 IS 240 – Spring 2010 [figure-only slides from Ellen Voorhees’ TREC overviews; images not included in the transcript]

SLIDE 47 IS 240 – Spring 2010 TREC Benefits: –made research systems scale to large collections (pre-WWW) –allows for somewhat controlled comparisons Drawbacks: –emphasis on high recall, which may be unrealistic for what most users want –very long queries, also unrealistic –comparisons still difficult to make, because systems are quite different on many dimensions –focus on batch ranking rather than interaction (there is an interactive track)

SLIDE 48 IS 240 – Spring 2010 TREC has changed Ad hoc track suspended in TREC 9 Emphasis now on specialized “tracks” –Interactive track –Natural Language Processing (NLP) track –Multilingual tracks (Chinese, Spanish) –Legal Discovery Searching –Patent Searching –High-Precision –High-Performance

SLIDE 49 IS 240 – Spring 2010 TREC Results Differ each year For the main ad hoc track: –Best systems not statistically significantly different –Small differences sometimes have big effects: how good was the hyphenation model, how was document length taken into account –Systems were optimized for longer queries and all performed worse for shorter, more realistic queries

SLIDE 50 IS 240 – Spring 2010 The TREC_EVAL Program Takes a “qrels” file in the form… –qid iter docno rel Takes a “top-ranked” file in the form… –qid iter docno rank sim run_id –030 Q0 ZF prise1 Produces a large number of evaluation measures. For the basic ones in a readable format use “-o” Demo…
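
As a rough illustration of those two file formats (hypothetical file names and ids; trec_eval itself computes many more measures), the sketch below reads a qrels file and a run file and reports precision at 10 for each query:

from collections import defaultdict

def load_qrels(path):                           # lines: qid iter docno rel
    relevant = defaultdict(set)
    with open(path) as f:
        for line in f:
            qid, _iter, docno, judgment = line.split()
            if int(judgment) > 0:
                relevant[qid].add(docno)
    return relevant

def load_run(path):                             # lines: qid iter docno rank sim run_id
    run = defaultdict(list)
    with open(path) as f:
        for line in f:
            qid, _iter, docno, _rank, sim, _run_id = line.split()
            run[qid].append((float(sim), docno))
    for ranked in run.values():                 # order by similarity score, best first
        ranked.sort(key=lambda pair: pair[0], reverse=True)
    return run

relevant = load_qrels("mini_trec.qrels")        # hypothetical file names
run = load_run("mini_trec.results")
for qid, ranked in sorted(run.items()):
    top10 = [docno for _sim, docno in ranked[:10]]
    print(qid, "P@10 =", sum(doc in relevant[qid] for doc in top10) / 10)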

SLIDE 51 IS 240 – Spring 2010 Blair and Maron 1985 A classic study of retrieval effectiveness –earlier studies were on unrealistically small collections Studied an archive of documents for a legal suit –~350,000 pages of text –40 queries –focus on high recall –Used IBM’s STAIRS full-text system Main Result: –The system retrieved less than 20% of the relevant documents for a particular information need; lawyers thought they had 75% –But many queries had very high precision

SLIDE 52 IS 240 – Spring 2010 Blair and Maron, cont. How they estimated recall –generated partially random samples of unseen documents –had users (unaware these were random) judge them for relevance Other results: –two lawyers’ searches had similar performance –lawyers’ recall was not much different from paralegals’
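
One way to make that sampling estimate concrete (an illustrative formalization, not the paper's own notation): if the searches retrieved r relevant documents, and a random sample of the N unretrieved documents is judged to contain a relevant fraction \hat{p}, then

\widehat{\mathrm{recall}} \approx \frac{r}{r + \hat{p}\,N}

so even a small sampled relevance rate implies many missed relevant documents when N is large.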

SLIDE 53 IS 240 – Spring 2010 Blair and Maron, cont. Why recall was low –users can’t foresee the exact words and phrases that will indicate relevant documents: “accident” referred to by those responsible as “event,” “incident,” “situation,” “problem,” …; differing technical terminology; slang, misspellings –Perhaps the value of higher recall decreases as the number of relevant documents grows, so more detailed queries were not attempted once the users were satisfied

SLIDE 54 IS 240 – Spring 2010 What to Evaluate? Effectiveness –Difficult to measure –Recall and Precision are one way –What might be others?

SLIDE 55 IS 240 – Spring 2010 Next Time –Calculating standard IR measures and more on trec_eval –Theoretical limits of Precision and Recall –Intro to alternative evaluation metrics