CSCI 5417 Information Retrieval Systems
Jim Martin
Lecture 7, 9/13/2011

Today
- Review
- Efficient scoring schemes
- Approximate scoring
- Evaluating IR systems

Normal Cosine Scoring
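The algorithm figure for this slide did not survive the transcript. As a stand-in, here is a minimal Python sketch of the standard term-at-a-time cosine scoring loop the slide refers to; the in-memory postings layout (lists of (doc_id, weight) pairs) and the `doc_lengths` normalization table are illustrative assumptions, not the lecture's exact code.

```python
import heapq
from collections import defaultdict

def cosine_score(query_weights, postings, doc_lengths, k=10):
    """Term-at-a-time cosine scoring: accumulate wf_t,q * wf_t,d for each
    doc, normalize by document length, and return the top-k (doc, score)."""
    scores = defaultdict(float)
    for term, w_q in query_weights.items():            # query term weights
        for doc_id, w_d in postings.get(term, []):     # (doc_id, weight) pairs
            scores[doc_id] += w_q * w_d
    for doc_id in scores:
        scores[doc_id] /= doc_lengths[doc_id]           # length normalization
    return heapq.nlargest(k, scores.items(), key=lambda item: item[1])
```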

Speedups...
- Compute the cosines faster
- Don't compute as many cosines

Generic Approach to Reducing Cosines
- Find a set A of contenders, with K < |A| << N
- A does not necessarily contain the top K, but has many docs from among the top K
- Return the top K docs in A
- Think of A as the result of pruning away likely non-contenders

Impact-Ordered Postings
- We really only want to compute scores for docs for which wf_t,d is high enough
- Low scores are unlikely to change the ordering or reach the top K
- So sort each postings list by wf_t,d
- How do we compute scores in order to pick off the top K? Two ideas follow.
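A concrete way to picture impact ordering, as a hedged sketch (it assumes the same in-memory postings layout as above, lists of (doc_id, wf) pairs):

```python
def impact_order(postings):
    """Re-sort each term's postings list by the within-document weight
    wf_t,d, highest first, so high-impact docs come before low-impact ones."""
    return {term: sorted(plist, key=lambda entry: entry[1], reverse=True)
            for term, plist in postings.items()}
```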

Early Termination
- When traversing t's postings, stop early after either:
  - a fixed number of docs, or
  - wf_t,d drops below some threshold
- Take the union of the resulting sets of docs from the postings of each query term
- Compute scores only for docs in this union
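A sketch of this first idea; the cutoff parameters (`max_docs_per_term`, `wf_threshold`) are chosen arbitrarily for illustration.

```python
def early_termination_candidates(query_terms, impact_postings,
                                 max_docs_per_term=1000, wf_threshold=0.1):
    """Build the candidate set A: walk each query term's impact-ordered
    postings, stopping after a fixed number of docs or once wf_t,d drops
    below a threshold, then take the union across terms."""
    candidates = set()
    for term in query_terms:
        for i, (doc_id, wf) in enumerate(impact_postings.get(term, [])):
            if i >= max_docs_per_term or wf < wf_threshold:
                break
            candidates.add(doc_id)
    return candidates  # cosine scores are then computed only for these docs
```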

IDF-Ordered Terms
- When considering the postings of query terms, look at them in order of decreasing IDF
  - High-IDF terms are likely to contribute most to the score
- As we update the score contribution from each query term, stop if the doc scores are relatively unchanged
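The slide leaves the stopping test informal. One possible realization is sketched below; the relative-contribution threshold is an assumption for illustration, not the rule used in the lecture.

```python
from collections import defaultdict

def idf_ordered_scoring(query_terms, postings, idf, change_threshold=0.01):
    """Term-at-a-time scoring with query terms processed in decreasing-IDF
    order; stop once a term's total contribution is a negligible fraction
    of the accumulated score mass."""
    scores = defaultdict(float)
    for term in sorted(query_terms, key=lambda t: idf.get(t, 0.0), reverse=True):
        contribution = 0.0
        for doc_id, wf in postings.get(term, []):
            delta = idf.get(term, 0.0) * wf
            scores[doc_id] += delta
            contribution += delta
        total = sum(scores.values())
        if total > 0 and contribution / total < change_threshold:
            break  # remaining low-IDF terms are unlikely to reorder the top K
    return scores
```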

Evaluation

Evaluation Metrics for Search Engines
- How fast does it index?
  - Number of documents/hour
  - Realtime search
- How fast does it search?
  - Latency as a function of index size
- Expressiveness of query language
  - Ability to express complex information needs
  - Speed on complex queries

Evaluation Metrics for Search Engines
- All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise
- But the key really is user happiness
  - Speed of response and size of index are factors
  - But blindingly fast, useless answers won't make a user happy
- What makes people come back?
- We need a way of quantifying user happiness

Measuring User Happiness
- Issue: who is the user we are trying to make happy?
- Web engine: the user finds what they want and returns often to the engine
  - Can measure the rate of return users
- eCommerce site: the user finds what they want and makes a purchase
  - Measure time to purchase, or the fraction of searchers who become buyers?

Measuring User Happiness
- Enterprise (company/govt/academic): care about "user productivity"
  - How much time do my users save when looking for information?
- Many other criteria having to do with breadth of access, secure access, etc.

Happiness: Difficult to Measure
- The most common proxy for user happiness is relevance of search results
- But how do you measure relevance?
- We will detail one methodology here, then examine its issues
- Relevance measurement requires 3 elements:
  1. A benchmark document collection
  2. A benchmark suite of queries
  3. A binary assessment of either Relevant or Not Relevant for each query-doc pair
- Some work uses more-than-binary judgments, but this is not typical

Evaluating an IR System
- The information need is translated into a query
- Relevance is assessed relative to the information need, not the query
- E.g., information need: I'm looking for information on whether drinking red wine is more effective at reducing your risk of heart attacks than white wine.
- Query: wine red white heart attack effective
- You evaluate whether the doc addresses the information need, not whether it has those words

Standard Relevance Benchmarks
- TREC: the National Institute of Standards and Technology (NIST) has run a large IR test bed for many years
- Reuters and other benchmark doc collections are also used
- "Retrieval tasks" are specified, sometimes as queries
- Human experts mark, for each query and for each doc, Relevant or Not Relevant
  - At least for a subset of the docs that some system returned for that query

Unranked Retrieval Evaluation
- As with any such classification task there are 4 possible system outcomes: a, b, c, and d
- a and d represent correct responses; b and c are mistakes
  - b is a false positive (Type I error), c is a false negative (Type II error)

                  Relevant    Not Relevant
  Retrieved          a             b
  Not Retrieved      c             d

Accuracy/Error Rate
- Given a query, an engine classifies each doc as "Relevant" or "Irrelevant"
- Accuracy of an engine: the fraction of these classifications that is correct: (a + d) / (a + b + c + d)
  - The number of correct judgments out of all the judgments made
- Why is accuracy useless for evaluating large search engines?

Unranked Retrieval Evaluation: Precision and Recall
- Precision: fraction of retrieved docs that are relevant = P(relevant | retrieved)
- Recall: fraction of relevant docs that are retrieved = P(retrieved | relevant)
- Precision P = a / (a + b)
- Recall R = a / (a + c)
  (a, b, c, d as in the contingency table above)
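As a small worked sketch (the counts in the usage comment are made up, purely to illustrate the previous slide's question about accuracy):

```python
def unranked_metrics(a, b, c, d):
    """Precision, recall, and accuracy from the contingency table:
    a = relevant & retrieved, b = not relevant & retrieved,
    c = relevant & not retrieved, d = not relevant & not retrieved."""
    precision = a / (a + b)
    recall = a / (a + c)
    accuracy = (a + d) / (a + b + c + d)
    return precision, recall, accuracy

# e.g. unranked_metrics(20, 40, 60, 1_000_000) gives precision 0.33 and
# recall 0.25, yet accuracy ~1.0, because almost every doc is (correctly)
# not retrieved -- which is why accuracy is useless for large collections.
```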

Precision/Recall
- You can get high recall (but low precision) by retrieving all docs for all queries!
- Recall is a non-decreasing function of the number of docs retrieved
  - That is, recall either stays the same or increases as you return more docs
- In most systems, precision decreases with the number of docs retrieved (or as recall increases)
  - A fact with strong empirical confirmation

Difficulties in Using Precision/Recall
- Should average over large corpus/query ensembles
- Need human relevance assessments
  - People aren't really reliable assessors
- Assessments have to be binary
- Heavily skewed by collection-specific facts
  - Systems tuned on one collection may not transfer from one domain to another

Evaluating Ranked Results
- Ranked results complicate things; we're not just making Boolean relevant/not-relevant judgments
- Evaluation of ranked results:
  - The system can return a varying number of results
  - All things being equal, we want relevant documents higher in the ranking than non-relevant docs

Recall/Precision
- Ranked results for one query (R = relevant, N = not relevant):
  1 R, 2 N, 3 N, 4 R, 5 R, 6 N, 7 R, 8 N, 9 N, 10 N
- Assume there are 10 relevant docs in the collection for this single query
- At rank 1: Recall = 10%, Precision = 100%
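A short Python sketch that reproduces the full recall/precision table for this example (the ranking string and the count of 10 relevant docs come from the slide):

```python
def recall_precision_at_ranks(ranking, total_relevant):
    """Recall and precision after each rank of a judged ranking
    ('R' = relevant, 'N' = not relevant)."""
    rows, hits = [], 0
    for rank, judgment in enumerate(ranking, start=1):
        if judgment == 'R':
            hits += 1
        rows.append((rank, hits / total_relevant, hits / rank))
    return rows

for rank, r, p in recall_precision_at_ranks("RNNRRNRNNN", 10):
    print(f"rank {rank:2d}: recall {r:4.0%}  precision {p:4.0%}")
# rank 1 gives recall 10% and precision 100%, as on the slide.
```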

A Precision-Recall Curve
- Why the sawtooth shape?

Averaging Over Queries
- A precision-recall graph for a single query isn't a very useful piece of information
- You need to average performance over a whole bunch of queries
- But there's a technical issue:
  - Precision-recall calculations fill only some points on the graph
  - How do you determine a value (interpolate) between the points?

Interpolated Precision
- Idea: if precision locally increases with increasing recall, then you should get to count that...
- So take the maximum of the precisions to the right of each recall value

Interpolated Values
- OK... now we can compute R/P pairs across queries, at standard points
- The usual thing to do is to measure precision at 11 fixed recall levels for each query
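A sketch of interpolated precision at the 11 standard recall levels, using the (recall, precision) pairs from the ranked example above; the interpolation rule is P_interp(r) = max of P(r') over all r' >= r.

```python
def eleven_point_interpolated(rp_pairs):
    """Interpolated precision at recall levels 0.0, 0.1, ..., 1.0:
    at each level, take the maximum precision at or beyond that recall."""
    result = []
    for level in [i / 10 for i in range(11)]:
        precisions = [p for r, p in rp_pairs if r >= level]
        result.append((level, max(precisions) if precisions else 0.0))
    return result

# (recall, precision) pairs from the earlier 10-result example:
pairs = [(0.1, 1.00), (0.1, 0.50), (0.1, 0.33), (0.2, 0.50), (0.3, 0.60),
         (0.3, 0.50), (0.4, 0.57), (0.4, 0.50), (0.4, 0.44), (0.4, 0.40)]
print(eleven_point_interpolated(pairs))
```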

An Interpolated Precision-Recall Curve

Typical (Good) 11-Point Precisions
- SabIR/Cornell 8A1 11-pt precision from TREC 8 (1999)

Break: trec_eval

Evaluation
- Graphs are good, but people like single summary measures!
- Precision at fixed retrieval level
  - Perhaps most appropriate for web search: all people want are good matches on the first one or two results pages
  - But it has an arbitrary parameter k
- 11-point interpolated average precision
  - The standard measure in the TREC competitions: take the precision at 11 recall levels varying from 0 to 1 in tenths, using interpolation (the value at recall 0 is always interpolated), and average them
  - Evaluates performance at all recall levels

Yet More Evaluation Measures...
- Mean average precision (MAP)
  - Average of the precision values obtained in the top k documents, each time a relevant doc is retrieved
  - Avoids interpolation and the use of fixed recall levels
  - MAP for a query collection is the arithmetic average over queries
    - Macro-averaging: each query counts equally
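A sketch of (mean) average precision under the usual TREC convention, where relevant docs that are never retrieved simply contribute zero; the example rankings in the usage comment are made up.

```python
def average_precision(ranking, total_relevant):
    """AP: sum of the precision values taken at each rank where a relevant
    doc appears, divided by the number of relevant docs in the collection."""
    hits, precision_sum = 0, 0.0
    for rank, judgment in enumerate(ranking, start=1):
        if judgment == 'R':
            hits += 1
            precision_sum += hits / rank
    return precision_sum / total_relevant

def mean_average_precision(per_query):
    """MAP: macro-average of AP over queries, so each query counts equally."""
    return sum(average_precision(r, n) for r, n in per_query) / len(per_query)

# e.g. mean_average_precision([("RNNRRNRNNN", 10), ("NRNNR", 2)])
```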

Variance
- For a test collection, it is usual that a system does poorly on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7)
- Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query
- That is, there are easy information needs and hard ones!

Finally
- All of these measures are used for distinct comparison purposes:
  - System A vs. System B
  - System A (1.1) vs. System A (1.2)
  - Approach A vs. Approach B (e.g., vector space vs. probabilistic approaches)
  - Systems on different collections? System A on med vs. TREC vs. web text?
- They don't represent absolute measures

From Corpora to Test Collections
- Still need:
  - Test queries
  - Relevance assessments
- Test queries
  - Must be germane to the docs available
  - Best designed by domain experts
  - Random query terms are generally not a good idea
- Relevance assessments
  - Human judges, time-consuming
  - Human panels are not perfect

Pooling
- With large datasets it's impossible to really assess recall; you would have to look at every document
- So TREC uses a technique called pooling:
  - Run a query on a representative set of state-of-the-art retrieval systems
  - Take the union of the top N results from these systems
  - Have the assessors judge the relevance of the docs in this set
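A minimal sketch of pool construction for one query; `runs` is assumed to be a list of ranked doc-id lists, one per participating system, and `depth` stands in for the top-N cutoff.

```python
def build_pool(runs, depth=100):
    """Union of the top-`depth` doc IDs from each system's ranked run for
    one query; only docs in this pool are judged by the assessors, and
    everything outside it is treated as non-relevant."""
    pool = set()
    for ranked_docs in runs:
        pool.update(ranked_docs[:depth])
    return pool
```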

TREC
- The TREC Ad Hoc task from the first 8 TRECs is the standard IR task
  - 50 detailed information needs a year
  - Human evaluation of pooled results returned
- More recently, other related tracks: Web track, HARD, Bio, Q/A
- A TREC query (TREC 5):
  Number: 225
  Description: What is the main function of the Federal Emergency Management Agency (FEMA) and the funding level provided to meet emergencies? Also, what resources are available to FEMA such as people, equipment, facilities?

Critique of Pure Relevance
- Relevance vs. marginal relevance
  - A document can be redundant even if it is highly relevant
    - Duplicates
    - The same information from different sources
  - Marginal relevance is a better measure of utility for the user
- Using facts/entities as evaluation units more directly measures true relevance
  - But it is harder to create the evaluation set

Search Engines...
- How does any of this apply to the big search engines?

Evaluation at Large Search Engines
- Recall is difficult to measure on the web
- Search engines often use precision at top k, e.g., k = 10
- Or measures that reward you more for getting rank 1 right than for getting rank 10 right
  - NDCG (Normalized Discounted Cumulative Gain)
- Search engines also use non-relevance-based measures
  - Clickthrough on first result
    - Not very reliable if you look at a single clickthrough... but pretty reliable in the aggregate
  - Studies of user behavior in the lab
  - A/B testing
  - Focus groups
  - Diary studies
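One common formulation of DCG/NDCG, sketched for graded relevance judgments; the gain values in the usage line are illustrative, and some variants use 2^rel - 1 gains instead of the raw grades.

```python
import math

def dcg(gains):
    """Discounted cumulative gain: each gain is discounted by log2(rank + 1)."""
    return sum(g / math.log2(rank + 1) for rank, g in enumerate(gains, start=1))

def ndcg(gains, k=10):
    """NDCG@k: DCG of the ranking divided by the DCG of the ideal ordering
    (gains sorted in decreasing order), so a perfect ranking scores 1.0."""
    ideal = dcg(sorted(gains, reverse=True)[:k])
    return dcg(gains[:k]) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1], k=5))  # graded judgments for the top 5 results
```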

A/B Testing
- Purpose: test a single innovation
- Prerequisite: you have a system up and running
- Have most users use the old system
- Divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation
- Evaluate with an "automatic" measure like clickthrough on the first result
- Now we can directly see if the innovation does improve user happiness
- Probably the evaluation methodology that large search engines trust most

Next Time
- Relevance feedback
- You should have read up through Chapter 9