Relevance Feedback Prof. Marti Hearst SIMS 202, Lecture 24.

Today
- Review Inverted Indexes
- Relevance Feedback
  - aka query modification
  - aka "more like this"
- Begin considering the role of the user

[Diagram: the retrieval pipeline (query text input, information need, parse, pre-process, collections, index, rank). Slide question: How is the index constructed?]

Inverted Files
- Primary data structure for text indexes
- Invert documents into a big index
- Basic idea (see the sketch below):
  - list all the tokens in the collection
  - for each token, list all the docs it occurs in
  - do a few things to reduce redundancy in the data structure
- Read "Inverted Files" by Harman et al., Chapter 3, sections 3.1 through 3.3 (the rest of the chapter is optional).
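To make the basic idea concrete, here is a minimal Python sketch of the core token-to-documents mapping (an illustration only; it skips the redundancy-reducing tricks the Harman chapter discusses):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """docs: dict mapping doc_id -> text. Returns term -> sorted list of doc_ids."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():   # crude tokenization, for illustration only
            index[token].add(doc_id)
    return {term: sorted(doc_ids) for term, doc_ids in index.items()}
```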

Inverted Files
We have seen "vector files" conceptually. An inverted file is a vector file "inverted" so that rows become columns and columns become rows.
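As a tiny illustration of that inversion (the weights below are invented, not from the lecture), transposing a document-by-term matrix yields the term-by-document view that an inverted file stores:

```python
# Rows = documents d1, d2; columns = terms t1, t2, t3 (weights are made up).
doc_term = [
    [0.5, 0.0, 0.2],   # d1
    [0.0, 0.3, 0.1],   # d2
]
# "Inverting": rows become columns and columns become rows.
term_doc = [list(column) for column in zip(*doc_term)]
# term_doc[0] == [0.5, 0.0] -- the weights of t1 across all documents.
```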

How Are Inverted Files Created
- Documents are parsed to extract tokens. These are saved with the Document ID.
- Example documents:
  - Doc 1: "Now is the time for all good men to come to the aid of their country"
  - Doc 2: "It was a dark and stormy night in the country manor. The time was past midnight"

How Inverted Files are Created
- After all documents have been parsed, the inverted file is sorted

How Inverted Files are Created
- Multiple term entries for a single document are merged
- Within-document term frequency information is compiled

How Inverted Files are Created
- Then the file is split into a Dictionary and a Postings file (see the end-to-end sketch below)
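Putting these steps together, a rough end-to-end sketch in Python (using the two example documents above; a real implementation would also handle normalization, stopwords, and on-disk layout):

```python
from collections import Counter
from itertools import groupby

docs = {
    1: "now is the time for all good men to come to the aid of their country",
    2: "it was a dark and stormy night in the country manor the time was past midnight",
}

# 1. Parse: extract (term, doc_id) pairs.
pairs = [(token, doc_id) for doc_id, text in docs.items() for token in text.split()]

# 2. Sort the file by term, then by doc_id.
pairs.sort()

# 3. Merge duplicate entries, compiling within-document term frequencies.
merged = Counter(pairs)   # (term, doc_id) -> term frequency

# 4. Split into a Dictionary and a Postings file.
postings = []             # flat list of (doc_id, tf) entries
dictionary = {}           # term -> (document frequency, offset into postings)
for term, group in groupby(sorted(merged.items()), key=lambda kv: kv[0][0]):
    entries = [(doc_id, tf) for (_, doc_id), tf in group]
    dictionary[term] = (len(entries), len(postings))
    postings.extend(entries)
```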

Inverted files
- Permit fast search for individual terms
- For each term, you get a list consisting of:
  - document ID
  - frequency of term in doc (optional)
  - position of term in doc (optional)
- These lists can be used to solve Boolean queries (see the sketch below):
  - country -> d1, d2
  - manor -> d2
  - country AND manor -> d2
- Also used for statistical ranking algorithms
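A Boolean AND such as "country AND manor" can be answered by intersecting the two terms' document ID lists; a minimal sketch, assuming each postings list is a plain sorted list of doc IDs:

```python
def intersect(p1, p2):
    """Merge-style intersection of two sorted doc-ID lists, O(len(p1) + len(p2))."""
    i = j = 0
    result = []
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            result.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return result

# From the slide's example: country -> [1, 2], manor -> [2]
print(intersect([1, 2], [2]))   # [2], i.e. country AND manor -> d2
```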

Finding Out About
- Three phases:
  - Asking of a question
  - Construction of an answer
  - Assessment of the answer
- Part of an iterative process: query modification

[Diagram: the same retrieval pipeline, now with a query modification loop feeding the ranked results back into the query.]

Relevance Feedback
- Problem: how to reformulate the query?
- Relevance feedback:
  - Modify existing query based on relevance judgements
  - Extract terms from relevant documents and add them to the query
  - and/or re-weight the terms already in the query
- Either automatic, or let users select the terms from an automatically-generated list

Relevance Feedback
- Usually do both:
  - expand query with new terms
  - re-weight terms in query
- There are many variations:
  - usually positive weights for terms from relevant docs
  - sometimes negative weights for terms from non-relevant docs

Rocchio Method (see Harman, Chapter 11)
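The formula itself did not survive into this transcript; the standard Rocchio update (a reconstruction, roughly as given in Harman's chapter and most IR texts) is:

    Q' = alpha * Q0 + beta * (1/|R|) * (sum of relevant doc vectors) - gamma * (1/|S|) * (sum of non-relevant doc vectors)

where Q0 is the original query vector, R and S are the sets of judged relevant and non-relevant documents, and alpha, beta, gamma control how far the new query Q' moves toward the relevant documents and away from the non-relevant ones.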

Rocchio Method
- Rocchio automatically (sketched below):
  - re-weights terms
  - adds in new terms (from relevant docs)
- Have to be careful when using negative terms
- Most methods perform similarly
  - results heavily dependent on test collection
- Machine learning methods are proving to work better than standard IR approaches like Rocchio
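A minimal Python sketch of that update, treating query and document vectors as dicts of term -> weight (the default parameter values and the dropping of negative weights are illustrative choices, not something the lecture prescribes):

```python
from collections import defaultdict

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return a modified query vector: re-weight existing terms and add
    new terms drawn from the relevant documents."""
    new_query = defaultdict(float)
    for term, w in query.items():
        new_query[term] += alpha * w
    for doc in relevant:                              # positive feedback
        for term, w in doc.items():
            new_query[term] += beta * w / len(relevant)
    for doc in nonrelevant:                           # negative feedback (use with care)
        for term, w in doc.items():
            new_query[term] -= gamma * w / len(nonrelevant)
    # Drop terms pushed below zero -- one common way of "being careful" with negative terms.
    return {t: w for t, w in new_query.items() if w > 0}
```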

Using Relevance Feedback
- Known to improve results
  - in TREC-like conditions (no user involved)
- What about with a user in the loop?
  - Let's examine a user study of relevance feedback by Koenemann & Belkin 1996.

Questions Being Investigated (Koenemann & Belkin 96)
- How well do users work with statistical ranking on full text?
- Does relevance feedback improve results?
- Is user control over the operation of relevance feedback helpful?
- How do different levels of user control affect results?

How much of the guts should the user see?
- Opaque (black box)
  - (like web search engines)
- Transparent
  - (see available terms after the r.f.)
- Penetrable
  - (see suggested terms before the r.f.)
- Which do you think worked best?

Terms available for relevance feedback made visible (from Koenemann & Belkin)

Details on User Study (Koenemann & Belkin 96)
- Subjects have a tutorial session to learn the system
- Their goal is to keep modifying the query until they've developed one that gets high precision
- This is an example of a routing query (as opposed to ad hoc)

Details on User Study (Koenemann & Belkin 96)
- 64 novice searchers
  - 43 female, 21 male, native English
- TREC test bed
  - Wall Street Journal subset
- Two search topics:
  - Automobile Recalls
  - Tobacco Advertising and the Young
- Relevance judgements from TREC and experimenter
- System was INQUERY (vector space with some bells and whistles)

Sample TREC query

Evaluation
- Precision at 30 documents (see the sketch below)
- Baseline (Trial 1):
  - How well does the initial search go?
  - One topic has more relevant docs than the other
- Experimental condition (Trial 2):
  - Subjects get a tutorial on relevance feedback
  - Modify query in one of four modes:
    - no r.f., opaque, transparent, penetrable
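For reference, precision at 30 documents is just the fraction of the top 30 ranked documents judged relevant; a minimal sketch (the function name is mine, not the study's):

```python
def precision_at_k(ranked_doc_ids, relevant_ids, k=30):
    """Fraction of the top-k retrieved documents that are judged relevant."""
    top_k = ranked_doc_ids[:k]
    return sum(1 for doc_id in top_k if doc_id in relevant_ids) / k
```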

Precision vs. RF condition (from Koenemann & Belkin 96)

Effectiveness Results
- Subjects with r.f. performed 17-34% better than those without r.f.
- Subjects in the penetrable case did 15% better as a group than those in the opaque and transparent cases.

Number of iterations in formulating queries (from Koenemann & Belkin 96)

Number of terms in created queries (from Koenemann & Belkin 96)

Behavior Results
- Search times approximately equal
- Precision increased in first few iterations
- Penetrable case required fewer iterations to make a good query than the transparent and opaque cases
- R.F. queries much longer
  - but fewer terms in the penetrable case -- users were more selective about which terms were added

Relevance Feedback Summary
- Iterative query modification can improve precision and recall for a standing query
- In at least one study, users were able to make good choices by seeing which terms were suggested for r.f. and selecting among them
- So … "more like this" can be useful!