
Slide 1: Relevance Feedback (Prof. Marti Hearst, SIMS 202, Lecture 24)

Slide 2: Today
- Review inverted indexes
- Relevance feedback
  - a.k.a. query modification
  - a.k.a. "more like this"
- Begin considering the role of the user

Slide 3: [Diagram of the retrieval pipeline: information need, query, text input, parse, pre-process, index, collections, rank] How is the index constructed?

Slide 4: Inverted Files
- Primary data structure for text indexes
- Invert documents into a big index
- Basic idea (see the sketch after this list):
  - list all the tokens in the collection
  - for each token, list all the docs it occurs in
  - do a few things to reduce redundancy in the data structure
- Reading: "Inverted Files" by Harman et al., Chapter 3, sections 3.1 through 3.3 (the rest of the chapter is optional).
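A minimal sketch of that basic idea in Python, using the two example documents from slide 6; the whitespace tokenizer and the dict-of-sets layout are simplifying assumptions for illustration, not the lecture's implementation:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each token to the set of document IDs it occurs in."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

docs = {
    "d1": "now is the time for all good men to come to the aid of their country",
    "d2": "it was a dark and stormy night in the country manor the time was past midnight",
}
index = build_inverted_index(docs)
print(sorted(index["country"]))  # ['d1', 'd2']
print(sorted(index["manor"]))    # ['d2']
```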

Slide 5: Inverted Files
- We have seen "vector files" conceptually.
- An inverted file is a vector file "inverted" so that rows become columns and columns become rows.

Slide 6: How Are Inverted Files Created
- Documents are parsed to extract tokens; each token is saved with its document ID.
- Doc 1: "Now is the time for all good men to come to the aid of their country"
- Doc 2: "It was a dark and stormy night in the country manor. The time was past midnight"

Slide 7: How Inverted Files Are Created
- After all documents have been parsed, the inverted file is sorted.

Slide 8: How Inverted Files Are Created
- Multiple entries for the same term in a single document are merged.
- Within-document term frequency information is compiled.

Slide 9: How Inverted Files Are Created
- Then the file is split into a dictionary and a postings file (a sketch of the whole pipeline follows).
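A compact sketch of the pipeline on slides 6 through 9, under the assumption of a simple whitespace tokenizer; the dictionary here maps each term to its document frequency and an offset into the postings list, which is one common layout rather than necessarily the exact one in Harman's chapter:

```python
from itertools import groupby

def build_dictionary_and_postings(docs):
    # 1. Parse: extract (term, doc_id) pairs.
    pairs = [(tok, doc_id)
             for doc_id, text in docs.items()
             for tok in text.lower().replace(".", "").split()]
    # 2. Sort the file of pairs.
    pairs.sort()
    # 3. Merge duplicates and compile within-document term frequencies.
    merged = [(term, doc_id, sum(1 for _ in grp))
              for (term, doc_id), grp in groupby(pairs)]
    # 4. Split into a dictionary (term -> doc freq, offset) and a postings file.
    dictionary, postings = {}, []
    for term, grp in groupby(merged, key=lambda t: t[0]):
        entries = [(doc_id, tf) for _, doc_id, tf in grp]
        dictionary[term] = {"df": len(entries), "offset": len(postings)}
        postings.extend(entries)
    return dictionary, postings

docs = {
    "d1": "Now is the time for all good men to come to the aid of their country",
    "d2": "It was a dark and stormy night in the country manor. The time was past midnight",
}
dictionary, postings = build_dictionary_and_postings(docs)
entry = dictionary["country"]
print(postings[entry["offset"]: entry["offset"] + entry["df"]])  # [('d1', 1), ('d2', 1)]
```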

Slide 10: Inverted Files
- Permit fast search for individual terms.
- For each term, you get a list consisting of:
  - document ID
  - frequency of the term in the doc (optional)
  - position of the term in the doc (optional)
- These lists can be used to solve Boolean queries (see the sketch below):
  - country -> d1, d2
  - manor -> d2
  - country AND manor -> d2
- Also used for statistical ranking algorithms.
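The AND query above is just an intersection of posting lists. A small illustration, assuming the index is a dict mapping each term to a set of document IDs (as in the earlier sketch):

```python
def boolean_and(index, *terms):
    """Intersect the posting lists of all query terms."""
    postings = [index.get(term, set()) for term in terms]
    return set.intersection(*postings) if postings else set()

index = {"country": {"d1", "d2"}, "manor": {"d2"}, "time": {"d1", "d2"}}
print(boolean_and(index, "country"))           # {'d1', 'd2'}
print(boolean_and(index, "country", "manor"))  # {'d2'}
```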

Slide 11: Finding Out About
- Three phases:
  - asking of a question
  - construction of an answer
  - assessment of the answer
- Part of an iterative process: query modification.

Slide 12: [Diagram of the retrieval pipeline again, now with a query modification loop feeding back into the query]

Slide 13: Relevance Feedback
- Problem: how to reformulate the query?
- Relevance feedback:
  - modify the existing query based on relevance judgements
  - extract terms from relevant documents and add them to the query
  - and/or re-weight the terms already in the query
- Either automatic, or let users select the terms from an automatically generated list.

Slide 14: Relevance Feedback
- Usually do both:
  - expand the query with new terms
  - re-weight terms in the query
- There are many variations (a simple sketch follows):
  - usually positive weights for terms from relevant docs
  - sometimes negative weights for terms from non-relevant docs
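One very simple variation, sketched under my own assumptions (raw term frequency as the scoring heuristic, a fixed number of suggestions): pull frequent terms from the judged-relevant documents and offer them as expansion candidates, roughly what the "penetrable" interface discussed later shows the user. The example texts below are made-up placeholders echoing one of the study topics.

```python
from collections import Counter

def suggest_expansion_terms(relevant_docs, query_terms, k=5):
    """Rank terms from relevant documents as candidate query expansions."""
    counts = Counter()
    for text in relevant_docs:
        counts.update(tok for tok in text.lower().split()
                      if tok not in query_terms)
    return [term for term, _ in counts.most_common(k)]

relevant = ["tobacco advertising aimed at teens",
            "cigarette ads and young smokers"]
print(suggest_expansion_terms(relevant, {"tobacco", "advertising"}))
```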

Slide 15: Rocchio Method (see Harman, Chapter 11)

Slide 16: Rocchio Method
- Rocchio automatically:
  - re-weights terms
  - adds in new terms (from relevant docs)
- Have to be careful when using negative terms.
- Most methods perform similarly (results are heavily dependent on the test collection).
- Machine learning methods are proving to work better than standard IR approaches like Rocchio.
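The Rocchio formula itself is not reproduced in the transcript; the standard form moves the query toward the centroid of the relevant documents and away from the centroid of the non-relevant ones: q' = alpha*q + beta*mean(relevant) - gamma*mean(nonrelevant). A minimal NumPy sketch, where the default weights (alpha=1, beta=0.75, gamma=0.15) are a conventional choice of mine, not values from the lecture:

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """q' = alpha*q + beta*mean(relevant) - gamma*mean(nonrelevant)."""
    q_new = alpha * query
    if len(relevant):
        q_new = q_new + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q_new = q_new - gamma * np.mean(nonrelevant, axis=0)
    # Clip negative weights to zero, a common precaution with negative feedback.
    return np.maximum(q_new, 0.0)

query = np.array([1.0, 0.0, 0.0])        # toy 3-term vocabulary
relevant = np.array([[1.0, 1.0, 0.0]])
nonrelevant = np.array([[0.0, 0.0, 1.0]])
print(rocchio(query, relevant, nonrelevant))  # [1.75 0.75 0.  ]
```

The clipping step reflects the slide's caution about using negative terms.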

Slide 17: Using Relevance Feedback
- Known to improve results in TREC-like conditions (no user involved).
- What about with a user in the loop?
- Let's examine a user study of relevance feedback by Koenemann & Belkin (1996).

Slide 18: Questions Being Investigated (Koenemann & Belkin 96)
- How well do users work with statistical ranking on full text?
- Does relevance feedback improve results?
- Is user control over the operation of relevance feedback helpful?
- How do different levels of user control affect results?

Slide 19: How Much of the Guts Should the User See?
- Opaque (black box): like web search engines.
- Transparent: see the available terms after the r.f.
- Penetrable: see the suggested terms before the r.f.
- Which do you think worked best?

Slide 20: [figure only; no text captured]

Slide 21: [Figure: terms available for relevance feedback made visible (from Koenemann & Belkin)]

Slide 22: Details on User Study (Koenemann & Belkin 96)
- Subjects have a tutorial session to learn the system.
- Their goal is to keep modifying the query until they've developed one that gets high precision.
- This is an example of a routing query (as opposed to ad hoc).

Slide 23: Details on User Study (Koenemann & Belkin 96)
- 64 novice searchers (43 female, 21 male, native English speakers)
- TREC test bed: Wall Street Journal subset
- Two search topics:
  - Automobile Recalls
  - Tobacco Advertising and the Young
- Relevance judgements from TREC and the experimenter
- System was INQUERY (vector space with some bells and whistles)

Slide 24: [Figure: sample TREC query]

Slide 25: Evaluation
- Precision at 30 documents (see the sketch below).
- Baseline (Trial 1):
  - How well does the initial search go?
  - One topic has more relevant docs than the other.
- Experimental condition (Trial 2):
  - Subjects get a tutorial on relevance feedback.
  - Modify the query in one of four modes: no r.f., opaque, transparent, penetrable.
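Precision at 30 is simply the fraction of the top 30 ranked documents that are relevant. A short sketch; the ranked list and relevance judgements below are hypothetical placeholders:

```python
def precision_at_k(ranked_doc_ids, relevant_ids, k=30):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranked_doc_ids[:k]
    return sum(1 for d in top_k if d in relevant_ids) / k

ranking = [f"d{i}" for i in range(1, 101)]       # hypothetical ranked list
relevant = {"d2", "d5", "d17", "d40", "d99"}     # hypothetical judgements
print(precision_at_k(ranking, relevant, k=30))   # 3 relevant in top 30 -> 0.1
```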

Slide 26: [Figure: precision vs. relevance feedback condition (from Koenemann & Belkin 96)]

Slide 27: Effectiveness Results
- Subjects with r.f. performed 17-34% better than those without r.f.
- Subjects in the penetrable case did 15% better as a group than those in the opaque and transparent cases.

Slide 28: [Figure: number of iterations in formulating queries (from Koenemann & Belkin 96)]

Slide 29: [Figure: number of terms in created queries (from Koenemann & Belkin 96)]

Slide 30: Behavior Results
- Search times were approximately equal.
- Precision increased in the first few iterations.
- The penetrable case required fewer iterations to make a good query than the transparent and opaque cases.
- R.f. queries were much longer, but had fewer terms in the penetrable case: users were more selective about which terms were added.

Slide 31: Relevance Feedback Summary
- Iterative query modification can improve precision and recall for a standing query.
- In at least one study, users were able to make good choices by seeing which terms were suggested for r.f. and selecting among them.
- So "more like this" can be useful!

