
1 CS 430: Information Discovery Lecture 11: Cranfield and TREC

2 Assignment 1 In the email sent with the grades, some comments were truncated. The full comments are being sent out.

3 Assignment 2 (hints) Dublin Core qualifiers: use only qualifiers that are listed on the Dublin Core web site. Specifications: reading and understanding a specification is an important part of this assignment. Specifications are often incomplete, obscure, ambiguous, or self-contradictory. If you encounter such difficulties, explain what the problem is and take a reasonable action.

4 Assignment 2 (hints) Programming: the heart of this assignment is in the metadata crosswalk, not in the programming. In analyzing the input data, you do not have to fully parse the XML tags. (If you can find software that does this for you, please use it, with proper attribution.) The output from your program can be a simple printed list or table.

5 Retrieval Effectiveness Designing an information retrieval system involves many decisions: Manual or automatic indexing? Natural language or controlled vocabulary? What stoplists? What stemming methods? What query syntax? etc. How do we know which of these methods are most effective? Is everything a matter of judgment?

6 Studies of Retrieval Effectiveness The Cranfield Experiments, Cyril W. Cleverdon, Cranfield College of Aeronautics, 1957-1968. SMART System, Gerald Salton, Cornell University, 1964-1988. TREC, Donna Harman, National Institute of Standards and Technology (NIST), 1992 onwards.

7 Cranfield Experiments Comparative efficiency of indexing systems (Universal Decimal Classification, alphabetical subject index, a special facet classification, Uniterm system of co-ordinate indexing). Four indexes were prepared manually for each document in three batches of 6,000 documents -- a total of 18,000 documents, each indexed four times. The documents were reports and papers in aeronautics. Indexes for testing were prepared on index cards and other card forms. Very careful control of indexing procedures.

8 Cranfield Experiments (continued) Searching: 1,200 test questions, each satisfied by at least one document and reviewed by an expert panel; searches carried out by 3 expert librarians. Two rounds of searching were used to develop the testing methodology. Subsidiary experiments at English Electric Whetstone Laboratory and Western Reserve University.

9 Cranfield Experiments -- Analysis Cleverdon introduced recall and precision, based on the concept of relevance.
[Figure: precision (%) plotted against recall (%), showing the region occupied by practical systems]
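
To make the two measures concrete, here is a minimal Python sketch (not part of the original slides; the function name and example sets are mine) that computes precision and recall for a single query:

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    retrieved -- set of document ids returned by the system
    relevant  -- set of document ids judged relevant for the query
    """
    hits = len(retrieved & relevant)        # relevant documents that were retrieved
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 10 documents retrieved, 4 of them relevant, 8 relevant documents in total
p, r = precision_recall(set(range(10)), {0, 2, 5, 7, 20, 21, 22, 23})
print(p, r)   # 0.4 0.5
```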

10 The Cranfield Data The Cranfield data was made widely available and used by other researchers. Salton used the Cranfield data with the SMART system (a) to study the relationship between recall and precision, and (b) to compare automatic indexing with human indexing. Spärck Jones and van Rijsbergen used the Cranfield data for experiments in relevance weighting, clustering, definition of test corpora, etc.

11 Some Cranfield Results The various indexing systems have similar retrieval efficiency. Retrieval using automatic indexing can be at least as effective as manual indexing with controlled vocabularies
-> original results from the Cranfield experiments
-> considered counter-intuitive at the time
-> other results since then have supported this conclusion

12 Text Retrieval Conferences (TREC) Led by Donna Harman (NIST), with DARPA support. Annual since 1992. Corpus of several million textual documents, a total of more than five gigabytes of data. Researchers attempt a standard set of tasks:
-> search the corpus for topics provided by surrogate users
-> match a stream of incoming documents against standard queries
Participants include large commercial companies, small information retrieval vendors, and university research groups.
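
To illustrate the difference between the two task styles, the following toy sketch (hypothetical scoring function and queries; not the method used at TREC) scores incoming documents against standing queries, as in the routing task, while an ad hoc task would instead run a new topic against the fixed corpus:

```python
def score(query_terms, doc_terms):
    """Toy relevance score: the number of query terms that appear in the document."""
    return len(set(query_terms) & set(doc_terms))

# Standing queries for the routing task (hypothetical examples)
standing_queries = {
    "Q1": ["pan", "am", "lockerbie", "legal"],
    "Q2": ["aeronautics", "indexing"],
}

def route(document_terms, threshold=2):
    """Return the standing queries matched by an incoming document."""
    return [qid for qid, terms in standing_queries.items()
            if score(terms, document_terms) >= threshold]

print(route(["legal", "claims", "pan", "am", "flight"]))   # ['Q1']
```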

13 The TREC Corpus

Source                          Size (Mbytes)   # Docs    Median words/doc
Wall Street Journal, 87-89           267         98,732         245
Associated Press newswire, 89        254         84,678         446
Computer Selects articles            242         75,180         200
Federal Register, 89                 260         25,960         391
Abstracts of DOE publications        184        226,087         111
Wall Street Journal, 90-92           242         74,520         301
Associated Press newswire, 88        237         79,919         438
Computer Selects articles            175         56,920         182
Federal Register, 88                 209         19,860         396

14 The TREC Corpus (continued)

Source                          Size (Mbytes)   # Docs    Median words/doc
San Jose Mercury News, 91            287         90,257         379
Associated Press newswire, 90        237         78,321         451
Computer Selects articles            345        161,021         122
U.S. patents, 93                     243          6,711       4,445
Financial Times, 91-94               564        210,158         316
Federal Register, 94                 395         55,630         588
Congressional Record, 93             235         27,922         288
Foreign Broadcast Information        470        130,471         322
LA Times                             475        131,896         351

15 The TREC Corpus (continued) Notes:
1. The TREC corpus consists mainly of general articles, whereas the Cranfield data was in a specialized engineering domain.
2. The TREC data is raw data:
-> no stop words removed, no stemming
-> words are alphanumeric strings
-> no attempt made to correct spelling, sentence fragments, etc.
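
As an illustration of "words are alphanumeric strings", here is a minimal sketch of that style of tokenization (my example, not code from TREC), with no stoplist, no stemming, and no spelling correction:

```python
import re

def tokenize(text):
    """Split raw text into alphanumeric tokens, mirroring how the raw TREC text is treated:
    no stopword removal, no stemming, no correction of spelling or fragments."""
    return re.findall(r"[A-Za-z0-9]+", text.lower())

print(tokenize("Pan Am Flight 103 was destroyed on December 21, 1988."))
# ['pan', 'am', 'flight', '103', 'was', 'destroyed', 'on', 'december', '21', '1988']
```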

16 TREC Topic Statement
Number: 409
Title: legal, Pan Am, 103
Description: What legal actions have resulted from the destruction of Pan Am Flight 103 over Lockerbie, Scotland, on December 21, 1988?
Narrative: Documents describing any charges, claims, or fines presented to or imposed by any court or tribunal are relevant, but documents that discuss charges made in diplomatic jousting are not relevant.
A sample TREC topic statement
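
For illustration only (the exact markup of the TREC topic files is not shown on the slide), a topic statement like the one above can be held in a simple record before being converted into a query:

```python
from dataclasses import dataclass

@dataclass
class Topic:
    """One TREC topic statement, the unit from which queries are built."""
    number: int
    title: str
    description: str
    narrative: str

topic_409 = Topic(
    number=409,
    title="legal, Pan Am, 103",
    description=("What legal actions have resulted from the destruction of Pan Am "
                 "Flight 103 over Lockerbie, Scotland, on December 21, 1988?"),
    narrative=("Documents describing any charges, claims, or fines presented to or "
               "imposed by any court or tribunal are relevant, but documents that "
               "discuss charges made in diplomatic jousting are not relevant."),
)
```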

17 TREC Experiments
1. NIST provides the text corpus on CD-ROM; each participant builds an index using its own technology.
2. NIST provides 50 natural language topic statements; the participant converts them to queries (automatically or manually).
3. The participant runs the searches and returns up to 1,000 hits per topic to NIST; NIST analyzes the results for recall and precision (all TREC participants use rank-based methods of searching).
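
As a sketch of what step 3 produces, submissions are conventionally a ranked list of up to 1,000 document ids per topic, one line per hit (topic number, the literal "Q0", document id, rank, score, run tag); the document ids and run tag below are hypothetical:

```python
def write_run(results, run_tag, path="run.txt"):
    """Write ranked results, one line per hit, in the conventional TREC run layout.

    results -- dict mapping topic number to a list of (doc_id, score) pairs,
               already sorted by decreasing score
    """
    with open(path, "w") as out:
        for topic, ranked in results.items():
            for rank, (doc_id, score) in enumerate(ranked[:1000], start=1):
                out.write(f"{topic} Q0 {doc_id} {rank} {score:.4f} {run_tag}\n")

write_run({409: [("FT911-3", 12.5), ("LA010189-0001", 11.2)]}, "myrun")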

18 Relevance Assessment For each query, a pool of potentially relevant documents is assembled, using the top 100 ranked documents from each participant. The human expert who set the query looks at every document in the pool and determines whether it is relevant; documents outside the pool are not examined. In TREC-8: 7,100 documents in the pool, 1,736 unique documents (after eliminating duplicates), 94 judged relevant.
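
A minimal sketch of the pooling step (function name and document ids are hypothetical): the top 100 ranked documents from every participating run are merged, duplicates collapse, and the resulting pool goes to the assessor who set the topic:

```python
def build_pool(runs, depth=100):
    """Form the judging pool for one topic.

    runs  -- list of ranked result lists, one per participating system,
             each a list of document ids in rank order
    depth -- number of top-ranked documents taken from each run
    """
    pool = set()
    for ranked_docs in runs:
        pool.update(ranked_docs[:depth])   # duplicates collapse in the set
    return pool

# Two toy runs; the pool keeps each unique document once
print(sorted(build_pool([["d1", "d2", "d3"], ["d2", "d4"]], depth=2)))
# ['d1', 'd2', 'd4']
```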

19 A Cornell Footnote The TREC analysis uses a program developed by Chris Buckley, who spent 17 years at Cornell before completing his Ph.D. in 1995. Buckley has continued to maintain the SMART software and has been a participant at every TREC conference. SMART is used as the baseline against which other systems are compared. During the early TREC conferences, the tuning of SMART with the TREC corpus led to steady improvements in retrieval effectiveness, but after about TREC-5 a plateau was reached. TREC-8, in 1999, was the final year for this experiment.

