
1 Digital Libraries: Steps toward information finding Many slides in this presentation are from Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. Book available online at http://nlp.stanford.edu/IR-book/information-retrieval-book.html

2 A brief introduction to Information Retrieval Resource: Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. The entire book is available online, free, at http://nlp.stanford.edu/IR-book/information-retrieval-book.html I will use some of the slides that the authors provide to go with the book.

3 Author’s definition Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers). Note the use of the word “usually.” We have seen DL examples where the material is not documents, and not text.

4 Examples and Scaling IR is about finding a needle in a haystack – finding some particular thing in a very large collection of similar things. Our examples are necessarily small, so that we can comprehend them. Do remember that everything we say must scale to very large collections.

5 Just how much information? Libraries are about access to information. – What sense do you have about information quantity? – How fast is it growing? – Are there implications for the quantity and rate of increase?

6 How much information is there? Data summarization, trend detection, and anomaly detection are key technologies. Soon most everything will be recorded and indexed, and most bytes will never be seen by humans. Making use of it requires algorithms, data and knowledge representation, and knowledge of the domain. (The original slide shows a scale of storage prefixes – kilo, mega, giga, tera, peta, exa, zetta, yotta, and on the small side milli, micro, nano, pico, femto, atto, zepto, yocto – with examples at each level: a book, a photo, a movie, all books as words, all books as multimedia, everything ever recorded.) Slide source: Jim Gray, Microsoft Research (modified). See Mike Lesk, How much information is there: http://www.lesk.com/mlesk/ksg97/ksg.html and Lyman & Varian, How much information: http://www.sims.berkeley.edu/research/projects/how-much-info/

7 Where does the information come from? Many sources – Corporations – Individuals – Interest groups – News organizations Accumulated through crawling

8 Once we have a collection How will we ever find the needle in the haystack? The one bit of information needed? After crawling, or other resource acquisition step, we need to create a way to query the information we have – Next step: Index Example content: Shakespeare’s plays

9 Searching Shakespeare Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia? – See http://www.rhymezone.com/shakespeare/ One could grep all of Shakespeare’s plays for Brutus and Caesar, then strip out lines containing Calpurnia. Why is that not the answer? – Slow (for large corpora) – NOT Calpurnia is non-trivial – Other operations (e.g., find the word Romans near countrymen) are not feasible – Ranked retrieval (returning the best documents first) is not supported

10 Term-document incidence First approach – make a matrix with terms on one axis and plays on the other: all the plays across the columns, all the terms down the rows, with a 1 if the play contains the word and 0 otherwise. Query: Brutus AND Caesar BUT NOT Calpurnia.

11 Incidence Vectors So we have a 0/1 vector for each term. To answer the query: take the vectors for Brutus, Caesar and Calpurnia (complemented), then bitwise AND them: 110100 AND 110111 AND 101111 = 100100.
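To make the previous two slides concrete, here is a minimal Python sketch (not from the book; the three tiny "plays" are invented) that builds a term-document incidence matrix and answers Brutus AND Caesar AND NOT Calpurnia with 0/1 vectors:

```python
# Minimal sketch (not from the book): a term-document incidence matrix
# for three invented "plays", queried with 0/1 vectors as on the slides.
docs = {
    1: "caesar and calpurnia",
    2: "brutus and caesar",
    3: "brutus spoke to calpurnia about caesar",
}

doc_ids = sorted(docs)
terms = sorted({w for text in docs.values() for w in text.split()})

# incidence[t] is the 0/1 vector for term t, one entry per document
incidence = {
    t: [1 if t in docs[d].split() else 0 for d in doc_ids] for t in terms
}

def complement(vec):
    """NOT of a term's incidence vector: flip 0s and 1s."""
    return [1 - bit for bit in vec]

def bitwise_and(a, b):
    """Entry-wise AND of two 0/1 vectors."""
    return [x & y for x, y in zip(a, b)]

# Brutus AND Caesar AND NOT Calpurnia
result = bitwise_and(
    bitwise_and(incidence["brutus"], incidence["caesar"]),
    complement(incidence["calpurnia"]),
)
print([d for d, bit in zip(doc_ids, result) if bit])  # -> [2]
```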

12 Answer to query Antony and Cleopatra, Act III, Scene ii Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus, When Antony found Julius Caesar dead, He cried almost to roaring; and he wept When at Philippi he found Brutus slain. Hamlet, Act III, Scene ii Lord Polonius: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.

13 Try another one What is the vector for the query – Antony and mercy What would we do to find Antony OR mercy?

14 Basic assumptions about information retrieval Collection: Fixed set of documents Goal: Retrieve documents with information that is relevant to the user’s information need and helps the user complete a task

15 The classic search model The diagram on this slide shows the flow: Task → Info Need → Verbal form → Query → Search Engine (over the Corpus) → Results, with a Query Refinement loop back to the query. Ultimately, there is some task to perform. Some information is required in order to perform the task. The information need must be expressed in words (usually). The information need must then be expressed in the form of a query that can be processed. It may be necessary to rephrase the query and try again.

16 The classic search model An example of the same flow: Task: get rid of mice in a politically correct way. Info need: information about removing mice without killing them. Verbal form: “How do I trap mice alive?” Query: mouse trap. Potential pitfalls between task and query results: misconception, mistranslation, misformulation.

17 How good are the results? Precision: what fraction of the retrieved results are actually relevant to the information need? Recall: what fraction of all the relevant results available in the collection were retrieved? These are the basic concepts of information retrieval evaluation.
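As a small illustration of the two measures (the document sets below are invented), precision and recall for a single query can be computed like this:

```python
# Minimal sketch (document IDs invented): precision and recall for one query.
def precision_recall(retrieved, relevant):
    hits = len(retrieved & relevant)              # relevant documents we actually found
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = {1, 2, 5, 8}         # what the system returned
relevant = {2, 5, 7, 9, 11}      # what the user would judge relevant
print(precision_recall(retrieved, relevant))  # -> (0.5, 0.4)
```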

18 Stop and think If you had to choose between precision and recall, which would you choose? Give an example when each would be preferable. Everyone, provide an example of each. Everyone, comment on an example of each provided by someone else. – Ideally, do this in real time. However, you may need to come back to do your comments after others have finished.

19 Discuss What was the best example of precision as being more important? What was the best example of recall being more important? Try to come to consensus in your discussion.

20 Size considerations Consider N = 1 million documents, each with about 1000 words. Avg 6 bytes/word including spaces/punctuation – 6GB of data in the documents. Say there are M = 500K distinct terms among these.

21 The matrix does not work 500K x 1M matrix has half-a-trillion 0’s and 1’s. But it has no more than one billion 1’s. – matrix is extremely sparse. What’s a better representation? – We only record the 1 positions. – i.e. We don’t need to know which documents do not have a term, only those that do. Why?
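A quick back-of-the-envelope check of those figures (a sketch using the numbers assumed on the two slides above) shows just how sparse the matrix is:

```python
# Back-of-the-envelope check of the figures on the last two slides.
docs = 1_000_000         # N documents
words_per_doc = 1_000
terms = 500_000          # M distinct terms

matrix_cells = terms * docs           # 0/1 entries in the full matrix
max_ones = docs * words_per_doc       # at most one 1 per word occurrence
print(matrix_cells)                   # 500,000,000,000 (half a trillion)
print(max_ones)                       # 1,000,000,000 (one billion)
print(max_ones / matrix_cells)        # 0.002 -> at most 0.2% of cells are 1
```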

22 Inverted index For each term t, we must store a list of all documents that contain t. – Identify each by a docID, a document serial number. Can we use fixed-size arrays for this? Example postings lists (the book’s running example): Brutus → 1, 2, 4, 11, 31, 45, 173, 174; Caesar → 1, 2, 4, 5, 6, 16, 57, 132; Calpurnia → 2, 31, 54, 101. What happens if we add document 14, which contains “Caesar”?

23 Inverted index We need variable-size postings lists. – On disk, a continuous run of postings is normal and best. – In memory, we can use linked lists or variable-length arrays. – There are tradeoffs in size and ease of insertion. Each entry in a list is a posting; postings are sorted by docID (more later on why). Brutus → 1, 2, 4, 11, 31, 45, 173, 174; Caesar → 1, 2, 4, 5, 6, 16, 57, 132; Calpurnia → 2, 31, 54, 101.

24 Inverted index construction The pipeline on this slide: Documents to be indexed (“Friends, Romans, countrymen.”) → Tokenizer → token stream (Friends, Romans, Countrymen) → Linguistic modules (stop words, stemming, capitalization, case folding, etc.) → modified tokens (friend, roman, countryman) → Indexer → inverted index, mapping each term (friend, roman, countryman) to its postings list of docIDs.
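A minimal sketch of that pipeline in Python (not the book’s code; the tiny stop list and the crude suffix stripping below stand in for real linguistic modules):

```python
import re
from collections import defaultdict

# Sketch of the pipeline: documents -> tokenizer -> linguistic modules -> indexer.
STOP_WORDS = {"the", "a", "an", "and", "of", "i"}

def tokenize(text):
    """Tokenizer: raw text -> stream of lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def normalize(tokens):
    """Linguistic modules (very crude): drop stop words, strip plural -s."""
    out = []
    for tok in tokens:
        if tok in STOP_WORDS:
            continue
        if tok.endswith("s") and len(tok) > 3:
            tok = tok[:-1]                 # "romans" -> "roman"; a real stemmer does better
        out.append(tok)
    return out

def build_index(docs):
    """Indexer: map each modified token to its sorted postings list of docIDs."""
    postings = defaultdict(set)
    for doc_id, text in docs.items():
        for term in normalize(tokenize(text)):
            postings[term].add(doc_id)
    return {term: sorted(ids) for term, ids in postings.items()}

docs = {1: "Friends, Romans, countrymen.", 2: "Romans and friends of Caesar"}
print(build_index(docs))
# -> {'friend': [1, 2], 'roman': [1, 2], 'countrymen': [1], 'caesar': [2]}
# (a real stemmer would also map "countrymen" to "countryman")
```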

25 Stop and think Look at the previous slide. Describe exactly what happened at each stage. – What does tokenization do? – What did the linguistic modules do? Why would we want these transformations?

26 Indexer steps: Token sequence → sequence of (modified token, document ID) pairs. Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me. Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious. Note the words in the (token, docID) table are exactly the same as in the two documents, and in the same order.

27 Indexer steps: Sort The (token, docID) pairs are sorted by term, and then by docID. This is the core indexing step.

28 Indexer steps: Dictionary & Postings Multiple entries for the same term in a single document are merged. The result is split into a Dictionary and Postings. Document frequency information is added: for each term, the number of documents in which it appears; the postings list holds the IDs of the documents that contain the term.
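The three indexer steps above, as a minimal Python sketch (not the book’s code; it splits the two toy documents from slide 26 on whitespace only, with no linguistic modules):

```python
from itertools import groupby

# Sketch of the indexer steps: (term, docID) pairs -> sort -> dictionary & postings.
docs = {
    1: "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious",
}

# Step 1: token sequence -> (term, docID) pairs
pairs = [
    (tok.strip(".,;").lower(), doc_id)
    for doc_id, text in docs.items()
    for tok in text.split()
]

# Step 2: sort by term, then by docID
pairs.sort()

# Step 3: merge duplicates within a document; split into a dictionary
# (term -> document frequency) and postings (term -> list of docIDs)
dictionary, postings = {}, {}
for term, group in groupby(pairs, key=lambda p: p[0]):
    doc_ids = sorted({doc_id for _, doc_id in group})
    dictionary[term] = len(doc_ids)     # number of documents containing the term
    postings[term] = doc_ids

print(dictionary["caesar"], postings["caesar"])         # -> 2 [1, 2]
print(dictionary["ambitious"], postings["ambitious"])   # -> 1 [2]
```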

29 Spot check (See the separate assignment for this in Blackboard.) Complete the indexing for the following two “documents.” – Of course, the examples have to be very small to be manageable. Imagine that you are indexing the entire news stories. – Construct the charts as seen on the previous slide. – Put your solution in the Blackboard discussion board Indexing – Spot Check 1; you will find it on the content homepage. Document 1: Pearson and Google Jump Into Learning Management With a New, Free System Document 2: Pearson adds free learning management tools to Google Apps for Education

30 How do we process a query? Using the index we just built, examine the terms in some order, looking for the terms in the query.

31 Query processing: AND Consider processing the query: Brutus AND Caesar – Locate Brutus in the Dictionary; retrieve its postings. – Locate Caesar in the Dictionary; retrieve its postings. – “Merge” (intersect) the two postings lists: Brutus → 2, 4, 8, 16, 32, 64, 128; Caesar → 1, 2, 3, 5, 8, 13, 21, 34.

32 The merge Walk through the two postings lists simultaneously, in time linear in the total number of postings entries. Brutus → 2, 4, 8, 16, 32, 64, 128; Caesar → 1, 2, 3, 5, 8, 13, 21, 34; the intersection is 2, 8. If the list lengths are x and y, the merge takes O(x+y) operations. What does that mean? Crucial: postings are sorted by docID. Sec. 1.3

33 Spot Check Let’s assume that – the term mercy appears in documents 1, 2, 13, 18, 24, 35, 54 – the term noble appears in documents 1, 5, 7, 13, 22, 24, 56 Show the document lists, then step through the merge algorithm to obtain the search results.

34 Stop and try it Make up a new pair of postings lists (like the ones we just saw). Post it on the discussion board. Take a pair of postings lists that someone else posted and walk through the merge process. Post a comment on the posting saying how many comparisons you had to do to complete the merge.

35 Intersecting two postings lists (a “merge” algorithm) The book’s pseudocode for the intersection appears on this slide; a Python rendering of the same idea follows below.
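A Python rendering of that two-pointer merge (a sketch of the same idea, not the book’s pseudocode):

```python
def intersect(p1, p2):
    """Intersect two postings lists sorted by docID (the two-pointer merge)."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])   # docID present in both lists
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1                 # advance the pointer at the smaller docID
        else:
            j += 1
    return answer

brutus = [2, 4, 8, 16, 32, 64, 128]
caesar = [1, 2, 3, 5, 8, 13, 21, 34]
print(intersect(brutus, caesar))   # -> [2, 8]
```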

36 Boolean queries: Exact match In the Boolean retrieval model, a query is a Boolean expression: – Boolean queries use AND, OR and NOT to join query terms. The model views each document as a set of words. It is precise: a document either matches the condition or it does not. – Perhaps the simplest model to build. Boolean retrieval was the primary commercial retrieval tool for 3 decades. Many search systems you still use are Boolean: – Email, library catalogs, Mac OS X Spotlight.

37 Query optimization Consider a query that is an AND of n terms, n > 2. For each of the terms, get its postings list, then AND them together. Example query: BRUTUS AND CALPURNIA AND CAESAR What is the best order for processing this query?

38 Query optimization Example query: BRUTUS AND CALPURNIA AND CAESAR Simple and effective optimization: process the terms in order of increasing frequency. Start with the shortest postings list, then keep cutting the intermediate result further. In this example: first CAESAR, then CALPURNIA, then BRUTUS.

39 Optimized intersection algorithm for conjunctive queries The book’s pseudocode appears on this slide; a sketch of the idea follows below.
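A sketch of the optimized conjunctive processing (my rendering, not the book’s pseudocode): sort the terms by postings-list length, intersect shortest-first with the same two-pointer merge as above, and stop early if the running result becomes empty.

```python
def intersect(p1, p2):
    """Two-pointer intersection of postings lists sorted by docID (as above)."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

def intersect_many(query_terms, postings):
    """AND together the postings of all query terms, shortest list first."""
    terms = sorted(query_terms, key=lambda t: len(postings.get(t, [])))
    result = postings.get(terms[0], []) if terms else []
    for term in terms[1:]:
        if not result:                 # the AND is already empty; stop early
            break
        result = intersect(result, postings.get(term, []))
    return result

postings = {
    "brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
    "calpurnia": [2, 31, 54, 101],
}
print(intersect_many(["brutus", "calpurnia", "caesar"], postings))  # -> [2]
```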

40 More general optimization Example query: (MADDING OR CROWD) AND (IGNOBLE OR STRIFE) Get frequencies for all terms. Estimate the size of each OR by the sum of its terms’ frequencies (a conservative upper bound). Process the ORs in increasing order of their estimated sizes.
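A small sketch of that estimate (the document frequencies below are invented): sum the frequencies inside each OR, then process the ORs from the smallest estimate to the largest.

```python
# Sketch of the OR-size estimate; the document frequencies are invented.
doc_freq = {"madding": 30_000, "crowd": 80_000, "ignoble": 5_000, "strife": 20_000}

query = [("madding", "crowd"), ("ignoble", "strife")]   # (... OR ...) AND (... OR ...)

# Conservative size estimate for each OR: the sum of its terms' frequencies
estimate = {group: sum(doc_freq[t] for t in group) for group in query}
order = sorted(query, key=estimate.get)
print(order)  # -> process ("ignoble", "strife") first (25,000 vs 110,000)
```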

41 Scaling These basic techniques are pretty simple. There are challenges: – Scaling: as everything becomes digitized, how well do the processes scale? – Intelligent information extraction: I want information, not just a link to a place that might have that information.

42 Problem with Boolean search: feast or famine Boolean queries often result in either too few (=0) or too many (1000s) results. A query that is too broad yields hundreds of thousands of hits A query that is too narrow may yield no hits It takes a lot of skill to come up with a query that produces a manageable number of hits. – AND gives too few; OR gives too many

43 Ranked retrieval models Rather than a set of documents satisfying a query expression, in ranked retrieval models the system returns an ordering over the (top) documents in the collection with respect to a query. Free text queries: rather than a query language of operators and expressions, the user’s query is just one or more words in a human language. In principle, these are two separate choices, but in practice ranked retrieval has normally been associated with free text queries and vice versa.

44 Feast or famine: not a problem in ranked retrieval When a system produces a ranked result set, large result sets are not an issue. – Indeed, the size of the result set is not an issue. – We just show the top k (≈ 10) results. – We don’t overwhelm the user. – Premise: the ranking algorithm works.

45 Scoring as the basis of ranked retrieval We wish to return, in order, the documents most likely to be useful to the searcher How can we rank-order the documents in the collection with respect to a query? Assign a score – say in [0, 1] – to each document This score measures how well document and query “match”.

46 Query-document matching scores We need a way of assigning a score to a query/document pair Let’s start with a one-term query If the query term does not occur in the document: score should be 0 The more frequent the query term in the document, the higher the score (should be) We will look at a number of alternatives for this.

47 Take 1: Jaccard coefficient A commonly used measure of overlap of two sets A and B – jaccard(A,B) = |A ∩ B| / |A ∪ B| – jaccard(A,A) = 1 – jaccard(A,B) = 0 if A ∩ B = ∅ A and B don’t have to be the same size. Always assigns a number between 0 and 1.

48 Jaccard coefficient: Scoring example What is the query-document match score that the Jaccard coefficient computes for each of the two “documents” below? Query: ides of march Document 1: caesar died in march Document 2: the long march

49 Jaccard Example done Query: ides of march Document 1: caesar died in march Document 2: the long march A = {ides, of, march} B1 = {caesar, died, in, march} B2 = {the, long, march} jaccard(A, B1) = 1/6 and jaccard(A, B2) = 1/5, so Document 2 scores higher.
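A minimal Python sketch, just to make that arithmetic explicit:

```python
# Minimal sketch: Jaccard coefficient between the query and the two "documents".
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| for two sets of terms."""
    return len(a & b) / len(a | b)

query = {"ides", "of", "march"}
doc1 = {"caesar", "died", "in", "march"}
doc2 = {"the", "long", "march"}

print(jaccard(query, doc1))  # -> 1/6 ≈ 0.167
print(jaccard(query, doc2))  # -> 1/5 = 0.2, so document 2 scores higher
```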

50 Issues with Jaccard for scoring It doesn’t consider term frequency (how many times a term occurs in a document). – Rare terms in a collection are more informative than frequent terms; Jaccard doesn’t consider this information. We need a more sophisticated way of normalizing for length. The problem with the example above was that Document 2 “won” because it was shorter, not because it was a better match. We need a way to take document length into account so that longer documents are not penalized in calculating the match score.

51 Next week We will have time for questions and discussion of this material, but I will be looking to see how much you have handled on your own. You get credit for posting a good question You get credit for a reasonable answer to a good question You get credit for pointing out problems or expanding on answers to questions.

52 References Marcos André Gonçalves, Edward A. Fox, Layne T. Watson, and Neill A. Kipp. 2004. Streams, structures, spaces, scenarios, societies (5S): A formal model for digital libraries. ACM Trans. Inf. Syst. 22, 2 (April 2004), 270-312. DOI=10.1145/984321.984325 – http://doi.acm.org/10.1145/984321.984325 – Let me know if you would like a copy of the paper. Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. – Book available online at http://nlp.stanford.edu/IR-book/information-retrieval-book.html – Many of these slides are taken directly from the authors’ slides for the first chapter of the book.

