Slide 1: Adapted from Information Retrieval and Web Search
Pandu Nayak and Prabhakar Raghavan
Lecture 1: Boolean retrieval
Slide 2: Information Retrieval
Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers).
Slide 3: Unstructured (text) vs. structured (database) data in 1996
Slide 4: Unstructured (text) vs. structured (database) data in 2009
Slide 5: Unstructured data in 1680 (Sec. 1.1)
- Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
- One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia. Why is that not the answer?
  - Slow (for large corpora)
  - NOT Calpurnia is non-trivial
  - Other operations (e.g., find the word Romans near countrymen) are not feasible
  - Ranked retrieval (best documents to return) is covered in later lectures
- Grep is line-oriented; IR is document-oriented.
Slide 6: Term-document incidence (Sec. 1.1)
- Build a term-document incidence matrix: the entry for (term, play) is 1 if the play contains the word, 0 otherwise.
- Query: Brutus AND Caesar BUT NOT Calpurnia
Slide 7: Incidence vectors (Sec. 1.1)
- So we have a 0/1 vector for each term.
- To answer the query: take the vectors for Brutus, Caesar and Calpurnia (complemented) and bitwise AND them:
  110100 AND 110111 AND 101111 = 100100
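A minimal Python sketch of this bitwise AND, assuming the six-play ordering of the Shakespeare example (the variable names are ours):

```python
# Incidence vectors over six plays, in the order:
# Antony & Cleopatra, Julius Caesar, The Tempest, Hamlet, Othello, Macbeth
vectors = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,
}

ALL_ONES = 0b111111  # mask for the six-play collection

# Brutus AND Caesar AND NOT Calpurnia
result = vectors["Brutus"] & vectors["Caesar"] & (ALL_ONES & ~vectors["Calpurnia"])
print(format(result, "06b"))  # 100100 -> Antony & Cleopatra, Hamlet
```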
Slide 8: Answers to query (Sec. 1.1)
Antony and Cleopatra, Act III, Scene ii:
  Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus,
  When Antony found Julius Caesar dead,
  He cried almost to roaring; and he wept
  When at Philippi he found Brutus slain.
Hamlet, Act III, Scene ii:
  Lord Polonius: I did enact Julius Caesar: I was killed i' the
  Capitol; Brutus killed me.
Slide 9: Basic assumptions of Information Retrieval (Sec. 1.1)
- Collection: a fixed set of documents
- Goal: retrieve documents with information that is relevant to the user's information need and helps the user complete a task
Slide 10: The classic search model
TASK: Get rid of mice in a politically correct way
  | (Misconception?)
  v
Info Need: Info about removing mice without killing them
  | (Mistranslation?)
  v
Verbal form: How do I trap mice alive?
  | (Misformulation?)
  v
Query: mouse trap -> SEARCH ENGINE (over the Corpus) -> Results
Results feed back into Query Refinement, which revises the Query.
Slide 11: How good are the retrieved docs? (Sec. 1.1)
- Precision: fraction of retrieved docs that are relevant to the user's information need
- Recall: fraction of relevant docs in the collection that are retrieved
- More precise definitions and measurements to follow in later lectures
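In symbols (a standard formulation; the slide defers precise definitions to later lectures):

```latex
\mathrm{Precision} = \frac{|\mathrm{relevant} \cap \mathrm{retrieved}|}{|\mathrm{retrieved}|}
\qquad
\mathrm{Recall} = \frac{|\mathrm{relevant} \cap \mathrm{retrieved}|}{|\mathrm{relevant}|}
```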
Slide 12: Bigger collections (Sec. 1.1)
- Consider N = 1 million documents, each with about 1000 words.
- Avg 6 bytes/word including spaces/punctuation
- That is about 6 GB of data in the documents.
- Say there are M = 500K distinct terms among these.
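The 6 GB figure is just the product of those three numbers:

```latex
10^6 \ \text{docs} \times 10^3 \ \text{words/doc} \times 6 \ \text{bytes/word}
= 6 \times 10^9 \ \text{bytes} \approx 6\,\text{GB}
```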
Slide 13: Can't build the matrix (Sec. 1.1)
- A 500K x 1M matrix has half a trillion 0's and 1's.
- But it has no more than one billion 1's (1M docs x at most 1000 distinct words each), so the matrix is extremely sparse.
- What's a better representation? We only record the 1 positions.
- Why does this work? (See the next slides.)
Slide 14: Inverted index (Sec. 1.2)
- For each term t, we must store a list of all documents that contain t.
- Identify each document by a docID, a document serial number.
- Can we use fixed-size arrays for this?
  Brutus    -> 1 2 4 11 31 45 173 174
  Caesar    -> 1 2 4 5 6 16 57 132
  Calpurnia -> 2 31 54 101
- What happens if the word Caesar is added to document 14?
Slide 15: Inverted index (Sec. 1.2)
- We need variable-size postings lists:
  - On disk, a continuous run of postings is normal and best.
  - In memory, can use linked lists or variable-length arrays; some tradeoffs in size / ease of insertion.
- Linked lists generally preferred to arrays:
  - Dynamic space allocation
  - Insertion of terms into documents is easy
  - Space overhead of pointers
- Dictionary entries point to postings lists, e.g.:
  Brutus    -> 1 2 4 11 31 45 173 174
  Caesar    -> 1 2 4 5 6 16 57 132
  Calpurnia -> 2 31 54 101
- Postings are sorted by docID (more later on why).
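A minimal in-memory sketch of this dictionary-to-postings structure, using Python lists as the variable-length postings (docIDs are the ones on the slide):

```python
import bisect

# Dictionary maps each term to its postings list, kept sorted by docID.
index = {
    "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "Caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
    "Calpurnia": [2, 31, 54, 101],
}

# Adding "Caesar" to document 14 just means inserting 14 in sorted position.
bisect.insort(index["Caesar"], 14)
print(index["Caesar"])  # [1, 2, 4, 5, 6, 14, 16, 57, 132]
```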
Slide 16: Inverted index construction (Sec. 1.2)
Documents to be indexed: "Friends, Romans, countrymen."
  -> Tokenizer -> token stream: Friends, Romans, Countrymen
  -> Linguistic modules (more on these later) -> modified tokens: friend, roman, countryman
  -> Indexer -> inverted index:
     friend     -> 2 4
     roman      -> 1 2
     countryman -> 13 16
Slide 17: Indexer steps: Token sequence (Sec. 1.2)
- Sequence of (modified token, document ID) pairs.
Doc 1: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.
Slide 18: Indexer steps: Sort (Sec. 1.2)
- Sort by terms, and then by docID.
- This is the core indexing step.
Slide 19: Indexer steps: Dictionary & Postings (Sec. 1.2)
- Multiple term entries in a single document are merged.
- Split into Dictionary and Postings.
- Document frequency information is added. Why frequency? Will discuss later.
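A compact sketch of slides 16 through 19 end to end; a bare regex/lowercase tokenizer stands in here for the real linguistic modules:

```python
import re
from collections import defaultdict

docs = {
    1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.",
}

# 1. Token sequence: (modified token, docID) pairs.
pairs = [(tok.lower(), doc_id)
         for doc_id, text in docs.items()
         for tok in re.findall(r"[a-zA-Z']+", text)]

# 2. Sort by term, then by docID (the core indexing step).
pairs.sort()

# 3. Merge duplicate (term, docID) entries; split into dictionary and postings.
postings = defaultdict(list)
for term, doc_id in pairs:
    if not postings[term] or postings[term][-1] != doc_id:
        postings[term].append(doc_id)

# Dictionary keeps each term with its document frequency.
dictionary = {term: len(plist) for term, plist in postings.items()}

print(dictionary["caesar"], postings["caesar"])  # 2 [1, 2]
```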
Slide 20: Where do we pay in storage? (Sec. 1.2)
- Dictionary: terms and counts, plus pointers to the postings
- Postings: lists of docIDs
- Later in the course: how do we index efficiently? How much storage do we need?
Slide 21: The index we just built (Sec. 1.3)
- How do we process a query with it? (Today's focus)
- Later: what kinds of queries can we process?
Slide 22: Query processing: AND (Sec. 1.3)
- Consider processing the query: Brutus AND Caesar
  - Locate Brutus in the Dictionary; retrieve its postings.
  - Locate Caesar in the Dictionary; retrieve its postings.
  - "Merge" (intersect) the two postings:
    Brutus -> 2 4 8 16 32 64 128
    Caesar -> 1 2 3 5 8 13 21 34
Slide 23: The merge (Sec. 1.3)
- Walk through the two postings simultaneously, in time linear in the total number of postings entries:
  Brutus -> 2 4 8 16 32 64 128
  Caesar -> 1 2 3 5 8 13 21 34
  Merge  -> 2 8
- If the list lengths are x and y, the merge takes O(x+y) operations.
- Crucial: postings are sorted by docID.
Slide 24: Intersecting two postings lists (a "merge" algorithm)
(The slide gives the INTERSECT pseudocode; a runnable sketch follows below.)
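A Python rendering of the two-pointer INTERSECT merge the previous slides describe:

```python
def intersect(p1, p2):
    """Two-pointer intersection of two docID-sorted postings lists: O(x+y)."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])  # docID in both lists
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1  # advance the pointer with the smaller docID
        else:
            j += 1
    return answer

print(intersect([2, 4, 8, 16, 32, 64, 128], [1, 2, 3, 5, 8, 13, 21, 34]))  # [2, 8]
```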
Slide 25: Boolean queries: Exact match (Sec. 1.3)
- The Boolean retrieval model is being able to ask a query that is a Boolean expression:
  - Boolean queries use AND, OR and NOT to join query terms
  - Views each document as a set of words
  - Is precise: a document either matches the condition or it does not.
- Perhaps the simplest model to build an IR system on
- Primary commercial retrieval tool for 3 decades
- Many search systems you still use are Boolean: email, library catalogs, Mac OS X Spotlight
Slide 26: Example: WestLaw (http://www.westlaw.com/) (Sec. 1.4)
- Largest commercial (paying subscribers) legal search service (started 1975; ranking added 1992)
- Tens of terabytes of data; 700,000 users
- Majority of users still use Boolean queries
- Example query:
  What is the statute of limitations in cases involving the federal tort claims act?
  LIMIT! /3 STATUTE ACTION /S FEDERAL /2 TORT /3 CLAIM
  (/3 = within 3 words, /S = in same sentence)
Slide 27: Example: WestLaw (http://www.westlaw.com/) (Sec. 1.4)
- Another example query:
  Requirements for disabled people to be able to access a workplace
  disabl! /p access! /s work-site work-place (employment /3 place)
  Note that SPACE is disjunction, not conjunction!
- Long, precise queries; proximity operators; incrementally developed; not like web search
- Many professional searchers still like Boolean search:
  - You know exactly what you are getting
  - But that doesn't mean it actually works better...
Slide 28: Boolean queries: More general merges (Sec. 1.3)
- Exercise: adapt the merge for the queries:
  - Brutus AND NOT Caesar
  - Brutus OR NOT Caesar
- Can we still run through the merge in time O(x+y)? What can we achieve? (One possible answer for the first query is sketched below.)
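For Brutus AND NOT Caesar, one adaptation is the same two-pointer walk, but keeping docIDs from the first list that are absent from the second, still O(x+y):

```python
def and_not(p1, p2):
    """Merge for p1 AND NOT p2: docIDs in p1 that do not appear in p2."""
    answer = []
    i = j = 0
    while i < len(p1):
        if j == len(p2) or p1[i] < p2[j]:
            answer.append(p1[i])  # p1[i] cannot occur later in p2
            i += 1
        elif p1[i] == p2[j]:
            i += 1  # excluded by the NOT
            j += 1
        else:
            j += 1  # advance p2 until it catches up
    return answer

print(and_not([2, 4, 8, 16, 32, 64, 128], [1, 2, 3, 5, 8, 13, 21, 34]))
# [4, 16, 32, 64, 128]
```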
Slide 29: Merging (Sec. 1.3)
- What about an arbitrary Boolean formula?
  (Brutus OR Caesar) AND NOT (Antony OR Cleopatra)
- Can we always merge in "linear" time? Linear in what?
- Can we do better?
Slide 30: Query optimization (Sec. 1.3)
- What is the best order for query processing?
- Consider a query that is an AND of n terms.
- For each of the n terms, get its postings, then AND them together.
  Brutus    -> 2 4 8 16 32 64 128
  Caesar    -> 1 2 3 5 8 16 21 34
  Calpurnia -> 13 16
- Query: Brutus AND Calpurnia AND Caesar
Slide 31: Query optimization example (Sec. 1.3)
- Process terms in order of increasing document frequency: start with the smallest set, then keep cutting it further.
- This is why we kept document frequency in the dictionary.
- Execute the query as (Calpurnia AND Brutus) AND Caesar.
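A sketch of this heuristic; it reuses the intersect function from the slide-24 sketch and treats the length of a postings list as its document frequency:

```python
def intersect_terms(terms, index):
    """AND together the postings of all terms, cheapest (rarest) first."""
    plists = sorted((index[t] for t in terms), key=len)
    result = plists[0]
    for plist in plists[1:]:
        result = intersect(result, plist)  # two-pointer merge from slide 24
        if not result:
            break  # intersection already empty; no need to continue
    return result

index = {
    "Brutus":    [2, 4, 8, 16, 32, 64, 128],
    "Caesar":    [1, 2, 3, 5, 8, 16, 21, 34],
    "Calpurnia": [13, 16],
}
print(intersect_terms(["Brutus", "Calpurnia", "Caesar"], index))  # [16]
```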
Slide 32: More general optimization (Sec. 1.3)
- e.g., (madding OR crowd) AND (ignoble OR strife)
- Get document frequencies for all terms.
- Estimate the size of each OR by the sum of its doc frequencies (conservative).
- Process in increasing order of OR sizes.
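A minimal sketch of the estimate; the document frequencies here are made-up numbers purely for illustration:

```python
# Hypothetical document frequencies for the example terms.
df = {"madding": 10, "crowd": 500, "ignoble": 5, "strife": 80}

def or_size_estimate(or_group):
    """Conservative (upper-bound) size estimate: sum of the doc frequencies."""
    return sum(df[t] for t in or_group)

query = [("madding", "crowd"), ("ignoble", "strife")]
ordered = sorted(query, key=or_size_estimate)
print(ordered)  # [('ignoble', 'strife'), ('madding', 'crowd')]  (85 < 510)
```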
Slide 33: Exercise
Recommend a query processing order for:
  (tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)
Slide 34: What's ahead in IR? Beyond term search
- What about phrases? e.g., Stanford University
- Proximity: find Gates NEAR Microsoft. Need the index to capture position information in docs.
- Zones in documents: find documents with (author = Ullman) AND (text contains automata).
Slide 35: Evidence accumulation
- 1 vs. 0 occurrences of a search term
- 2 vs. 1 occurrence
- 3 vs. 2 occurrences, etc.
- Usually more seems better
- Need term frequency information in docs
Slide 36: Ranking search results
- Boolean queries give inclusion or exclusion of docs.
- Often we want to rank/group results:
  - Need to measure proximity from query to each doc.
  - Need to decide whether docs presented to the user are singletons, or a group of docs covering various aspects of the query.
Slide 37: IR vs. databases: Structured vs. unstructured data
- Structured data tends to refer to information in "tables":

  Employee | Manager | Salary
  ---------+---------+-------
  Smith    | Jones   | 50000
  Chang    | Smith   | 60000
  Ivy      | Smith   | 50000

- Typically allows numerical range and exact match (for text) queries, e.g.:
  Salary < 60000 AND Manager = Smith
Slide 38: Unstructured data
- Typically refers to free-form text
- Allows:
  - Keyword queries, including operators
  - More sophisticated "concept" queries, e.g., find all web pages dealing with drug abuse
- Classic model for searching text documents
Slide 39: Semi-structured data
- In fact almost no data is truly "unstructured"
- E.g., this slide has distinctly identified zones such as the Title and Bullets
- Facilitates "semi-structured" search such as:
  Title contains data AND Bullets contain search
- ...to say nothing of linguistic structure
Slide 40: More sophisticated semi-structured search
- Title is about Object Oriented Programming AND Author something like stro*rup
  (where * is the wild-card operator)
- Issues: how do you process "about"? How do you rank results?
- This is the focus of XML search (IIR chapter 10)
Slide 41: Clustering, classification and ranking
- Clustering: given a set of docs, group them into clusters based on their contents.
- Classification: given a set of topics plus a new doc D, decide which topic(s) D belongs to.
- Ranking: can we learn how to best order a set of documents, e.g., a set of search results?
Slide 42: More sophisticated information retrieval
- Cross-language information retrieval
- Question answering
- Summarization
- Text mining
- ...