Search and Retrieval: More on Term Weighting and Document Ranking
Prof. Marti Hearst, SIMS 202, Lecture 22

Presentation transcript:

Slide 1: Search and Retrieval: More on Term Weighting and Document Ranking. Prof. Marti Hearst, SIMS 202, Lecture 22

Slide 2: Today
- Document ranking
  - term weights
  - similarity measures
  - vector space model
  - probabilistic models
- Multi-dimensional spaces
- Clustering

Slide 3: Finding Out About
Three phases, part of an iterative process:
- Asking a question
- Constructing an answer
- Assessing the answer

Slide 4: Ranking Algorithms
- Assign weights to the terms in the query.
- Assign weights to the terms in the documents.
- Compare the weighted query terms to the weighted document terms.
- Rank-order the results.

Slide 5: [Diagram of the retrieval pipeline: an information need becomes text input that is parsed into a query; collections are pre-processed into an index; the query is matched against the index and the results are ranked.]

Slide 6: Vector Representation (revisited; see the Salton article in Science)
- Documents and queries are represented as vectors.
- Position 1 corresponds to term 1, position 2 to term 2, ..., position t to term t.
- Each position stores the weight of the corresponding term.

Slide 7: Assigning Weights to Terms
- Raw term frequency
- tf × idf
- Automatically derived thesaurus terms

Slide 8: Assigning Weights to Terms
- Raw term frequency
- tf × idf
  - Recall the Zipf distribution.
  - We want to weight terms highly if they are frequent in relevant documents BUT infrequent in the collection as a whole.
- Automatically derived thesaurus terms

Slide 9: Assigning Weights
- The tf × idf measure combines:
  - term frequency (tf)
  - inverse document frequency (idf)
- Goal: assign a tf × idf weight to each term in each document.

Slide 10: tf × idf
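The equation on this slide did not survive transcription; the standard tf × idf weight it refers to (following Salton) is

$$ w_{ik} = tf_{ik} \cdot \log\frac{N}{n_k} $$

where $tf_{ik}$ is the frequency of term $k$ in document $i$, $N$ is the number of documents in the collection, and $n_k$ is the number of documents that contain term $k$.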

Slide 11: tf × idf Normalization
- Normalize the term weights so that longer documents are not unfairly given more weight.
- "Normalize" usually means forcing all values into a fixed range, typically between 0 and 1 inclusive.
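The normalized form also did not transcribe; the usual choice is cosine (vector-length) normalization, which divides each weight by the length of the document's weight vector:

$$ w_{ik} = \frac{tf_{ik}\,\log(N/n_k)}{\sqrt{\sum_{j=1}^{t}\bigl(tf_{ij}\,\log(N/n_j)\bigr)^{2}}} $$

so every document vector has length 1 and longer documents gain no advantage.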

Slide 12: Vector Space Similarity (use the weights to compare the documents)

Slide 13: Vector Space Similarity Measure (combining tf × idf weights into a similarity measure)
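A minimal Python sketch of the scheme so far, combining tf × idf weighting, cosine similarity, and rank-ordering (function and variable names are mine, not from the lecture):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build tf x idf weight vectors for a list of tokenized documents."""
    N = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # n_k per term
    idf = {term: math.log(N / n_k) for term, n_k in df.items()}
    return [{t: f * idf[t] for t, f in Counter(doc).items()} for doc in docs], idf

def cosine(v, w):
    """Cosine similarity: dot product over the product of the vector lengths."""
    dot = sum(weight * w.get(term, 0.0) for term, weight in v.items())
    norm = math.sqrt(sum(x * x for x in v.values())) * math.sqrt(sum(x * x for x in w.values()))
    return dot / norm if norm else 0.0

def rank(query_terms, docs):
    """Weight the query with the collection's idf, score every document, sort best-first."""
    vectors, idf = tfidf_vectors(docs)
    query = {t: f * idf[t] for t, f in Counter(query_terms).items() if t in idf}
    return sorted(((cosine(query, v), i) for i, v in enumerate(vectors)), reverse=True)

# Hypothetical usage: documents and query are lists of tokens.
docs = [["nova", "galaxy", "heat"], ["galaxy", "galaxy", "heat"], ["film", "role"]]
print(rank(["galaxy", "heat"], docs))
```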

Slide 14: To Think About
- How does this ranking algorithm behave?
  - Make up a set of hypothetical documents consisting of terms and their weights.
  - Create some hypothetical queries.
  - How are the documents ranked, depending on the weights of their terms and the queries' terms?

Slide 15: Computing Similarity Scores [figure: document and query vectors plotted in a two-dimensional term space; only the axis tick labels survived transcription]

Slide 16: Computing a Similarity Score
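The worked example on this slide did not transcribe. A hypothetical computation of the same kind, for a query $q$ and a document $d$ in a two-term space:

$$ q = (0.4,\ 0.8), \qquad d = (0.2,\ 0.7) $$

$$ \cos(q, d) = \frac{(0.4)(0.2) + (0.8)(0.7)}{\sqrt{0.4^2 + 0.8^2}\,\sqrt{0.2^2 + 0.7^2}} = \frac{0.64}{\sqrt{0.80}\,\sqrt{0.53}} \approx 0.98 $$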

Slide 17: Other Major Ranking Schemes
- Probabilistic ranking
  - Attempts to be more theoretically sound than the vector space (v.s.) model
  - Tries to predict the probability that a document is relevant, given the query
  - There are many, many variations
  - Usually more complicated to compute than v.s., and many approximations are usually required
  - Usually can't beat v.s. reliably on standard evaluation measures
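In symbols (the standard statement of the idea, not text from the slide): probabilistic models rank documents by the estimated probability of relevance

$$ P(R = 1 \mid d, q), $$

where $R$ is a binary relevance variable, $d$ the document, and $q$ the query; the variations differ mainly in how this probability is estimated.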

Slide 18: Other Major Ranking Schemes
- Staged logistic regression
  - A variation on probabilistic ranking
  - Used successfully here at Berkeley in the Cheshire II system

Slide 19: Staged Logistic Regression
- Pick a set of feature types:
  - x1: sum of frequencies of all terms in the query
  - x2: sum of frequencies of all query terms in the document
  - x3: query length
  - x4: document length
  - x5: sum of idf's for all terms in the query
- Determine coefficients c indicating how important each feature type is (using training examples).
- To assign a score to a document: add up each feature's coefficient times that feature's value, as sketched below.
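A minimal sketch of this kind of scoring. The feature set follows the slide, but the coefficient values and the logistic transform here are illustrative assumptions, not the trained Cheshire II model:

```python
import math

# Hypothetical trained coefficients c0..c5 (c0 is the intercept); real values
# would come from fitting logistic regression on training examples.
C = [-3.5, 0.1, 0.4, -0.2, -0.001, 0.3]

def score(features):
    """Log-odds of relevance: intercept plus coefficient * value for each feature."""
    x1, x2, x3, x4, x5 = features
    log_odds = C[0] + C[1]*x1 + C[2]*x2 + C[3]*x3 + C[4]*x4 + C[5]*x5
    return 1.0 / (1.0 + math.exp(-log_odds))  # logistic: map log-odds to a probability

# Example: x1..x5 as described on the slide, for one query/document pair.
print(score((3.0, 5.0, 3.0, 250.0, 12.4)))
```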

Slide 20: Multi-Dimensional Space
- Documents exist in multi-dimensional space. What does this mean?
- Consider a set of objects with features: different shapes, different sizes, different colors.
  - In what ways can they be grouped?
  - The features define an abstract space in which the objects reside.
- Generalize this to terms in documents. There are more than three kinds of terms!

Slide 21: Text Clustering
Clustering is "the art of finding groups in data." -- Kaufman and Rousseeuw
[Figure: documents plotted in a two-dimensional term space (Term 1 vs. Term 2)]

Slide 23: Pair-wise Document Similarity
How do we compute document similarity?

       nova  galaxy  heat  h'wood  film  role  diet  fur
  A     1      3      1
  B            5      2
  C                           2      1     5
  D                                              4    1

(Row values are as transcribed; the exact column placement for C and D was lost in transcription and is reconstructed from the term order.)

Slide 24: Pair-wise Document Similarity (no normalization, for simplicity)
(same document/term matrix as on the previous slide)
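The computed scores on this slide did not transcribe. Under the column alignment reconstructed above, the unnormalized inner-product similarities would be

$$ sim(A, B) = 3 \cdot 5 + 1 \cdot 2 = 17, $$

with every other pair sharing no terms and therefore scoring 0; A and B (the two astronomy-flavored documents) are the only similar pair.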

Slide 25: Using Clustering
- Cluster the entire collection; find the cluster centroid that best matches the query.
- This has been explored extensively:
  - it is expensive
  - it doesn't work well

Slide 26: Using Clustering
- Alternative (Scatter/Gather): cluster the top-ranked documents and show cluster summaries to the user (sketched below).
- Seems useful:
  - experiments show relevant docs tend to end up in the same cluster
  - users seem able to interpret and use the cluster summaries some of the time
- More computationally feasible
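A minimal sketch of the Scatter/Gather-style step. This uses scikit-learn's k-means as a stand-in; the actual Scatter/Gather system used its own Buckshot and Fractionation algorithms:

```python
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_top_ranked(top_docs, k=3):
    """Cluster the top-ranked documents; summarize each cluster by frequent words."""
    X = TfidfVectorizer().fit_transform(top_docs)       # tf-idf vectors for the docs
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    summaries = {}
    for label in set(labels):
        words = Counter(
            w for doc, l in zip(top_docs, labels) if l == label for w in doc.lower().split()
        )
        summaries[label] = [w for w, _ in words.most_common(5)]
    return labels, summaries
```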

Slide 27: Clustering
- Advantage: lets you see some of the main themes.
- Disadvantage: many of the ways documents could group together are hidden.

Slide 28: Using Clustering
- Another alternative:
  - cluster the entire collection
  - force the results into a 2D space
  - display graphically to give an overview
  - looks neat, but hasn't been shown to be useful
- Kohonen feature maps can be used instead of clustering to produce a display of documents in 2D regions.

Slide 29: Clustering Multi-Dimensional Document Space [image from Wise et al. 95]

Slide 31: Concept "Landscapes" from Kohonen Feature Maps (X. Lin and H. Chen)
[Figure: map regions labeled Pharmacology, Anatomy, Legal, Disease, Hospitals]

Slide 32: Graphical Depictions of Clusters
Problems:
- Either too many concepts, or too coarse
- Only one concept per document
- Hard to view titles
- Browsing without search

Slide 33: Another Approach to Term Weighting: Latent Semantic Indexing
- Try to find words that are similar in meaning to other words by:
  - computing a document-by-term matrix (a matrix is a two-dimensional vector)
  - processing the matrix to pull out the main themes (see the sketch below)
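A minimal sketch of that matrix-processing step, using the singular value decomposition that LSI is built on (numpy-based; the toy matrix is illustrative):

```python
import numpy as np

# Toy document-by-term matrix: rows are documents, columns are terms.
A = np.array([
    [1, 3, 1, 0, 0],
    [0, 5, 2, 0, 0],
    [0, 0, 0, 2, 1],
    [0, 0, 0, 4, 1],
], dtype=float)

# SVD factors A into U * diag(s) * Vt; the largest singular values
# correspond to the strongest "themes" in the collection.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2                         # keep only the k strongest themes
docs_k = U[:, :k] * s[:k]     # documents in the reduced theme space
terms_k = Vt[:k, :].T         # terms in the same reduced space

# Terms (and documents) that land close together in this space tend to
# co-occur, even if they never appear together in a single document.
print(np.round(docs_k, 2))
```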

Slide 34: Document/Term Matrix

Slide 35: Finding Similar Tokens
Two terms are considered similar if they co-occur often in many documents.
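One way to make that concrete (a sketch, not the slide's own method): in a document-by-term matrix, column k records where term k occurs, so terms whose columns point in similar directions co-occur often:

```python
import numpy as np

def term_similarity(A):
    """Cosine similarity between the term columns of a document-by-term matrix A."""
    norms = np.linalg.norm(A, axis=0)
    norms[norms == 0] = 1.0      # avoid dividing by zero for unused terms
    unit = A / norms             # scale each term column to unit length
    return unit.T @ unit         # entry (j, k): similarity of terms j and k
```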

Slide 36: Document/Term Matrix
- This approach doesn't work well. Problems:
  - word contexts are too large
  - polysemy
- Alternative approaches:
  - use smaller contexts
    - machine-readable dictionaries
    - local syntactic structure
  - LSI (Latent Semantic Indexing): find the main themes within the matrix

