Natural Language Processing Topics in Information Retrieval August, 2002.


1 Natural Language Processing Topics in Information Retrieval August, 2002

2 Background on IR Retrieve textual information from document repositories. The user enters a query describing the desired information, and the system returns a list of documents: either an exact match or a ranked list.

3 Text Categorization Assign documents to one of two or more pre-defined categories. Routing: ranking of documents according to relevance; training information in the form of relevance labels is available. Filtering: an absolute (yes/no) assessment of relevance.

4 Design Features of IR Systems Inverted Index: the primary data structure of IR systems. A data structure that lists each word and its frequency in all documents. Including position information allows us to search for phrases. Stop List (Function Words): lists words unlikely to be useful for searching. Examples: the, from, to, … Excluding these words reduces the size of the inverted index.
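The slide's two structures can be combined in a small sketch: a positional inverted index that skips stop words. The documents, stop list, and function name here are illustrative, not from the slides.

```python
from collections import defaultdict

# Toy stop list; real systems use longer, curated lists.
STOP_WORDS = {"the", "from", "to", "a", "of", "that"}

def build_index(docs):
    """Map each non-stop word to {doc_id: [positions]}.

    Keeping positions (not just frequencies) is what enables
    phrase search, as the slide notes.
    """
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in docs.items():
        for pos, word in enumerate(text.lower().split()):
            if word not in STOP_WORDS:
                index[word][doc_id].append(pos)
    return index

docs = {1: "the space age began", 2: "space travel from the moon"}
index = build_index(docs)
print(dict(index["space"]))  # -> {1: [1], 2: [0]}
```

Stop words never enter the index, which is the size saving the slide mentions.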

5 Design Features (Cont.) Stemming: a simplified form of morphological analysis consisting simply of truncating a word. For example, laughing, laughs, laugh, and laughed are all stemmed to laugh. The problem is that semantically different words like gallery and gall may both be truncated to gall, making the stems unintelligible to users. The Lovins and Porter stemmers are standard examples. Thesaurus: widen the search to include documents using related terms.
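A crude truncation stemmer makes both the benefit and the gallery/gall problem from the slide concrete. This is an illustrative sketch only; the suffix list is invented, and real systems use the ordered rewrite rules of the Lovins or Porter algorithms.

```python
# Toy suffix-stripping "stemmer"; suffix list is illustrative.
SUFFIXES = ["ing", "ed", "ery", "s"]

def truncate_stem(word):
    """Strip the first matching suffix, keeping at least 3 characters."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[:-len(suf)]
    return word

# The slide's good case: all forms conflate to one stem.
for w in ["laughing", "laughs", "laughed", "laugh"]:
    print(w, "->", truncate_stem(w))  # all map to "laugh"

# The slide's bad case: "gallery" collapses onto the unrelated "gall".
print(truncate_stem("gallery"))  # -> "gall"
```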

6 Evaluation Measures Precision: percentage of the returned items that are relevant. Recall: percentage of all relevant documents in the collection that are in the returned set. Ways to combine precision and recall: cutoff, uninterpolated average precision, interpolated average precision, precision-recall curves, F measure.
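The set-based versions of these measures can be computed directly. A minimal sketch (function name and example sets are invented), using the standard F measure with parameter beta, where beta = 1 gives the harmonic mean of precision and recall:

```python
def precision_recall_f(retrieved, relevant, beta=1.0):
    """Set-based precision, recall, and F measure."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    # F = (1 + beta^2) P R / (beta^2 P + R); beta=1 is the harmonic mean.
    f = (1 + beta**2) * p * r / (beta**2 * p + r) if p + r else 0.0
    return p, r, f

# 4 documents returned, 2 of them relevant, 6 relevant overall:
p, r, f = precision_recall_f(retrieved={1, 2, 3, 4}, relevant={2, 4, 5, 6, 7, 8})
print(p, r, f)  # -> 0.5, 0.333..., 0.4
```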

7 Probability Ranking Principle (PRP) Ranking documents in order of decreasing probability of relevance is optimal. View retrieval as a greedy search that aims to identify the most valuable document at each step. Assumptions of PRP: documents are independent; a complex information need can be broken into a number of queries, each optimized in isolation; the probability of relevance is only an estimate.

8 The Vector Space Model Measure closeness between query and document. Queries and documents represented as n dimensional vectors. Each dimension corresponds to a word. Advantages: Conceptual simplicity and use of spatial proximity for semantic proximity.

9 Vector Similarity d = The man said that a space age man appeared d’ = Those men appeared to say their age

10 Vector Similarity (Cont.) Cosine measure (normalized correlation coefficient): cos(q, d) = (q · d) / (|q| |d|) = Σ_i q_i d_i / (√(Σ_i q_i²) √(Σ_i d_i²)). Euclidean distance: |q − d| = √(Σ_i (q_i − d_i)²). For length-normalized vectors, ranking by cosine and by Euclidean distance is equivalent.
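The cosine measure can be applied directly to the d and d′ pair from the previous slide, represented as sparse word-count vectors (an illustrative sketch; no stemming or stop-word removal is applied, so only the surface words "age" and "appeared" overlap):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine of the angle between two sparse count vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

d  = Counter("the man said that a space age man appeared".split())
d2 = Counter("those men appeared to say their age".split())
print(cosine(d, d2))  # shared terms: "age", "appeared" -> 2/sqrt(77) ~ 0.228
```

With stemming (men → man, say/said → say), the overlap and the score would grow, which is the motivation for the morphology slide above.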

11 Term Weighting Quantities used: tf_i,j (term frequency): number of occurrences of w_i in d_j. df_i (document frequency): number of documents that w_i occurs in. cf_i (collection frequency): total number of occurrences of w_i in the collection.

12 Term Weighting (Cont.) Term frequency weight: weight(i, j) = 1 + log(tf_i,j) if tf_i,j > 0, else 0. df_i is an indicator of the informativeness of w_i. Inverse document frequency (IDF): idf_i = log(N / df_i), where N is the number of documents. TF.IDF (term frequency × inverse document frequency), an indicator of semantically focused words: weight(i, j) = (1 + log(tf_i,j)) · log(N / df_i) for tf_i,j ≥ 1, else 0.
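The combined weight is a one-liner. A minimal sketch, assuming the natural logarithm (the slides do not fix a log base; any base only rescales the weights):

```python
import math

def tfidf(tf, df, N):
    """TF.IDF weight: (1 + log tf) * log(N / df); zero if the term is absent.

    tf: occurrences of the term in this document
    df: number of documents containing the term
    N:  total number of documents in the collection
    """
    if tf == 0 or df == 0:
        return 0.0
    return (1 + math.log(tf)) * math.log(N / df)

# A term occurring 3 times in a document, present in 10 of 1000 documents:
print(tfidf(tf=3, df=10, N=1000))  # high weight: frequent here, rare overall

# The same term in a document that mentions it once, if it occurred
# in every document, would get weight 0 (log(N/N) = 0).
print(tfidf(tf=1, df=1000, N=1000))  # -> 0.0
```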

13 Term Distribution Models Develop a model for the distribution of a word and use this model to characterize its importance for retrieval. Estimate p_i(k): the proportion of times that word w_i appears k times in a document. Models: Poisson, Two-Poisson, and K mixture. We can derive the IDF from term distribution models.

14 The Poisson Distribution We are interested in the frequency of occurrence of a particular word w_i in a document: p_i(k; λ_i) = e^(−λ_i) λ_i^k / k!, for some λ_i > 0, where the parameter λ_i is the average number of occurrences of w_i per document. The Poisson distribution is a good estimate for non-content words.
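The formula above translates directly into code. A small sketch of the Poisson model with an illustrative rate of 0.5 occurrences per document:

```python
import math

def poisson_pmf(k, lam):
    """p_i(k) = e^(-lambda) * lambda^k / k!: probability that a word
    occurs exactly k times in a document, rate lam > 0."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# A word averaging 0.5 occurrences per document:
lam = 0.5
print(poisson_pmf(0, lam))  # most documents do not contain it at all
print(poisson_pmf(1, lam))  # single occurrences are the common non-zero case
```

Note that the probabilities over all k sum to 1, and that for small λ the mass is concentrated at k = 0 and k = 1; heavier tails in real content-word counts are what motivate the Two-Poisson and K-mixture models on the next slides.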

15 The Two-Poisson Model A mixture of two Poissons gives a better fit to the frequency distribution: p(k; π, λ_1, λ_2) = π e^(−λ_1) λ_1^k / k! + (1 − π) e^(−λ_2) λ_2^k / k!. π: probability of a document being in the privileged class; 1 − π: probability of a document being in the non-privileged class; λ_1, λ_2: average number of occurrences of word w_i in each class. Privileged class: high average number of occurrences; the word is a central content word. Non-privileged class: low average number of occurrences; occurrences are accidental.

16 The K Mixture More accurate still: p_i(k) = (1 − α) δ_k,0 + (α / (β + 1)) (β / (β + 1))^k. β: number of extra occurrences per document in which the term occurs; α is fitted from the absolute frequency of the term.

17 Latent Semantic Indexing Projects queries and documents into a space with "latent" semantic dimensions. Dimensionality reduction: the latent semantic space we project into has fewer dimensions than the original space. Exploits co-occurrence: the fact that two or more terms occur in the same documents more often than chance would predict. Similarity metric: co-occurring terms are projected onto the same dimensions.

18 Singular Value Decomposition SVD takes a term-by-document matrix A and projects it to a matrix Â in a lower-dimensional space of k dimensions (n ≫ k). The 2-norm (distance) ‖A − Â‖₂ between the two matrices is minimized.

19 SVD (Cont.) SVD projection: A = T S Dᵀ. A (t×d): term-by-document matrix. T (t×n): terms in the new space. S (n×n): singular values of A in descending order. D (d×n): documents in the new space. n = min(t, d). T and D have orthonormal columns.

20 LSI in IR Encode terms and documents using factors derived from SVD. Rank similarity of terms and docs to query via Euclidean distances or cosines.

21 LSI example

22 LSI example cont.

23 LSI example: original vs. dimension-reduced

Original binary term-by-document matrix (5 terms × 6 documents):

A =
 1  0  1  0  0  0
 0  1  0  0  0  0
 1  1  0  0  0  0
 1  0  0  1  1  0
 0  0  0  1  0  1

Rank-2 approximation (k = 2):

 0.85  0.52  0.28  0.13  0.21 -0.08
 0.36  0.36  0.16 -0.21 -0.03 -0.18
 1.00  0.72  0.36 -0.05  0.16 -0.21
 0.98  0.13  0.21  1.03  0.62  0.41
 0.13 -0.39 -0.08  0.90  0.41  0.49

Rank-3 approximation (k = 3):

 1.05 -0.03  0.61 -0.02  0.29 -0.31
 0.15  0.92 -0.18 -0.05 -0.12  0.06
 0.87  1.07  0.15  0.04  0.10 -0.05
 1.03 -0.02  0.29  0.99  0.64  0.35
-0.02  0.01 -0.31  1.01  0.35  0.66

24 LSI example cont. Condensed representation of the documents: B = S_2×2 V_2×n.

25 LSI example - querying Fold the query into the reduced space: q′ = qᵀ T_k S_k⁻¹. For example, q = 'astronaut car' = (0 1 0 1 0) gives q′ = (0.38, 0.01). Query result: cos(q′, B_i) = (0.96, 0.56, 0.81, 0.72, 0.91, 0.40).

26 Discourse Segmentation Break documents into topically coherent multi-paragraph subparts. Detect topic shifts within a document.

27 TextTiling (Hearst and Plaunt, 1993) Search for vocabulary shifts from one subtopic to another. Divide the text into fixed-size blocks (20 words) and look for topic shifts between these blocks. Cohesion scorer: measures the topic continuity at each gap (a point between two blocks). Depth scorer: determines how low the cohesion score at a gap is compared to the surrounding gaps. Boundary selector: looks at the depth scores and selects the gaps that are the best segmentation points.
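The cohesion scorer can be sketched as the cosine similarity between the word-count vectors of the two blocks flanking each gap. This is an illustrative simplification of TextTiling, not Hearst and Plaunt's full algorithm: the text, the tiny block size of 5, and the function names are invented, and the depth scorer and boundary selector are omitted.

```python
import math
from collections import Counter

def cohesion(block_a, block_b):
    """Cosine similarity between the count vectors of two word blocks.
    A low score at a gap suggests a subtopic boundary."""
    a, b = Counter(block_a), Counter(block_b)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

words = ("the launch went well the rocket rose "
         "then the crops failed on the farm").split()
BLOCK = 5  # toy block size; the slide (and Hearst) uses 20 words

# Cohesion score at every gap that has a full block on each side.
scores = [cohesion(words[i - BLOCK:i], words[i:i + BLOCK])
          for i in range(BLOCK, len(words) - BLOCK + 1)]
print(scores)  # dips mark candidate topic shifts (here, rocket -> farm)
```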

