
1 Modern Information Retrieval Chapter 5 Query Operations Presenter: 林秉儀 (Student ID: 89522022)

2 Introduction It is difficult to formulate queries that are well designed for retrieval purposes. The initial query formulation can be improved through query expansion and term reweighting. Approaches are based on: –feedback information from the user –information derived from the set of documents initially retrieved (called the local set of documents) –global information derived from the document collection

3 User Relevance Feedback The user is presented with a list of the retrieved documents and, after examining them, marks those which are relevant. Two basic operations: –Query expansion: addition of new terms taken from relevant documents –Term reweighting: modification of term weights based on the user's relevance judgement

4 User Relevance Feedback User relevance feedback is used to: –expand queries with the vector model –reweight query terms with the probabilistic model –reweight query terms with a variant of the probabilistic model

5 Vector Model Define: –Weight: let k_i be a generic index term in the set K = {k_1, …, k_t}. A weight w_{i,j} > 0 is associated with each index term k_i of a document d_j. –Document index term vector: the document d_j is associated with an index term vector represented by d_j = (w_{1,j}, w_{2,j}, …, w_{t,j})

6 Vector Model (cont'd) Define (term weighting from Chapter 2): –normalized frequency: f_{i,j} = freq_{i,j} / max_l freq_{l,j}, where freq_{i,j} is the raw frequency of k_i in the document d_j –inverse document frequency for k_i: idf_i = log(N / n_i) –document term weight: w_{i,j} = f_{i,j} × log(N / n_i) –query term weight: w_{i,q} = (0.5 + 0.5 × freq_{i,q} / max_l freq_{l,q}) × log(N / n_i)
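The Chapter 2 weighting scheme above can be sketched in Python. This is a minimal illustration, not the book's code; the helper names `doc_weights` and `query_weights` are hypothetical, and term frequencies are passed as plain dicts.

```python
import math

def doc_weights(freq, N, n):
    """Document term weights: w_{i,j} = (freq_{i,j}/max_l freq_{l,j}) * log(N/n_i).
    freq: {term: raw frequency in d_j}; N: collection size;
    n: {term: number of documents containing the term}."""
    max_f = max(freq.values())
    return {k: (f / max_f) * math.log(N / n[k]) for k, f in freq.items()}

def query_weights(freq, N, n):
    """Query term weights: w_{i,q} = (0.5 + 0.5*freq/max_f) * log(N/n_i)."""
    max_f = max(freq.values())
    return {k: (0.5 + 0.5 * f / max_f) * math.log(N / n[k])
            for k, f in freq.items()}
```

With these weights, documents and queries become sparse vectors over the index terms, ready for the cosine similarity and feedback formulas of the following slides.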

7 Vector Model (cont'd) Define: –query vector: the query vector q is defined as q = (w_{1,q}, w_{2,q}, …, w_{t,q}) –D_r: set of relevant documents identified by the user –D_n: set of non-relevant documents among the retrieved documents –C_r: set of relevant documents among all documents in the collection –α, β, γ: tuning constants

8 Query Expansion and Term Reweighting for the Vector Model Ideal case: C_r is the complete set of documents relevant to a given query q –the best query vector is given by q_opt = (1/|C_r|) Σ_{d_j ∈ C_r} d_j − (1/(N − |C_r|)) Σ_{d_j ∉ C_r} d_j The relevant documents C_r are not known a priori; they are precisely what we are looking for.

9 Query Expansion and Term Reweighting for the Vector Model (cont'd) Three classic ways to calculate the modified query q_m: –Standard_Rocchio: q_m = α q + (β/|D_r|) Σ_{d_j ∈ D_r} d_j − (γ/|D_n|) Σ_{d_j ∈ D_n} d_j –Ide_Regular: q_m = α q + β Σ_{d_j ∈ D_r} d_j − γ Σ_{d_j ∈ D_n} d_j –Ide_Dec_Hi: q_m = α q + β Σ_{d_j ∈ D_r} d_j − γ × max_non-relevant(d_j), where max_non-relevant(d_j) is the highest-ranked non-relevant document Here D_r and D_n are the document sets which the user judged.
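The Standard_Rocchio formula can be sketched with sparse dict vectors. This is an illustrative implementation, with the common default constants α=1, β=0.75, γ=0.15 chosen here as an assumption (the chapter leaves them as tuning constants); negative resulting weights are clipped to zero, a usual practical convention.

```python
def rocchio(q, Dr, Dn, alpha=1.0, beta=0.75, gamma=0.15):
    """Standard_Rocchio: q_m = a*q + (b/|Dr|)*sum(Dr) - (g/|Dn|)*sum(Dn).
    q is a sparse {term: weight} dict; Dr/Dn are lists of such dicts."""
    qm = {k: alpha * w for k, w in q.items()}
    for d in Dr:  # add mass from user-judged relevant documents
        for k, w in d.items():
            qm[k] = qm.get(k, 0.0) + beta * w / len(Dr)
    for d in Dn:  # subtract mass from user-judged non-relevant documents
        for k, w in d.items():
            qm[k] = qm.get(k, 0.0) - gamma * w / len(Dn)
    return {k: w for k, w in qm.items() if w > 0}  # clip negative weights
```

Note how the relevant documents both reweight existing query terms and introduce new ones (query expansion), while non-relevant documents push terms toward zero.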

10 Term Reweighting for the Probabilistic Model Similarity: the correlation between the vectors d_j and q; this correlation can be quantified as sim(d_j, q) = (d_j • q) / (|d_j| × |q|) (5.2) The probabilistic model ranks documents according to the probabilistic ranking principle: –P(k_i|R): the probability of observing the term k_i in the set R of relevant documents –P(k_i|~R): the probability of observing the term k_i in the set ~R of non-relevant documents

11 Term Reweighting for the Probabilistic Model The similarity of a document d_j to a query q can be expressed as sim(d_j, q) ∝ Σ_i w_{i,q} × w_{i,j} × ( log[ P(k_i|R) / (1 − P(k_i|R)) ] + log[ (1 − P(k_i|~R)) / P(k_i|~R) ] ) For the initial search, the equation above is estimated under the assumptions P(k_i|R) = 0.5 and P(k_i|~R) = n_i/N, where n_i is the number of documents which contain the index term k_i, which gives sim(d_j, q) ∝ Σ_i w_{i,q} × w_{i,j} × log[ (N − n_i) / n_i ]

12 Term Reweighting for the Probabilistic Model (cont'd) For the feedback search: –P(k_i|R) and P(k_i|~R) can be approximated as P(k_i|R) = |D_{r,i}| / |D_r| and P(k_i|~R) = (n_i − |D_{r,i}|) / (N − |D_r|) where D_r is the set of relevant documents according to the user judgement and D_{r,i} is the subset of D_r composed of the documents which contain the term k_i –The similarity of d_j to q uses these estimates in the formula above. No query expansion occurs in this procedure; only the query term weights are adjusted.

13 Term Reweighting for the Probabilistic Model (cont'd) Adjustment factor –Because |D_r| and |D_{r,i}| are usually small, a 0.5 adjustment factor is added to the estimates: P(k_i|R) = (|D_{r,i}| + 0.5) / (|D_r| + 1), P(k_i|~R) = (n_i − |D_{r,i}| + 0.5) / (N − |D_r| + 1) –An alternative adjustment factor is n_i/N: P(k_i|R) = (|D_{r,i}| + n_i/N) / (|D_r| + 1), P(k_i|~R) = (n_i − |D_{r,i}| + n_i/N) / (N − |D_r| + 1)
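The feedback term weight with the 0.5 adjustment factor can be sketched as follows; `prob_term_weight` is a hypothetical helper name, and the counts are passed as integers rather than computed from a real collection.

```python
import math

def prob_term_weight(Dri, Dr, ni, N):
    """Probabilistic feedback weight for term k_i with 0.5 smoothing:
    P(ki|R)  = (|Dr,i| + 0.5) / (|Dr| + 1)
    P(ki|~R) = (ni - |Dr,i| + 0.5) / (N - |Dr| + 1)
    weight   = log[P/(1-P)] + log[(1-Q)/Q]."""
    p = (Dri + 0.5) / (Dr + 1)        # estimate of P(k_i|R)
    q = (ni - Dri + 0.5) / (N - Dr + 1)  # estimate of P(k_i|~R)
    return math.log(p / (1 - p)) + math.log((1 - q) / q)
```

The smoothing keeps the weight finite even when a term appears in all, or none, of the judged relevant documents, which is why it matters for the typically tiny |D_r|.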

14 A Variant of Probabilistic Term Reweighting In 1983, Croft extended the above weighting scheme by suggesting distinct initial search methods and by adapting the probabilistic formula to include within-document frequency weights. The variant of probabilistic term reweighting: sim(d_j, q) ∝ Σ_i w_{i,q} × w_{i,j} × F_{i,j,q} where F_{i,j,q} is a factor which depends on the triple [k_i, d_j, q].

15 A Variant of Probabilistic Term Reweighting (cont'd) Using distinct formulations for the initial search and feedback searches: –initial search: F_{i,j,q} = C + idf_i × f_{i,j}, where f_{i,j} = K + (1 − K) × freq_{i,j} / max_l freq_{l,j} is a normalized within-document frequency; C and K should be adjusted according to the collection –feedback searches: the probabilistic feedback weights of the previous slides are combined with the within-document factor f_{i,j}

16 Automatic Local Analysis Clustering: the grouping of documents which satisfy a set of common properties. We attempt to automatically obtain a description for a larger cluster of relevant documents by identifying terms which are related to the query terms, such as: –Synonyms –Stemming variations –Terms with a distance of at most k words from a query term

17 Automatic Local Analysis (cont'd) The local strategy examines, at query time, the documents retrieved for a given query q to determine terms for query expansion. Two basic types of local strategy: –Local clustering –Local context analysis Local strategies suit environments such as intranets, not web documents.

18 Query Expansion Through Local Clustering Local feedback strategies expand the query with terms correlated to the query terms. Such correlated terms are those present in local clusters built from the local document set.

19 Query Expansion Through Local Clustering (cont'd) Definitions: –Stem: let V(s) be a non-empty subset of words which are grammatical variants of each other. A canonical form s of V(s) is called a stem. Example: if V(s) = {polish, polishing, polished} then s = polish –D_l: the local document set, i.e. the set of documents retrieved for a given query q Strategies for building local clusters: –Association clusters –Metric clusters –Scalar clusters

20 Association Clusters An association cluster is based on the co-occurrence of stems inside documents. Definitions: –f_{s_u,j}: the frequency of a stem s_u in a document d_j –Let m = (m_{uj}) be an association matrix with |S_l| rows and |D_l| columns, where m_{uj} = f_{s_u,j} –The matrix s = m m^t is a local stem-stem association matrix –Each element s_{u,v} in s expresses a correlation c_{u,v} between the stems s_u and s_v: c_{u,v} = Σ_{d_j ∈ D_l} f_{s_u,j} × f_{s_v,j}

21 Association Clusters (cont'd) The correlation factor c_{u,v} quantifies the absolute frequencies of co-occurrence: –Unnormalized: s_{u,v} = c_{u,v} –Normalized: s_{u,v} = c_{u,v} / (c_{u,u} + c_{v,v} − c_{u,v})

22 Association Clusters (cont'd) To build local association clusters: –Consider the u-th row in the association matrix –Let S_u(n) be a function which takes the u-th row and returns the set of n largest values s_{u,v}, where v varies over the set of local stems and v ≠ u –Then S_u(n) defines a local association cluster around the stem s_u.
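Slides 20 to 22 can be sketched together. The helper names (`association_matrix`, `normalized`, `cluster`) are hypothetical, and documents are modeled simply as lists of stems from the local set D_l.

```python
from collections import Counter

def association_matrix(docs):
    """c_{u,v} = sum over d_j in D_l of f_{su,j} * f_{sv,j}.
    docs: list of stem lists; returns {(su, sv): c_uv}."""
    c = Counter()
    for doc in docs:
        f = Counter(doc)  # within-document stem frequencies
        for su, fu in f.items():
            for sv, fv in f.items():
                c[(su, sv)] += fu * fv
    return c

def normalized(c, u, v):
    """Normalized correlation: c_uv / (c_uu + c_vv - c_uv)."""
    return c[(u, v)] / (c[(u, u)] + c[(v, v)] - c[(u, v)])

def cluster(c, u, n):
    """S_u(n): the n stems most correlated with s_u, v != u."""
    vals = {v: s for (a, v), s in c.items() if a == u and v != u}
    return sorted(vals, key=vals.get, reverse=True)[:n]
```

The same `cluster` step is reused for metric and scalar clusters; only the matrix being ranked changes.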

23 Metric Clusters Two terms which occur in the same sentence seem more correlated than two terms which occur far apart in a document. It might be worthwhile to factor in the distance between two terms in the computation of their correlation factor.

24 Metric Clusters (cont'd) Let r(k_i, k_j) be the distance between two keywords k_i and k_j in the same document. If k_i and k_j are in distinct documents, we take r(k_i, k_j) = ∞. A local stem-stem metric correlation matrix s is defined such that each element s_{u,v} expresses a metric correlation c_{u,v} between the stems s_u and s_v: c_{u,v} = Σ_{k_i ∈ V(s_u)} Σ_{k_j ∈ V(s_v)} 1 / r(k_i, k_j)

25 Metric Clusters (cont'd) Given a local metric matrix s, to build local metric clusters: –Consider the u-th row in the metric matrix –Let S_u(n) be a function which takes the u-th row and returns the set of n largest values s_{u,v}, where v varies over the set of local stems and v ≠ u –Then S_u(n) defines a local metric cluster around the stem s_u.
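The metric correlation of slide 24 can be sketched as a sum of inverse word distances. This is an illustrative helper under the assumption that each stem's occurrences are given as word positions per document; occurrences in distinct documents contribute nothing, which encodes r = ∞.

```python
def metric_correlation(positions_u, positions_v):
    """c_{u,v} = sum of 1/r(ki, kj) over occurrence pairs of the two
    stems' variants, where r is the word distance within one document.
    positions_*: {doc_id: [word positions]}. Pairs in distinct
    documents are skipped (r = infinity contributes 0). Assumes the
    two stems never share the exact same position."""
    c = 0.0
    for doc, pu in positions_u.items():
        for pos_u in pu:
            for pos_v in positions_v.get(doc, []):
                c += 1.0 / abs(pos_u - pos_v)
    return c
```

Nearby occurrences dominate the sum, so stems that co-occur within the same sentence are ranked as more correlated than stems merely sharing a document.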

26 Scalar Clusters Two stems with similar neighborhoods have some synonymy relationship. One way to quantify such neighborhood relationships is to arrange all correlation values s_{u,i} in a vector s_u, arrange all correlation values s_{v,i} in another vector s_v, and compare these vectors through a scalar measure.

27 Scalar Clusters (cont'd) Let s_u = (s_{u,1}, s_{u,2}, …, s_{u,n}) and s_v = (s_{v,1}, s_{v,2}, …, s_{v,n}) be two vectors of correlation values for the stems s_u and s_v. Let s = (s_{u,v}) be a scalar association matrix. Each s_{u,v} can be defined as the cosine s_{u,v} = (s_u • s_v) / (|s_u| × |s_v|) Let S_u(n) be a function which returns the set of n largest values s_{u,v}, v ≠ u. Then S_u(n) defines a scalar cluster around the stem s_u.
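The scalar measure above is the cosine between two rows of an existing correlation matrix; a minimal sketch, assuming the rows are passed as plain lists of numbers:

```python
import math

def scalar_correlation(su, sv):
    """s_{u,v} = (su . sv) / (|su| * |sv|): cosine between the
    correlation-value vectors of stems s_u and s_v."""
    dot = sum(a * b for a, b in zip(su, sv))
    norm_u = math.sqrt(sum(a * a for a in su))
    norm_v = math.sqrt(sum(b * b for b in sv))
    return dot / (norm_u * norm_v)
```

Two stems whose correlation rows point in the same direction (similar neighborhoods) score near 1 even if their absolute frequencies differ, which is exactly the synonymy signal the slide describes.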

28 Interactive Search Formulation Stems (or terms) that belong to clusters associated with the query stems (or terms) can be used to expand the original query. A stem s_u which belongs to a cluster (of size n) associated with another stem s_v (i.e. s_u ∈ S_v(n)) is said to be a neighbor of s_v.

29 Interactive Search Formulation (cont'd) [Figure: the stem s_u lies inside the neighborhood S_v(n) of the stem s_v.]

30 Interactive Search Formulation (cont'd) For each stem, select m neighbor stems from the cluster S_v(n) (which might be of type association, metric, or scalar) and add them to the query. Hopefully, the additional neighbor stems will retrieve new relevant documents. S_v(n) may be composed of stems obtained using normalized or unnormalized correlation factors: –a normalized cluster tends to group stems which are more rare –an unnormalized cluster tends to group stems due to their large frequencies

31 Interactive Search Formulation (cont'd) Information about correlated stems can be used to improve the search: –Let two stems s_u and s_v be correlated with a correlation factor c_{u,v} –If c_{u,v} is larger than a predefined threshold, then a neighbor stem of s_u can also be interpreted as a neighbor stem of s_v, and vice versa –This provides greater flexibility, particularly with Boolean queries –Consider the expression (s_u + s_v), where the + symbol stands for disjunction –Let s_u' be a neighbor stem of s_u –Then one can try both (s_u' + s_v) and (s_u' + s_u) as synonym search expressions, because of the correlation given by c_{u,v}

32 Query Expansion Through Local Context Analysis The local context analysis procedure operates in three steps: –1. Retrieve the top n ranked passages using the original query. This is accomplished by breaking up the documents initially retrieved by the query into fixed-length passages (for instance, of size 300 words) and ranking these passages as if they were documents. –2. For each concept c in the top ranked passages, compute the similarity sim(q, c) between the whole query q (not individual query terms) and the concept c, using a variant of tf-idf ranking.

33 Query Expansion Through Local Context Analysis (cont'd) –3. The top m ranked concepts (according to sim(q, c)) are added to the original query q. Each added concept is assigned a weight given by 1 − 0.9 × i/m, where i is the position of the concept in the final concept ranking. The terms in the original query q might be stressed by assigning a weight equal to 2 to each of them.
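Step 3's weighting can be sketched as follows. The function name is hypothetical, the concepts are assumed already ranked by sim(q, c), and the rank i is taken as 1-based here since the chapter does not fix the origin.

```python
def concept_weights(concepts, query_terms):
    """Weights for the expanded query under local context analysis:
    the concept at final rank i (1-based) gets 1 - 0.9*i/m, and each
    original query term is stressed with weight 2."""
    m = len(concepts)
    weights = {t: 2.0 for t in query_terms}  # stress original terms
    for i, c in enumerate(concepts, start=1):
        # setdefault so an added concept never overrides an original term
        weights.setdefault(c, 1.0 - 0.9 * i / m)
    return weights
```

The linear decay keeps even the last of the m added concepts at a small positive weight (0.1), so expansion terms can contribute without overwhelming the original query.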

