A Markov Random Field Model for Term Dependencies


1 A Markov Random Field Model for Term Dependencies
Hongyu Li & Chaorui Chang

2 Background Dependencies exist between terms in a collection of text
Estimating statistical models for general term dependencies is infeasible due to data sparsity
Most past work on modeling term dependencies has focused on phrases, proximity, or term co-occurrences

3 Hypothesis and Solution
Hypothesis: dependence models will be more effective for larger collections than for smaller ones
Hypothesis: incorporating several types of evidence into a dependence model will further improve effectiveness
Solution: introduce a Markov Random Field (MRF) model

4 Markov Random Field Also called undirected graphical models; they model joint distributions
In the paper, the MRF is used to model the joint distribution $P_\Lambda(Q,D)$ over queries $Q$ and documents $D$
Assume the graph $G$ consists of query nodes $q_i$ and a document node $D$
The joint distribution is defined by $P_\Lambda(Q,D) = \frac{1}{Z_\Lambda} \prod_{c \in C(G)} \psi(c;\Lambda)$
where $C(G)$ is the set of cliques in $G$ and each $\psi(\cdot\,;\Lambda)$ is a non-negative potential function

5 3 variants of MRF model
Full independence: query terms $q_i$ are independent of each other
Sequential dependence: dependence between neighboring query terms
Full dependence: all query terms are in some way dependent on each other
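To make the three variants concrete, here is a minimal Python sketch (not from the paper or the slides) that enumerates which sets of query terms form cliques with the document node under each variant; the function and variant names are hypothetical.

```python
from itertools import combinations

def query_term_cliques(terms, variant):
    """Return the sets of query terms that, together with the document node D,
    form cliques under the given dependence assumption."""
    if variant == "full_independence":
        # Each query term forms a 2-clique with D on its own.
        return [(t,) for t in terms]
    if variant == "sequential_dependence":
        # Single terms plus every pair of neighboring query terms.
        return [(t,) for t in terms] + [tuple(terms[i:i + 2]) for i in range(len(terms) - 1)]
    if variant == "full_dependence":
        # Every non-empty subset of query terms is treated as dependent.
        return [c for r in range(1, len(terms) + 1) for c in combinations(terms, r)]
    raise ValueError(f"unknown variant: {variant}")

print(query_term_cliques(["markov", "random", "field"], "sequential_dependence"))
# [('markov',), ('random',), ('field',), ('markov', 'random'), ('random', 'field')]
```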

6 Potential functions Potential function for a 2-clique (a single query term and the document):
$\psi_T(c) = \lambda_T \log P(q_i \mid D) = \lambda_T \log\!\left[(1-\alpha_D)\,\frac{tf_{q_i,D}}{|D|} + \alpha_D\,\frac{cf_{q_i}}{|C|}\right]$
Contiguous sets of query terms within the clique (ordered window):
$\psi_O(c) = \lambda_O \log P(\#1(q_i,\ldots,q_{i+k}) \mid D) = \lambda_O \log\!\left[(1-\alpha_D)\,\frac{tf_{\#1(q_i,\ldots,q_{i+k}),D}}{|D|} + \alpha_D\,\frac{cf_{\#1(q_i,\ldots,q_{i+k})}}{|C|}\right]$
Non-contiguous sets of query terms (unordered window):
$\psi_U(c) = \lambda_U \log P(\#uwN(q_i,\ldots,q_j) \mid D) = \lambda_U \log\!\left[(1-\alpha_D)\,\frac{tf_{\#uwN(q_i,\ldots,q_j),D}}{|D|} + \alpha_D\,\frac{cf_{\#uwN(q_i,\ldots,q_j)}}{|C|}\right]$

7 Ranking Define the ranking function
$P_\Lambda(D \mid Q) = \frac{P_\Lambda(Q,D)}{P_\Lambda(Q)} \stackrel{rank}{=} \log P_\Lambda(Q,D) - \log P_\Lambda(Q)$
Each potential function can be parameterized as $\psi(c;\Lambda) = \exp(\lambda_c f(c))$, which gives
$P_\Lambda(D \mid Q) \stackrel{rank}{=} \sum_{c \in C(G)} \lambda_c f(c) = \sum_{c \in T} \lambda_T f_T(c) + \sum_{c \in O} \lambda_O f_O(c) + \sum_{c \in O \cup U} \lambda_U f_U(c)$
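A sketch of the resulting rank-equivalent score, assuming the per-clique feature values have already been computed (for example with the clique_feature helper above); the signature is illustrative.

```python
def mrf_score(fT_values, fO_values, fU_values, lam_T, lam_O, lam_U):
    """Rank-equivalent document score: the lambda-weighted sum of the term (T),
    ordered window (O), and unordered window (O union U) clique features."""
    return (lam_T * sum(fT_values)
            + lam_O * sum(fO_values)
            + lam_U * sum(fU_values))

# Documents are then sorted by this score in descending order for the query.
```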

8 Training Set the parameter values $(\lambda_T, \lambda_O, \lambda_U)$
Train the model by directly maximizing mean average precision
The ranking function is invariant to the scale of the parameters, so we can constrain $\lambda_T + \lambda_O + \lambda_U = 1$
[Figure: example mean average precision surface for the GOV2 collection using the full dependence model]
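One simple way to carry out this training is an exhaustive sweep over the constrained parameter simplex, keeping the point with the best mean average precision. A sketch follows, where evaluate_map is an assumed callback that runs retrieval with the given lambdas on the training queries and returns MAP.

```python
def train_lambdas(evaluate_map, step=0.05):
    """Sweep lambda_T and lambda_O over a grid with lambda_U = 1 - lambda_T - lambda_O,
    and return the setting that maximizes mean average precision."""
    best_lams, best_map = (1.0, 0.0, 0.0), float("-inf")
    steps = int(round(1.0 / step))
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            lam_T, lam_O = i * step, j * step
            lam_U = 1.0 - lam_T - lam_O
            current = evaluate_map(lam_T, lam_O, lam_U)
            if current > best_map:
                best_lams, best_map = (lam_T, lam_O, lam_U), current
    return best_lams, best_map
```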

9 3. Experimental Results Analyze retrieval effectiveness across different collections
Newswire collections (AP and WSJ): small, homogeneous collections
Web collections (WT10g and GOV2): larger and less homogeneous

10 3.1 Full Independence variant
The cliques are only members of the set T, and therefore we set $\lambda_O = \lambda_U = 0$, $\lambda_T = 1$.
Ranking function: $P_\Lambda(D \mid Q) \stackrel{rank}{=} \sum_{c \in T} f_T(c)$
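With $\lambda_T = 1$ this reduces to a bag-of-words ranking. As a query it can be expressed with a plain #combine over the terms, assuming Indri-style operators (an assumption about the query language, not something stated on this slide).

```python
def full_independence_query(terms):
    """Full independence variant: only term cliques, so the query is a plain
    #combine of the individual query terms (Indri-style syntax assumed)."""
    return "#combine( " + " ".join(terms) + " )"

print(full_independence_query(["markov", "random", "field"]))
# #combine( markov random field )
```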

11 AvgP refers to mean average precision, P@10 is precision at 10 ranked documents, and µ is the smoothing parameter used.
These results provide a baseline against which to compare the sequential and full dependence variants.
[Table: full independence variant results]

12 3.2 Sequential Dependence variant
Models of this form have cliques in T, O, and U.
Ranking function: the general form from slide 7, restricted to these cliques.
The unordered feature function, $f_U$, has a free parameter N that allows the size of the unordered window (the scope of proximity) to vary.
We explore window sizes of 2, sentence-sized (8), 50, and "unlimited" to see what impact they have on effectiveness.
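A sketch of building the corresponding query string for the sequential dependence variant, again assuming Indri-style #weight/#combine/#1/#uwN operators; the default lambdas here are illustrative placeholders, not the trained values from the paper.

```python
def sequential_dependence_query(terms, lam_T=0.8, lam_O=0.1, lam_U=0.1, window=8):
    """Combine unigram features with ordered (#1) and unordered (#uwN) window
    features over each pair of neighboring query terms."""
    pairs = [f"{a} {b}" for a, b in zip(terms, terms[1:])]
    unigrams = " ".join(terms)
    ordered = " ".join(f"#1({p})" for p in pairs)
    unordered = " ".join(f"#uw{window}({p})" for p in pairs)
    return (f"#weight( {lam_T} #combine( {unigrams} ) "
            f"{lam_O} #combine( {ordered} ) "
            f"{lam_U} #combine( {unordered} ) )")

print(sequential_dependence_query(["markov", "random", "field"]))
```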

13 The results show very little difference across the various window sizes.
For the AP, WT10g, and GOV2 collections, sentence-sized windows performed the best.
For the WSJ collection, N = 2 performed the best.
The sequential dependence variant outperforms the full independence variant.
[Table: sequential dependence variant results]

14 3.3 Full Dependence variant
Consists of cliques in T, O, and U.
Ranking function: the general form from slide 7, over all three clique sets.
We set the parameter N in the feature function $f_U$ to four times the number of query terms in the clique c.
We analyze the impact that the ordered and unordered window feature functions have on effectiveness.
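A small sketch of how the unordered window size would be chosen per clique under this rule; the function name is hypothetical.

```python
def unordered_window_size(clique_terms):
    """Full dependence variant: the unordered window N is four times the number
    of query terms in the clique (per this slide)."""
    return 4 * len(clique_terms)

# A 3-term clique would be matched within a window of 12 terms,
# i.e. #uw12(...) in Indri-style syntax.
print(unordered_window_size(["term", "dependence", "model"]))  # 12
```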

15 For the AP collection, there is very little difference between the feature settings.
For the WSJ collection, the ordered features produce a clear improvement over the unordered features, but there is very little difference between using ordered features alone and the combination of ordered and unordered features.
The results for the two web collections, WT10g and GOV2, are similar. In both, unordered features perform better than ordered features, but the combination of ordered and unordered features leads to noticeable improvements in mean average precision.
[Table: full dependence variant results]

16 Strict matching via ordered window features is more important for the smaller newswire collections, due to the homogeneous, clean nature of the documents. For the web collections, the opposite is true.

17 4. CONCLUSIONS Three dependence model variants are described, where each captures different dependencies between query terms. Modeling dependencies can significantly improve retrieval effectiveness across a range of collections. Possible future work includes exploring a wider range of potential functions, applying the model to other retrieval tasks and so on.

18 Thank you

