A Study of Poisson Query Generation Model for Information Retrieval


1 A Study of Poisson Query Generation Model for Information Retrieval
Qiaozhu Mei, Hui Fang, and ChengXiang Zhai University of Illinois at Urbana-Champaign

2 Outline
Background of query generation in IR
Query generation with the Poisson language model
Smoothing in the Poisson query generation model
Poisson vs. multinomial in query generation IR: analytical comparison and empirical experiments
Summary

3 Query Generation IR Model [Ponte & Croft 98]
Each document d1 … dN is associated with a document language model, and documents are scored by the likelihood of the query q under that model.
Known as the language modeling (LM) approach to IR; different from document generation.
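The query-likelihood scoring idea on this slide can be sketched in a few lines (a minimal illustration, not the paper's implementation; here the document model is estimated by maximum likelihood, which later slides smooth):

```python
import math
from collections import Counter

def query_log_likelihood(query, doc):
    """Score a document by log P(q | theta_d), with theta_d estimated by
    maximum likelihood from the document's own term counts."""
    tf, dl = Counter(doc), len(doc)
    score = 0.0
    for w in query:
        p = tf[w] / dl
        # A query term absent from the document gets probability zero,
        # driving the score to -inf: the motivation for smoothing.
        score += math.log(p) if p > 0 else float("-inf")
    return score
```

Documents are then ranked by this score, highest first.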

4 Interpretation of LM d
d : a model for queries posed by users who like document d [Lafferty and Zhai 01] Estimate d using document d  use d to approximate the queries used by users who like d Existing methods differ mainly in the choice of d and how d is estimated (smoothing) Multi-Bernoulli: e.g, [Ponte and Croft 98, Metzler et al 04] Multinomial: (most popular) e.g., [Hiemstra et al. 99, Miller et al. 99, Zhai and Lafferty 01]

5 Multi-Bernoulli vs. Multinomial
Multi-Bernoulli: flip a coin (heads/tails) for each vocabulary word (e.g., text, mining, model, clustering) to decide whether it appears; does not model term frequency.
Multinomial: toss a die to choose each query word in turn; does not model term absence, and imposes a sum-to-one assumption over the vocabulary.
Example query q: "text mining"

6 Problems of Multinomial
Does not model term absence; probabilities must sum to one over all terms.
Reality is harder than expected: empirical estimates show mean(tf) < variance(tf) (Church & Gale 95), and estimates on AP88-89 confirm this for both all terms and query terms, whereas multinomial/Bernoulli models imply mean > variance.
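A tiny computation illustrates the overdispersion point (the counts below are invented for illustration, not the AP88-89 estimates):

```python
from statistics import mean, pvariance

# Per-document frequencies of one toy term across 8 documents.
# Real term counts are "bursty": a term is absent from most documents
# but repeats heavily in a few, so variance exceeds the mean -- the
# overdispersion Church & Gale observed.
tf = [0, 0, 0, 0, 0, 1, 2, 5]

m, v = mean(tf), pvariance(tf)
# A model that forces variance == mean (Poisson) or variance < mean
# (multinomial/Bernoulli) cannot fit such counts exactly.
```

Here the population variance exceeds the mean, matching the slide's inequality.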

7 Poisson?
Poisson models frequency directly (including zero frequency).
No sum-to-one constraint across different words w; mean = variance.
Poisson has been explored in document generation models, but not in query generation models.

8 Related Work
Poisson has been explored in document generation models, e.g., 2-Poisson → Okapi/BM25 (Robertson and Walker 94), and in parallel derivations of probabilistic models (Roelleke and Wang 06).
Our work adds to this body of exploration: we study Poisson within the query generation framework and explore the specific features Poisson brings to LM retrieval.

9 Research Questions
How can we model query generation with a Poisson language model?
How can we smooth such a Poisson query generation model?
How does a Poisson model differ from a multinomial model in the context of query generation retrieval?

10 Query Generation with Poisson
Each term w is modeled as an independent emitter with arrival rate λw per unit of query length |q|; the query is the receiver.
Example rates estimated from a document: text 3/7, mining 2/7, model 1/7, clustering 1/7.
Query q: "mining text mining systems" (observed counts: text 1, mining 2, systems 1).

11 Query Generation with Poisson (II)
Represent the query as a vector of term counts: q = ⟨c(w1, q), c(w2, q), …, c(wN, q)⟩.
Each count c(wi, q) is generated by the Poisson process for wi; the rates λw1 … λwN are estimated from the document by MLE.
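Putting the two slides together, the Poisson query likelihood can be sketched as follows (a sketch of the model described here; function and variable names are ours):

```python
import math
from collections import Counter

def poisson_query_loglik(query, rates):
    """log P(q | d) when the query is a vector of counts c(w, q) and each
    term w is an independent Poisson variable with expected count
    lambda_w * |q|, where lambda_w is the per-unit rate from the document."""
    qlen = len(query)
    counts = Counter(query)
    loglik = 0.0
    for w in set(rates) | set(counts):
        lam, c = rates.get(w, 0.0), counts[w]
        mean = lam * qlen
        if mean > 0:
            # Poisson pmf: exp(-mean) * mean**c / c!  (zero counts contribute too)
            loglik += -mean + c * math.log(mean) - math.lgamma(c + 1)
        elif c > 0:
            # A query term with zero rate makes the whole query impossible:
            # this is what smoothing (next slides) fixes.
            return float("-inf")
    return loglik
```

With the rates from this slide, the query "mining text mining systems" gets likelihood zero until the rates are smoothed, because "systems" never occurs in the document.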

12 Smoothing Poisson LM
The document model assigns a zero rate to unseen terms (e.g., "system" for the query "text mining systems"), so it is interpolated with a background collection model (e.g., collection rates mining 0.01, model 0.02):
text: α · λtext,d + (1 − α) · λtext,C
system: α · 0 + (1 − α) · λsystem,C
Different smoothing methods lead to different retrieval formulae.
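The interpolation on this slide can be sketched directly (a minimal sketch; the mixing weight alpha = 0.9 is illustrative, not a tuned value from the paper):

```python
def jm_smooth(doc_rates, bg_rates, alpha=0.9):
    """Interpolate per-term Poisson rates with a background collection
    model, so terms unseen in the document (like "system" in the slide's
    example) get a nonzero rate:
        lambda_w = alpha * lambda_{w,d} + (1 - alpha) * lambda_{w,C}
    """
    vocab = set(doc_rates) | set(bg_rates)
    return {w: alpha * doc_rates.get(w, 0.0) + (1 - alpha) * bg_rates.get(w, 0.0)
            for w in vocab}
```

Any query term covered by the background model now has a strictly positive rate, so no query is assigned zero likelihood.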

13 Smoothing Poisson LM: Three Methods
Interpolation (JM): λw = α · c(w, d)/|d| + (1 − α) · λw,C
Bayesian smoothing with a Gamma prior: λw = (c(w, d) + μ · λw,C) / (|d| + μ)
Two-stage smoothing: stage 1 applies the Gamma prior to the document model; stage 2 interpolates the result with a background model.
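The Gamma-prior smoothing named on this slide can be sketched as follows (a sketch of the formula family; mu = 1000 is an illustrative prior strength, not a tuned value):

```python
def gamma_smooth(doc_counts, doc_len, bg_rates, mu=1000.0):
    """Bayesian smoothing of Poisson rates under a Gamma prior centered
    on the background rates -- the Poisson analogue of Dirichlet smoothing
    for multinomial models:
        lambda_w = (c(w, d) + mu * lambda_{w,C}) / (|d| + mu)
    Short documents are pulled strongly toward the background; long
    documents keep rates close to their own counts.
    """
    return {w: (doc_counts.get(w, 0) + mu * bg) / (doc_len + mu)
            for w, bg in bg_rates.items()}
```

As mu grows, the smoothed rates converge to the background rates, which matches the intuition that a stronger prior dominates sparse document evidence.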

14 Smoothing Poisson LM (II)
Two-stage smoothing is similar to the multinomial two-stage method (Zhai and Lafferty 02).
Stage 1 yields a smoothed version of the document model (from the document counts and the collection background).
Stage 2 interpolates with a background model of user query preference, e.g., the collection model when no user prior is known.
Verbose queries need to be smoothed more.

15 Analytical Comparison: Basic Distributions

                              multi-Bernoulli       multinomial   Poisson
Event space                   appearance/absence    terms in V    frequencies 0, 1, 2, …
Models absence?               Yes                   No            Yes
Models frequency?             No                    Yes           Yes
Models length (doc/query)?    No                    No            Yes
Sum-to-one constraint?        No                    Yes           No

16 Analytical: Equivalence of Basic Models
With the basic model and MLE, Poisson + Gamma smoothing is equivalent to multinomial + Dirichlet smoothing.
The basic model + JM smoothing behaves similarly, up to a variant component of document length normalization.
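The equivalence claimed on this slide can be checked numerically: when the background Poisson rates are set to the collection probabilities, per-document smoothed rates sum to one, so the Poisson score differs from the multinomial score only by a document-independent constant (a sketch under that assumption; function names are ours):

```python
import math
from collections import Counter

def dirichlet_multinomial(query, doc, p_bg, mu):
    """Multinomial query likelihood with Dirichlet smoothing."""
    tf, dl = Counter(doc), len(doc)
    return sum(math.log((tf[w] + mu * p_bg[w]) / (dl + mu)) for w in query)

def gamma_poisson(query, doc, p_bg, mu):
    """Poisson query likelihood with Gamma-prior smoothing. Background
    rates equal collection probabilities, so sum_w lambda_w = 1 for every
    document and the -sum(lambda)*|q| term is constant across documents."""
    tf, dl, qc = Counter(doc), len(doc), Counter(query)
    qlen = len(query)
    rates = {w: (tf[w] + mu * p_bg[w]) / (dl + mu) for w in p_bg}
    return sum(-lam * qlen + qc[w] * math.log(lam * qlen) - math.lgamma(qc[w] + 1)
               for w, lam in rates.items())
```

On any toy collection the two scores differ by the same constant for every document, so the rankings coincide.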

17 Benefits: Per-term Smoothing
Poisson doesn’t require “sum-to-one” over different terms (different event space) Thus  in JM smoothing and 2-stage smoothing can be made term dependent (per-term) multinomial cannot achieve per-term smoothing Can use EM algorithm to estimate ws. w w
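Per-term smoothing is straightforward to express once the sum-to-one constraint is gone (a sketch with hand-picked coefficients; the slides estimate the coefficients with EM, which is not shown here):

```python
def per_term_jm(doc_rates, bg_rates, alpha_w, default=0.5):
    """Per-term JM smoothing of Poisson rates: each term w gets its own
    interpolation coefficient alpha_w. This is legal for Poisson because
    the rates of different terms are unconstrained; a multinomial would
    violate sum-to-one if each term were smoothed with a different weight."""
    vocab = set(doc_rates) | set(bg_rates)
    return {w: alpha_w.get(w, default) * doc_rates.get(w, 0.0)
               + (1 - alpha_w.get(w, default)) * bg_rates.get(w, 0.0)
            for w in vocab}
```

Terms with reliable document evidence can keep a high alpha while rare or noisy terms lean on the background.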

18 Benefits: Modeling Background
Traditional: as a single model Not matching the reality as a mixture model: increase variance multinomial mixture (e.g., clusters, PLSA, LDA) Inefficient (no close form, iterative estimation) Poisson mixture (e.g., Katz’s K-Mixture, 2-Poisson, Negative Binomial) (Church & Gale 95) Have close forms, efficient computation
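The closed-form point can be illustrated with Katz's K-mixture, whose parameters come directly from collection statistics with no iteration (a sketch; the parameterization beta = (cf - df)/df, alpha = mean/beta follows the standard statement of the K-mixture in the literature, not these slides):

```python
def k_mixture(cf, df, n_docs):
    """Katz's K-mixture: a closed-form Poisson-mixture model of a term's
    per-document frequency, fitted from collection frequency (cf),
    document frequency (df), and corpus size. Returns the pmf
        P(c) = (1 - alpha) * [c == 0] + (alpha / (beta + 1)) * (beta / (beta + 1))**c
    which is overdispersed: its variance exceeds its mean."""
    lam = cf / n_docs            # average count per document
    beta = (cf - df) / df        # extra occurrences per document containing the term
    alpha = lam / beta

    def pmf(c):
        tail = (alpha / (beta + 1)) * (beta / (beta + 1)) ** c
        return tail + ((1 - alpha) if c == 0 else 0.0)

    return pmf
```

No EM loop is needed, which is exactly the efficiency advantage over multinomial mixtures claimed on this slide.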

19 Hypotheses
H1: With basic query generation retrieval models (JM smoothing and Gamma smoothing), Poisson behaves similarly to multinomial.
H2: Per-term smoothing with Poisson may outperform term-independent smoothing, helping more on verbose queries.
H3: A background efficiently modeled as a Poisson mixture may perform better than a single Poisson.

20 Experiment Setup
Data: TREC collections and topics: AP88-89, Trec7, Trec8, Wt2g.
Query types: short keyword (title only); short verbose (one sentence); long verbose (multiple sentences).
Measurement: mean average precision (MAP).
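For reference, the metric used throughout the experiments can be sketched as follows (the standard definition of average precision; MAP is its mean over queries):

```python
def average_precision(ranked, relevant):
    """Average precision for one query: the mean of precision@k taken at
    each rank k where a relevant document appears, divided by the total
    number of relevant documents. MAP averages this value over queries."""
    hits, total = 0, 0.0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0
```

For example, a ranking that places relevant documents at positions 1 and 3 out of two relevant documents scores (1/1 + 2/3) / 2.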

21 H1: Basic models behave similarly
MAP results: JM + Poisson performs comparably to JM + Multinomial, and Gamma/Dirichlet smoothing outperforms JM smoothing for both Poisson and multinomial.

22 H2: Per-term outperforms term-independent smoothing

Data     Q    Gamma/Dirichlet   Per-term 2-stage
AP       SK   0.224             0.226
         SV   0.204             0.217*
         LV   0.291             0.304*
Trec-7   SK   0.186             0.185
         SV   0.182             0.196*
         LV                     0.236*
Trec-8   SK   0.257             0.256
         SV   0.228             0.246*
         LV   0.260             0.274*
Web      SK   0.302             0.307
         SV   0.273             0.292*
         LV   0.283             0.311*

(* marks a statistically significant improvement.)
Per-term > non-per-term.

23 Improvement Comes from Per-term

Data     Q    JM      JM + per-term   2-stage   2-stage + per-term
AP       SK   0.203   0.206           0.223     0.226*
         SV   0.183   0.214*          0.204     0.217*
Trec-7   SK   0.168   0.174           0.186     0.185
         SV   0.176   0.198*          0.194     0.196
Trec-8   SK   0.239   0.227           0.257     0.256
         SV   0.234   0.249*          0.242     0.246*
Web      SK   0.250   0.220*          0.291     0.307*
         SV   0.217   0.261*          0.273     0.292*

JM + per-term > JM; 2-stage + per-term > 2-stage.
Significant improvement on verbose queries.

24 H3: Poisson Mixture Background Improves Performance

Data     Query   Single Poisson   Katz's K-Mixture
AP       SK      0.203            0.204
         SV      0.183            0.188*
Trec-7   SK      0.168            0.169
         SV      0.176            0.178*
Trec-8   SK      0.239
         SV      0.234            0.238*
Web      SK      0.250
         SV      0.217            0.223*

Katz's K-Mixture > single Poisson.

25 Poisson Opens Other Potential Flexibilities
Document length penalization: JM smoothing introduces a variant component of document length normalization, though it requires more expensive computation.
Pseudo-feedback: in two-stage smoothing, feedback documents can be used to estimate the term-dependent coefficients αw.
Both lead to future research directions.

26 Summary
Poisson: another family of retrieval models based on query generation.
Basic models behave similarly to multinomial.
Benefits: per-term smoothing and an efficient mixture background model; many other potential flexibilities.
Future work: explore document length normalization and pseudo-feedback; better estimation of per-term smoothing coefficients.

27 Thanks!

