
1 Diversified Retrieval as Structured Prediction
Redundancy, Diversity, and Interdependent Document Relevance (IDR '09), SIGIR 2009 Workshop
Yisong Yue, Cornell University
Joint work with Thorsten Joachims

2 Need for Diversity (in IR)
Ambiguous Queries
– Different information needs using the same query (e.g., "Jaguar")
– Want at least one relevant result for each information need
Learning Queries
– User may be interested in a specific detail or in the entire breadth of available knowledge [Swaminathan et al., 2008]
– Want results with high information diversity

3 Optimizing Diversity
Of interest in information retrieval
– [Carbonell & Goldstein, 1998; Zhai et al., 2003; Zhang et al., 2005; Chen & Karger, 2006; Zhu et al., 2007; Swaminathan et al., 2008]
Requires modeling inter-document dependencies
– Impossible under standard independence assumptions
– E.g., the probability ranking principle
No consensus on how to measure diversity.

4 This Talk
A method for representing and optimizing information coverage
Discriminative training algorithm
– Based on structural SVMs
Appropriate forms of training data
– Requires sufficient granularity (subtopic labels)
Empirical evaluation

5 Choose top 3 documents
Individual Relevance: D3, D4, D1
Pairwise Similarity (MMR): D3, D1, D2
Best Solution: D3, D1, D5

6 How to Represent Information?
Use a discrete feature space to represent information, decomposed into nuggets
For a query q and its candidate documents:
– All the words (title words, anchor text, etc.)
– Cluster memberships (topic models / dimensionality reduction)
– Taxonomy memberships (ODP)
We will focus on words and title words.

7 Weighted Word Coverage
More distinct words = more information
Weight words by importance
– Should work automatically, without human labels
Goal: select K documents which collectively cover as many distinct (weighted) words as possible
– Budgeted max coverage problem (Khuller et al., 1997)
– Greedy selection yields a (1 - 1/e) approximation bound
– Need to find a good weighting function (the learning problem)
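To make the greedy step concrete, here is a minimal sketch of greedy weighted word coverage, assuming a fixed word-importance weighting is already available; the function name and data layout are illustrative, not the talk's released code.

```python
def greedy_select(docs, word_weight, k):
    """Greedily pick up to k documents maximizing the total weight of
    distinct covered words (budgeted max coverage heuristic).

    docs: list of sets, the words contained in each candidate document
    word_weight: dict mapping word -> importance weight
    """
    selected, covered = [], set()
    for _ in range(min(k, len(docs))):
        best_doc, best_gain = None, None
        for i, words in enumerate(docs):
            if i in selected:
                continue
            # Marginal benefit: weight of words this document adds to the cover
            gain = sum(word_weight.get(w, 0.0) for w in words - covered)
            if best_gain is None or gain > best_gain:
                best_doc, best_gain = i, gain
        selected.append(best_doc)
        covered |= docs[best_doc]
    return selected

# Toy usage with made-up documents and word benefits 1..5
docs = [{"v1", "v2", "v3"}, {"v3", "v4", "v5"}, {"v2", "v3", "v4", "v5"}]
weights = {"v1": 1, "v2": 2, "v3": 3, "v4": 4, "v5": 5}
print(greedy_select(docs, weights, k=2))  # picks the 4-word doc first, then the one adding v1
```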

8 Example
[Tables from the slide: document word counts for D1-D3 over words V1-V5, word benefits V1 = 1 through V5 = 5, and each document's marginal benefit in greedy iteration 1.]

9 Example
[The same tables after the first greedy pick: in iteration 2, marginal benefits are recomputed over only the not-yet-covered words, and the remaining best document is selected.]

10 How to Weight Words?
Not all words are created equal
– e.g., "the"
Importance is conditional on the query
– "computer" is normally fairly informative…
– …but not for the query "ACM"
Learn weights based on the candidate set (for a query)

11 Prior Work
Essential Pages [Swaminathan et al., 2008]
– Uses a fixed function of word benefit
– Depends on word frequency in the candidate set
– A local version of TF-IDF
– Frequent words get low weight (not important for diversity)
– Rare words also get low weight (not representative)

12 Linear Discriminant
x = (x_1, x_2, …, x_n): candidate documents
v: an individual word
φ(v, x): a word-level feature vector describing how v appears across the candidate set
We will use thousands of such features

13 Linear Discriminant
x = (x_1, x_2, …, x_n): candidate documents
y: subset of x (the prediction)
V(y): union of words from the documents in y
Discriminant function: F(x, y) = w^T Σ_{v ∈ V(y)} φ(v, x)
The benefit of covering word v is then w^T φ(v, x)
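A minimal sketch of evaluating this discriminant, assuming simple binary document-frequency features for φ(v, x); the feature set here is illustrative, not the talk's actual one. The per-word benefit w^T φ(v, x) can be fed to the greedy coverage routine sketched earlier as the word weight.

```python
import numpy as np

def phi(word, docs):
    """Illustrative word-level features phi(v, x): binary document-frequency
    thresholds over the candidate set (docs: list of per-document word sets)."""
    df = sum(1 for d in docs if word in d) / len(docs)
    return np.array([1.0, float(df >= 0.1), float(df >= 0.2), float(df >= 0.5)])

def discriminant(w, docs, selected):
    """F(x, y) = w^T * sum of phi(v, x) over the distinct words covered by y."""
    covered = set()
    for i in selected:
        covered |= docs[i]
    return sum(float(w @ phi(v, docs)) for v in covered)
```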

14 Linear Discriminant
Does NOT reward redundancy
– The benefit of each word is only counted once
Greedy selection has a (1 - 1/e)-approximation bound
Linear in a joint feature space
– Suitable for SVM optimization

15 More Sophisticated Discriminant
Documents cover words to different degrees
– A document with 5 copies of "Thorsten" might cover it better than another document with only 2 copies.

16 More Sophisticated Discriminant
Documents cover words to different degrees
– A document with 5 copies of "Thorsten" might cover it better than another document with only 2 copies.
Use multiple word sets V_1(y), V_2(y), …, V_L(y)
– Each V_i(y) contains only the words satisfying a certain importance criterion (e.g., a term-frequency threshold).
Requires a more sophisticated joint feature map.
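A small sketch of how the importance-level word sets V_i(y) could be built, assuming term-frequency thresholds as the importance criterion (the experiments' SVM-div variant uses TF thresholds; the specific cutoffs here are made up).

```python
def importance_word_sets(selected_docs, thresholds=(1, 2, 5)):
    """Return word sets V_1(y), ..., V_L(y), where V_i(y) contains the words
    that some selected document contains at least thresholds[i] times.

    selected_docs: list of word-count dicts for the documents in y
    """
    return [{w for doc in selected_docs for w, c in doc.items() if c >= t}
            for t in thresholds]

# A word with 5 copies is covered at all three levels; one with 2 copies at two.
docs_y = [{"thorsten": 5, "svm": 2}, {"svm": 1}]
print([sorted(v) for v in importance_word_sets(docs_y)])
```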

17 Conventional SVMs
Input: x (high-dimensional point)
Target: y (either +1 or -1)
Prediction: sign(w^T x)
Training: min_{w, ξ ≥ 0} (1/2)||w||² + (C/n) Σ_i ξ_i
subject to: y_i (w^T x_i) ≥ 1 - ξ_i for all i
The sum of slacks Σ_i ξ_i upper bounds the accuracy loss

18 Structural SVM Formulation
Input: x (candidate set of documents)
Target: y (subset of x of size K)
Same objective function: min_{w, ξ ≥ 0} (1/2)||w||² + (C/n) Σ_i ξ_i
One constraint for each incorrect labeling y':
  w^T Ψ(x_i, y_i) ≥ w^T Ψ(x_i, y') + Δ(y_i, y') - ξ_i
– Score of the correct labeling must be at least as large as that of any incorrect y', plus its loss
Requires a new training algorithm [Tsochantaridis et al., 2005]
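A condensed sketch of the cutting-plane training loop such a formulation calls for, following the general recipe of Tsochantaridis et al. (2005); the helper callables (score, loss, find_most_violated, solve_qp) are placeholders that show the control flow, not the released SVM-div implementation.

```python
def train_structural_svm(examples, C, eps, score, loss, find_most_violated, solve_qp):
    """Cutting-plane training: repeatedly add each example's most violated
    constraint and re-solve, until no constraint is violated by more than eps.

    examples: list of (x, y_true) pairs
    score(w, x, y): discriminant value w^T Psi(x, y)
    loss(y_true, y): task loss, e.g. the weighted subtopic loss
    find_most_violated(w, x, y_true): argmax_y of score(w, x, y) + loss(y_true, y)
    solve_qp(working_set, C): returns (w, slacks) optimized over the working set
    """
    working_set = []                      # accumulated constraints: (example index, y)
    w, slacks = solve_qp(working_set, C)  # initial solution (e.g., w = 0)
    while True:
        added = 0
        for i, (x, y_true) in enumerate(examples):
            y_hat = find_most_violated(w, x, y_true)
            # Amount by which the margin constraint for y_hat is violated
            violation = loss(y_true, y_hat) + score(w, x, y_hat) - score(w, x, y_true)
            if violation > slacks[i] + eps:
                working_set.append((i, y_hat))
                added += 1
        if added == 0:                    # every constraint satisfied within eps
            return w
        w, slacks = solve_qp(working_set, C)
```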

19 Weighted Subtopic Loss
Example:
– x_1 covers t_1
– x_2 covers t_1, t_2, t_3
– x_3 covers t_1, t_3
Per-subtopic loss weights (proportional to how many documents cover each subtopic):
  Subtopic   # Docs   Loss
  t_1        3        1/2
  t_2        1        1/6
  t_3        2        1/3
Motivation
– Higher penalty for not covering popular subtopics
– Mitigates the effect of label noise in tail subtopics
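A minimal sketch of the weighted subtopic loss described on this slide, assuming subtopic weights proportional to the number of candidate documents covering each subtopic; names and data layout are illustrative.

```python
def weighted_subtopic_loss(selected, doc_subtopics):
    """Loss = total weight of subtopics NOT covered by the selected documents,
    where a subtopic's weight is (# docs covering it) / (sum over all subtopics).

    selected: iterable of selected document ids
    doc_subtopics: dict mapping each candidate doc id -> set of subtopics it covers
    """
    counts = {}
    for topics in doc_subtopics.values():
        for t in topics:
            counts[t] = counts.get(t, 0) + 1
    total = sum(counts.values())

    covered = set()
    for d in selected:
        covered |= doc_subtopics[d]
    return sum(c for t, c in counts.items() if t not in covered) / total

# Slide's example: x1 covers t1; x2 covers t1, t2, t3; x3 covers t1, t3
docs = {"x1": {"t1"}, "x2": {"t1", "t2", "t3"}, "x3": {"t1", "t3"}}
print(weighted_subtopic_loss(["x1"], docs))  # misses t2 (1/6) and t3 (2/6) -> 0.5
```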

20 Diversity Training Data
TREC 6-8 Interactive Track
– Queries with explicitly labeled subtopics
– E.g., for "Use of robots in the world today": nanorobots, space mission robots, underwater robots
– A manual partitioning of the total information regarding a query

21 Experiments
TREC 6-8 Interactive Track queries, with documents labeled into subtopics
17 queries used
– Considered only relevant documents
– Decouples the relevance problem from the diversity problem
45 docs/query, 20 subtopics/query, 300 words/doc
Trained using leave-one-out cross validation

22 TREC 6-8 Interactive Track Retrieving 5 documents

23 Can expect further benefit from having more training data.

24 Moving Forward
Larger datasets
– Evaluate relevance & diversity jointly
Different types of training data
– Our framework can define loss in different ways
– Can we leverage clickthrough data?
Different feature representations
– Build on top of topic modeling approaches?
– Can we incorporate hierarchical retrieval?

25 References & Code/Data
Predicting Diverse Subsets Using Structural SVMs
– [Yue & Joachims, ICML 2008]
Source code and dataset available online
– http://projects.yisongyue.com/svmdiv/
Work supported by an NSF IIS grant, a Microsoft Fellowship, and a Yahoo! KTC Grant.

26 Extra Slides

27 More Sophisticated Discriminant
A separate word-level feature map φ_i for each importance level i
The joint feature map is the vector composition (concatenation) of these per-level maps
Greedy selection still has a (1 - 1/e)-approximation bound
Still a linear (joint) feature space

28 Maximizing Subtopic Coverage
Goal: select K documents which collectively cover as many subtopics as possible
Finding the optimal selection requires examining n-choose-K subsets
– Essentially a set cover problem
Greedy gives a (1 - 1/e)-approximation bound
– Special case of Max Coverage (Khuller et al., 1997)

29 Learning Set Cover Representations
Given:
– A manual partitioning of a space: subtopics
– A weighting for how items cover the manual partitions: subtopic labels + subtopic loss
– An automatic partitioning of the space: words
Goal:
– A weighting for how items cover the automatic partitions
– such that the (greedy) optimal covering solutions agree

30 Essential Pages

31 [Swaminathan et al., 2008]
x = (x_1, x_2, …, x_n): set of candidate documents for a query
y: a subset of x of size K (our prediction)
Fixed formulas define the benefit of covering word v with document x_i and the importance of covering word v.
Intuition:
– Frequent words cannot encode information diversity.
– Infrequent words do not provide significant information.

32 Structural SVMs

33 Minimizing Hinge Loss
Suppose the model scores some incorrect y' at least as high as y_i: w^T Ψ(x_i, y') ≥ w^T Ψ(x_i, y_i)
Then the margin constraint for y' forces ξ_i ≥ Δ(y_i, y')
So each slack variable upper bounds the loss of the model's prediction [Tsochantaridis et al., 2005]

34 Finding Most Violated Constraint
A constraint is violated when w^T Ψ(x_i, y) + Δ(y_i, y) - ξ_i > w^T Ψ(x_i, y_i)
Finding the most violated constraint reduces to computing argmax_y [ w^T Ψ(x_i, y) + Δ(y_i, y) ]

35 Finding Most Violated Constraint
Encode each subtopic as an additional "word" to be covered.
Use greedy prediction to find an approximate most violated constraint.
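A compact sketch of that idea, reusing the hypothetical helpers from the earlier sketches: each subtopic is encoded as an extra pseudo-word whose weight comes from the subtopic loss, so the same greedy coverage routine approximately maximizes score plus loss. The sign convention here (covering a subtopic cancels its loss contribution, so subtopic pseudo-words carry negative weight) is an assumption, not taken from the released implementation.

```python
def find_most_violated(word_benefit, docs, doc_subtopics, k):
    """Approximate argmax over size-k subsets y of score(y) + subtopic_loss(y),
    via greedy coverage with subtopics folded in as extra pseudo-words.

    word_benefit: dict word -> learned benefit w^T phi(v, x)
    docs: list of sets, the words in each candidate document
    doc_subtopics: list of sets, the subtopics each candidate document covers
    """
    # Subtopic weights, as in the weighted subtopic loss
    counts = {}
    for topics in doc_subtopics:
        for t in topics:
            counts[t] = counts.get(t, 0) + 1
    total = sum(counts.values())

    # The loss equals a constant minus the weight of covered subtopics, so covering
    # subtopic t changes the objective by -counts[t]/total: a negative pseudo-word.
    weights = dict(word_benefit)
    for t, c in counts.items():
        weights[("subtopic", t)] = -c / total

    augmented_docs = [words | {("subtopic", t) for t in topics}
                      for words, topics in zip(docs, doc_subtopics)]
    # greedy_select: the greedy coverage routine from the earlier sketch
    return greedy_select(augmented_docs, weights, k)
```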

36 Illustrative Example
Original SVM problem: exponentially many constraints, most of which are dominated by a small set of important constraints.
Structural SVM approach: repeatedly find the next most violated constraint until the working set of constraints is a good approximation.

40 Approximate Constraint Generation
Theoretical guarantees no longer hold
– Might not find an epsilon-close approximation to the feasible region boundary
Performs well in practice

41 Approximate constraint generation seems to perform well.

42 Experiments

43 TREC Experiments
12/4/1 train/validation/test split (by query)
– Approx. 500 documents in the training set
Permuted until all 17 queries were tested once
Set K = 5 (some queries have very few documents)
SVM-div: uses term-frequency thresholds to define importance levels
SVM-div2: additionally uses TF-IDF thresholds

44 TREC Results
Method             Loss
Random             0.469
Okapi              0.472
Unweighted Model   0.471
Essential Pages    0.434
SVM-div            0.349
SVM-div2

Methods                     W / T / L
SVM-div vs. Ess. Pages      14 / 0 / 3 **
SVM-div2 vs. Ess. Pages     13 / 0 / 4
SVM-div vs. SVM-div2         9 / 6 / 2

45 Synthetic Dataset
The TREC dataset is very small
A synthetic dataset lets us vary the retrieval size K
100 queries
100 docs/query, 25 subtopics/query, 300 words/doc
15/10/75 train/valid/test split

46 Consistently outperforms Essential Pages

