
1
Information Retrieval as Structured Prediction. University of Massachusetts Amherst Machine Learning Seminar, April 29th, 2009. Yisong Yue, Cornell University. Joint work with: Thorsten Joachims, Filip Radlinski, and Thomas Finley

2
Supervised Learning: find a function from input space X to output space Y such that the prediction error is low. Example (x, y) pairs from the slide: a news snippet x ("Microsoft announced today that they acquired Apple for the amount equal to the gross national product of Switzerland. Microsoft officials stated that they first wanted to buy Switzerland, but eventually were turned off by the mountains and the snowy winters…") with label y = 1; a DNA sequence x (GATACAACCTAT…); and an input x with real-valued output y = 7.3

3
Examples of Complex Output Spaces: Natural Language Parsing – given a sequence of words x, predict the parse tree y. Dependencies arise from structural constraints, since y has to be a tree. Example: x = "The dog chased the cat"; y = the parse tree with S → NP VP, NP → Det N, and VP → V NP

4
Examples of Complex Output Spaces: Part-of-Speech Tagging – given a sequence of words x, predict the sequence of tags y. Dependencies arise from tag-tag transitions in a Markov model. Similarly for other sequence labeling problems, e.g., RNA Intron/Exon tagging. Example: x = "The rain wet the cat"; y = Det N V Det N

5
Examples of Complex Output Spaces: Information Retrieval – given a query x, predict a ranking y. Dependencies between results (e.g., avoid redundant hits); loss function over rankings (e.g., Average Precision). Example: x = "SVM"; y = 1. Kernel-Machines, 2. SVM-Light, 3. Learning with Kernels, 4. SV Meppen Fan Club, 5. Service Master & Co., 6. School of Volunteer Management, 7. SV Mattersburg Online, …

6
Goals of this Talk Learning to Rank –Optimizing Average Precision –Diversified Retrieval Structured Prediction –Complex Retrieval Goals –Structural SVMs (Supervised Learning)

7
Learning to Rank: at the intersection of ML and IR. First, generate input features.

8
Learning to Rank: design a retrieval function f(x) = w^T x (a weighted average of features). For each query q: score all documents, s_{q,d} = w^T x_{q,d}, then sort by s_{q,d} to produce a ranking. Which weight vector w is best?
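A minimal sketch of this score-and-sort step (the feature values and weights below are illustrative, not from the talk):

```python
# Score each candidate document by s = w^T x, then sort descending.
def score(w, x):
    # inner product w^T x
    return sum(wi * xi for wi, xi in zip(w, x))

def rank_documents(w, docs):
    """Return document indices sorted by descending score."""
    return sorted(range(len(docs)), key=lambda i: score(w, docs[i]), reverse=True)

w = [0.7, 0.3]                      # hypothetical learned weights
docs = [[0.2, 0.9],                 # one feature vector per candidate doc
        [0.8, 0.1],
        [0.5, 0.5]]
ranking = rank_documents(w, docs)   # best-scoring document first
```

The learning problem is then choosing w so that rankings produced this way score well under the IR measure of interest.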

9
Conventional SVMs. Input: x (a high-dimensional point). Target: y (either +1 or -1). Prediction: sign(w^T x). Training: minimize the regularized objective subject to the margin constraints. The sum of slacks is a smooth upper bound on the 0/1 loss.
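The upper-bound relationship can be checked numerically; this sketch uses made-up weights and examples:

```python
# Hinge loss max(0, 1 - y * w^T x) upper-bounds the 0/1 loss of the
# linear classifier sign(w^T x).
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def hinge_loss(w, x, y):
    return max(0.0, 1.0 - y * dot(w, x))

def zero_one_loss(w, x, y):
    return 0.0 if y * dot(w, x) > 0 else 1.0

w = [1.0, -1.0]                     # hypothetical weight vector
examples = [([2.0, 0.0], +1),       # correct with margin: both losses 0
            ([0.5, 0.0], +1),       # correct but inside margin: hinge > 0
            ([0.0, 1.0], +1)]       # misclassified: hinge >= 1
```

Whenever the 0/1 loss is 1, the hinge loss is at least 1, which is what makes the sum of slacks a valid surrogate objective.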

10
Optimizing Pairwise Agreements. (Figure: an example ranking with 2 pairwise disagreements.)

11
Pairwise Preferences SVM: minimize the objective such that each relevant document outscores each non-relevant one by a margin (Large-Margin Ordinal Regression [Herbrich et al., 1999]). Training time can be reduced [Joachims, 2005]. Pairs can be reweighted to more closely model IR goals [Cao et al., 2006].
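A small illustration of the pairwise view (document IDs are hypothetical): a pair disagrees when a non-relevant document is ranked above a relevant one.

```python
# Count pairwise disagreements between a predicted ranking and the
# relevance labels: every (non-relevant above relevant) pair counts once.
def pairwise_disagreements(ranking, relevant):
    count = 0
    for i, higher in enumerate(ranking):
        for lower in ranking[i + 1:]:
            # 'higher' sits above 'lower' in the predicted ranking
            if higher not in relevant and lower in relevant:
                count += 1
    return count

ranking = ["d2", "d1", "d4", "d3"]   # predicted order, best first
relevant = {"d1", "d3"}
n = pairwise_disagreements(ranking, relevant)
```

The pairwise SVM drives this count toward zero, but as the next slides argue, minimizing pair errors is not the same as optimizing rank-based measures like Average Precision.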

12
Mean Average Precision. Consider the rank position of each relevant doc: K_1, K_2, …, K_R. Compute Precision@K for each of K_1, K_2, …, K_R. Average Precision = the average of those P@K values (the slide works through an example ranking). MAP is Average Precision averaged across multiple queries/rankings.
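The AvgPrec computation above can be sketched directly (document IDs are illustrative):

```python
# Average Precision: Precision@K at each relevant document's rank K,
# averaged over all R relevant documents.
def average_precision(ranking, relevant):
    hits, precisions = 0, []
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)   # Precision@K at this hit
    return sum(precisions) / len(relevant) if relevant else 0.0

ranking = ["d1", "d2", "d3", "d4"]
relevant = {"d1", "d3"}                   # hits at ranks 1 and 3
ap = average_precision(ranking, relevant)  # (1/1 + 2/3) / 2
```

MAP is simply this quantity averaged over queries.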

13
Optimization Challenges Rank-based measures are multivariate –Cannot decompose (additively) into document pairs –Need to exploit other structure Defined over rankings –Rankings do not vary smoothly –Discontinuous w.r.t model parameters –Need some kind of relaxation/approximation

14
[Yue & Burges, 2007]

15
Structured Prediction. Let x be a structured input (candidate documents) and y a structured output (ranking). Use a joint feature map Ψ(x, y) to encode the compatibility of predicting y for a given x; it captures all the structure of the prediction problem. Consider linear models: after learning w, we make predictions via y* = argmax_y w^T Ψ(x, y).

16
Linear Discriminant for Ranking. Let x = (x_1, …, x_n) denote the candidate documents (features), and let y_{jk} ∈ {+1, -1} encode pairwise rank orders. The feature map is a linear combination of documents. Prediction is made by sorting on the document scores w^T x_i.

17
Structural SVM. Let x denote a structured input (candidate documents) and y a structured output (ranking). Standard objective function: minimize (1/2)||w||^2 + (C/n) Σ_i ξ_i. Constraints are defined for each incorrect labeling y' over the set of documents x. [Tsochantaridis et al., 2005]

18
Minimizing Hinge Loss. Suppose that for an incorrect y' the model scores it at least as high as the true y: w^T Ψ(x, y') ≥ w^T Ψ(x, y). Then the margin constraint forces ξ ≥ Δ(y'), so the slack upper-bounds the loss of the predicted ranking. [Tsochantaridis et al., 2005]

19
Structural SVM for MAP. Minimize (1/2)||w||^2 + C Σ ξ subject to one margin constraint per incorrect ranking (where y_{jk} ∈ {-1, +1}). The sum of slacks is a smooth upper bound on MAP loss. [Yue et al., SIGIR 2007]

20
Too Many Constraints! For Average Precision, the true labeling is a ranking where the relevant documents are all ranked in front. An incorrect labeling would be any other ranking; the slide's example ranking has an Average Precision of about 0.8, with loss Δ(y) ≈ 0.2. There is an intractable number of rankings, and thus an intractable number of constraints!

21
Cutting Plane Method Original SVM Problem Exponential constraints Most are dominated by a small set of important constraints Structural SVM Approach Repeatedly finds the next most violated constraint… …until set of constraints is a good approximation. [Tsochantaridis et al., 2005]


25
Finding Most Violated Constraint. A constraint is violated when the margin requirement fails, i.e., when Δ(y') + w^T Ψ(x, y') > w^T Ψ(x, y) + ξ. Finding the most violated constraint therefore reduces to argmax_y' [Δ(y') + w^T Ψ(x, y')], which is highly related to inference/prediction: argmax_y' w^T Ψ(x, y').

26
Finding Most Violated Constraint: Observations. MAP is invariant to the order of documents within a relevance class – swapping two relevant (or two non-relevant) documents does not change MAP. The joint SVM score is optimized by sorting by document score, w^T x_j. The problem thus reduces to finding an interleaving between two sorted lists of documents. [Yue et al., SIGIR 2007]

27
Finding Most Violated Constraint. Start with the perfect ranking. Consider swapping adjacent relevant/non-relevant documents, and find the best feasible ranking of the non-relevant document. Repeat for the next non-relevant document; we never want to swap past the previous non-relevant document. Repeat until all non-relevant documents have been considered. [Yue et al., SIGIR 2007]

33
Structural SVM for MAP Treats rankings as structured objects Optimizes hinge-loss relaxation of MAP –Provably minimizes the empirical risk –Performance improvement over conventional SVMs Relies on subroutine to find most violated constraint –Computationally compatible with linear discriminant

34
Structural SVM Recipe: a joint feature map Ψ(x, y), an inference method, a loss function Δ, and loss-augmented inference (finding the most violated constraint).

35
Need for Diversity (in IR). Ambiguous queries – users with different information needs issue the same textual query (e.g., "Jaguar" or "Apple"); we want at least one relevant result for each information need. Learning queries – a user may be interested in a specific detail or in the entire breadth of knowledge available [Swaminathan et al., 2008]; we want results with high information diversity.

36
Example Choose K documents with maximal information coverage. For K = 3, optimal set is {D1, D2, D10}

37
Diversity via Set Cover Documents cover information –Assume information is partitioned into discrete units. Documents overlap in the information covered. Selecting K documents with maximal coverage is a set cover problem –NP-complete in general –Greedy has (1-1/e) approximation [Khuller et al., 1997]
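The greedy selection can be sketched as follows (the documents and integer "information unit" IDs are made up; greedy carries the (1-1/e) guarantee cited above):

```python
# Greedy K-document max-coverage: repeatedly pick the document that
# covers the most not-yet-covered information units.
def greedy_max_coverage(doc_words, k):
    covered, chosen = set(), []
    for _ in range(k):
        candidates = [d for d in doc_words if d not in chosen]
        best = max(candidates, key=lambda d: len(doc_words[d] - covered))
        chosen.append(best)
        covered |= doc_words[best]
    return chosen, covered

docs = {"D1": {1, 2, 3}, "D2": {3, 4}, "D3": {5, 6}}
chosen, covered = greedy_max_coverage(docs, k=2)
```

Note how the marginal gain is computed against the already-covered set, so overlapping documents are not double-counted.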

38
Diversity via Subtopics. Current datasets use manually determined subtopic labels. E.g., for "Use of robots in the world today": nanorobots, space mission robots, underwater robots. This is a manual partitioning of the total information; it is relatively reliable, and we use it as training data.

39
Weighted Word Coverage. Use words to represent units of information: more distinct words = more information. Weight word importance; this does not depend on human labeling. Goal: select K documents which collectively cover as many distinct (weighted) words as possible. Greedy selection yields a (1-1/e) bound, but we need to find a good weighting function (the learning problem).

40
Example. Word benefits: V1 = 1, V2 = 2, V3 = 3, V4 = 4, V5 = 5. Document word coverage: D1 covers {V3, V4, V5}; D2 covers {V2, V4, V5}; D3 covers {V1, V2, V3, V4}. Marginal benefit, Iter 1: D1 = 12, D2 = 11, D3 = 10 → best is D1. Iter 2: not yet computed.

41
Example (continued). Word benefits: V1 = 1, V2 = 2, V3 = 3, V4 = 4, V5 = 5. Document word coverage: D1 covers {V3, V4, V5}; D2 covers {V2, V4, V5}; D3 covers {V1, V2, V3, V4}. Marginal benefit, Iter 1: D1 = 12, D2 = 11, D3 = 10 → best is D1. Iter 2: D2 = 2, D3 = 3 → best is D3.
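The worked example can be re-run in code; the per-document word sets below are reconstructed from the slide's numbers (benefits V1..V5 = 1..5; Iter 1 gains of 12/11/10; Iter 2 gains of 2/3):

```python
# Weighted greedy coverage on the slide's example.
benefit = {"V1": 1, "V2": 2, "V3": 3, "V4": 4, "V5": 5}
docs = {"D1": {"V3", "V4", "V5"},
        "D2": {"V2", "V4", "V5"},
        "D3": {"V1", "V2", "V3", "V4"}}

def marginal(words, covered):
    # total benefit of the words this document would newly cover
    return sum(benefit[v] for v in words - covered)

covered, chosen = set(), []
for _ in range(2):
    gains = {d: marginal(ws, covered) for d, ws in docs.items() if d not in chosen}
    best = max(gains, key=gains.get)
    chosen.append(best)
    covered |= docs[best]
```

After picking D1, the shared words V3, V4, V5 contribute nothing further, which is why D3's residual benefit (3) now beats D2's (2).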

42
Related Work Comparison. Essential Pages [Swaminathan et al., 2008] uses a fixed function of word benefit that depends on word frequency in the candidate set. Our goals: automatically learn a word benefit function – learn to predict set covers, use training data, and minimize subtopic loss. To our knowledge there is no prior ML approach.

43
Linear Discriminant. x = (x_1, x_2, …, x_n) – candidate documents; y – a subset of x; V(y) – the union of words from the documents in y. The discriminant function scores y via per-word features φ(v, x) – frequency features (e.g., ≥ 10%, ≥ 20%, etc.). The benefit of covering word v is then w^T φ(v, x). [Yue & Joachims, ICML 2008]

44
Linear Discriminant Does NOT reward redundancy –Benefit of each word only counted once Greedy has (1-1/e)-approximation bound Linear (joint feature space) –Allows for SVM optimization (Used more sophisticated discriminant in experiments.) [Yue & Joachims, ICML 2008]

45
Weighted Subtopic Loss. Example: x_1 covers t1; x_2 covers t1, t2, t3; x_3 covers t1, t3. Motivation: higher penalty for not covering popular subtopics. Subtopic t1: 3 docs, loss 1/2; t2: 1 doc, loss 1/6; t3: 2 docs, loss 1/3. [Yue & Joachims, ICML 2008]
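A sketch of this popularity weighting (each subtopic's weight = number of covering documents divided by the total number of document-subtopic pairs, matching the numbers above):

```python
# Popularity-weighted subtopic loss: missing a popular subtopic costs more.
from fractions import Fraction

covers = {"x1": {"t1"}, "x2": {"t1", "t2", "t3"}, "x3": {"t1", "t3"}}

# Count how many documents cover each subtopic.
counts = {}
for topics in covers.values():
    for t in topics:
        counts[t] = counts.get(t, 0) + 1

total = sum(counts.values())                          # 6 doc-subtopic pairs here
weight = {t: Fraction(c, total) for t, c in counts.items()}

def subtopic_loss(selected):
    """Sum of weights of subtopics NOT covered by the selected documents."""
    covered = set().union(*(covers[d] for d in selected))
    return sum((wt for t, wt in weight.items() if t not in covered), Fraction(0))
```

Selecting x_2 alone covers every subtopic (loss 0), while selecting only x_1 misses t2 and t3 (loss 1/6 + 1/3 = 1/2).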

46
Finding Most Violated Constraint Encode each subtopic as an additional word to be covered. Use greedy algorithm: (1-1/e)-approximation

47
TREC Experiments. TREC 6-8 Interactive Track queries, with documents labeled into subtopics. 17 queries used; we considered only relevant docs, which decouples the relevance problem from the diversity problem. 45 docs/query, 20 subtopics/query, 300 words/doc. Trained using leave-one-out cross validation.

48
(Figure: TREC 6-8 Interactive Track results when retrieving 5 documents.)

49
Can expect further benefit from having more training data.

50
Learning Set Cover Representations. Given: a manual partitioning of a space (subtopics); a weighting for how items cover the manual partitions (subtopic labels + subtopic loss); and an automatic partitioning of the space (words). Goal: learn a weighting for how items cover the automatic partitions such that the (greedy) optimal covering solutions agree.

51
Summary. Information Retrieval as Structured Prediction – can be used to reason about complex retrieval goals, e.g., mean average precision and diversity. Software & papers available at www.yisongyue.com. Beyond supervised learning: training data is expensive to gather; current work is on interactive learning (with users). Thanks to Andrew McCallum, David Mimno & Marc Cartwright. Work supported by NSF IIS-0713483, a Microsoft Graduate Fellowship, and a Yahoo! KTC Grant.

52
Extra Slides

53
Structural SVM Training. STEP 1: solve the SVM objective function using only the current working set of constraints. STEP 2: using the model learned in STEP 1, find the most violated constraint from the global set of constraints. STEP 3: if the constraint returned in STEP 2 is violated by more than epsilon, add it to the working set. Repeat STEPs 1-3 until no additional constraints are added, then return the most recent model trained in STEP 1. The loop is guaranteed to terminate within O(1/epsilon^2) iterations. [Tsochantaridis et al., 2005] *This is known as a cutting plane method.
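The STEP 1-3 loop can be sketched schematically; the QP solver, the separation oracle, and the violation measure below are hypothetical callables standing in for the real components, so this only shows the control flow:

```python
# Schematic cutting-plane training loop (STEPs 1-3 above).
def cutting_plane(solve_qp, most_violated, violation, epsilon, max_iters=1000):
    working_set = []
    model = solve_qp(working_set)              # STEP 1 on the empty working set
    for _ in range(max_iters):
        constraint = most_violated(model)      # STEP 2: separation oracle
        if violation(model, constraint) <= epsilon:
            return model                       # STEP 3: converged
        working_set.append(constraint)         # STEP 3: grow the working set
        model = solve_qp(working_set)          # STEP 1 again
    return model
```

Each added constraint cuts away part of the feasible region, which is why a small working set can approximate the exponential global constraint set.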

54
Structural SVM Training. Suppose we only solve the SVM objective over a small subset of constraints (the working set); some constraints from the global set might then be violated. When finding a violated constraint, only y is free and everything else is fixed – the y's and x's are fixed by the training data, and w and the slack variables are fixed from solving the SVM objective. The degree of violation of a constraint is measured by Δ(y') + w^T Ψ(x, y') − w^T Ψ(x, y) − ξ.

55
SVM-map

56
Experiments Used TREC 9 & 10 Web Track corpus. Features of document/query pairs computed from outputs of existing retrieval functions. (Indri Retrieval Functions & TREC Submissions) Goal is to learn a recombination of outputs which improves mean average precision.

59
SVM-div

60
(Screenshot: results from 11/27/2007 for the query "Jaguar" – the top and bottom of the first page, with result #18.)

61
More Sophisticated Discriminant. Documents cover words to different degrees – a document with 5 copies of "Microsoft" might cover it better than another document with only 2 copies. Use multiple word sets V_1(y), V_2(y), …, V_L(y), where each V_i(y) contains only the words satisfying certain importance criteria. [Yue & Joachims, ICML 2008]

62
More Sophisticated Discriminant. Use a separate φ_i for each importance level i; the joint feature map is the vector composition of all the φ_i. Greedy still has a (1-1/e)-approximation bound, and the feature space is still linear. [Yue & Joachims, ICML 2008]

63
Essential Pages [Swaminathan et al., 2008]. x = (x_1, x_2, …, x_n) – the set of candidate documents for a query; y – a subset of x of size K (our prediction). The method defines the benefit of covering word v with document x_i, and the importance of covering word v. Intuition: frequent words cannot encode information diversity, and infrequent words do not provide significant information.

64
Approximate Constraint Generation Theoretical guarantees no longer hold. –Might not find an epsilon-close approximation to the feasible region boundary. Performs well in practice.

65
TREC Experiments. 12/4/1 train/valid/test split (approx. 500 documents in the training set), permuted until all 17 queries were tested once. Set K = 5 (some queries have very few documents). SVM-div uses term-frequency thresholds to define importance levels; SVM-div2 additionally uses TF-IDF thresholds.

66
TREC Interactive Track Results. Method / Loss: Random 0.469; Okapi 0.472; Unweighted Model 0.471; Essential Pages 0.434; SVM-div 0.349; SVM-div2 0.382. Win / Tie / Loss: SVM-div vs Essential Pages 14 / 0 / 3 **; SVM-div2 vs Essential Pages 13 / 0 / 4; SVM-div vs SVM-div2 9 / 6 / 2.

67
Approximate constraint generation seems to perform well.

68
Synthetic Dataset. The TREC dataset is very small, so we built a synthetic dataset that lets us vary the retrieval size K. 100 queries; 100 docs/query, 25 subtopics/query, 300 words/doc; 15/10/75 train/valid/test split.

69
Consistently outperforms Essential Pages
