
1
**Interactively Optimizing Information Retrieval Systems as a Dueling Bandits Problem**

ICML 2009. Yisong Yue, Thorsten Joachims. Cornell University.

2
**Learning to Rank**

- A supervised learning problem: an extension of classification/regression
- Relatively well understood, with high applicability in Information Retrieval
- Requires explicitly labeled data, which is expensive to obtain
- Do expert-judged labels equal search-user utility?
- Doesn't generalize to other search domains

3
**Our Contribution**

- Learn from implicit feedback (users' clicks): reduces labeling cost and is more representative of end users' information needs
- Learn using pairwise comparisons, since humans are more adept at making pairwise judgments; implemented via interleaving [Radlinski et al., 2008]
- An online framework (the Dueling Bandits Problem): we leverage users when exploring new retrieval functions, with an exploration vs. exploitation tradeoff (regret)

4
**Team-Game Interleaving**

Query: (u=thorsten, q=“svm”)

Ranking r1 = f1(u,q):
1. Kernel Machines
2. Support Vector Machine
3. An Introduction to Support Vector Machines
4. Archives of SUPPORT-VECTOR-MACHINES
5. SVM-Light Support Vector Machine light/

Ranking r2 = f2(u,q):
1. Kernel Machines
2. SVM-Light Support Vector Machine light/
3. Support Vector Machine and Kernel ... References
4. Lucent Technologies: SVM demo applet
5. Royal Holloway Support Vector Machine

Interleaving(r1, r2):
1. Kernel Machines (T2)
2. Support Vector Machine (T1)
3. SVM-Light Support Vector Machine light/ (T2)
4. An Introduction to Support Vector Machines (T1)
5. Support Vector Machine and Kernel ... References (T2)
6. Archives of SUPPORT-VECTOR-MACHINES ... (T1)
7. Lucent Technologies: SVM demo applet (T2)

Invariant: for all k, the top k of the interleaved ranking contains, in expectation, the same number of results from each team.

This is the evaluation method. Take the ranking from the learned retrieval function and from the standard retrieval function (e.g., Google), and combine both into one ranking in a fair and unbiased way: at each position in the combined ranking, the number of links from "learned" equals the number from "Google" plus or minus one. So if users have no preference between the ranking functions, they will click on links from either with 50/50 chance. We then evaluate whether users click on links from one ranking function significantly more often. In the example, the lowest click in the combined ranking is at position 7, so due to the fair merging the user has seen the top 4 from both rankings. Tracing the clicked links back, 3 were in the top 4 from "learned" but only 1 in the top 4 from "Google", so "learned" wins on this query. Note that this is a blind test: users do not know which retrieval function a link came from; in particular, the same abstract generator is used for both.

Interpretation: (r2 ≻ r1) ↔ clicks(T2) > clicks(T1). [Radlinski, Kurup, Joachims; CIKM 2008]
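The team-draft interleaving scheme above can be sketched in a few lines of Python. This is a simplified illustration, not the authors' implementation; the document IDs and the tie convention are hypothetical:

```python
import random

def team_draft_interleave(r1, r2, rng=None):
    """Team-draft interleaving of two rankings r1, r2.

    Each round, a coin flip decides which team picks first; each team
    then appends its highest-ranked document not yet in the combined
    list.  In expectation, the top k of the combined ranking contains
    the same number of documents from each team.
    """
    rng = rng or random.Random(0)
    combined, teams, seen = [], [], set()
    while len(seen) < len(set(r1) | set(r2)):
        order = [(r1, "T1"), (r2, "T2")]
        if rng.random() < 0.5:
            order.reverse()
        for ranking, team in order:
            for doc in ranking:
                if doc not in seen:
                    seen.add(doc)
                    combined.append(doc)
                    teams.append(team)
                    break
    return combined, teams

def winner(teams, clicked_positions):
    """r2 beats r1 iff clicks on team T2 exceed clicks on team T1."""
    c1 = sum(1 for i in clicked_positions if teams[i] == "T1")
    c2 = sum(1 for i in clicked_positions if teams[i] == "T2")
    return "r2" if c2 > c1 else "r1" if c1 > c2 else "tie"
```

Because each round adds at most one document per team, any prefix of the combined list is balanced up to one document, which is the invariant stated above.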

5
**Dueling Bandits Problem**

- A continuous space of bandits F; e.g., the parameter space of retrieval functions (i.e., weight vectors)
- Each time step compares two bandits; e.g., an interleaving test on two retrieval functions
- Comparisons are noisy and independent

6
**Dueling Bandits Problem**

- Same setting as before: a continuous bandit space F, with noisy, independent pairwise comparisons
- Choose the pair (ft, ft′) at each time step to minimize regret, i.e., the fraction of users who prefer the best bandit f* over the chosen ones:

  ΔT = Σt=1..T [ (P(f* > ft) − ½) + (P(f* > ft′) − ½) ]

7
**Examples of Incurred Regret**

- Example 1: P(f* > f) = 0.9, P(f* > f′) = 0.8 → incurred regret = 0.4 + 0.3 = 0.7
- Example 2: P(f* > f) = 0.7, P(f* > f′) = 0.6 → incurred regret = 0.2 + 0.1 = 0.3
- Example 3: P(f* > f) = 0.51, P(f* > f′) = 0.55 → incurred regret = 0.01 + 0.05 = 0.06
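The arithmetic in these examples follows one definition of per-step regret (the amount by which each preference probability exceeds the 50/50 indifference point, summed over the two chosen bandits), consistent with all three numbers above. As a one-line sketch:

```python
def step_regret(p_best_vs_f, p_best_vs_fprime):
    """Regret incurred by one comparison: how much the best bandit f*
    is preferred over each of the two chosen bandits, beyond the
    50/50 baseline of indifference."""
    return (p_best_vs_f - 0.5) + (p_best_vs_fprime - 0.5)
```

For instance, `step_regret(0.9, 0.8)` reproduces Example 1's regret of 0.7, and comparing two near-optimal bandits (both probabilities near 0.5) incurs almost no regret.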

8
**Modeling Assumptions**

- Each bandit f ∈ F has an intrinsic value v(f), never observed directly
- Assume v(f) is strictly concave, so the optimum f* is unique
- Comparisons are based on v(f): P(f > f′) = σ( v(f) − v(f′) ), where P is L-Lipschitz
- For example, σ can be the logistic function σ(x) = 1/(1 + exp(−x))

We want assumptions that are minimal and realistic yet yield good algorithms; these modeling assumptions are one attempt to do so.
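Under these assumptions, a single noisy comparison is easy to simulate. A minimal sketch, using the logistic σ as one choice of link function (the value function `v` passed in is any strictly concave function; the names here are illustrative):

```python
import math

def logistic(x):
    # satisfies sigma(0) = 0.5 and the symmetry sigma(-x) = 1 - sigma(x)
    return 1.0 / (1.0 + math.exp(-x))

def prob_prefers(f, f_prime, v):
    # P(f > f') = sigma(v(f) - v(f')): preference depends only on the
    # (unobserved) value difference between the two bandits
    return logistic(v(f) - v(f_prime))

def duel(f, f_prime, v, rng):
    # one noisy, independent comparison: True iff f beats f'
    return rng.random() < prob_prefers(f, f_prime, v)
```

Note that `prob_prefers(f, f, v)` is exactly 0.5: comparing a bandit against itself conveys no information, which is why the regret examples above treat 0.5 as the indifference baseline.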

9
**Probability Functions**

- Same global optimum
- Partially convex
- Makes gradient descent attractive

10
**Dueling Bandit Gradient Descent**

- Maintain a current point ft
- Compare ft with a candidate ft′ chosen close to ft (the distance is set by the explore step size)
- Update toward ft′ if ft′ wins the comparison
- The expectation of the update is close to the gradient of P(ft > f′)
- Builds on Bandit Gradient Descent [Flaxman et al., 2005]
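The loop above can be sketched as follows. This is a minimal illustration, not the authors' code: `compare(f, f_prime)` is a hypothetical interface that returns True iff the candidate wins the duel (in the real system, each duel is an interleaving experiment with users):

```python
import math
import random

def dbgd(compare, dim, T, delta, gamma, seed=0):
    """Dueling Bandit Gradient Descent (sketch).

    delta is the explore step size (how far the candidate sits from
    the current point); gamma is the exploit step size (how far we
    move when the candidate wins).
    """
    rng = random.Random(seed)
    f = [0.0] * dim
    for _ in range(T):
        # sample a uniformly random unit direction u
        u = [rng.gauss(0, 1) for _ in range(dim)]
        n = math.sqrt(sum(x * x for x in u)) or 1.0
        u = [x / n for x in u]
        # candidate f' = f + delta * u, close to the current point
        cand = [fi + delta * ui for fi, ui in zip(f, u)]
        if compare(f, cand):
            # candidate won the duel: take a small exploit step toward it
            f = [fi + gamma * ui for fi, ui in zip(f, u)]
    return f
```

Because the step direction is random and retained only when the candidate wins, the expected update points along the gradient of the comparison probability, which is what the analysis exploits.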

11
**Dueling Bandit Gradient Descent**

δ – explore step size γ – exploit step size Current point Losing candidate Winning candidate Dueling Bandit Gradient Descent

12
**Dueling Bandit Gradient Descent**

δ – explore step size γ – exploit step size Current point Losing candidate Winning candidate Dueling Bandit Gradient Descent

13
**Dueling Bandit Gradient Descent**

δ – explore step size γ – exploit step size Current point Losing candidate Winning candidate Dueling Bandit Gradient Descent

14
**Dueling Bandit Gradient Descent**

δ – explore step size γ – exploit step size Current point Losing candidate Winning candidate Dueling Bandit Gradient Descent

15
**Dueling Bandit Gradient Descent**

δ – explore step size γ – exploit step size Current point Losing candidate Winning candidate Dueling Bandit Gradient Descent

16
**Dueling Bandit Gradient Descent**

δ – explore step size γ – exploit step size Current point Losing candidate Winning candidate Dueling Bandit Gradient Descent

17
**Dueling Bandit Gradient Descent**

δ – explore step size γ – exploit step size Current point Losing candidate Winning candidate Dueling Bandit Gradient Descent

18
**Dueling Bandit Gradient Descent**

δ – explore step size γ – exploit step size Current point Losing candidate Winning candidate Dueling Bandit Gradient Descent

19
**Dueling Bandit Gradient Descent**

δ – explore step size γ – exploit step size Current point Losing candidate Winning candidate Dueling Bandit Gradient Descent

20
**Analysis (Sketch) Dueling Bandit Gradient Descent**

Dueling Bandit Gradient Descent:
- A sequence of partially convex functions ct(f) = P(ft > f)
- Random binary updates (expectation close to the gradient)

Bandit Gradient Descent [Flaxman et al., SODA 2005]:
- A sequence of convex functions
- Randomized updates (expectation close to the gradient)
- Can be extended to our setting (assumes more information)

21
**Analysis (Sketch)**

- Convex functions satisfy: [inequality not shown in source]
- There is both additive and multiplicative error, depending on the exploration step size δ
- Main analytical contribution: bounding the multiplicative error

22
**Regret Bound**

- Regret grows as O(T^{3/4}), so average regret shrinks as O(T^{−1/4})
- In the limit, we do as well as knowing f* in hindsight
- Step sizes: δ = O(T^{−1/4}), γ = O(T^{−1/2})

23
**Practical Considerations**

- The step size parameters must be set, and their optimal values depend on P(f > f′)
- They cannot be set optimally, since we don't know the specifics of P(f > f′)
- The algorithm should therefore be robust to parameter settings
- In the experiments, the parameters are set approximately

24
**50-dimensional parameter space; value function v(x) = −xᵀx**

- Logistic transfer function
- A random point has regret of almost 1
- More experiments in the paper
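A toy version of this synthetic experiment can be run in a few lines; this sketch uses 2 dimensions rather than 50, and the step sizes, horizon, and starting point are hypothetical choices, not the paper's settings:

```python
import math
import random

def run_simulation(dim=2, T=5000, delta=0.8, gamma=0.05, seed=1):
    """DBGD on the synthetic objective v(x) = -x^T x with a logistic
    transfer function, as on this slide (but in fewer dimensions)."""
    rng = random.Random(seed)

    def v(x):                       # value function v(x) = -x^T x
        return -sum(xi * xi for xi in x)

    def sigma(z):                   # logistic transfer function
        return 1.0 / (1.0 + math.exp(-z))

    f = [5.0] * dim                 # start far from the optimum at 0
    for _ in range(T):
        u = [rng.gauss(0, 1) for _ in range(dim)]
        n = math.sqrt(sum(x * x for x in u))
        u = [x / n for x in u]      # random unit direction
        cand = [fi + delta * ui for fi, ui in zip(f, u)]
        # noisy duel: the candidate wins with prob sigma(v(cand) - v(f))
        if rng.random() < sigma(v(cand) - v(f)):
            f = [fi + gamma * ui for fi, ui in zip(f, u)]
    return f, v(f)
```

Starting far from the optimum (value −50 here), the noisy duels steadily pull the point toward the maximum of v at the origin, mirroring the regret curves reported in the paper.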

25
**Web Search Simulation**

- Leverage a web search dataset: 1000 training queries, 367 dimensions
- Simulate "users" issuing queries
- Value function based on a ranking measure (NDCG; see the extra slides), with a logistic function to make comparisons probabilistic
- Use a linear ranking function
- Not intended to compete with supervised learning; a feasibility check for online learning with users
- Supervised labels are difficult to acquire "in the wild"

26
**Chose parameters with the best final performance**

- Curves are essentially identical for the validation and test sets (no over-fitting)
- Sampling multiple queries per update makes no difference

27
**What Next?**

- Better simulation environments with more realistic user modeling assumptions; DBGD is simple and extensible
- Incorporate pairwise document preferences
- Deal with ranking discontinuities
- Test on real search systems, at varying scales of user communities
- Sheds insight on, and guides, future development

28
Extra Slides

29
**Active vs Passive Learning**

Passive data collection (offline):
- Biased by the current retrieval function
- Point-wise evaluation
- Design the retrieval function offline, evaluate online

Active learning (online):
- Automatically propose new rankings to evaluate
- Our approach

30
**Relative vs Absolute Metrics**

Relative metrics (our framework):
- E.g., comparing pairs of results or rankings
- A relatively recent development

Absolute metrics:
- E.g., absolute click-through rate
- More common in the literature
- Suffer from presentation bias
- Less robust to the many different sources of noise

31
**What Results do Users View/Click?**

[Joachims et al., TOIS 2007]

33
**Analysis (Sketch)**

- Convex functions satisfy: [inequality not shown in source]
- We have both multiplicative and additive error, depending on the exploration step size δ
- Main technical contribution: bounding the multiplicative error
- Existing results yield sub-linear bounds on: [expression not shown in source]

34
**Analysis (Sketch)**

- We know how to bound the regret: [bound not shown in source]
- Using the Lipschitz condition and the symmetry of σ, we can show: [inequality not shown in source]

35
**More Simulation Experiments**

- Logistic transfer function σ(x) = 1/(1 + exp(−x))
- 4 choices of value functions
- δ, γ set approximately

37
**NDCG: Normalized Discounted Cumulative Gain**

- Handles multiple levels of relevance
- DCG sums a per-position contribution for each rank i: [formula not shown in source; a common form is (2^rel_i − 1) / log2(i + 1)]
- NDCG is DCG normalized by the DCG of the best possible ranking, so the best ranking has NDCG = 1
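Since the slide's formula did not survive extraction, here is a sketch using one common DCG gain, (2^rel − 1)/log2(i + 1); the paper's exact variant may differ:

```python
import math

def dcg(rels):
    """DCG of a ranking, given its graded relevance labels in rank
    order; position i contributes (2^rel_i - 1) / log2(i + 1)."""
    return sum((2 ** r - 1) / math.log2(i + 1)
               for i, r in enumerate(rels, start=1))

def ndcg(rels):
    """Normalize by the DCG of the best possible reordering, so a
    perfectly sorted ranking scores exactly 1."""
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal > 0 else 0.0
```

For example, a ranking whose relevance labels are already in descending order has NDCG = 1, and any swap that moves a more relevant document below a less relevant one strictly lowers the score.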

38
**Considerations**

- NDCG is discontinuous w.r.t. the function parameters: try larger values of δ, γ, and try sampling multiple queries per update
- Homogeneous user values: not an optimization concern, but a modeling limitation
- Not intended to compete with supervised learning; a sanity check of feasibility for online learning with users
