
1 Learning to Rank: New Techniques and Applications. Martin Szummer, Microsoft Research, Cambridge, UK

2 Why learning to rank?
Current rankers use many features, in complex combinations.
Applications:
– Web search ranking, enterprise search
– Image search
– Ad selection
– Merging multiple results lists
The good: uses training data to find combinations of features that optimize IR metrics.
The bad: requires judged training data, which is expensive, subjective, not provided by end-users, and goes out of date.

3 This talk
Learning to rank with IR metrics: a single, simple yet competition-winning recipe. Works for NDCG, MAP, and Precision with linear or non-linear ranking functions (neural nets, boosted trees, etc.).
Semi-supervised ranking: a new technique that reduces the amount of judged training data required.
Learning to merge: application to merging results lists from multiple query reformulations.
Actually, I apply the same recipe in three different settings!

4 Ranking background: a score function s = f(x; w), where x is a vector of query-document features and w are the parameters to be learned.
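
To make the setup concrete, here is a minimal sketch of such a score function for a linear ranker; the function and variable names are illustrative only, since the slides do not specify the functional form.

```python
# Minimal sketch of a score function for a linear ranker.
# Names are illustrative; the slides do not specify the functional form.
import numpy as np

def score(x, w):
    """Score one query-document feature vector x with parameters w."""
    return float(np.dot(w, x))
```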

5 From Ranking Function to the Ranking
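
A small usage sketch (again with illustrative names and toy numbers): the ranking for a query is obtained by scoring every candidate document and sorting by descending score.

```python
# Sketch: the ranking is obtained by scoring each candidate document for the query
# and sorting by descending score. Numbers below are toy values, purely illustrative.
import numpy as np

def rank_documents(doc_features, w):
    """doc_features: (n_docs, n_features) array for one query. Returns doc indices, best first."""
    scores = doc_features @ w
    return np.argsort(-scores)

X = np.array([[0.2, 1.0],
              [0.9, 0.1],
              [0.5, 0.5]])
w = np.array([1.0, 0.5])
print(rank_documents(X, w))   # -> [1 2 0]: document 1 is ranked first
```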

6 Learning to Rank: given preference pairs (for each query, pairs of documents where one is preferred over the other), determine the parameters w.
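
The slide does not specify the training objective. One standard choice for learning w from preference pairs, consistent with the Bradley-Terry smoothing used in the recipe on the next slide, is a logistic (RankNet-style) pairwise loss, sketched below; it is one common option, not necessarily the author's exact formulation.

```python
# Hedged sketch of pairwise preference learning with a logistic (Bradley-Terry) loss.
import numpy as np

def pair_loss_and_grad(w, x_pref, x_other):
    """Loss and gradient for one preference pair: x_pref should rank above x_other."""
    s_diff = np.dot(w, x_pref) - np.dot(w, x_other)
    p = 1.0 / (1.0 + np.exp(-s_diff))        # Bradley-Terry probability that the preferred item wins
    loss = -np.log(p)
    grad = -(1.0 - p) * (x_pref - x_other)   # gradient of the loss w.r.t. w
    return loss, grad

# A gradient-descent step over a training set of preference pairs sums these gradients
# and updates w accordingly.
```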

7 Learning to Rank for IR metrics
IR metrics such as NDCG, MAP, or Precision depend on:
– the sorted order of items
– the ranks of items: weight the top of the ranking more
Recipe:
1) Express the metric as a sum of pairwise swap deltas
2) Smooth it by multiplying by a Bradley-Terry term
3) Optimize parameters by gradient descent over a judged training set
LambdaRank and LambdaMART [Burges et al.] are instances of this recipe. The latter won the Yahoo! Learning to Rank Challenge (2010).
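
Below is a hedged sketch of the recipe applied to NDCG, in the spirit of the published LambdaRank gradients [Burges et al.]; the constants and exact form are illustrative and may differ from the unpublished material referenced on the next slide.

```python
# Lambda gradients = |NDCG swap delta for pair (i, j)| x Bradley-Terry smoothing term.
import numpy as np

def gain(label):
    return 2.0 ** label - 1.0

def lambda_gradients(scores, labels):
    """Per-document lambdas for one query: d(smoothed NDCG surrogate)/d(score)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    order = np.argsort(-scores)                       # current ranking, best score first
    rank = np.empty_like(order)
    rank[order] = np.arange(len(scores))
    discount = 1.0 / np.log2(rank + 2.0)              # NDCG position discount at current rank
    ideal = np.sort(labels)[::-1]
    idcg = np.sum(gain(ideal) / np.log2(np.arange(len(labels)) + 2.0)) or 1.0

    lambdas = np.zeros_like(scores)
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] <= labels[j]:
                continue                              # keep only pairs where i should beat j
            # Step 1: pairwise swap delta of NDCG
            delta = abs((gain(labels[i]) - gain(labels[j])) * (discount[i] - discount[j])) / idcg
            # Step 2: Bradley-Terry smoothing term
            rho = 1.0 / (1.0 + np.exp(scores[i] - scores[j]))
            lambdas[i] += delta * rho
            lambdas[j] -= delta * rho
    return lambdas                                    # Step 3: plug into gradient descent on w
```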

8 Example: Apply the recipe to the NDCG metric. (Unpublished material; email me if interested.)

9 Gradients: intuition. [Figure not transcribed.]

10 Semi-supervised Ranking [with Emine Yilmaz]: train with judged AND unjudged query-document pairs.

11 Semi-supervised Ranking
Applications:
– (Pseudo) relevance feedback
– Reduce the number of (expensive) human judgments
– Use when judgments are hard to obtain: customers may not want to judge their collections, adaptation to a specific company in enterprise search, ranking for small markets and special-interest domains
Approach:
– preference learning
– end-to-end optimization of ranking metrics (NDCG, MAP)
– multiple and completely unlabeled rank instances
– scalability

12 How to benefit from unlabeled data?
Unlabeled data gives information about the data distribution P(x). We must make assumptions about what the structure of the unlabeled data tells us about the ranking distribution P(R|x).
A common assumption is the cluster assumption: unlabeled data defines the extent of clusters, and labeled data determines the class/function value of each cluster.

13 Semi-supervised
– classification: similar documents ⇒ same class
– regression: similar documents ⇒ similar function value
– ranking: similar documents ⇒ similar preference, i.e. neither is preferred to the other
Differences from classification & regression: preferences provide weaker constraints than function values or classes.
The similarity term is a type of regularizer on the function we are learning. Similarity can be defined based on content, so it does not require judgments.
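
The slides leave the regularizer unspecified; one natural instantiation of "similar documents ⇒ similar preference" penalizes score differences between content-similar document pairs, sketched below for a linear scorer. The names and exact form are assumptions, not the author's precise method.

```python
# Hedged sketch of a content-similarity regularizer for semi-supervised ranking.
import numpy as np

def similarity_regularizer(w, doc_features, sim):
    """doc_features: (n_docs, n_features); sim[i, j]: content similarity of docs i and j
    (computed from content only, so no relevance judgments are required).
    Returns sum_ij sim[i, j] * (f(x_i) - f(x_j))**2 and its gradient w.r.t. w."""
    scores = doc_features @ w
    diff = scores[:, None] - scores[None, :]                      # pairwise score differences
    penalty = np.sum(sim * diff ** 2)
    xdiff = doc_features[:, None, :] - doc_features[None, :, :]   # pairwise feature differences
    grad = np.sum((2.0 * sim * diff)[:, :, None] * xdiff, axis=(0, 1))
    return penalty, grad

# Added (with a weight) to the supervised objective, this penalty pulls the scores of
# similar documents together so that neither is strongly preferred to the other.
```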

14 Quantify Similarity: similar documents ⇒ similar preference, i.e. neither is preferred to the other. (Unpublished material; email me if interested.)

15 Semi-supervised Gradients. [Figure not transcribed.]

16 Experiments
Relevance feedback task:
1) the user issues a query and labels a few of the resulting documents from a traditional ranker (BM25)
2) the system trains a query-specific ranker and re-ranks
Data: TREC collection; 528,000 documents, 150 queries; 1000 total documents per query; 2-15 docs are labeled.
Features:
– ranking features (q, d): 22 features from LETOR
– content features (d1, d2): TF-IDF distance between top 50 words
– neighbors in input space using either of the above
Note: at test time, only ranking features are used; the method allows using features of type (d1, d2) and (q, d1, d2) at training time that other algorithms cannot use.
Ranking function f(): neural network with 3 hidden units; K=5 neighbors.

17 Relevance Feedback Task. [Results chart not transcribed; methods compared: LambdaRank L&U Cont, LambdaRank L&U, LambdaRank L, TSVM L&U, RankBoost L&U, RankingSVM L, RankBoost L.]

18 Novel Queries Task: 90,000 training documents; 3500 preference pairs; 40 million unlabeled pairs.

19 Novel Queries Task. [Results chart not transcribed; methods compared: LambdaRank L&U Cont, LambdaRank L&U, LambdaRank L, plus an upper bound.]

20 Learning to Merge
Task: learn a ranker that merges results from other rankers.
Example application: users do not know the best way to express their web search query, and a single query may not be enough to reach all relevant documents.
Solution: take the user's query (e.g. "wp7"), reformulate it in parallel (e.g. "wp7 phone", "microsoft wp7"), and merge the results.

21 Merging Multiple Queries [with Sheldon, Shokouhi, Craswell]
Traditional approach: alter the query before retrieval. Merging: alter after retrieval.
– Prospecting: see results first, then decide
– Flexibility: any rewrite is allowed, arbitrary features
– Upside potential: better than any individual list
– Increased query load on the engine: use a cache to mitigate it

22 LambdaMerge: learn to merge
A weighted mixture of ranking functions.
Rewrite features:
– Rewrite-difficulty: ListMean, ListStd, Clarity
– Rewrite-drift: IsRewrite, RewriteRank, RewriteScore, Overlap@N
Scoring features: dynamic rank score, BM25, Rank, IsTopN
[Diagram not transcribed: the query "jupiters mass" and its rewrite "mass of jupiter" feed rewrite features and scoring features into the merger.]
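
A hedged sketch of the "weighted mixture of ranking functions" idea: a gating term computed from rewrite features weights the per-list document scores before merging. The actual LambdaMerge architecture and training may differ; the names and shapes below are assumptions.

```python
# Gated mixture: per-rewrite gates (from rewrite features) weight per-list document scores.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def merged_score(doc_scoring_feats, rewrite_feats, w_score, w_gate):
    """doc_scoring_feats[k]: scoring features (e.g. BM25, Rank, IsTopN) of the document in
    result list k, zeros if the document is absent from that list.
    rewrite_feats[k]: features of rewrite k (e.g. ListMean, ListStd, Clarity, Overlap@N).
    Returns sum_k gate_k * score_k for one document."""
    gates = sigmoid(rewrite_feats @ w_gate)    # per-rewrite mixture weights
    per_list = doc_scoring_feats @ w_score     # per-list document scores
    return float(np.sum(gates * per_list))
```

Documents from all result lists would then be sorted by this merged score, and the parameters w_score and w_gate could be trained end-to-end with the same lambda-gradient recipe used earlier in the talk.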

23 [Slide content not transcribed.]

24 [Slide content not transcribed.]

25 [Results chart not transcribed; axes: Reformulation – Original NDCG, and Merged – Original NDCG.]

26 Summary
Learning to Rank:
– An indispensable tool
– Requires judgments, but semi-supervised learning can help; crowd-sourcing is also a possibility; a research frontier is implicit judgments from clicks
– Many applications beyond those shown: merging (multiple local search engines, multiple language engines), ranking recommendations in collaborative filtering, many thresholding tasks (filtering) posed as ranking, ranking ads for relevance, elections
– Use it!

