
1 Evaluation Methods and Challenges

2 Evaluation Methods
Ideal method
–Experimental design: run side-by-side experiments on a small fraction of randomly selected traffic, with the new method as treatment and the status quo as control
–Limitation: often expensive, and difficult to test a large number of methods
Problem: how do we evaluate methods offline on logged data?
–Goal: maximize clicks/revenue of the entire system, not prediction accuracy. The cost of predictive inaccuracy varies across instances: e.g., a 100% error on a low-CTR article may not matter much if it always co-occurs with a high-CTR article that is predicted accurately.

3 Usual Metrics
Predictive accuracy
–Root Mean Squared Error (RMSE)
–Mean Absolute Error (MAE)
–Area under the ROC curve (AUC)
Rank-based measures of retrieval accuracy for the top-k
–Recall in test data: what fraction of the items the user actually liked in the test data were among the top-k recommended by the algorithm (fraction of hits, e.g. Karypis, CIKM 2001)
One flaw in several papers
–Training/test splits are not based on time, so information leaks from the future into training. Even the Netflix data has this issue to some extent: the split is by time per user, not per event, so information may still leak in models based on user–user similarity.
(A small sketch of these metrics on a time-based split follows.)
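To make these metrics concrete, here is a minimal sketch (an illustration, not code from the tutorial) of RMSE, MAE, and top-k recall computed on a time-based split so that no future information leaks into training; the pandas DataFrame layout and the "timestamp" column name are assumptions.

```python
# Illustrative sketch (not from the tutorial): RMSE, MAE, recall@k,
# and a time-based train/test split. Column names are assumed.
import numpy as np
import pandas as pd

def time_split(events: pd.DataFrame, frac_train: float = 0.8):
    """Split by global timestamp: everything before the cutoff is training."""
    cutoff = events["timestamp"].quantile(frac_train)
    return events[events["timestamp"] <= cutoff], events[events["timestamp"] > cutoff]

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def recall_at_k(liked_items: set, ranked_items: list, k: int = 10):
    """Fraction of items the user actually liked that appear in the top-k."""
    if not liked_items:
        return float("nan")
    hits = len(liked_items & set(ranked_items[:k]))
    return hits / len(liked_items)
```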

4 Metrics continued
Recall per event, based on the Replay-Match method
–Fraction of clicked events where the top recommended item matches the item that was actually clicked
–This works well if the logged data were collected from a randomized serving scheme; with biased data it can be a problem: we end up favoring algorithms whose recommendations resemble those of the current serving scheme, and novel recommendations get no reward

5 Details on the Replay-Match method (Li, Langford, et al.)
–x: feature vector for a visit
–r = [r_1, r_2, …, r_K]: reward vector for the K items in inventory
–h(x): recommendation algorithm to be evaluated
–Goal: estimate the expected reward of h(x)
–s(x): recommendation scheme that generated the logged data
–x_1, …, x_T: visits in the logged data
–r_{t,i}: reward for visit t, where i = s(x_t)

6 Replay-Match continued
Estimator: average the observed reward over the matched visits (those where h(x_t) = s(x_t)), weighting each matched visit by an importance weight w_t:
R̂(h) = Σ_t w_t · r_{t, s(x_t)} · 1[h(x_t) = s(x_t)] / Σ_t w_t · 1[h(x_t) = s(x_t)]
–If the importance weights correctly reflect the serving scheme's action probabilities (w_t ∝ 1 / P(s(x_t) = h(x_t) | x_t)), it can be shown that the estimator is unbiased
–E.g., if s(x) is a random serving scheme, the importance weights are uniform over the item set
–If s(x) is not random, the importance weights have to be estimated through a model
(A small code sketch of this estimator follows.)
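A minimal sketch of the replay idea, assuming the log is a list of (context, served item, reward, propensity) tuples; this is an illustration rather than the authors' implementation, and the propensity field is whatever probability the logging scheme assigned to the served item (1/K for uniform random serving).

```python
# Illustrative replay-style estimator (not the authors' code).
# logs: list of (context, served_item, reward, propensity) tuples, where
# propensity = P(s(x_t) = served_item | x_t) under the logging scheme.
def replay_estimate(logs, h):
    num, den = 0.0, 0.0
    for context, served_item, reward, propensity in logs:
        if h(context) == served_item:          # event "matches" the evaluated policy
            w = 1.0 / propensity               # importance weight
            num += w * reward
            den += w
    return num / den if den > 0 else float("nan")

# Example: evaluate a trivial policy on uniformly-random logs over K = 3 items.
logs = [({"hour": 9}, 0, 1.0, 1/3), ({"hour": 10}, 2, 0.0, 1/3),
        ({"hour": 11}, 0, 0.0, 1/3)]
print(replay_estimate(logs, lambda ctx: 0))    # estimated CTR of "always serve item 0"
```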

7 Back to Multi-Objective Optimization
[Slide diagram: a recommender places editorial content on the front page; clicks on front-page links influence the downstream supply distribution, which feeds the ad server (premium guaranteed display vs. the cheaper spot market) and downstream engagement (time spent).]

8 Serving Content on the Front Page: Click Shaping
What do we want to optimize?
–Current: maximize clicks (i.e., maximize downstream supply from the front page)
But consider the following
–Article 1: CTR = 5%, utility per click = 5
–Article 2: CTR = 4.9%, utility per click = 10
–By promoting article 2, we lose 1 click per 1,000 visits but gain 240 utils (490 vs. 250; see the sketch below)
If we do this over a large number of visits, we lose some clicks but obtain significant gains in utility
–E.g., lose 5% relative CTR, gain 40% in utility (revenue, engagement, etc.)
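The trade-off is just an expected-value calculation; the following toy snippet works it out using the slide's numbers.

```python
# Expected clicks and utility per 1,000 visits for the two articles above.
visits = 1000
articles = {"article_1": (0.050, 5), "article_2": (0.049, 10)}  # (CTR, utility per click)
for name, (ctr, util_per_click) in articles.items():
    clicks = visits * ctr
    utility = clicks * util_per_click
    print(f"{name}: {clicks:.0f} clicks, {utility:.0f} utils")
# article_1: 50 clicks, 250 utils
# article_2: 49 clicks, 490 utils  -> 1 fewer click, 240 more utils
```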

9 Why call it Click Shaping?
[Slide figure: the supply distribution before vs. after click shaping.]
Shaping can happen with respect to any downstream metric (e.g., engagement).

10 Multi-Objective Optimization
–n articles A_1, …, A_n, grouped into K properties (news, finance, omg, …)
–m user segments S_1, …, S_m
–p_ij: CTR of user segment i on article j (known)
–d_ij: time duration of segment i on article j (known)
–x_ij: decision variables (the serving plan)

11 Multi-Objective Program
Two formulations
–Scalarization: maximize a weighted combination of the objectives (e.g., α·clicks + (1−α)·time spent)
–Goal programming: maximize one objective (e.g., time spent) subject to the other (clicks) staying within a tolerance of its maximum (a sketch follows this slide)
–Simplex constraints on x_ij are always applied (for each segment i, Σ_j x_ij = 1 and x_ij ≥ 0)
–All constraints are linear
–Every 10 minutes, solve for x and use it as the serving scheme for the next 10 minutes
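As a rough illustration of the goal-programming formulation (my sketch, not the tutorial's actual program), the following uses scipy.optimize.linprog to maximize total time spent subject to total clicks staying within a fraction eps of the click-maximizing plan, under the simplex constraints on x_ij; the toy p_ij, d_ij, visit counts, and eps are all made up.

```python
# Sketch of the goal-programming variant of click shaping (illustrative only).
import numpy as np
from scipy.optimize import linprog

m, n = 3, 4                                # user segments, articles (toy sizes)
rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.05, (m, n))        # p_ij: CTR of segment i on article j
d = rng.uniform(10, 60, (m, n))            # d_ij: expected time spent of i on j
visits = np.array([1000.0, 500.0, 2000.0]) # visits per segment
eps = 0.05                                 # tolerated relative click loss

# Flatten x_ij into a vector and weight objectives by segment traffic.
P = (visits[:, None] * p).ravel()          # expected clicks coefficient per x_ij
D = (visits[:, None] * d).ravel()          # expected time-spent coefficient per x_ij

# Simplex (equality) constraints: each segment's serving probabilities sum to 1.
A_eq = np.zeros((m, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
b_eq = np.ones(m)
bounds = [(0, 1)] * (m * n)

# Step 1: the click-maximizing plan (linprog minimizes, so negate).
best_clicks = -linprog(-P, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun

# Step 2: maximize time spent subject to clicks >= (1 - eps) * best_clicks.
res = linprog(-D, A_ub=[-P], b_ub=[-(1 - eps) * best_clicks],
              A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x = res.x.reshape(m, n)                    # serving plan for the next interval
```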

12 Pareto-optimal solution (more in KDD 2011)
[Slide figure illustrating the Pareto-optimal trade-off between the objectives.]

13 Summary
–Modern recommendation systems on the web crucially depend on extracting intelligence from massive amounts of data collected on a routine basis
–Lots of data and processing power are not enough: the number of things we need to learn grows with data size
–Extracting grouping structure at coarser resolutions based on similarity (correlations) is important; ML has a big role to play here
–Continuous, adaptive, and judicious experimentation is crucial to maximize performance; again, ML has a big role to play
–Multi-objective optimization is often required, and the objectives are application dependent; ML has to work in close collaboration with engineering, product, and business execs

14 Challenges

15 Recall: Some examples
Simple version
–I have an important module on my page; its content inventory is obtained from a third-party source and further refined through editorial oversight. Can I algorithmically recommend content on this module? I want to drive up total CTR on this module.
More advanced
–I got an X% lift in CTR, but I have additional information on other downstream utilities (e.g., dwell time). Can I increase downstream utility without losing too many clicks?
Highly advanced
–There are multiple modules running on my website. How do I take a holistic approach and perform a simultaneous optimization?

16 For the simple version
–Multi-position optimization: explore/exploit, optimal subset selection
–Explore/exploit strategies for a large content pool and high-dimensional problems: some work on hierarchical bandits, but more needs to be done (a toy sketch appears after this list)
–Constructing user profiles from multiple sources with less than full coverage: a couple of papers at KDD 2011
–Content understanding
–Metrics to measure user engagement (other than CTR)
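For concreteness, here is a toy explore/exploit sketch (my illustration, not a method from the tutorial): Thompson sampling with a Beta posterior over each item's CTR, one standard way to balance exploration and exploitation on a single content module.

```python
# Toy Thompson-sampling sketch for a single content module (illustrative only).
# Each item keeps a Beta(alpha, beta) posterior over its CTR; at each visit we
# sample a CTR from every posterior and serve the item with the largest sample.
import random

class ThompsonModule:
    def __init__(self, item_ids):
        self.posteriors = {i: [1.0, 1.0] for i in item_ids}  # [alpha, beta]

    def choose(self):
        samples = {i: random.betavariate(a, b) for i, (a, b) in self.posteriors.items()}
        return max(samples, key=samples.get)

    def update(self, item_id, clicked):
        self.posteriors[item_id][0 if clicked else 1] += 1.0

# Usage: serve, observe a click / no-click, update, repeat.
module = ThompsonModule(["a", "b", "c"])
true_ctr = {"a": 0.05, "b": 0.03, "c": 0.04}                  # simulated ground truth
for _ in range(1000):
    item = module.choose()
    module.update(item, clicked=random.random() < true_ctr[item])
```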

17 Other problems
–Whole-page optimization: incorporating correlations
–Incentivizing user-generated content
–Incorporating social information for better recommendation
–Multi-context learning

