
1 Optimizing Recommender Systems as a Submodular Bandits Problem Yisong Yue Carnegie Mellon University Joint work with Carlos Guestrin & Sue Ann Hong

2

3 Optimizing Recommender Systems
Must predict what the user finds interesting.
Receives feedback (training data) "on the fly".
10K articles per day. Must personalize!

4 Day 1: recommend Sports. User: Like!
Topic      # Likes   # Displayed   Average
Sports     1         1             1
Politics   0         0             N/A
Economy    0         0             N/A
Celebrity  0         0             N/A

5 Day 2: recommend Politics. User: Boo!
Topic      # Likes   # Displayed   Average
Sports     1         1             1
Politics   0         1             0
Economy    0         0             N/A
Celebrity  0         0             N/A

6 Day 3: recommend Economy. User: Like!
Topic      # Likes   # Displayed   Average
Sports     1         1             1
Politics   0         1             0
Economy    1         1             1
Celebrity  0         0             N/A

7 Day 4: recommend Sports. User: Boo!
Topic      # Likes   # Displayed   Average
Sports     1         2             0.5
Politics   0         1             0
Economy    1         1             1
Celebrity  0         0             N/A

8 Day 5: recommend Politics. User: Boo!
Topic      # Likes   # Displayed   Average
Sports     1         2             0.5
Politics   0         2             0
Economy    1         1             1
Celebrity  0         0             N/A

9 Goal: Maximize total user utility (total # likes). How to behave optimally at each round?
Topic      # Likes   # Displayed   Average
Sports     1         2             0.5
Politics   0         2             0
Economy    1         1             1
Celebrity  0         0             N/A
Exploit: Economy.  Explore: Celebrity.  Best: Sports.

10 Often want to recommend multiple articles at a time!

11 Making Diversified Recommendations
Redundant results:
 – "Israel implements unilateral Gaza cease-fire :: WRAL.com"
 – "Israel unilaterally halts fire, rockets persist"
 – "Gaza truce, Israeli pullout begin | Latest News"
 – "Hamas announces ceasefire after Israel declares truce - …"
 – "Hamas fighters seek to restore order in Gaza Strip - World - Wire …"
Diversified results:
 – "Israel implements unilateral Gaza cease-fire :: WRAL.com"
 – "Obama vows to fight for middle class"
 – "Citigroup plans to cut 4500 jobs"
 – "Google Android market tops 10 billion downloads"
 – "UC astronomers discover two largest black holes ever found"

12 Outline
Optimally diversified recommendations
 – Minimize redundancy
 – Maximize information coverage
Exploration / exploitation tradeoff
 – Don't know user preferences a priori
 – Only receive feedback for recommendations
Incorporating prior knowledge
 – Reduce the cost of exploration

13 Choose the top 3 documents. Individual relevance: D3, D4, D1. Greedy coverage solution: D3, D1, D5.

14 Choose the top 3 documents. Individual relevance: D3, D4, D1. Greedy coverage solution: D3, D1, D5.

15 Choose the top 3 documents. Individual relevance: D3, D4, D1. Greedy coverage solution: D3, D1, D5.

16 Choose the top 3 documents. Individual relevance: D3, D4, D1. Greedy coverage solution: D3, D1, D5.

17 Choose the top 3 documents. Individual relevance: D3, D4, D1. Greedy coverage solution: D3, D1, D5.

18 Choose the top 3 documents. Individual relevance: D3, D4, D1. Greedy coverage solution: D3, D1, D5.

19 Choose the top 3 documents. Individual relevance: D3, D4, D1. Greedy coverage solution: D3, D1, D5. This diminishing returns property is called submodularity.

20 Submodular Coverage Model
Set of articles: A. User preferences: w. F_c(A) = how well A "covers" topic c.
Overall utility: F(A | w) = Σ_c w_c F_c(A).
Diminishing returns (submodularity): an article's marginal coverage shrinks as A grows.
Goal: choose the set A maximizing F(A | w). NP-hard in general; greedy gives a (1 - 1/e) guarantee [Nemhauser et al., 1978].

21 Submodular Coverage Model
a1 = "China's Economy Is on the Mend, but Concerns Remain"
a2 = "US economy poised to pick up, Geithner says"
a3 = "Who's Going To The Super Bowl?"
w = [0.6, 0.4], A = Ø

22 Submodular Coverage Model
a1 = "China's Economy Is on the Mend, but Concerns Remain"
a2 = "US economy poised to pick up, Geithner says"
a3 = "Who's Going To The Super Bowl?"
w = [0.6, 0.4], A = Ø
Incremental coverage:
        F1(A+a)-F1(A)   F2(A+a)-F2(A)
a1      0.9             0
a2      0.8             0
a3      0               0.5
Incremental benefit:
          a1      a2      a3      Best
Iter 1    0.54    0.48    0.2     a1
Iter 2

23 Submodular Coverage Model
a1 = "China's Economy Is on the Mend, but Concerns Remain"
a2 = "US economy poised to pick up, Geithner says"
a3 = "Who's Going To The Super Bowl?"
w = [0.6, 0.4], A = {a1}
Incremental coverage (in parentheses: the values before a1 was added):
        F1(A+a)-F1(A)   F2(A+a)-F2(A)
a1      --              --
a2      0.1 (0.8)       0 (0)
a3      0 (0)           0.5 (0.5)
Incremental benefit:
          a1      a2      a3      Best
Iter 1    0.54    0.48    0.2     a1
Iter 2    --      0.06    0.2     a3
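As a worked check of the iteration-2 numbers (using the incremental coverages above and w = [0.6, 0.4]):

\[
0.6 \times 0.1 + 0.4 \times 0 = 0.06, \qquad 0.6 \times 0 + 0.4 \times 0.5 = 0.2,
\]

so the greedy second pick is a3 (the Super Bowl article), which is why the final recommendation is diversified rather than two redundant economy articles.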

24 Example: Probabilistic Coverage
Each article a has an independent probability Pr(i | a) of covering topic i.
Define F_i(A) = 1 - Pr(topic i not covered by A). Then F_i(A) = 1 - Π_{a ∈ A} (1 - Pr(i | a)), a "noisy or". [El-Arini et al., KDD 2009]
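A minimal Python sketch of this coverage model together with the greedy selection from slide 20. The probability table, function names, and toy numbers are illustrative assumptions; only the noisy-or coverage and the weighted objective F(A | w) = Σ_i w_i F_i(A) come from the slides.

```python
import numpy as np

def topic_coverage(A, probs):
    """Noisy-or coverage: F_i(A) = 1 - prod_{a in A} (1 - Pr(i|a))."""
    if not A:
        return np.zeros(probs.shape[1])
    return 1.0 - np.prod(1.0 - probs[A], axis=0)

def utility(A, probs, w):
    """Weighted submodular utility: F(A|w) = sum_i w_i * F_i(A)."""
    return w @ topic_coverage(A, probs)

def greedy_select(probs, w, L):
    """Greedily pick L articles; (1 - 1/e) approximation for monotone submodular F."""
    A = []
    candidates = set(range(len(probs)))
    for _ in range(L):
        best = max(candidates,
                   key=lambda a: utility(A + [a], probs, w) - utility(A, probs, w))
        A.append(best)
        candidates.remove(best)
    return A

# Toy example (hypothetical numbers): 4 articles, 2 topics, user prefers topic 0.
probs = np.array([[0.9, 0.0],   # article 0 covers topic 0 well
                  [0.8, 0.0],   # article 1 is redundant with article 0
                  [0.0, 0.5],   # article 2 covers topic 1
                  [0.3, 0.3]])
w = np.array([0.6, 0.4])
print(greedy_select(probs, w, L=2))   # -> [0, 2]: a diverse pair, not two redundant economy articles
```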

25 Outline
Optimally diversified recommendations
 – Minimize redundancy
 – Maximize information coverage
Exploration / exploitation tradeoff
 – Don't know user preferences a priori
 – Only receive feedback for recommendations
Incorporating prior knowledge
 – Reduce the cost of exploration

26 Outline
Optimally diversified recommendations
 – Minimize redundancy
 – Maximize information coverage
Exploration / exploitation tradeoff
 – Don't know user preferences a priori
 – Only receive feedback for recommendations
Incorporating prior knowledge
 – Reduce the cost of exploration
Summary so far: the submodular information coverage model's diminishing returns property encourages diversity; it is parameterized, so it can be fit to a user's preferences; and it is locally linear (will be useful later).

27 Learning Submodular Coverage Models
Submodular functions are well studied [Nemhauser et al., 1978].
Parameterized submodular functions have been applied to recommender systems [Leskovec et al., 2007; Swaminathan et al., 2009; El-Arini et al., 2009].
Learning submodular functions: [Yue & Joachims, ICML 2008]; [Yue & Guestrin, NIPS 2011] learns interactively from user feedback.
We want to personalize!

28 Interactive Personalization: recommend Sports, Politics, World. [Table of Average Likes and # Shown per topic] Total likes: 0

29 Interactive Personalization: feedback on Sports, Politics, World arrives and the averages update. [Table of Average Likes and # Shown per topic] Total likes: 1

30 Interactive Personalization: next round, recommend Politics, Economy, Sports. [Table of Average Likes and # Shown per topic] Total likes: 1

31 Interactive Personalization: feedback on Politics, Economy, Sports arrives and the averages update. [Table of Average Likes and # Shown per topic] Total likes: 3

32 Interactive Personalization: next round, recommend Politics, Economy, Politics. [Table of Average Likes and # Shown per topic] Total likes: 3

33 Interactive Personalization: feedback arrives and the averages update. [Table of Average Likes and # Shown per topic] Total likes: 4 …

34 Exploration vs Exploitation
[Table of Average Likes and # Shown per topic] Total likes so far: 4
Goal: Maximize total user utility.
Exploit: Politics, Economy, Politics.  Explore: Celebrity, World, Celebrity.  Best: World, Politics, World.

35 Linear Submodular Bandits Problem [Yue & Guestrin, NIPS 2011]
For time t = 1…T:
 – Algorithm recommends articles A_t
 – User scans the articles in order and rates them (e.g., like or dislike each article, the reward); the expected reward is F(A_t | w*) (discussed later)
 – Algorithm incorporates the feedback
Regret: measured against the best possible recommendations.

36 Linear Submodular Bandits Problem [Yue & Guestrin, NIPS 2011]
Regret R(T): the gap, accumulated over the time horizon T, between the utility of the best possible recommendations and the recommendations actually made; it is the opportunity cost of not knowing the user's preferences.
An algorithm is "no-regret" if R(T)/T → 0; its efficiency is measured by the convergence rate.
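Spelled out (a hedged reconstruction; the slide's formula itself was an image), the regret over the time horizon T compares the algorithm's recommendations A_t with the best possible recommendations A_t*:

\[
R(T) \;=\; \sum_{t=1}^{T} \Big( F(A_t^{*} \mid w^{*}) \;-\; F(A_t \mid w^{*}) \Big),
\]

and "no-regret" means R(T)/T → 0. (Since even the offline problem is NP-hard, the benchmark in the NIPS 2011 paper is the (1 - 1/e)-approximate optimum achievable by greedy, rather than the exact optimum.)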

37 Local Linearity
Let Δ(a | A) denote the incremental coverage of the current article a given the previously recommended articles A, i.e., the vector of per-topic gains [F_1(A + a) - F_1(A), …, F_d(A + a) - F_d(A)].
The incremental utility is then linear in the user's preferences w: F(A + a | w) - F(A | w) = wᵀ Δ(a | A).

38 User Model ("Conditional Submodular Independence") [Yue & Guestrin, NIPS 2011]
The user scans the recommended articles (e.g., Politics, Economy, Celebrity) in order and generates feedback y for each one.
The feedback obeys E[y] = w*ᵀ Δ(a | A) and is independent of the feedback on the other articles, given the articles shown before a.

39 Estimating User Preferences [Yue & Guestrin, NIPS 2011]
Stack the observed feedback into a vector Y, and the submodular coverage features of the recommendations (the incremental coverage vectors Δ(a | A)) into a matrix Δ.
Since the user model gives E[Y] = Δ w, linear regression on (Δ, Y) estimates w!
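A minimal sketch of that regression step, assuming ridge (regularized) least squares; the regularizer lam and the array names are assumptions, not from the slides:

```python
import numpy as np

def estimate_preferences(Delta, Y, lam=1.0):
    """Estimate w from stacked incremental-coverage features Delta (n x d)
    and observed feedback Y (length n) via regularized least squares:
        w_hat = argmin_w ||Delta @ w - Y||^2 + lam * ||w||^2
              = (Delta.T @ Delta + lam * I)^{-1} Delta.T @ Y
    """
    d = Delta.shape[1]
    return np.linalg.solve(Delta.T @ Delta + lam * np.eye(d), Delta.T @ Y)
```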

40 Balancing Exploration vs Exploitation
For each slot, recommend the article with the largest score: the estimated gain by topic plus the uncertainty of that estimate (estimated gain + uncertainty).
Example below: select the article on the economy.

41 Balancing Exploration vs Exploitation [Yue & Guestrin, NIPS 2011]
Recommend Sports, Politics, World. The uncertainty C(a | A) for a topic shrinks roughly as 1/√(# times the topic has been shown).

42 Balancing Exploration vs Exploitation [Yue & Guestrin, NIPS 2011]
Recommend Sports, Politics, World. C(a | A) shrinks roughly as 1/√(# times the topic has been shown).

43 Balancing Exploration vs Exploitation [Yue & Guestrin, NIPS 2011]
Next round: recommend Politics, Economy, Celebrity. C(a | A) shrinks roughly as 1/√(# times the topic has been shown).

44 Balancing Exploration vs Exploitation [Yue & Guestrin, NIPS 2011]
Next round: recommend Politics, Economy, Celebrity. C(a | A) shrinks roughly as 1/√(# times the topic has been shown).

45 Balancing Exploration vs Exploitation [Yue & Guestrin, NIPS 2011]
Over further rounds (Politics, Economy, Celebrity, Sports, …), C(a | A) keeps shrinking roughly as 1/√(# times the topic has been shown).

46 LSBGreedy
Loop (for each day t):
 – Compute the least squares estimate w_t from the feedback so far
 – Start with A_t empty
 – For i = 1, …, L: recommend the article a that maximizes estimated gain + uncertainty, i.e., w_tᵀ Δ(a | A_t) + C_t(a | A_t)
 – Receive feedback y_t,1, …, y_t,L
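Combining the regression estimate with the greedy gain-plus-uncertainty rule, here is a compact Python sketch of the LSBGreedy loop. It reuses the noisy-or coverage from slide 24; the exploration scale alpha, the confidence-width formula, and the get_feedback hook are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def incremental_coverage(a, A, probs):
    """Delta(a | A): per-topic gain in noisy-or coverage from adding article a to A."""
    cov_A  = 1.0 - np.prod(1.0 - probs[A], axis=0) if A else np.zeros(probs.shape[1])
    cov_Aa = 1.0 - np.prod(1.0 - probs[A + [a]], axis=0)
    return cov_Aa - cov_A

def lsb_greedy(probs, get_feedback, T, L, lam=1.0, alpha=1.0):
    """One-user simulation of the LSBGreedy loop (a sketch, not the exact paper version)."""
    d = probs.shape[1]
    M = lam * np.eye(d)      # regularized Gram matrix of observed Delta features
    b = np.zeros(d)          # feature-weighted sum of observed feedback
    w_hat = np.zeros(d)
    for t in range(T):
        w_hat = np.linalg.solve(M, b)   # least squares estimate of the user's preferences
        M_inv = np.linalg.inv(M)
        A = []
        for _ in range(L):              # greedy, slot by slot
            best, best_score = None, -np.inf
            for a in range(len(probs)):
                if a in A:
                    continue
                delta = incremental_coverage(a, A, probs)
                gain  = float(w_hat @ delta)                           # estimated gain
                width = alpha * float(np.sqrt(delta @ M_inv @ delta))  # uncertainty C(a|A)
                if gain + width > best_score:
                    best, best_score = a, gain + width
            A.append(best)
        prefix = []                     # receive feedback y_{t,1..L}, update regression stats
        for a in A:
            delta = incremental_coverage(a, prefix, probs)
            y = get_feedback(t, a, prefix)   # user's like/dislike for this slot
            M += np.outer(delta, delta)
            b += y * delta
            prefix.append(a)
    return w_hat
```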

47 Regret Guarantee [Yue & Guestrin, NIPS 2011]
 – Extends linear bandit analyses [Dani et al., 2008; Li et al., 2010; Abbasi-Yadkori et al., 2011] to the submodular setting
 – Leverages conditional submodular independence
No-regret algorithm! (regret sublinear in T)
 – Regret convergence rate: d/√(LT), where d = # topics, L = # articles per day, T = time horizon
 – Optimally balances the explore/exploit trade-off

48 Other Approaches
Multiplicative Weighting [El-Arini et al., 2009]
 – Does not employ exploration
 – No guarantees (one can show it does not converge)
Ranked bandits [Radlinski et al., 2008; Streeter & Golovin, 2008]
 – A reduction that treats each slot as a separate bandit
 – Uses LinUCB [Dani et al., 2008; Li et al., 2010; Abbasi-Yadkori et al., 2011]
 – Regret guarantee O(dL√T) (a factor √L worse)
ε-Greedy
 – Explores with probability ε
 – Regret guarantee O(d(LT)^(2/3)) (a factor (LT)^(1/3) worse)

49 Simulations. [Plot comparing LSBGreedy, RankLinUCB, ε-Greedy, and MW]

50 Simulations. [Plot comparing LSBGreedy, RankLinUCB, ε-Greedy, and MW]

51 User Study
Tens of thousands of real news articles; T = 10 days; L = 10 articles per day; d = 18 topics.
Users rate articles; we count the # of likes.
Users are heterogeneous, so personalization is required.

52 User Study
~27 users in the study. Counting wins / ties / losses per user, the submodular bandits approach wins against:
 – Static Weights
 – Multiplicative Updates (no exploration)
 – RankLinUCB (doesn't directly model diversity)

53 Comparing Learned Weights vs MW
MW overfits to the "world" topic; for topics with few liked articles, MW did not learn anything.

54 Outline
Optimally diversified recommendations
 – Minimize redundancy
 – Maximize information coverage
Exploration / exploitation tradeoff
 – Don't know user preferences a priori
 – Only receive feedback for recommendations
Incorporating prior knowledge
 – Reduce the cost of exploration
Summary so far: the submodular information coverage model (diminishing returns encourages diversity; parameterized, fit to a user's preferences; locally linear), and the linear submodular bandits problem (characterizes exploration/exploitation; provably near-optimal algorithm; user study).

55 The Price of Exploration
This is the price of exploration:
 – The region of uncertainty depends linearly on |w*| (the user's preferences)
 – The region of uncertainty depends linearly on d (# topics)
 – Unavoidable without further assumptions
(Notation: d = # topics, T = time horizon, L = # articles per day, w* = the user's preferences.)

56 Observation: systems do not serve users in a vacuum; we have the preferences of previous users.
Goal: learn faster for new users? [Yue, Hong & Guestrin, ICML 2012]

57 Have: preferences of previous users. Goal: learn faster for new users? [Yue, Hong & Guestrin, ICML 2012]
Assumption: users are similar to "stereotypes", and stereotypes are described by a low-dimensional subspace.
Use an SVD-style approach to estimate the stereotype subspace, e.g., [Argyriou et al., 2007].

58 Coarse-to-Fine Bandit Learning [Yue, Hong & Guestrin, ICML 2012]
Suppose w* lies mostly in a subspace of dimension k << d ("stereotypical preferences").
Two-tiered exploration: first in the subspace, then in the full space.
Compared to the original guarantee: 16x lower regret!

59 Coarse-to-Fine Hierarchical Exploration
Loop (for each day t):
 – Least squares estimate in the subspace
 – Least squares estimate in the full space, regularized toward the subspace estimate
 – Start with A_t empty
 – For i = 1, …, L: recommend the article a that maximizes estimated gain + uncertainty in the subspace + uncertainty in the full space
 – Receive feedback y_t,1, …, y_t,L
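A rough sketch of the two-tiered estimation step, assuming the stereotype subspace is given as a basis matrix B (d x k) estimated from previous users; the regularization weights and names are assumptions, not the paper's:

```python
import numpy as np

def coarse_to_fine_estimate(Delta, Y, B, lam_sub=1.0, lam_full=1.0):
    """Two-tiered least squares:
    1) estimate coefficients v in the k-dim stereotype subspace spanned by B's columns,
    2) estimate w in the full space, regularized toward the subspace estimate B @ v
       (i.e., shrink toward the stereotype instead of toward zero).
    """
    # Tier 1: regression on features projected into the subspace (n x k)
    Z = Delta @ B
    v = np.linalg.solve(Z.T @ Z + lam_sub * np.eye(B.shape[1]), Z.T @ Y)
    w_sub = B @ v
    # Tier 2: minimize ||Delta @ w - Y||^2 + lam_full * ||w - w_sub||^2
    d = Delta.shape[1]
    return np.linalg.solve(Delta.T @ Delta + lam_full * np.eye(d),
                           Delta.T @ Y + lam_full * w_sub)
```

In the bandit loop, the selection rule then adds two confidence widths, one measured in the subspace and one in the full space, matching the slide.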

60 Simulation Comparison
 – Naïve (LSBGreedy from before)
 – Reshaped prior in the full space (LSBGreedy with a prior estimated from pre-collected user profiles)
 – Subspace (LSBGreedy on the subspace; often what people resort to in practice)
 – Coarse-to-Fine approach (ours; combines the full-space and subspace approaches)

61 [Plot: results for "atypical users", comparing the naïve baselines, the reshaped prior on the full space, the subspace approach, and the Coarse-to-Fine approach] [Yue, Hong & Guestrin, ICML 2012]

62 User Study
Similar setup as before: T = 10 days; L = 10 articles per day; d = 100 topics; k = 5 (a 5-dimensional subspace, estimated from real users).
Tens of thousands of real news articles. Users rate articles; we count the # of likes.

63 User Study
~27 users in the study. Counting wins / ties / losses per user, the Coarse-to-Fine approach wins against:
 – Naïve LSBGreedy
 – LSBGreedy with an optimal prior in the full space

64 Learning Submodular Functions
Parameterized submodular functions
 – Diminishing returns
 – Flexible
Linear submodular bandit problem
 – Balances explore/exploit
 – Provably optimal algorithms
 – Faster convergence using prior knowledge
Practical bandit learning approaches.
Research supported by ONR (PECASE) N000141010672 and ONR YIP N00014-08-1-0752.

