
1 Active Learning for Preferences Elicitation in Recommender Systems
Lior Rokach, Department of Information Systems Engineering

2 Agenda
– Background: Active Learning and Recommender Systems
– Proposed Method
– Experimental Procedure
– Results and Discussion
– Conclusions and Future Work

3 Recommender Systems
Users are overloaded by the options they must consider before making a decision, such as which item to purchase.
Recommender systems aim at supporting the user in the processes of:
– decision-making
– planning
– purchasing

4 Collaborative Filtering
Maintain users' ratings of a variety of items. For a given user:
– Find other, similar users whose ratings strongly correlate with the current user.
– Recommend items rated highly by these similar users but not yet rated by the current user.
Almost all existing commercial recommenders use this approach (e.g., Amazon).

5 Collaborative Filtering

6 Active Learning
Traditional supervised learning algorithms passively accept labeled training data and induce a prediction model.
Active learning:
– is useful when unlabeled data is abundant and labels are expensive
– allows intelligent selection of which examples to label

7 Using Active Learning for Initial Preferences Elicitation
The cold-start problem: very little is known about the preferences of new users.
Possible modus operandi: ask the user to rate a few items. But which items? This is where active learning enters.

8 Using Active Learning for Initial Preferences Elicitation (illustration)

9 Active Learning in Critique-Based Recommender Systems (Ricci and Nguyen, 2007)
A series of interaction cycles narrows down the user's query until the desired item is obtained.

10 Integrating Active Learning in CF-based Recommender Systems
Active Learning (AL) in RecSys accurately predicts items of interest to the user while gaining information about her preferences.
In this lecture we focus on Uncertainty Active Collaborative Filtering:
– Boutilier et al. (2003)
– Jin and Si (2004)
– …

11 Our Contributions
Incorporate the exploration/exploitation trade-off: the value of information of new ratings vs. the alternative utility lost by not presenting the best items.
Work local, think global: use the ratings of one user to contribute to other users.
Introduce cost-sensitivity (not covered in this talk).

12 Agenda
– Background: Active Learning and Recommender Systems
– Proposed Method
– Experimental Procedure
– Results and Discussion
– Conclusions and Future Work

13 Preliminaries
Binary rating: Like/Dislike
– Explicit, or
– Implicit, based on user actions such as buying an item or clicking it for additional details.
Provide a recommendation of the top n items.
– The user selects from this list.
– We ignore the fact that she can browse the remaining items.
We use a simple item-to-item NN CF with a similarity measure such as Pearson correlation.

14 Item-to-Item NN CF with Binary Ratings
The predicted score r*_ui can be used to approximate the probability that user u would like item i. Some implementations use the Jaccard coefficient as the similarity measure instead; a sketch of the prediction step follows.
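The slide's prediction formula is not preserved in this transcript; the following is a minimal sketch of standard item-to-item prediction with binary ratings, assuming a similarity-weighted average (the function and variable names are illustrative, not from the original):

```python
import numpy as np

def predict_score(ratings, item_sim, u, i):
    """Estimate r*_ui as the similarity-weighted average of user u's
    binary ratings over the items u has rated (standard item-based CF;
    a sketch, not necessarily the exact formula used on the slide)."""
    rated = ~np.isnan(ratings[u])     # items user u has rated
    sims = item_sim[i, rated]         # similarity of item i to those items
    if sims.sum() == 0:
        return 0.0
    # With ratings in {0, 1}, the weighted average lies in [0, 1] and
    # approximates the probability that u would like item i.
    return float(sims @ ratings[u, rated] / sims.sum())
```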

15 Probabilistic Approach
Employ the rule of succession (Laplace correction) to find the conditional probability of a positive response the next time item i is presented to user u, with itemSim normalized appropriately; see the reconstruction below.
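The slide's equations are images that did not survive the transcript. In the unweighted case, with s_ui positive responses out of n_ui presentations, the rule-of-succession estimate is

$$P(u,i) = \frac{s_{ui} + 1}{n_{ui} + 2},$$

which exactly reproduces the P(u,i) values in the tables on slides 19–25. A similarity-weighted generalization that replaces the counts with itemSim-weighted sums is plausible here, but its exact normalization is an assumption, since the original formula is not recoverable.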

16 Mathematical interlude: Rule of succession
The proportion p of positive responses is treated as a uniformly distributed random variable.
– Some claim that p is not random but uncertain; we assign a probability distribution to p to express uncertainty, not to attribute randomness.
Let X_{i,j} be an indicator variable that equals 1 when user i responds positively to item j (0 otherwise), with probability p_j of success; it has a Bernoulli distribution.

17 Mathematical interlude: Rule of succession – cont.
Suppose these X's are conditionally independent given p_j; the likelihood then factorizes as shown below.
The conditional probability distribution of p_j given the data X_{i,j}, i = 1, ..., n, is proportional to the product of the "prior" (i.e., marginal) probability measure assigned to p_j and the likelihood function (Bayes' theorem).
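The likelihood expression itself is an image in the original; for Bernoulli observations it takes the standard form (a reconstruction from the stated definitions, with s denoting the number of positive responses):

$$L(p_j) = \prod_{i=1}^{n} p_j^{x_{i,j}} (1-p_j)^{1-x_{i,j}} = p_j^{s} (1-p_j)^{n-s}, \qquad s = \sum_{i=1}^{n} x_{i,j}.$$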

18 Mathematical interlude: Rule of succession – cont.
The posterior probability density function is given below; it is a beta distribution.
The rule of succession follows: the conditional probability of a positive response at the next presentation of item j, given p_j, is just p_j, so the predictive probability is the posterior expectation of p_j.
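With a uniform prior, the posterior takes the standard Beta form (again a reconstruction, since the slide's own equations are images):

$$f(p_j \mid \text{data}) = \frac{(n+1)!}{s!\,(n-s)!}\, p_j^{s} (1-p_j)^{n-s}, \qquad p_j \mid \text{data} \sim \mathrm{Beta}(s+1,\; n-s+1),$$

$$E[p_j \mid \text{data}] = \frac{s+1}{n+2},$$

consistent with the P(u,i) values in the tables that follow.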

19 The Benefit and Risk of a Top 1 Recommendation
A simple scenario: recommend the best (top 1) item from only two possible items.

Item | Positive/Total ratings | r*_ui | P(u,i)
1    | 2/10                   | 0.20  | 0.25
2    | 3/20                   | 0.15  | 0.182

The risk: the presented item (item 1) is not selected by the user, but if item 2 had been presented, it would have been chosen.
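One plausible formalization of this risk, assuming the two selection events are independent (the slide's own formula is not preserved in this transcript):

$$\text{Risk} = \bigl(1 - P(u,1)\bigr)\cdot P(u,2) = 0.75 \times 0.182 \approx 0.136,$$

while the benefit of presenting item 1 is its selection probability, P(u,1) = 0.25.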

20 Risk Reduction
Risk is reduced as more ratings become available:

Item | Positive/Total ratings | r*_ui | P(u,i)
1    | 4/20                   | 0.20  | 0.227
2    | 6/40                   | 0.15  | 0.166

With the same observed proportions but twice as many ratings, the estimates P(u,i) move closer to r*_ui.

21 Risk Reduction Calculation
– Estimate CurrentRisk.
– If the response r_ui is positive (with probability P(u,i) = 0.2): rebuild the recommendation list assuming r_ui = 1 and estimate NewRisk.
– If the response is negative (with probability 1 − P(u,i) = 0.8): rebuild the recommendation list assuming r_ui = 0 and estimate NewRisk.
– RiskReduction = CurrentRisk − NewRisk, with NewRisk taken in expectation over the two branches. A sketch follows.
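A minimal sketch of this calculation, assuming a caller-supplied estimate_risk callback that rebuilds the recommendation list for a hypothesized rating (all names are illustrative):

```python
def risk_reduction(current_risk, p_ui, estimate_risk):
    """Expected risk reduction from eliciting rating r_ui.

    p_ui          -- estimated probability of a positive response, e.g. 0.2
    estimate_risk -- callback: rebuilds the recommendation list assuming
                     the hypothesized rating (1 or 0) and returns its risk
    """
    risk_if_positive = estimate_risk(rating=1)   # branch taken w.p. p_ui
    risk_if_negative = estimate_risk(rating=0)   # branch taken w.p. 1 - p_ui
    expected_new_risk = p_ui * risk_if_positive + (1 - p_ui) * risk_if_negative
    return current_risk - expected_new_risk
```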

22 Loss/Utility
If the net revenues of the items are known, the risk/benefit is easily converted into loss/utility:

Item | Price | Positive/Total ratings | r*_ui | P(u,i)
1    | 2     | 2/10                   | 0.20  | 0.25
2    | 3     | 3/20                   | 0.15  | 0.18

23 The Benefit and Risk of a Top 1 Recommendation
Extended scenario: recommending the best (top 1) item from n possible items.

Item | Positive/Total ratings | r*_ui | P(u,i)
1    | 2/10                   | 0.20  | 0.25
2    | 3/20                   | 0.15  | 0.182
5    | 2/20                   | 0.10  | 0.136
3    | 1/10                   | 0.10  | 0.167
4    | 1/20                   | 0.05  | 0.091

The first two items are as before; the remaining items extend the candidate set.
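One plausible formalization, assuming independence across items (the slide's formula is an image): the risk that the presented item 1 is rejected while at least one unpresented item would have been chosen is

$$\text{Risk} = \bigl(1 - P(u,1)\bigr)\cdot\Bigl(1 - \prod_{j \neq 1}\bigl(1 - P(u,j)\bigr)\Bigr).$$

Without the independence assumption this is a union probability, which is expensive to evaluate exactly over many items; this is what motivates the bounds cited on the next slide.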

24 The Benefit and Risk of a Top 1 Recommendation – cont.
(Same table as the previous slide.)
A high number of items limits the use of this formula in practice. Fortunately, tight lower and upper bounds that are easy to calculate exist (Prekopa and Gao, 2005).

25 The Benefit and Risk of a Top n Recommendation
(Same table as slide 23, now used for recommending the top n items rather than a single item.)

26 Cascaded risk reduction for top n
Assumptions:
– The user selects only one item (positive response).
– The user reviews the items according to their order in the list.
Estimate CurrentRisk, then cascade down the list: with probability P(u,1) the user selects item 1 (estimate NewRisk for that branch); otherwise, with probability 1 − P(u,1), she proceeds to item 2, which she selects with probability P(u,2); and so on. Each branch contributes its own NewRisk estimate; a sketch follows.
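A minimal sketch of the cascade under the two stated assumptions; estimate_risk is again an illustrative callback that rebuilds the list given a hypothesized response:

```python
def cascaded_expected_risk(p, estimate_risk):
    """Expected NewRisk for a top-n list under the cascade model.

    p             -- p[k] is P(u, k-th item in the presented list)
    estimate_risk -- callback: risk of the rebuilt list given that the
                     user's single positive response was to item k
                     (or to no item, for the final reject-all branch)
    """
    expected, p_reached = 0.0, 1.0      # p_reached: P(user reviews item k)
    for k, p_k in enumerate(p):
        expected += p_reached * p_k * estimate_risk(selected=k)
        p_reached *= (1.0 - p_k)        # user rejects item k, moves on
    # Residual branch: the user rejects every presented item.
    expected += p_reached * estimate_risk(selected=None)
    return expected
```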

27 Multiple Users
When user u provides an additional rating, not only does the risk/benefit of user u evolve, but so does the risk/benefit of other users (collaborative filtering).

28 Goal Formulation
U – the set of users
I – the set of items
DRL_j – the default ranked list for user j, for example the list that would be selected by CF according to r*_ui. A ranked list is an ordered set of pairs.

29 Goal Formulation – cont.
Find PRL_u (the ranked list to be presented to user u) that maximizes the weighted gain over all users, where:
– w_v is the weight of user v; frequent users should have larger weights.
– T_k controls the exploration/exploitation trade-off; we employ simulated annealing with a simple and common exponential schedule (see below).
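The objective and the annealing schedule are images in the original; a plausible reconstruction (an assumption, not a verified quote) is

$$PRL_u = \arg\max_{L} \sum_{v \in U} w_v \cdot \mathbb{E}\bigl[\text{RiskReduction}_v(L)\bigr],$$

with the common exponential cooling schedule

$$T_k = T_0 \cdot \alpha^{k}, \qquad 0 < \alpha < 1,$$

so the temperature decays geometrically and the algorithm shifts from exploration toward exploitation as ratings accumulate.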

30 Switching from active to passive
Risk reduction converges to zero as the number of ratings tends to infinity. Once sufficient ratings have been gathered, switch from active to passive.

31 Proposition 1: Who is affected?
When a new rating for item i by user u is added to an item-to-item NN CF, the recommendation list of user v ≠ u is revised iff user v has rated at least one item that has also been rated by user u.
Proof: straightforward.

32 Illustration of Proposition 1
(A users-items matrix over users U_1–U_7 and items I_A–I_D, with X marking provided ratings; users sharing no co-rated item with the rating user are marked "Not affected".)

33 Proposition 2: How many are affected?
Assumption: the provided ratings are scattered uniformly over the items-users matrix. The expected proportion of users affected by adding a new rating is given by the formula reconstructed below, where:
– N is the total number of items
– n is the mean number of ratings provided by a single user
Example:
– N = 2,000,000, n = 210 → prop ≈ 2%
– N = 17,000, n = 210 → prop ≈ 91%
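The formula itself is an image in the original; a form consistent with both worked examples (an assumption, not a verified quote) is

$$\text{prop} = 1 - \left(1 - \frac{n}{N}\right)^{n},$$

which gives about 2.2% for N = 2,000,000, n = 210 and about 92% for N = 17,000, n = 210, close to the figures quoted on the slide.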

34 A Greedy Algorithm
Finding the optimal ranked list for user u is computationally intractable. Approximate solution (sketched below):
– Greedily select the items to be presented from the top k·l items of user u, where l is the number of items presented in a single page and k is a small integer.
– Calculate the risk reduction for a sample of m users selected randomly from all potentially affected users.
– Approximate the actual reduction by simple scaling.
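A minimal sketch of this greedy loop under the stated approximations; the expected_gain callback, which combines benefit and sampled risk reduction, and the other names are assumptions, not the authors' code:

```python
import random

def greedy_ranked_list(candidates, affected_users, l, m, expected_gain):
    """Greedily build the l-item page from the top k*l candidates.

    candidates     -- the top k*l items for user u (ranked by r*_ui)
    affected_users -- users potentially affected by new ratings (Prop. 1)
    expected_gain  -- callback: benefit plus risk reduction of adding
                      `item` to `page`, estimated on the `sample` users
    """
    sample = random.sample(affected_users, min(m, len(affected_users)))
    # Scale sample-based estimates up to all affected users (simple scaling).
    scale = len(affected_users) / max(1, len(sample))
    page, pool = [], list(candidates)
    for _ in range(l):
        # Pick the candidate with the largest estimated gain on the sample.
        best = max(pool, key=lambda item: scale * expected_gain(item, page, sample))
        page.append(best)
        pool.remove(best)
    return page
```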

35 Computational Complexity
The cost is dominated by the greedy selection of items for the list and by the risk-reduction and benefit estimates, assuming hash-map structures for:
– the rated items of each user
– the rating users of each item
– the DRL of each user

36 Agenda
– Background: Active Learning and Recommender Systems
– Proposed Method
– Experimental Procedure
– Results and Discussion
– Conclusions and Future Work

37 The Experiments' Goals
Compare the proposed active learning algorithm to passive learning.
Evaluate:
– the contribution of the global effect
– the Monte-Carlo procedure for selecting the affected users
– the scalability of the greedy algorithm
Our main evaluation criterion: precision.

38 Data
We actively select items to be presented to the user and expect to obtain the user's response to these items. Available offline datasets (such as Netflix) are sparse and therefore cannot guarantee a response to all items we present. Three options:
– Find several sub-matrices that are dense.
– Filter DRLs according to the items known to be rated by the user.
– Work online.

39 Offline evaluation
Six mutually exclusive dense submatrices of 50 users over 50 movies were extracted from Netflix. The provided ratings were transformed into a binary scale (ratings above the user's average are considered positive). In each iteration we randomly selected a user and simulated a request for a recommendation, assuming l = 5, k = 5. The initial probability estimates of items for all users are assumed to be uniform.

40 Agenda
– Background: Active Learning and Recommender Systems
– Proposed Method
– Experimental Procedure
– Results and Discussion
– Conclusions and Future Work

41 Offline (Netflix) Results: Passive vs. Active
Both methods display unimodal, quadratic-like growth to a peak, and both converge to the same value. The positive effect of active learning peaks at around 200 sessions, with an improvement of 15%.

42 Offline (Netflix) Results Passive vs. Active

43 Offline (Netflix) Results The effect of recommendation list size (k)

44 Offline (Netflix) Results The effect of number of referred users (m)

45 How much time does it really take?
In a real application with 8,000,000 items and 1,000,000 users:
– l = 10, k = 10, m = 200: about 12 ms on an Intel Core Duo CPU E7400 @ 2.80 GHz.
– l = 10, k = 5, m = 200: about 1.5 ms.

46 Agenda
– Background: Active Learning and Recommender Systems
– Proposed Method
– Experimental Procedure
– Results and Discussion
– Conclusions and Future Work

47 Conclusions
– A new Uncertainty Active Collaborative Filtering method has been developed.
– The new method takes the global effect into consideration.
– The new method can improve objective and subjective performance.

48 Drawbacks
As with any uncertainty-based AL method, reducing uncertainty may not always improve accuracy (Rubens et al., 2010).
The calculation is more intensive than passive CF.

49 Future Work
– Evaluate on a large dataset (under investigation).
– Extend the method to other CF algorithms and compare to other active learning CF methods (under investigation).
– Evaluate the method in a large-scale online system (scheduled for 4/2010).
– Extend the algorithm to a non-binary scale (5 stars).
– Develop a batch-mode algorithm.
– Develop a better sampling method for selecting the affected users.
– Consider other heuristics.
– Take the temporal aspect into consideration (Netflix).

50 Thank You, Lior Rokach Email: liorrk@bgu.ac.il

51 References
Boutilier, C., Zemel, R., & Marlin, B. (2003). Active collaborative filtering. Proceedings of the Nineteenth Annual Conference on Uncertainty in Artificial Intelligence, pp. 98–106.
Ricci, F., & Nguyen, Q. N. (2007). Acquiring and revising preferences in a critique-based mobile recommender system. IEEE Intelligent Systems, 22(3), 22–29.
Jin, R., & Si, L. (2004). A Bayesian approach toward active learning for collaborative filtering. Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pp. 278–285.
Prekopa, A., & Gao, L. (2005). Bounding the probability of the union of events by aggregation and disaggregation in linear programs. Discrete Applied Mathematics, 145(3), 444–454.

