
1
Group Recommendation: Semantics and Efficiency. Sihem Amer-Yahia (Yahoo! Labs), Senjuti Basu-Roy (Univ. of Texas at Arlington), Ashish Chawla (Yahoo! Inc), Gautam Das (Univ. of Texas at Arlington), and Cong Yu (Yahoo! Labs).

2
Recommendation
Individual user recommendation: “Which movie should I watch?” “What city should I visit?” “What book should I read?” “What web page has the information I need?”

3
Group Recommendation
Individual user recommendation helps a user search intelligently through the enormous volume of information available to her. But how do we recommend to several people at once? Solution: group recommendation, which helps socially acquainted individuals find content of interest to all of them together. Movies – for a family! Restaurants – for a work group lunch! Places to visit – using a travel agency!

4
Existing Solutions
There is no prior work on efficient processing for group recommendation. Existing solutions aggregate ratings (referred to as relevance) among group members:
Preference aggregation: aggregates group members’ prior ratings into a single virtual user, then computes recommendations for that user.
Rating aggregation: aggregates individual ratings on the fly, using Average (the mean rating across members) or Least Misery (the minimum rating across members).

5
Why Is Rating Aggregation Not Enough?
Task: recommend a movie to group G = {u1, u2, u3}.
relevance(u1, “Godfather”) = 5, relevance(u2, “Godfather”) = 1, relevance(u3, “Godfather”) = 1
relevance(u1, “Roman Holiday”) = 3, relevance(u2, “Roman Holiday”) = 3, relevance(u3, “Roman Holiday”) = 1
Average Relevance and Least Misery fail to distinguish between “Godfather” and “Roman Holiday” (both give average 7/3 and least misery 1). But group members agree more on “Roman Holiday”, with higher ratings for most members. Difference in opinion (disagreement) between members may therefore be important in group recommendation semantics.
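A minimal sketch of the two rating-aggregation strategies on the slide’s example, showing why they cannot tell the two movies apart:

```python
# The slide's example ratings: both aggregation strategies give
# "Godfather" and "Roman Holiday" identical scores, even though the
# group agrees far more on "Roman Holiday".
ratings = {
    "Godfather":     [5, 1, 1],   # u1, u2, u3
    "Roman Holiday": [3, 3, 1],
}

def average(rs):       # Average strategy: mean rating across members
    return sum(rs) / len(rs)

def least_misery(rs):  # Least Misery strategy: minimum rating
    return min(rs)

for item, rs in ratings.items():
    print(item, average(rs), least_misery(rs))
# Both items score average 7/3 and least misery 1 -- the aggregations
# cannot distinguish them.
```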

6
Outline Motivation Modeling & Problem Definition Algorithms Optimization Opportunities Experimental Evaluation Conclusion & Future Work

7
Semantics in Group Recommendation
Relevance (average or least misery) and disagreement in a recommendation’s score are combined by a consensus function.
Group G = {u1, u2, u3}: relevance(u1, “Godfather”) = 5, relevance(u2, “Godfather”) = 1, relevance(u3, “Godfather”) = 1
Average pair-wise disagreement = (|5-1| + |1-1| + |1-5|) / 3 ≈ 2.67
Disagreement variance = [(5-2.33)² + (1-2.33)² + (1-2.33)²] / 3 ≈ 3.56 (mean relevance = 2.33)
score(G, “Godfather”) = w1 × 0.86 + w2 × (1 − 0.2) (after normalization)
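The two disagreement measures above can be sketched directly; the ratings are the slide’s “Godfather” example:

```python
# Sketch of the two disagreement measures from the slide: average
# pair-wise disagreement and disagreement variance over the group's
# ratings for one item.
from itertools import combinations

def pairwise_disagreement(rs):
    pairs = list(combinations(rs, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def disagreement_variance(rs):
    mean = sum(rs) / len(rs)
    return sum((r - mean) ** 2 for r in rs) / len(rs)

rs = [5, 1, 1]                               # Godfather ratings of u1, u2, u3
print(round(pairwise_disagreement(rs), 2))   # (4 + 0 + 4) / 3 -> 2.67
print(round(disagreement_variance(rs), 2))   # -> 3.56
```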

8
Problem Definition
Top-k group recommendation: given a user group G and a consensus function F, return a list of k items to the group such that each item is new to all users in G and the returned k items are sorted in decreasing value of F.

9
Efficient Recommendation Computation
Average and Least Misery are monotone. Sort each user’s relevance list in decreasing value and apply the Threshold Algorithm (TA).
[Slide figure: sorted relevance lists IL u1 and IL u2 over items i1–i4.]
For a user group G = {u1, u2}, 4 sorted accesses are required to compute the top-1 item for group recommendation. Pruning operates only on the relevance component of the consensus function. How can we prune on the disagreement component as well?
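A minimal sketch of TA for the relevance-only case: lists are read round-robin in sorted order, each newly seen item is scored by random access into every list, and the algorithm stops once the best buffered score meets the threshold built from the last value seen in each list. The lists here are the ones from the later RO walkthrough; scores are plain averages rather than the slide’s normalized consensus values, so the numbers differ from the slide.

```python
# Threshold Algorithm (TA) sketch for average relevance over a group.
def ta_top1(lists):
    # lists: one dict per user mapping item -> relevance
    sorted_lists = [sorted(l.items(), key=lambda kv: -kv[1]) for l in lists]
    avg = lambda item: sum(l[item] for l in lists) / len(lists)
    best_item, best_score, seen = None, float("-inf"), set()
    depth = 0
    while depth < max(len(s) for s in sorted_lists):
        for s in sorted_lists:                 # round-robin sorted access
            if depth >= len(s):
                continue
            item, _ = s[depth]
            if item not in seen:               # random access to score it
                seen.add(item)
                score = avg(item)
                if score > best_score:
                    best_item, best_score = item, score
        # Threshold: consensus of the last value seen in each list.
        threshold = sum(s[min(depth, len(s) - 1)][1]
                        for s in sorted_lists) / len(sorted_lists)
        if best_score >= threshold:            # no unseen item can do better
            break
        depth += 1
    return best_item, best_score

lists = [
    {"i1": 4, "i2": 2, "i3": 4, "i4": 4},  # IL u1
    {"i1": 2, "i2": 4, "i3": 2, "i4": 4},  # IL u2
    {"i1": 3, "i2": 3, "i3": 3, "i4": 3},  # IL u3
]
print(ta_top1(lists))  # i4 wins with average (4 + 4 + 3) / 3
```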

10
Role of Disagreement Lists
Pair-wise disagreement lists are computed from the individual relevance lists and sorted in increasing disagreement values. Items encountered in the disagreement lists play a significant role in attaining early stopping.
[Slide figure: relevance lists IL u1, IL u2 and disagreement list DL(u1,u2) over items i1–i4.]
The top-1 item for {u1, u2} can be obtained after 3 sorted accesses if DL(u1,u2) is present; without DL(u1,u2), 4 sorted accesses are required.

11
Outline Motivation Modeling & Problem Definition Algorithms Optimization Opportunities Experimental Evaluation Conclusion & Future Work

12
Relevance Only (RO) Algorithm
Task: recommend the top-1 item for a group of 3 users: u1, u2, and u3.
Input: 3 relevance lists (IL u1, IL u2, IL u3), each sorted in decreasing score. Lists are chosen in round-robin fashion during top-k computation. Performance is measured by the number of sorted accesses (SAs).
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
Threshold: 1.93. Top-k buffer: (i1,1.33)

13
Relevance Only (RO) Algorithm
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
Threshold: 1.86. Top-k buffer: (i1,1.33) (i2,1.33)

14
Relevance Only (RO) Algorithm
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
Threshold: 1.73. Top-k buffer: (i1,1.33) (i2,1.33) (i3,1.33)

16
Relevance Only (RO) Algorithm
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
Threshold: 1.73. Top-k buffer: (i4,1.6) (i1,1.33) (i2,1.33) (i3,1.33)

19
Relevance Only (RO) Algorithm
After 8 sorted accesses (3 on IL u1, 3 on IL u2, and 2 on IL u3), the threshold is 1.6 and the algorithm STOPS! The top-1 item is i4.
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
Top-k buffer: (i4,1.6) (i1,1.33) (i2,1.33) (i3,1.33)

20
Full Materialization (FM) Algorithm
Input: 3 relevance lists (IL u1, IL u2, IL u3) and 3 pair-wise materialized disagreement lists (DL u1,u2, DL u1,u3, DL u2,u3). Relevance lists are sorted in decreasing score; disagreement lists are sorted in increasing disagreement score.
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
DL u1,u2: (i4,0) (i3,2) (i2,2) (i1,2)
DL u1,u3: (i4,1) (i3,1) (i2,1) (i1,2)
DL u2,u3: (i4,1) (i1,1) (i3,1) (i2,1)
Threshold: 1.93. Top-k buffer: (i1,1.33)

21
Full Materialization (FM) Algorithm
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
DL u1,u2: (i4,0) (i3,2) (i2,2) (i1,2)
DL u1,u3: (i4,1) (i3,1) (i2,1) (i1,2)
DL u2,u3: (i4,1) (i1,1) (i3,1) (i2,1)
Threshold: 1.86. Top-k buffer: (i1,1.33) (i2,1.33)

22
Full Materialization (FM) Algorithm
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
DL u1,u2: (i4,0) (i3,2) (i2,2) (i1,2)
DL u1,u3: (i4,1) (i3,1) (i2,1) (i1,2)
DL u2,u3: (i4,1) (i1,1) (i3,1) (i2,1)
Threshold: 1.73. Top-k buffer: (i1,1.33) (i2,1.33) (i3,1.33)

23
Full Materialization (FM) Algorithm
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
DL u1,u2: (i4,0) (i3,2) (i2,2) (i1,2)
DL u1,u3: (i4,1) (i3,1) (i2,1) (i1,2)
DL u2,u3: (i4,1) (i1,1) (i3,1) (i2,1)
Threshold: 1.73. Top-k buffer: (i4,1.6) (i1,1.33) (i2,1.33) (i3,1.33)

24
Full Materialization (FM) Algorithm
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
DL u1,u2: (i4,0) (i3,2) (i2,2) (i1,2)
DL u1,u3: (i4,1) (i3,1) (i2,1) (i1,2)
DL u2,u3: (i4,1) (i1,1) (i3,1) (i2,1)
Threshold: 1.66. Top-k buffer: (i4,1.6) (i1,1.33) (i2,1.33) (i3,1.33)

25
Full Materialization (FM) Algorithm
After 6 sorted accesses (1 on each list), the threshold is 1.6. Score(i4) = threshold = 1.6, so the algorithm STOPS! The top-1 item is i4.
IL u1: (i1,4) (i3,4) (i4,4) (i2,2)
IL u2: (i2,4) (i4,4) (i1,2) (i3,2)
IL u3: (i3,3) (i1,3) (i4,3) (i2,3)
DL u1,u2: (i4,0) (i3,2) (i2,2) (i1,2)
DL u1,u3: (i4,1) (i3,1) (i2,1) (i1,2)
DL u2,u3: (i4,1) (i1,1) (i3,1) (i2,1)
Top-k buffer: (i4,1.6) (i1,1.33) (i2,1.33) (i3,1.33)
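A hedged sketch of building the pair-wise disagreement lists that FM materializes: one list per user pair, sorted by increasing |difference| of the pair’s relevance scores. The relevance lists are the ones from the walkthrough slides; the computed values may differ slightly from the slide where the slide’s own numbers look inconsistent.

```python
# Build pair-wise disagreement lists (DLs) from per-user relevance lists.
from itertools import combinations

IL = {
    "u1": {"i1": 4, "i2": 2, "i3": 4, "i4": 4},
    "u2": {"i1": 2, "i2": 4, "i3": 2, "i4": 4},
    "u3": {"i1": 3, "i2": 3, "i3": 3, "i4": 3},
}

DL = {}
for ua, ub in combinations(IL, 2):
    pairs = [(item, abs(IL[ua][item] - IL[ub][item])) for item in IL[ua]]
    DL[(ua, ub)] = sorted(pairs, key=lambda p: p[1])  # increasing disagreement

for pair, lst in DL.items():
    print(pair, lst)
# i4 has zero disagreement for (u1, u2) and minimal disagreement in the
# other pairs, which is why FM can confirm it as top-1 so early.
```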

26
Outline Motivation Modeling & Problem Definition Algorithms Optimization Opportunities Experimental Evaluation Conclusion & Future Work

27
Optimization Opportunities
Partial materialization: given a space budget that allows materializing only a subset of the n(n-1)/2 disagreement lists, which m lists should be materialized?
Threshold sharpening in recommendation computation: how can the threshold be sharpened during top-k computation?

28
Why Partial Materialization?
A set of 10,000 users has n(n-1)/2 = 49,995,000 pair-wise disagreement lists. Given a space budget, only 10% of the disagreement lists can be materialized.
Problem: which lists should we choose so that they give “maximum benefit” during query processing?
Intuition: materialize only those lists that significantly improve efficiency. The recommendation algorithm needs to be adapted accordingly (referred to as PM in the paper).

29
Disagreement List Materialization Algorithm
Sort the table in decreasing difference in #SAs and materialize the first m rows.
Table columns: user pair | #SAs without disagreement list | #SAs with disagreement list | difference in #SAs.
Rows (one per pair): {u1,u2}, {u3,u4}, {u10,u9}, {u6,u7}, {u2,u3}, {u5,u6}, {u7,u8}; the first m rows are chosen.
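The selection step above can be sketched as follows. The benefit numbers are made up, since the slide’s table values did not survive extraction; only the ranking-and-truncate logic is the slide’s:

```python
# Benefit-based selection of which disagreement lists to materialize:
# rank user pairs by the sorted accesses (#SAs) saved when their
# disagreement list exists, then keep the top m. Values are illustrative.
benefit = {  # pair -> #SAs saved when its disagreement list is present
    ("u1", "u2"): 12,
    ("u3", "u4"): 9,
    ("u10", "u9"): 7,
    ("u6", "u7"): 5,
    ("u2", "u3"): 4,
}

def choose_lists(benefit, m):
    """Pick the m disagreement lists with the largest #SA saving."""
    ranked = sorted(benefit, key=benefit.get, reverse=True)
    return ranked[:m]

print(choose_lists(benefit, 2))  # -> [('u1', 'u2'), ('u3', 'u4')]
```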

30
Threshold Sharpening
Can we exploit the dependencies between the relevance and disagreement lists to sharpen the thresholds in the FM, RO and PM algorithms? Instead of plugging the last seen value from each list into the consensus function independently, solve a small optimization problem:
Maximize (i_u1 + i_u2)/2 + (1 − |i_u1 − i_u2|) subject to 0 ≤ i_u1 ≤ …, 0 ≤ i_u2 ≤ …, … ≤ |i_u1 − i_u2| ≤ 1
Example: IL u1: (i1,0.5) (i3,…); IL u2: (i2,0.5) (i3,…); DL u1,u2: (i3,0.2) (i1,…). Threshold = 1.3; new threshold = 1.2.
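A rough sketch of the sharpening idea on the slide’s example values (relevance at most 0.5 in each IL, unseen disagreement at least 0.2 in the increasing DL). The grid-search maximization is my illustrative choice, not the paper’s method; the point is that the joint bound is tighter than plugging each list’s last value in independently:

```python
# Sharpened threshold: maximize the consensus jointly over the bounds,
# instead of combining each list's last seen value independently.
def sharpened_threshold(r1_max, r2_max, d_min, steps=200):
    """Maximize (r1 + r2)/2 + (1 - |r1 - r2|) subject to
    0 <= r1 <= r1_max, 0 <= r2 <= r2_max, |r1 - r2| >= d_min."""
    best = float("-inf")
    for a in range(steps + 1):
        for b in range(steps + 1):
            r1, r2 = r1_max * a / steps, r2_max * b / steps
            if abs(r1 - r2) < d_min - 1e-9:  # disagreement bound violated
                continue
            best = max(best, (r1 + r2) / 2 + (1 - abs(r1 - r2)))
    return best

naive = (0.5 + 0.5) / 2 + (1 - 0.2)          # 1.3: bounds used independently
sharp = sharpened_threshold(0.5, 0.5, 0.2)   # 1.2: bounds enforced jointly
print(naive, round(sharp, 2))
```

With the disagreement bound enforced, both relevances cannot reach 0.5 at once, so the threshold drops from 1.3 to 1.2, matching the slide.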

31
Outline Motivation Modeling & Problem Definition Algorithms Optimization Opportunities Experimental Evaluation Conclusion & Future Work

32
Experiments
Dataset: MovieLens – 71,567 users, 10,681 movies, 10,000,054 ratings.
User studies: compare the effectiveness of the proposed group recommendation algorithms with existing approaches using Amazon Mechanical Turk users; small and large groups of similar, dissimilar and random users are formed. Algorithms compared: Average Relevance Only (AR), Least Misery Only (LM), Consensus with Pair-wise Disagreements (RP), and Consensus with Disagreement Variance (RV).
Performance experiments: performance (number of sorted accesses) comparison of FM, RO and PM, varying group size, similarity and number of returned items; effectiveness of partial materialization; effectiveness of threshold sharpening.

33
Disagreement Is Important for Dissimilar User Groups
Misery Only (MO) is the best model for similar user groups. Disagreement is important for dissimilar users: Consensus with Disagreement Variance (RV80) is the best model there.

34
Summary of Performance Results
The presence of disagreement lists improves performance for dissimilar user groups. Sometimes Partial Materialization (PM) is the best solution. For the same query, different disagreement lists contribute differently to top-k processing. Optimization during threshold calculation improves overall performance.

35
Performance Results
Fewer sorted accesses (SAs) are required for more similar user groups. Disagreement lists are important for dissimilar user groups: FM is the best performer for very dissimilar user groups, while RO is the best algorithm for very similar user groups. Sometimes only a few disagreement lists attain the best performance, so partial materialization is important. Optimization during threshold calculation always achieves better performance (fewer #SAs) than the unoptimized case.

36
Outline Motivation Modeling & Problem Definition Algorithms Optimization Opportunities Experimental Evaluation Conclusion & Future Work

37
Conclusion
Disagreement impacts both the quality and the efficiency of group recommendation. The Threshold Algorithm (TA) can be adapted to compute group recommendations. Novel optimization opportunities are present.

38
Ongoing and Future Work
Can disagreement lists be optimized so that they consume less space but contain the same information? Can group recommendation algorithms be adapted to work with those optimized lists?

39
Thank You!

40
Modeling Semantics in Group Recommendation
Distinguish relevance and disagreement in a recommendation’s score: combine Average Relevance with disagreement, or combine Least Misery with disagreement. Disagreement = difference in relevance among group members.
Group G = {u1, u2, u3}: relevance(u1, “Godfather”) = 5, relevance(u2, “Godfather”) = 1, relevance(u3, “Godfather”) = 1
Average pair-wise disagreement(G, “Godfather”) = (|5-1| + |1-1| + |1-5|) / 3 ≈ 2.67
Disagreement variance(G, “Godfather”) = [(5-2.33)² + (1-2.33)² + (1-2.33)²] / 3 ≈ 3.56 (mean relevance = 2.33)

41
Consensus Function and Problem Definition
The consensus function is a weighted sum of relevance and disagreement such that, for each item, group relevance is maximized and group disagreement is minimized: e.g. score(G, “Godfather”) = w1 × 0.86 + w2 × (1 − 0.2) (after normalization).
Top-k group recommendation: given a user group G and a consensus function F, return a list of k items to the group such that each item is new to all users in G and the returned k items are sorted in decreasing value of F.
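A hedged sketch of such a consensus function. The weights and the normalization constants below are illustrative choices, not the paper’s exact ones; the point is that, unlike plain average or least misery, the weighted sum separates the two movies from the motivating example:

```python
# Consensus score: weighted sum of normalized group relevance and
# (one minus) normalized group disagreement (variance flavor).
def consensus(rs, w1=0.5, w2=0.5, r_max=5.0):
    mean = sum(rs) / len(rs)
    var = sum((r - mean) ** 2 for r in rs) / len(rs)
    rel = mean / r_max                    # relevance, scaled to [0, 1]
    dis = var / (r_max / 2) ** 2          # variance, scaled to [0, 1]
    return w1 * rel + w2 * (1 - dis)

godfather = consensus([5, 1, 1])
roman     = consensus([3, 3, 1])
print(round(godfather, 3), round(roman, 3))
# The consensus score prefers "Roman Holiday", the movie the group
# agrees on, even though both have the same average rating.
```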
