
Improving Recommendation Lists Through Topic Diversification. Cai-Nicolas Ziegler, Sean M. McNee, Joseph A. Konstan, Georg Lausen. WWW '05. Presenter: 謝順宏


1 Improving Recommendation Lists Through Topic Diversification. Cai-Nicolas Ziegler, Sean M. McNee, Joseph A. Konstan, Georg Lausen. WWW '05. Presenter: 謝順宏

2 Outline Introduction On collaborative filtering Evaluation metrics Topic diversification Empirical analysis Related work Conclusion

3 Introduction Traditionally, recommender system projects have focused on optimizing accuracy using metrics such as precision/recall or mean absolute error. As a result, many recommendations seem "similar" with respect to content. Reflecting the user's complete spectrum of interests improves user satisfaction.

4 Introduction Topic diversification Intra-list similarity metric. Accuracy versus satisfaction – "accuracy does not tell the whole story"

5 On collaborative filtering (CF) Collaborative filtering (CF) still represents the most commonly adopted technique in crafting academic and commercial recommender systems. Its basic idea is to make recommendations based upon ratings that users have assigned to products.

6 User-based Collaborative Filtering Ingredients: a set of users, a set of products, and a partial rating function for each user (each user rates only a subset of the products).

7 User-based Collaborative Filtering Two major steps: Neighborhood formation – Pearson correlation – Cosine distance Rating prediction
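The two steps above can be sketched in Python. The dict-of-dicts rating layout and the function names are illustrative assumptions for this sketch, not the paper's implementation:

```python
import math

def pearson(ra, rb):
    """Pearson correlation over the items both users rated.
    ra, rb map item id -> rating (a hypothetical layout)."""
    common = set(ra) & set(rb)
    if len(common) < 2:
        return 0.0
    ma = sum(ra[i] for i in common) / len(common)
    mb = sum(rb[i] for i in common) / len(common)
    num = sum((ra[i] - ma) * (rb[i] - mb) for i in common)
    den = (math.sqrt(sum((ra[i] - ma) ** 2 for i in common))
           * math.sqrt(sum((rb[i] - mb) ** 2 for i in common)))
    return num / den if den else 0.0

def predict(user, item, ratings, neighbors):
    """Predict a rating as the user's mean rating plus the
    correlation-weighted, mean-centered deviations of the
    neighbors who rated the item."""
    mu = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for v in neighbors:
        if item in ratings[v]:
            w = pearson(ratings[user], ratings[v])
            mv = sum(ratings[v].values()) / len(ratings[v])
            num += w * (ratings[v][item] - mv)
            den += abs(w)
    if den == 0:
        return mu
    return mu + num / den
```

In practice the neighborhood is formed by keeping the top-k users by correlation before calling the prediction step.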

8 Item-based Collaborative Filtering Unlike user-based CF, similarity values are computed for items rather than users.
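A minimal sketch of the item-to-item step, using plain cosine similarity over the users who rated both items; the data layout and naming are assumptions of this sketch:

```python
import math

def item_cosine(item_a, item_b, ratings):
    """Cosine similarity between two items' rating vectors,
    restricted to users who rated both.
    ratings: user -> {item: rating} (hypothetical layout)."""
    co = [(ratings[u][item_a], ratings[u][item_b])
          for u in ratings
          if item_a in ratings[u] and item_b in ratings[u]]
    if not co:
        return 0.0
    num = sum(a * b for a, b in co)
    den = (math.sqrt(sum(a * a for a, _ in co))
           * math.sqrt(sum(b * b for _, b in co)))
    return num / den if den else 0.0
```

Item-based systems commonly use an adjusted (mean-centered) cosine instead; the plain cosine here is kept for brevity.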

9 Evaluation metrics Accuracy Metrics – Predictive Accuracy Metrics – Decision Support Metrics Beyond Accuracy – Coverage – Novelty and Serendipity Intra-List Similarity

10 Accuracy Metrics Predictive Accuracy Metrics – Mean absolute error (MAE) – Mean squared error (MSE) Decision Support Metrics – Recall – Precision
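All four metrics have simple closed forms; a minimal Python sketch (the function names and list-based inputs are my own illustrative choices):

```python
def mae(pred, actual):
    """Mean absolute error between predicted and actual ratings."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def mse(pred, actual):
    """Mean squared error: penalizes large deviations more than MAE."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)

def precision_recall(recommended, relevant):
    """Decision-support view: precision = hits / |recommended|,
    recall = hits / |relevant|."""
    hits = len(set(recommended) & set(relevant))
    return hits / len(recommended), hits / len(relevant)
```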

11 Beyond Accuracy Coverage – Coverage measures the percentage of elements of the problem domain for which predictions can be made. Novelty and Serendipity – Novelty and serendipity metrics thus measure the "non-obviousness" of recommendations made, avoiding "cherry-picking".

12 Intra-List Similarity (ILS) Measures the pairwise similarity between the products within a recommendation list; higher ILS means a less diverse list.
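The idea can be sketched as the average pairwise similarity of list items under an arbitrary similarity function `sim`; the paper defines ILS as a sum over pairs, so this averaged, normalized variant is an assumption of the sketch:

```python
def intra_list_similarity(items, sim):
    """Average pairwise similarity of the items in a recommendation
    list; sim(a, b) is any content-based similarity function.
    Higher values indicate a less diverse list."""
    pairs = [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
    if not pairs:
        return 0.0
    return sum(sim(a, b) for a, b in pairs) / len(pairs)
```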

13 Topic Diversification "Law of Diminishing Marginal Returns" Suppose you are offered your favorite drink. Let p1 denote the price you are willing to pay for that product. Assuming you are offered a second glass of that particular drink, the amount p2 of money you are inclined to spend will be lower, i.e., p1 > p2. The same holds for p3, p4, and so forth.

14 Topic Diversification Taxonomy-based similarity metric – computes the similarity between product sets based upon their classification.

15 Topic Diversification Topic Diversification Algorithm – re-ranks the recommendation list by applying topic diversification.

16 Topic Diversification ΘF defines the impact that dissimilarity rank exerts on the eventual overall output. Large ΘF favors diversification over the list's original relevance order. The input lists must be considerably larger than the final top-N list.
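A greedy sketch of the diversification step in Python. The linear merge of relevance rank and dissimilarity rank via theta_f, and the use of average similarity for the dissimilarity ordering, are my own simplifications of the paper's scheme, not its exact formula:

```python
def diversify(candidates, sim, theta_f, top_n):
    """Greedily build a diversified top-N list.

    candidates: relevance-ordered list (best first), considerably
    longer than top_n. At each step every remaining item is scored
    by merging its original relevance rank with its dissimilarity
    rank w.r.t. the list built so far; theta_f in [0, 1] weights
    diversification (0 keeps the original order)."""
    result = [candidates[0]]          # top item is kept as-is
    remaining = list(candidates[1:])
    while len(result) < top_n and remaining:
        # Dissimilarity rank: ascending average similarity to the
        # items already selected (most dissimilar gets rank 0).
        by_dissim = sorted(
            remaining,
            key=lambda c: sum(sim(c, r) for r in result) / len(result))

        def score(c):
            rel_rank = candidates.index(c)
            dis_rank = by_dissim.index(c)
            return rel_rank * (1 - theta_f) + dis_rank * theta_f

        best = min(remaining, key=score)
        result.append(best)
        remaining.remove(best)
    return result
```

With theta_f = 0 the sketch reproduces the original top-N; raising theta_f trades relevance order for dissimilarity.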

17 Recommendation dependency Recommended products are assumed to come with their content descriptions; only the relevance-weight ordering must hold for recommendation list items, and no other dependencies are assumed. An item b's current dissimilarity rank with respect to the preceding recommendations plays an important role and may influence the new ranking.

18 Empirical analysis Dataset – BookCrossing (http://www.bookcrossing.com) – 278,858 members – 1,157,112 ratings – 271,379 distinct ISBNs

19 Data cleaning & condensation Discarded all books missing taxonomic descriptions. Only community members with at least 5 ratings each were kept. – 10,339 users – 6,708 books – 316,349 ratings

20 Evaluation Framework Setup Did not compute MAE metric values. Adopted K-folding (K=4). We were interested in seeing how accuracy, captured by precision and recall, behaves when increasing ΘF.

21 Empirical analysis ΘF = 0 (no diversification; baseline for comparison)

22 Empirical analysis

23 Empirical analysis

24 Conclusion We found that diversification appears detrimental to both user-based and item-based CF along precision and recall metrics. Item-based CF seems more susceptible to topic diversification than user-based CF, backed by results from precision, recall, and ILS metric analysis.

25 Empirical analysis

26 Conclusion Diversification factor impact Human perception Interaction with accuracy

27 Multiple Linear Regression

28 Related work Northern Light (http://www.northernlight.com) Google (http://www.google.com)

29 Conclusion An algorithmic framework to increase the diversity of a top-N list of recommended products. New intra-list similarity metric.

