FISM: Factored Item Similarity Models for Top-N Recommender Systems


1 FISM: Factored Item Similarity Models for Top-N Recommender Systems
Santosh Kabbur, Xia Ning, George Karypis Presented by : Muhammad Fiqri Muthohar Chonnam National University

2 Presentation Outline Introduction Relevant Research Motivation
FISM – Factored Item Similarity Methods Experimental Evaluation Results Conclusion

3 Introduction Top-N recommender systems have been widely used to recommend ranked lists of items and to help users identify the items that best fit their personal tastes. Collaborative filtering methods utilize user/item co-rating information to build models. Sparsity affects the effectiveness of existing top-N recommendation methods.

4 Relevant Research SLIM NSVD

5 SLIM $\tilde{r}_u = r_u S$, where $r_u$ is the rating vector of user $u$ on all items and $S$ is an $m \times m$ sparse matrix of aggregation coefficients, estimated by solving
$\min_S \; \|R - RS\|_F^2 + \frac{\beta}{2}\|S\|_F^2 + \lambda\|S\|_1$
subject to $S \ge 0$, $\mathrm{diag}(S) = 0$
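As a minimal sketch of how SLIM-style scoring works (not the coordinate-descent solver itself), once a sparse coefficient matrix $S$ has been learned, every user's predicted scores are just $r_u S$. The matrices below are illustrative toy values, not learned ones:

```python
import numpy as np

# Toy user-item rating matrix R (2 users, 3 items) and a hand-made
# item-item coefficient matrix S with diag(S) = 0 and S >= 0, matching
# the SLIM constraints.
R = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
S = np.array([[0.0, 0.2, 0.5],
              [0.2, 0.0, 0.4],
              [0.5, 0.4, 0.0]])

scores = R @ S            # \tilde{r}_u = r_u S for all users at once
print(scores[0])          # predicted scores for user 0
```

Items already rated by the user would then be filtered out before taking the top-N of each row.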

6 NSVD In this method, an item-item similarity is learned as the product of two low-rank matrices $P$ and $Q$, where $P \in \mathbb{R}^{m \times k}$, $Q \in \mathbb{R}^{m \times k}$, and $k \ll m$:
$\hat{r}_{ui} = b_u + b_i + \sum_{j \in \mathcal{R}_u^+} \mathbf{p}_j \mathbf{q}_i^T$
where $b_u$ and $b_i$ are the user and item biases and $\mathcal{R}_u^+$ is the set of items rated by user $u$. The parameters of this model are estimated as the minimizer of the following optimization problem:
$\min_{P,Q} \; \sum_{u \in \mathcal{C}} \sum_{i \in \mathcal{R}_u^+} \|r_{ui} - \hat{r}_{ui}\|_F^2 + \frac{\beta}{2}\left(\|\mathbf{P}\|_F^2 + \|\mathbf{Q}\|_F^2\right)$
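A hedged sketch of the NSVD scoring rule, using toy random factors rather than learned ones (the function name `nsvd_score` is ours, not from the paper). Note that the sum runs over all items the user rated, including item $i$ itself; this unexcluded diagonal is the issue FISM addresses:

```python
import numpy as np

# Toy setup: m items, k latent dimensions, random (unlearned) factors.
rng = np.random.default_rng(0)
m, k = 5, 3
P = rng.normal(scale=0.1, size=(m, k))   # item factors p_j
Q = rng.normal(scale=0.1, size=(m, k))   # item factors q_i
b_u, b_i = 0.1, np.zeros(m)              # user bias and item biases

def nsvd_score(rated_items, i):
    """r_hat_ui = b_u + b_i + sum over j in R_u^+ of p_j . q_i
    (j ranges over ALL rated items, including i itself)."""
    return b_u + b_i[i] + P[rated_items].sum(axis=0) @ Q[i]

print(nsvd_score([0, 2, 4], 2))          # item 2 contributes to its own score
```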

7 Motivation A sparse user-item rating matrix means that item-based methods and SLIM, which rely on learning similarities between items, fail to capture dependencies between items that have not been co-rated by at least one user. Methods based on matrix factorization alleviate this problem by projecting the data onto a low-dimensional space, thereby implicitly learning better relationships between users and items (including items that are not co-rated). However, such methods are consistently out-performed by SLIM. NSVD does not exclude the diagonal entries while estimating the ratings during the learning and prediction phases, which can lead to rather trivial estimates in which an item ends up recommending itself.

8 Basic FISM Estimated value:
$\hat{r}_{ui} = b_u + b_i + (n_u^+ - 1)^{-\alpha} \sum_{j \in \mathcal{R}_u^+ \setminus \{i\}} \mathbf{p}_j \mathbf{q}_i^T$
where $\mathcal{R}_u^+$ is the set of items rated by user $u$, $\mathbf{p}_j$ and $\mathbf{q}_i$ are the learned item latent factors, $n_u^+$ is the number of items rated by $u$, and $\alpha$ is a user-specified parameter between 0 and 1.
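The scoring rule above can be sketched in a few lines with toy random factors (the name `fism_score` is ours). The two differences from NSVD are visible directly: item $i$ is excluded from the aggregation, and the sum is damped by $(n_u^+ - 1)^{-\alpha}$:

```python
import numpy as np

# Toy setup with random (unlearned) factors.
rng = np.random.default_rng(1)
m, k, alpha = 5, 3, 0.5
P = rng.normal(scale=0.1, size=(m, k))   # item factors p_j
Q = rng.normal(scale=0.1, size=(m, k))   # item factors q_i
b_u, b_i = 0.1, np.zeros(m)              # user bias and item biases

def fism_score(rated_items, i, alpha=alpha):
    others = [j for j in rated_items if j != i]     # R_u^+ \ {i}
    # (n_u^+ - 1)^(-alpha) agreement normalization, then dot with q_i
    return b_u + b_i[i] + len(others) ** (-alpha) * (P[others].sum(axis=0) @ Q[i])

print(fism_score([0, 2, 4], 2))          # item 2 does NOT score itself
```

With $\alpha = 0$ the normalization vanishes and the score reduces to a plain sum over the other rated items.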

9 FISMrmse Loss function (RMSE):
$\mathcal{L}(\cdot) = \sum_{i \in \mathcal{D}} \sum_{u \in \mathcal{C}} \left(r_{ui} - \hat{r}_{ui}\right)^2$
Estimated value:
$\hat{r}_{ui} = b_u + b_i + (n_u^+ - 1)^{-\alpha} \sum_{j \in \mathcal{R}_u^+ \setminus \{i\}} \mathbf{p}_j \mathbf{q}_i^T$
Regularized optimization problem:
$\min_{P,Q} \; \sum_{(u,i) \in R} \|r_{ui} - \hat{r}_{ui}\|_F^2 + \frac{\beta}{2}\left(\|\mathbf{P}\|_F^2 + \|\mathbf{Q}\|_F^2\right) + \frac{\lambda}{2}\|b_u\|_2^2 + \frac{\gamma}{2}\|b_i\|_2^2$

10 FISMrmse algorithm
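The algorithm figure for this slide did not survive extraction. As a rough sketch only (plain SGD on the squared error, with the paper's zero-sampling and bias regularization omitted for brevity; names like `sgd_step` are ours), one update on a single (u, i, r_ui) training tuple could look like:

```python
import numpy as np

# Toy setup: m items, k latent dimensions, small random factors.
rng = np.random.default_rng(2)
m, k, alpha, lr, beta = 5, 3, 0.5, 0.01, 0.001
P = rng.normal(scale=0.1, size=(m, k))   # item factors p_j
Q = rng.normal(scale=0.1, size=(m, k))   # item factors q_i
b_u, b_i = np.zeros(1), np.zeros(m)      # user and item biases

def sgd_step(rated_items, u, i, r_ui):
    others = [j for j in rated_items if j != i]          # R_u^+ \ {i}
    scale = len(others) ** (-alpha)                      # (n_u^+ - 1)^(-alpha)
    x = scale * P[others].sum(axis=0)                    # aggregated representation
    e = r_ui - (b_u[u] + b_i[i] + x @ Q[i])              # prediction error
    Q[i] += lr * (e * x - beta * Q[i])                   # gradient step on q_i
    P[others] += lr * (e * scale * Q[i] - beta * P[others])  # and on each p_j
    b_u[u] += lr * e
    b_i[i] += lr * e
    return e

err_first = abs(sgd_step([0, 2, 4], 0, 2, 1.0))
for _ in range(50):
    e = sgd_step([0, 2, 4], 0, 2, 1.0)
print(abs(e) < err_first)                # repeated steps shrink the error
```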

11 FISMauc Loss function (AUC):
$\mathcal{L}(\cdot) = \sum_{u \in \mathcal{C}} \sum_{i \in \mathcal{R}_u^+,\, j \in \mathcal{R}_u^-} \left((r_{ui} - r_{uj}) - (\hat{r}_{ui} - \hat{r}_{uj})\right)^2$
Estimated value:
$\hat{r}_{ui} = b_u + b_i + (n_u^+ - 1)^{-\alpha} \sum_{j \in \mathcal{R}_u^+ \setminus \{i\}} \mathbf{p}_j \mathbf{q}_i^T$
Regularized optimization problem:
$\min_{P,Q} \; \sum_{u \in \mathcal{C}} \sum_{i \in \mathcal{R}_u^+,\, j \in \mathcal{R}_u^-} \left\|(r_{ui} - r_{uj}) - (\hat{r}_{ui} - \hat{r}_{uj})\right\|_F^2 + \frac{\beta}{2}\left(\|\mathbf{P}\|_F^2 + \|\mathbf{Q}\|_F^2\right) + \frac{\gamma}{2}\|b_i\|_2^2$
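A small sketch of the ranking loss for a single user (the function name `auc_loss` is ours): for every rated item $i$ and unrated item $j$, it penalizes the squared gap between the true difference $r_{ui} - r_{uj}$ and the predicted one:

```python
import numpy as np

def auc_loss(r, r_hat, rated, unrated):
    """Sum over (i, j) pairs of ((r_i - r_j) - (r_hat_i - r_hat_j))^2
    for one user, with i rated and j unrated."""
    loss = 0.0
    for i in rated:
        for j in unrated:
            true_diff = r[i] - r[j]
            pred_diff = r_hat[i] - r_hat[j]
            loss += (true_diff - pred_diff) ** 2
    return loss

r = np.array([1.0, 0.0, 1.0, 0.0])       # implicit feedback: 1 = rated
r_hat = np.array([0.9, 0.1, 0.8, 0.3])   # model scores
print(auc_loss(r, r_hat, rated=[0, 2], unrated=[1, 3]))
```

A model that scores every rated item well above every unrated item makes each predicted difference close to 1 and drives this loss toward zero.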

12 FISMauc algorithm

13 Evaluation Datasets ML100K: ML100K-1, ML100K-2, ML100K-3
Netflix: Netflix-1, Netflix-2, Netflix-3 Yahoo Music: Yahoo-1, Yahoo-2, Yahoo-3 Only ML100K-3, Netflix-3, and Yahoo-2 are presented in the paper due to lack of space

14 Table 1. Datasets

15 Evaluation Methodology 5-fold Leave-One-Out Cross-Validation (LOOCV) Metrics: Hit Rate (HR)
$HR = \frac{\#hits}{\#users}$
Average Reciprocal Hit Rank (ARHR)
$ARHR = \frac{1}{\#users} \sum_{i=1}^{\#hits} \frac{1}{pos_i}$
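Both metrics can be computed directly from each user's top-N list and held-out item under leave-one-out evaluation (the function name `hit_rate_and_arhr` is ours):

```python
def hit_rate_and_arhr(topn_lists, heldout_items):
    """HR = #hits / #users; ARHR additionally weights each hit by the
    reciprocal of its 1-based position in the top-N list."""
    hits, arhr = 0, 0.0
    for topn, held in zip(topn_lists, heldout_items):
        if held in topn:
            hits += 1
            arhr += 1.0 / (topn.index(held) + 1)   # 1-based rank of the hit
    n_users = len(heldout_items)
    return hits / n_users, arhr / n_users

# Toy example: 3 users, N = 3 recommendations each; user 2's item is missed.
topn = [[5, 2, 9], [1, 4, 7], [3, 8, 6]]
held = [2, 7, 0]
hr, arhr = hit_rate_and_arhr(topn, held)
print(hr, arhr)
```

ARHR is always bounded above by HR, and equals it only when every hit sits at rank 1.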

16 Evaluation Comparison Algorithms ItemKNN(cos) ItemKNN(cprob)
ItemKNN(log) PureSVD BPRkNN BPRMF SLIM FISMrmse FISMauc

17 Experiment Results

18 Experiment Results

19 Experiment Results

20 Experiment Results

21 Experiment Results

22 Experiment Results

23 Experimental Results

24 Experimental Results

25 Conclusion This paper proposed a factored item similarity method (FISM), which learns the item similarities as the product of two low-rank matrices. FISM copes well with the data sparsity problem, and better estimators are achieved as the number of factors increases. FISM outperforms other state-of-the-art top-N recommendation algorithms.

26 Thank You

