Recommender Systems (adopted from Bing Liu @ UIC)

Road Map: Introduction; Content-based recommendation; Collaborative filtering based recommendation (k-nearest neighbor, association rules, matrix factorization).

Introduction Recommender systems are widely used on the Web to recommend products and services to users. Most e-commerce sites have such systems. These systems serve two important functions: they help users deal with information overload by recommending relevant products, and they help businesses make more profit by selling more products.

E.g., movie recommendation The most common scenario is the following: a set of users has initially rated some subset of movies (e.g., on a scale of 1 to 5) that they have already seen. These ratings serve as the input. The recommender system uses these known ratings to predict the rating that each user would give to the movies he/she has not yet rated. Recommendations of movies are then made to each user based on the predicted ratings.
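
A minimal sketch of this setting (toy numbers, not from the slides): a small user-movie rating matrix with NaN marking the unrated entries that the system must predict.

```python
import numpy as np

# Rows = users, columns = movies; ratings on a 1-5 scale.
# np.nan marks movies the user has not rated yet.
R = np.array([
    [5, 3, np.nan, 1],
    [4, np.nan, np.nan, 1],
    [1, 1, np.nan, 5],
    [np.nan, 1, 5, 4],
])

# The recommender's task is to predict the missing entries and then
# recommend to each user the unrated movies with the highest predictions.
to_predict = np.argwhere(np.isnan(R))
print(to_predict)  # (user, movie) index pairs that need predicted ratings
```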

Recommendation Problem

Different variations In some applications there is no rating information, while in others there are additional attributes about each user (e.g., age, gender, income, marital status) and/or about each movie (e.g., title, genre, director, leading actors or actresses). When there is no rating information, the system does not predict ratings but instead predicts the likelihood that a user will enjoy watching a movie.

The Recommendation Problem We have a set of users U and a set of items S to be recommended to the users. Let p be a utility function that measures the usefulness of item s (∈ S) to user u (∈ U), i.e., p: U × S → R, where R is a totally ordered set (e.g., non-negative integers or real numbers in a range). Objective: learn p based on the past data, and use p to predict the utility value of each item s (∈ S) to each user u (∈ U).
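
Written out as a formula (a standard formalization of this objective, not verbatim from the slide): once p has been learned, the item recommended to each user is the one with the highest predicted utility.

```latex
\[
p : U \times S \rightarrow R, \qquad
\forall u \in U: \; s'_{u} = \arg\max_{s \in S} \, p(u, s)
\]
```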

As Prediction Rating prediction, i.e., predict the rating score that a user is likely to give to an item that s/he has not seen or used before. E.g., rating on an unseen movie. In this case, the utility of item s to user u is the rating given to s by u. Item prediction, i.e., predict a ranked list of items that a user is likely to buy or use.

Two basic approaches Content-based recommendations: The user will be recommended items similar to the ones the user preferred in the past; Collaborative filtering (or collaborative recommendations): The user will be recommended items that people with similar tastes and preferences liked in the past. Hybrids: Combine collaborative and content-based methods.

Road Map: Introduction; Content-based recommendation; Collaborative filtering based recommendation (k-nearest neighbor, association rules, matrix factorization).

Content-Based Recommendation Perform item recommendations by predicting the utility of items for a particular user based on how “similar” the items are to those that he/she liked in the past. E.g., in a movie recommendation application, a movie may be represented by such features as specific actors, director, genre, subject matter, etc. The user’s interest or preference is also represented by the same set of features, called the user profile.

Content-based recommendation (contd) Recommendations are made by comparing the user profile with candidate items expressed in the same set of features. The top-k best matched or most similar items are recommended to the user. The simplest approach to content-based recommendation is to compute the similarity of the user profile with each item.
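
A minimal sketch of this matching step, assuming the user profile and the candidate items are already encoded as vectors over the same features (the feature values below are made up for illustration):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical features: [action, comedy, drama, director_X, actor_Y]
items = {
    "movie_A": np.array([1, 0, 0, 1, 1], dtype=float),
    "movie_B": np.array([0, 1, 1, 0, 0], dtype=float),
    "movie_C": np.array([1, 0, 1, 0, 1], dtype=float),
}
# User profile aggregated from the features of items the user liked before.
user_profile = np.array([0.9, 0.1, 0.2, 0.5, 0.8])

# Rank candidate items by similarity to the profile and recommend the top-k.
k = 2
ranked = sorted(items, key=lambda m: cosine(user_profile, items[m]), reverse=True)
print(ranked[:k])
```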

Road Map: Introduction; Content-based recommendation; Collaborative filtering based recommendation (k-nearest neighbor, association rules, matrix factorization).

Collaborative filtering Collaborative filtering (CF) is perhaps the most studied and also the most widely used recommendation approach in practice. We cover three CF techniques: k-nearest neighbor, association rule based prediction, and matrix factorization. Key characteristic of CF: it predicts the utility of items for a user based on the items previously rated by other like-minded users.

k-nearest neighbor kNN (also called the memory-based approach) utilizes the entire user-item database to generate predictions directly, i.e., there is no model building. This approach includes both user-based and item-based methods.

User-based kNN CF A user-based kNN collaborative filtering method consists of two primary phases: the neighborhood formation phase and the recommendation phase. There are many specific methods for both. Here we only introduce one for each phase.

Neighborhood formation phase Let the record (or profile) of the target user be u (represented as a vector), and the record of another user be v (v ∈ T). The similarity between the target user u and a neighbor v can be calculated using the Pearson correlation coefficient:
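
Written out, the standard form of this coefficient, computed over the set C of items co-rated by u and v, is:

```latex
\[
\mathrm{sim}(u, v) =
  \frac{\sum_{i \in C} (r_{u,i} - \bar{r}_{u})(r_{v,i} - \bar{r}_{v})}
       {\sqrt{\sum_{i \in C} (r_{u,i} - \bar{r}_{u})^{2}} \,
        \sqrt{\sum_{i \in C} (r_{v,i} - \bar{r}_{v})^{2}}}
\]
```

where r_{u,i} is u's rating of item i and r̄_u is u's average rating.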

Recommendation Phase Use the following formula to compute the rating prediction of item i for the target user u, where V is the set of the k most similar users and r_{v,i} is the rating of item i given by user v.
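
Written out (the standard mean-offset weighted average used in user-based kNN CF; r̄_u and r̄_v denote the average ratings of u and v):

```latex
\[
p(u, i) = \bar{r}_{u} +
  \frac{\sum_{v \in V} \mathrm{sim}(u, v)\,(r_{v,i} - \bar{r}_{v})}
       {\sum_{v \in V} |\mathrm{sim}(u, v)|}
\]
```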

Issue with the user-based kNN CF The problem with the user-based formulation of collaborative filtering is the lack of scalability: it requires the real-time comparison of the target user to all user records in order to generate predictions. A variation of this approach that remedies this problem is called item-based CF.

Item-based CF The item-based approach works by comparing items based on their pattern of ratings across users. The similarity of items i and j is computed as follows:
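
A common choice for this similarity (used widely in item-based CF, e.g., by Sarwar et al.) is the adjusted cosine, computed over the set U of users who have rated both items; whether the original slide used exactly this form is an assumption here:

```latex
\[
\mathrm{sim}(i, j) =
  \frac{\sum_{u \in U} (r_{u,i} - \bar{r}_{u})(r_{u,j} - \bar{r}_{u})}
       {\sqrt{\sum_{u \in U} (r_{u,i} - \bar{r}_{u})^{2}} \,
        \sqrt{\sum_{u \in U} (r_{u,j} - \bar{r}_{u})^{2}}}
\]
```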

Recommendation phase After computing the similarities between items, we select the set of k items most similar to the target item and generate a predicted value of user u’s rating, where J is the set of the k similar items.
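
Written out (the standard similarity-weighted average over the neighboring items):

```latex
\[
p(u, i) =
  \frac{\sum_{j \in J} \mathrm{sim}(i, j)\, r_{u,j}}
       {\sum_{j \in J} |\mathrm{sim}(i, j)|}
\]
```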

Road Map: Introduction; Content-based recommendation; Collaborative filtering based recommendation (k-nearest neighbor, association rules, matrix factorization).

Association rule-based CF Association rules can obviously be used for recommendation. Each transaction for association rule mining is the set of items bought by a particular user. We can then mine item association rules, e.g., buy_X, buy_Y -> buy_Z, and rank candidate items based on measures such as confidence.
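
A minimal sketch with made-up transactions, showing how support and confidence of a rule {X, Y} -> {Z} would be computed before ranking Z as a recommendation:

```python
# Each transaction is the set of items bought by one user (toy data).
transactions = [
    {"X", "Y", "Z"},
    {"X", "Y"},
    {"X", "Y", "Z", "W"},
    {"Y", "Z"},
    {"X", "W"},
]

def support(itemset):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # confidence({X, Y} -> {Z}) = support({X, Y, Z}) / support({X, Y})
    return support(antecedent | consequent) / support(antecedent)

lhs, rhs = {"X", "Y"}, {"Z"}
print(support(lhs | rhs))     # 0.4
print(confidence(lhs, rhs))   # 0.666...
```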

Road Map: Introduction; Content-based recommendation; Collaborative filtering based recommendation (k-nearest neighbor, association rules, matrix factorization).

Matrix factorization The idea of matrix factorization is to decompose a matrix M into the product of several factor matrices, i.e., M ≈ F_1 F_2 ⋯ F_n, where n can be any number, but it is usually 2 or 3.

CF using matrix factorization Matrix factorization has gained popularity for CF in recent years due to its superior performance in terms of both recommendation quality and scalability. Part of its success is due to the Netflix Prize contest for movie recommendation, which popularized a Singular Value Decomposition (SVD) based matrix factorization algorithm. The prize-winning method of the Netflix Prize contest employed an adapted version of SVD.

The abstract idea Matrix factorization is a latent factor model. Latent variables (also called features, aspects, or factors) are introduced to account for the underlying reasons for a user purchasing or using a product. Once the connections between the latent variables and the observed variables (user, product, rating, etc.) are estimated during training, recommendations can be made to users by computing their possible interactions with each product through the latent variables.

Netflix Prize Contest

Netflix Prize Task Training data: quadruples of the form (user, movie, rating, time). For our purpose here, we only use triplets, i.e., (user, movie, rating). For example, (132456, 13546, 4) means that the user with ID 132456 gave the movie with ID 13546 a rating of 4 (out of 5). Testing: predict the rating in each triplet (user, movie, ?).

SVD factorization The technique discussed here is based on the SVD method given by Simon Funk on his blog, the derivation of Funk’s method described by Wagman in the Netflix forums, and the paper by Takacs et al. The method was later improved by Koren et al., Paterek, and several other researchers.

Simon Funk’s SVD method The rating matrix is approximated by the product of a user factor matrix and a movie factor matrix, where U = [u1, u2, …, uI] and M = [m1, m2, …, mJ].
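
In the usual formulation (the exact notation here is an assumption), with u_i the K-dimensional factor vector of user i and m_j that of movie j, the approximation and the predicted rating are:

```latex
\[
R \approx U^{\mathsf{T}} M, \qquad
\hat{r}_{ij} = u_{i}^{\mathsf{T}} m_{j} = \sum_{k=1}^{K} u_{ki}\, m_{kj}
\]
```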

SVD method (contd) Let us use K = 90 latent aspects (K needs to be set experimentally). Then, each movie will be described by only ninety aspect values indicating how much that movie exemplifies each aspect. Correspondingly, each user is also described by ninety aspect values indicating how much he/she prefers each aspect.

SVD method (contd) To combine these together into a rating, we multiply each user preference value by the corresponding movie aspect value and then sum them up, giving a rating that indicates how much that user likes that movie, i.e., the dot product of u_i and m_j. Using SVD, we can perform this task of finding U = [u1, u2, …, uI] and M = [m1, m2, …, mJ].

SVD method (contd) SVD is a mathematical way to find these two smaller matrices such that the resulting approximation error, the mean squared error (MSE), is minimized. We can then use the resulting matrices U and M to predict the ratings in the test set.
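
Written out, the basic objective is the squared error over the set O of observed training ratings (Funk's actual implementation also adds a regularization term, omitted here):

```latex
\[
E = \sum_{(i,j) \in O} \Bigl( r_{ij} - \sum_{k=1}^{K} u_{ki}\, m_{kj} \Bigr)^{2}
\]
```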

SVD method (contd) To minimize the error, the gradient descent approach is used. For gradient descent, we take the partial derivative of the squared error with respect to each parameter, i.e., with respect to each u_{ki} and m_{kj}.
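
For a single observed rating r_{ij}, with error e_{ij} = r_{ij} − Σ_k u_{ki} m_{kj}, these partial derivatives work out to:

```latex
\[
\frac{\partial e_{ij}^{2}}{\partial u_{ki}} = -2\, e_{ij}\, m_{kj},
\qquad
\frac{\partial e_{ij}^{2}}{\partial m_{kj}} = -2\, e_{ij}\, u_{ki}
\]
```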

The final update rules By the same reasoning, we can also compute the update rule for m_{kj}. Finally, we have both update rules, and the final prediction uses the dot-product rating formula given earlier.
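
In their standard stochastic form (with learning rate η and error e_{ij} as defined above; the exact constants on the slide are not reproduced here), the two update rules are:

```latex
\[
u_{ki} \leftarrow u_{ki} + \eta\, e_{ij}\, m_{kj},
\qquad
m_{kj} \leftarrow m_{kj} + \eta\, e_{ij}\, u_{ki}
\]
```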

Further improvements The two basic rules need some improvements to make them work well. There is also some pre-processing; time information was added later; etc. Note: Funk used stochastic gradient descent, not batch (global) gradient descent.
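
To make the procedure concrete, here is a minimal sketch of Funk-style SVD trained with stochastic gradient descent; the learning rate, regularization strength, factor count, and epoch count are illustrative assumptions, not values from the slides:

```python
import numpy as np

def funk_svd(ratings, n_users, n_items, K=10, lr=0.005, reg=0.02, epochs=50, seed=0):
    """ratings: list of (user, item, rating) triples with 0-based integer ids."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, K))  # user factor vectors
    M = rng.normal(scale=0.1, size=(n_items, K))  # item (movie) factor vectors
    for _ in range(epochs):
        for u, i, r in ratings:                   # stochastic: one rating at a time
            pu = U[u].copy()
            err = r - pu @ M[i]                   # prediction error for this rating
            U[u] += lr * (err * M[i] - reg * pu)  # update rule plus L2 regularization
            M[i] += lr * (err * pu - reg * M[i])
    return U, M

# Toy usage: 3 users, 3 movies, a handful of observed ratings.
data = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (2, 2, 1), (2, 1, 2)]
U, M = funk_svd(data, n_users=3, n_items=3, K=4)
print(U[0] @ M[2])  # predicted rating of user 0 for movie 2
```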