Association Rule Mining


Association Rule Mining Introduction to Data Mining with Case Studies Author: G. K. Gupta

Introduction As noted earlier, huge amounts of data are stored electronically in many retail outlets due to the barcoding of goods sold. It is natural to try to find useful information in these mountains of data. A conceptually simple yet interesting technique is to find association rules in such large databases. The problem was formulated by Rakesh Agrawal at IBM. Basket analysis is useful in determining which products customers are likely to purchase together. The analysis can also reveal which purchases are made together over a period of time; for example, a person who has bought a lounge suite may be likely to buy a TV next. This type of analysis is useful in marketing (e.g. couponing and advertising), store layout, etc.

Introduction Association rule mining (or market basket analysis) searches for interesting customer habits by looking at associations. The classical example is a store in the USA that was reported to have discovered that people buying nappies tend also to buy beer, though it is not certain this actually happened. Applications include marketing, store layout, customer segmentation, medicine, finance, and many more.

A Simple Example Consider the following ten transactions of a shop selling only nine items. We will return to it after explaining some terminology.

Terminology Let the number of different items sold be n (here 9) and the number of transactions be N (here 10). Let the set of items be {i1, i2, …, in}; the number of items may be large, perhaps several thousand. Let the set of transactions be {t1, t2, …, tN}. Each transaction ti contains a subset of items from the itemset {i1, i2, …, in}; these are the things a customer buys on a visit to the supermarket. N is assumed to be large, perhaps in the millions. We do not consider the quantities of items bought.

Terminology We want to find groups of items that tend to occur together frequently. Association rules are often written as X → Y, meaning that whenever X appears, Y also tends to appear. X and Y may be single items or sets of items, but the same item does not appear in both. X is called the antecedent and Y the consequent.

Terminology Suppose X and Y appear together in only 10% of the transactions, but whenever X appears there is an 80% chance that Y also appears. The 10% presence of X and Y together is called the support (or prevalence) of the rule, and the 80% is called the confidence (or predictability) of the rule. These are measures of the interestingness of the rule. It is also important to define the concept of lift, which is commonly used in mail-order marketing. Suppose 5% of the customers of a store buy a TV, but customers who buy a lounge suite are much more likely to buy a TV; let that likelihood be 20%. The ratio of 20% to 5% (i.e. 4) is called the lift. Lift essentially measures the strength of the relationship X → Y.

Terminology Confidence denotes the strength of the association between X and Y. Support indicates the frequency of the pattern; a minimum support is necessary if an association is to be of business value. If the chance of finding item X in the N transactions is x%, then the probability of X is P(X) = x/100, since probability values always lie between 0.0 and 1.0.

Terminology Now suppose we have two items X and Y with probabilities P(X) = 0.2 and P(Y) = 0.1. What does the product P(X)P(Y) mean? What is the chance of both items X and Y appearing together, that is, P(X and Y) = P(X ∪ Y)? (In association rule notation, X ∪ Y denotes the itemset containing the items of both X and Y.)

Terminology Suppose we know P(X ∪ Y); what then is the probability of Y appearing if we know that X already exists? The support for X → Y is the probability of both X and Y appearing together, that is, P(X ∪ Y). The confidence of X → Y is the conditional probability of Y appearing given that X exists; it is written P(Y|X) and read as "P of Y given X".

Terminology Sometimes the term lift is also used. Lift is defined as support(X ∪ Y) / (P(X)P(Y)); here P(X)P(Y) is the probability of X and Y appearing together if the two occur independently at random. As an example, if the support of X and Y together is 1%, and X appears in 4% of the transactions while Y appears in 2%, then lift = 0.01/(0.04 × 0.02) = 12.5. What does this tell us about X and Y? What if the lift were 1?
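These definitions are easy to check with a minimal sketch in Python. Using a support of 1% for X and Y together, with P(X) = 4% and P(Y) = 2%, the lift works out to 0.01/(0.04 × 0.02) = 12.5:

```python
# A minimal sketch of the measures of interestingness.
# Figures follow the worked example: support(X and Y) = 1%,
# P(X) = 4%, P(Y) = 2%.

def confidence(p_xy, p_x):
    # P(Y|X): how often Y appears when X does.
    return p_xy / p_x

def lift(p_xy, p_x, p_y):
    # Observed co-occurrence versus that expected under independence.
    return p_xy / (p_x * p_y)

p_xy, p_x, p_y = 0.01, 0.04, 0.02
print(round(confidence(p_xy, p_x), 4))  # 0.25
print(round(lift(p_xy, p_x, p_y), 4))   # 12.5
```

A lift well above 1, as here, means X and Y occur together far more often than independence would predict; a lift of exactly 1 would mean no association at all.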

The task We want to find all associations that have at least p% support and at least q% confidence, such that all rules satisfying any user constraints are found, the rules are found efficiently from large databases, and the rules found are actionable. As an example, suppose a furniture and appliance store has 100,000 sales records, of which 5,000 contain lounge suites (5%) and 10,000 contain TVs (10%). If the items were independent, we would expect the 5,000 lounge-suite records to contain about 10% TVs (i.e. 500). If the number of TV sales in those 5,000 records is in fact 2,000, then we have a confidence of 40% and a lift of 4.

A Naïve Algorithm Consider an example with n = 4 items (Bread, Cheese, Juice, Milk) and N = 4 transactions. We want to find rules with a minimum support of 50% and a minimum confidence of 75%.

Example Counting the N = 4 transactions gives the following frequencies. All four items have the 50% support we require, only two pairs have 50% support, and no 3-itemset has it. We will examine the two pairs that have the minimum support to find whether they have the required confidence.

Terminology Items or itemsets that have the minimum support are called frequent. In our example, all four items and two pairs are frequent. We now determine whether the two pairs {Bread, Cheese} and {Cheese, Juice} lead to association rules with 75% confidence. Every pair {A, B} can lead to two rules, A → B and B → A, if both satisfy the minimum confidence. The confidence of A → B is given by the support for A and B together divided by the support of A.

Rules There are four possible rules, with the following confidences: Bread → Cheese with confidence 2/3 = 67%; Cheese → Bread with confidence 2/3 = 67%; Cheese → Juice with confidence 2/3 = 67%; Juice → Cheese with confidence 2/2 = 100%. Therefore only the last rule, Juice → Cheese, has confidence above the minimum 75% and qualifies. Rules that meet the user-specified minimum confidence are called confident.
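The original transaction table appears as a figure in the source slides. The four transactions below are a hypothetical set chosen to reproduce the counts and confidences quoted above, and the sketch carries out the naïve algorithm end to end:

```python
from itertools import combinations
from collections import Counter

# Hypothetical four transactions, consistent with the example's figures.
transactions = [
    {"Bread", "Cheese", "Juice"},
    {"Cheese", "Juice", "Milk"},
    {"Bread", "Cheese"},
    {"Bread", "Milk"},
]
N = len(transactions)
min_support, min_conf = 0.5, 0.75

# Count the support of every itemset that actually occurs.
counts = Counter()
for t in transactions:
    for k in range(1, len(t) + 1):
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1

frequent = {s: c for s, c in counts.items() if c / N >= min_support}

# Generate both candidate rules A -> B and B -> A from each frequent pair.
for pair in (s for s in frequent if len(s) == 2):
    for a, b in (pair, pair[::-1]):
        conf = counts[pair] / counts[(a,)]
        status = "qualifies" if conf >= min_conf else "rejected"
        print(f"{a} -> {b}: confidence {conf:.0%} ({status})")
```

Running this reproduces the four confidences above, with only Juice → Cheese qualifying.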

Problems with Brute Force This simple algorithm works well with four items, since there are only 2^4 = 16 combinations to examine, but the number of combinations with n items is 2^n (why?). It reaches about a million with 20 items and becomes astronomically large (about 10^30) with 100 items. The naïve algorithm can be improved to deal more effectively with larger data sets.
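The exponential growth is easy to verify:

```python
# The brute-force search space over n items is 2**n item combinations.
for n in (4, 20, 100):
    print(f"n = {n:3d}: {2 ** n} combinations")
```

Already at n = 20 there are 1,048,576 combinations, and at n = 100 the count (about 1.27 × 10^30) is far beyond anything that could be enumerated.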

Improved Brute Force Rather than counting all possible item combinations, we can look at each transaction and count only the combinations that actually occur in it (that is, we do not count itemsets with zero frequency).

Improved Brute Force Removing zero frequencies leads to a smaller table of counts. We then proceed as before, but the improvement is not large and better techniques are needed.

The Apriori Algorithm To find associations, the classical Apriori algorithm may be simply described as a two-step approach: Step 1 ─ discover all frequent itemsets that have support above the minimum support required; Step 2 ─ use the frequent itemsets to generate the association rules that have a high enough confidence level.

Apriori Algorithm Terminology A k-itemset is a set of k items. The set Ck is a set of candidate k-itemsets that are potentially frequent. The set Lk is a subset of Ck and is the set of k-itemsets that are frequent.

Step-by-step Apriori Algorithm Step 1 – Computing L1 Scan all transactions. Find all frequent items that have support above the required p%. Let these frequent items be labeled L1. Step 2 – Apriori-gen Function Use the frequent items L1 to build all possible item pairs like {Bread, Cheese} if Bread and Cheese are in L1. The set of these item pairs is called C2, the candidate set.

The Apriori Algorithm Step 3 – Pruning Scan all transactions and find all pairs in the candidate set C2 that are frequent. Let these frequent pairs be L2. Step 4 – General rule A generalization of Step 2: build the candidate set of k-itemsets, Ck, by combining frequent itemsets in the set Lk-1.

The Apriori Algorithm Step 5 – Pruning A generalization of Step 3: scan all transactions and find all itemsets in Ck that are frequent. Let these frequent itemsets be Lk. Step 6 – Continue Repeat from Step 4 unless Lk is empty. Step 7 – Stop Stop when Lk is empty.
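The steps above can be sketched in a few lines of Python. This is a minimal illustration of the level-wise idea, not the optimized candidate generation of Agrawal and Srikant's paper, and it is run on a small hypothetical transaction set:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all frequent itemsets, level by level (Steps 1-7)."""
    N = len(transactions)

    def support_count(candidates):
        return {c: sum(1 for t in transactions if c <= t) for c in candidates}

    # Step 1: frequent single items (L1).
    items = {frozenset([i]) for t in transactions for i in t}
    L = {c for c, n in support_count(items).items() if n / N >= min_support}
    frequent = set(L)
    k = 2
    while L:  # Steps 4-7: grow candidates until Lk is empty.
        # Apriori-gen: join Lk-1 with itself to form candidate k-itemsets Ck.
        C = {a | b for a in L for b in L if len(a | b) == k}
        # Prune any candidate with an infrequent (k-1)-subset.
        C = {c for c in C
             if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        # Scan all transactions to keep the truly frequent itemsets (Lk).
        L = {c for c, n in support_count(C).items() if n / N >= min_support}
        frequent |= L
        k += 1
    return frequent

# Hypothetical data for illustration.
transactions = [
    {"Bread", "Cheese", "Juice"},
    {"Cheese", "Juice", "Milk"},
    {"Bread", "Cheese"},
    {"Bread", "Milk"},
]
print(sorted(sorted(s) for s in apriori(transactions, 0.5)))
```

Note how the pruning step uses the key Apriori property: every subset of a frequent itemset must itself be frequent, so a candidate with any infrequent subset can be discarded without scanning the transactions.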

Transactions Storage Consider only eight transactions with transaction IDs {10, 20, 30, 40, 50, 60, 70, 80}. This set of eight transactions over six items can be represented in at least three different ways, as follows.

Horizontal Representation Each transaction ID is stored together with the list of items that the transaction contains.

Vertical Representation Each item is stored together with the list of transaction IDs in which it appears.