Data & Text Mining: Introduction to Association Analysis. Zhangxi Lin, ISQS 3358, Texas Tech University.

Outline
- Basic concepts
- Itemset generation: the Apriori principle
- Association rule discovery and generation
- Evaluation of association patterns
- Sequential pattern analysis

Basic Concepts

Association Rule Mining
Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction.
Examples of association rules over market-basket transactions:
{Diaper} → {Beer}
{Milk, Bread} → {Eggs, Coke}
{Beer, Bread} → {Milk}
Implication here means co-occurrence, not causality!

Definition: Frequent Itemset
- Itemset: a collection of one or more items, e.g. {Milk, Bread, Diaper}.
- k-itemset: an itemset that contains k items.
- Support count (σ): the frequency of occurrence of an itemset, e.g. σ({Milk, Bread, Diaper}) = 2.
- Support (s): the fraction of transactions that contain an itemset, e.g. s({Milk, Bread, Diaper}) = 2/5.
- Frequent itemset: an itemset whose support is greater than or equal to a minsup threshold.

Definition: Association Rule
- Association rule: an implication expression of the form X → Y, where X and Y are itemsets, e.g. {Milk, Diaper} → {Beer}.
- Rule evaluation metrics:
  - Support (s): the fraction of transactions that contain both X and Y.
  - Confidence (c): how often items in Y appear in transactions that contain X.

Worked example (checking account CKG, saving account SVG):
Count(CKG, SVG) = 1, so Support = 1/5 = 20%
Count(CKG) = 3, so Confidence = 1/3 = 0.33
Count(~CKG) = 2 and Count(~CKG, SVG) = 2, so Confidence(~CKG → SVG) = 2/2 = 100%

Formal Definitions
Support: s(X → Y) = σ(X ∪ Y) / N, where N is the total number of transactions.
Confidence: c(X → Y) = σ(X ∪ Y) / σ(X).
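As a sketch of these definitions, the snippet below computes s and c for {Milk, Diaper} → {Beer}. The five transactions are an assumption: they are the market baskets that usually accompany this example in Tan, Steinbach, and Kumar, since the deck's transaction table did not survive.

```python
# Support and confidence of a rule X -> Y, per the formal definitions above.
# The five transactions are assumed (the standard textbook market baskets).
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def sigma(itemset):
    """Support count: number of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

def rule_metrics(X, Y):
    """Return (support, confidence) of the rule X -> Y."""
    s = sigma(X | Y) / len(transactions)  # s(X -> Y) = sigma(X u Y) / N
    c = sigma(X | Y) / sigma(X)           # c(X -> Y) = sigma(X u Y) / sigma(X)
    return s, c

s, c = rule_metrics({"Milk", "Diaper"}, {"Beer"})
print(s, round(c, 2))  # 0.4 0.67
```

These values match the rule {Milk, Diaper} → {Beer} (s=0.4, c=0.67) used later in the deck.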

Itemset Generation: the Apriori Principle

Association Rule Mining Task
Given a set of transactions T, the goal of association rule mining is to find all rules having
- support ≥ minsup threshold
- confidence ≥ minconf threshold
Brute-force approach:
- List all possible association rules.
- Compute the support and confidence for each rule.
- Prune rules that fail the minsup and minconf thresholds.
- Computationally prohibitive!

Mining Association Rules
Example rules from the itemset {Milk, Diaper, Beer}:
{Milk, Diaper} → {Beer} (s=0.4, c=0.67)
{Milk, Beer} → {Diaper} (s=0.4, c=1.0)
{Diaper, Beer} → {Milk} (s=0.4, c=0.67)
{Beer} → {Milk, Diaper} (s=0.4, c=0.67)
{Diaper} → {Milk, Beer} (s=0.4, c=0.5)
{Milk} → {Diaper, Beer} (s=0.4, c=0.5)
Observations:
- All of the rules above are binary partitions of the same itemset {Milk, Diaper, Beer}.
- Rules originating from the same itemset have identical support but can have different confidence.
- Thus we may decouple the support and confidence requirements.

Mining Association Rules
Two-step approach:
1. Frequent itemset generation: generate all itemsets whose support ≥ minsup.
2. Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset.
Frequent itemset generation is still computationally expensive.

Frequent Itemset Generation
Given d items, there are 2^d possible candidate itemsets.

Frequent Itemset Generation
Brute-force approach:
- Each itemset in the lattice is a candidate frequent itemset.
- Count the support of each candidate by scanning the database, matching each transaction against every candidate.
- Complexity is O(NMw), where N is the number of transactions, M the number of candidates, and w the average transaction width. Expensive, since M = 2^d!
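The brute-force scheme can be written directly: enumerate all 2^d itemsets and scan the database once per candidate. The transactions below are again the assumed five-basket textbook example, and minsup = 0.6 is an illustrative choice.

```python
# Brute-force frequent itemset generation: enumerate all 2^d candidates
# and count each one with a full database scan (O(N * M * w), M = 2^d).
from itertools import combinations

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
]
items = sorted(set().union(*transactions))
minsup = 0.6  # illustrative threshold (fraction of transactions)

frequent = {}
for k in range(1, len(items) + 1):
    for cand in map(frozenset, combinations(items, k)):
        s = sum(1 for t in transactions if cand <= t) / len(transactions)
        if s >= minsup:
            frequent[cand] = s

print(len(frequent))  # 8 frequent itemsets at minsup = 0.6
```

With d = 6 distinct items this already enumerates 63 candidates; the exponential growth in d is exactly why the pruning strategies on the next slide matter.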

Frequent Itemset Generation Strategies
Reduce the number of candidates (M):
- Complete search has M = 2^d; use pruning techniques to reduce M.
Reduce the number of transactions (N):
- Reduce the size of N as the size of the itemset increases.
- Used by DHP and vertical-based mining algorithms.
Reduce the number of comparisons (NM):
- Use efficient data structures to store the candidates or transactions.
- No need to match every candidate against every transaction.

Reducing the Number of Candidates
Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent.
The Apriori principle holds due to the following property of the support measure: the support of an itemset never exceeds the support of its subsets. This is known as the anti-monotone property of support.

Illustrating the Apriori Principle
(Lattice figures: once an itemset is found to be infrequent, all of its supersets are pruned without counting their support.)

Apriori Algorithm
Method:
1. Let k = 1.
2. Generate frequent itemsets of length 1.
3. Repeat until no new frequent itemsets are identified:
   - Generate length-(k+1) candidate itemsets from the length-k frequent itemsets.
   - Prune candidate itemsets containing any length-k subset that is infrequent.
   - Count the support of each candidate by scanning the DB.
   - Eliminate candidates that are infrequent, leaving only those that are frequent.
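The steps above can be sketched as a short level-wise loop (join, prune, count, eliminate). This is a minimal sketch, assuming the standard five-basket transactions rather than data from the slides:

```python
# Apriori sketch: level-wise candidate generation with subset pruning.
from itertools import combinations

def apriori(transactions, minsup):
    n = len(transactions)
    support = lambda c: sum(1 for t in transactions if c <= t) / n

    # Step 1-2: frequent itemsets of length 1.
    items = set().union(*transactions)
    Lk = {frozenset([i]) for i in items if support(frozenset([i])) >= minsup}
    frequent, k = set(Lk), 1
    while Lk:
        # Join: merge frequent k-itemsets into (k+1)-candidates.
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k + 1}
        # Prune: every length-k subset of a candidate must be frequent.
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk for s in combinations(c, k))}
        # Count supports by scanning the DB; eliminate infrequent candidates.
        Lk = {c for c in candidates if support(c) >= minsup}
        frequent |= Lk
        k += 1
    return frequent

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]
result = apriori(transactions, minsup=0.6)
print(len(result))  # 8
```

On this data the prune step discards, e.g., {Bread, Diaper, Beer}, because its subset {Bread, Beer} is not frequent, so its support is never counted.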

Association Rule Discovery and Generation

Reducing the Number of Comparisons
Candidate counting:
- Scan the database of transactions to determine the support of each candidate itemset.
- To reduce the number of comparisons, store the candidates in a hash structure: instead of matching each transaction against every candidate, match it only against the candidates contained in the hashed buckets.
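A much-simplified version of this idea is sketched below: each candidate is bucketed by its lexicographically smallest item, so a transaction is matched only against buckets keyed by items it actually contains. A real implementation uses a hash tree; the candidate list and transactions here are illustrative assumptions.

```python
# Simplified candidate hashing: bucket each candidate by its smallest item,
# so each transaction only visits buckets for items it contains.
from collections import defaultdict

candidates = [frozenset(c) for c in
              ({"Bread", "Milk"}, {"Bread", "Diaper"}, {"Milk", "Diaper"},
               {"Diaper", "Beer"}, {"Beer", "Coke"})]

buckets = defaultdict(list)
for cand in candidates:
    buckets[min(cand)].append(cand)   # each candidate lands in one bucket

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

counts = defaultdict(int)
for t in transactions:
    for item in t:                    # only buckets keyed by items in t
        for cand in buckets.get(item, ()):
            if cand <= t:
                counts[cand] += 1

print(counts[frozenset({"Bread", "Milk"})])  # 3
```

Each candidate lives in exactly one bucket, so no candidate is double-counted, and buckets whose key is absent from a transaction are never scanned.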

Factors Affecting Complexity
Choice of minimum support threshold:
- Lowering the support threshold results in more frequent itemsets.
- This may increase the number of candidates and the maximum length of frequent itemsets.
Dimensionality (number of items) of the data set:
- More space is needed to store the support count of each item.
- If the number of frequent items also increases, both computation and I/O costs may increase.
Size of the database:
- Since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions.
Average transaction width:
- Transaction width increases with denser data sets.
- This may increase the maximum length of frequent itemsets and the number of hash-tree traversals (the number of subsets in a transaction increases with its width).

Rule Generation
Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement.
If {A,B,C,D} is a frequent itemset, the candidate rules are:
ABC → D, ABD → C, ACD → B, BCD → A,
A → BCD, B → ACD, C → ABD, D → ABC,
AB → CD, AC → BD, AD → BC, BC → AD, BD → AC, CD → AB
If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L).
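The candidate-rule enumeration can be sketched directly; running it on {A, B, C, D} confirms the 2^k − 2 count.

```python
# Enumerate all candidate rules f -> L - f for non-empty proper subsets f.
from itertools import combinations

def candidate_rules(L):
    L = frozenset(L)
    rules = []
    for r in range(1, len(L)):          # subset sizes 1 .. k-1
        for f in combinations(sorted(L), r):
            f = frozenset(f)
            rules.append((f, L - f))    # antecedent, consequent
    return rules

rules = candidate_rules({"A", "B", "C", "D"})
print(len(rules))  # 14 == 2**4 - 2
```

Excluding r = 0 and r = k drops exactly the two degenerate rules ∅ → L and L → ∅, leaving 2^k − 2 candidates.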

Rule Generation
How can we efficiently generate rules from frequent itemsets?
- In general, confidence does not have an anti-monotone property: c(ABC → D) can be larger or smaller than c(AB → D).
- But the confidence of rules generated from the same itemset does have an anti-monotone property, e.g. for L = {A,B,C,D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD).
- Confidence is anti-monotone with respect to the number of items on the RHS of the rule.

Rule Generation for the Apriori Algorithm
(Figure: a lattice of rules; once a low-confidence rule is found, the rules below it in the lattice are pruned.)

Rule Generation for the Apriori Algorithm
A candidate rule is generated by merging two rules that share the same prefix in the rule consequent: join(CD → AB, BD → AC) produces the candidate rule D → ABC.
Prune the rule D → ABC if its subset rule AD → BC does not have high confidence.

Demonstration
A bank wants to examine its customer base and understand which of its products individual customers own in combination with one another. It has chosen to conduct a market-basket analysis of a sample of its customer base. The bank has a data set that lists the banking products/services used by 7,991 customers.
Data set: BANK
Variables:
- ACCT: ID, nominal, account number
- SERVICE: target, nominal, type of service
- VISIT: sequence, ordinal, order of product purchase

Evaluation of Association Patterns

Contingency Table

                        Checking Account
                        No       Yes      Total
Saving Account   No     500      3,500    4,000
                 Yes    1,000    5,000    6,000
                 Total  1,500    8,500    10,000

Support(SVG → CK) = 5,000 / 10,000 = 50%
Confidence(SVG → CK) = 5,000 / 6,000 ≈ 83%
Lift(SVG → CK) = 0.83 / 0.85 < 1

Statistical Independence
Population of 1000 students:
- 600 students know how to swim (S)
- 700 students know how to bike (B)
- 420 students know how to swim and bike (S, B)
P(S, B) = 420/1000 = 0.42 and P(S) × P(B) = 0.6 × 0.7 = 0.42
- P(S, B) = P(S) × P(B) ⇒ statistical independence
- P(S, B) > P(S) × P(B) ⇒ positively correlated
- P(S, B) < P(S) × P(B) ⇒ negatively correlated
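The swim/bike example can be checked in a couple of lines (using an approximate comparison, since 0.6 × 0.7 is not exactly representable in binary floating point):

```python
import math

# Statistical-independence check for the student example above:
# 1000 students, 600 swim (S), 700 bike (B), 420 do both.
n = 1000
swim, bike, both = 600, 700, 420

p_joint = both / n                     # P(S, B) = 0.42
p_indep = (swim / n) * (bike / n)      # P(S) * P(B) = 0.42
print(math.isclose(p_joint, p_indep))  # True -> statistically independent
```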

Statistical-based Measures
Measures that take statistical dependence into account, such as lift: Lift(X → Y) = c(X → Y) / s(Y), equivalently Interest = s(X, Y) / (s(X) × s(Y)).

Example: Lift/Interest

           Coffee   No Coffee   Total
Tea        15       5           20
No Tea     75       5           80
Total      90       10          100

Association rule: Tea → Coffee
Confidence = P(Coffee | Tea) = 15/20 = 0.75, but P(Coffee) = 0.9
Lift = 0.75 / 0.9 ≈ 0.83 (< 1, therefore Tea and Coffee are negatively associated)
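The confidence and lift from this contingency table reduce to a few lines of arithmetic:

```python
# Confidence and lift of Tea -> Coffee from the contingency table above.
n = 100
tea_and_coffee = 15   # transactions containing both Tea and Coffee
tea = 20              # transactions containing Tea
coffee = 90           # transactions containing Coffee

confidence = tea_and_coffee / tea      # P(Coffee | Tea)
lift = confidence / (coffee / n)       # c(Tea -> Coffee) / s(Coffee)
print(confidence)      # 0.75
print(round(lift, 4))  # 0.8333 -> < 1, negatively associated
```

Even though the rule's confidence looks high (0.75), lift < 1 shows that buying tea actually lowers the chance of buying coffee relative to the base rate.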

Compared to a Confusion Matrix

             Computed Yes   Computed No   Total
Actual Yes   15             5             20
Actual No    75             5             80
Total        90             10            100

In classification, we are interested in P(Actual Yes | Computed Yes), i.e. P(Row | Column). In association analysis, we are interested in P(Column | Row).

Sequential Pattern Analysis

Examples of Sequence Data

Sequence database | Sequence                                      | Element (transaction)                                                    | Event (item)
Customer          | Purchase history of a given customer          | A set of items bought by a customer at time t                            | Books, dairy products, CDs, etc.
Web data          | Browsing activity of a particular Web visitor | A collection of files viewed by a Web visitor after a single mouse click | Home page, index page, contact info, etc.
Event data        | History of events generated by a given sensor | Events triggered by a sensor at time t                                   | Types of alarms generated by sensors
Genome sequences  | DNA sequence of a particular species          | An element of the DNA sequence                                           | Bases A, T, G, C

A sequence is an ordered list of elements (transactions), e.g. (E1 E2), (E1 E3), (E2), (E3 E4), (E2), where each element is a set of events (items).

Examples of Sequences
- A Web sequence
- The sequence of initiating events causing the nuclear accident at Three Mile Island
- The sequence of books checked out at a library

Sequential Pattern Mining: Definition
Given:
- a database of sequences
- a user-specified minimum support threshold, minsup
Task:
- find all subsequences with support ≥ minsup
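Subsequence containment and support can be sketched as below. A pattern (s1, s2, ...) is contained in a sequence when each pattern element can be matched, in order, to a distinct element set of the sequence; the database here is illustrative, not from the slides.

```python
# Subsequence containment and support for sequences of element sets.
def contains(sequence, pattern):
    """True if pattern's element sets match, in order, element sets of sequence."""
    i = 0
    for element in sequence:
        if i < len(pattern) and pattern[i] <= element:
            i += 1  # greedy earliest match is safe for containment
    return i == len(pattern)

def support(database, pattern):
    """Fraction of sequences in the database that contain the pattern."""
    return sum(contains(seq, pattern) for seq in database) / len(database)

database = [
    [{"A"}, {"B", "C"}, {"D"}],
    [{"A", "B"}, {"C"}],
    [{"C"}, {"B"}, {"A"}],
    [{"A"}, {"C"}],
]
print(support(database, [{"A"}, {"C"}]))  # 0.75
```

The third sequence does not support ({A}, {C}) because its only C occurs before its only A, which is exactly the ordering constraint that distinguishes sequential patterns from plain itemsets.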

Sequential Pattern Mining: Example
Minsup = 50%. (Table: example frequent subsequences, with supports of 60%, 80%, and 60%.)