Association Rule Mining

The Task

Two ways of defining the task:

General
- Input: a collection of instances
- Output: rules to predict the values of any attribute(s) (not just the class attribute) from the values of other attributes
- E.g. if temperature = cool then humidity = normal
- If the right-hand side of a rule has only the class attribute, then the rule is a classification rule
- Distinction: classification rules are applied together, as sets of rules

Specific: market-basket analysis
- Input: a collection of transactions
- Output: rules to predict the occurrence of any item(s) from the occurrence of other items in a transaction
- E.g. {Milk, Diaper} -> {Beer}

General rule structure: Antecedents -> Consequents

© Tan, Steinbach, Kumar, Introduction to Data Mining

Association Rule Mining

Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction.

Market-basket transactions (table not reproduced here). Examples of association rules:

{Diaper} -> {Beer}
{Milk, Bread} -> {Eggs, Coke}
{Beer, Bread} -> {Milk}

Implication means co-occurrence, not causality!

Definition: Frequent Itemset

- Itemset: a collection of one or more items, e.g. {Milk, Bread, Diaper}
  - k-itemset: an itemset that contains k items
- Support count (σ): the frequency of occurrence of an itemset
  - E.g. σ({Milk, Bread, Diaper}) = 2
- Support (s): the fraction of transactions that contain an itemset
  - E.g. s({Milk, Bread, Diaper}) = 2/5
- Frequent itemset: an itemset whose support is greater than or equal to a minsup threshold
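These two definitions translate directly into code. A minimal Python sketch, assuming the standard five-transaction market-basket example that accompanies this material (the transaction table itself is not shown on the slides, so the dataset below is an assumption; `support_count` and `support` are hypothetical helper names):

```python
# Assumed five-transaction dataset (the standard textbook example)
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(X): number of transactions containing every item in X."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset, transactions):
    """s(X): fraction of transactions containing X."""
    return support_count(itemset, transactions) / len(transactions)

print(support_count({"Milk", "Bread", "Diaper"}, transactions))  # 2
print(support({"Milk", "Bread", "Diaper"}, transactions))        # 0.4
```

With minsup = 0.6, for instance, {Milk, Bread, Diaper} (support 0.4) would not be a frequent itemset.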

Definition: Association Rule

- Association rule: an implication expression of the form X -> Y, where X and Y are disjoint itemsets
  - Example: {Milk, Diaper} -> {Beer}
- Rule evaluation metrics:
  - Support (s): the fraction of transactions that contain both X and Y
  - Confidence (c): how often items in Y appear in transactions that contain X
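Both metrics can be computed from support counts alone. A sketch, again assuming the standard five-transaction example (the `rule_metrics` helper is a hypothetical name):

```python
# Assumed five-transaction dataset (the standard textbook example)
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset):
    return sum(1 for t in transactions if itemset <= t)

def rule_metrics(X, Y):
    """Return (support, confidence) of the rule X -> Y."""
    s = support_count(X | Y) / len(transactions)  # fraction containing both X and Y
    c = support_count(X | Y) / support_count(X)   # of those containing X, how many contain Y
    return s, c

s, c = rule_metrics({"Milk", "Diaper"}, {"Beer"})
print(round(s, 2), round(c, 2))  # 0.4 0.67
```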

Association Rule Mining Task

Given a set of transactions T, the goal of association rule mining is to find all rules having:
- support ≥ minsup threshold
- confidence ≥ minconf threshold

Brute-force approach:
- List all possible association rules
- Compute the support and confidence for each rule
- Prune rules that fail the minsup and minconf thresholds
- Computationally prohibitive!

Mining Association Rules

Example rules:

{Milk, Diaper} -> {Beer} (s=0.4, c=0.67)
{Milk, Beer} -> {Diaper} (s=0.4, c=1.0)
{Diaper, Beer} -> {Milk} (s=0.4, c=0.67)
{Beer} -> {Milk, Diaper} (s=0.4, c=0.67)
{Diaper} -> {Milk, Beer} (s=0.4, c=0.5)
{Milk} -> {Diaper, Beer} (s=0.4, c=0.5)

Observations:
- All of the above rules are binary partitions of the same itemset: {Milk, Diaper, Beer}
- Rules originating from the same itemset have identical support but can have different confidence
- Thus, we may decouple the support and confidence requirements
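The observation can be verified by enumerating every binary partition of {Milk, Diaper, Beer} and computing both metrics. A sketch, assuming the standard five-transaction example:

```python
from itertools import combinations

# Assumed five-transaction dataset (the standard textbook example)
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def sigma(X):
    return sum(1 for t in transactions if X <= t)

L = {"Milk", "Diaper", "Beer"}
for r in range(1, len(L)):
    for lhs in combinations(sorted(L), r):
        X = set(lhs)
        Y = L - X
        s = sigma(L) / len(transactions)  # identical for every partition of L
        c = sigma(L) / sigma(X)           # varies with the antecedent
        print(f"{sorted(X)} -> {sorted(Y)}: s={s:.2f}, c={c:.2f}")
```

Every line prints s=0.40, while the confidences range from 0.50 to 1.00, which is exactly why support and confidence can be checked in separate steps.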

Mining Association Rules

Two-step approach:

1. Frequent itemset generation: generate all itemsets whose support ≥ minsup
2. Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset

Frequent itemset generation is still computationally expensive.

Power Set

Given a set S, the power set P is the set of all subsets of S.

Known property of power sets: if S has n elements, P has N = 2^n elements.

Examples:
- For S = {}: P = {{}}, N = 2^0 = 1
- For S = {Milk}: P = {{}, {Milk}}, N = 2^1 = 2
- For S = {Milk, Diaper}: P = {{}, {Milk}, {Diaper}, {Milk, Diaper}}, N = 2^2 = 4
- For S = {Milk, Diaper, Beer}: P = {{}, {Milk}, {Diaper}, {Beer}, {Milk, Diaper}, {Diaper, Beer}, {Beer, Milk}, {Milk, Diaper, Beer}}, N = 2^3 = 8
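As a sketch, the power set can be generated with the standard itertools recipe (`power_set` is a hypothetical helper name):

```python
from itertools import chain, combinations

def power_set(S):
    """All subsets of S, from the empty set up to S itself (2^n of them)."""
    items = list(S)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

P = power_set({"Milk", "Diaper", "Beer"})
print(len(P))  # 8
```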

Brute-Force Approach to Frequent Itemset Generation

For an itemset with 3 elements we have 8 subsets. Each non-empty subset is a candidate frequent itemset that must be matched against every transaction.

1-itemsets:
Itemset                  Count
{Milk}                   4
{Diaper}                 4
{Beer}                   3

2-itemsets:
Itemset                  Count
{Milk, Diaper}           3
{Diaper, Beer}           3
{Beer, Milk}             2

3-itemsets:
Itemset                  Count
{Milk, Diaper, Beer}     2

Important observation: the count of a subset can never be smaller than the count of its superset!
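The brute-force counting described above can be sketched as follows, assuming the standard five-transaction example; note that every candidate requires its own full pass over the transaction list:

```python
from itertools import combinations

# Assumed five-transaction dataset (the standard textbook example)
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

items = {"Milk", "Diaper", "Beer"}
counts = {}
for r in range(1, len(items) + 1):
    for cand in combinations(sorted(items), r):
        # one full scan of the transactions per candidate subset
        counts[cand] = sum(1 for t in transactions if set(cand) <= t)

print(counts[("Milk",)], counts[("Diaper", "Milk")], counts[("Beer", "Diaper", "Milk")])
# 4 3 2
```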

Reducing the Number of Candidates

- Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent
- The Apriori principle holds due to the following property of the support measure:
  - The support of an itemset never exceeds the support of its subsets
  - This is known as the anti-monotone property of support

Illustrating the Apriori Principle

Minimum support count = 3.

- Items (1-itemsets): Coke and Eggs are infrequent, so no candidates involving them need to be generated
- Pairs (2-itemsets) are generated only from the frequent items
- Triplets (3-itemsets) are generated only from the frequent pairs

If every subset is considered: 6C1 + 6C2 + 6C3 = 6 + 15 + 20 = 41 candidates.
With support-based pruning: 6 + 6 + 1 = 13 candidates.

Exercise: write all possible 3-itemsets and prune the list based on the infrequent 2-itemsets.
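The two candidate counts above can be checked with a couple of binomial coefficients (6 items overall, of which 4 are frequent):

```python
from math import comb

# Without pruning: every 1-, 2-, and 3-itemset over the 6 items is a candidate
without_pruning = comb(6, 1) + comb(6, 2) + comb(6, 3)

# With support-based pruning: all 6 singletons are counted, pairs come only
# from the 4 frequent items, and a single candidate triplet survives
with_pruning = 6 + comb(4, 2) + 1

print(without_pruning, with_pruning)  # 41 13
```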

Apriori Algorithm

Method:

- Let k = 1
- Generate frequent itemsets of length 1
- Repeat until no new frequent itemsets are identified:
  - Generate length-(k+1) candidate itemsets from the length-k frequent itemsets
  - Prune candidate itemsets that contain an infrequent subset of length k
  - Count the support of each remaining candidate by scanning the DB
  - Eliminate candidates that are infrequent, leaving only those that are frequent

Note: this algorithm makes several passes over the transaction list.
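The steps above can be sketched compactly. This is a minimal illustration, not an optimized implementation; the five-transaction dataset is the assumed textbook example, and the minimum support count of 3 matches the previous slide:

```python
from itertools import combinations

# Assumed five-transaction dataset (the standard textbook example)
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def frequent_itemsets(transactions, minsup):
    """Apriori: return all itemsets with support count >= minsup."""
    items = sorted({i for t in transactions for i in t})
    # k = 1: count singletons directly
    freq = {frozenset([i]) for i in items
            if sum(1 for t in transactions if i in t) >= minsup}
    all_freq, k = set(freq), 1
    while freq:
        # generate (k+1)-candidates by unioning frequent k-itemsets
        cands = {a | b for a in freq for b in freq if len(a | b) == k + 1}
        # prune candidates with any infrequent k-subset (Apriori principle)
        cands = {c for c in cands
                 if all(frozenset(s) in freq for s in combinations(c, k))}
        # count surviving candidates with one pass over the DB
        freq = {c for c in cands
                if sum(1 for t in transactions if c <= t) >= minsup}
        all_freq |= freq
        k += 1
    return all_freq

F = frequent_itemsets(transactions, minsup=3)
print(sorted(sorted(f) for f in F))
```

With this data and threshold, the result is the 4 frequent singletons and 4 frequent pairs; the lone candidate triplet {Bread, Milk, Diaper} has count 2 and is eliminated.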

Rule Generation

- Given a frequent itemset L, find all non-empty proper subsets f ⊂ L such that f -> L - f satisfies the minimum confidence requirement
- If {A,B,C,D} is a frequent itemset, the candidate rules are:

  ABC -> D, ABD -> C, ACD -> B, BCD -> A,
  A -> BCD, B -> ACD, C -> ABD, D -> ABC,
  AB -> CD, AC -> BD, AD -> BC, BC -> AD, BD -> AC, CD -> AB

- If |L| = k, then there are 2^k - 2 candidate association rules (ignoring L -> {} and {} -> L)
- Because rules are generated from frequent itemsets, they automatically satisfy the minimum support threshold
  - Rule generation therefore only needs to check the minimum confidence threshold
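The 2^k - 2 count is easy to verify by enumeration (`candidate_rules` is a hypothetical helper name):

```python
from itertools import combinations

def candidate_rules(L):
    """Yield every rule f -> L - f with non-empty antecedent and consequent."""
    items = sorted(L)
    for r in range(1, len(items)):
        for lhs in combinations(items, r):
            yield set(lhs), set(items) - set(lhs)

rules = list(candidate_rules({"A", "B", "C", "D"}))
print(len(rules))  # 14, i.e. 2**4 - 2
```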

Rule Generation

How to efficiently generate rules from frequent itemsets?

- In general, confidence does not have an anti-monotone property: c(ABC -> D) can be larger or smaller than c(AB -> D)
- But the confidence of rules generated from the same itemset does have an anti-monotone property
  - E.g. for L = {A,B,C,D}: c(ABC -> D) ≥ c(AB -> CD) ≥ c(A -> BCD)
  - Confidence is anti-monotone with respect to the number of items on the RHS of the rule
- Example: consider the following two rules:
  - {Milk} -> {Diaper, Beer} has c_m = 0.5
  - {Milk, Diaper} -> {Beer} has c_md = 0.67 > c_m

Computing Confidence for Rules

- Unlike computing support, computing confidence does not require several passes over the transaction list
- Supports computed during frequent itemset generation can be reused
- The tables below show support values for all non-empty subsets of the itemset {Bread, Milk, Diaper}, and confidence values for the 2^3 - 2 = 6 rules generated from this frequent itemset

Itemset                  Support
{Milk}                   4/5 = 0.8
{Diaper}                 4/5 = 0.8
{Bread}                  4/5 = 0.8
{Milk, Diaper}           3/5 = 0.6
{Diaper, Bread}          3/5 = 0.6
{Bread, Milk}            3/5 = 0.6
{Bread, Milk, Diaper}    3/5 = 0.6

Rules:
{Bread, Milk} -> {Diaper} (s=0.6, c=1.0)
{Milk, Diaper} -> {Bread} (s=0.6, c=1.0)
{Diaper, Bread} -> {Milk} (s=0.6, c=1.0)
{Bread} -> {Milk, Diaper} (s=0.6, c=0.75)
{Diaper} -> {Milk, Bread} (s=0.6, c=0.75)
{Milk} -> {Diaper, Bread} (s=0.6, c=0.75)

Rule Generation in the Apriori Algorithm

For each frequent k-itemset, where k ≥ 2:

- Generate high-confidence rules with one item in the consequent
- Using these rules, iteratively generate high-confidence rules with more than one item in the consequent
- If any rule has low confidence, then all other rules whose consequents contain that rule's consequent can be pruned (not generated)

Example with {Bread, Milk, Diaper}:

1-item consequents:
{Bread, Milk} -> {Diaper}
{Milk, Diaper} -> {Bread}
{Diaper, Bread} -> {Milk}

2-item consequents:
{Bread} -> {Milk, Diaper}
{Diaper} -> {Milk, Bread}
{Milk} -> {Diaper, Bread}
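A minimal sketch of this level-wise rule generation, reusing the support values from the previous slide's tables. The minconf of 0.8 is an assumed threshold chosen so that the pruning is visible (1-item consequents have c = 1.0 and pass; 2-item consequents have c = 0.75 and fail):

```python
from itertools import combinations

# Support values as listed on the previous slide for {Bread, Milk, Diaper}
support = {
    frozenset({"Milk"}): 0.8, frozenset({"Diaper"}): 0.8, frozenset({"Bread"}): 0.8,
    frozenset({"Milk", "Diaper"}): 0.6, frozenset({"Diaper", "Bread"}): 0.6,
    frozenset({"Bread", "Milk"}): 0.6,
    frozenset({"Bread", "Milk", "Diaper"}): 0.6,
}

L = frozenset({"Bread", "Milk", "Diaper"})
minconf = 0.8  # assumed threshold for illustration

def conf(consequent):
    """Confidence of the rule (L - consequent) -> consequent, from stored supports."""
    return support[L] / support[L - consequent]

# Level 1: consequents with a single item
good1 = [frozenset({i}) for i in L if conf(frozenset({i})) >= minconf]

# Level 2: a 2-item consequent is generated only if all of its 1-item
# subsets already produced high-confidence rules (anti-monotone pruning)
cands = {a | b for a in good1 for b in good1 if len(a | b) == 2}
cands = {c for c in cands
         if all(frozenset(s) in good1 for s in combinations(c, 1))}
good2 = [c for c in cands if conf(c) >= minconf]

print(len(good1), len(good2))  # 3 0
```

Note that no pass over the transactions is needed: every confidence is a ratio of supports that were already computed during frequent itemset generation.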

Evaluation

- Support and confidence, as used by Apriori, admit many rules that are not necessarily interesting
- Two options for extracting interesting rules:
  - Using subjective knowledge
  - Using objective measures (measures better than confidence)
- Subjective approaches:
  - Visualization: users interactively verify the discovered rules
  - Template-based approach: filter out rules that do not fit user-specified templates
  - Subjective interestingness measures: filter out rules that are obvious (e.g. {Bread} -> {Butter}) or non-actionable (do not lead to profits)

Drawback of Confidence

Contingency table (counts over 100 transactions):

           Coffee   No Coffee   Total
Tea          15         5         20
No Tea       75         5         80
Total        90        10        100

Association rule: {Tea} -> {Coffee}

Confidence = P(Coffee | Tea) = 15/20 = 0.75, but P(Coffee) = 0.9.

Although the confidence is high, the rule is misleading: P(Coffee | Tea) = 0.75 is lower than the baseline P(Coffee) = 0.9, so knowing that a person drinks tea actually decreases the likelihood that they drink coffee.

Objective Measures

- Confidence estimates rule quality in terms of the antecedent's support but ignores the consequent's support
  - As seen on the previous slide, the support of the consequent (P(Coffee) = 0.9) is higher than the rule's confidence (P(Coffee | Tea) = 0.75)
- Weka uses other objective measures:
  - Lift(A -> B) = confidence(A -> B) / support(B) = support(A ∪ B) / (support(A) × support(B))
  - Leverage(A -> B) = support(A ∪ B) - support(A) × support(B)
  - Conviction(A -> B) = support(A) × support(¬B) / support(A ∪ ¬B)
  - Conviction inverts the lift ratio and also accounts for how often the RHS fails to hold; values below 1 indicate a negative association
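Plugging in the Tea/Coffee numbers from the previous slide shows how these measures expose the misleading rule (all three come out on the "negative association" side):

```python
# Supports from the Tea -> Coffee contingency table (100 transactions)
s_tea        = 20 / 100   # support(A)
s_coffee     = 90 / 100   # support(B)
s_both       = 15 / 100   # support(A and B)
s_tea_no_cof = 5 / 100    # support(A and not B)

lift       = s_both / (s_tea * s_coffee)          # < 1: negative association
leverage   = s_both - s_tea * s_coffee            # < 0: negative association
conviction = s_tea * (1 - s_coffee) / s_tea_no_cof  # < 1: negative association

print(round(lift, 3), round(leverage, 3), round(conviction, 2))
# 0.833 -0.03 0.4
```

Even though the rule's confidence is 0.75, its lift is below 1, so {Tea} -> {Coffee} would be rejected by any of these measures.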
