Slide 1: CSE 711 Seminar on Data Mining: Apriori Algorithm
By Sung-Hyuk Cha, 2/8/00

Slide 2: Association Rules
Definition: rules that state a statistical correlation between the occurrence of certain attributes in a database table.
Given a set of transactions, where each transaction is a set of items, and given items X1, ..., Xn and Y, an association rule is an expression X1, ..., Xn → Y, meaning that the attributes X1, ..., Xn predict Y.
Intuitive meaning of such a rule: transactions in the database which contain the items in X tend also to contain the items in Y.

Slide 3: Measures for an Association Rule
Support: given the association rule X1, ..., Xn → Y, the support is the percentage of records for which X1, ..., Xn and Y all hold. It measures the statistical significance of the rule.
Confidence: given the association rule X1, ..., Xn → Y, the confidence is the percentage of records for which Y holds, within the group of records for which X1, ..., Xn hold. It measures the degree of correlation between X and Y in the dataset, i.e., the rule's strength.

Slide 4: Quiz #2
Problem: given the transaction table D below, find the support and confidence of the association rule B, D → E.

Database D
TID  Items
01   A B E
02   A C D E
03   B C D E
04   A B D E
05   B D E
06   A B C
07   A B D

Answer: support = 3/7, confidence = 3/4
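The answer can be checked mechanically. A minimal sketch (the helper names support and confidence are ours, not from the slides):

# Quiz database D, one set of items per transaction.
D = [
    {"A","B","E"}, {"A","C","D","E"}, {"B","C","D","E"},
    {"A","B","D","E"}, {"B","D","E"}, {"A","B","C"}, {"A","B","D"},
]

def support(itemset, db):
    # Fraction of transactions that contain every item of `itemset`.
    return sum(itemset <= t for t in db) / len(db)

def confidence(lhs, rhs, db):
    # support(lhs ∪ rhs) / support(lhs).
    return support(lhs | rhs, db) / support(lhs, db)

print(support({"B","D","E"}, D))        # 3/7 ≈ 0.43
print(confidence({"B","D"}, {"E"}, D))  # 3/4 = 0.75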

Slide 5: Apriori Algorithm
An efficient algorithm for finding association rules.
Procedure:
1. Find all the frequent itemsets.
2. Use the frequent itemsets to generate the association rules.
A frequent itemset is a set of items with support greater than a user-defined minimum.

Slide 6: Notation
k-itemset: an itemset having k items.
Lk: the set of frequent k-itemsets (those with minimum support). Each member of this set has two fields: i) itemset and ii) support count.
Ck: the set of candidate k-itemsets (potentially frequent itemsets). Each member of this set has two fields: i) itemset and ii) support count.
D: the sample transaction database.
F: the set of all frequent items.

Slide 7: Example

Database D:
TID  Items
100  A C D
200  B C E
300  A B C E
400  B E

* Suppose a user-defined minimum support = .49.

Pass k = 1:
C1 itemset   Support   Frequent?
{A}          .50       Y
{B}          .75       Y
{C}          .75       Y
{D}          .25       N
{E}          .75       Y
L1 = { {A}, {B}, {C}, {E} }

Pass k = 2:
C2 itemset   Support   Frequent?
{A,B}        .25       N
{A,C}        .50       Y
{A,E}        .25       N
{B,C}        .50       Y
{B,E}        .75       Y
{C,E}        .50       Y
L2 = { {A,C}, {B,C}, {B,E}, {C,E} }

Pass k = 3:
C3 itemset   Support   Frequent?
{B,C,E}      .50       Y
L3 = { {B,C,E} }

Pass k = 4:
C4 itemset   Support   Frequent?
{A,B,C,E}    .25       N
L4 = ∅

* n items implies O(2^n − 2) possible itemsets to examine?

Slide 8: Procedure
Apriori()
{
    F = ∅;
    L1 = {frequent 1-itemsets};
    k = 2;  /* k represents the pass number. */
    while (Lk-1 ≠ ∅) {
        F = F ∪ Lk-1;
        Ck = new candidates of size k generated from Lk-1;
        for all transactions t ∈ D
            increment the count of all candidates in Ck that are contained in t;
        Lk = all candidates in Ck with minimum support;
        k++;
    }
    return F;
}
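The pseudocode above maps almost line-for-line onto runnable Python. This is a minimal sketch under the slide's definitions (transactions as sets, support as a fraction), not the original implementation; candidate generation is spelled out on the next slide.

from itertools import combinations

def apriori(transactions, min_support):
    # Returns {frequent itemset: support}.
    n = len(transactions)
    def supp(c):
        return sum(c <= t for t in transactions) / n
    items = {i for t in transactions for i in t}
    L = {frozenset([i]) for i in items if supp(frozenset([i])) >= min_support}
    F, k = {}, 2
    while L:
        F.update({s: supp(s) for s in L})   # F = F ∪ L(k-1)
        # Join: unions of two frequent (k-1)-itemsets that give k items.
        C = {a | b for a in L for b in L if len(a | b) == k}
        # Prune: every (k-1)-subset of a candidate must be frequent.
        C = {c for c in C
             if all(frozenset(s) in L for s in combinations(c, k - 1))}
        L = {c for c in C if supp(c) >= min_support}
        k += 1
    return F

D = [{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}]
for s, v in apriori(D, 0.49).items():
    print(sorted(s), v)   # reproduces L1, L2, L3 of slide 7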

Slide 9: Candidate Generation
Given Lk-1, the set of all frequent (k-1)-itemsets, generate a superset of the set of all frequent k-itemsets.
Idea: if an itemset X has minimum support, so do all subsets of X.
1. Join Lk-1 with Lk-1.
2. Prune: delete all itemsets c ∈ Ck such that some (k-1)-subset of c is not in Lk-1.
ex) L2 = { {A,C}, {B,C}, {B,E}, {C,E} }
1. Join: { {A,B,C}, {A,C,E}, {B,C,E} }
2. Prune: {A,B,C} is deleted since {A,B} ∉ L2, and {A,C,E} is deleted since {A,E} ∉ L2, leaving { {B,C,E} }.
Instead of C(5,3) = 10 candidates, we have only 1.
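Written as a standalone function, using the lexicographically ordered join from the Apriori paper (the ordering already skips {A,B,C} and {A,C,E} at the join step, so the prune step only catches what the ordering misses):

from itertools import combinations

def apriori_gen(L_prev, k):
    # L_prev: frequent (k-1)-itemsets. Returns candidate k-itemsets.
    prev = {tuple(sorted(s)) for s in L_prev}
    # Join: two itemsets sharing their first k-2 items, last items ordered.
    joined = {a + (b[-1],) for a in prev for b in prev
              if a[:-1] == b[:-1] and a[-1] < b[-1]}
    # Prune: drop candidates with an infrequent (k-1)-subset.
    return {c for c in joined
            if all(tuple(sorted(s)) in prev for s in combinations(c, k - 1))}

L2 = [{"A","C"}, {"B","C"}, {"B","E"}, {"C","E"}]
print(apriori_gen(L2, 3))   # {('B', 'C', 'E')}, the single candidate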

Slide 10: Thoughts
Association rules are always defined on binary attributes, so the tables need to be flattened.
ex) A phone-company DB with schema (CID, Gender, Ethnicity, Call) is flattened into a binary table with columns CID | M | F | W | B | H | A | D | I.
- Support for the Asian ethnicity column will never exceed .5.
- There is no need to consider the itemsets {M,F}, {W,B}, nor {D,I}.
- Rules such as M → F or D → I are not of interest at all.
* Considering the original schema before flattening may be a good idea.

Slide 11: Finding Association Rules with Item Constraints
When item constraints are considered, the Apriori candidate generation procedure no longer generates all the potentially frequent itemsets as candidates.
Procedure:
1. Find all the frequent itemsets that satisfy the boolean expression B.
2. Find the support of all subsets of frequent itemsets that do not satisfy B.
3. Generate the association rules from the frequent itemsets found in Step 1, computing confidences from the frequent itemsets found in Steps 1 and 2.

Slide 12: Additional Notation
B: boolean expression with m disjuncts, B = D1 ∨ D2 ∨ ... ∨ Dm.
Di: a conjunction of n conjuncts, Di = ai,1 ∧ ai,2 ∧ ... ∧ ai,n.
S: a set of items such that any itemset that satisfies B contains an item from S.
Ls(k): set of frequent k-itemsets that contain an item in S.
Lb(k): set of frequent k-itemsets that satisfy B.
Cs(k): set of candidate k-itemsets that contain an item in S.
Cb(k): set of candidate k-itemsets that satisfy B.

Slide 13: Direct Algorithm
Procedure:
1. Scan the data and determine L1 and F.
2. Find Lb(1).
3. Generate Cb(k+1) from Lb(k):
3-1. Ck+1 = Lb(k) × F.
3-2. Delete all candidates in Ck+1 that do not satisfy B.
3-3. Delete all candidates in Ck+1 below the minimum support.
3-4. For each Di with exactly k+1 non-negated elements, add the itemset to Ck+1 if all its items are frequent.
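A sketch of steps 3-1 and 3-2, assuming B is represented in DNF as (positive items, negated items) pairs; the representation and function names are ours:

# B = (A ∧ B) ∨ (C ∧ ¬E), from the example on the next slide.
B = [({"A", "B"}, set()), ({"C"}, {"E"})]

def satisfies(itemset, dnf):
    # Some disjunct: all positive items present, no negated item present.
    return any(pos <= itemset and not (neg & itemset) for pos, neg in dnf)

def extend(Lb_k, F, dnf):
    # Step 3-1: C(k+1) = Lb(k) x F; step 3-2: keep candidates satisfying B.
    C = {l | frozenset([f]) for l in Lb_k for f in F if f not in l}
    return {c for c in C if satisfies(c, dnf)}

Lb1 = [frozenset("C")]
F = {"A", "B", "C", "E"}           # frequent items, per slide 14
print(extend(Lb1, F, B))           # {A,C} and {B,C}; {C,E} violates ¬E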

Slide 14: Example

Database D (as on slide 7):
TID  Items
100  A C D
200  B C E
300  A B C E
400  B E

Given B = (A ∧ B) ∨ (C ∧ ¬E):

Steps 1 & 2:
C1 = { {A}, {B}, {C}, {D}, {E} }, L1 = { {A}, {B}, {C}, {E} }, Lb(1) = { {C} }

Pass for k = 2:
step 3-1: C2 = Lb(1) × F = { {A,C}, {B,C}, {C,E} }
step 3-2: Cb(2) = { {A,C}, {B,C} }
step 3-3: L2 = { {A,C}, {B,C} }
step 3-4: Lb(2) = { {A,B}, {A,C}, {B,C} }

Pass for k = 3:
step 3-1: C3 = Lb(2) × F = { {A,B,C}, {A,B,E}, {A,C,E}, {B,C,E} }
step 3-2: Cb(3) = { {A,B,C}, {A,B,E} }
step 3-3: L3 = ∅
step 3-4: Lb(3) = ∅

Slide 15: The MultipleJoins and Reorder algorithms for finding association rules with item constraints will be added.

Slide 16: Mining Sequential Patterns
Given a database D of customer transactions, the problem of mining sequential patterns is to find the maximal sequences, among all sequences, that have a certain user-specified minimum support.
- A transaction-time field is added.
- An itemset within a sequence is written in parentheses, e.g. (40 70), and a sequence is an ordered list of itemsets, e.g. ⟨(30) (40 70) (90)⟩.

Slide 17: Sequence Version of DB Conversion

Database D:
CID  Transaction Time  Items
1    Jun 25 93         30
1    Jun 30 93         90
2    Jun 10 93         10, 20
2    Jun 15 93         30
2    Jun 20 93         40, 60, 70
3    Jun 25 93         30, 50, 70
4    Jun 25 93         30
4    Jun 30 93         40, 70
4    Jul 25 93         90
5    Jun 12 93         90

Sequential version D':
CID  Customer Sequence
1    ⟨(30) (90)⟩
2    ⟨(10 20) (30) (40 60 70)⟩
3    ⟨(30 50 70)⟩
4    ⟨(30) (40 70) (90)⟩
5    ⟨(90)⟩

Answer set with support > .25 = { ⟨(30) (90)⟩, ⟨(30) (40 70)⟩ }

* Customer sequence: all the transactions of a customer, as a sequence ordered by increasing transaction time.
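The conversion itself (grouping by customer, ordering by time) is mechanical; a minimal sketch, with ISO date strings standing in for the transaction times above:

from collections import defaultdict

def to_customer_sequences(rows):
    # rows: (customer_id, transaction_time, itemset) triples.
    by_cid = defaultdict(list)
    for cid, time, items in rows:
        by_cid[cid].append((time, items))
    # For each customer, keep the itemsets ordered by increasing time.
    return {cid: [items for _, items in sorted(txns, key=lambda x: x[0])]
            for cid, txns in sorted(by_cid.items())}

rows = [
    (1, "1993-06-25", {30}), (1, "1993-06-30", {90}),
    (2, "1993-06-10", {10, 20}), (2, "1993-06-15", {30}),
    (2, "1993-06-20", {40, 60, 70}),
    (3, "1993-06-25", {30, 50, 70}),
    (4, "1993-06-25", {30}), (4, "1993-06-30", {40, 70}), (4, "1993-07-25", {90}),
    (5, "1993-06-12", {90}),
]
print(to_customer_sequences(rows))   # the sequential version D' above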

Slide 18: Definitions
Def 1. A sequence ⟨a1 a2 ... an⟩ is contained in another sequence ⟨b1 b2 ... bm⟩ if there exist integers i1 < i2 < ... < in such that a1 ⊆ bi1, a2 ⊆ bi2, ..., an ⊆ bin.
ex) ⟨(3) (4 5) (8)⟩ is contained in ⟨(7) (3 8) (9) (4 5 6) (8)⟩. (Yes)
⟨(3) (5)⟩ is not contained in ⟨(3 5)⟩. (No)
Def 2. A sequence s is maximal if s is not contained in any other sequence.
- Ti is a transaction time.
- itemset(Ti) is the set of items in transaction Ti.
- litemset: an itemset with minimum support (large itemset).
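Def 1 can be checked greedily, matching each element of the contained sequence against the earliest usable element of the containing one. A minimal sketch (sequences as lists of sets):

def is_contained(a, b):
    # True if sequence a = <a1 ... an> is contained in b = <b1 ... bm>.
    j = 0
    for elem in a:
        # Advance to the next element of b that is a superset of `elem`.
        while j < len(b) and not (elem <= b[j]):
            j += 1
        if j == len(b):
            return False
        j += 1
    return True

print(is_contained([{3}, {4, 5}, {8}],
                   [{7}, {3, 8}, {9}, {4, 5, 6}, {8}]))  # True
print(is_contained([{3}, {5}], [{3, 5}]))                # False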

Slide 19: Procedure
1. Convert D into a database D' of customer sequences.
2. Litemset mapping: map each litemset to an integer.
3. Transform each customer sequence into its litemset representation.
4. Find the desired sequences using the set of litemsets, via one of:
4-1. AprioriAll
4-2. AprioriSome
4-3. DynamicSome
5. Find the maximal sequences among the set of large sequences (see the sketch below):
for (k = n; k > 1; k--)
    foreach k-sequence s_k
        delete from S all subsequences of s_k
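Step 5 sketched in Python, with the containment test from Def 1 repeated inline so the sketch runs on its own; processing longer sequences first mirrors the k = n .. 2 loop:

def maximal_sequences(S):
    def contained(a, b):           # containment test from Def 1
        j = 0
        for elem in a:
            while j < len(b) and not (elem <= b[j]):
                j += 1
            if j == len(b):
                return False
            j += 1
        return True
    result = []
    # Longer (and, on ties, larger) sequences first, as in the k-loop above.
    for s in sorted(S, key=lambda q: (len(q), sum(map(len, q))), reverse=True):
        if not any(contained(s, t) for t in result):
            result.append(s)
    return result

large = [[{1}], [{5}], [{4}], [{1}, {5}], [{1}, {4}]]
print(maximal_sequences(large))    # [[{1}, {5}], [{1}, {4}]]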

Slide 20: Example

Step 2, large itemsets and their mapping:
Large Itemset   Mapped To
(30)            1
(40)            2
(70)            3
(40 70)         4
(90)            5

Step 3, transformation:
CID  Customer Sequence            Transformed Sequence
1    ⟨(30) (90)⟩                  ⟨{1} {5}⟩
2    ⟨(10 20) (30) (40 60 70)⟩    ⟨{1} {2, 3, 4}⟩
3    ⟨(30 50 70)⟩                 ⟨{1, 3}⟩
4    ⟨(30) (40 70) (90)⟩          ⟨{1} {2, 3, 4} {5}⟩
5    ⟨(90)⟩                       ⟨{5}⟩

Slide 21: AprioriAll
AprioriAll()
{
    F = ∅;
    L1 = {large 1-sequences};
    k = 2;  /* k represents the pass number. */
    while (Lk-1 ≠ ∅) {
        F = F ∪ Lk-1;
        Ck = new candidate sequences of size k generated from Lk-1;
        for each customer sequence c ∈ D'
            increment the count of all candidates in Ck that are contained in c;
        Lk = all candidates in Ck with minimum support;
        k++;
    }
    return F;
}
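The line that differs from basic Apriori is the counting step: a candidate's support is the fraction of customer sequences that contain it. A sketch (containment test as in Def 1; single litemsets written as singleton sets):

def seq_support(candidate, customer_seqs):
    def contained(a, b):           # containment test from Def 1
        j = 0
        for elem in a:
            while j < len(b) and not (elem <= b[j]):
                j += 1
            if j == len(b):
                return False
            j += 1
        return True
    return (sum(contained(candidate, c) for c in customer_seqs)
            / len(customer_seqs))

# Transformed sequences from slide 20:
Dp = [[{1}, {5}], [{1}, {2, 3, 4}], [{1, 3}], [{1}, {2, 3, 4}, {5}], [{5}]]
print(seq_support([{1}, {5}], Dp))   # 0.4 -> <(30) (90)> is large
print(seq_support([{1}, {4}], Dp))   # 0.4 -> <(30) (40 70)> is large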

Slide 22: Example

Customer sequences (minimum support = .40):
⟨(1 5) (2) (3) (4)⟩
⟨(1) (3) (4) (3 5)⟩
⟨(1) (2) (3) (4)⟩
⟨(1) (3) (5)⟩
⟨(4) (5)⟩

L1 (supp): ⟨1⟩ 4, ⟨2⟩ 2, ⟨3⟩ 4, ⟨4⟩ 4, ⟨5⟩ 4
L2 (supp): ⟨1 2⟩ 2, ⟨1 3⟩ 4, ⟨1 4⟩ 3, ⟨1 5⟩ 2, ⟨2 3⟩ 2, ⟨2 4⟩ 2, ⟨3 4⟩ 3, ⟨3 5⟩ 2, ⟨4 5⟩ 2
L3 (supp): ⟨1 2 3⟩ 2, ⟨1 2 4⟩ 2, ⟨1 3 4⟩ 3, ⟨1 3 5⟩ 2, ⟨2 3 4⟩ 2
C4 / L4 (supp): ⟨1 2 3 4⟩ 2

The maximal large sequences are { ⟨1 2 3 4⟩, ⟨1 3 5⟩, ⟨4 5⟩ }.

Slide 23: The AprioriSome and DynamicSome algorithms for mining sequential patterns will be added.

Slide 24: GSC Features
Gradient (local): the gradient direction tan⁻¹( S_y(i,j) / S_x(i,j) ), computed from the gradient components S_x(i,j) and S_y(i,j).
Structural (intermediate) and Concavity (global) features.

Slide 25: GSC feature table (figure).

Slide 26: A Sample of GSC Features
Gradient (192 bits):
000000000011000000001100001110000000111000000011000000110001000000001100000000000001110011000111110000111100000000100101000001000111001111100111110000010000010000000000000000000000000001000001001000
Structure (192 bits):
000000000000000000001100001110001000010000100000010000000000000100101000000000011000010100110000110000000000000100100011001100000000000000110010100000000000001100000000000000000000000000010000
Concavity (128 bits):
11110110100111110110011000000110111101101001100100000110000011100000000000000000000000000000000000000000000111111100000000000000000

Slide 27: Class "A", 800 samples (figure).

Slide 28: Classes A, B, and C (figure, panels A, B, C).

Slide 29: Reordered by frequency (figure, panels A, B, C).

Slide 30: Association Rules in GSC
- G → S, G → C
- F1, F2, F3 → "A"
- F1 → F2

Slide 31: References
Agrawal, R.; Imielinski, T.; and Swami, A. "Mining Association Rules between Sets of Items in Large Databases." Proc. of the ACM SIGMOD Conference on Management of Data, 207-216, 1993.
Agrawal, R. and Srikant, R. "Fast Algorithms for Mining Association Rules in Large Databases." Proc. of the 20th Int'l Conference on Very Large Data Bases, 478-499, Sept. 1994.
Agrawal, R. and Srikant, R. "Mining Sequential Patterns." Research Report RJ 9910, IBM Almaden Research Center, San Jose, California, October 1994.
Agrawal, R. and Shafer, J. "Parallel Mining of Association Rules." IEEE Transactions on Knowledge and Data Engineering, 8(6), 1996.

Slide 32: MultipleJoins Algorithm

GenerateSelectedItemSet()
{
    S = ∅;
    for each Di, i = 1 to m {
        for each ai,j, j = 1 to n
            cost of conjunct ai,j = support(S ∪ {ai,j}) − support({ai,j});
        add the ai,j with the minimum cost to S;
    }
    return S;
}

ex) For the database D of slide 7 and B = (A ∧ B) ∨ (C ∧ ¬E), the algorithm gives S = { A, C }.

Procedure:
1. Scan the data and determine F.
2. Ls(1) = S ∩ F.
...
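A sketch of the greedy loop, under two reading assumptions that are ours rather than a confirmed spec: support(X) for a set of items X is taken as the fraction of transactions containing at least one item of X, and only the non-negated items of a disjunct are eligible for S (ties broken alphabetically):

def any_support(items, db):
    # Fraction of transactions containing at least one item of `items`.
    return sum(bool(items & t) for t in db) / len(db)

def selected_item_set(dnf, db):
    # Greedily add, per disjunct, the cheapest non-negated item to S.
    S = set()
    for pos, neg in dnf:
        best = min(sorted(pos),
                   key=lambda a: any_support(S | {a}, db) - any_support({a}, db))
        S.add(best)
    return S

D = [{"A","C","D"}, {"B","C","E"}, {"A","B","C","E"}, {"B","E"}]
B = [({"A", "B"}, set()), ({"C"}, {"E"})]
print(selected_item_set(B, D))   # {'A', 'C'}, matching the slide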

