Improvements to A-Priori: Bloom Filters, Park-Chen-Yu Algorithm, Multistage Algorithm, Approximate Algorithms, Compacting Results


1 Improvements to A-Priori
- Bloom Filters
- Park-Chen-Yu Algorithm
- Multistage Algorithm
- Approximate Algorithms
- Compacting Results

2 Aside: Hash-Based Filtering
- Simple problem: I have a set S of one billion strings of length 10.
- I want to scan a larger file F of strings and output those that are in S.
- I have 1GB of main memory.
  - So I can't afford to store S in memory.

3 Solution – (1)
- Create a bit array of 8 billion bits, initially all 0's.
- Choose a hash function h with range [0, 8*10^9), and hash each member of S to one of the bits, which is then set to 1.
- Filter the file F by hashing each string and outputting only those that hash to a 1.
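A minimal sketch of this filter in Python, at toy scale (8,000 bits rather than 8 billion) and with Python's built-in hash() standing in for a purpose-built hash function; both choices are assumptions for illustration:

```python
N_BITS = 8_000                          # stand-in for the 8 billion bits

def h(s: str) -> int:
    """Hash a string to a bit position in [0, N_BITS)."""
    return hash(s) % N_BITS             # built-in hash() as a stand-in

def build_filter(members):
    """One pass over S: set one bit per member."""
    bits = bytearray(N_BITS // 8)
    for s in members:
        pos = h(s)
        bits[pos // 8] |= 1 << (pos % 8)
    return bits

def filter_file(strings, bits):
    """Yield only strings whose bit is 1: all of S, plus some false positives."""
    for s in strings:
        pos = h(s)
        if bits[pos // 8] & (1 << (pos % 8)):
            yield s                     # may be in S
        # else: surely not in S, drop it
```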

4 Solution – (2)
[Diagram: each string of file F is hashed by h through the filter; strings that hash to a 1 go to the output (may be in S), the rest are dropped (surely not in S).]

5 Solution – (3)
- Since at most 1/8 of the bit array is 1, only 1/8th of the strings not in S get through to the output.
- If a string is in S, it surely hashes to a 1, so it always gets through.
- Can repeat with another hash function and bit array to reduce the false positives by another factor of 8.

6 Solution – Summary
- Each filter step costs one pass through the remaining file F and reduces the fraction of false positives by a factor of 8.
  - Actually by a factor of 1/(1 - e^{-1/8}) ≈ 8.5.
- Repeat passes until there are few false positives.
- Either accept some errors, or check the remaining strings.
  - E.g., divide the surviving part of F into chunks that fit in memory and make a pass through S for each.

7 Aside: Throwing Darts
- A number of times we are going to need to deal with the problem: if we throw k darts into n equally likely targets, what is the probability that a given target gets at least one dart?
- Example: targets = bits, darts = hash values of elements.

8 Throwing Darts – (2)
- Probability a given target is not hit by one dart: (1 - 1/n).
- Probability it is not hit by any of the k darts: (1 - 1/n)^k.
- Equivalently, ((1 - 1/n)^n)^(k/n); the inner term (1 - 1/n)^n approaches 1/e as n → ∞.
- So the probability that at least one dart hits the target is 1 - ((1 - 1/n)^n)^(k/n) ≈ 1 - e^{-k/n}.

9 Throwing Darts – (3)
- If k << n, then e^{-k/n} can be approximated by the first two terms of its Taylor expansion: 1 - k/n.
- Example: 10^9 darts, 8*10^9 targets.
  - True value: 1 - e^{-1/8} = .1175.
  - Approximation: 1 - (1 - 1/8) = .125.
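A quick numeric check of the two figures above (exact dart-throwing probability vs. the Taylor approximation):

```python
import math

k, n = 10**9, 8 * 10**9

exact = 1 - math.exp(-k / n)        # probability a target is hit at least once
approx = k / n                      # first-order Taylor approximation

print(f"exact  = {exact:.4f}")      # 0.1175
print(f"approx = {approx:.4f}")     # 0.1250
```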

10 Improvement: Superimposed Codes (Bloom Filters)
- We could use two hash functions, and hash each member of S to two bits of the bit array.
- Now, around 1/4 of the array is 1's.
- But we transmit a string in F to the output only if both its bits are 1, i.e., only 1/16th are false positives.
  - Actually (1 - e^{-1/4})^2 ≈ .0489.

11 Superimposed Codes – (2)
- Generalizes to any number of hash functions.
- The more hash functions, the smaller the probability of a false positive.
- Limiting factor: eventually, the bit vector becomes almost all 1's.
  - Almost anything hashes to only 1's.
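A sketch of the generalized filter (a Bloom filter) with a configurable number of hash functions, again at illustrative sizes; salting Python's hash() with the function index is just one simple way to get roughly independent hash functions, and is an assumption of this sketch:

```python
import math

N_BITS = 80_000          # illustrative size
N_HASHES = 4             # number of hash functions

def hashes(s: str):
    """Yield one bit position per hash function (index-salted built-in hash)."""
    for i in range(N_HASHES):
        yield hash((i, s)) % N_BITS

def add(bits: bytearray, s: str):
    for pos in hashes(s):
        bits[pos // 8] |= 1 << (pos % 8)

def maybe_member(bits: bytearray, s: str) -> bool:
    """True = possibly in S (could be a false positive); False = surely not in S."""
    return all(bits[pos // 8] & (1 << (pos % 8)) for pos in hashes(s))

def false_positive_rate(n_members: int) -> float:
    """Approximate FP rate (1 - e^{-kn/m})^k, by the dart-throwing analysis."""
    k, m = N_HASHES, N_BITS
    return (1 - math.exp(-k * n_members / m)) ** k

bits = bytearray(N_BITS // 8)
for s in ["apple", "banana", "cherry"]:
    add(bits, s)
print(maybe_member(bits, "apple"))    # True
print(maybe_member(bits, "durian"))   # almost surely False
```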

12 Aside: History
- The idea is attributed to Bloom (1970).
- The Wikipedia article on Bloom filters discusses generalizations and examples: https://en.wikipedia.org/wiki/Bloom_filter#Examples

13 PCY Algorithm – An Application of Hash-Filtering
- During Pass 1 of A-Priori, most memory is idle.
- Use that memory to keep counts of buckets into which pairs of items are hashed.
  - Just the count, not the pairs themselves.

14 Generation of Candidate Pairs

15 Needed Extensions to Hash-Filtering
1. Pairs of items need to be generated from the input file; they are not present in the file itself.
2. We are not just interested in the presence of a pair; we need to see whether it occurs at least s (the support threshold) times.

16 PCY Algorithm – (2)
- A bucket is frequent if its count is at least the support threshold.
- If a bucket is not frequent, no pair that hashes to that bucket could possibly be a frequent pair.
- On Pass 2, we only count pairs that hash to frequent buckets.

17 Picture of PCY
[Diagram of main-memory layout: Pass 1 holds item counts and the hash table of bucket counts; Pass 2 holds the frequent items, the bitmap (compressed hash table), and counts of candidate pairs.]

18 PCY Algorithm – Before Pass 1, Organize Main Memory
- Space to count each item.
  - One (typically) 4-byte integer per item.
- Use the rest of the space for as many integers, representing buckets, as we can.

19 PCY Algorithm – Pass 1
FOR (each basket) {
    FOR (each item in the basket)
        add 1 to item's count;
    FOR (each pair of items in the basket) {
        hash the pair to a bucket;
        add 1 to the count for that bucket;
    }
}
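A runnable Python version of Pass 1; the number of buckets and the pair hash are illustrative choices, not prescribed by the slides:

```python
from collections import defaultdict
from itertools import combinations

N_BUCKETS = 1_000_003          # illustrative; in practice, as many as memory allows

def hash_pair(i, j):
    """Hash an (unordered) pair of items to a bucket."""
    return hash((min(i, j), max(i, j))) % N_BUCKETS

def pcy_pass1(baskets):
    item_counts = defaultdict(int)
    bucket_counts = [0] * N_BUCKETS
    for basket in baskets:
        for item in basket:
            item_counts[item] += 1
        for i, j in combinations(basket, 2):
            bucket_counts[hash_pair(i, j)] += 1
    return item_counts, bucket_counts
```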

20 Observations About Buckets
1. A bucket that a frequent pair hashes to is surely frequent.
   - We cannot use the hash table to eliminate any member of this bucket.
2. Even without any frequent pair, a bucket can be frequent.
   - Again, nothing in the bucket can be eliminated.

21 Observations – (2)
3. But in the best case, the count for a bucket is less than the support s.
   - Now, all pairs that hash to this bucket can be eliminated as candidates, even if the pair consists of two frequent items.
- Thought question: under what conditions can we be sure most buckets will be in case 3?

22 PCY Algorithm – Between Passes
- Replace the buckets by a bit-vector:
  - 1 means the bucket is frequent; 0 means it is not.
- 4-byte integers are replaced by bits, so the bit-vector requires 1/32 of the memory.
- Also, decide which items are frequent and list them for the second pass.

23 PCY Algorithm – Pass 2
- Count all pairs {i, j} that meet the conditions for being a candidate pair:
  1. Both i and j are frequent items.
  2. The pair {i, j} hashes to a bucket whose bit in the bit-vector is 1.
- Notice that both conditions are necessary for the pair to have a chance of being frequent.
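A sketch of the between-passes step plus Pass 2, continuing the Pass 1 sketch above; hash_pair refers to that sketch, and a Python set of frequent bucket numbers stands in for the actual bit-vector:

```python
from collections import defaultdict
from itertools import combinations

def pcy_pass2(baskets, item_counts, bucket_counts, s):
    """s is the support threshold; item_counts and bucket_counts come from Pass 1."""
    frequent_items = {i for i, c in item_counts.items() if c >= s}
    # Between passes: the bit-vector of frequent buckets (a set stands in for it here).
    frequent_buckets = {b for b, c in enumerate(bucket_counts) if c >= s}

    pair_counts = defaultdict(int)
    for basket in baskets:
        items = sorted(i for i in basket if i in frequent_items)
        for i, j in combinations(items, 2):
            if hash_pair(i, j) in frequent_buckets:   # hash_pair from the Pass 1 sketch
                pair_counts[(i, j)] += 1
    return {pair: c for pair, c in pair_counts.items() if c >= s}
```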

24 Memory Details
- Buckets require a few bytes each.
  - Note: we don't have to count past s.
  - The number of buckets is O(main-memory size).
- On the second pass, a table of (item, item, count) triples is essential (why?).
  - Thus, the hash table must eliminate 2/3 of the candidate pairs for PCY to beat A-Priori.

25 Multistage Algorithm
- Key idea: after Pass 1 of PCY, rehash only those pairs that qualify for Pass 2 of PCY.
- On the middle pass, fewer pairs contribute to buckets, so there are fewer false positives – frequent buckets with no frequent pair.

26 Multistage Picture
[Diagram of main-memory layout: Pass 1 holds item counts and the first hash table; Pass 2 holds the frequent items, Bitmap 1, and the second hash table; Pass 3 holds the frequent items, Bitmap 1, Bitmap 2, and counts of candidate pairs.]

27 Multistage – Pass 3
- Count only those pairs {i, j} that satisfy these candidate-pair conditions:
  1. Both i and j are frequent items.
  2. Using the first hash function, the pair hashes to a bucket whose bit in the first bit-vector is 1.
  3. Using the second hash function, the pair hashes to a bucket whose bit in the second bit-vector is 1.
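A sketch of the Pass 3 candidate test, with two independently salted hash functions and the two bit-vectors again represented as sets of frequent bucket numbers (all names here are illustrative):

```python
N_BUCKETS = 1_000_003          # illustrative bucket count for both hash tables

def hash_pair1(i, j):
    return hash(("h1", min(i, j), max(i, j))) % N_BUCKETS

def hash_pair2(i, j):
    return hash(("h2", min(i, j), max(i, j))) % N_BUCKETS

def is_candidate(i, j, frequent_items, bitmap1, bitmap2):
    """All three conditions must hold for {i, j} to be counted on Pass 3."""
    return (i in frequent_items and j in frequent_items
            and hash_pair1(i, j) in bitmap1
            and hash_pair2(i, j) in bitmap2)
```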

28 Important Points
1. The two hash functions have to be independent.
2. We need to check both hashes on the third pass.
   - If not, we would wind up counting pairs of frequent items that hashed first to an infrequent bucket but happened to hash second to a frequent bucket.

29 Multihash
- Key idea: use several independent hash tables on the first pass.
- Risk: halving the number of buckets doubles the average count. We have to be sure most buckets will still not reach count s.
- If so, we can get a benefit like multistage, but in only 2 passes.

30 Multihash Picture
[Diagram of main-memory layout: Pass 1 holds item counts plus the first and second hash tables; Pass 2 holds the frequent items, Bitmap 1, Bitmap 2, and counts of candidate pairs.]

31 Extensions
- Either multistage or multihash can use more than two hash functions.
- In multistage, there is a point of diminishing returns, since the bit-vectors eventually consume all of main memory.
- For multihash, the bit-vectors occupy exactly what one PCY bitmap does, but too many hash functions make all counts exceed s.

32 All (or Most) Frequent Itemsets in ≤ 2 Passes
- A-Priori, PCY, etc., take k passes to find frequent itemsets of size k.
- Other techniques use 2 or fewer passes for all sizes:
  - Simple algorithm.
  - SON (Savasere, Omiecinski, and Navathe).
  - Toivonen's algorithm.

33 Simple Algorithm – (1)
- Take a random sample of the market baskets.
- Run A-Priori or one of its improvements (for sets of all sizes, not just pairs) in main memory, so you don't pay for disk I/O each time you increase the size of the itemsets.
  - Be sure you leave enough space for counts.

34 Main-Memory Picture
[Diagram of main memory split between a copy of the sample baskets and space for counts.]

35 Simple Algorithm – (2)
- Use as your support threshold a suitable, scaled-back number.
  - E.g., if your sample is 1/100 of the baskets, use s/100 as your support threshold instead of s.
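A minimal sketch of the simple algorithm; the in-memory miner is passed in as a parameter rather than defined here, and the 1% sampling fraction is just the example from the slide:

```python
import random

def simple_algorithm(baskets, s, mine_in_memory, fraction=0.01):
    """Sample a fraction of the baskets and mine the sample in main memory.
    mine_in_memory(sample, threshold) is whatever in-memory frequent-itemset
    miner you use (A-Priori, PCY, ...); it is a parameter here, not defined."""
    sample = [b for b in baskets if random.random() < fraction]
    return mine_in_memory(sample, s * fraction)   # scaled-back threshold, e.g. s/100
```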

36 Simple Algorithm – Option
- Optionally, verify that your guesses are truly frequent in the entire data set by making a second pass.
- But you don't catch sets that are frequent in the whole but not in the sample.
  - A smaller threshold, e.g., s/125, helps catch more truly frequent itemsets, but requires more space.

37 SON Algorithm – (1)
- Repeatedly read small subsets of the baskets into main memory and perform the first pass of the simple algorithm on each subset.
- An itemset becomes a candidate if it is found to be frequent in any one or more subsets of the baskets.

38 SON Algorithm – (2)
- On a second pass, count all the candidate itemsets and determine which are frequent in the entire set.
- Key "monotonicity" idea: an itemset cannot be frequent in the entire set of baskets unless it is frequent in at least one subset.
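A sequential sketch of SON's two passes over chunks of baskets; as in the sketch above for the simple algorithm, the in-memory miner is a parameter, and itemsets are assumed to be represented as frozensets:

```python
from collections import defaultdict

def son(chunks, s, total_baskets, mine_in_memory):
    """chunks: list of lists of baskets, each small enough to fit in memory.
    mine_in_memory(baskets, threshold) -> set of frequent itemsets (frozensets)."""
    # Pass 1: anything frequent in some chunk, at a proportionally scaled
    # threshold, becomes a candidate.
    candidates = set()
    for chunk in chunks:
        candidates |= mine_in_memory(chunk, s * len(chunk) / total_baskets)

    # Pass 2: count every candidate over the entire data set.
    counts = defaultdict(int)
    for chunk in chunks:
        for basket in chunk:
            basket = set(basket)
            for itemset in candidates:
                if itemset <= basket:
                    counts[itemset] += 1
    return {itemset for itemset, c in counts.items() if c >= s}
```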

39 SON Algorithm – Distributed Version
- This idea lends itself to distributed data mining.
- If baskets are distributed among many nodes, compute frequent itemsets at each node, then distribute the candidates from each node.
- Finally, accumulate the counts of all candidates.

40 Toivonen's Algorithm – (1)
- Start as in the simple algorithm, but lower the threshold slightly for the sample.
  - Example: if the sample is 1% of the baskets, use s/125 as the support threshold rather than s/100.
  - The goal is to avoid missing any itemset that is frequent in the full set of baskets.

41 Toivonen's Algorithm – (2)
- Add to the itemsets that are frequent in the sample the negative border of these itemsets.
- An itemset is in the negative border if it is not deemed frequent in the sample, but all its immediate subsets are.

42 Example: Negative Border
- ABCD is in the negative border if and only if:
  1. It is not frequent in the sample, but
  2. All of ABC, BCD, ACD, and ABD are.
- A (a singleton) is in the negative border if and only if it is not frequent in the sample.
  - Because the empty set, its only immediate subset, is always frequent.
  - Unless there are fewer baskets than the support threshold (silly case).
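A sketch of how the negative border could be computed from the itemsets found frequent in the sample; itemsets are represented as frozensets, and the empty set is treated as always frequent, matching the slide:

```python
def negative_border(frequent, items):
    """frequent: set of frozensets that are frequent in the sample.
    items: all items appearing in the sample.
    Returns the itemsets that are not frequent but whose immediate subsets all are."""
    def is_frequent(itemset):
        return len(itemset) == 0 or itemset in frequent   # empty set: always frequent

    # Candidates: every frequent itemset extended by one item, plus all singletons.
    candidates = {frozenset([i]) for i in items}
    for itemset in frequent:
        for i in items - itemset:
            candidates.add(itemset | {i})

    return {c for c in candidates
            if not is_frequent(c) and all(is_frequent(c - {i}) for i in c)}
```

For instance, ABCD lands in the returned set exactly when it is absent from `frequent` but ABC, ABD, ACD, and BCD are all present.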

43 Picture of Negative Border
[Diagram: the lattice of itemsets (singletons, doubletons, tripletons, ...), with the frequent itemsets from the sample at the bottom and the negative border as the layer of itemsets just outside them.]

44 Toivonen's Algorithm – (3)
- In a second pass, count all candidate frequent itemsets from the first pass, and also count their negative border.
- If no itemset from the negative border turns out to be frequent, then the candidates found to be frequent in the whole data are exactly the frequent itemsets.

45 Toivonen's Algorithm – (4)
- What if we find that something in the negative border is actually frequent?
- We must start over again!
- Try to choose the support threshold so that the probability of failure is low, while the number of itemsets checked on the second pass still fits in main memory.

46 If Something in the Negative Border Is Frequent...
[Lattice diagram as on the previous slide, with an itemset in the negative border now marked frequent.]
- We broke through the negative border. How far does the problem go?

47 Theorem
- If there is an itemset that is frequent in the whole, but not frequent in the sample, then there is a member of the negative border for the sample that is frequent in the whole.

48 Proof
- Suppose not; i.e.,
  1. There is an itemset S frequent in the whole but not frequent in the sample, and
  2. Nothing in the negative border is frequent in the whole.
- Let T be a smallest subset of S that is not frequent in the sample.
- T is frequent in the whole (S is frequent, plus monotonicity).
- T is in the negative border (otherwise T would not be "smallest"): T is not frequent in the sample, but all its immediate subsets are. This contradicts assumption 2, since T is frequent in the whole.

49 Compacting the Output
1. Maximal frequent itemsets: no immediate superset is frequent.
2. Closed itemsets: no immediate superset has the same count (> 0).
   - Stores not only which itemsets are frequent, but their exact counts.

50 Example: Maximal/Closed

Itemset  Count  Maximal (s=3)  Closed
A        4      No             No
B        5      No             Yes
C        3      No             No
AB       4      Yes            Yes
AC       2      No             No
BC       3      Yes            Yes
ABC      2      No             Yes

- C is not maximal: frequent, but its superset BC is also frequent.
- C is not closed: its superset BC has the same count.
- BC is maximal: frequent, and its only superset, ABC, is not frequent.
- BC is closed: its only superset, ABC, has a smaller count.
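A small sketch that recomputes the Maximal/Closed columns of this table from the counts; the dictionary encodes exactly the example above, and supersets not listed are taken to have count 0:

```python
counts = {"A": 4, "B": 5, "C": 3, "AB": 4, "AC": 2, "BC": 3, "ABC": 2}
s = 3
items = set("".join(counts))

def immediate_supersets(itemset):
    """Itemsets one item larger, written as sorted strings like the keys above."""
    return ["".join(sorted(set(itemset) | {i})) for i in items - set(itemset)]

for itemset, count in counts.items():
    supers = [x for x in immediate_supersets(itemset) if x in counts]
    maximal = count >= s and all(counts[x] < s for x in supers)
    closed = all(counts[x] < count for x in supers)
    print(f"{itemset:4} count={count}  maximal={maximal}  closed={closed}")
```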