
1 Methods to Improve Apriori's Efficiency
• Hash-based itemset counting: a k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent
• Transaction reduction: a transaction that does not contain any frequent k-itemset is useless in subsequent scans
• Partitioning: any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB
• Sampling: mine on a subset of the given data with a lowered support threshold, plus a method to determine the completeness of the result
• Dynamic itemset counting: add new candidate itemsets only when all of their subsets are estimated to be frequent
• The core of the Apriori algorithm:
– use frequent (k−1)-itemsets to generate candidate k-itemsets
– use database scans and pattern matching to collect counts for the candidate itemsets
• The bottleneck of Apriori: candidate generation
1. Huge candidate sets: 10^4 frequent 1-itemsets will generate about 10^7 candidate 2-itemsets; to discover a frequent pattern of size 100, e.g. {a_1, …, a_100}, one needs to generate 2^100 ≈ 10^30 candidates
2. Multiple scans of the database: n+1 scans are needed, where n is the length of the longest pattern
These notes contain NDSU confidential & proprietary material. Patents are pending on bSQ and Ptree technology.
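To make the candidate-generation bottleneck concrete, here is a minimal Python sketch of Apriori's join-and-prune step (frequent (k−1)-itemsets to candidate k-itemsets). The function name and the frozenset representation are illustrative, not from the slides.

```python
from itertools import combinations

def apriori_gen(freq_km1, k):
    """Candidate k-itemsets from frequent (k-1)-itemsets.
    Join: union pairs whose union has size k; prune: drop any candidate
    with an infrequent (k-1)-subset (the Apriori property)."""
    candidates = set()
    for a, b in combinations(freq_km1, 2):
        union = a | b
        if len(union) == k and all(frozenset(s) in freq_km1
                                   for s in combinations(union, k - 1)):
            candidates.add(union)
    return candidates

# Tiny example: frequent 2-itemsets -> candidate 3-itemsets
L2 = {frozenset(x) for x in [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d")]}
print(apriori_gen(L2, 3))   # {frozenset({'a', 'b', 'c'})}; abd and bcd are pruned
```

Even with this pruning, the number of candidates explodes for long patterns, which motivates the FP-growth approach on the next slide.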

2 Mining Frequent Patterns Without Candidate Generation
• Compress a large database into a compact Frequent-Pattern tree (FP-tree) structure
– highly condensed, but complete for frequent pattern mining; avoids costly repeated database scans
• Develop an efficient, FP-tree-based frequent pattern mining method
– a divide-and-conquer methodology: decompose mining tasks into smaller ones
– avoid candidate generation: sub-database tests only!
Steps:
1. Scan the DB once and find the frequent 1-itemsets (single-item patterns)
2. Order the frequent items in frequency-descending order
3. Scan the DB again and construct the FP-tree

Example (min_support = 0.5):
TID   Items bought               (Ordered) frequent items
100   {f, a, c, d, g, i, m, p}   {f, c, a, m, p}
200   {a, b, c, f, l, m, o}      {f, c, a, b, m}
300   {b, f, h, j, o}            {f, b}
400   {b, c, k, s, p}            {c, b, p}
500   {a, f, c, e, l, p, m, n}   {f, c, a, m, p}

Header table (item : frequency, each with a head-of-node-links pointer): f:4, c:4, a:3, b:3, m:3, p:3

Resulting FP-tree (node counts shown):
{}
├─ f:4
│  ├─ c:3
│  │  └─ a:3
│  │     ├─ m:2
│  │     │  └─ p:2
│  │     └─ b:1
│  │        └─ m:1
│  └─ b:1
└─ c:1
   └─ b:1
      └─ p:1
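A compact Python sketch of steps 1-3 on the transactions above. The FPNode class and helper names are illustrative, not the authors' implementation; ties among equally frequent items are broken by item name, so the tree shape may differ slightly from the slide's (the f/c order), which does not affect the mining result.

```python
from collections import Counter, defaultdict

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 0, {}

def build_fptree(transactions, min_count):
    # Pass 1: frequent single items and a fixed frequency-descending order
    counts = Counter(i for t in transactions for i in t)
    freq = {i: c for i, c in counts.items() if c >= min_count}
    rank = {i: r for r, i in enumerate(sorted(freq, key=lambda i: (-freq[i], i)))}
    # Pass 2: insert each ordered, filtered transaction into the tree
    root, node_links = FPNode(None, None), defaultdict(list)
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            child = node.children.get(item)
            if child is None:
                child = node.children[item] = FPNode(item, node)
                node_links[item].append(child)   # header-table node-link
            child.count += 1
            node = child
    return root, node_links, freq

transactions = [t.split() for t in ["f a c d g i m p", "a b c f l m o",
                                    "b f h j o", "b c k s p", "a f c e l p m n"]]
root, links, freq = build_fptree(transactions, min_count=3)  # 0.5 * 5 transactions
```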

3 FP-tree Structure
• Completeness:
– never breaks a long pattern of any transaction
– preserves complete information for frequent pattern mining
• Compactness:
– reduces irrelevant information: infrequent items are gone
– frequency-descending ordering: more frequent items are more likely to be shared
– never larger than the original database (not counting node-links and counts)
– example: for the Connect-4 DB, the compression ratio can be over 100
• General idea (divide-and-conquer):
– recursively grow frequent patterns using the FP-tree
• Method:
– for each item, construct its conditional pattern base, then its conditional FP-tree
– repeat the process on each newly created conditional FP-tree until the resulting FP-tree is empty or contains only a single path (a single path generates all combinations of its sub-paths, each of which is a frequent pattern)

4 Major Steps to Mine the FP-tree
1) Construct the conditional pattern base for each node in the FP-tree
2) Construct the conditional FP-tree from each conditional pattern base
3) Recursively mine the conditional FP-trees and grow the frequent patterns obtained so far; if a conditional FP-tree contains a single path, simply enumerate all the patterns

Step 1: starting from the header table of the FP-tree,
– traverse the FP-tree by following the node-links of each frequent item
– accumulate all transformed prefix paths of that item to form its conditional pattern base

Conditional pattern bases (for the FP-tree and header table of the example above):
item   conditional pattern base
c      f:3
a      fc:3
b      fca:1, f:1, c:1
m      fca:2, fcab:1
p      fcam:2, cb:1
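A minimal sketch of what Step 1 produces, computed directly from the ordered transactions rather than by node-link traversal (the traversal yields the same multiset of prefix paths); the function name and representation are illustrative.

```python
from collections import defaultdict

def conditional_pattern_bases(ordered_transactions):
    """For each item, collect the prefix that precedes it in every ordered
    transaction; merging equal prefixes gives the conditional pattern base."""
    bases = defaultdict(lambda: defaultdict(int))
    for t in ordered_transactions:
        for pos, item in enumerate(t):
            prefix = tuple(t[:pos])
            if prefix:
                bases[item][prefix] += 1
    return bases

ordered = [list("fcamp"), list("fcabm"), list("fb"), list("cbp"), list("fcamp")]
for item, base in sorted(conditional_pattern_bases(ordered).items()):
    print(item, dict(base))
# m -> {('f','c','a'): 2, ('f','c','a','b'): 1}
# p -> {('f','c','a','m'): 2, ('c','b'): 1}   (matches the table above)
```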

5 FP-tree Construction (continued)
Step 2: for each conditional pattern base,
– accumulate the count for each item in the base
– construct an FP-tree over the frequent items of the pattern base
Example: the m-conditional pattern base is fca:2, fcab:1. Accumulating counts gives f:3, c:3, a:3, b:1; with minimum count 3, b is dropped, so the m-conditional FP-tree is the single path {} → f:3 → c:3 → a:3. All frequent patterns concerning m: m, fm, cm, am, fcm, fam, cam, fcam.

• Node-link property (for conditional pattern base construction): for any frequent item a_i, all possible frequent patterns that contain a_i can be obtained by following a_i's node-links, starting from a_i's head in the FP-tree header table.
• Prefix-path property: to calculate the frequent patterns for a node a_i in path P, only the prefix sub-path of a_i in P needs to be accumulated, and its frequency count should carry the same count as node a_i.

6 Mining Frequent Patterns by Creating Conditional Pattern Bases
Item   Conditional pattern base       Conditional FP-tree
f      (none)                         Empty
c      {(f:3)}                        {(f:3)} | c
a      {(fc:3)}                       {(f:3, c:3)} | a
b      {(fca:1), (f:1), (c:1)}        Empty
m      {(fca:2), (fcab:1)}            {(f:3, c:3, a:3)} | m
p      {(fcam:2), (cb:1)}             {(c:3)} | p

Step 3: recursively mine the conditional FP-trees.
m-conditional FP-tree: {} → f:3 → c:3 → a:3
– cond. pattern base of "am": (fc:3)  →  am-conditional FP-tree: {} → f:3 → c:3
– cond. pattern base of "cm": (f:3)   →  cm-conditional FP-tree: {} → f:3
– cond. pattern base of "cam": (f:3)  →  cam-conditional FP-tree: {} → f:3
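A compact recursive sketch of the whole FP-growth loop, operating directly on conditional pattern bases represented as {prefix-tuple: count} dictionaries rather than on explicit tree nodes. This is an illustrative reformulation, not the authors' implementation, but it reproduces the patterns in the table above.

```python
from collections import defaultdict

def fp_growth(pattern_base, suffix, min_count, results):
    """pattern_base: {prefix_tuple: count}; suffix: items already fixed."""
    counts = defaultdict(int)
    for prefix, c in pattern_base.items():          # count items in the base
        for item in prefix:
            counts[item] += c
    for item, c in counts.items():
        if c < min_count:
            continue
        new_suffix = (item,) + suffix
        results[new_suffix] = c                      # grow the pattern
        cond = defaultdict(int)                      # its conditional pattern base
        for prefix, pc in pattern_base.items():
            if item in prefix:
                cut = prefix[:prefix.index(item)]
                if cut:
                    cond[cut] += pc
        fp_growth(cond, new_suffix, min_count, results)

# The whole DB is the initial "pattern base": each ordered transaction, count 1
ordered = [tuple("fcamp"), tuple("fcabm"), tuple("fb"), tuple("cbp"), tuple("fcamp")]
base = defaultdict(int)
for t in ordered:
    base[t] += 1
results = {}
fp_growth(base, (), min_count=3, results=results)
# results contains e.g. ('m',): 3, ('c','m'): 3, ..., ('f','c','a','m'): 3
```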

7 Single FP-tree Path Generation
• Suppose an FP-tree T has a single path P.
• The complete set of frequent patterns of T can be generated by enumerating all combinations of the sub-paths of P.
– Example: the m-conditional FP-tree is the single path {} → f:3 → c:3 → a:3, so all frequent patterns concerning m are m, fm, cm, am, fcm, fam, cam, fcam.

Principles of FP-Growth
• Pattern growth property: let α be a frequent itemset in DB, B be α's conditional pattern base, and β be an itemset in B. Then α ∪ β is a frequent itemset in DB iff β is frequent in B.
• Example: "abcdef" is a frequent pattern if and only if "abcde" is a frequent pattern and "f" is frequent in the set of transactions containing "abcde".
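A short sketch of the single-path shortcut, assuming the path is given as (item, count) pairs; the names are illustrative. The suffix-only pattern (here m:3) is emitted at the previous recursion level.

```python
from itertools import chain, combinations

def single_path_patterns(path, suffix=()):
    """Every non-empty subset of the path items, combined with the suffix,
    is frequent; its count is the minimum count over the chosen items."""
    items = [i for i, _ in path]
    counts = dict(path)
    subsets = chain.from_iterable(combinations(items, r)
                                  for r in range(1, len(items) + 1))
    return {s + suffix: min(counts[i] for i in s) for s in subsets}

print(single_path_patterns([("f", 3), ("c", 3), ("a", 3)], suffix=("m",)))
# {('f','m'):3, ('c','m'):3, ('a','m'):3, ('f','c','m'):3, ..., ('f','c','a','m'):3}
```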

8 FP-growth vs. Apriori: Scalability with the Support Threshold
(Performance plots on the synthetic data sets T25I20D10K and T25I20D100K are omitted.)
• Our performance study shows:
– FP-growth is an order of magnitude faster than Apriori, and is also faster than tree-projection
• Reasoning:
– no candidate generation, no candidate tests
– uses a compact data structure
– eliminates repeated database scans
– the basic operations are counting and FP-tree building

9 Presentation of Association Rules
Rules can be presented in table form, or visualized as a plane graph or a rule graph (figures omitted).

Iceberg query: compute aggregates over one attribute or a set of attributes only for those groups whose aggregate value is above a certain threshold. E.g.:
select P.custID, P.itemID, sum(P.qty)
from purchase P
group by P.custID, P.itemID
having sum(P.qty) >= 10
• Compute iceberg queries efficiently with an Apriori-style strategy:
– first compute the lower dimensions
– then compute higher dimensions only when all of the lower ones are above the threshold
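A hedged in-memory sketch of that Apriori-style strategy (illustrative data and names; a real system would push this into SQL as above). It relies on quantities being non-negative, so a (custID, itemID) group can reach the threshold only if both of its 1-D aggregates do.

```python
from collections import Counter

purchases = [("c1", "i1", 6), ("c1", "i1", 5), ("c1", "i2", 2),
             ("c2", "i1", 12), ("c2", "i2", 9), ("c3", "i2", 4)]
T = 10

# Lower dimensions first: 1-D aggregates over custID and over itemID
cust_sum, item_sum = Counter(), Counter()
for cust, item, qty in purchases:
    cust_sum[cust] += qty
    item_sum[item] += qty
ok_cust = {c for c, s in cust_sum.items() if s >= T}
ok_item = {i for i, s in item_sum.items() if s >= T}

# Higher dimension only where both lower aggregates pass the threshold
pair_sum = Counter()
for cust, item, qty in purchases:
    if cust in ok_cust and item in ok_item:
        pair_sum[(cust, item)] += qty
print({k: v for k, v in pair_sum.items() if v >= T})   # {('c1','i1'): 11, ('c2','i1'): 12}
```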

10 Multiple-Level Association Rules
• Items often form a hierarchy.
• Items at a lower level are expected to have lower support.
• Rules on itemsets at the appropriate levels can be useful.
• The transaction database can be encoded based on dimensions and levels.
• We can explore shared multi-level mining.
(Example hierarchy: Food → milk, bread; milk → 2%, skim; bread → white, wheat; with brands such as Sunset and Fraser at the lowest level.)

11 Mining Multi-Level Associations
• A top-down, progressive deepening approach:
– first find high-level strong rules: milk ⇒ bread [20%, 60%]
– then find their lower-level "weaker" rules: 2% milk ⇒ wheat bread [6%, 50%]
• Variations on mining multiple-level association rules:
– level-crossed association rules: 2% milk ⇒ Wonder wheat bread
– association rules with multiple, alternative hierarchies: 2% milk ⇒ Wonder bread
• Uniform support vs. reduced support:
– Uniform support: the same minimum support for all levels; only one minimum support threshold is needed, and there is no need to examine itemsets containing any item whose ancestors lack minimum support. However, lower-level items do not occur as frequently, so a support threshold that is
  • too high misses low-level associations
  • too low generates too many high-level associations
– Reduced support: reduced minimum support at lower levels; there are 4 search strategies (a sketch of the third follows below):
  • level-by-level independent
  • level-cross filtering by k-itemset
  • level-cross filtering by single item
  • controlled level-cross filtering by single item
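A small sketch of one reduced-support strategy, level-cross filtering by single item: a level-2 item is examined only if its level-1 parent is frequent. The supports and thresholds below are assumed for illustration (the milk figures echo the next slide).

```python
parent = {"2% milk": "milk", "skim milk": "milk",
          "wheat bread": "bread", "white bread": "bread"}
level1_support = {"milk": 0.10, "bread": 0.12}                     # assumed
level2_support = {"2% milk": 0.06, "skim milk": 0.04,
                  "wheat bread": 0.05, "white bread": 0.02}        # assumed
min_sup = {1: 0.05, 2: 0.03}                                       # reduced at level 2

frequent_l1 = {i for i, s in level1_support.items() if s >= min_sup[1]}
candidates_l2 = {i for i in level2_support if parent[i] in frequent_l1}
frequent_l2 = {i for i in candidates_l2 if level2_support[i] >= min_sup[2]}
print(frequent_l1, frequent_l2)   # both parents pass, so all four level-2 items are examined
```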

12 Support: Uniform vs. Reduced, Example
Multi-level mining with uniform support (level 1 min_sup = 5%, level 2 min_sup = 5%):
– Milk [support = 10%] is frequent; 2% Milk [support = 6%] is frequent; Skim Milk [support = 4%] is not frequent
Multi-level mining with reduced support (level 1 min_sup = 5%, level 2 min_sup = 3%):
– Milk [support = 10%], 2% Milk [support = 6%], and Skim Milk [support = 4%] are all frequent

13 Multi-level Association: Redundancy Filtering
• Some rules may be redundant due to "ancestor" relationships between items. E.g.:
– milk ⇒ wheat bread [support = 8%, confidence = 70%]
– 2% milk ⇒ wheat bread [support = 2%, confidence = 72%]
• We say the first rule is an ancestor of the second rule.
• A rule is redundant if its support is close to the "expected" value, based on the rule's ancestor.

Multi-Level Mining: Progressive Deepening
• A top-down, progressive deepening approach:
– first mine high-level frequent items: milk (15%), bread (10%)
– then mine their lower-level "weaker" frequent itemsets: 2% milk (5%), wheat bread (4%)
• Different min_support thresholds across levels lead to different algorithms:
– if the same min_support is used across levels, then toss itemset t if any of t's ancestors is infrequent
– if reduced min_support is used at lower levels, then examine only those descendants whose ancestor's support is frequent/non-negligible
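A worked illustration of the "expected value" check; the assumption that 2% milk accounts for about a quarter of all milk purchases is ours, not from the slides. Under that assumption the expected support of "2% milk ⇒ wheat bread" is roughly 8% × 1/4 = 2%, with confidence expected to stay near the ancestor's 70%. The actual rule has support 2% and confidence 72%, both close to the expected values, so it carries little information beyond its ancestor and can be filtered out as redundant.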

14 Progressive Refinement
• Hierarchy of spatial relationships:
– "g_close_to": near_by, touch, intersect, contain, etc.
– first search for a rough relationship and then refine it
• Two-step mining of spatial associations:
– Step 1: rough spatial computation (as a filter), using MBRs or an R-tree for rough estimation
– Step 2: detailed spatial algorithm (as refinement), applied only to objects that passed the rough spatial association test (no less than min_support)
• Why progressive refinement?
– mining operators can be expensive or cheap, fine or rough
– trade speed for quality: step-by-step refinement
• Superset coverage property:
– preserve all positive answers: allow a false positive test but not a false negative test
• Two- or multi-step mining:
– first apply the rough/cheap operator (superset coverage)
– then apply the expensive algorithm on a substantially reduced candidate set (Koperski & Han, SSD'95)

15 Multi-Dimensional Association: Concepts
• Single-dimensional rules: buys(X, "milk") ⇒ buys(X, "bread")
• Multi-dimensional rules: 2 or more dimensions or predicates
– inter-dimension association rules (no repeated predicates): age(X, "19-25") ∧ occupation(X, "student") ⇒ buys(X, "coke")
– hybrid-dimension association rules (repeated predicates): age(X, "19-25") ∧ buys(X, "popcorn") ⇒ buys(X, "coke")
• Categorical attributes: finite number of possible values, no ordering among values
• Quantitative attributes: numeric, implicit ordering among values

16 Techniques for Mining MD Associations
• Search for frequent k-predicate sets:
– example: {age, occupation, buys} is a 3-predicate set
– techniques can be categorized by how quantitative attributes, such as age, are treated
1. Static discretization of quantitative attributes: quantitative attributes are statically discretized using predefined concept hierarchies
2. Quantitative association rules: quantitative attributes are dynamically discretized into "bins" based on the data distribution
3. Distance-based association rules: a dynamic discretization process that considers the distance between data points

Static Discretization of Quantitative Attributes
– discretized prior to mining using a concept hierarchy
– numeric values are replaced by ranges
– in a relational DB, finding all frequent k-predicate sets requires k or k+1 table scans
– a data cube is well suited for mining: the cells of an n-dimensional cuboid correspond to the predicate sets, so mining from data cubes can be much faster
(Cuboid lattice over the predicates age, income, buys: (), (age), (income), (buys), (age, income), (age, buys), (income, buys), (age, income, buys).)
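A small sketch of counting the support of every predicate set in the (age, income, buys) lattice from already-discretized tuples, which is essentially what the data-cube cells hold; the attribute values and threshold are made up for illustration.

```python
from collections import Counter
from itertools import combinations

rows = [{"age": "19-25", "income": "24K-48K", "buys": "coke"},
        {"age": "19-25", "income": "24K-48K", "buys": "coke"},
        {"age": "26-30", "income": "24K-48K", "buys": "chips"},
        {"age": "19-25", "income": "48K-72K", "buys": "coke"}]
attrs = ("age", "income", "buys")

support = Counter()
for row in rows:
    for k in range(1, len(attrs) + 1):
        for subset in combinations(attrs, k):            # one cuboid per subset
            support[tuple((a, row[a]) for a in subset)] += 1

min_count = 2
frequent = {cell: c for cell, c in support.items() if c >= min_count}
print(frequent)
# includes e.g. (('age','19-25'), ('income','24K-48K'), ('buys','coke')) -> 2
```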

17 Quantitative Association Rules
Example: age(X, "30-34") ∧ income(X, "24K-48K") ⇒ buys(X, "high resolution TV")
• Numeric attributes are dynamically discretized such that the confidence or compactness of the mined rules is maximized.
• 2-D quantitative association rules: A_quan1 ∧ A_quan2 ⇒ A_cat
• Cluster "adjacent" association rules to form general rules using a 2-D grid (example grid omitted).

18 ARCS (Association Rule Clustering System)
How does ARCS work?
1. Binning
2. Find frequent predicate sets
3. Clustering
4. Optimize
• Limitations of ARCS:
– only quantitative attributes on the LHS of rules
– only 2 attributes on the LHS (the 2-D limitation)
• An alternative to ARCS: non-grid-based, equi-depth binning, with clustering based on a measure of partial completeness ("Mining Quantitative Association Rules in Large Relational Tables" by R. Srikant and R. Agrawal).

19 Mining Distance-based Association Rules
• Binning methods do not capture the semantics of interval data.
• Distance-based partitioning gives a more meaningful discretization, considering:
– the density / number of points in an interval
– the "closeness" of points in an interval
• S[X] is a set of N tuples t_1, t_2, …, t_N projected on the attribute set X.
• The diameter of S[X] measures how spread out those projected tuples are, where dist_X is a distance metric such as Euclidean or Manhattan distance (see the formula below).
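A standard way to write this diameter (following the textbook definition of distance-based association rules in Han & Kamber) is the average pairwise distance over the projected tuples:

d(S[X]) = \frac{\sum_{i=1}^{N} \sum_{j=1}^{N} \operatorname{dist}_X\bigl(t_i[X],\, t_j[X]\bigr)}{N\,(N-1)}

A smaller diameter means the interval's points are packed more densely, which is exactly what distance-based partitioning tries to achieve.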

20 Minkowski Metrics
L_p-metrics (aka Minkowski metrics): d_p(X,Y) = ( Σ_{i=1 to n} w_i |x_i − y_i|^p )^{1/p}, with the weights w_i assumed = 1.
Unit disks and dividing lines can be drawn for:
– p = 1 (Manhattan)
– p = 2 (Euclidean)
– p = 3, 4, …
– p = max (chessboard): d_max ≡ max_i |x_i − y_i|
– p = ½, ⅓, ¼, …
Claim: d_max ≡ d_∞ ≡ lim_{p→∞} d_p(X,Y).
Proof (sketch): consider lim_{p→∞} ( Σ_{i=1 to n} a_i^p )^{1/p} with a_i = |x_i − y_i|, and let b ≡ max_i(a_i). For p large enough, the other a_i^p become negligible compared with b^p (since y = x^p grows ever more steeply), so Σ_{i=1 to n} a_i^p ≈ k·b^p, where k is the multiplicity of b in the sum. Hence ( Σ_{i=1 to n} a_i^p )^{1/p} ≈ k^{1/p}·b → b, because k^{1/p} → 1.
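A quick numerical check of the limit claim in plain Python (no external libraries; the example vector is arbitrary, and the values match the p > 1 table on the next slide):

```python
def minkowski(x, y, p):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

x, y = (0.9, 0.1), (0.0, 0.0)
for p in (1, 2, 9, 100):
    print(p, minkowski(x, y, p))
# 1 -> 1.0, 2 -> 0.9055..., 9 -> 0.9000000003, 100 -> 0.9  (≈ max|x_i − y_i| = 0.9)
```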

21 p > 1 Minkowski Metrics: Numeric Examples
Each block fixes X = (x1, x2) and Y = (0, 0) and lists |x1 − y1|^p, |x2 − y2|^p and d_p(X,Y) as p grows; the MAX row is d_max = max_i |x_i − y_i|, the value d_p approaches.

X = (0.5, 0.5):
p = 2: 0.25, 0.25, d_p = 0.7071067812
p = 4: 0.0625, 0.0625, d_p = 0.5946035575
p = 9: 0.001953125, 0.001953125, d_p = 0.5400298694
p = 100: 7.888609E-31, 7.888609E-31, d_p = 0.503477775
MAX: d_max = 0.5

X = (0.7071, 0.7071):
p = 2: 0.5, 0.5, d_p = 1
p = 3: 0.3535533906, 0.3535533906, d_p = 0.8908987181
p = 7: 0.0883883476, 0.0883883476, d_p = 0.7807091822
p = 100: 8.881784E-16, 8.881784E-16, d_p = 0.7120250978
MAX: d_max = 0.7071067812

X = (0.99, 0.99):
p = 2: 0.9801, 0.9801, d_p = 1.4000714267
p = 8: 0.9227446944, 0.9227446944, d_p = 1.0796026553
p = 100: 0.3660323413, 0.3660323413, d_p = 0.9968859946
p = 1000: 0.0000431712, 0.0000431712, d_p = 0.9906864536
MAX: d_max = 0.99

X = (1, 1):
p = 2: 1, 1, d_p = 1.4142135624
p = 9: 1, 1, d_p = 1.0800597389
p = 100: 1, 1, d_p = 1.0069555501
p = 1000: 1, 1, d_p = 1.0006933875
MAX: d_max = 1

X = (0.9, 0.1):
p = 2: 0.81, 0.01, d_p = 0.9055385138
p = 9: 0.387420489, 0.000000001, d_p = 0.9000000003
p = 100: 0.0000265614, ~0, d_p = 0.9
p = 1000: 1.747871E-46, ~0, d_p = 0.9
MAX: d_max = 0.9

X = (3, 3):
p = 2: 9, 9, d_p = 4.2426406871
p = 3: 27, 27, d_p = 3.7797631497
p = 8: 6561, 6561, d_p = 3.271523198
p = 100: 5.153775E+47, 5.153775E+47, d_p = 3.0208666502
MAX: d_max = 3

X = (90, 45):
p = 6: 531441000000, 8303765625, d_p = 90.232863532
p = 9: 3.874205E+17, 7.566806E+14, d_p = 90.019514317
p = 100: (overflow), (overflow), d_p = 90
MAX: d_max = 90

22 p < 1 Minkowski Metrics: Numeric Examples
d_{1/p}(X,Y) = ( Σ_{i=1 to n} |x_i − y_i|^{1/p} )^p
For p < 1, the case p = 0 (the limit as p → 0) does not exist (it does not converge).
Each block fixes X = (x1, x2) and Y = (0, 0) and lists |x1 − y1|^p, |x2 − y2|^p and d_p(X,Y) as p shrinks below 1; the p = 2 row is included for comparison.

X = (0.1, 0.1):
p = 1: 0.1, 0.1, d_p = 0.2
p = 0.8: 0.1584893192, 0.1584893192, d_p = 0.237841423
p = 0.4: 0.3981071706, 0.3981071706, d_p = 0.5656854249
p = 0.2: 0.6309573445, 0.6309573445, d_p = 3.2
p = 0.1: 0.7943282347, 0.7943282347, d_p = 102.4
p = 0.04: 0.9120108394, 0.9120108394, d_p = 3355443.2
p = 0.02: 0.954992586, 0.954992586, d_p = 112589990684263
p = 0.01: 0.977237221, 0.977237221, d_p = 1.2676506002E+29
p = 2: 0.01, 0.01, d_p = 0.1414213562

X = (0.5, 0.5):
p = 1: 0.5, 0.5, d_p = 1
p = 0.8: 0.5743491775, 0.5743491775, d_p = 1.189207115
p = 0.4: 0.7578582833, 0.7578582833, d_p = 2.8284271247
p = 0.2: 0.8705505633, 0.8705505633, d_p = 16
p = 0.1: 0.9330329915, 0.9330329915, d_p = 512
p = 0.04: 0.9726549474, 0.9726549474, d_p = 16777216
p = 0.02: 0.9862327045, 0.9862327045, d_p = 5.6294995342E+14
p = 0.01: 0.9930924954, 0.9930924954, d_p = 6.3382530011E+29
p = 2: 0.25, 0.25, d_p = 0.7071067812

X = (0.9, 0.1):
p = 1: 0.9, 0.1, d_p = 1
p = 0.8: 0.9191661188, 0.1584893192, d_p = 1.097993846
p = 0.4: 0.9587315155, 0.3981071706, d_p = 2.14447281
p = 0.2: 0.9791483624, 0.6309573445, d_p = 10.8211133585
p = 0.1: 0.9895192582, 0.7943282347, d_p = 326.27006047
p = 0.04: 0.9957944476, 0.9120108394, d_p = 10312196.9619
p = 0.02: 0.9978950083, 0.954992586, d_p = 341871052443154
p = 0.01: 0.9989469497, 0.977237221, d_p = 3.8259705676E+29
p = 2: 0.81, 0.01, d_p = 0.9055385138

23 Min Dissimilarity Function
The d_min function, d_min(X,Y) = min_{i=1 to n} |x_i − y_i|, is strange: it is not a pseudo-metric. Its unit disk, and the neighborhood of one point relative to another (the dividing neighborhood, i.e. the points closer to the first than to the second), show major bifurcations (figures omitted).
http://www.cs.ndsu.nodak.edu/~serazi/research/Distance.html
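A concrete check that d_min violates the triangle inequality (our own example, not from the slides): in 2-D take A = (0, 0), B = (1, 0), C = (1, 1). Then d_min(A, B) = min(1, 0) = 0 and d_min(B, C) = min(0, 1) = 0, yet d_min(A, C) = min(1, 1) = 1 > 0 + 0, so d_min fails to be even a pseudo-metric.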

24 Other Interesting Metrics
Canberra metric: d_c(X,Y) = Σ_{i=1 to n} |x_i − y_i| / (x_i + y_i)  (a normalized Manhattan distance)
Squared Cord metric: d_sc(X,Y) = Σ_{i=1 to n} (√x_i − √y_i)^2  (already discussed as L_p with p = 1/2)
Squared Chi-squared metric: d_chi(X,Y) = Σ_{i=1 to n} (x_i − y_i)^2 / (x_i + y_i)
HOBbit metric (Hi Order Binary bit): d_H(X,Y) = max_{i=1 to n} { m − HOB(x_i, y_i) }, where, for m-bit integers A = a_1…a_m and B = b_1…b_m, HOB(A,B) = max { s : ∀i (1 ≤ i ≤ s ⇒ a_i = b_i) } (related to Hamming distance in coding theory)
Scalar product metric: d_sp(X,Y) = X ∘ Y = Σ_{i=1 to n} x_i · y_i
Hyperbolic metrics: map infinite space one-to-one onto a sphere.
Which of these are rotationally invariant? Translationally invariant? Other?
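A small Python sketch of two of these, the Canberra metric and the HOBbit metric; the helper names are ours, and the HOB helper assumes unsigned m-bit integer coordinates.

```python
def canberra(x, y):
    # Normalized Manhattan distance; assumes x_i + y_i != 0 in every dimension
    return sum(abs(a - b) / (a + b) for a, b in zip(x, y))

def hob(a, b, m=8):
    """Number of matching high-order bits of two m-bit unsigned integers."""
    xor = a ^ b
    return m if xor == 0 else m - xor.bit_length()

def hobbit_distance(x, y, m=8):
    return max(m - hob(a, b, m) for a, b in zip(x, y))

print(canberra((1, 3), (2, 5)))                      # 1/3 + 2/8 = 0.5833...
print(hobbit_distance((0b10110000, 0b00001111),
                      (0b10100000, 0b00001110)))     # max(5, 1) = 5
```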

25 Criticism of Support and Confidence
• Example 1 (Aggarwal & Yu, PODS '98):
– among 5000 students, 3000 play basketball, 3750 eat cereal, and 2000 both play basketball and eat cereal
– play basketball ⇒ eat cereal [40%, 66.7%] is misleading, because the overall percentage of students eating cereal is 75%, which is higher than 66.7%
– play basketball ⇒ not eat cereal [20%, 33.3%] is far more accurate, although it has lower support and confidence
• Example 2:
– X and Y are positively correlated, while X and Z are negatively related
– yet the support and confidence of X ⇒ Z dominate
• We need a measure of dependent or correlated events; P(B|A)/P(B) is also called the lift of the rule A ⇒ B.
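Working the lift out for Example 1 (all counts are from the slide): lift(play basketball ⇒ eat cereal) = P(cereal | basketball) / P(cereal) = (2000/3000) / (3750/5000) = 0.667 / 0.75 ≈ 0.89 < 1, so the two events are in fact slightly negatively correlated despite the rule's 66.7% confidence. By contrast, lift(play basketball ⇒ not eat cereal) = (1000/3000) / (1250/5000) = 0.333 / 0.25 ≈ 1.33 > 1, confirming that the second rule is the more informative one.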

26 Constraint-Based Mining
• Interactive, exploratory mining of gigabytes of data: could it be real? Yes, by making good use of constraints!
• What kinds of constraints can be used in mining?
– Knowledge-type constraints: classification, association, etc.
– Data constraints: SQL-like queries, e.g. find product pairs sold together in Vancouver in Dec. '98
– Dimension/level constraints: in relevance to region, price, brand, customer category
– Rule constraints: e.g. small sales (price < $200)
– Interestingness constraints: e.g. strong rules (min_support ≥ 3%, min_confidence ≥ 60%)
• Two kinds of rule constraints:
– Rule form constraints: meta-rule guided mining, e.g. P(x, y) ∧ Q(x, w) ⇒ takes(x, "database systems")
– Rule (content) constraints: constraint-based query optimization (Ng et al., SIGMOD '98), e.g. sum(LHS) < 20 ∧ count(LHS) > 3 ∧ sum(RHS) > 1000
• 1-variable vs. 2-variable constraints (Lakshmanan et al., SIGMOD '99):
– 1-var: a constraint confining only one side (LHS or RHS) of the rule, as in the examples above
– 2-var: a constraint confining both sides, e.g. sum(LHS) < min(RHS) ∧ max(RHS) < 5·sum(LHS)

27 Constraint-Based Association Query
• Database: (1) trans(TID, Itemset), (2) itemInfo(Item, Type, Price)
• A constrained association query (CAQ) is of the form {(S1, S2) | C}, where C is a set of constraints on S1 and S2, including a frequency constraint
• A classification of (single-variable) constraints:
– Class constraint: S ⊆ A, e.g. S ⊆ Item
– Domain constraints:
  • S θ v, θ ∈ {=, ≠, <, ≤, >, ≥}, e.g. S.Price < 100
  • v θ S, where θ is ∈ or ∉, e.g. snacks ∉ S.Type
  • V θ S or S θ V, where θ is a set relationship such as ⊆, ⊂, = or ≠, e.g. {snacks, sodas} ⊆ S.Type
– Aggregation constraint: agg(S) θ v, where agg ∈ {min, max, sum, count, avg} and θ ∈ {=, ≠, <, ≤, >, ≥}, e.g. constraints on count(S1.Type) and on avg(S2.Price)
• Given a CAQ = {(S1, S2) | C}, the algorithm should be:
– sound: it finds only frequent sets that satisfy the given constraints C
– complete: all frequent sets satisfying the constraints C are found
• A naïve solution: apply Apriori to find all frequent sets, and then test them for constraint satisfaction one by one
• Our approach: comprehensively analyze the properties of the constraints and push them as deeply as possible inside the frequent set computation

28 Constraints
• Anti-monotone: a constraint C_a is anti-monotone iff, for any pattern S not satisfying C_a, no super-pattern of S can satisfy C_a.
• Monotone: a constraint C_m is monotone iff, for any pattern S satisfying C_m, every super-pattern of S also satisfies it.
• Succinct:
– a subset of items I_s is a succinct set if it can be expressed as σ_p(I) for some selection predicate p, where σ is the selection operator
– SP ⊆ 2^I is a succinct power set if there is a fixed number of succinct sets I_1, …, I_k ⊆ I such that SP can be expressed in terms of the strict power sets of I_1, …, I_k using union and minus
– a constraint C_s is succinct provided SAT_Cs(I) is a succinct power set
• Convertible (suppose all items in patterns are listed in a total order R):
– a constraint C is convertible anti-monotone iff a pattern S satisfying the constraint implies that each suffix of S w.r.t. R also satisfies C
– a constraint C is convertible monotone iff a pattern S satisfying the constraint implies that each pattern of which S is a suffix w.r.t. R also satisfies C
(Diagram relating succinctness, anti-monotonicity, monotonicity, and convertible vs. inconvertible constraints omitted.)

29 Properties of Constraints
Anti-monotonicity
• If a set S violates the constraint, any superset of S violates the constraint.
• Examples (item prices assumed non-negative):
– sum(S.Price) ≤ v is anti-monotone
– sum(S.Price) ≥ v is not anti-monotone
– sum(S.Price) = v is partly anti-monotone
• Application: push "sum(S.Price) ≤ 1000" deeply into the iterative frequent set computation.

Example of a convertible constraint: avg(S) ≥ v
• Let R be the value-descending order over the set of items, e.g. I = {9, 8, 6, 4, 3, 1}.
• avg(S) ≥ v is convertible monotone w.r.t. R:
– if S is a suffix of S1, then avg(S1) ≥ avg(S); e.g. {8, 4, 3} is a suffix of {9, 8, 4, 3}, and avg({9, 8, 4, 3}) = 6 ≥ avg({8, 4, 3}) = 5
– so if S satisfies avg(S) ≥ v, so does S1; e.g. {8, 4, 3} satisfies avg(S) ≥ 4, so does {9, 8, 4, 3}

Succinctness
• For any sets S1 and S2 satisfying C, S1 ∪ S2 satisfies C.
• Given A1, the set of size-1 itemsets satisfying C, any set S satisfying C is based on A1 (it has a subset belonging to A1).
• Examples: sum(S.Price) ≥ v is not succinct; min(S.Price) ≤ v is succinct.
• Optimization: if C is succinct, then C is pre-counting prunable; the satisfaction of the constraint alone is not affected by the iterative support counting.

(The slide closes with a summary table classifying each constraint form (S θ v with θ ∈ {=, ≤, ≥}; v ∈ S; S ⊇ V; S ⊆ V; S = V; min(S) ≤ / ≥ / = v; max(S) ≤ / ≥ / = v; count(S) ≤ / ≥ / = v; sum(S) ≤ / ≥ / = v; avg(S) θ v; the frequency constraint) by anti-monotonicity, succinctness, and convertibility, with entries yes / no / partly / weakly; table omitted.)
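A hedged sketch of pushing an anti-monotone constraint (sum of prices within a budget) into level-wise candidate generation, so that violating itemsets are pruned before any support counting; the prices, budget, and helper names are illustrative, and the usual Apriori subset-pruning and counting steps are omitted for brevity.

```python
from itertools import combinations

price = {"a": 300, "b": 500, "c": 200, "d": 900}    # illustrative item prices
BUDGET = 1000                                        # constraint: sum(S.Price) <= 1000

def satisfies(itemset):
    return sum(price[i] for i in itemset) <= BUDGET

def constrained_candidates(prev_level, k):
    """Apriori-style join with the anti-monotone constraint pushed in:
    if an itemset violates sum <= BUDGET, every superset does too, so it
    can be discarded here instead of being counted against the database."""
    out = set()
    for a, b in combinations(prev_level, 2):
        u = a | b
        if len(u) == k and satisfies(u):
            out.add(u)
    return out

L1 = {frozenset([i]) for i in price if satisfies([i])}
L2 = constrained_candidates(L1, 2)
print(sorted(map(sorted, L2)))   # {a,d}, {b,d}, {c,d} never reach support counting
```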

