
1 Laboratory Research and Achievements Briefing: Content and Knowledge Management Laboratory (B), Data Mining Part. Director: Anthony J. T. Lee. Presenter: Wan-chuen Lin

2 Outline
Introduction of basic data mining concepts about our research topics
Brief description of doctoral research
Topic 1: Mining frequent itemsets with multi-dimensional constraints
Topic 2: Mining the inter-transactional association rules of multi-dimensional interval patterns
Topic 3: Inter-sequence association rules mining
Topic 4: Mining association rules among time-series data

3 Introduction of Data Mining
Data mining is the task of discovering knowledge from large amounts of data. One of the fundamental data mining problems, frequent itemset mining, covers a broad spectrum of mining topics, including association rules, sequential patterns, etc. Frequent itemset mining discovers all itemsets whose support in the database exceeds a user-specified threshold.

4 Introduction of Association Rules
An association rule is of the form X → Y, where X and Y are frequent itemsets in the given database and X ∩ Y = ∅. The support of X → Y is the percentage of transactions in the given database that contain both X and Y, i.e., P(X ∪ Y). The confidence of X → Y is the percentage of transactions in the given database containing X that also contain Y, i.e., P(Y|X).
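To make the two measures concrete, here is a small Python sketch (not from the original slides; the transaction database and the rule X → Y are made-up) that counts support and confidence directly from the definitions above:

```python
# Minimal sketch: computing support and confidence of a rule X -> Y.
# The transaction database and the rule below are illustrative only.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
]

X = {"diapers"}
Y = {"beer"}

n = len(transactions)
count_X = sum(1 for t in transactions if X <= t)          # transactions containing X
count_XY = sum(1 for t in transactions if (X | Y) <= t)   # transactions containing both X and Y

support = count_XY / n            # P(X ∪ Y)
confidence = count_XY / count_X   # P(Y | X)

print(f"support = {support:.2f}, confidence = {confidence:.2f}")
```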

5 Introduction of Sequential Patterns
A sequence is an ordered list of itemsets, denoted by ⟨s1, s2, …, sn⟩, where each sj is an itemset. sj is also called an element of the sequence and is denoted as (x1 x2 … xm), where xk is an item. The support of a sequence α in a sequence database is the number of tuples containing α. A sequence α is called a sequential pattern if support(α) ≥ min-support.
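A minimal sketch, with an assumed toy sequence database, of the containment test behind the support definition above (each element of the pattern must be a subset of some element of the data sequence, in order):

```python
# Sketch: a sequence is a list of itemsets (elements); it is contained in a
# data sequence if each of its elements is a subset of some element of the
# data sequence, preserving order. Support = number of tuples containing it.
def contains(data_seq, pattern):
    i = 0
    for element in data_seq:
        if i < len(pattern) and pattern[i] <= element:
            i += 1
    return i == len(pattern)

# Illustrative sequence database (each tuple is a list of itemsets).
seq_db = [
    [{"a"}, {"b", "c"}, {"d"}],
    [{"a", "d"}, {"c"}, {"b"}],
    [{"a"}, {"c"}, {"b", "d"}],
]

pattern = [{"a"}, {"b"}]
support = sum(1 for s in seq_db if contains(s, pattern))
min_support = 2
print(support, support >= min_support)  # a sequential pattern if True
```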

6 Algorithm for Mining Frequent Itemsets: Apriori
Candidate set generation-and-test. Level-wise: it iteratively generates candidate k-itemsets Ck from the previously found frequent (k-1)-itemsets Lk-1 by a join step, and then checks the supports of the candidates to form the frequent k-itemsets Lk (Lk-1 → join → Ck → support check → Lk).
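A compact sketch of this level-wise loop: join Lk-1 with itself to form Ck, prune, then support-check to obtain Lk. The toy database and threshold are illustrative assumptions:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise candidate generation-and-test, as sketched on this slide."""
    items = {frozenset([i]) for t in transactions for i in t}
    # L1: frequent 1-itemsets.
    L = {c for c in items
         if sum(1 for t in transactions if c <= t) >= min_support}
    frequent = set(L)
    k = 2
    while L:
        # Join step: candidate k-itemsets from frequent (k-1)-itemsets.
        C = {a | b for a in L for b in L if len(a | b) == k}
        # Prune step: every (k-1)-subset of a candidate must be frequent.
        C = {c for c in C
             if all(frozenset(s) in L for s in combinations(c, k - 1))}
        # Support check.
        L = {c for c in C
             if sum(1 for t in transactions if c <= t) >= min_support}
        frequent |= L
        k += 1
    return frequent

db = [{"B", "E", "C", "A"}, {"B", "E", "A"}, {"D", "A"},
      {"B", "D", "A"}, {"B", "D", "E"}]
print(apriori(db, min_support=2))
```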

7 Algorithm for Mining Frequent Itemsets (cont'd): FP-growth
The method constructs a compressed frequent-pattern tree, called the FP-tree. It uses a divide-and-conquer strategy to recursively decompose the mining task into a set of smaller tasks on conditional databases, and concatenates the suffix itemset with the frequent itemsets generated from each conditional FP-tree.
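The following is only a rough sketch of the divide-and-conquer pattern-growth idea on conditional (projected) databases; for brevity it projects plain transaction lists instead of building the compressed FP-tree that FP-growth actually uses:

```python
from collections import Counter

def pattern_growth(db, min_support, suffix=frozenset()):
    """Grow suffixes recursively on conditional databases.
    Simplified: no FP-tree is built; transactions are projected directly."""
    counts = Counter(item for t in db for item in t)
    # Frequent items in this conditional database, in increasing frequency order.
    frequent_items = sorted((i for i in counts if counts[i] >= min_support),
                            key=lambda i: counts[i])
    results = {}
    for idx, item in enumerate(frequent_items):
        new_suffix = suffix | {item}
        results[new_suffix] = counts[item]
        # item-conditional database: transactions containing `item`,
        # restricted to items not yet processed at this level.
        allowed = set(frequent_items[idx + 1:])
        cond_db = [t & allowed for t in db if item in t]
        results.update(pattern_growth([t for t in cond_db if t],
                                      min_support, new_suffix))
    return results

db = [{"B", "E", "C", "A"}, {"B", "E", "A"}, {"D", "A"},
      {"B", "D", "A"}, {"B", "D", "E"}]
print(pattern_growth(db, min_support=2))
```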

8 Algorithm for Mining Sequential Patterns: PrefixSpan
It first finds the length-1 sequential patterns in the target database, and then partitions the database into smaller projected databases, one for each sequential pattern previously found used as a prefix. The sequential patterns are mined by constructing the corresponding projected databases and mining each of them recursively. The method preserves the element order of each tuple during the mining process.
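A hedged sketch of the PrefixSpan recursion, simplified to sequences of single items rather than itemset elements; the toy sequence database is assumed for illustration:

```python
def prefixspan(db, min_support, prefix=()):
    """Sketch of PrefixSpan, simplified to sequences of single items.
    db is the (projected) database of suffix sequences; prefix is the
    sequential pattern grown so far."""
    results = {}
    # Count the sequences in which each item appears.
    counts = {}
    for seq in db:
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, count in counts.items():
        if count < min_support:
            continue
        new_prefix = prefix + (item,)
        results[new_prefix] = count
        # Project: keep the suffix after the first occurrence of `item`.
        projected = []
        for seq in db:
            if item in seq:
                suffix = seq[seq.index(item) + 1:]
                if suffix:
                    projected.append(suffix)
        results.update(prefixspan(projected, min_support, new_prefix))
    return results

seq_db = [["a", "b", "c"], ["a", "c", "b"], ["a", "b", "d"], ["b", "c", "a"]]
print(prefixspan(seq_db, min_support=2))
```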

9 Brief Description of Doctoral Research
Mining calling path patterns in GSM networks. Two problems of mining calling path patterns: mining PMFCPs and mining periodic PMFCPs. Graph structures [(periodic) frequent calling path graph] and graph-based mining algorithms, based on a depth-first search. No candidate paths are generated, and the database is scanned only once if the whole graph structure can be held in main memory.

10 Brief Description of Doctoral Research (cont'd)
Bioinformatics data mining: gene clustering; sequence comparison, alignment and compression (DNA sequences, protein sequences). Applications: phylogenetic trees to predict the function of a new protein; the relationship between DNA sequences and diseases.

11 Topic 1: Mining Frequent Itemsets with Multi-dimensional Constraints
Frequent itemset mining often generates a very large number of frequent itemsets, while only a subset of the frequent itemsets and association rules is of interest to users, so users need additional post-processing to find the useful ones. Constraint-based mining pushes user-specified constraints deep inside the mining process to improve performance. With multi-dimensional items, constraints can be imposed on multiple dimensional attributes.

12 Topic 1: Mining Frequent Itemsets with Multi-dimensional Constraints
A multi-dimensional item is described by attributes (dimensions) a1, a2, …, am, so an item ik is a tuple (k1, k2, …, km). For an item A = (A1, A2, …, Am), the value of attribute a1 is written A1 = A.a1. Multi-dimensional constraints are stated over these attribute values.

13 Topic 1: Mining Frequent Itemsets with Multi-dimensional Constraints
Multi-dimensional constraints can be categorized according to constraint properties: anti-monotone, monotone, convertible and inconvertible. They can also be classified according to the number of sub-constraints included: a single constraint against multiple dimensions, e.g., max(S.cost) ≤ min(S.price); or a conjunction and/or disjunction of multiple sub-constraints, e.g., (C1: S.cost ≤ v1) ∧ (C2: S.price ≥ v2).

14 Topic 1: Mining Frequent Itemsets with Multi-dimensional Constraints
We extend constraints to apply to multi-dimensional itemsets and develop algorithms for mining frequent itemsets with multi-dimensional constraints by extending CFG (Constrained Frequent Pattern Growth). Overview of our algorithm: Phase 1: frequency check; Phase 2: constraint check; Phase 3: conditional database construction.
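As an illustration only (not the thesis algorithm itself), the sketch below runs the three phases for a single anti-monotone constraint over one assumed dimensional attribute, max(S.cost) ≤ V; the cost table, the threshold V, and the database are made-up assumptions:

```python
from collections import Counter

# Assumed dimensional attribute and threshold (illustrative only).
cost = {"A": 3, "B": 5, "C": 9, "D": 2, "E": 4}
V = 6

def C(itemset):
    # Anti-monotone constraint: max(S.cost) <= V.
    return max(cost[i] for i in itemset) <= V

def constrained_mine(db, min_support, suffix=frozenset()):
    counts = Counter(i for t in db for i in t)
    # Phase 1: frequency check.
    order = sorted((i for i in counts if counts[i] >= min_support),
                   key=lambda i: counts[i])
    results = []
    for idx, item in enumerate(order):
        candidate = suffix | {item}
        # Phase 2: constraint check; because the constraint is anti-monotone,
        # a violating candidate lets us prune its whole branch.
        if not C(candidate):
            continue
        results.append(candidate)
        # Phase 3: conditional database construction.
        allowed = set(order[idx + 1:])
        cond_db = [t & allowed for t in db if item in t]
        results += constrained_mine([t for t in cond_db if t],
                                    min_support, candidate)
    return results

db = [{"B", "E", "C", "A"}, {"B", "E", "A"}, {"D", "A"}, {"B", "D", "A"},
      {"B", "D", "E"}, {"B", "D", "E", "C", "A"}]
print(constrained_mine(db, min_support=2))
```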

15 Example: constraint Cam (anti-monotone): max(S.cost) ≤ min(S.price)
Database: BECA, BEA, DA, BDA, BDE, BDECA, BEC, BDEC, DEC, BDC
Frequent items: B, D, E, C, A. C(BDECA) = false; C(B) = C(D) = C(E) = C(C) = C(A) = true.
A-conditional database: BEC, BE, D, BD, BDEC
Frequent items: B, D, E, C. C(BA) = false; C(DA) = true; C(EA) = true; C(CA) = false.
EA-conditional database: D
Frequent items: ∅

16 Topic 2: Mining Inter-transactional Association Rules of Multi-dimensional Interval Patterns
A transaction could be the items bought by the same customer, the events that happened on the same day, and so on. Intra-transactional association rules describe associations among items within the same transaction. Ex: buy(X, diapers) => buy(X, beer) [support = 80%]. Inter-transactional association rules describe associations among items across different transactions. Ex: If the prices of IBM and SUN go up, Microsoft's price will most likely [80%] increase the next day.

17 Topic 2: Mining Inter-transactional Association Rules of Multi-dimensional Interval Patterns
Interval data differ from point data in that they occupy regions of non-zero size. Multi-dimensional intervals can be represented as line segments (1-D), rectangles (2-D), hyper-cubes (n-D), etc. Extended item: an item annotated with its location, denoted Δ(location). Reference point: the smallest Δ(location) among all extended items. Maxspan: a sliding window; only associations covered by it are considered.

18 Example
There are two cubes in the 3-dimensional space, located at (0,2,1) and (1,1,0). The reference point is the component-wise smallest location, (0,1,0). Relative to the reference point, the two extended items are denoted Δ(0,1,1) and Δ(1,0,0), since (0,2,1) - (0,1,0) = (0,1,1) and (1,1,0) - (0,1,0) = (1,0,0).
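A tiny sketch of the normalization in this example: the reference point is taken as the component-wise minimum of the locations, and each extended item is re-expressed relative to it (the Δ notation follows the reconstruction used above):

```python
# Sketch: normalize extended items by the reference point, i.e. the
# component-wise smallest location, as in the example above.
locations = [(0, 2, 1), (1, 1, 0)]          # the two cubes from the slide

reference = tuple(min(coords) for coords in zip(*locations))
relative = [tuple(a - r for a, r in zip(loc, reference)) for loc in locations]

print(reference)   # (0, 1, 0)
print(relative)    # [(0, 1, 1), (1, 0, 0)]
```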

19 Algorithm (Apriori-like): Example
Support: 10% (10% × 20 = 2). Maxspan: 4.
L1: Δ(0,0), …

20 Algorithm (Apriori-like): Example (cont'd)
Reminder: the Apriori-like flow is Lk-1 → join → Ck → support check → Lk.
L2: {Δ(0,0), Δ(1,1)}, {Δ(1,0), Δ(0,1)}, {Δ(0,0), Δ(2,0)}, {Δ(0,0), Δ(3,0)}
L3: {Δ(3,0), Δ(2,1), Δ(0,3)}, {Δ(1,0), Δ(0,1), Δ(2,1)}, {Δ(3,0), Δ(0,3), Δ(4,1)}, {Δ(2,0), Δ(0,2), Δ(4,0)}
L4: {Δ(0,3), Δ(4,1), Δ(2,1), Δ(3,0)}

21 Topic 3: Inter-sequence Association Rules Mining
Inter-sequence model (figure): a sequence database in which each tuple has a transaction ID and a transaction time, 1 through 10.

22 Topic 3: Inter-sequence Association Rules Mining (cont'd)
Extended sequence (denoted Δt): a sequence s placed at relative time point Δt. Algorithm: Step 1: use PrefixSpan to find all sequential patterns. Step 2: use an Apriori-like method to check whether each set of extended sequences is large. L-buckets (list buckets) and C-buckets (candidate buckets) are used to improve mining efficiency.
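An illustrative sketch of Step 2's candidate generation, without the L-bucket and C-bucket optimizations: each sequential pattern anchored at offset Δ0 is paired with patterns at offsets up to maxspan, matching the C2 sets shown on the next slides. The pattern names are placeholders:

```python
from itertools import product

def candidate_2_sets(patterns, maxspan):
    """Pair each sequential pattern anchored at offset 0 with every pattern
    at a relative offset 1..maxspan. An extended sequence is represented as
    a (pattern, offset) pair."""
    candidates = []
    for p, q in product(patterns, repeat=2):
        for offset in range(1, maxspan + 1):
            candidates.append([(p, 0), (q, offset)])
    return candidates

patterns = ["s1", "s2", "s3"]   # placeholders for the patterns PrefixSpan found
for cand in candidate_2_sets(patterns, maxspan=2)[:4]:
    print(cand)
```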

23 Example
min_support = 3, maxspan = 2. (Figure: the database, a table of ten tuples with Tran. ID, Tran. Time and Sequence, and the sequential patterns found by PrefixSpan.)

24 Example (cont'd)
L1: the sequential patterns found by PrefixSpan, each anchored at relative time Δ0.
Candidates C2: for the patterns in L1, the extended-sequence sets of the form {Δ0, Δ1} and {Δ0, Δ2} (offsets within maxspan = 2).

25 Example (cont'd)
L2: the candidate sets in C2 whose supports reach min_support. The Apriori-like iteration then continues: Lk-1 → Ck → Lk.

26 Topic 4: Mining Association Rules among Time-series Data
A line is an ordered, continuous list of the form {t1, t2, …, tm} describing a property of the subject over time. Step 1: find the frequent lines and points in each line-set (Apriori-like algorithm). Step 2: use combinations of those frequent sets to find the associations among them (inter-transactional association rules).

27 Topic 4: Mining Association Rules among Time-series Data (figure)

28 Time-series Data Approximation
For the algorithm's efficiency, equally partition the fluctuation rate into several classes.
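A hedged sketch of this approximation step: compute fluctuation rates between consecutive points and equally partition a fixed range into classes. The series, the number of classes, and the range bounds are assumptions for illustration:

```python
# Sketch: discretize a time series into fluctuation-rate classes.
# The series, class count, and clipping range are illustrative assumptions.
series = [100.0, 103.0, 101.5, 101.5, 108.0, 104.0]

# Fluctuation rate between consecutive points.
rates = [(b - a) / a for a, b in zip(series, series[1:])]

num_classes = 4
low, high = -0.05, 0.05            # assumed range of interest (-5% .. +5%)
width = (high - low) / num_classes

def classify(rate):
    """Equally partition [low, high) into num_classes classes; clip outliers."""
    clipped = min(max(rate, low), high - 1e-12)
    return int((clipped - low) // width)

print([round(r, 4) for r in rates])
print([classify(r) for r in rates])
```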

29 Step 1: Line Discovery (Apriori-like). Step 2: Association Rule Mining.

30 Data Mining Part Thank You!

