
1 DATA MINING - CLUSTERING

2 Clustering
- Clustering: unsupervised classification
- Clustering: the process of grouping physical or abstract objects into classes of similar objects
- Clustering helps construct meaningful partitionings of large sets of objects
- Data clustering is used in statistics, machine learning, spatial databases, and data mining

3 CLARANS Algorithm
- CLARANS (Clustering Large Applications based on RANdomized Search), presented by Ng and Han
- CLARANS is based on randomized search and two statistical algorithms: PAM and CLARA
- Method of the algorithm: search for a local optimum
- Example of algorithm usage (a sketch follows below)
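The slide's usage example did not survive the transcript. The following is a minimal Python sketch of the randomized local search only, under the usual description of CLARANS: points are coordinate tuples, cost is total Euclidean distance to the nearest medoid, and numlocal / maxneighbor are the two standard parameters.

    import math
    import random

    def cost(points, medoids):
        # Total distance from each point to its nearest medoid.
        return sum(min(math.dist(p, m) for m in medoids) for p in points)

    def clarans(points, k, numlocal=2, maxneighbor=50):
        best, best_cost = None, float("inf")
        for _ in range(numlocal):                    # restart from several random nodes
            current = random.sample(points, k)       # a "node" = a set of k medoids
            current_cost = cost(points, current)
            tries = 0
            while tries < maxneighbor:               # examine random neighbours
                i = random.randrange(k)
                candidate = random.choice([p for p in points if p not in current])
                neighbor = current[:i] + [candidate] + current[i + 1:]
                neighbor_cost = cost(points, neighbor)
                if neighbor_cost < current_cost:     # move to the better neighbour
                    current, current_cost = neighbor, neighbor_cost
                    tries = 0
                else:
                    tries += 1
            if current_cost < best_cost:             # keep the best local optimum found
                best, best_cost = current, current_cost
        return best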

4 Focusing Methods
- FM: based on the CLARANS algorithm and an efficient spatial access method such as the R*-tree
- The focusing on representative objects technique
- The focus on relevant clusters technique
- The focus on a cluster technique
- Examples of usage

5 Pattern-Based Similarity Search
- Searching for similar patterns in a temporal or spatial-temporal database
- Two types of queries encountered in data mining operations:
  - object-relative similarity query
  - all-pair similarity query
- Various approaches, depending on:
  - the similarity measure chosen
  - the type of comparison chosen
  - the subsequence parameters chosen

6 Similarity Measures (1)
1st measure: the Euclidean distance between two sequences, where
- {x_i} is the target sequence of length n
- {y_i} is a sequence of length N in the database
- {z_i^J} is the J-th subsequence of length n of {y_i}
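The distance formula itself was an image on the original slide and did not survive the transcript; a standard Euclidean form consistent with the definitions above is

    D\left(\{x_i\}, \{z_i^J\}\right) = \sqrt{\sum_{i=1}^{n} \left(x_i - z_i^J\right)^2}

i.e. the target sequence is compared against each length-n subsequence of a database sequence.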

7 Similarity Measures (2)
2nd measure: the linear correlation between two sequences
3rd measure: the correlation between two sequences computed with Discrete Fourier Transforms
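These formulas were likewise images on the original slide; standard forms of the two measures (with \bar{x}, \bar{y} the sequence means and \mathcal{F} the DFT) are

    c_{xy} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}    (linear correlation)

    (x \star y)_k = \mathcal{F}^{-1}\!\left[\mathcal{F}\{x\} \cdot \overline{\mathcal{F}\{y\}}\right]_k    (correlation computed via DFTs)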

8 Alternative Approaches
- Matching all of the data points of a sequence simultaneously
- Mapping each sequence into a small set of multidimensional rectangles in the feature space, using:
  - the Fourier transform
  - SVD (Singular Value Decomposition)
  - the Karhunen-Loeve transform
- Hierarchy Scan: a new approach
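A small illustration of the feature-space mapping idea: keep only the first few DFT coefficients of each sequence and compare those low-dimensional points instead of the full sequences. The helper below is hypothetical, not a specific published implementation; it assumes NumPy is available.

    import numpy as np

    def fourier_features(sequence, n_coeffs=4):
        # First n_coeffs complex DFT coefficients, flattened into real features.
        coeffs = np.fft.fft(np.asarray(sequence, dtype=float))[:n_coeffs]
        return np.concatenate([coeffs.real, coeffs.imag])

    # Sequences similar in the time domain stay close in this truncated
    # frequency-domain space, so an index over these features can prune the search.
    a = fourier_features([1, 2, 3, 4, 5, 4, 3, 2])
    b = fourier_features([1, 2, 3, 5, 5, 4, 3, 2])
    print(np.linalg.norm(a - b))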

9 Mining Path Traversal Patterns
- Solution of the problem of mining traversal patterns:
  - first step: convert the original sequence of log data into a set of traversal subsequences (maximal forward references)
  - second step: determine the frequent traversal patterns, termed large reference sequences
- Problems with finding large reference sequences

10 Mining Path Traversal Patterns - Example
Traversal path for a user: {A,B,C,D,C,B,E,G,H,G,W,A,O,U,O,V}
(The slide showed a tree diagram of this traversal over the nodes A, B, C, D, E, G, H, O, U, V, W.)
The set of maximal forward references for this user: {ABCD, ABEGH, ABEGW, AOU, AOV}
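A minimal Python sketch of the first step described above: a backward move (revisiting a page already on the current forward path) truncates the path and emits it as a maximal forward reference.

    def maximal_forward_references(path):
        refs, forward, emitting = [], [], True
        for page in path:
            if page in forward:                     # backward reference
                if emitting:
                    refs.append("".join(forward))
                forward = forward[:forward.index(page) + 1]
                emitting = False
            else:                                   # forward reference
                forward.append(page)
                emitting = True
        if emitting and forward:
            refs.append("".join(forward))
        return refs

    print(maximal_forward_references(list("ABCDCBEGHGWAOUOV")))
    # ['ABCD', 'ABEGH', 'ABEGW', 'AOU', 'AOV']  -- the set shown on the slide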

11 Clustering Features and CF-trees
Clustering Feature: a triplet summarizing the information about a subcluster of points, CF = (N, LS, SS), where N is the number of points, LS their linear sum, and SS their square sum.
CF-tree: a balanced tree with two parameters:
- branching factor B: the maximum number of children
- threshold T: the maximum diameter of the subclusters stored at the leaf nodes
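A minimal sketch of such a Clustering Feature in Python (assuming NumPy). The key property is that CFs are additive, which is what lets interior nodes of the CF-tree store the sums of their children's CFs.

    import numpy as np

    class CF:
        def __init__(self, point):
            p = np.asarray(point, dtype=float)
            self.n, self.ls, self.ss = 1, p.copy(), float(p @ p)   # (N, LS, SS)

        def add(self, other):
            # Merging two subclusters is just component-wise addition.
            self.n += other.n
            self.ls += other.ls
            self.ss += other.ss

        def centroid(self):
            return self.ls / self.n

        def radius(self):
            # Average distance of the member points from the centroid,
            # computable from the triplet alone.
            c = self.centroid()
            return float(np.sqrt(max(self.ss / self.n - c @ c, 0.0)))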

12 Constructing the CF-tree
Usage of this structure in the BIRCH algorithm:
- The non-leaf nodes store the sums of their children's CFs
- The CF-tree is built dynamically as data points are inserted
- A point is inserted into the closest leaf entry (subcluster)
- If the diameter of the subcluster after insertion exceeds the threshold T, the leaf node(s) are split

13 BIRCH Algorithm
Balanced Iterative Reducing and Clustering using Hierarchies

14 BIRCH Algorithm (1)
PHASE 1: Scan all the data and build an initial in-memory CF-tree, using the given amount of memory and recycling space on disk.
(Overall flow: Data -> Phase 1: Load into memory -> Phase 2: Condense -> Phase 3: Global clustering -> Phase 4: Cluster refining)

15 BIRCH Algorithm (2)
PHASE 2 (optional): Scan the leaf entries of the initial CF-tree to rebuild a smaller CF-tree, removing more outliers and grouping crowded subclusters into larger ones.

16 BIRCH Algorithm (3)
PHASE 3: Adapt an existing clustering algorithm for a set of data points to work with the set of subclusters, each described by its CF vector.

17 BIRCH Algorithm (4)
PHASE 4 (optional): Pass over the data again to correct inaccuracies and refine the clusters further. Phase 4 entails the cost of an additional pass.
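A minimal usage sketch of these phases with scikit-learn's Birch estimator (assuming scikit-learn is installed); its threshold and branching_factor parameters correspond to T and B from the CF-tree slide, and n_clusters drives the global clustering of the leaf subclusters.

    from sklearn.cluster import Birch
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=10_000, centers=5, random_state=0)

    # Phases 1-2: build (and condense) the CF-tree;
    # Phase 3: globally cluster the leaf subclusters into n_clusters groups.
    model = Birch(threshold=0.5, branching_factor=50, n_clusters=5)
    labels = model.fit_predict(X)
    print(len(model.subcluster_centers_), "subclusters ->", 5, "clusters")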

18 CURE Algorithm
Clustering Using REpresentatives

19 CURE Algorithm (1)
(Overall flow: Data -> Draw random sample -> Partition sample -> Partially cluster partitions -> Eliminate outliers -> Cluster partial clusters -> Label data on disk)
CURE begins by drawing a random sample from the database.

20 CURE Algorithm (2)
To further speed up clustering, CURE first partitions the random sample into p partitions, each of size n/p.

21 CURE Algorithm (3)
Each partition is partially clustered until the number of clusters in it is reduced to n/(pq), for some constant q > 1.

22 CURE Algorithm (4)
Outliers do not belong to any of the clusters. In CURE, outliers are eliminated at multiple steps.

23 CURE Algorithm (5)
Cluster in a final pass to generate the final k clusters.

24 CURE Algorithm (6)
Each data point is assigned to the cluster containing the representative point closest to it.

25 CURE - cluster procedure

procedure cluster(S, k)
begin
  T := build_kd_tree(S)
  Q := build_heap(S)
  while size(Q) > k do {
    u := extract_min(Q)
    v := u.closest
    delete(Q, v)
    w := merge(u, v)
    delete_rep(T, u)
    delete_rep(T, v)    /* v's representatives are also replaced by w's */
    insert_rep(T, w)
    w.closest := x      /* x is an arbitrary cluster in Q */
    for each x ∈ Q do {
      if dist(w, x) < dist(w, w.closest)
        w.closest := x
      if x.closest is either u or v {
        if dist(x, x.closest) < dist(x, w)
          x.closest := closest_cluster(T, x, dist(x, w))
        else
          x.closest := w
        relocate(Q, x)
      }
      else if dist(x, x.closest) > dist(x, w) {
        x.closest := w
        relocate(Q, x)
      }
    }
    insert(Q, w)
  }
end

26 CURE - merge procedure

procedure merge(u, v)
begin
  w := u ∪ v
  w.mean := (|u|·u.mean + |v|·v.mean) / (|u| + |v|)
  tmpSet := ∅
  for i := 1 to c do {
    maxDist := 0
    for each point p in cluster w do {
      if i = 1
        minDist := dist(p, w.mean)
      else
        minDist := min{ dist(p, q) : q ∈ tmpSet }
      if (minDist ≥ maxDist) {
        maxDist := minDist
        maxPoint := p
      }
    }
    tmpSet := tmpSet ∪ {maxPoint}
  }
  for each point p in tmpSet do
    w.rep := w.rep ∪ { p + α·(w.mean − p) }
  return w
end
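A small Python sketch of the idea in the loop above: pick c well-scattered points of the merged cluster (farthest-first), then shrink each one toward the cluster mean by the factor α. The function name and defaults are illustrative only; NumPy is assumed.

    import numpy as np

    def representatives(points, c=4, alpha=0.3):
        pts = np.asarray(points, dtype=float)
        mean = pts.mean(axis=0)
        chosen = []
        for i in range(min(c, len(pts))):
            if i == 0:
                # farthest point from the cluster mean
                dists = np.linalg.norm(pts - mean, axis=1)
            else:
                # farthest point from the already-chosen representatives
                sel = np.asarray(chosen)
                dists = np.min(np.linalg.norm(pts[:, None, :] - sel[None, :, :], axis=2), axis=1)
            chosen.append(pts[int(np.argmax(dists))])
        # shrink each representative toward the mean by alpha
        return [p + alpha * (mean - p) for p in chosen]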

27 The Intelligent Miner of IBM
Demographic clustering provides fast and natural clustering of very large databases. It automatically determines the number of clusters to be generated. Similarities between records are determined by comparing their field values. The clusters are then defined so that Condorcet's criterion is maximised:
(sum of all record similarities of pairs in the same cluster) - (sum of all record similarities of pairs in different clusters)
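A minimal sketch of that criterion in Python; the record representation and the similarity function are placeholders supplied by the caller.

    from itertools import combinations

    def condorcet(records, labels, similarity):
        # Sum of pairwise similarities within clusters minus those across clusters.
        same = cross = 0.0
        for (r1, l1), (r2, l2) in combinations(zip(records, labels), 2):
            if l1 == l2:
                same += similarity(r1, r2)
            else:
                cross += similarity(r1, r2)
        return same - cross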

28 The Intelligent Miner - Example
Suppose that you have the database of a supermarket, including customer identification and information about the date and time of the purchases. The clustering mining function clusters this data to enable the identification of different types of shoppers. For example, it might reveal that certain customers buy many articles on Fridays and usually pay by credit card.

