
Slide 1: EE3J2 Data Mining Lecture 18 - K-means and Agglomerative Algorithms

Slide 2: Today
- Unsupervised learning
- Clustering
- K-means

Slide 3: Distortion
- The distortion for the centroid set C = {c_1, ..., c_M} is defined by:

  Dist(C) = sum over all data points y_i of d(y_i, c(i)), where c(i) is the centroid closest to y_i.

- In other words, the distortion is the sum of distances between each data point and its nearest centroid.
- The task of clustering is to find a centroid set C such that the distortion Dist(C) is minimised.
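A minimal sketch of this definition, assuming NumPy; the data points and centroids below are made-up illustrative values, not from the lecture.

```python
import numpy as np

def distortion(data, centroids):
    """Sum, over all data points, of the distance to the nearest centroid."""
    # Pairwise Euclidean distances: rows = data points, columns = centroids.
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    # Each point contributes its distance to its closest centroid.
    return dists.min(axis=1).sum()

# Hypothetical 2-D data with two centroids.
data = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9]])
centroids = np.array([[1.0, 1.0], [5.0, 5.0]])
print(distortion(data, centroids))   # small value: every point sits near a centroid
```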

Slide 4: The K-Means Clustering Method
Given k, the k-means algorithm is implemented in 4 steps:
1. Initialisation: define the number of clusters (k) and designate a cluster centre (a vector of the same dimensionality as the data) for each cluster.
2. Assign each data point to the closest cluster centre (centroid). That data point is now a member of that cluster.
3. Calculate the new cluster centre for each cluster (the mean of all the members of that cluster).
4. Calculate the within-cluster sum-of-squares. If this value has remained essentially unchanged over a certain number of iterations, exit the algorithm; otherwise, go back to Step 2.
Remember: you converge when you have found the minimum overall distance between the centroids and their objects.
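A from-scratch sketch of these four steps, assuming NumPy; the tolerance, iteration cap and seed are illustrative choices, not part of the lecture.

```python
import numpy as np

def k_means(data, k, max_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: initialisation - pick k of the data points as the initial cluster centres.
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    prev_sse = np.inf
    for _ in range(max_iter):
        # Step 2: assign each point to its closest centroid.
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each cluster centre as the mean of its members
        # (keep the old centre if a cluster happens to be empty).
        centroids = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
        # Step 4: within-cluster sum-of-squares; stop once it no longer changes much.
        sse = ((data - centroids[labels]) ** 2).sum()
        if abs(prev_sse - sse) < tol:
            break
        prev_sse = sse
    return centroids, labels
```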

Slide 5: K-Means Example (K=2)
[Figure: pick seeds; reassign clusters; compute centroids; reassign clusters; compute centroids; reassign clusters; converged! From Mooney]

Slide 6: So... Basically
- Start with k randomly chosen data points (objects) as the initial centroids c_k^(0).
- Find the set of data points that are closest to c_k^(0) (call it Y_k^(0)).
- Compute the average of these points to obtain c_k^(1), the new centroid.
- Now repeat this process: find the objects closest to c_k^(1), compute their average to get c_k^(2), the next centroid, and so on...
- ...until convergence.

Slide 7: Comments on the K-Means Method
- Strengths:
  - Relatively efficient: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations. Normally k, t << n.
  - Often terminates at a local optimum. The global optimum may be found using techniques such as deterministic annealing and genetic algorithms.
- Weaknesses:
  - Applicable only when the mean is defined; what about categorical data?
  - Need to specify k, the number of clusters, in advance.
  - Unable to handle noisy data and outliers.
  - Not suitable for discovering clusters with non-convex shapes.
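In practice a library implementation is often used; a short example with scikit-learn's KMeans (assuming scikit-learn is installed; the data is made up). Note that k must still be chosen up front, one of the weaknesses listed above, and n_init restarts are a common way to reduce the risk of a poor local optimum.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D data.
X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9], [9.0, 0.5]])

# k must be specified in advance; n_init restarts help avoid poor local optima.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)           # cluster membership of each point
print(km.cluster_centers_)  # the final centroids
print(km.inertia_)          # within-cluster sum-of-squares
```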

Slide 8: Hierarchical Clustering
- Grouping data objects into a tree of clusters.
- Agglomerative clustering: begin by assuming that every data point is a separate centroid; combine the closest centroids until the desired number of clusters is reached.
- Divisive clustering: begin by assuming that there is just one centroid/cluster; split clusters until the desired number of clusters is reached.
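A brief sketch of agglomerative clustering with SciPy (assumed available), using the exam-mark data from the example that follows: linkage builds the full merge tree and fcluster cuts it at a chosen number of clusters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Exam marks for Mike, Tom, Bill, T Ren, Ali (see the table on the next slide).
X = np.array([[9, 3, 7], [10, 2, 9], [1, 9, 4], [6, 5, 5], [1, 10, 3]], dtype=float)

# 'average' linkage: the distance between two clusters is the mean pairwise distance.
Z = linkage(X, method='average', metric='euclidean')
labels = fcluster(Z, t=2, criterion='maxclust')   # cut the tree into 2 clusters
print(labels)
```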

Slide 9: Agglomerative Clustering - Example

Student   Exam1   Exam2   Exam3
Mike        9       3       7
Tom        10       2       9
Bill        1       9       4
T Ren       6       5       5
Ali         1      10       3

Slide 10: Distances between objects
- Using the Euclidean distance measure, what is the distance between Mike and Tom?
  Mike: 9, 3, 7
  Tom: 10, 2, 9
  d(Mike, Tom) = sqrt((9-10)^2 + (3-2)^2 + (7-9)^2) = sqrt(6) ≈ 2.45 (shown rounded to 2.5 in the distance matrix).

Slide 11: Distance Matrix

         Mike    Tom     Bill    T Ren   Ali
Mike     0       2.5     10.44   4.12    11.75
Tom      -       0       12.5    6.4     13.93
Bill     -       -       0       6.48    1.41
T Ren    -       -       -       0       7.35
Ali      -       -       -       -       0
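A sketch that builds a Euclidean distance matrix like this one directly from the exam-mark table (NumPy assumed; one or two entries may differ slightly from the figures on the slide).

```python
import numpy as np

names = ["Mike", "Tom", "Bill", "T Ren", "Ali"]
marks = np.array([[9, 3, 7], [10, 2, 9], [1, 9, 4], [6, 5, 5], [1, 10, 3]], dtype=float)

# Pairwise Euclidean distances between all students.
dist = np.linalg.norm(marks[:, None, :] - marks[None, :, :], axis=2)
print(np.round(dist, 2))   # e.g. Mike-Tom ~ 2.45, Bill-Ali ~ 1.41
```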

Slide 12: The Algorithm - Step 1
- Identify the entities which are most similar; this can easily be read off the distance table just constructed.
- In this example, Bill and Ali are most similar, with a distance of 1.41. They are therefore the most 'related' pair.
[Dendrogram so far: Bill and Ali joined]
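Step 1 amounts to scanning the distance matrix for its smallest off-diagonal entry; a minimal sketch, assuming NumPy and a full symmetric matrix rather than the upper-triangular layout shown on the slides.

```python
import numpy as np

def closest_pair(dist, names):
    """Return the pair of entities with the smallest off-diagonal distance."""
    d = dist.copy().astype(float)
    np.fill_diagonal(d, np.inf)                      # ignore self-distances
    i, j = np.unravel_index(np.argmin(d), d.shape)   # position of the smallest entry
    return names[i], names[j], d[i, j]
```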

Slide 13: The Algorithm - Step 2
- The two entities that are most similar can now be merged so that they represent a single cluster (or new entity).
- So Bill and Ali can now be considered a single entity. How do we compare this entity with the others? We use the average linkage between the two.
- The new average vector is [1, 9.5, 3.5]; see the first table and average the marks for Bill and Ali.
- We now need to redraw the distance table, including the merged entity, with new distance calculations.

Slide 14: The Algorithm - Step 3

              Mike    Tom     T Ren   {Bill & Ali}
Mike          -       2.5     4.12    10.9
Tom                   -       6.4     9.1
T Ren                         -       6.9
{Bill & Ali}                          -
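The redrawn entries involving the merged pair can be reproduced by representing {Bill & Ali} by their average vector [1, 9.5, 3.5] and measuring distances to it (NumPy assumed). This matches the Mike and T Ren entries in the table above; other linkage choices would give somewhat different values.

```python
import numpy as np

merged = np.array([1.0, 9.5, 3.5])                 # average of Bill and Ali
others = {"Mike":  np.array([9.0, 3.0, 7.0]),
          "Tom":   np.array([10.0, 2.0, 9.0]),
          "T Ren": np.array([6.0, 5.0, 5.0])}

for name, v in others.items():
    print(name, round(np.linalg.norm(v - merged), 2))
# Mike ~ 10.88 and T Ren ~ 6.89, matching the 10.9 and 6.9 shown in the table above.
```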

Slide 15: Next closest students
- Mike and Tom, with 2.5!
- So, now we have 2 clusters!
[Dendrogram so far: Bill and Ali joined; Mike and Tom joined]

Slide 16: The distance matrix now

              {Mike & Tom}   T Ren   {Bill & Ali}
{Mike & Tom}  -              3.7     9.2
T Ren                        -       6.9
{Bill & Ali}                         -

Now, T Ren is closest to Bill and Ali, so T Ren joins them in the cluster.

Slide 17: The final dendrogram
[Dendrogram: Bill, Ali, Mike, Tom, T Ren; many 'sub-clusters' within one cluster]

Slide 18: Conclusions
- K-Means algorithm: memorise the equations and the algorithm.
- Hierarchical clustering: agglomerative clustering.

