
Slide 1: EE3J2 Data Mining – Lecture 13: K-Means Clustering. Martin Russell

Slide 2: Objectives
To explain the need for K-means clustering
To understand the K-means clustering algorithm
To understand the relationships between:
– Clustering and statistical modelling using GMMs
– K-means clustering and E-M estimation for GMMs

Slide 3: Clustering so far
Agglomerative clustering:
– Begin by assuming that every data point is a separate centroid
– Combine the closest centroids until the desired number of clusters is reached
– See agglom.c on the course website
Divisive clustering:
– Begin by assuming that there is just one centroid/cluster
– Split clusters until the desired number of clusters is reached
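The agglomerative procedure described above can be sketched in C. This is an illustrative simplification, not the course's agglom.c: in particular it merges the two closest centroids into their unweighted midpoint, whereas a mean weighted by cluster size is also common.

```c
#include <stddef.h>

/* Agglomerative clustering sketch: start with every point as a centroid,
 * then repeatedly merge the two closest centroids (replacing them by
 * their midpoint) until only numCent remain. Returns the final count. */
size_t agglomerate(double c[][2], size_t n, size_t numCent) {
    while (n > numCent) {
        size_t a = 0, b = 1;
        double best = -1.0;
        for (size_t i = 0; i < n; i++)          /* find the closest pair */
            for (size_t j = i + 1; j < n; j++) {
                double dx = c[i][0] - c[j][0], dy = c[i][1] - c[j][1];
                double d = dx * dx + dy * dy;
                if (best < 0.0 || d < best) { best = d; a = i; b = j; }
            }
        c[a][0] = (c[a][0] + c[b][0]) / 2.0;    /* merge b into a */
        c[a][1] = (c[a][1] + c[b][1]) / 2.0;
        c[b][0] = c[n - 1][0];                  /* delete slot b by moving */
        c[b][1] = c[n - 1][1];                  /* the last centroid into it */
        n--;
    }
    return n;
}
```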

Slide 4: Optimality
Neither agglomerative clustering nor divisive clustering is optimal. In other words, the set of centroids they produce is not guaranteed to minimise distortion.

Slide 5: Optimality (continued)
For example, in agglomerative clustering:
– A dense cluster of data points will be combined into a single centroid
– But to minimise distortion, several centroids are needed in a region with many data points
– A single 'outlier' may get its own cluster
Agglomerative clustering provides a useful starting point, but further refinement is needed.

Slide 6: 12 centroids [figure]

Slide 7: K-means clustering
Suppose that we have decided how many centroids we need; denote this number by K. Suppose also that we have an initial estimate of suitable positions for our K centroids. K-means clustering is an iterative procedure for moving these centroids to reduce distortion.

Slide 8: K-means clustering – notation
Suppose there are T data points, denoted by y_1, y_2, …, y_T.
Suppose that the initial K centroids are denoted by C^0 = {c^0_1, c^0_2, …, c^0_K}.
One iteration of K-means clustering will produce a new set of centroids C^1 = {c^1_1, c^1_2, …, c^1_K} such that Dist(C^1) ≤ Dist(C^0).

Slide 9: K-means clustering (1)
For each data point y_t, let c^0_{i(t)} be the closest centroid. In other words: d(y_t, c^0_{i(t)}) = min_m d(y_t, c^0_m).
Now, for each centroid c^0_k, define Y^0_k = { y_t : i(t) = k }.
In other words, Y^0_k is the set of data points which are closer to c^0_k than to any other centroid.

Slide 10: K-means clustering (2)
Now define a new k-th centroid c^1_k by
c^1_k = (1 / |Y^0_k|) Σ_{y ∈ Y^0_k} y,
where |Y^0_k| is the number of samples in Y^0_k. In other words, c^1_k is the average value of the samples which were closest to c^0_k.
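Slides 9 and 10 together make up one iteration: partition the data by nearest centroid, then replace each centroid by the mean of its partition. A compact C sketch of that single iteration (illustrative names, not the course's k-means.c):

```c
#include <math.h>
#include <stddef.h>

/* One k-means iteration on 2-D data: assign every point to its closest
 * centroid, then move each centroid to the mean of the points assigned
 * to it. The centroid array c is updated in place. */
void kmeans_iteration(double y[][2], size_t T, double c[][2], size_t K) {
    double sum[K][2];   /* per-cluster coordinate sums */
    size_t count[K];    /* per-cluster point counts */
    for (size_t k = 0; k < K; k++) {
        sum[k][0] = sum[k][1] = 0.0;
        count[k] = 0;
    }
    for (size_t t = 0; t < T; t++) {
        size_t best = 0;
        double bestd = HUGE_VAL;
        for (size_t k = 0; k < K; k++) {
            double dx = y[t][0] - c[k][0], dy = y[t][1] - c[k][1];
            double d = dx * dx + dy * dy;  /* squared distance: same argmin */
            if (d < bestd) { bestd = d; best = k; }
        }
        sum[best][0] += y[t][0];
        sum[best][1] += y[t][1];
        count[best]++;
    }
    for (size_t k = 0; k < K; k++)
        if (count[k] > 0) {                /* leave an empty cluster in place */
            c[k][0] = sum[k][0] / count[k];
            c[k][1] = sum[k][1] / count[k];
        }
}
```

Running this once on the slide-19 data reproduces the Centroids(1) values from the worked example: (1.1333, 1.8667), (2.0, 1.6), (2.7, 4.075).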

Slide 11: K-means clustering (3)
Now repeat the same process starting with the new centroids C^1 = {c^1_1, …, c^1_K} to create a new set of centroids C^2, and so on until the process converges.
Each new set of centroids has distortion no greater than the previous set.

Slide 12: Initialisation
An outstanding problem is how to choose the initial cluster set C^0. Possibilities include:
– Choose C^0 randomly
– Choose C^0 using agglomerative clustering
– Choose C^0 using divisive clustering
The choice of C^0 can be important:
– K-means clustering is a "hill-climbing" algorithm
– It only finds a local minimum of the distortion function
– Which local minimum it finds is determined by C^0
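One common way to realise the "choose C^0 randomly" option is to pick K distinct data points as the initial centroids (sometimes called Forgy initialisation). A hedged C sketch, not part of the course programs; the caller seeds the generator with srand() first:

```c
#include <stdlib.h>
#include <stddef.h>

/* Pick K distinct data points, uniformly at random, as the initial
 * centroids. Assumes K <= T. Returns 0 on success, -1 on failure. */
int init_random(double y[][2], size_t T, double c[][2], size_t K) {
    size_t *idx = malloc(T * sizeof *idx);
    if (idx == NULL) return -1;
    for (size_t t = 0; t < T; t++) idx[t] = t;
    /* Partial Fisher-Yates shuffle: the first K slots end up holding a
     * random K-subset of the indices, with no repeats. */
    for (size_t k = 0; k < K; k++) {
        size_t j = k + (size_t)rand() % (T - k);
        size_t tmp = idx[k]; idx[k] = idx[j]; idx[j] = tmp;
        c[k][0] = y[idx[k]][0];
        c[k][1] = y[idx[k]][1];
    }
    free(idx);
    return 0;
}
```

Because k-means only reaches a local minimum, a practical refinement is to run it from several such random starts and keep the cluster set with the smallest distortion.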

Slide 13: Local optimality
[Figure: distortion plotted against the cluster set, starting at Dist(C^0) and descending through C^1, …, C^n to a local minimum, with the global minimum elsewhere in the space.]
N.B. the cluster-set space is drawn as one-dimensional for simplicity; in reality it is a very high-dimensional space.

Slide 14: Example [figure]

Slide 15: Example – distortion [figure]

Slide 16: C programs on the website
agglom.c – agglomerative clustering.
Usage: agglom dataFile centFile numCent
Runs agglomerative clustering on the data in dataFile until the number of centroids is numCent, then writes the centroid (x, y) coordinates to centFile.

Slide 17: C programs on the website (continued)
k-means.c – K-means clustering.
Usage: k-means dataFile centFile opFile
Runs 10 iterations of k-means clustering on the data in dataFile, starting with the centroids in centFile. After each iteration it writes the distortion and the new centroids to opFile.
sa1.txt – the data file used in the demonstrations.

Slide 18: Relationship with GMMs
The set of centroids in clustering corresponds to the set of means in a GMM.
Measuring distances using the Euclidean distance in clustering corresponds to assuming that the GMM variances are all equal to 1.
K-means clustering corresponds to the mean-estimation part of the E-M algorithm, but:
– In k-means, each sample is allocated 100% to the closest centroid
– In E-M, samples are shared between GMM components according to their posterior probabilities

Slide 19: K-means clustering – example
Data points (x, y):
(1.2, 1.7)  (1.0, 1.1)  (1.5, 2.5)  (2.0, 2.1)  (1.3, 3.1)
(1.8, 1.9)  (0.9, 1.5)  (0.2, 1.2)  (2.0, 1.1)  (2.5, 3.7)
(2.4, 4.2)  (3.1, 3.9)  (2.8, 4.5)  (1.6, 2.1)  (0.7, 1.7)
Initial centroids C^0:
c_1 = (0.75, 2.5),  c_2 = (3.0, 1.5),  c_3 = (1.75, 4.0)

Slide 20: First iteration of k-means
Data point   d(y,c_1)  d(y,c_2)  d(y,c_3)  Closest
(1.2, 1.7)     0.92      1.81      2.36      c_1
(1.0, 1.1)     1.42      2.04      3.00      c_1
(1.5, 2.5)     0.75      1.80      1.52      c_1
(2.0, 2.1)     1.31      1.17      1.92      c_2
(1.3, 3.1)     0.81      2.33      1.01      c_1
(1.8, 1.9)     1.21      1.26      2.10      c_1
(0.9, 1.5)     1.01      2.10      2.64      c_1
(0.2, 1.2)     1.41      2.82      3.20      c_1
(2.0, 1.1)     1.88      1.08      2.91      c_2
(2.5, 3.7)     2.12      2.26      0.81      c_3
(2.4, 4.2)     2.37      2.77      0.68      c_3
(3.1, 3.9)     2.74      2.40      1.35      c_3
(2.8, 4.5)     2.86      3.01      1.16      c_3
(1.6, 2.1)     0.94      1.52      1.91      c_1
(0.7, 1.7)     0.80      2.31      2.53      c_1
Points per centroid: c_1: 9, c_2: 2, c_3: 4
Distortion(0) = 15.52

Slide 21: First iteration of k-means (continued)
The distance table from slide 20 is repeated, with each point's (x, y) coordinates entered in the column of its closest centroid. Column totals:
c_1: 9 points, coordinate sum = (10.2, 16.8)
c_2: 2 points, coordinate sum = (4.0, 3.2)
c_3: 4 points, coordinate sum = (10.8, 16.3)
Distortion(0) = 15.52

Slide 22: First iteration of k-means (continued)
Dividing each coordinate sum by its cluster's point count gives the new centroids C^1:
c_1 = (10.2/9, 16.8/9) = (1.1333, 1.8667)
c_2 = (4.0/2, 3.2/2) = (2.0, 1.6)
c_3 = (10.8/4, 16.3/4) = (2.7, 4.075)

Slide 23: Second iteration of k-means
Data point   d(y,c_1)  d(y,c_2)  d(y,c_3)  Closest
(1.2, 1.7)     0.18      0.81      2.81      c_1
(1.0, 1.1)     0.78      1.12      3.43      c_1
(1.5, 2.5)     0.73      1.03      1.98      c_1
(2.0, 2.1)     0.90      0.50      2.10      c_2
(1.3, 3.1)     1.24      1.66      1.71      c_1
(1.8, 1.9)     0.67      0.36      2.35      c_2
(0.9, 1.5)     0.43      1.10      3.14      c_1
(0.2, 1.2)     1.15      1.84      3.81      c_1
(2.0, 1.1)     1.16      0.50      3.06      c_2
(2.5, 3.7)     2.29      2.16      0.43      c_3
(2.4, 4.2)     2.65      2.63      0.33      c_3
(3.1, 3.9)     2.83      2.55      0.44      c_3
(2.8, 4.5)     3.12      3.01      0.44      c_3
(1.6, 2.1)     0.52      0.64      2.26      c_1
(0.7, 1.7)     0.46      1.30      3.10      c_1
Points per centroid: c_1: 8, c_2: 3, c_3: 4
(Centroids C^1: c_1 = (1.1333, 1.8667), c_2 = (2.0, 1.6), c_3 = (2.7, 4.075))

Slide 24: Second iteration of k-means (continued)
Averaging the points assigned to each centroid gives the new centroids C^2:
c_1 = (1.05, 1.8625)
c_2 = (1.9333, 1.7)
c_3 = (2.7, 4.075)
Note that c_3 is unchanged: it was already the mean of the four points assigned to it.

Slide 25: [figure]

Slide 26: Summary
– The need for k-means clustering
– The k-means clustering algorithm
– An example of k-means clustering
