
**AMCS/CS229: Machine Learning**

Clustering 2

Xiangliang Zhang, King Abdullah University of Science and Technology

**Cluster Analysis**

- Partitioning Methods + EM algorithm
- Hierarchical Methods
- Density-Based Methods
- Clustering quality evaluation
- How to decide the number of clusters?
- Summary

**The Quality of Clustering**

For supervised classification we have a variety of measures (accuracy, precision, recall) to evaluate how good our model is. For cluster analysis, the analogous question is how to evaluate the "goodness" of the resulting clusters. But "clusters are in the eye of the beholder"! Then why do we want to evaluate them?

- To avoid finding patterns in noise
- To compare clustering algorithms
- To compare two sets of clusters
- To compare two clusters

**Measures of Cluster Validity**

Numerical measures applied to judge various aspects of cluster validity fall into two types:

- External index: measures the extent to which cluster labels match externally supplied class labels. Examples: purity, Normalized Mutual Information (NMI).
- Internal index: measures the goodness of a clustering structure without respect to external information. Examples: sum of squared errors (SSE), cophenetic correlation coefficient, silhouette coefficient.

**Cluster Validity: External Index**

The class labels are externally supplied (q classes).

Purity: larger purity values indicate better clustering solutions.

- Purity of each cluster C_r of size n_r: P(C_r) = (1/n_r) · max_q n_r^q, where n_r^q is the number of points of class q assigned to cluster C_r.
- Purity of the entire clustering: the size-weighted average over the clusters, Purity = Σ_r (n_r / n) · P(C_r).
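As a concrete illustration, the purity computation can be sketched in a few lines of Python (standard library only; the function name and example data are illustrative, not from the slides):

```python
from collections import Counter

def purity(cluster_assignments, class_labels):
    """Purity = (1/n) * sum over clusters of the majority-class count."""
    by_cluster = {}
    for c, y in zip(cluster_assignments, class_labels):
        by_cluster.setdefault(c, []).append(y)
    # each cluster contributes the size of its largest class
    majority_total = sum(Counter(ys).most_common(1)[0][1]
                         for ys in by_cluster.values())
    return majority_total / len(class_labels)

# Two clusters of five points, each containing one point of the "wrong" class
clusters = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
labels   = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'a']
print(purity(clusters, labels))  # 0.8
```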


**Cluster Validity: External Index**

The class labels are externally supplied (q classes).

NMI (Normalized Mutual Information): NMI(C, L) = I(C; L) / ((H(C) + H(L)) / 2), where I is the mutual information between the cluster assignment C and the class labels L, and H is entropy. (Other normalizations, e.g. dividing by sqrt(H(C) · H(L)), are also in common use.) Larger NMI values indicate better clustering solutions.
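A minimal sketch of NMI in pure Python, assuming the arithmetic-mean normalization above (function names are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """H(X) = -sum p(x) log p(x) over the empirical distribution."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def mutual_information(xs, ys):
    """I(X;Y) = sum p(x,y) log( p(x,y) / (p(x) p(y)) )."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def nmi(clusters, classes):
    """NMI with the arithmetic-mean normalization."""
    denom = (entropy(clusters) + entropy(classes)) / 2
    return mutual_information(clusters, classes) / denom if denom else 1.0

print(nmi([0, 0, 1, 1], ['a', 'a', 'b', 'b']))  # ~1.0 for a perfect match
```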

**Internal Measures: SSE**

An internal index measures the goodness of a clustering structure without respect to external information. The sum of squared errors, SSE = Σ_r Σ_{x ∈ C_r} ||x − m_r||², where m_r is the centroid of cluster C_r, is good for comparing two clustering results (e.g., via average SSE). Plotting SSE curves with respect to various K can also be used to estimate the number of clusters (the "elbow" of the curve).
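A self-contained sketch of the SSE computation (illustrative names, standard library only), showing the sharp drop from K = 1 to the natural K = 2 that the elbow heuristic looks for:

```python
def sse(points, assignments, k):
    """Sum of squared errors: squared distance of each point to the
    centroid of its assigned cluster, summed over all points."""
    total = 0.0
    for c in range(k):
        members = [p for p, a in zip(points, assignments) if a == c]
        if not members:
            continue
        dim = len(members[0])
        centroid = [sum(p[d] for p in members) / len(members)
                    for d in range(dim)]
        total += sum(sum((p[d] - centroid[d]) ** 2 for d in range(dim))
                     for p in members)
    return total

# Two tight groups: SSE drops sharply from K=1 to K=2 (the "elbow")
pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
sse_k1 = sse(pts, [0, 0, 0, 0], 1)   # one big cluster
sse_k2 = sse(pts, [0, 0, 1, 1], 2)   # the natural grouping
```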

**Internal Measures: Cophenetic Correlation Coefficient**

A measure of how faithfully a dendrogram preserves the pairwise distances between the original data points; it can also be used to compare two hierarchical clusterings of the same data.

(The slide shows a single-link (min) dendrogram over points A–F with merge heights 0.5, 0.71, 1.00, 1.41, and 2.50.)

Compute the correlation coefficient between the original pairwise distances (Dist) and the cophenetic distances (CP), i.e., the heights at which each pair of points is first merged. Matlab function: cophenet.
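To make the definition concrete, the following pure-Python sketch builds single-link merge heights by naive agglomeration and correlates them with the original distances. The structure and names are illustrative; in practice one would use Matlab's cophenet or scipy.cluster.hierarchy.cophenet on a real linkage matrix.

```python
import math
from itertools import combinations

def cophenetic_correlation(points):
    """Single-link agglomeration, recording for every pair of points the
    height at which they first join the same cluster, then the Pearson
    correlation between original and cophenetic distances."""
    n = len(points)
    dist = {frozenset((i, j)): math.dist(points[i], points[j])
            for i, j in combinations(range(n), 2)}
    clusters = [{i} for i in range(n)]
    coph = {}
    while len(clusters) > 1:
        # find the two clusters with the smallest single-link distance
        best = None
        for a, b in combinations(range(len(clusters)), 2):
            d = min(dist[frozenset((i, j))]
                    for i in clusters[a] for j in clusters[b])
            if best is None or d < best[0]:
                best = (d, a, b)
        h, a, b = best
        for i in clusters[a]:
            for j in clusters[b]:
                coph[frozenset((i, j))] = h   # pair first merges at height h
        clusters[a] |= clusters[b]
        del clusters[b]
    # Pearson correlation between the two distance vectors
    keys = list(dist)
    xs = [dist[k] for k in keys]
    ys = [coph[k] for k in keys]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value close to 1 means the dendrogram heights reproduce the original distances faithfully.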


**Internal Measures: Cohesion and Separation**

- Cluster cohesion measures how closely related the objects in a cluster are, e.g., the within-cluster SSE or the sum of the weights of all links within the cluster.
- Cluster separation measures how distinct or well-separated a cluster is from other clusters, e.g., the sum of the weights of links between nodes in the cluster and nodes outside the cluster.
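For centroid-based clusterings, the two notions can be made concrete as within-cluster (WSS) and between-cluster (BSS) sums of squares; a sketch with illustrative names, using the standard identity TSS = WSS + BSS as a sanity check:

```python
def centroid(pts):
    dim = len(pts[0])
    return [sum(p[d] for p in pts) / len(pts) for d in range(dim)]

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def cohesion_separation(points, assignments, k):
    """Cohesion as within-cluster SSE (WSS); separation as the
    between-cluster sum of squares (BSS): cluster size times squared
    distance of the cluster centroid to the overall centroid."""
    overall = centroid(points)
    wss = bss = 0.0
    for c in range(k):
        members = [p for p, a in zip(points, assignments) if a == c]
        if not members:
            continue
        m = centroid(members)
        wss += sum(sq_dist(p, m) for p in members)
        bss += len(members) * sq_dist(m, overall)
    return wss, bss

pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
wss, bss = cohesion_separation(pts, [0, 0, 1, 1], 2)
```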

**Internal Measures: Silhouette Coefficient**

The silhouette coefficient combines the ideas of cohesion and separation. For an individual point i:

- Calculate a = the average distance of i to the points in its own cluster.
- Calculate b = the minimum, over the other clusters, of the average distance of i to the points in that cluster.
- The silhouette coefficient of the point is then s(i) = (b − a) / max(a, b).

It lies in [−1, 1] but is typically between 0 and 1; the closer to 1, the better. The average silhouette width can be calculated for a single cluster or for an entire clustering. Matlab function: silhouette.
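A minimal pure-Python sketch of per-point silhouette scores (illustrative names and example data; in practice Matlab's silhouette or scikit-learn's silhouette_score do this):

```python
import math

def silhouette(points, assignments):
    """s(i) = (b - a) / max(a, b): a is the mean distance of i to its own
    cluster, b the smallest mean distance of i to another cluster."""
    clusters = {}
    for idx, a in enumerate(assignments):
        clusters.setdefault(a, []).append(idx)
    scores = []
    for i, p in enumerate(points):
        own = assignments[i]
        same = [j for j in clusters[own] if j != i]
        if not same:                 # singleton cluster: define s(i) = 0
            scores.append(0.0)
            continue
        a = sum(math.dist(p, points[j]) for j in same) / len(same)
        b = min(sum(math.dist(p, points[j]) for j in mem) / len(mem)
                for c, mem in clusters.items() if c != own)
        scores.append((b - a) / max(a, b))
    return scores

# Two tight, well-separated clusters give a mean silhouette close to 1
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = [0, 0, 0, 1, 1, 1]
avg = sum(silhouette(pts, labels)) / len(pts)
```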

**Determining the Number of Clusters by the Silhouette Coefficient**

Compare different clusterings by their average silhouette values:

- K = 3: mean(silh) = 0.526
- K = 4: mean(silh) = 0.640
- K = 5: mean(silh) = 0.527

Here K = 4 maximizes the average silhouette and is therefore chosen.

**Determine the Number of Clusters**

- Select the number K of clusters as the one maximizing the average silhouette value of all points.
- Optimize an objective criterion, e.g., the gap statistic on the decrease of SSE with respect to K.
- Model-based methods: optimize a global criterion (e.g., the maximum likelihood of the data).
- Use clustering methods that do not need K to be set, e.g., DBSCAN.
- Use prior knowledge.
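Putting the pieces together, a hedged sketch of selecting K by the average silhouette: a tiny Lloyd-style k-means with deterministic farthest-first seeding is assumed here purely for illustration (none of these names or design choices come from the lecture).

```python
import math

def farthest_first_centers(points, k):
    """Deterministic seeding: start at the first point, then repeatedly
    take the point farthest from the already chosen centers."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    return centers

def kmeans(points, k, iters=50):
    """Plain Lloyd iterations: assign to nearest center, recompute means."""
    centers = farthest_first_centers(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return assign

def mean_silhouette(points, assign):
    """Average of s(i) = (b - a) / max(a, b) over all points."""
    clusters = {}
    for i, a in enumerate(assign):
        clusters.setdefault(a, []).append(i)
    if len(clusters) < 2:
        return -1.0
    total = 0.0
    for i, p in enumerate(points):
        same = [j for j in clusters[assign[i]] if j != i]
        if not same:
            continue   # singleton: contributes 0
        a = sum(math.dist(p, points[j]) for j in same) / len(same)
        b = min(sum(math.dist(p, points[j]) for j in mem) / len(mem)
                for c, mem in clusters.items() if c != assign[i])
        total += (b - a) / max(a, b)
    return total / len(points)

# Three well-separated blobs of three points each: the silhouette
# criterion should select K = 3
blobs = [(0, 0), (10, 0), (5, 9)]
pts = [(cx + dx, cy + dy) for cx, cy in blobs
       for dx, dy in [(0, 0), (1, 0), (0, 1)]]
best_k = max(range(2, 6), key=lambda k: mean_silhouette(pts, kmeans(pts, k)))
```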


**Clustering vs. Classification**

(Comparison table shown on the slide.)

**Problems and Challenges**

Considerable progress has been made in scalable clustering methods:

- Partitioning: k-means, k-medoids, CLARANS
- Hierarchical: BIRCH, ROCK, CHAMELEON
- Density-based: DBSCAN, OPTICS, DenClue
- Grid-based: STING, WaveCluster, CLIQUE
- Model-based: EM, SOM
- Spectral clustering
- Affinity propagation
- Frequent pattern-based: bi-clustering, pCluster

Current clustering techniques still do not address all of these requirements adequately; clustering remains an active area of research.

**Open Issues in Clustering**

- Clustering quality evaluation
- How to decide the number of clusters?

**What You Should Know**

- What is clustering?
- How does k-means work?
- What is the difference between k-means and k-medoids?
- What is the EM algorithm, and how does it work?
- What is the relationship between k-means and EM?
- How is inter-cluster similarity defined in hierarchical clustering? What kinds of options do you have?
- How does DBSCAN work?
- What are the advantages and disadvantages of DBSCAN?
- How do you evaluate clustering results?
- How do you usually decide the number of clusters?
- What are the main differences between clustering and classification?
