Lecture 20: Cluster Validation

1 Lecture 20: Cluster Validation
CSE 881: Data Mining

2 Cluster Validity
For supervised classification we have a variety of measures to evaluate how good our model is: accuracy, precision, recall.
For cluster analysis, the analogous question is how to evaluate the “goodness” of the resulting clusters.
But “clusters are in the eye of the beholder”! Then why do we want to evaluate them?
- To avoid finding patterns in noise
- To compare clustering algorithms
- To compare two sets of clusters
- To compare two clusters

3 Clusters found in Random Data
[Figure: random points and the clusters found in them by DBSCAN, K-means, and complete link.]

4 Measures of Cluster Validity
Internal Index (Unsupervised): used to measure the goodness of a clustering structure without respect to external information. Example: Sum of Squared Error (SSE).
External Index (Supervised): used to measure the extent to which cluster labels match externally supplied class labels. Example: entropy.
Relative Index: used to compare two different clusterings or clusters. Often an external or internal index is used for this function, e.g., SSE or entropy.

5 Unsupervised Cluster Validation
- Cluster evaluation based on the proximity matrix
  - Correlation between proximity and incidence matrices
  - Visualizing the proximity matrix
- Cluster evaluation based on cohesion and separation

6 Measuring Cluster Validity Via Correlation
Two matrices:
- Proximity matrix
- "Incidence" matrix: one row and one column for each data point; an entry is 1 if the associated pair of points belongs to the same cluster, and 0 if the pair belongs to different clusters.
Compute the correlation between the proximity and incidence matrices.
- Since the matrices are symmetric, only the correlation between the n(n-1)/2 entries above the diagonal needs to be calculated.
High correlation indicates that points that belong to the same cluster are close to each other.
Not a good measure for some density-based or contiguity-based clusters.
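A minimal sketch of this computation, assuming Euclidean distance as the proximity measure and a 1-D array of cluster labels (the function name and details are illustrative, not from the slides). With a distance-based proximity matrix, a good clustering shows up as a strongly negative correlation, since same-cluster pairs have small distances:

    import numpy as np
    from scipy.spatial.distance import pdist

    def proximity_incidence_correlation(X, labels):
        """Correlation between the proximity (distance) matrix and the
        cluster incidence matrix, using only the n(n-1)/2 upper-triangle
        entries because both matrices are symmetric."""
        labels = np.asarray(labels)
        prox = pdist(X)                                  # condensed Euclidean distance matrix
        rows, cols = np.triu_indices(len(labels), k=1)   # same pair ordering as pdist
        incidence = (labels[rows] == labels[cols]).astype(float)
        return np.corrcoef(prox, incidence)[0, 1]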

7 Measuring Cluster Validity Via Correlation
Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets.
[Figure: the two data sets with their K-means clusterings and the corresponding correlation values.]

8 Using Similarity Matrix for Cluster Validation
Order the similarity matrix with respect to cluster labels and inspect visually
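A minimal sketch of this visual check, assuming Euclidean distances converted to similarities via 1 - d/d_max (the plotting details are illustrative). Well-separated clusters appear as bright blocks along the diagonal of the reordered matrix:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.spatial.distance import pdist, squareform

    def plot_sorted_similarity(X, labels):
        """Reorder points by cluster label and display the similarity matrix."""
        order = np.argsort(labels)              # group points of the same cluster together
        dist = squareform(pdist(X[order]))      # full pairwise distance matrix
        sim = 1.0 - dist / dist.max()           # crude distance-to-similarity conversion
        plt.imshow(sim, cmap="jet")
        plt.colorbar()
        plt.show()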

9 Using Similarity Matrix for Cluster Validation
Clusters in random data are not so crisp.
[Figure: reordered similarity matrix for the DBSCAN clustering of random points.]

10 Using Similarity Matrix for Cluster Validation
Clusters in random data are not so crisp.
[Figure: reordered similarity matrix for the K-means clustering of random points.]

11 Using Similarity Matrix for Cluster Validation
Clusters in random data are not so crisp.
[Figure: reordered similarity matrix for the complete link clustering of random points.]

12 Using Similarity Matrix for Cluster Validation
[Figure: DBSCAN clusters and the corresponding reordered similarity matrix.]

13 Internal Measures: SSE
Internal index: used to measure the goodness of a clustering structure without respect to external information. Example: SSE.
SSE is good for comparing two clusterings or two clusters (average SSE).
Can also be used to estimate the number of clusters.
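A minimal sketch of estimating the number of clusters from an SSE curve (the data X and the range of k values are hypothetical; K-means' inertia_ is exactly the SSE of the clustering). One looks for a "knee" in the curve where adding clusters stops reducing SSE substantially:

    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    def sse_curve(X, k_values=range(1, 11)):
        """SSE of the K-means clustering for each candidate number of clusters."""
        sse = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
               for k in k_values]
        plt.plot(list(k_values), sse, marker="o")
        plt.xlabel("number of clusters k")
        plt.ylabel("SSE")
        plt.show()
        return sse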

14 Internal Measures: SSE
[Figure: a more complicated data set and the SSE curve for the clusters found using K-means.]

15 Unsupervised Cluster Validity Measure
More generally, given K clusters:

    overall validity = \sum_{i=1}^{K} w_i \, validity(C_i)

validity(C_i): a function of cohesion, separation, or both.
w_i: a weight associated with each cluster C_i.
For SSE: w_i = 1 and validity(C_i) = \sum_{x \in C_i} dist(x, c_i)^2.

16 Internal Measures: Cohesion and Separation
Cluster cohesion: measures how closely related the objects in a cluster are.
Cluster separation: measures how distinct or well-separated a cluster is from other clusters.

17 Graph-based versus Prototype-based Views

18 Graph-based View
Cluster cohesion: measures how closely related the objects in a cluster are, based on the links (proximities) between points within the cluster.
Cluster separation: measures how distinct or well-separated a cluster is from other clusters, based on the links between points in different clusters.
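In the graph-based view, the standard measures can be written as follows (a sketch, with proximity(x, y) denoting the pairwise proximity used by the clustering):

    cohesion(C_i) = \sum_{x \in C_i} \sum_{y \in C_i} proximity(x, y)          (sum of link weights within the cluster)
    separation(C_i, C_j) = \sum_{x \in C_i} \sum_{y \in C_j} proximity(x, y)   (sum of link weights between the two clusters)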

19 Prototype-Based View
Cluster cohesion: the sum of the proximities of the points in a cluster to the cluster prototype (centroid). Equivalent to SSE if proximity is the square of the Euclidean distance.
Cluster separation: the proximity between cluster prototypes, or between a cluster prototype and an overall prototype.
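Using the notation introduced later (c_i is the centroid of cluster C_i and c is the overall mean), the standard prototype-based measures are:

    cohesion(C_i) = \sum_{x \in C_i} proximity(x, c_i)
    separation(C_i, C_j) = proximity(c_i, c_j)
    separation(C_i) = proximity(c_i, c)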

20 Unsupervised Cluster Validity Measures

21 Prototype-based vs Graph-based Cohesion
For SSE and points in Euclidean space, it can be shown that the graph-based cohesion of a cluster (the sum of pairwise squared distances between its points) is equivalent, up to a factor that depends only on the cluster size, to the SSE of the cluster.
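The identity behind this statement (a standard result; m_i is the number of points in cluster C_i and c_i its centroid):

    SSE(C_i) = \sum_{x \in C_i} ||x - c_i||^2 = \frac{1}{2 m_i} \sum_{x \in C_i} \sum_{y \in C_i} ||x - y||^2

so the prototype-based cohesion (SSE) and the graph-based cohesion (sum of pairwise squared distances) differ only by the factor 1/(2 m_i).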

22 Total Sum of Squares (TSS)
Notation:
c: overall mean of the data
c_i: centroid of cluster C_i
m_i: number of points in cluster C_i
[Figure: a data set with overall mean c and cluster centroids c_1, c_2, c_3.]
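With this notation, the standard definitions (using squared Euclidean distance, consistent with the SSE used earlier) are:

    TSS = \sum_{x} dist(x, c)^2
    SSE = \sum_{i=1}^{K} \sum_{x \in C_i} dist(x, c_i)^2
    SSB = \sum_{i=1}^{K} m_i \, dist(c_i, c)^2

and TSS = SSE + SSB, as the next two slides illustrate.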

23 Total Sum of Squares (TSS)
[Figure: a number line from 1 to 5 with the data points, the cluster centroids m1 and m2, and the overall mean marked.]
For both K = 1 cluster and K = 2 clusters, TSS = SSE + SSB (worked out below).
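A worked version of this example, assuming the data points are the one-dimensional values 1, 2, 4, 5 (overall mean c = 3), as in the standard form of this illustration:

    K = 1 cluster (single centroid at 3):
        SSE = (1-3)^2 + (2-3)^2 + (4-3)^2 + (5-3)^2 = 4 + 1 + 1 + 4 = 10
        SSB = 4 * (3-3)^2 = 0
        TSS = 10 + 0 = 10

    K = 2 clusters ({1, 2} with centroid m1 = 1.5 and {4, 5} with centroid m2 = 4.5):
        SSE = (1-1.5)^2 + (2-1.5)^2 + (4-4.5)^2 + (5-4.5)^2 = 1
        SSB = 2 * (3-1.5)^2 + 2 * (4.5-3)^2 = 4.5 + 4.5 = 9
        TSS = 1 + 9 = 10

The total is the same in both cases, illustrating TSS = SSE + SSB.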

24 Total Sum of Squares (TSS)
TSS = SSE + SSB
Given a data set, TSS is fixed.
A clustering with large SSE has small SSB, while one with small SSE has large SSB.
The goal is to minimize SSE and maximize SSB.

25 Internal Measures: Silhouette Coefficient
The silhouette coefficient combines ideas of both cohesion and separation, but for individual points, as well as for clusters and clusterings.
For an individual point i:
- Calculate a = average distance of i to the points in its own cluster.
- Calculate b = min (average distance of i to the points in another cluster), taken over all other clusters.
- The silhouette coefficient for the point is then s = 1 - a/b if a < b (or s = b/a - 1 if a >= b, which is not the usual case).
Typically between 0 and 1; the closer to 1, the better.
Can calculate the average silhouette width for a cluster or a clustering.
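A minimal sketch of this computation for a single point (the function name and details are illustrative; scikit-learn's silhouette_samples and silhouette_score compute the same quantity for all points or averaged over a clustering):

    import numpy as np
    from scipy.spatial.distance import cdist

    def silhouette_point(X, labels, i):
        """Silhouette coefficient of point i, following the a/b definition above."""
        labels = np.asarray(labels)
        d = cdist(X[i:i + 1], X)[0]                  # distances from point i to every point
        same = (labels == labels[i]) & (np.arange(len(X)) != i)
        a = d[same].mean()                           # avg distance to the rest of its cluster
        b = min(d[labels == k].mean()                # closest other cluster, on average
                for k in np.unique(labels) if k != labels[i])
        return (b - a) / max(a, b)                   # = 1 - a/b if a < b, else b/a - 1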

26 Unsupervised Evaluation of Hierarchical Clustering
[Figure: a small example distance matrix and the single-link dendrogram produced from it.]

27 Unsupervised Evaluation of Hierarchical Clustering
CPCC (CoPhenetic Correlation Coefficient): the correlation between the original distance matrix and the cophenetic distance matrix.
The cophenetic distance between two points is the proximity at which an agglomerative hierarchical clustering technique first puts them in the same cluster.
[Figure/table: the cophenetic distance matrix for single link, alongside the single-link dendrogram.]
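A minimal sketch of computing the CPCC with SciPy (the data X and the choice of linkage method are placeholders):

    from scipy.cluster.hierarchy import linkage, cophenet
    from scipy.spatial.distance import pdist

    def cpcc(X, method="single"):
        """Cophenetic correlation coefficient of a hierarchical clustering of X."""
        d = pdist(X)                       # condensed original distance matrix
        Z = linkage(d, method=method)      # e.g. "single" or "complete"
        c, cophenetic_d = cophenet(Z, d)   # c correlates d with the cophenetic distances
        return c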

28 Unsupervised Evaluation of Hierarchical Clustering
[Figure: single link and complete link results for the same data.]

29 Supervised Cluster Validation: Entropy and Purity
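A sketch of the standard entropy and purity measures for supervised cluster validation, with assumed notation: m_ij is the number of points of class j in cluster i, m_i is the size of cluster i, m is the total number of points, and p_ij = m_ij / m_i:

    entropy of cluster i:  e_i = - \sum_j p_ij \log_2 p_ij
    purity of cluster i:   p_i = \max_j p_ij
    overall entropy:       e = \sum_i (m_i / m) e_i
    overall purity:        purity = \sum_i (m_i / m) p_i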

30 Supervised Cluster Validation: Precision and Recall
Precision for cluster i with respect to class j: prec(i, j) = m_ij / m_i
Recall for cluster i with respect to class j: rec(i, j) = m_ij / m_j
where m_ij is the number of points of class j in cluster i, m_i is the number of points in cluster i, and m_j is the number of points of class j in the overall data.
[Figure: cluster i containing m_i1 points of class 1 and m_i2 points of class 2; the overall data containing m_1 points of class 1 and m_2 points of class 2.]

31 Supervised Cluster Validation: Hierarchical Clustering
Hierarchical F-measure:
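A sketch of the standard hierarchical F-measure, built from the precision and recall above; F(i, j) is the F-measure of cluster i with respect to class j, m_j the number of points of class j, m the total number of points, and the maximum is taken over all clusters i in the hierarchy:

    F(i, j) = 2 * prec(i, j) * rec(i, j) / (prec(i, j) + rec(i, j))
    F = \sum_j (m_j / m) \max_i F(i, j)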

32 Framework for Cluster Validity
Need a framework to interpret any measure: for example, if our measure of evaluation has the value 10, is that good, fair, or poor?
Statistics provides a framework for cluster validity:
- The more "atypical" a clustering result is, the more likely it is to represent valid structure in the data.
- Compare the values of an index obtained from random data or random clusterings to those of the actual clustering result. If the observed value of the index is unlikely under randomness, then the cluster results are valid.
For comparing the results of two different sets of cluster analyses, a framework is less necessary; however, there is still the question of whether the difference between two index values is significant.

33 Statistical Framework for SSE
Example: 2-d data with 100 points.
Suppose a clustering algorithm produces SSE = 0.005.
Does it mean that the clusters are statistically significant?

34 Statistical Framework for SSE
- Generate 500 sets of random data points of size 100, distributed over the range 0.2 - 0.8 for the x and y values.
- Perform clustering with K = 3 clusters for each data set.
- Plot the histogram of SSE and compare with the observed value of 0.005 (a sketch of this procedure follows).
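A minimal sketch of this procedure, assuming uniformly distributed random points and using K-means' inertia_ as the SSE; the vertical line marks the observed SSE for comparison against the reference distribution:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    observed_sse = 0.005                 # SSE reported for the actual clustering

    # 500 random data sets of 100 points each, x and y uniform over [0.2, 0.8]
    sses = [KMeans(n_clusters=3, n_init=10, random_state=0)
                .fit(rng.uniform(0.2, 0.8, size=(100, 2))).inertia_
            for _ in range(500)]

    plt.hist(sses, bins=20)
    plt.axvline(observed_sse, color="red")
    plt.xlabel("SSE of K-means (k = 3) on random data")
    plt.ylabel("count")
    plt.show()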

35 Statistical Framework for Correlation
Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets.
[Figure: the two data sets with their correlation values; one correlation is statistically significant, the other is not.]

36 Final Comment on Cluster Validity
“The validation of clustering structures is the most difficult and frustrating part of cluster analysis. Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage.” Algorithms for Clustering Data, Jain and Dubes

