Clustering (1)
Outline: clustering; similarity measures; hierarchical clustering; model-based clustering.
Figures from the book Data Clustering by Gan et al.
Objects in a cluster should:
- share closely related properties;
- have small mutual distances;
- be clearly distinguishable from objects not in the same cluster.
A cluster should be a densely populated region surrounded by relatively empty regions.
Compact cluster --- can be represented by a center.
Chained cluster --- exhibits higher-order structure.
Clustering
From a dataset of n objects, we can build an n x n distance matrix D = [d(x_i, x_j)] or a similarity matrix S = [s(x_i, x_j)]. Similarity measures
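As a minimal sketch (pure Python; the helper names are illustrative, not from the book), a pairwise distance matrix can be built like this:

```python
import math

def euclidean(x, y):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def distance_matrix(data):
    """n x n matrix of pairwise distances for a list of points."""
    n = len(data)
    return [[euclidean(data[i], data[j]) for j in range(n)] for i in range(n)]

points = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
D = distance_matrix(points)   # symmetric, zero diagonal
```

A similarity matrix is built the same way, with a similarity coefficient in place of the distance.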
Euclidean distance. Manhattan distance. Manhattan segmental distance (using only a subset of the dimensions). Similarity measures
Maximum distance (sup distance). Minkowski distance is the general case: R = 2 gives the Euclidean distance; R = 1, the Manhattan distance; R = ∞, the maximum distance. Similarity measures
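The Minkowski family above can be sketched in a few lines of pure Python (function names are my own):

```python
import math

def minkowski(x, y, r):
    """Minkowski distance of order r: (sum_k |x_k - y_k|^r)^(1/r).
    r=1 is the Manhattan distance, r=2 the Euclidean distance."""
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)

def maximum_distance(x, y):
    """Sup distance: the r -> infinity limit of the Minkowski distance."""
    return max(abs(a - b) for a, b in zip(x, y))

x, y = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
```

For these two points, `minkowski(x, y, 1)` is 7.0, `minkowski(x, y, 2)` is 5.0, and `maximum_distance(x, y)` is 4.0, illustrating how the distance shrinks toward the largest coordinate difference as R grows.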
Mahalanobis distance: d(x, y) = sqrt((x − y)^T S^{-1} (x − y)), where S is the covariance matrix. It is invariant under non-singular linear transformations: if the data are transformed by A, the new covariance matrix is A S A^T. Similarity measures
Under such a non-singular transformation, the Mahalanobis distance does not change. Similarity measures
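The invariance can be checked numerically. Below is a 2-D sketch in pure Python (the matrix helpers and the example covariance are mine, not from the book): transforming both points by a non-singular A and replacing S with A S A^T leaves the distance unchanged.

```python
import math

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matvec2(A, v):
    """Product of a 2x2 matrix and a 2-vector."""
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def transpose2(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def inv2(A):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mahalanobis(x, y, cov):
    """sqrt((x - y)^T cov^{-1} (x - y))."""
    s_inv = inv2(cov)
    diff = [x[0] - y[0], x[1] - y[1]]
    q = sum(diff[i] * s_inv[i][j] * diff[j] for i in range(2) for j in range(2))
    return math.sqrt(q)

S = [[2.0, 0.5], [0.5, 1.0]]        # example covariance matrix (assumed)
x, y = [1.0, 2.0], [3.0, 1.0]
A = [[1.0, 1.0], [0.0, 2.0]]        # a non-singular transformation
S2 = matmul2(matmul2(A, S), transpose2(A))   # new covariance A S A^T
d_before = mahalanobis(x, y, S)
d_after = mahalanobis(matvec2(A, x), matvec2(A, y), S2)
```

`d_before` and `d_after` agree to machine precision, because (Ad)^T (A S A^T)^{-1} (Ad) = d^T S^{-1} d.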
Chord distance: the length of the chord joining the two normalized points on a hypersphere of radius one. Geodesic distance: the length of the shorter arc connecting the two normalized points on the surface of the unit hypersphere. Similarity measures
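A short sketch of both distances (pure Python; function names are illustrative). Each point is first projected onto the unit hypersphere, then the chord is the straight-line distance and the geodesic is the arc length:

```python
import math

def normalize(x):
    """Project x onto the unit hypersphere."""
    norm = math.sqrt(sum(a * a for a in x))
    return [a / norm for a in x]

def chord_distance(x, y):
    """Length of the chord joining the two normalized points."""
    u, v = normalize(x), normalize(y)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def geodesic_distance(x, y):
    """Length of the shorter arc between the two normalized points."""
    u, v = normalize(x), normalize(y)
    cos_t = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(cos_t)
```

For orthogonal directions such as (1, 0) and (0, 1), the geodesic distance is pi/2 and the chord distance is sqrt(2); in general chord = 2 sin(geodesic / 2).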
Mixed-type data: general similarity coefficient by Gower.
For quantitative attributes, s_k = 1 − |x_k − y_k| / R_k (R_k is the range), if neither value is missing.
For binary attributes, s_k = 1 if x_k = 1 and y_k = 1; s_k = 0 if only one of x_k, y_k is 1.
For nominal attributes, s_k = 1 if x_k = y_k, and 0 otherwise, if neither is missing.
Similarity measures
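A sketch of Gower's coefficient in pure Python. The function name, the type encoding (`'quant'`, `'binary'`, `'nominal'`), and the use of `None` for missing values are my own conventions; the per-attribute rules follow the slide, with 0-0 binary matches carrying zero weight as in Gower's original proposal:

```python
def gower_similarity(x, y, types, ranges):
    """General similarity coefficient of Gower (a sketch).

    types[k] is 'quant', 'binary', or 'nominal'; ranges[k] is the range
    R_k of a quantitative attribute; None marks a missing value.
    Returns sum_k w_k s_k / sum_k w_k.
    """
    num, den = 0.0, 0.0
    for k, (a, b) in enumerate(zip(x, y)):
        if a is None or b is None:
            continue                      # skip attributes with missing values
        if types[k] == 'quant':
            s, w = 1.0 - abs(a - b) / ranges[k], 1.0
        elif types[k] == 'binary':
            if a == 1 and b == 1:
                s, w = 1.0, 1.0
            elif a == 1 or b == 1:
                s, w = 0.0, 1.0
            else:
                s, w = 0.0, 0.0           # 0-0 matches carry no weight
        else:                             # nominal
            s, w = (1.0 if a == b else 0.0), 1.0
        num += w * s
        den += w
    return num / den if den > 0 else 0.0
```

For example, two objects agreeing on a binary flag, half a range apart on a quantitative attribute, and mismatched on a nominal attribute get a similarity of (0.5 + 1 + 0) / 3 = 0.5.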
Similarity between clusters:
Mean-based distance: distance between the cluster centers.
Nearest neighbor: minimum pairwise distance between members.
Farthest neighbor: maximum pairwise distance between members.
Average neighbor: average pairwise distance between members.
Similarity measures
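The three neighbor-based cluster distances can be sketched directly from their definitions (pure Python; function names are mine):

```python
import math

def euclid(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def nearest_neighbor(c1, c2):
    """Minimum distance over all cross-cluster pairs (single linkage)."""
    return min(euclid(x, y) for x in c1 for y in c2)

def farthest_neighbor(c1, c2):
    """Maximum distance over all cross-cluster pairs (complete linkage)."""
    return max(euclid(x, y) for x in c1 for y in c2)

def average_neighbor(c1, c2):
    """Mean distance over all cross-cluster pairs (average linkage)."""
    return sum(euclid(x, y) for x in c1 for y in c2) / (len(c1) * len(c2))
```

By construction the average-neighbor distance always lies between the nearest- and farthest-neighbor distances.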
Hierarchical clustering Agglomerative: build the tree bottom-up by joining nodes; Divisive: build the tree top-down by dividing groups of objects.
Single linkage: the distance between any two nodes is the nearest neighbor distance. Complete linkage: the farthest neighbor distance. Average linkage: the average distance. Hierarchical clustering
Comments: hierarchical clustering generates a tree; to find clusters, the tree needs to be cut at a certain height. The complete linkage method favors compact, ball-shaped clusters; the single linkage method favors chain-shaped clusters; average linkage is somewhere in between. Hierarchical clustering
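As an illustration of the agglomerative procedure, here is a deliberately simple O(n^3) sketch in pure Python (not the book's algorithm): start with singleton clusters and repeatedly merge the pair with the smallest linkage distance until k clusters remain. Passing `max` as the linkage gives complete linkage, `min` gives single linkage:

```python
import math

def euclid(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def agglomerative(points, k, linkage=max):
    """Greedy agglomerative clustering down to k clusters (a sketch).

    linkage=max is complete linkage; linkage=min is single linkage.
    """
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        # find the pair of clusters with the smallest linkage distance
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = linkage(euclid(x, y) for x in clusters[i] for y in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge the pair
        del clusters[j]
    return clusters

data = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
groups = agglomerative(data, 2)
```

Recording the merge order and the distance at each merge would recover the full tree (dendrogram), which is then cut at a chosen height to obtain the clusters.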
Model-based clustering Impose certain model assumptions on potential clusters, and optimize the fit between the data and the model. The data are viewed as coming from a mixture of probability distributions, each of which represents a cluster.
For example, if we believe the data come from a mixture of several Gaussian densities, the density of data point i under cluster j is the Gaussian f_j(x_i) = φ(x_i; μ_j, Σ_j). Model-based clustering
Given the number of clusters, we try to maximize the likelihood L = Π_i Σ_j τ_j f_j(x_i), where τ_j is the probability that an observation belongs to cluster j. The most commonly used method is the EM algorithm. It iterates between soft cluster assignment (E-step) and parameter estimation (M-step). Model-based clustering
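The E-step/M-step loop can be sketched for the simplest case, a two-component one-dimensional Gaussian mixture (pure Python; the function name, initialization, and variance floor are my own choices, not a prescribed implementation):

```python
import math

def normal_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (a sketch)."""
    mu = [min(data), max(data)]       # crude initialization (assumed)
    var = [1.0, 1.0]
    tau = [0.5, 0.5]                  # mixture weights
    for _ in range(n_iter):
        # E-step: soft assignment of each point to each cluster
        resp = []
        for x in data:
            p = [tau[j] * normal_pdf(x, mu[j], var[j]) for j in range(2)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances
        for j in range(2):
            nj = sum(r[j] for r in resp)
            tau[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj
            var[j] = max(var[j], 1e-6)   # guard against variance collapse
    return tau, mu, var

data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
tau, mu, var = em_gmm(data)
```

On this toy dataset the two estimated means converge near 0 and 5, and the mixture weights near 0.5 each; the final responsibilities `resp` are the soft cluster assignments.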