
1
Clustering
Paolo Ferragina, Dipartimento di Informatica, Università di Pisa
These slides are a mix taken from several presentations, plus my own touch!

2
Objectives of Cluster Analysis
Find groups of objects such that the objects in a group are similar (or related) to one another and different from (or unrelated to) the objects in other groups:
- Inter-cluster distances are maximized
- Intra-cluster distances are minimized
These are competing objectives. Clustering is the most common form of unsupervised learning.

3
Google News: automatic clustering gives an effective news presentation metaphor

4
For improving search recall
Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs. Therefore, to improve search recall:
- Cluster the docs in the corpus a priori
- When a query matches a doc D, also return the other docs in the cluster containing D
Hope: the query “car” will then also return docs containing “automobile”. Clustering is also useful for speeding up the search operation.

5
For better visualization/navigation of search results Sec. 16.1

6
Issues for clustering
- Representation for clustering: document representation (vector space? normalization?) and a notion of similarity/distance
- How many clusters? Fixed a priori, or completely data-driven?
Sec. 16.2

7
Notion of similarity/distance
- Ideal: semantic similarity
- Practical: term-statistical similarity, with docs as vectors
We will use cosine similarity. For many algorithms it is easier to think in terms of a distance (rather than a similarity) between docs.
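The cosine measures above can be sketched in a few lines; this is a minimal stdlib-only illustration (the function names are my own, not from the slides):

```python
import math

def cosine_similarity(u, v):
    # Dot product divided by the product of the Euclidean norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cosine_distance(u, v):
    # A distance-like score: 0 for identical directions, larger when less similar.
    return 1.0 - cosine_similarity(u, v)
```

For non-negative term-weight vectors this distance lies in [0, 1]; for arbitrary real vectors it can reach 2.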

8
Clustering Algorithms
- Flat algorithms: create a set of clusters; usually start with a random (partial) partitioning and refine it iteratively (e.g., K-means clustering)
- Hierarchical algorithms: create a hierarchy of clusters (dendrogram); bottom-up (agglomerative) or top-down (divisive)

9
Hard vs. soft clustering
- Hard clustering: each document belongs to exactly one cluster. More common and easier to do.
- Soft clustering: each document can belong to more than one cluster. Makes more sense for applications like creating browsable hierarchies; news is a good example, and search results are another.

10
Flat & Partitioning Algorithms
Given: a set of n documents and the number K
Find: a partition into K clusters that optimizes the chosen partitioning criterion
- Globally optimal: requires exhaustively enumerating all partitions, which is intractable for many objective functions
- Locally optimal: effective heuristic methods, e.g., the K-means and K-medoids algorithms

11
K-Means
Assumes documents are real-valued vectors. Clusters are based on centroids (aka the center of gravity, or mean) of the points in a cluster c: μ(c) = (1/|c|) Σ_{x∈c} x. Reassignment of instances to clusters is based on distance to the current cluster centroids. Sec. 16.4

12
K-Means Algorithm
Select K random docs {s_1, s_2, …, s_K} as seeds.
Until clustering converges (or another stopping criterion holds):
- For each doc d_i: assign d_i to the cluster c_r such that dist(d_i, s_r) is minimal
- For each cluster c_j: s_j = μ(c_j), i.e., update the seed to the cluster centroid
Sec. 16.4
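The loop above (Lloyd's algorithm) can be sketched as a self-contained teaching implementation; the helper names are my own, and squared Euclidean distance stands in for whatever metric is chosen:

```python
import random

def dist2(u, v):
    # Squared Euclidean distance between two equal-length vectors.
    return sum((a - b) ** 2 for a, b in zip(u, v))

def kmeans(docs, k, max_iters=100, seed=0):
    rnd = random.Random(seed)
    # Select K random docs as seeds.
    centroids = [list(d) for d in rnd.sample(docs, k)]
    assign = None
    for _ in range(max_iters):
        # Assignment step: each doc goes to the cluster with the nearest centroid.
        new_assign = [min(range(k), key=lambda j: dist2(d, centroids[j]))
                      for d in docs]
        if new_assign == assign:   # partition unchanged -> converged
            break
        assign = new_assign
        # Update step: recompute each centroid as the mean of its cluster's docs.
        for j in range(k):
            members = [d for d, a in zip(docs, assign) if a == j]
            if members:            # keep the old centroid if a cluster empties
                centroids[j] = [sum(xs) / len(members) for xs in zip(*members)]
    return assign, centroids
```

Note the convergence test here is "doc partition unchanged", one of the termination conditions discussed below.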

13
K-Means Example (K=2)
[Figure: pick seeds → reassign clusters → compute centroids → reassign clusters → compute centroids → reassign clusters → converged!] Sec. 16.4

14
Termination conditions
Several possibilities, e.g.:
- A fixed number of iterations
- Doc partition unchanged
- Centroid positions don’t change
Sec. 16.4

15
Convergence
Why should the K-means algorithm ever reach a fixed point? K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm, and EM is known to converge. The number of iterations could be large, but in practice it usually isn’t. Sec. 16.4

16
Convergence of K-Means
Define the goodness measure of cluster c with centroid s_c as the sum of squared distances from the centroid: G(c, s) = Σ_{d_i ∈ c} (d_i − s_c)², and overall G(C, s) = Σ_c G(c, s).
Reassignment monotonically decreases G: K-means is a coordinate-descent algorithm (it optimizes one component at a time). At any step we have some value G(C, s):
1) Fix s, optimize C: assign each d to the closest centroid, so G(C′, s) ≤ G(C, s)
2) Fix C′, optimize s: take the new centroids, so G(C′, s′) ≤ G(C′, s) ≤ G(C, s)
The new cost is never larger than the original one, so K-means converges to a local minimum. Sec. 16.4
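The goodness measure G can be computed directly; this small sketch (function name my own) takes a clustering as a list of clusters, each a list of vectors:

```python
def dist2(u, v):
    # Squared Euclidean distance.
    return sum((a - b) ** 2 for a, b in zip(u, v))

def goodness(clusters):
    """G(C, s): for each cluster, sum the squared distances of its
    points from the cluster centroid, then sum over clusters."""
    total = 0.0
    for docs in clusters:
        centroid = [sum(xs) / len(docs) for xs in zip(*docs)]
        total += sum(dist2(d, centroid) for d in docs)
    return total
```

Each K-means step (reassignment or centroid update) can only keep G the same or lower it, which is the convergence argument above.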

17
Time Complexity
There are K centroids, and each doc/centroid consists of M dimensions, so computing the distance between two vectors takes O(M) time.
- Reassigning clusters: each doc is compared with all centroids, O(KNM) time
- Computing centroids: each doc gets added once to some centroid, O(NM) time
Assuming these two steps are each done once per iteration, for I iterations: O(IKNM). Sec. 16.4

18
Seed Choice
Results can vary based on the random seed selection: some seeds can result in a poor convergence rate, or in convergence to a sub-optimal clustering. Select good seeds using a heuristic:
- Pick the doc least similar to any existing centroid
- Or pick seeds according to a probability distribution proportional to the distance from the current centroids (the idea behind k-means++)
Example showing sensitivity to seeds: if you start with B and E as centroids you converge to {A,B,C} and {D,E,F}; if you start with D and F you converge to {A,B,D,E} and {C,F}. Sec. 16.4
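The distance-weighted heuristic can be sketched as follows (a k-means++-style sketch; the function name is my own, and the canonical method weights by squared distance to the nearest chosen seed):

```python
import random

def dist2(u, v):
    # Squared Euclidean distance.
    return sum((a - b) ** 2 for a, b in zip(u, v))

def pick_seeds(docs, k, seed=0):
    """Choose seeds one by one, favouring docs far from the seeds
    chosen so far (already-chosen docs get weight 0)."""
    rnd = random.Random(seed)
    seeds = [list(rnd.choice(docs))]
    while len(seeds) < k:
        # Weight each doc by its squared distance to the nearest chosen seed.
        weights = [min(dist2(d, s) for s in seeds) for d in docs]
        seeds.append(list(rnd.choices(docs, weights=weights)[0]))
    return seeds
```

Because chosen docs get weight 0, no seed is ever picked twice (for distinct input points).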

19
How Many Clusters?
If the number of clusters K is given, partition the n docs into that predetermined number of clusters. Otherwise, finding the “right” number of clusters is part of the problem. One can usually take an algorithm for one flavor and convert it to the other.

20
Bisecting K-means Variant of K-means that can produce a partitional or a hierarchical clustering

21
Bisecting K-means Example
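The bisecting idea can be sketched as below (a teaching sketch with names of my own; practical variants often split the cluster with the highest SSE rather than the largest one, and repeat each split several times keeping the best):

```python
import random

def dist2(u, v):
    # Squared Euclidean distance.
    return sum((a - b) ** 2 for a, b in zip(u, v))

def two_means(docs, seed=0):
    """Plain 2-means, used as the splitting subroutine."""
    rnd = random.Random(seed)
    cents = [list(d) for d in rnd.sample(docs, 2)]
    assign = None
    for _ in range(100):
        new = [0 if dist2(d, cents[0]) <= dist2(d, cents[1]) else 1
               for d in docs]
        if new == assign:          # partition unchanged -> converged
            break
        assign = new
        for j in (0, 1):
            members = [d for d, a in zip(docs, assign) if a == j]
            if members:
                cents[j] = [sum(xs) / len(members) for xs in zip(*members)]
    return assign

def bisecting_kmeans(docs, k, seed=0):
    clusters = [list(docs)]
    while len(clusters) < k:
        # Take the largest cluster and bisect it with 2-means.
        clusters.sort(key=len, reverse=True)
        target = clusters.pop(0)
        assign = two_means(target, seed)
        clusters.append([d for d, a in zip(target, assign) if a == 0])
        clusters.append([d for d, a in zip(target, assign) if a == 1])
    return clusters
```

Recording the sequence of splits, instead of just the final partition, yields the hierarchical output mentioned on the previous slide.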

22
K-means
Pros:
- Simple
- Fast for low-dimensional data
- Can find pure sub-clusters if a large number of clusters is specified (but this over-partitions)
Cons:
- Cannot handle non-globular clusters, or clusters of different sizes and densities
- Does not identify outliers
- Restricted to data that has a notion of a center (centroid)

23
Hierarchical Clustering
Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents. One approach: recursive application of a partitional clustering algorithm.
[Example dendrogram: animal → vertebrate (fish, reptile, amphibian, mammal) and invertebrate (worm, insect, crustacean)] Ch. 17

24
Strengths of Hierarchical Clustering
- No assumption of any particular number of clusters: any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
- The clusters may correspond to meaningful taxonomies, e.g., in the biological sciences (animal kingdom, phylogeny reconstruction, …)

25
Hierarchical Agglomerative Clustering (HAC)
Start with each doc in a separate cluster, then repeatedly join the closest pair of clusters, until there is only one cluster. The history of merges forms a binary tree (hierarchy). Sec. 17.1
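The HAC loop can be sketched naively as below (my own sketch, using single link over a caller-supplied distance; real implementations maintain a proximity matrix instead of rescanning all pairs, giving far better asymptotics):

```python
def hac(docs, dist):
    """Naive HAC: start with singleton clusters, repeatedly merge the
    closest pair (single link: minimum pairwise distance), and record
    the merge history, which defines the dendrogram."""
    clusters = [[d] for d in docs]
    history = []
    while len(clusters) > 1:
        # Find the closest pair of clusters.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        history.append((clusters[i], clusters[j]))
        merged = clusters[i] + clusters[j]
        clusters = [c for t, c in enumerate(clusters)
                    if t not in (i, j)] + [merged]
    return history
```

Cutting the recorded history at any point yields a flat clustering with the desired number of clusters.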

26
Closest pair of clusters
- Single-link: similarity of the closest points, i.e., the most cosine-similar pair
- Complete-link: similarity of the farthest points, i.e., the least cosine-similar pair
- Centroid: clusters whose centroids are the closest (or most cosine-similar)
- Average-link: clusters whose average distance/cosine between pairs of elements is the smallest
Sec. 17.2
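The four linkage criteria can be written side by side in terms of cosine similarity (a minimal sketch; function names are my own):

```python
import math

def cosine(u, v):
    # Cosine similarity of two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def single_link(A, B):
    # Similarity of the closest pair: the most cosine-similar points.
    return max(cosine(a, b) for a in A for b in B)

def complete_link(A, B):
    # Similarity of the farthest pair: the least cosine-similar points.
    return min(cosine(a, b) for a in A for b in B)

def average_link(A, B):
    # Average pairwise similarity between the two clusters.
    sims = [cosine(a, b) for a in A for b in B]
    return sum(sims) / len(sims)

def centroid_link(A, B):
    # Cosine similarity of the two cluster centroids.
    ca = [sum(xs) / len(A) for xs in zip(*A)]
    cb = [sum(xs) / len(B) for xs in zip(*B)]
    return cosine(ca, cb)
```

Since these are similarities, HAC merges the pair of clusters whose linkage score is highest.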

27
How to Define Inter-Cluster Similarity
[Figure, repeated on slides 27–31: a proximity matrix over points p1…p5, highlighting in turn the four definitions — single link (MIN), complete link (MAX), centroids, and group average]

32
Starting Situation
Start with clusters of individual points and a proximity matrix. [Figure: points p1…p5 and their proximity matrix]

33
Intermediate Situation
After some merging steps, we have some clusters. [Figure: clusters C1…C5 and the proximity matrix between them]

34
Intermediate Situation
We want to merge the two closest clusters (C2 and C5) and update the proximity matrix. [Figure: clusters C1…C5, with C2 and C5 about to be merged]

35
After Merging
The question is: “How do we update the proximity matrix?” [Figure: the merged cluster C2 ∪ C5, with the rows and columns of the proximity matrix that must be recomputed marked ‘?’]
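For single link the update is simple: the merged cluster's distance to any other cluster k is min of the two old distances (for complete link, take max instead). A sketch on a symmetric distance matrix, with my own function name:

```python
def merge_rows_single_link(prox, i, j):
    """Update a symmetric distance matrix after merging clusters i and j
    under single link: d(i∪j, k) = min(prox[i][k], prox[j][k]).
    The merged cluster becomes the last row/column of the result."""
    n = len(prox)
    keep = [t for t in range(n) if t not in (i, j)]
    merged_row = [min(prox[i][t], prox[j][t]) for t in keep]
    # Copy the surviving rows/columns, then append the merged cluster.
    new = [[prox[a][b] for b in keep] for a in keep]
    for row, d in zip(new, merged_row):
        row.append(d)
    new.append(merged_row + [0.0])
    return new
```

This is an instance of the Lance–Williams update scheme, which covers the other linkage criteria with different combination formulas.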

36
Cluster Similarity: MIN or Single Link
Similarity of two clusters is based on the two most similar (closest) points in the different clusters: it is determined by one pair of points, i.e., by one link in the proximity graph.

37
Strength of MIN
Can handle non-elliptical shapes. [Figure: original points vs. the two clusters found]

38
Limitations of MIN
Sensitive to noise and outliers. [Figure: original points vs. the two clusters found]

39
Cluster Similarity: MAX or Complete Linkage
Similarity of two clusters is based on the two least similar (most distant) points in the different clusters: it is determined by all pairs of points in the two clusters.

40
Strength of MAX
Less susceptible to noise and outliers. [Figure: original points vs. the two clusters found]

41
Limitations of MAX
Tends to break large clusters; biased towards globular clusters. [Figure: original points vs. the two clusters found]

42
Cluster Similarity: Average
Proximity of two clusters is the average of the pairwise proximities between points in the two clusters.

43
Hierarchical Clustering: Comparison
[Figure: the same dataset clustered with MIN, MAX, and group-average linkage]

44
How to evaluate clustering quality?
Assessing a clustering with respect to ground truth requires labeled data, and producing the gold standard is costly! Sec. 16.3
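Given gold labels, one common external measure (purity, from IIR Sec. 16.3) can be sketched as follows; the function name and input format are my own:

```python
from collections import Counter

def purity(clusters, labels):
    """Purity: each cluster is credited with its majority gold label;
    purity is the fraction of docs matching their cluster's majority.

    clusters: list of clusters, each a list of doc ids
    labels:   dict mapping doc id -> gold label
    """
    correct = 0
    total = 0
    for cluster in clusters:
        counts = Counter(labels[d] for d in cluster)
        correct += counts.most_common(1)[0][1]  # size of the majority label
        total += len(cluster)
    return correct / total
```

Purity is easy to compute but trivially maxed out by singleton clusters, which is why measures such as NMI or the Rand index are often preferred.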
