Prof. Paolo Ferragina, Algoritmi per "Information Retrieval"


1 Prof. Paolo Ferragina, Algoritmi per "Information Retrieval"
Clustering. Paolo Ferragina, Dipartimento di Informatica, Università di Pisa. Chapters 16 and 17.

2 Objectives of Cluster Analysis
Finding groups of objects such that the objects in a group are similar (or related) to one another and different from (or unrelated to) the objects in other groups. Two competing objectives: intra-cluster distances are minimized, inter-cluster distances are maximized. Clustering is the most common form of unsupervised learning.

3 Google News: automatic clustering gives an effective news presentation metaphor

4 For improving search recall
Sec. 16.1 Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs. Therefore, to improve search recall: cluster the docs in the corpus a priori; when a query matches a doc D, also return the other docs in the cluster containing D. The hope: the query "car" will also return docs containing "automobile". Clustering is also useful for speeding up the search operation.

5 For better visualization/navigation of search results
Sec. 16.1

6 Issues for clustering
Sec. 16.2 Representation for clustering: document representation (vector space? normalization?) and a notion of similarity/distance. How many clusters: fixed a priori, or completely data-driven?

7 Notion of similarity/distance
Ideal: semantic similarity. Practical: term-statistical similarity. Docs are treated as vectors, and we will use cosine similarity. For many algorithms, it is easier to think in terms of a distance (rather than a similarity) between docs.
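As a concrete illustration of the practical notion above, a minimal Python sketch of cosine similarity between two term-weight vectors (the function names are mine, not from the slides):

import math

def cosine_similarity(u, v):
    # Dot product over the product of the Euclidean norms.
    # Assumes u and v have the same number of dimensions.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

def cosine_distance(u, v):
    # A common distance surrogate: 1 - cosine similarity.
    return 1.0 - cosine_similarity(u, v)

For length-normalized vectors, ranking by cosine similarity agrees with ranking by Euclidean distance, which is why the two views are largely interchangeable.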

8 Clustering Algorithms
Flat algorithms create a set of clusters; they usually start with a random (partial) partitioning and refine it iteratively, as in K-means clustering. Hierarchical algorithms create a hierarchy of clusters (a dendrogram), either bottom-up (agglomerative) or top-down (divisive).

9 Hard vs. soft clustering
Hard clustering: each document belongs to exactly one cluster; this is more common and easier to do. Soft clustering: each document can belong to more than one cluster; this makes more sense for applications like creating browsable hierarchies. News clustering is a good example; search results are another.

10 Flat & Partitioning Algorithms
Given: a set of n documents and the number K. Find: a partition into K clusters that optimizes the chosen partitioning criterion. A globally optimal solution is intractable for many objective functions, since it would require exhaustively enumerating all partitions. Locally optimal solutions come from effective heuristic methods: the K-means and K-medoids algorithms.

11 K-Means
Sec. 16.4 Assumes documents are real-valued vectors. Clusters are based on centroids (aka the center of gravity or mean) of the points in a cluster c: $\mu(c) = \frac{1}{|c|}\sum_{d \in c} d$. Reassignment of instances to clusters is based on distance to the current cluster centroids.

12 K-Means Algorithm
Sec. 16.4 Select K random docs {s1, s2, …, sK} as seeds. Until the clustering converges (or another stopping criterion is met): (1) for each doc di, assign di to the cluster cr such that dist(di, sr) is minimal; (2) for each cluster cj, recompute its seed as the centroid: sj = μ(cj).
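A minimal Python sketch of the algorithm above, assuming docs are dense lists of floats; the helper names are mine, not from the slides:

import random

def squared_distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def mean_vector(vectors):
    # Component-wise mean: the centroid mu(c) of a non-empty cluster.
    n = len(vectors)
    return [sum(xs) / n for xs in zip(*vectors)]

def k_means(docs, k, max_iters=100):
    # Select K random docs as seeds.
    centroids = random.sample(docs, k)
    assignment = None
    for _ in range(max_iters):
        # Reassignment: each doc goes to the cluster with the closest centroid.
        new_assignment = [
            min(range(k), key=lambda j: squared_distance(d, centroids[j]))
            for d in docs
        ]
        if new_assignment == assignment:  # doc partition unchanged: converged
            break
        assignment = new_assignment
        # Recomputation: each centroid becomes the mean of its cluster.
        for j in range(k):
            members = [d for d, a in zip(docs, assignment) if a == j]
            if members:  # keep the old centroid if a cluster went empty
                centroids[j] = mean_vector(members)
    return assignment, centroids

For example, k_means([[0, 0], [0, 1], [10, 10], [10, 11]], 2) separates the two point pairs, up to the usual seed sensitivity discussed on a later slide.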

13 K-Means Example (K=2)
Sec. 16.4 Pick seeds, reassign clusters, compute centroids, reassign clusters, compute centroids, reassign clusters: converged! [Figure: the iterations on a 2D point set, with x marking the current centroids.]

14 Termination conditions
Sec. 16.4 Several possibilities: a fixed number of iterations; the doc partition is unchanged; the centroid positions don't change.

15 Convergence
Sec. 16.4 Why should the K-means algorithm ever reach a fixed point? K-means is a special case of a general procedure known as the Expectation-Maximization (EM) algorithm, and EM is known to converge. The number of iterations could be large, but in practice it usually isn't.

16 Convergence of K-Means
Sec. 16.4 Define the goodness measure of a cluster c with centroid s_c as the sum of squared distances from the cluster centroid: $G(c,s) = \sum_{d_j \in c} \|d_j - s_c\|^2$ (summing over all docs d_j in cluster c), and $G(C,s) = \sum_{c \in C} G(c,s)$ for the whole clustering. Reassignment monotonically decreases G, because K-means is a coordinate descent algorithm (it optimizes one component at a time). At any step we have some value for G(C,s): (1) fix s and optimize C by assigning each doc to its closest centroid, so G(C',s) ≤ G(C,s); (2) fix C' and optimize s by taking the new centroids, so G(C',s') ≤ G(C',s) ≤ G(C,s). The cost never increases, so the algorithm converges to a local minimum.
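To make the goodness measure concrete, a small sketch that computes G for a given partition and its centroids (illustrative names, same plain-Python style as the earlier sketch):

def squared_distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def goodness(clusters, centroids):
    # G(C, s): total sum of squared distances from each doc
    # to the centroid of the cluster it belongs to.
    return sum(
        squared_distance(d, s)
        for members, s in zip(clusters, centroids)
        for d in members
    )

Evaluating this after each half-step of K-means (reassignment, then recomputation) shows the value never increasing, which is exactly the coordinate-descent argument above.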

17 Time Complexity
Sec. 16.4 There are K centroids, and each doc/centroid consists of M dimensions, so computing the distance between two vectors takes O(M) time. Reassigning clusters: each doc is compared with all centroids, O(NKM) time. Computing centroids: each doc gets added once to some centroid, O(NM) time. Assuming these two steps are each done once per iteration, for I iterations the total is O(IKNM). (For instance, N = 1,000,000 docs, K = 100, M = 1,000 and I = 10 already give on the order of 10^12 operations.)

18 Seed Choice
Sec. 16.4 Results can vary based on the random seed selection: some seeds can result in a poor convergence rate, or in convergence to sub-optimal clusterings. Select good seeds using a heuristic: e.g., pick each new seed as the doc least similar to any existing centroid, or sample it from a probability distribution that grows with the distance from the current centroids (as in K-means++-style seeding). Example showing sensitivity to seeds (figure omitted): with six points A-F, if you start with B and E as centroids you converge to {A,B,C} and {D,E,F}; if you start with D and F you converge to {A,B,D,E} and {C,F}.
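A sketch of the distance-weighted heuristic, in the style of K-means++: each next seed is sampled with probability proportional to its squared distance from the nearest seed chosen so far (the function name is mine):

import random

def squared_distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def pick_seeds(docs, k):
    # First seed: uniform at random; assumes at least k distinct docs.
    seeds = [random.choice(docs)]
    while len(seeds) < k:
        # Weight each doc by its squared distance to its nearest seed,
        # so far-away docs are more likely to become the next seed.
        weights = [min(squared_distance(d, s) for s in seeds) for d in docs]
        seeds.append(random.choices(docs, weights=weights, k=1)[0])
    return seeds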

19 How Many Clusters?
Either the number of clusters K is given, and we partition the n docs into a predetermined number of clusters, or finding the "right" number of clusters is part of the problem. One can usually take an algorithm for one flavor and convert it to the other.

20 Bisecting K-means
Variant of K-means that can produce a partitional or a hierarchical clustering. SSE = G(C,s) is called the Sum of Squared Error.
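A sketch of the bisecting strategy under its common formulation (the slide's own algorithm was on an image, so the details here are assumptions): keep splitting the cluster with the largest SSE in two using 2-means. It reuses squared_distance, mean_vector and k_means from the earlier sketches:

def sse(members):
    # Sum of Squared Error of one non-empty cluster around its centroid.
    centroid = mean_vector(members)
    return sum(squared_distance(d, centroid) for d in members)

def bisecting_k_means(docs, k):
    clusters = [list(docs)]
    while len(clusters) < k:
        # Pick the cluster with the largest SSE and split it with 2-means.
        worst = max(clusters, key=sse)
        clusters.remove(worst)
        assignment, _ = k_means(worst, 2)
        halves = [[d for d, a in zip(worst, assignment) if a == s] for s in (0, 1)]
        if any(not half for half in halves):
            clusters.append(worst)  # degenerate split; a real impl would retry
            break
        clusters.extend(halves)
    return clusters

Recording the split history instead of just the final partition yields a hierarchy, which is why the variant can serve both as a partitional and as a hierarchical method.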

21 Bisecting K-means Example
[Figure: a point set split step by step by bisecting K-means.]

22 K-means: Pros and Cons
Pros: simple; fast for low-dimensional data; it can find pure sub-clusters if a large number of clusters is specified (at the cost of over-partitioning). Cons: K-means cannot handle non-globular clusters or clusters of different sizes and densities; it will not identify outliers; it is restricted to data that has a notion of center (centroid).

23 Hierarchical Clustering
Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents. One possibility: recursive application of a partitional clustering algorithm. [Figure: example taxonomy — animal splits into vertebrate (fish, reptile, amphibian, mammal) and invertebrate (worm, insect, crustacean).]

24 Strengths of Hierarchical Clustering
No assumption of any particular number of clusters: any desired number of clusters can be obtained by "cutting" the dendrogram at the proper level. The resulting clusters may correspond to meaningful taxonomies, as in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …).

25 Hierarchical Agglomerative Clustering (HAC)
Sec. 17.1 Start with each doc in a separate cluster, then repeatedly join the most similar pair of clusters, until there is only one cluster. Note: at every step, the similarity is computed among all pairs of clusters, and the best pair is merged. The history of merging forms a binary tree or hierarchy.
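A naive Python sketch of HAC with single-link similarity, cubic-or-worse in the number of docs but faithful to the loop above; sim is any doc-doc similarity (e.g., the cosine sketch earlier), and the returned merge history is the binary tree:

def hac_single_link(docs, sim):
    # Each cluster is a list of doc indices; start with singletons.
    clusters = [[i] for i in range(len(docs))]
    history = []
    while len(clusters) > 1:
        # Find the pair of clusters with the best (maximum) similarity.
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = max(sim(docs[i], docs[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or s > best[0]:
                    best = (s, a, b)
        s, a, b = best
        # Merge the winning pair and record the step (the dendrogram).
        history.append((clusters[a], clusters[b], s))
        merged = clusters[a] + clusters[b]
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)]
        clusters.append(merged)
    return history

A production implementation would cache a proximity matrix and update it after each merge rather than recomputing similarities, which can bring single-link down to O(N^2) time.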

26 Similarity of a pair of clusters
Sec. 17.2 Single-link: similarity of the closest points. Complete-link: similarity of the farthest points. Centroid: similarity of the centroids. Average-link: the average similarity between all pairs of items.
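In symbols, a standard rendering of the four options (not verbatim from the slides):

\mathrm{sim}_{\text{single}}(C_i, C_j) = \max_{x \in C_i,\; y \in C_j} \mathrm{sim}(x, y)
\mathrm{sim}_{\text{complete}}(C_i, C_j) = \min_{x \in C_i,\; y \in C_j} \mathrm{sim}(x, y)
\mathrm{sim}_{\text{centroid}}(C_i, C_j) = \mathrm{sim}\big(\mu(C_i), \mu(C_j)\big)
\mathrm{sim}_{\text{average}}(C_i, C_j) = \frac{1}{|C_i|\,|C_j|} \sum_{x \in C_i} \sum_{y \in C_j} \mathrm{sim}(x, y)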

27-31 How to Define Inter-Cluster Similarity
[Figure series: points p1 … p5 and their proximity matrix; successive slides highlight each option in turn: single link (MIN), complete link (MAX), centroids, and the average of pairwise proximities.]

32 Starting Situation
Start with clusters of individual points and a proximity matrix. [Figure: points p1 … p5 and their pairwise proximity matrix.]

33 Intermediate Situation
After some merging steps, we have some clusters. [Figure: clusters C1 … C5 and the current proximity matrix.]

34 Intermediate Situation
We want to merge the two closest clusters (C2 and C5) and update the proximity matrix. [Figure: clusters C1 … C5, with C2 and C5 about to be merged.]

35 After Merging
The question is: how do we update the proximity matrix? [Figure: the proximity matrix after merging C2 and C5; the entries in the row and column for C2 U C5 are marked "?".]

36 Cluster Similarity: MIN or Single Link
Similarity of two clusters is based on the two most similar (closest) points in the different clusters, i.e., it is determined by one pair of points: one link in the proximity graph. [Figure: five points, with the closest cross-cluster pair highlighted.]

37 Strength of MIN
Can handle non-elliptical shapes. [Figure: original points vs. the two clusters found.]

38 Limitations of MIN
Sensitive to noise and outliers. [Figure: original points vs. the two clusters found.]

39 Cluster Similarity: MAX or Complete Linkage
Similarity of two clusters is based on the two least similar (most distant) points in the different clusters, so it is determined by all pairs of points in the two clusters. [Figure: five points, with the farthest cross-cluster pair highlighted.]

40 Strength of MAX
Less susceptible to noise and outliers. [Figure: original points vs. the two clusters found.]

41 Limitations of MAX
Tends to break large clusters; biased towards globular clusters. [Figure: original points vs. the two clusters found.]

42 Cluster Similarity: Average
Proximity of two clusters is the average of the pairwise proximities between points in the two clusters. [Figure: five points, with all cross-cluster pairs connected.]

43 Hierarchical Clustering: Comparison
[Figure: dendrograms produced by MIN, MAX, and Average linkage on the same six points.]

44 How to evaluate clustering quality?
Sec. 16.3 Assess a clustering with respect to ground truth; this requires labeled data, and producing the gold standard is costly!
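As one concrete ground-truth criterion, a sketch of purity (a standard external measure covered in the referenced Sec. 16.3; the implementation and names are mine): each cluster is credited with the count of its most frequent gold label, and the total is divided by the number of docs.

from collections import Counter

def purity(clusters, gold_labels):
    # clusters: list of lists of doc ids; gold_labels: dict doc id -> class.
    # Each non-empty cluster contributes the size of its majority class.
    correct = sum(
        Counter(gold_labels[d] for d in cluster).most_common(1)[0][1]
        for cluster in clusters
    )
    total = sum(len(cluster) for cluster in clusters)
    return correct / total

A purity of 1.0 means every cluster is label-pure; note that purity is trivially maximized by putting each doc in its own cluster, so it is usually reported together with the number of clusters.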

