
1 INFORMATION RETRIEVAL TECHNIQUES BY DR. ADNAN ABID
Lecture # 41 Clustering

2 ACKNOWLEDGEMENTS
The presentation of this lecture has been taken from the following sources:
  “Introduction to Information Retrieval” by Prabhakar Raghavan, Christopher D. Manning, and Hinrich Schütze
  “Managing Gigabytes” by Ian H. Witten, Alistair Moffat, and Timothy C. Bell
  “Modern Information Retrieval” by Ricardo Baeza-Yates and Berthier Ribeiro-Neto
  “Web Information Retrieval” by Stefano Ceri, Alessandro Bozzon, and Marco Brambilla

3 Outline
What is clustering?
Improving search recall
Issues for clustering
Notion of similarity/distance
Hard vs. soft clustering
Clustering Algorithms

4 What is clustering?
Clustering: the process of grouping a set of objects into classes of similar objects
  Documents within a cluster should be similar.
  Documents from different clusters should be dissimilar.
The commonest form of unsupervised learning
  Unsupervised learning = learning from raw data, as opposed to supervised learning, where a classification of examples is given
A common and important task that finds many applications in IR and other places
00:01:30  00:01:45 (Clustering) 00:01:30  00:03:55 (the commonest)

5 A data set with clear cluster structure
How would you design an algorithm for finding the three clusters in this case? 00:05:10  00:05:50

6 Applications of clustering in IR
Whole corpus analysis/navigation
  Better user interface: search without typing
For improving recall in search applications
  Better search results (like pseudo RF)
For better navigation of search results
  Effective “user recall” will be higher
For speeding up vector space retrieval
  Cluster-based retrieval gives faster search
00:06:40  00:07:00

7 Yahoo! Hierarchy isn’t clustering but is the kind of output you want from clustering
(Figure: sample of the Yahoo! Science hierarchy, with top-level categories such as agriculture, biology, physics, CS, and space, and subcategories such as dairy, crops, agronomy, forestry, botany, cell, evolution, AI, HCI, courses, craft, missions, magnetism, and relativity.)
Final data set: Yahoo! Science hierarchy, consisting of text of web pages pointed to by Yahoo!, gathered summer of 1997. 264 classes, only a sample shown here.
00:07:30  00:08:00 00:08:15  00:08:35

8 Google News: automatic clustering gives an effective news presentation metaphor
00:09:25  00:10:10

9 Scatter/Gather: Cutting, Karger, and Pedersen
00:10:30  00:11:05 00:11:10  00:12:20

10 Applications of clustering in IR
Whole corpus analysis/navigation
  Better user interface: search without typing
For improving recall in search applications
  Better search results (like pseudo RF)
For better navigation of search results
  Effective “user recall” will be higher
For speeding up vector space retrieval
  Cluster-based retrieval gives faster search
00:13:10  00:13:25

11 For improving search recall
Cluster hypothesis - Documents in the same cluster behave similarly with respect to relevance to information needs
Therefore, to improve search recall:
  Cluster docs in the corpus a priori
  When a query matches a doc D, also return other docs in the cluster containing D
Hope if we do this:
  The query “car” will also return docs containing automobile
  Because clustering grouped together docs containing car with those containing automobile.
00:15:45  00:16:10 (cluster hypothesis & therefore) 00:16:30  00:16:51 (when a query) 00:17:00  00:17:30 (hope if we)
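A minimal Python sketch of the recall-improvement idea above, assuming clusters have already been computed offline; the names expand_with_clusters, cluster_of, and clusters are illustrative, not from the lecture:

```python
# Hypothetical precomputed clustering:
# cluster_of maps each doc id to its cluster id,
# clusters maps each cluster id to the set of doc ids it contains.
cluster_of = {0: "c1", 1: "c1", 2: "c2", 3: "c3"}
clusters = {"c1": {0, 1}, "c2": {2}, "c3": {3}}

def expand_with_clusters(matched_docs, cluster_of, clusters):
    """Return the matched docs plus all other docs from their clusters."""
    expanded = set(matched_docs)
    for doc_id in matched_docs:
        expanded |= clusters[cluster_of[doc_id]]   # pull in the cluster-mates
    return expanded

# Docs 0 and 2 match the query "car"; doc 1 (say, one that uses "automobile")
# was clustered together with doc 0, so it is returned as well.
print(expand_with_clusters({0, 2}, cluster_of, clusters))   # {0, 1, 2}
```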

12 Applications of clustering in IR
Whole corpus analysis/navigation
  Better user interface: search without typing
For improving recall in search applications
  Better search results (like pseudo RF)
For better navigation of search results
  Effective “user recall” will be higher
For speeding up vector space retrieval
  Cluster-based retrieval gives faster search
00:18:50  00:19:05

13 yippy.com – grouping search results
00:19:10  00:20:20 00:20:30  00:20:55 00:21:10  00:21:25

14 Applications of clustering in IR
Whole corpus analysis/navigation
  Better user interface: search without typing
For improving recall in search applications
  Better search results (like pseudo RF)
For better navigation of search results
  Effective “user recall” will be higher
For speeding up vector space retrieval
  Cluster-based retrieval gives faster search
00:22:20  00:22:32

15 Visualization
(Figure: a query shown together with cluster leaders and their followers.)
00:22:40  00:22:55

16 Issues for clustering
Representation for clustering
  Document representation: Vector space? Normalization?
  Need a notion of similarity/distance
How many clusters?
  Fixed a priori?
  Completely data driven?
  Avoid “trivial” clusters - too large or small
  If a cluster's too large, then for navigation purposes you've wasted an extra user click without whittling down the set of documents much.
00:24:10  00:25:00 (represents) 00:25:30  00:26:15 (how many clusters)
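As one concrete way to get a normalized vector-space representation (not prescribed by the lecture, just one common choice), here is a short scikit-learn sketch; the documents are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "clustering groups similar documents together",
    "k-means is a flat clustering algorithm",
    "hierarchical clustering builds a dendrogram",
]

# TfidfVectorizer produces L2-normalized tf-idf vectors by default,
# so dot products between rows are cosine similarities.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)          # sparse matrix, one row per document
print(X.shape)                              # (3, number_of_terms)
print((X[0] @ X[1].T).toarray()[0, 0])      # cosine similarity of docs 0 and 1
```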

17 Notion of similarity/distance
Ideal: semantic similarity.
Practical: term-statistical similarity (docs as vectors)
  Cosine similarity
  For many algorithms, it is easier to think in terms of a distance (rather than a similarity) between docs.
  We will mostly speak of Euclidean distance, but real implementations use cosine similarity.
00:26:45  00:27:00 (ideal) 00:27:15  00:28:10 (practical)
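A small numpy sketch of the two measures just mentioned, on invented term-count vectors (the vectors and names are purely illustrative):

```python
import numpy as np

# Two made-up document vectors (e.g., term counts); purely illustrative.
d1 = np.array([3.0, 0.0, 2.0, 1.0])
d2 = np.array([1.0, 1.0, 0.0, 1.0])

def cosine_similarity(a, b):
    """Cosine of the angle between a and b (1.0 = same direction)."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean_distance(a, b):
    """Straight-line distance between a and b (0.0 = identical)."""
    return float(np.linalg.norm(a - b))

print(cosine_similarity(d1, d2))   # higher means more similar
print(euclidean_distance(d1, d2))  # lower means more similar
# For unit-length (normalized) vectors the two orderings agree, which is why
# one can reason with Euclidean distance but implement with cosine similarity.
```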

18 Hard vs. soft clustering
Hard clustering: Each document belongs to exactly one cluster
  More common and easier to do
Soft clustering: A document can belong to more than one cluster.
  Makes more sense for applications like creating browsable hierarchies
  You may want to put a pair of sneakers in two clusters: (i) sports apparel and (ii) shoes
  You can only do that with a soft clustering approach.
00:28:25  00:28:35 (hard clustering) 00:28:55  00:29:45
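A tiny illustration of the distinction, with invented item and cluster names: a hard clustering assigns each item exactly one cluster, while a soft clustering gives it a membership weight for every cluster:

```python
# Hard clustering: exactly one cluster per item.
hard_assignment = {
    "sneakers": "shoes",
    "tracksuit": "sports apparel",
}

# Soft clustering: a membership weight per cluster (weights sum to 1 here),
# so "sneakers" can belong to both "shoes" and "sports apparel".
soft_assignment = {
    "sneakers":  {"shoes": 0.6, "sports apparel": 0.4},
    "tracksuit": {"shoes": 0.0, "sports apparel": 1.0},
}
```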

19 Clustering Algorithms
Flat algorithms
  Usually start with a random (partial) partitioning
  Refine it iteratively
  K-means clustering
  (Model based clustering)
Hierarchical algorithms
  Bottom-up, agglomerative
  (Top-down, divisive)
00:30:15  00:31:00

20 Partitioning Algorithms
Partitioning method: Construct a partition of n documents into a set of K clusters
Given: a set of documents and the number K
Find: a partition into K clusters that optimizes the chosen partitioning criterion
  Globally optimal: intractable for many objective functions, since it would require exhaustively enumerating all partitions
  Effective heuristic methods: K-means and K-medoids algorithms
00:32:50  00:33:15 (partitioning & given & find) 00:33:55  00:34:20 (globally & effective)

21 K-Means
Assumes documents are real-valued vectors.
Clusters based on centroids (aka the center of gravity or mean) of the points in a cluster c:
  μ(c) = (1/|c|) Σ_{x ∈ c} x
Reassignment of instances to clusters is based on distance to the current cluster centroids.
(Or one can equivalently phrase it in terms of similarities)
00:34:40  00:35:00 (assumes & clusters) 00:35:25  00:35:50 (formula) 00:37:10  00:37:30 (reassignment)
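A minimal numpy illustration of the centroid formula above, on made-up 2-D points (the data are invented for the example):

```python
import numpy as np

# Four made-up 2-D "documents" assigned to one cluster c.
cluster_points = np.array([
    [1.0, 2.0],
    [2.0, 2.0],
    [1.0, 4.0],
    [2.0, 4.0],
])

# Centroid = component-wise mean of the points in the cluster,
# i.e. mu(c) = (1/|c|) * sum of the vectors in c.
centroid = cluster_points.mean(axis=0)
print(centroid)   # [1.5 3. ]
```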

22 K-Means Example (K=2)
Pick seeds
Reassign clusters
Compute centroids
Converged!
(Animated figure in the original slides.)
00:37:55  00:40:00 00:40:30  00:40:55 00:41:20  00:42:30 00:42:42  00:43:50
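A compact, self-contained Python sketch of the loop illustrated above (pick seeds, reassign points to the nearest centroid, recompute centroids, stop when the assignments no longer change); the toy data, seed choice, and function name are illustrative rather than the lecture's reference implementation:

```python
import numpy as np

def kmeans(X, k, max_iters=100, seed=0):
    """Basic K-means: returns (assignments, centroids). X has shape (n_docs, n_dims)."""
    rng = np.random.default_rng(seed)
    # Pick seeds: k distinct points chosen at random as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    assignments = None
    for _ in range(max_iters):
        # Reassign clusters: each point goes to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_assignments = dists.argmin(axis=1)
        if assignments is not None and np.array_equal(new_assignments, assignments):
            break                                  # doc partition unchanged: converged
        assignments = new_assignments
        # Compute centroids: mean of the points currently assigned to each cluster.
        for j in range(k):
            members = X[assignments == j]
            if len(members) > 0:                   # keep the old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return assignments, centroids

# Toy data: two visually obvious groups of 2-D points.
X = np.array([[1.0, 1.0], [1.5, 2.0], [1.0, 2.0],
              [8.0, 8.0], [8.5, 9.0], [9.0, 8.0]])
labels, centers = kmeans(X, k=2)
print(labels)    # e.g. [0 0 0 1 1 1]
print(centers)   # centroids of the two groups
```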

23 Termination conditions
Several possibilities, e.g.,
  A fixed number of iterations.
  Doc partition unchanged.
  Centroid positions don’t change.
Does this mean that the docs in a cluster are unchanged?
00:44:50  00:46:05 (a fixed) 00:46:00  00:46:20 (doc & centroid)

24 Convergence
Why should the K-means algorithm ever reach a fixed point?
  A state in which clusters don’t change.
K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm.
  EM is known to converge.
  The number of iterations could be large, but in practice it usually isn’t.
00:47:25  00:47:50 (why should) 00:47:55  00:48:15 (k-mean)

25 Convergence of K-Means
Residual Sum of Squares (RSS), a goodness measure of a cluster, is the sum of squared distances from the cluster centroid:
  RSSj = Σi |di – cj|²   (sum over all di in cluster j)
  RSS = Σj RSSj
Reassignment monotonically decreases RSS since each vector is assigned to the closest centroid.
Recomputation also monotonically decreases each RSSj because the centroid is the point that minimizes the sum of squared distances to the cluster's members.
00:49:05  00:49:35 (residual & RSS)
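A short numpy sketch of the RSS definition above; assignments and centroids are of the kind returned by the K-means sketch given earlier, and the toy numbers are invented:

```python
import numpy as np

def rss(X, assignments, centroids):
    """Total residual sum of squares: sum over clusters of squared distances to the centroid."""
    total = 0.0
    for j in range(len(centroids)):
        members = X[assignments == j]
        # RSS_j = sum of |d_i - c_j|^2 over the docs d_i assigned to cluster j
        total += float(((members - centroids[j]) ** 2).sum())
    return total

# Example with two tiny clusters around (1, 1.5) and (8.5, 8).
X = np.array([[1.0, 1.0], [1.0, 2.0], [8.0, 8.0], [9.0, 8.0]])
assignments = np.array([0, 0, 1, 1])
centroids = np.array([[1.0, 1.5], [8.5, 8.0]])
print(rss(X, assignments, centroids))   # 0.25 + 0.25 + 0.25 + 0.25 = 1.0
```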

26 K-Means Example (K=2)
Pick seeds
Reassign clusters
Compute centroids
Converged!
00:51:30  00:52:20 (whole animation explained again)

27 Convergence of K-Means
Residual Sum of Squares (RSS), a goodness measure of a cluster, is the sum of squared distances from the cluster centroid:
  RSSj = Σi |di – cj|²   (sum over all di in cluster j)
  RSS = Σj RSSj
Reassignment monotonically decreases RSS since each vector is assigned to the closest centroid.
Recomputation also monotonically decreases each RSSj because the centroid is the point that minimizes the sum of squared distances to the cluster's members.
00:52:22  00:52:40 (reassignment & recomputation)

28 Seed Choice
Results can vary based on random seed selection.
Some seeds can result in a poor convergence rate, or convergence to sub-optimal clusterings.
  Select good seeds using a heuristic (e.g., doc least similar to any existing mean)
  Try out multiple starting points
  Initialize with the results of another method.
Example showing sensitivity to seeds (figure in the original slides: six points labelled A–F):
  If you start with B and E as centroids, you converge to {A,B,C} and {D,E,F}
  If you start with D and F, you converge to {A,B,D,E} and {C,F}
00:52:54  00:53:40 00:53:45  00:54:30
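One of the remedies listed above, trying multiple starting points, can be sketched by running the K-means function from several seeds and keeping the run with the lowest RSS; kmeans and rss refer to the illustrative sketches given earlier, not to lecture code:

```python
def best_of_restarts(X, k, n_restarts=10):
    """Run K-means from several random seeds and keep the lowest-RSS clustering."""
    best = None
    for seed in range(n_restarts):
        assignments, centroids = kmeans(X, k, seed=seed)
        score = rss(X, assignments, centroids)
        if best is None or score < best[0]:
            best = (score, assignments, centroids)
    return best   # (rss, assignments, centroids) of the best restart
```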

29 Hierarchical Clustering
Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents.
One approach: recursive application of a partitional clustering algorithm.
(Figure: example dendrogram in which animal splits into vertebrate (fish, reptile, amphib., mammal) and invertebrate (worm, insect, crustacean).)
00:54:35  00:54:55
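A small sketch of the bottom-up (agglomerative) variant using SciPy's hierarchy module on toy vectors; the data and the average-link choice are illustrative, not prescribed by the lecture:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy 2-D "documents": two tight groups plus one outlier.
X = np.array([[1.0, 1.0], [1.2, 1.1], [0.9, 1.3],
              [8.0, 8.0], [8.2, 7.9],
              [4.5, 0.5]])

# Bottom-up agglomerative clustering: repeatedly merge the two closest
# clusters; 'average' linkage uses the mean pairwise distance between clusters.
Z = linkage(X, method="average")

# Cut the dendrogram so that at most 3 flat clusters remain.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)   # e.g. [1 1 1 2 2 3]
```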

30 Final word and resources
In clustering, clusters are inferred from the data without human input (unsupervised learning)
However, in practice it’s a bit less clear: there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, . . .
Resources
  IIR 16 except 16.5
  IIR 17.1–17.3

