INFORMATION RETRIEVAL TECHNIQUES BY DR. ADNAN ABID

INFORMATION RETRIEVAL TECHNIQUES BY DR. ADNAN ABID Lecture # 41 Clustering

ACKNOWLEDGEMENTS
The presentation of this lecture has been taken from the following sources:
“Introduction to Information Retrieval” by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze
“Managing Gigabytes” by Ian H. Witten, Alistair Moffat, and Timothy C. Bell
“Modern Information Retrieval” by Ricardo Baeza-Yates and Berthier Ribeiro-Neto
“Web Information Retrieval” by Stefano Ceri, Alessandro Bozzon, and Marco Brambilla

Outline
What is clustering?
Improving search recall
Issues for clustering
Notion of similarity/distance
Hard vs. soft clustering
Clustering algorithms

What is clustering?
Clustering: the process of grouping a set of objects into classes of similar objects.
Documents within a cluster should be similar; documents from different clusters should be dissimilar.
Clustering is the commonest form of unsupervised learning.
Unsupervised learning = learning from raw data, as opposed to supervised learning, where a classification of the examples is given.
It is a common and important task that finds many applications in IR and other places.

A data set with clear cluster structure
How would you design an algorithm for finding the three clusters in this case?

Applications of clustering in IR
Whole corpus analysis/navigation: better user interface (search without typing)
Improving recall in search applications: better search results (like pseudo relevance feedback)
Better navigation of search results: effective “user recall” will be higher
Speeding up vector space retrieval: cluster-based retrieval gives faster search

Yahoo! hierarchy isn’t clustering, but it is the kind of output you want from clustering.
(Figure: part of the www.yahoo.com/Science hierarchy, with top-level categories such as agriculture, biology, physics, CS, and space, and subcategories such as agronomy, botany, relativity, AI, and missions.)
Final data set: Yahoo Science hierarchy, consisting of the text of web pages pointed to by Yahoo, gathered in summer 1997. 264 classes; only a sample is shown here.

Google News: automatic clustering gives an effective news presentation metaphor.

Scatter/Gather: Cutting, Karger, and Pedersen.

Applications of clustering in IR
Whole corpus analysis/navigation: better user interface (search without typing)
Improving recall in search applications: better search results (like pseudo relevance feedback)
Better navigation of search results: effective “user recall” will be higher
Speeding up vector space retrieval: cluster-based retrieval gives faster search

For improving search recall
Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs.
Therefore, to improve search recall:
Cluster the docs in the corpus a priori.
When a query matches a doc D, also return other docs in the cluster containing D.
Hope if we do this: the query “car” will also return docs containing “automobile”, because clustering grouped together docs containing “car” with those containing “automobile”. (A minimal sketch of this idea follows.)
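A minimal sketch of this idea in Python, assuming the corpus has already been clustered offline; the lookup tables doc_to_cluster and cluster_to_docs and the function expand_with_clusters are hypothetical names used for illustration, not part of the lecture or of any library.

```python
def expand_with_clusters(hits, doc_to_cluster, cluster_to_docs):
    """Augment a result list with the other members of each hit's cluster."""
    expanded = list(hits)
    seen = set(hits)
    for doc_id in hits:
        for neighbour in cluster_to_docs[doc_to_cluster[doc_id]]:
            if neighbour not in seen:      # e.g. an "automobile" doc pulled in
                seen.add(neighbour)        # by a query that only matched "car"
                expanded.append(neighbour)
    return expanded

# Hypothetical toy data: docs 1 and 2 were clustered together, doc 3 separately.
doc_to_cluster = {1: "c0", 2: "c0", 3: "c1"}
cluster_to_docs = {"c0": [1, 2], "c1": [3]}
print(expand_with_clusters([1], doc_to_cluster, cluster_to_docs))   # [1, 2]
```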

Applications of clustering in IR
Whole corpus analysis/navigation: better user interface (search without typing)
Improving recall in search applications: better search results (like pseudo relevance feedback)
Better navigation of search results: effective “user recall” will be higher
Speeding up vector space retrieval: cluster-based retrieval gives faster search

yippy.com – grouping search results.

Applications of clustering in IR
Whole corpus analysis/navigation: better user interface (search without typing)
Improving recall in search applications: better search results (like pseudo relevance feedback)
Better navigation of search results: effective “user recall” will be higher
Speeding up vector space retrieval: cluster-based retrieval gives faster search

Visualization (figure): a query point shown together with cluster leaders and their followers.

Issues for clustering
Representation for clustering:
Document representation: vector space? Normalization?
Need a notion of similarity/distance.
How many clusters?
Fixed a priori? Completely data driven?
Avoid “trivial” clusters, too large or too small. If a cluster is too large, then for navigation purposes you have wasted an extra user click without whittling down the set of documents much.

Notion of similarity/distance
Ideal: semantic similarity.
Practical: term-statistical similarity (docs as vectors), e.g. cosine similarity.
For many algorithms it is easier to think in terms of a distance (rather than a similarity) between docs.
We will mostly speak of Euclidean distance, but real implementations use cosine similarity (see the sketch below).
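A small illustration of both notions with NumPy; the toy vectors and function names are assumptions, not the lecture's own example.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two term-weight vectors (e.g. tf-idf)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a, b):
    """Straight-line distance between two term-weight vectors."""
    return float(np.linalg.norm(a - b))

d1 = np.array([0.0, 2.0, 1.0])
d2 = np.array([0.0, 1.0, 2.0])
print(cosine_similarity(d1, d2))    # 0.8
print(euclidean_distance(d1, d2))   # ~1.414
```

For length-normalized vectors the two notions agree, since |a − b|² = 2 − 2·cos(a, b); this is why one can reason in terms of Euclidean distance while implementing with cosine similarity.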

Hard vs. soft clustering
Hard clustering: each document belongs to exactly one cluster. More common and easier to do.
Soft clustering: a document can belong to more than one cluster. Makes more sense for applications like creating browsable hierarchies.
You may want to put a pair of sneakers in two clusters: (i) sports apparel and (ii) shoes. You can only do that with a soft clustering approach.

Clustering Algorithms
Flat algorithms: usually start with a random (partial) partitioning and refine it iteratively, e.g. K-means clustering (and model-based clustering).
Hierarchical algorithms: bottom-up, agglomerative (or top-down, divisive).

Partitioning Algorithms
Partitioning method: construct a partition of n documents into a set of K clusters.
Given: a set of documents and the number K.
Find: a partition into K clusters that optimizes the chosen partitioning criterion.
Globally optimal: exhaustively enumerate all partitions; intractable for many objective functions.
Effective heuristic methods: the K-means and K-medoids algorithms.

K-Means
Assumes documents are real-valued vectors.
Clusters based on centroids (aka the center of gravity or mean) of the points in a cluster c:
centroid(c) = (1/|c|) Σ_{x ∈ c} x
Reassignment of instances to clusters is based on distance to the current cluster centroids. (One can equivalently phrase it in terms of similarities.)
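A minimal NumPy sketch of this loop (random seeds, reassignment by Euclidean distance, centroid recomputation, stop when the partition is stable); the function kmeans and its parameters are illustrative, not the lecture's or a library's implementation.

```python
import numpy as np

def kmeans(docs, k, max_iter=100, seed=0):
    """Cluster the row vectors of `docs` (n_docs x n_terms) into k clusters."""
    rng = np.random.default_rng(seed)
    # Pick k documents at random as the initial centroids (seeds).
    centroids = docs[rng.choice(len(docs), size=k, replace=False)]
    assignment = np.zeros(len(docs), dtype=int)

    for _ in range(max_iter):
        # Reassignment: each doc moves to its nearest current centroid.
        dists = np.linalg.norm(docs[:, None, :] - centroids[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)

        # Recomputation: each centroid becomes the mean of its cluster.
        centroids = np.array([
            docs[new_assignment == j].mean(axis=0)
            if np.any(new_assignment == j) else centroids[j]   # keep empty clusters in place
            for j in range(k)
        ])

        if np.array_equal(new_assignment, assignment):   # partition unchanged
            break
        assignment = new_assignment

    return assignment, centroids

# Toy usage: two obvious groups in 2-D.
X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9]])
labels, centers = kmeans(X, k=2)
```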

K Means Example (K = 2)
Animation on the slide: pick seeds → reassign clusters → compute centroids → reassign → recompute → … → converged!

Termination conditions
Several possibilities, e.g.:
A fixed number of iterations.
Doc partition unchanged.
Centroid positions don’t change.
Does this mean that the docs in a cluster are unchanged? (The checks are sketched below.)
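These checks can be written down mechanically; a sketch with illustrative names, where the tolerance tol for “centroids don’t change” is an added assumption:

```python
import numpy as np

def should_stop(iteration, max_iter, old_assign, new_assign,
                old_centroids, new_centroids, tol=1e-9):
    """True if any one of the three termination conditions holds."""
    fixed_iterations    = iteration >= max_iter                                  # condition 1
    partition_unchanged = np.array_equal(old_assign, new_assign)                 # condition 2
    centroids_unchanged = np.allclose(old_centroids, new_centroids, atol=tol)    # condition 3
    return fixed_iterations or partition_unchanged or centroids_unchanged
```

Note that if the centroids stop moving, the next reassignment step (ties aside) reproduces the same partition, so the docs in each cluster are then unchanged as well.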

Convergence
Why should the K-means algorithm ever reach a fixed point, i.e. a state in which clusters don’t change?
K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm.
EM is known to converge.
The number of iterations could be large, but in practice it usually isn’t.

Convergence of K-Means
Residual Sum of Squares (RSS), a goodness measure of a cluster, is the sum of squared distances from the cluster centroid:
RSS_j = Σ_{d_i ∈ cluster j} |d_i − c_j|²
RSS = Σ_j RSS_j
Reassignment monotonically decreases RSS, since each vector is assigned to the closest centroid.
Recomputation also monotonically decreases each RSS_j because …

K Means Example (K = 2), revisited
Animation on the slide stepped through again: pick seeds → reassign clusters → compute centroids → … → converged!

Convergence of K-Means
Residual Sum of Squares (RSS), a goodness measure of a cluster, is the sum of squared distances from the cluster centroid:
RSS_j = Σ_{d_i ∈ cluster j} |d_i − c_j|²
RSS = Σ_j RSS_j
Reassignment monotonically decreases RSS, since each vector is assigned to the closest centroid.
Recomputation also monotonically decreases each RSS_j, because the mean of a cluster is the point that minimizes the sum of squared distances to the cluster’s vectors. (A sketch of RSS follows.)
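A sketch of RSS as defined above, reusing the arrays produced by the kmeans sketch earlier; the function name is illustrative.

```python
import numpy as np

def rss(docs, assignment, centroids):
    """Residual sum of squares: squared distance of every doc to its assigned centroid."""
    diffs = docs - centroids[assignment]   # row i holds d_i minus the centroid of its cluster
    return float((diffs ** 2).sum())
```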

Seed Choice
Results can vary based on random seed selection.
Some seeds can result in a poor convergence rate, or in convergence to sub-optimal clusterings.
Select good seeds using a heuristic (e.g., a doc least similar to any existing mean).
Try out multiple starting points (see the sketch below).
Initialize with the results of another method.
Example showing sensitivity to seeds: with the six points A–F in the slide’s figure, if you start with B and E as centroids you converge to {A,B,C} and {D,E,F}; if you start with D and F you converge to {A,B,D,E} and {C,F}.
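One cheap safeguard mentioned above is to try several starting points and keep the best clustering by RSS; a sketch that reuses the hypothetical kmeans and rss functions from the earlier sketches:

```python
def kmeans_restarts(docs, k, n_starts=10):
    """Run K-means from several random seeds and keep the lowest-RSS result."""
    best = None
    for seed in range(n_starts):
        assignment, centroids = kmeans(docs, k, seed=seed)
        score = rss(docs, assignment, centroids)
        if best is None or score < best[0]:
            best = (score, assignment, centroids)
    return best[1], best[2]
```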

Hierarchical Clustering
Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents.
(Figure: a taxonomy rooted at animal, splitting into vertebrate — fish, reptile, amphibian, mammal — and invertebrate — worm, insect, crustacean.)
One approach: recursive application of a partitional clustering algorithm. (A bottom-up sketch follows.)
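For the bottom-up (agglomerative) variant, a sketch using SciPy's hierarchical clustering on toy 2-D points; the data, the choice of average linkage, and the cut into three flat clusters are assumptions made for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Five toy points; average linkage repeatedly merges the two clusters with the
# smallest mean inter-point distance, producing a dendrogram encoded in Z.
X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9], [9.0, 0.5]])
Z = linkage(X, method="average", metric="euclidean")
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 flat clusters
print(labels)
```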

Final word and resources
In clustering, clusters are inferred from the data without human input (unsupervised learning).
However, in practice it is a bit less clear-cut: there are many ways of influencing the outcome of clustering, such as the number of clusters, the similarity measure, and the representation of documents.
Resources: IIR Chapter 16 (except 16.5); IIR 17.1–17.3.