Similarity Measures for Text Document Clustering


Similarity Measures for Text Document Clustering
Anna Huang, Department of Computer Science, The University of Waikato, Hamilton, New Zealand
Presented by Farah Kamw

1. INTRODUCTION Clustering is a useful technique that organizes a large quantity of unordered text documents into a small number of meaningful and coherent clusters. A wide variety of distance functions and similarity measures have been used for clustering, such as squared Euclidean distance and cosine similarity. Text document clustering groups similar documents into coherent clusters, while documents that are different are separated into different clusters.

1. INTRODUCTION (cont.) In this paper, the authors compare and analyze the effectiveness of these measures in partitional clustering of text document datasets. Their experiments use the standard K-means algorithm, and they report results on seven text document datasets and five distance/similarity measures that have been most commonly used in text clustering. They use two measures to evaluate the overall quality of clustering solutions: purity and entropy, both commonly used in clustering evaluation.

2. DOCUMENT REPRESENTATION They use the frequency of each term as its weight, which means terms that appear more frequently are more important and descriptive for the document. Let D = {d1, . . . , dn} be a set of documents and T = {t1, . . . , tm} the set of distinct terms occurring in D. A document is then represented as an m-dimensional vector td. Let tf(d, t) denote the frequency of term t ∈ T in document d ∈ D. Then the vector representation of a document d is td = (tf(d, t1), . . . , tf(d, tm)).
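To make the representation concrete, here is a minimal Python sketch (the function name build_tf_vector is ours, not from the paper) that builds the tf vector for a tokenized document over a fixed term set T:

```python
from collections import Counter

def build_tf_vector(document_tokens, term_set):
    """Return the tf vector (tf(d, t1), ..., tf(d, tm)) for one document."""
    counts = Counter(document_tokens)
    return [counts[t] for t in term_set]

# Example: a toy document over the term set T = (apple, banana, cherry)
T = ["apple", "banana", "cherry"]
d1 = ["apple", "banana", "apple"]
print(build_tf_vector(d1, T))  # [2, 1, 0]
```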

2. DOCUMENT REPRESENTATION (cont.) Terms are basically words, but several standard transformations are applied to the basic term vector representation. First, stop words are removed. These are words that are non-descriptive for the topic of a document, such as a, and, are and do. They used the stop word list of the Weka machine learning workbench, which contains 527 stop words. Second, words were stemmed using Porter's suffix-stripping algorithm, so that words with different endings are mapped onto a single stem; for example, production, produce, produces and product are all mapped to the same stem.
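A small sketch of these two preprocessing steps, assuming the NLTK library for Porter's stemmer; the stop word list below is a tiny stand-in for the full 527-word list mentioned above:

```python
from nltk.stem.porter import PorterStemmer  # requires: pip install nltk

STOP_WORDS = {"a", "and", "are", "do", "the", "is"}  # stand-in for the full list

def preprocess(tokens):
    """Remove stop words, then stem each remaining token with Porter's algorithm."""
    stemmer = PorterStemmer()
    return [stemmer.stem(t) for t in tokens if t.lower() not in STOP_WORDS]

print(preprocess(["Production", "and", "produce", "are", "products"]))
```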

2. DOCUMENT REPRESENTATION (cont.) Third, words that appear with less than a given threshold frequency were discarded; the top 2000 words ranked by their weights are then selected and used in the experiments. In addition, terms that appear frequently in a small number of documents but rarely in the other documents tend to be more important, and therefore more useful for finding similar documents. So the basic term frequencies tf(d, t) are transformed into tf-idf (term frequency times inverse document frequency) weights: tfidf(d, t) = tf(d, t) × log(|D| / df(t)), where df(t) is the number of documents in which term t appears.
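A one-function sketch of this weighting (the function name is illustrative, and the natural logarithm is an assumption on our part; the slide does not fix the log base):

```python
import math

def tfidf(tf_dt, n_docs, df_t):
    """tfidf(d, t) = tf(d, t) * log(|D| / df(t))."""
    return tf_dt * math.log(n_docs / df_t)

# A term appearing twice in one document and in 3 of 1000 documents overall
# gets a much higher weight than a term appearing in most documents:
print(tfidf(tf_dt=2, n_docs=1000, df_t=3))
```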

3. SIMILARITY MEASURES In general, similarity/distance measures map the distance or similarity between the symbolic descriptions of two objects into a single numeric value, which depends on two factors: the properties of the two objects and the measure itself.

3.1 Metric To qualify as a metric, a measure d must satisfy the following four conditions. Let x, y and z be any objects in a set and d(x, y) be the distance between x and y.
1. The distance between any two points must be non-negative: d(x, y) >= 0.
2. The distance between two objects must be zero if and only if the two objects are identical: d(x, y) = 0 if and only if x = y.
3. The distance must be symmetric: the distance from x to y is the same as the distance from y to x, that is, d(x, y) = d(y, x).
4. The measure must satisfy the triangle inequality: d(x, z) <= d(x, y) + d(y, z).

3.2 Euclidean Distance Euclidean distance is widely used in clustering problems, including clustering text. It satisfies all four conditions above and therefore is a true metric. Given two documents da and db represented by their term vectors ta and tb respectively, the Euclidean distance of the two documents is defined as DE(ta, tb) = sqrt( Σt (wt,a - wt,b)^2 ), where the sum runs over the term set T = {t1, . . . , tm} and wt,a is the weight of term t in document da (here the tfidf value).
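A direct Python rendering of this definition (a sketch; documents are plain lists of term weights):

```python
import math

def euclidean_distance(ta, tb):
    """D_E(ta, tb) = sqrt(sum over terms of (w_t,a - w_t,b)^2)."""
    return math.sqrt(sum((wa - wb) ** 2 for wa, wb in zip(ta, tb)))

print(euclidean_distance([1.0, 0.0, 2.0], [0.0, 1.0, 2.0]))  # sqrt(2)
```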

3.3 Cosine Similarity When documents are represented as term vectors, the similarity of two documents corresponds to the correlation between the vectors. This is quantified as the cosine of the angle between the vectors: SIMC(ta, tb) = (ta · tb) / (|ta| × |tb|), where ta and tb are m-dimensional vectors over the term set T = {t1, . . . , tm}. Each dimension represents a term with its weight in the document, which is non-negative. As a result, the cosine similarity is non-negative and bounded in [0, 1].
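A minimal sketch of the same formula (assumes neither vector is all zeros):

```python
import math

def cosine_similarity(ta, tb):
    """SIM_C(ta, tb) = (ta . tb) / (|ta| * |tb|)."""
    dot = sum(wa * wb for wa, wb in zip(ta, tb))
    norm_a = math.sqrt(sum(w * w for w in ta))
    norm_b = math.sqrt(sum(w * w for w in tb))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1, 1, 0], [1, 0, 0]))  # ~0.707
```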

3.4 Jaccard Coefficient The Jaccard coefficient measures similarity as the intersection divided by the union of the objects. For term vectors, the extended form is SIMJ(ta, tb) = (ta · tb) / (|ta|^2 + |tb|^2 - ta · tb). The Jaccard coefficient is a similarity measure and ranges between 0 and 1, where 1 means the two objects are identical and 0 means they are completely different. The corresponding distance measure is DJ = 1 - SIMJ.
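A sketch of the extended Jaccard form given above:

```python
def jaccard_similarity(ta, tb):
    """Extended Jaccard: (ta . tb) / (|ta|^2 + |tb|^2 - ta . tb)."""
    dot = sum(wa * wb for wa, wb in zip(ta, tb))
    sq_a = sum(w * w for w in ta)
    sq_b = sum(w * w for w in tb)
    return dot / (sq_a + sq_b - dot)

print(jaccard_similarity([1, 1, 0], [1, 0, 0]))  # 0.5
```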

3.5 Pearson Correlation Coefficient Given the term set T = {t1, . . . , tm}, a commonly used form is SIMP(ta, tb) = (m Σt wt,a wt,b - TFa TFb) / sqrt( (m Σt wt,a^2 - TFa^2)(m Σt wt,b^2 - TFb^2) ), where TFa = Σt wt,a and TFb = Σt wt,b. This is also a similarity measure; however, unlike the other measures, it ranges from -1 to +1, and it is 1 when ta = tb. In subsequent experiments they use the corresponding distance measure, DP = 1 - SIMP when SIMP >= 0, and DP = |SIMP| when SIMP < 0.
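A sketch computing the standard Pearson correlation of two weight vectors plus the distance conversion described above (equivalent to the closed form, just written via means and variances):

```python
import math

def pearson_correlation(ta, tb):
    """Pearson correlation between two term-weight vectors."""
    m = len(ta)
    mean_a, mean_b = sum(ta) / m, sum(tb) / m
    cov = sum((wa - mean_a) * (wb - mean_b) for wa, wb in zip(ta, tb))
    var_a = sum((wa - mean_a) ** 2 for wa in ta)
    var_b = sum((wb - mean_b) ** 2 for wb in tb)
    return cov / math.sqrt(var_a * var_b)

def pearson_distance(sim_p):
    """D_P = 1 - SIM_P if SIM_P >= 0, else |SIM_P|."""
    return 1 - sim_p if sim_p >= 0 else -sim_p

print(pearson_correlation([1, 2, 3], [1, 2, 3]))  # 1.0
```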

3.6 Averaged KL Divergence In information-theoretic clustering, a document is considered as a probability distribution of terms. The similarity of two documents is measured as the distance between the two corresponding probability distributions. The Kullback-Leibler divergence (KL divergence), also called relative entropy, is a widely applied measure for evaluating the difference between two probability distributions. Because KL divergence is not symmetric, an averaged version is used, which averages the divergences of both distributions against their mixture.
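A minimal sketch of an averaged KL divergence with equal mixing weights; under these weights it coincides with the Jensen-Shannon divergence, whereas the paper weights each side by its relative term mass, so treat this as an illustration of the idea rather than the paper's exact formula:

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_t p(t) * log(p(t) / q(t)); terms with p(t) = 0 contribute 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def averaged_kl(p, q):
    """Average both directed divergences against the mean distribution."""
    mean = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, mean) + 0.5 * kl_divergence(q, mean)

# Inputs are term distributions that sum to 1; smooth zero entries in practice.
print(averaged_kl([0.5, 0.3, 0.2], [0.2, 0.3, 0.5]))
```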

4. K-Means Clustering Algorithm This is an iterative partitional clustering process that aims to minimize the least squares error criterion. The K-means algorithm works as follows. Given a set of data objects D and a pre-specified number of clusters k, k data objects are randomly selected to initialize k clusters, each one being the centroid of a cluster. The remaining objects are then assigned to the cluster represented by the nearest or most similar centroid. Next, new centroids are re-computed for each cluster, and in turn all documents are re-assigned based on the new centroids. This step iterates until all data objects remain in the same cluster after an update of centroids.
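A compact, unoptimized K-means sketch following the procedure above; random.sample seeding and exact-equality convergence are simplifications, and any of the distance functions from Section 3 can be plugged in:

```python
import random

def kmeans(vectors, k, distance, max_iter=100):
    """Plain K-means: random seeds, assign each vector to the nearest
    centroid, recompute centroids, repeat until assignments stabilize."""
    centroids = random.sample(vectors, k)
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda i: distance(v, centroids[i]))
            clusters[nearest].append(v)
        new_centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new_centroids == centroids:  # no centroid moved: converged
            break
        centroids = new_centroids
    return clusters, centroids

# Toy usage with squared Euclidean distance as the dissimilarity:
pts = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]]
clusters, centroids = kmeans(pts, 2, lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)))
print(centroids)
```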

5. EXPERIMENTS 5.1 Datasets These datasets differ in terms of document type, number of categories, average category size, and subject matter. To ensure diversity, the datasets come from different sources: some contain newspaper articles, some contain newsgroup posts, some are web pages, and the remaining are academic papers.

5.2 Evaluation The quality of a clustering result was evaluated using two evaluation measures: purity and entropy. The purity measure evaluates the coherence of a cluster, that is, the degree to which a cluster contains documents from a single category. Given a particular cluster Ci of size ni, the purity of Ci is formally defined as P(Ci) = (1/ni) max_h(ni^h), where ni^h is the number of documents from cluster Ci assigned to category h, so max_h(ni^h) is the size of the dominant category in Ci. In general, the higher the purity value, the better the quality of the cluster. For an ideal cluster, which only contains documents from a single category, the purity value is 1. For each dataset, a clustering result was obtained from the K-means algorithm, with the number of clusters set equal to the number of pre-assigned categories in the dataset.
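A small sketch of purity for one cluster, given the gold category labels of its member documents (an illustrative helper, not the paper's code):

```python
from collections import Counter

def purity(cluster_labels):
    """P(Ci) = (1/ni) * max_h(ni^h): fraction belonging to the dominant category."""
    counts = Counter(cluster_labels)
    return max(counts.values()) / len(cluster_labels)

print(purity(["sports", "sports", "politics", "sports"]))  # 0.75
```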

5.2 Evaluation (cont.) The entropy measure evaluates the distribution of categories in a given cluster. The entropy of a cluster Ci with size ni is defined as E(Ci) = -(1 / log c) Σh (ni^h / ni) log(ni^h / ni), where c is the total number of categories in the dataset and ni^h is the number of documents from category h in Ci. For an ideal cluster with documents from only a single category, the entropy of the cluster is 0. In general, the smaller the entropy value, the better the quality of the cluster.
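The matching sketch for the normalized entropy of one cluster (again an illustrative helper; the 1/log c factor normalizes the value into [0, 1]):

```python
import math
from collections import Counter

def cluster_entropy(labels, n_categories):
    """E(Ci) = -(1/log c) * sum_h (ni^h / ni) * log(ni^h / ni)."""
    n = len(labels)
    counts = Counter(labels)
    raw = -sum((c / n) * math.log(c / n) for c in counts.values())
    return raw / math.log(n_categories)

print(cluster_entropy(["sports", "sports", "politics", "sports"], n_categories=2))  # ~0.81
```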

Thank You