Data Mining Techniques: Classification and Clustering

Data Mining Techniques: Classification and Clustering Chein-Shung Hwang

Today
Classification
  Basic Concepts in Classification
  Decision Trees
  ID3/C4.5 Algorithm
  Bayesian Classification
  Tools and Software for Classification
Clustering
  Distance Measures
  Graph-based Techniques
  K-Means Clustering
  Tools and Software for Clustering

What Is Classification? The goal of data classification is to organize and categorize data into distinct classes. A model is first created based on the data distribution; the model is then used to classify new data, i.e., to predict the class of each new record. Classification is prediction for discrete and nominal values: with classification, I can predict which bucket to put the ball in, but I cannot predict the weight of the ball.

Prediction, Clustering, Classification What is prediction? The goal of prediction is to forecast or deduce the value of an attribute based on the values of other attributes; as with classification, a model is first created from the data distribution and then used to predict future or unknown values. Supervised vs. unsupervised classification: supervised classification (classification proper) assumes we know the class labels and the number of classes; unsupervised classification (clustering) assumes we do not know the class labels and may not even know the number of classes.

Classification: 3-Step Process
1. Model construction (learning): each record (instance) is assumed to belong to a predefined class, as determined by one of the attributes, called the class label. The set of all records used to construct the model is called the training set. The model is usually represented as classification rules (IF-THEN statements) or decision trees.
2. Model evaluation (accuracy): estimate the accuracy rate of the model on a test set by comparing the known label of each test sample with the label assigned by the model. The accuracy rate is the percentage of test set samples correctly classified by the model. The test set must be independent of the training set; otherwise the estimate will be overly optimistic (over-fitting).
3. Model use (classification): the model is used to classify unseen instances (assigning class labels), i.e., to predict the actual value of the class attribute.

Model Construction

Model Evaluation

Model Use: Classification

Classification Methods
Decision Tree Induction
Neural Networks
Bayesian Classification
Association-Based Classification
K-Nearest Neighbor
Case-Based Reasoning
Genetic Algorithms
Fuzzy Sets
Many more

Decision Trees A decision tree is a flow-chart-like tree structure: an internal node denotes a test on an attribute (feature), a branch represents an outcome of the test (all records in a branch have the same value for the tested attribute), and a leaf node represents a class label or class label distribution. Example tree for the golf data (P = play, N = no play):
outlook
  sunny -> humidity
    high -> N
    normal -> P
  overcast -> P
  rain -> windy
    true -> N
    false -> P

Decision Trees Example: “is it a good day to play golf?” A set of attributes and their possible values:
outlook: sunny, overcast, rain
temperature: cool, mild, hot
humidity: high, normal
windy: true, false
A particular instance in the training set might be <overcast, hot, normal, false>: play. In this case the target class is a binary attribute, so each instance represents a positive or a negative example.

Using Decision Trees for Classification Examples can be classified as follows:
1. look at the example's value for the attribute tested at the current node
2. move along the branch labeled with this value
3. if you reach a leaf, return the label of the leaf
4. otherwise, repeat from step 1 at the node you reached
Using the golf tree shown above to decide whether to play, a new instance <rain, hot, normal, true> will be classified as “no play” (outlook = rain, then windy = true, leads to the leaf N).

Decision Trees and Decision Rules If attributes are continuous, internal nodes may test the attribute value against a threshold (e.g., humidity > 75% vs. humidity <= 75%, or wind > 20 vs. wind <= 20). Each path in the tree represents a decision rule:
Rule 1: If (outlook = “sunny”) AND (humidity <= 75%) Then (play = “yes”)
Rule 2: If (outlook = “rain”) AND (wind > 20) Then (play = “no”)
Rule 3: If (outlook = “overcast”) Then (play = “yes”)
. . .

Top-Down Decision Tree Generation The basic approach usually consists of two phases:
Tree construction: at the start, all the training examples are at the root; examples are then partitioned recursively based on selected attributes.
Tree pruning: remove tree branches that may reflect noise in the training data and lead to errors when classifying test data, thereby improving classification accuracy.
Basic steps in decision tree construction: the tree starts as a single node representing all the data. If the samples are all of the same class, the node becomes a leaf labeled with that class; otherwise, select the feature that best separates the samples into individual classes and recurse on each partition. Recursion stops when the samples in a node all belong to the same class (or a leaf is labeled with the majority class), or when there are no remaining attributes on which to split.

Tree Construction Algorithm (ID3) Decision Tree Learning Method (ID3). Input: a set of examples S, a set of features F, and a target class T (T represents the type of instance we want to classify, e.g., whether “to play golf”).
1. If every element of S is a positive example of T, return “yes”; if every element of S is a negative example of T, return “no”.
2. Otherwise, choose the best feature f from F (if there are no features remaining, return failure).
3. Extend the tree from f by adding a new branch for each value of f.
4. Distribute the training examples to the leaf nodes (so each leaf node S is now the set of examples at that node, and F is the remaining set of features not yet selected), and repeat steps 1-4 for each leaf node.
Main question: how do we choose the best feature at each step? Note: the ID3 algorithm only deals with categorical attributes, but can be extended (as in C4.5) to handle continuous attributes.
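
To make the loop above concrete, here is a minimal Python sketch of ID3 for categorical attributes (not the Weka or See5 implementation): the helper names and the tiny toy dataset at the bottom are hypothetical, and tie-breaking, missing values, and pruning are all omitted.

import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels (log base 2)."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def info_gain(rows, attr, target):
    """Expected reduction in entropy from splitting rows on attr."""
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in set(r[attr] for r in rows):
        subset = [r[target] for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

def id3(rows, attrs, target):
    """Return a nested-dict decision tree: {attr: {value: subtree-or-label}}."""
    labels = [r[target] for r in rows]
    if len(set(labels)) == 1:          # all examples in one class -> leaf
        return labels[0]
    if not attrs:                      # no features left -> majority-class leaf
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, a, target))
    tree = {best: {}}
    for value in set(r[best] for r in rows):
        subset = [r for r in rows if r[best] == value]
        rest = [a for a in attrs if a != best]
        tree[best][value] = id3(subset, rest, target)
    return tree

# Hypothetical two-attribute toy data, not the full golf data from the slides.
data = [
    {"outlook": "sunny",    "windy": "false", "play": "no"},
    {"outlook": "sunny",    "windy": "true",  "play": "no"},
    {"outlook": "overcast", "windy": "false", "play": "yes"},
    {"outlook": "rain",     "windy": "false", "play": "yes"},
    {"outlook": "rain",     "windy": "true",  "play": "no"},
]
print(id3(data, ["outlook", "windy"], "play"))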

Choosing the “Best” Feature Use information gain to find the “best” (most discriminating) feature. The entropy E(I) of a set of instances I containing p positive and n negative examples is
E(I) = -(p/(p+n))·log2(p/(p+n)) - (n/(p+n))·log2(n/(p+n))
Gain(A, I) is the expected reduction in entropy due to splitting on feature (attribute) A:
Gain(A, I) = E(I) - sum_j (|Ij|/|I|)·E(Ij)
where the jth descendant Ij of I is the set of instances with value vj for A. Example: for the full golf training set S: [9+,5-], E(S) = -(9/14)·log2(9/14) - (5/14)·log2(5/14) = 0.940. Splitting S on Outlook gives the subsets overcast [4+,0-] (a pure “yes” node, since all its examples are positive), sunny [2+,3-], and rainy [3+,2-].

Decision Tree Learning - Example Splitting S: [9+,5-] (E = 0.940) on humidity gives high [3+,4-] (E = 0.985) and normal [6+,1-] (E = 0.592):
Gain(S, humidity) = .940 - (7/14)*.985 - (7/14)*.592 = .151
Splitting S on wind gives weak [6+,2-] (E = 0.811) and strong [3+,3-] (E = 1.00):
Gain(S, wind) = .940 - (8/14)*.811 - (6/14)*1.0 = .048
So classifying examples by humidity provides more information gain than by wind. In this case, however, you can verify that outlook has the largest information gain, so it will be selected as the root.
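
The two gains above can be checked with a few lines of Python; the counts [9+,5-], [3+,4-], [6+,1-], [6+,2-] and [3+,3-] are taken directly from this slide.

import math

def entropy(p, n):
    """Entropy of a node with p positive and n negative examples."""
    total = p + n
    e = 0.0
    for c in (p, n):
        if c:
            e -= (c / total) * math.log2(c / total)
    return e

S = entropy(9, 5)                                    # ~0.940
gain_humidity = S - (7/14) * entropy(3, 4) - (7/14) * entropy(6, 1)
gain_wind     = S - (8/14) * entropy(6, 2) - (6/14) * entropy(3, 3)
print(round(gain_humidity, 3), round(gain_wind, 3))  # ~0.152 and ~0.048 (the slide rounds the first to .151)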

Decision Tree Learning - Example Partially learned decision tree, with Outlook at the root over S: [9+,5-] = {D1, D2, …, D14}:
sunny: [2+,3-] {D1, D2, D8, D9, D11} -> ?
overcast: [4+,0-] {D3, D7, D12, D13} -> yes
rainy: [3+,2-] {D4, D5, D6, D10, D14} -> ?
Which attribute should be tested at the sunny branch? With Ssunny = {D1, D2, D8, D9, D11}:
Gain(Ssunny, humidity) = .970 - (3/5)*0.0 - (2/5)*0.0 = .970
Gain(Ssunny, temp) = .970 - (2/5)*0.0 - (2/5)*1.0 - (1/5)*0.0 = .570
Gain(Ssunny, wind) = .970 - (2/5)*1.0 - (3/5)*.918 = .019
so humidity is chosen.

Dealing With Continuous Variables Partition the continuous attribute into a discrete set of intervals: sort the examples according to the continuous attribute A, identify adjacent examples that differ in their target classification, and generate a set of candidate thresholds midway between those adjacent values. Problem: this may generate too many intervals. Another solution: require a minimum number M of examples of the majority class in each partition, then merge adjacent partitions with the same majority class. Example (with M = 3 and candidate thresholds 70.5 and 77.5 on the temperature attribute): the partitions on either side of 70.5 have the same majority class, so they are merged. Final mapping: temperature <= 77.5 ==> “yes”; temperature > 77.5 ==> “no”.
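
A minimal sketch of the midpoint-threshold step, assuming a small hypothetical list of (temperature, class) pairs rather than the actual weather data behind the 70.5 and 77.5 thresholds above.

def candidate_thresholds(examples):
    """examples: list of (numeric_value, class_label).
    Return midpoints between adjacent values whose class labels differ."""
    ordered = sorted(examples)                    # sort by the numeric attribute
    thresholds = []
    for (v1, c1), (v2, c2) in zip(ordered, ordered[1:]):
        if c1 != c2 and v1 != v2:
            thresholds.append((v1 + v2) / 2)      # midway between the two values
    return thresholds

# Hypothetical data, not the actual weather data used on the slide.
sample = [(64, "yes"), (69, "yes"), (70, "yes"), (71, "no"),
          (75, "yes"), (80, "no"), (83, "yes")]
print(candidate_thresholds(sample))               # [70.5, 73.0, 77.5, 81.5]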

Improving on Information Gain Information gain tends to favor attributes with a large number of values: more values mean smaller, purer partitions, hence lower weighted entropy and larger gain. Quinlan suggests using the gain ratio instead, which penalizes attributes with many values:
GainRatio(A, S) = Gain(A, S) / SplitInfo(A, S), where SplitInfo(A, S) = -sum_i (|Si|/|S|)·log2(|Si|/|S|)
Example for “outlook”, which splits S: [9+,5-] into S1: [4+,0-] (overcast), S2: [2+,3-] (sunny), and S3: [3+,2-] (rainy):
SplitInfo(outlook, S) = -(4/14)·log2(4/14) - (5/14)·log2(5/14) - (5/14)·log2(5/14) = 1.577
GainRatio(outlook, S) = 0.246 / 1.577 = 0.156
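
The corrected numbers are easy to verify with a short computation (Gain(outlook, S) = 0.246 is taken from the slide; the subset sizes 4, 5, 5 come from the outlook split).

import math

def split_info(sizes):
    """SplitInfo of a partition given the subset sizes."""
    total = sum(sizes)
    return -sum((s / total) * math.log2(s / total) for s in sizes)

si = split_info([4, 5, 5])          # outlook: overcast, sunny, rainy subsets
print(round(si, 3))                 # ~1.577
print(round(0.246 / si, 3))         # GainRatio ~0.156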

Over-fitting in Classification The generated tree may over-fit the training examples due to noise or too small a training set. Two approaches to avoid over-fitting: stop earlier (stop growing the tree before it perfectly fits the training data) or post-prune (allow the tree to over-fit, then prune it back). Approaches to determine the correct final tree size: use separate training and test sets or cross-validation; use all the data for training but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node is likely to improve performance over the entire distribution; use the Minimum Description Length (MDL) principle, halting growth of the tree when the encoding is minimized; or use rule post-pruning (C4.5), converting the tree to rules before pruning.

Pruning the Decision Tree A decision tree constructed from the training data may need to be pruned: over-fitting may result in branches or leaves based on too few examples, and pruning is the process of removing branches and subtrees that are generated due to noise, which improves classification accuracy. Subtree replacement merges a subtree into a leaf node, using a set of data different from the training data: at a tree node, if the accuracy without splitting is higher than the accuracy with splitting, replace the subtree with a leaf node labeled with the majority class. Example (a subtree that splits on color, with one “yes” leaf and one “no” leaf): suppose with the test set we find 3 red “no” examples and 1 blue “yes” example. We can replace the subtree with a single “no” leaf; after replacement there will be only 2 errors instead of 5.

Bayesian Classification A Bayesian classifier is a statistical classifier based on Bayes' theorem. It uses probabilistic learning by calculating explicit probabilities for hypotheses. A naïve Bayesian classifier, which assumes total independence between attributes, is commonly used and performs well with large data sets. The model is incremental in the sense that each training example can incrementally increase or decrease the probability that a hypothesis is correct, and prior knowledge can be combined with observed data. Given a data sample X with an unknown class label, let H be the hypothesis that X belongs to a specific class C. The conditional probability of H given X follows Bayes' theorem:
Pr(H|X) = Pr(X|H)·Pr(H) / Pr(X)
Practical difficulty: this requires initial knowledge of many probabilities and can carry significant computational cost.

Naïve Bayesian Classifier Suppose we have m classes C1, C2, …, Cm. Given an unknown sample X = (x1, x2, …, xn), the classifier predicts that X belongs to the class Ci with the highest conditional probability Pr(Ci|X). Maximizing Pr(X|Ci)·Pr(Ci) / Pr(X) is equivalent to maximizing Pr(X|Ci)·Pr(Ci). Note that Pr(Ci) = si / s (the fraction of training samples in class Ci), and the naïve assumption of class conditional independence gives
Pr(X|Ci) = Pr(x1|Ci) · Pr(x2|Ci) · … · Pr(xn|Ci)
which greatly reduces the computation cost: we only need to count the class distribution of each attribute value.

Naïve Bayesian Classifier - Example Given a training set (the golf data, not reproduced here), we can compute the required probabilities. For the new instance X = <sunny, mild, high, true>:
Pr(X | “no”)·Pr(“no”) = (3/5 · 2/5 · 4/5 · 3/5) · 5/14 ≈ 0.04
Pr(X | “yes”)·Pr(“yes”) = (2/9 · 4/9 · 3/9 · 3/9) · 9/14 ≈ 0.007
so X is classified as “no”.
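
A short sketch of the same computation in Python; the priors and conditional probabilities are copied from the example above rather than estimated from the (not shown) training table.

# Class priors and per-attribute conditional probabilities for
# X = <outlook=sunny, temp=mild, humidity=high, windy=true>,
# copied from the worked example above.
prior = {"yes": 9/14, "no": 5/14}
cond = {
    "yes": [2/9, 4/9, 3/9, 3/9],   # Pr(sunny|yes), Pr(mild|yes), Pr(high|yes), Pr(true|yes)
    "no":  [3/5, 2/5, 4/5, 3/5],
}

scores = {}
for label in prior:
    score = prior[label]
    for p in cond[label]:
        score *= p                  # naive assumption: attributes independent given the class
    scores[label] = score

print(scores)                       # {'yes': ~0.0071, 'no': ~0.0411}
print(max(scores, key=scores.get))  # 'no'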

Classification Example - Bank Data We want to determine likely responders to a direct mail campaign for a new product, a "Personal Equity Plan" (PEP). The training data consist of records of how previous customers responded to the campaign and whether they bought the product; in this case the target class is “pep”, with binary values. We want to build a model and apply it to new data (a customer list) in which the value of the class attribute is not available.

Data Preparation Several steps are needed to prepare the data for Weka and for See5. First, open the training data in Excel, remove the “id” column, and save the result as a comma-delimited file (e.g., “bank.csv”). Do the same with the new customer data, but also add a new column called “pep” as the last column; the value of this column for each record should be “?”.
Weka: convert the data to ARFF format; the attribute specification and the data are in the same file, and the data portion is just the comma-delimited data file without the label row.
See5/C5: create a “name” file and a “data” file; the “name” file contains the attribute specification, the “data” file is the same comma-delimited data as above, and the first line of the “name” file must be the name(s) of the target class(es) - in this case “pep”.

Data File Format for Weka

Training Data:

@relation 'train-bank-data'
@attribute 'age' real
@attribute 'sex' {'MALE','FEMALE'}
@attribute 'region' {'INNER_CITY','RURAL','TOWN','SUBURBAN'}
@attribute 'income' real
@attribute 'married' {'YES','NO'}
@attribute 'children' real
@attribute 'car' {'YES','NO'}
@attribute 'save_act' {'YES','NO'}
@attribute 'current_act' {'YES','NO'}
@attribute 'mortgage' {'YES','NO'}
@attribute 'pep' {'YES','NO'}
@data
48,FEMALE,INNER_CITY,17546,NO,1,NO,NO,NO,NO,YES
40,MALE,TOWN,30085.1,YES,3,YES,NO,YES,YES,NO
. . .

New Cases:

@relation 'new-bank-data'
@attribute 'age' real
@attribute 'region' {'INNER_CITY','RURAL','TOWN','SUBURBAN'}
. . .
@attribute 'pep' {'YES','NO'}
@data
23,MALE,INNER_CITY,18766.9,YES,0,YES,YES,NO,YES,?
30,MALE,RURAL,9915.67,NO,1,NO,YES,NO,YES,?

Data File Format for See5/C5

Name file for Training Data:

pep.
age: continuous.
sex: MALE,FEMALE.
region: INNER_CITY,RURAL,TOWN,SUBURBAN.
income: continuous.
married: YES,NO.
children: continuous.
car: YES,NO.
save_act: YES,NO.
current_act: YES,NO.
mortgage: YES,NO.
pep: YES,NO.

Data file for Training Data:

48,FEMALE,INNER_CITY,17546,NO,1,NO,NO,NO,NO,YES
40,MALE,TOWN,30085.1,YES,3,YES,NO,YES,YES,NO
51,FEMALE,INNER_CITY,16575.4,YES,0,YES,YES,YES,NO,NO
. . .

Note: no “name” file is necessary for new cases, but the file must use the same stem with the suffix “.cases”:

23,MALE,INNER_CITY,18766.9,YES,0,YES,YES,NO,YES,?
30,MALE,RURAL,9915.67,NO,1,NO,YES,NO,YES,?
45,FEMALE,RURAL,21881.6,NO,0,YES,YES,YES,NO,?
. . .

C4.5 Implementation in Weka To build a model (decision tree), use the classifiers.j48.J48 class from the command line or from the simple Java-based command line interface. Command for building the model (additional parameters can be specified for pruning, cross-validation, etc.):

> java weka.classifiers.j48.J48 -t <file-path-name>.arff -d <file-path-name>.model

Decision Tree Output (pruned):

children <= 2
|   children <= 0
|   |   married = YES
|   |   |   mortgage = YES
|   |   |   |   save_act = YES: NO (16.0/2.0)
|   |   |   |   save_act = NO: YES (9.0/1.0)
|   |   |   mortgage = NO: NO (59.0/6.0)
|   |   married = NO
|   |   |   mortgage = YES
|   |   |   |   save_act = YES: NO (12.0)
|   |   |   |   save_act = NO: YES (3.0)
|   |   |   mortgage = NO: YES (29.0/2.0)
|   children > 0
|   |   income <= 29622
|   |   |   children <= 1
|   |   |   |   income <= 12640.3: NO (5.0)
|   |   |   |   income > 12640.3
|   |   |   |   |   current_act = YES: YES (28.0/1.0)
|   |   |   |   |   current_act = NO
|   |   |   |   |   |   income <= 17390.1: NO (3.0)
|   |   |   |   |   |   income > 17390.1: YES (6.0)
|   |   |   children > 1: NO (47.0/3.0)
|   |   income > 29622: YES (48.0/2.0)
children > 2
|   income <= 43228.2: NO (30.0/2.0)
|   income > 43228.2: YES (5.0)

C4.5 Implementation in Weka The rest of the output contains statistical information about the model, including the confusion matrix, error rates, etc. The model itself is now contained in the (binary) file <file-path-name>.model.

=== Error on training data ===
Correctly Classified Instances      281      93.6667 %
Incorrectly Classified Instances     19       6.3333 %
Mean absolute error                   0.1163
Root mean squared error               0.2412
Relative absolute error              23.496  %
Root relative squared error          48.4742 %
Total Number of Instances           300

=== Confusion Matrix ===
   a   b   <-- classified as
 122  13 |   a = YES
   6 159 |   b = NO

=== Stratified cross-validation ===
Correctly Classified Instances      274      91.3333 %
Incorrectly Classified Instances     26       8.6667 %
Mean absolute error                   0.1434
Root mean squared error               0.291
Relative absolute error              28.9615 %
Root relative squared error          58.4922 %

 118  17 |   a = YES
   9 156 |   b = NO

C4.5 Implementation in Weka Applying the model to new cases:

> java weka.classifiers.j48.J48 -p -l <file-path-name>.model -T <new-data-file-path-name>.arff

0 NO 0.875 ?
1 NO 1.0 ?
2 YES 0.9310344827586207 ?
3 YES 0.9583333333333334 ?
4 NO 0.8983050847457628 ?
5 YES 0.9642857142857143 ?
6 NO 0.875 ?
7 YES 1.0 ?
8 NO 0.9333333333333333 ?
9 YES 0.9642857142857143 ?
10 NO 0.875 ?
. . .
195 YES 0.9583333333333334 ?
196 NO 0.9361702127659575 ?
197 YES 1.0 ?
198 NO 0.8983050847457628 ?
199 NO 0.9361702127659575 ?

The output gives the predicted class for each new instance along with the model's confidence in that prediction. Since we removed the “id” field, we now need to map these predictions back to the original “new case” records (e.g., using Excel).

Classification Using See5/C5 Note: See5/C5 also has options for creating the model as a set of decision rules (as opposed to a decision tree). Either model can be used to classify new, unclassified cases.

Classification Using See5/C5

Decision tree:

income > 30085.1:
:...children > 0: YES (43/5)
:   children <= 0:
:   :...married = YES: NO (19/2)
:       married = NO:
:       :...mortgage = YES: NO (3)
:           mortgage = NO: YES (5)
income <= 30085.1:
:...children > 1: NO (50/4)
    children <= 1:
    :...children <= 0:
        :...save_act = YES: NO (27/5)
        :   save_act = NO:
        :   :...married = NO: YES (6)
        :       married = YES:
        :       :...mortgage = YES: YES (6/1)
        :           mortgage = NO: NO (12/2)
        children > 0:
        :...income <= 12681.9: NO (5)
            income > 12681.9:
            :...current_act = YES: YES (19)
                current_act = NO:
                :...car = YES: NO (2)
                    car = NO: YES (3)

Class specified by attribute `pep'

** This demonstration version cannot process **
** more than 200 training or test cases.     **

Read 200 cases (11 attributes) from bank-train.data

Evaluation on training data (200 cases):

        Decision Tree
      ----------------
      Size      Errors
        13   19( 9.5%)   <<

       (a)   (b)    <-classified as
      ----  ----
        76    13    (a): class YES
         6   105    (b): class NO

Time: 0.1 secs

See5/C5: Applying Model to New Cases Need to use the executable file “sample.exe” (note that source code is available to allow you to build the classifier into your own applications). From the command line:

prompt> sample -f bank-train > bank-new.out

The output file gives the classification for each new case, along with a confidence value for each prediction:

Case    Given    Predicted
 No     Class    Class
  1       ?      NO [0.79]
  2       ?      NO [0.86]
  3       ?      NO [0.79]
  4       ?      YES [0.87]
  5       ?      NO [0.79]
. . .
197       ?      NO [0.90]
198       ?      YES [0.88]
199       ?      NO [0.79]
200       ?      NO [0.90]

Classification Using See5/C5 Building the model based on decision rules:

Rule 1: (31, lift 2.2)
    income > 12681.9
    children > 0
    children <= 1
    current_act = YES
    -> class YES [0.970]

Rule 2: (20, lift 2.1)
    car = NO
    -> class YES [0.955]

Rule 3: (17, lift 2.1)
    income > 30085.1
    married = NO
    mortgage = NO
    -> class YES [0.947]

Rule 4: (7, lift 2.0)
    married = NO
    children <= 0
    save_act = NO
    -> class YES [0.889]

Rule 5: (43/5, lift 1.9)
    income > 30085.1
    children > 0
    -> class YES [0.867]

Rule 6: (7/1, lift 1.7)
    mortgage = YES
    -> class YES [0.778]

. . .

What is Clustering in Data Mining? Clustering is the process of partitioning a set of data (or objects) into a set of meaningful sub-classes, called clusters, and helps users understand the natural grouping or structure in a data set. A cluster is a collection of data objects that are “similar” to one another and thus can be treated collectively as one group, but that, as a collection, are sufficiently different from other groups. Clustering is unsupervised classification: there are no predefined classes.

Requirements of Clustering Methods
Scalability
Dealing with different types of attributes
Discovery of clusters with arbitrary shape
Minimal requirements for domain knowledge to determine input parameters
Ability to deal with noise and outliers
Insensitivity to the order of input records
Coping with the curse of dimensionality
Interpretability and usability

Applications of Clustering Clustering has wide applications in:
Pattern recognition
Spatial data analysis: create thematic maps in GIS by clustering feature spaces; detect spatial clusters and explain them in spatial data mining
Image processing
Market research
Information retrieval: document or term categorization; information visualization and IR interfaces
Web mining: cluster Web usage data to discover groups of similar access patterns; Web personalization

Clustering Methodologies Two general methodologies:
Partitioning-based algorithms: divide a set of N items into K clusters (top-down)
Hierarchical algorithms: agglomerative methods successively link pairs of items or clusters to produce larger clusters; divisive methods start with the whole set as one cluster and successively divide it into smaller partitions

Distance or Similarity Measures Measuring distance: in order to group similar items, we need a way to measure the distance between objects (e.g., records); note that distance is the inverse of similarity. Distance is often based on the representation of objects as “feature vectors”. Examples (tables not shown): an employee database; term frequencies for documents. Which objects are more similar?

Distance or Similarity Measures Properties of distance measures: for all objects A and B, dist(A, B) >= 0 and dist(A, B) = dist(B, A); for any object A, dist(A, A) = 0; and dist(A, C) <= dist(A, B) + dist(B, C). Common distance measures (for feature vectors x = (x1, …, xn) and y = (y1, …, yn)):
Manhattan distance: dist(x, y) = |x1 - y1| + |x2 - y2| + … + |xn - yn|
Euclidean distance: dist(x, y) = sqrt((x1 - y1)^2 + (x2 - y2)^2 + … + (xn - yn)^2)
Cosine similarity: sim(x, y) = (x · y) / (||x|| ||y||), i.e., the dot product divided by the product of the vector lengths
These can be normalized so that values fall between 0 and 1.
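
For reference, the three measures translate into small Python functions; the example vectors at the bottom are made up.

import math

def manhattan(x, y):
    """Sum of absolute coordinate differences."""
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):
    """Square root of the sum of squared coordinate differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def cosine_similarity(x, y):
    """Dot product divided by the product of vector lengths (assumes non-zero vectors)."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

x, y = [1, 0, 3, 2], [2, 1, 3, 0]
print(manhattan(x, y), euclidean(x, y), cosine_similarity(x, y))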

Distance or Similarity Measures Weighting attributes: in some cases we want some attributes to count more than others, so we associate a weight with each attribute when calculating distance, e.g., a weighted Euclidean distance dist(x, y) = sqrt(w1·(x1 - y1)^2 + … + wn·(xn - yn)^2). Nominal (categorical) attributes: we can use simple matching (distance = 0 if the values match, 1 otherwise), or convert each nominal attribute to a set of binary attributes and then use the usual distance measure; if all attributes are nominal, we can normalize by dividing the number of matches by the total number of attributes. Normalization: to make values fall between 0 and 1, each attribute can be rescaled by its range, e.g., x' = (x - min) / (max - min); other variations are possible.

Distance or Similarity Measures Example (using the employee table referenced earlier; attribute differences are normalized by each attribute's range): maximum distance for salary: 100000 - 19000 = 81000; maximum distance for age: 52 - 27 = 25.
dist(ID2, ID3) = SQRT( 0 + (0.04)^2 + (0.44)^2 ) = 0.44
dist(ID2, ID4) = SQRT( 1 + (0.72)^2 + (0.12)^2 ) = 1.24

Domain Specific Distance Functions For some data sets, we may need to use specialized distance functions: we may want a single attribute or a selected group of attributes to be used in the computation of distance (the same problem as “feature selection”), we may want to use special properties of one or more attributes in the data, or natural distance functions may exist in the data.
Example: zip codes
dist_zip(A, B) = 0, if the zip codes are identical
dist_zip(A, B) = 0.1, if the first 3 digits are identical
dist_zip(A, B) = 0.5, if the first digit is identical
dist_zip(A, B) = 1, if the first digits are different
Example: customer solicitation
dist_solicit(A, B) = 0, if both A and B responded
dist_solicit(A, B) = 0.1, if both A and B were chosen but did not respond
dist_solicit(A, B) = 0.5, if both A and B were chosen, but only one responded
dist_solicit(A, B) = 1, if one was chosen, but the other was not
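
The zip-code rule translates directly into a small function; the example zip codes below are made up.

def dist_zip(a, b):
    """Domain-specific distance between two 5-digit zip code strings."""
    if a == b:
        return 0.0
    if a[:3] == b[:3]:
        return 0.1
    if a[0] == b[0]:
        return 0.5
    return 1.0

print(dist_zip("60604", "60604"))  # 0.0  identical codes
print(dist_zip("60604", "60601"))  # 0.1  same first 3 digits
print(dist_zip("60604", "61820"))  # 0.5  same first digit
print(dist_zip("60604", "90210"))  # 1.0  different first digit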

Distance (Similarity) Matrix Based on the distance or similarity measure, we can construct a symmetric matrix of distance (or similarity) values, where the (i, j) entry in the matrix is the distance (similarity) between items i and j. Note that dij = dji (i.e., the matrix is symmetric), so we only need the lower triangle of the matrix; the diagonal is all 1's (for similarity) or all 0's (for distance).

Example: Term Similarities in Documents (the term-document frequencies and the resulting term-term similarity matrix are not shown)

Similarity (Distance) Thresholds A similarity (distance) threshold may be used to mark pairs that are “sufficiently” similar. Using a threshold value of 10 in the previous example gives a binary (0/1) matrix of sufficiently similar term pairs (not shown).

Graph Representation The similarity matrix can be visualized as an undirected graph: each item is represented by a node, and edges represent the fact that two items are similar (a one in the thresholded similarity matrix). In the running example this gives a graph over the terms T1-T8 (not shown). If no threshold is used, the matrix can instead be represented as a weighted graph.

Simple Clustering Algorithms If we are interested only in whether pairs exceed the threshold (and not in the degree of similarity or distance), we can use the graph directly for clustering.
Clique method (complete link): all items within a cluster must be within the similarity threshold of all other items in that cluster; clusters may overlap; generally produces small but very tight clusters.
Single link method: any item in a cluster must be within the similarity threshold of at least one other item in that cluster; produces larger but weaker clusters.
Other methods: the star method starts with an item and places all related items in that cluster; the string method starts with an item, places one related item in that cluster, then places another item related to the last item entered, and so on.

Simple Clustering Algorithms Clique method: a clique is a completely connected subgraph of a graph; in the clique method, each maximal clique in the graph becomes a cluster. The maximal cliques (and therefore the clusters) in the previous example are:
{T1, T3, T4, T6}
{T2, T4, T6}
{T2, T6, T8}
{T1, T5}
{T7}
Note that, for example, {T1, T3, T4} is also a clique, but it is not maximal.

Simple Clustering Algorithms Single link method:
1. select an item not yet in a cluster and place it in a new cluster
2. place all other items similar to it in that cluster
3. repeat step 2 for each item in the cluster until nothing more can be added
4. repeat steps 1-3 for each item that remains unclustered
In this case the single link method produces only two clusters: {T1, T3, T4, T5, T6, T2, T8} and {T7}. Note that the single link method does not allow overlapping clusters, and thus it partitions the set of items.
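
Under the threshold-graph view, the single link method amounts to finding connected components. Below is a minimal sketch; the edge list is reconstructed from the maximal cliques listed on the clique-method slide, so it reproduces the two clusters above.

from collections import defaultdict

# Adjacency reconstructed from the maximal cliques listed on the previous slide.
edges = [("T1", "T3"), ("T1", "T4"), ("T1", "T6"), ("T3", "T4"), ("T3", "T6"),
         ("T4", "T6"), ("T2", "T4"), ("T2", "T6"), ("T2", "T8"), ("T6", "T8"),
         ("T1", "T5")]
items = {"T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8"}

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def single_link_clusters(items, graph):
    """Each connected component of the threshold graph is one cluster."""
    unvisited = set(items)
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:                       # grow the cluster transitively
            node = frontier.pop()
            for nbr in graph[node]:
                if nbr in unvisited:
                    unvisited.remove(nbr)
                    cluster.add(nbr)
                    frontier.append(nbr)
        clusters.append(cluster)
    return clusters

print(single_link_clusters(items, graph))     # two clusters: {T1..T6, T8} and {T7}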

Clustering with Existing Clusters The notion of comparing item similarities can be extended to clusters themselves, by focusing on a representative vector for each cluster. Cluster representatives can be actual items in the cluster or other “virtual” representatives such as the centroid. This methodology reduces the number of similarity computations in clustering; clusters are revised successively until a stopping condition is satisfied, or until no more changes to clusters can be made.
Partitioning methods: the reallocation method starts with an initial assignment of items to clusters and then moves items from cluster to cluster to obtain an improved partitioning; the single pass method is simple and efficient, but produces large clusters and depends on the order in which items are processed.
Hierarchical agglomerative methods: start with individual items and combine them into clusters, then successively combine smaller clusters to form larger ones; the grouping of individual items can be based on any of the methods discussed earlier.

K-Means Algorithm The basic algorithm (based on the reallocation method):
1. select K data points as the initial cluster representatives (centroids)
2. for i = 1 to N, assign item xi to the most similar centroid (this gives K clusters)
3. for j = 1 to K, recalculate the cluster centroid Cj
4. repeat steps 2 and 3 until there is little or no change in the clusters
Example (clustering terms): initial (arbitrary) assignment C1 = {T1, T2}, C2 = {T3, T4}, C3 = {T5, T6}, from which the cluster centroids are computed (table not shown).
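
A minimal k-means sketch over numeric vectors with Euclidean distance (Python 3.8+ for math.dist); the 2-D points and the choice k = 2 are hypothetical, and an empty cluster simply keeps its previous centroid.

import math
import random

def kmeans(points, k, iterations=100, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)                 # step 1: pick K initial representatives
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                              # step 2: assign each point to nearest centroid
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        new_centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)           # step 3: recompute each cluster centroid
        ]
        if new_centroids == centroids:                # step 4: stop when nothing changes
            break
        centroids = new_centroids
    return centroids, clusters

# Hypothetical 2-D points.
pts = [(1.0, 1.0), (1.5, 2.0), (3.0, 4.0), (5.0, 7.0), (3.5, 5.0), (4.5, 5.0)]
centroids, clusters = kmeans(pts, k=2)
print(centroids)
print(clusters)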

Example: K-Means (continued) Using a simple similarity measure, compute the new cluster-term similarity matrix; then compute the new cluster centroids using the original document-term matrix. The process is repeated until no further changes are made to the clusters.

K-Means Algorithm Strengths of k-means: relatively efficient, O(tkn), where n is the number of objects, k is the number of clusters, and t is the number of iterations (normally k, t << n); often terminates at a local optimum.
Weaknesses of k-means: applicable only when the mean is defined (what about categorical data?); the number of clusters k must be specified in advance; unable to handle noisy data and outliers.
Variations of k-means usually differ in: the selection of the initial k means, the dissimilarity calculations, and the strategies used to calculate cluster means.

Hierarchical Algorithms Use the distance matrix as the clustering criterion; this does not require the number of clusters k as an input, but it does need a termination condition.

Hierarchical Agglomerative Clustering HAC starts with unclustered data and performs successive pairwise joins among items (or previously formed clusters) to form larger ones; this results in a hierarchy of clusters which can be viewed as a dendrogram. It is useful for pruning search in a clustered item set, or for browsing clustering results.
Some commonly used HAC methods:
Single link: at each step join the most similar pair of objects that are not yet in the same cluster.
Complete link: use the least similar pair between each cluster pair to determine inter-cluster similarity; all items within one cluster are linked to each other within a similarity threshold.
Ward's method: at each step join the cluster pair whose merger minimizes the increase in the total within-group error sum of squares (based on the distance between centroids); also called the minimum variance method.
Group average (mean): use the average value of the pairwise links within a cluster to determine inter-cluster similarity (i.e., all objects contribute to inter-cluster similarity).
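
A compact sketch of agglomerative clustering under the single link criterion, driven by a precomputed distance matrix; the four-item distance matrix below is hypothetical, and the merge list records the order a dendrogram would display.

def hac_single_link(dist, n, target_clusters=1):
    """dist: dict mapping frozenset({i, j}) -> distance between items i and j."""
    clusters = [frozenset([i]) for i in range(n)]
    merges = []
    while len(clusters) > target_clusters:
        # single link: cluster-to-cluster distance = distance of the closest member pair
        a, b = min(
            ((c1, c2) for i, c1 in enumerate(clusters) for c2 in clusters[i + 1:]),
            key=lambda pair: min(dist[frozenset({x, y})] for x in pair[0] for y in pair[1]),
        )
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
        merges.append((set(a), set(b)))
    return clusters, merges

# Hypothetical pairwise distances among 4 items (0..3).
d = {frozenset({0, 1}): 1.0, frozenset({0, 2}): 4.0, frozenset({0, 3}): 5.0,
     frozenset({1, 2}): 3.0, frozenset({1, 3}): 6.0, frozenset({2, 3}): 2.0}
clusters, merges = hac_single_link(d, 4, target_clusters=1)
print(merges)   # merge order: ({0},{1}), then ({2},{3}), then the two pairs together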

Hierarchical Agglomerative Clustering Dendrogram for a hierarchy of clusters (dendrogram over items A-I not shown).