Data Mining Techniques: Classification and Clustering Chein-Shung Hwang.

1 Data Mining Techniques: Classification and Clustering Chein-Shung Hwang

2 Today
- Classification
  - Basic Concepts in Classification
  - Decision Trees
  - ID3/C4.5 Algorithm
  - Bayesian Classification
  - Tools and Software for Classification
- Clustering
  - Distance Measures
  - Graph-based Techniques
  - K-Means Clustering
  - Tools and Software for Clustering

3 What Is Classification?
- The goal of data classification is to organize and categorize data into distinct classes
- A model is first created based on the data distribution
- The model is then used to classify new data
- Given the model, a class can be predicted for new data
- Classification = prediction for discrete and nominal values
- With classification, I can predict in which bucket to put the ball, but I can't predict the weight of the ball

4 Prediction, Clustering, Classification
- What is Prediction?
  - The goal of prediction is to forecast or deduce the value of an attribute based on the values of other attributes
  - A model is first created based on the data distribution
  - The model is then used to predict future or unknown values
- Supervised vs. Unsupervised Classification
  - Supervised Classification = Classification
    - We know the class labels and the number of classes
  - Unsupervised Classification = Clustering
    - We do not know the class labels and may not know the number of classes

5 Classification: 3-Step Process
- 1. Model construction (Learning):
  - Each record (instance) is assumed to belong to a predefined class, as determined by one of the attributes, called the class label
  - The set of all records used for construction of the model is called the training set
  - The model is usually represented in the form of classification rules (IF-THEN statements) or decision trees
- 2. Model evaluation (Accuracy):
  - Estimate the accuracy rate of the model based on a test set
  - The known label of each test sample is compared with the classified result from the model
  - Accuracy rate: percentage of test set samples correctly classified by the model
  - The test set is independent of the training set; otherwise over-fitting will occur
- 3. Model use (Classification):
  - The model is used to classify unseen instances (assigning class labels)
  - Predict the value of an actual attribute

6 Model Construction

7 Model Evaluation

8 Model Use: Classification

9 Classification Methods
- Decision Tree Induction
- Neural Networks
- Bayesian Classification
- Association-Based Classification
- K-Nearest Neighbor
- Case-Based Reasoning
- Genetic Algorithms
- Fuzzy Sets
- Many More

10 Decision Trees
- A decision tree is a flow-chart-like tree structure
- An internal node denotes a test on an attribute (feature)
- A branch represents an outcome of the test
  - All records in a branch have the same value for the tested attribute
- A leaf node represents a class label or class label distribution
[Figure: a decision tree with root "outlook" (sunny / overcast / rain); the sunny branch tests "humidity" (high -> N, normal -> P), the overcast branch is a leaf P, and the rain branch tests "windy" (true -> N, false -> P)]

11 Decision Trees
- Example: "Is it a good day to play golf?"
- A set of attributes and their possible values:
  - outlook: sunny, overcast, rain
  - temperature: cool, mild, hot
  - humidity: high, normal
  - windy: true, false
- A particular instance in the training set might be: : play
- In this case, the target class is a binary attribute, so each instance represents a positive or a negative example.

12 Using Decision Trees for Classification
- Examples can be classified as follows:
  - 1. Look at the example's value for the feature specified at the current node
  - 2. Move along the edge labeled with this value
  - 3. If you reach a leaf, return the label of the leaf
  - 4. Otherwise, repeat from step 1
- Example (a decision tree to decide whether to go on a picnic):
[Figure: the same tree as above - root "outlook"; sunny -> humidity (high -> N, normal -> P), overcast -> P, rain -> windy (true -> N, false -> P)]
- So a new instance: : ? will be classified as "noplay"
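To make this walk-down concrete, here is a minimal Python sketch (not from the original slides); the nested-dict tree encoding and the attribute and class names are assumptions chosen to mirror the picnic/golf tree above.

```python
# Minimal sketch (assumed representation): a decision tree as nested dicts.
# An internal node is {"attribute": {"value": subtree, ...}}; a leaf is a class label string.
golf_tree = {
    "outlook": {
        "sunny": {"humidity": {"high": "N", "normal": "P"}},
        "overcast": "P",
        "rain": {"windy": {"true": "N", "false": "P"}},
    }
}

def classify(tree, example):
    """Walk from the root to a leaf, following the example's attribute values."""
    while isinstance(tree, dict):                   # stop when we hit a leaf label
        attribute = next(iter(tree))                # the attribute tested at this node
        tree = tree[attribute][example[attribute]]  # follow the matching branch
    return tree

print(classify(golf_tree, {"outlook": "sunny", "humidity": "high", "windy": "false"}))  # -> "N"
```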

13 Decision Trees and Decision Rules
[Figure: a tree with root "outlook"; sunny -> humidity (> 75% -> no, <= 75% -> yes), overcast -> yes, rain -> windy (> 20 -> no, <= 20 -> yes)]
- If attributes are continuous, internal nodes may test against a threshold.
- Each path in the tree represents a decision rule:
  - Rule 1: If (outlook = "sunny") AND (humidity <= 75%) Then (play = "yes")
  - Rule 2: If (outlook = "rainy") AND (wind > 20) Then (play = "no")
  - Rule 3: If (outlook = "overcast") Then (play = "yes")
  - ...

14 Top-Down Decision Tree Generation
- The basic approach usually consists of two phases:
  - Tree construction
    - At the start, all the training examples are at the root
    - Examples are partitioned recursively based on selected attributes
  - Tree pruning
    - Remove tree branches that may reflect noise in the training data and lead to errors when classifying test data
    - Improves classification accuracy
- Basic steps in decision tree construction:
  - The tree starts as a single node representing all the data
  - If the samples are all of the same class, the node becomes a leaf labeled with that class label
  - Otherwise, select the feature that best separates the samples into individual classes
  - Recursion stops when:
    - Samples in a node belong to the same class (or a majority does)
    - There are no remaining attributes on which to split

15 Tree Construction Algorithm (ID3)
- Decision Tree Learning Method (ID3)
- Input: a set of examples S, a set of features F, and a target class T (the target class T represents the type of instance we want to classify, e.g., whether "to play golf")
  - 1. If every element of S is already in T, return "yes"; if no element of S is in T, return "no"
  - 2. Otherwise, choose the best feature f from F (if there are no features remaining, then return failure)
  - 3. Extend the tree from f by adding a new branch for each attribute value of f
  - 4. Distribute the training examples to the leaf nodes (so each leaf node S is now the set of examples at that node, and F is the remaining set of features not yet selected)
  - 5. Repeat steps 1-5 for each leaf node
- Main question: how do we choose the best feature at each step?
- Note: the ID3 algorithm only deals with categorical attributes, but can be extended (as in C4.5) to handle continuous attributes
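The recursion can be written down compactly. The following Python sketch (not part of the original slides) assumes categorical attributes, a list-of-dicts data layout, and a caller-supplied choose_best function such as the information-gain chooser sketched after the next slide.

```python
from collections import Counter

def id3(examples, features, choose_best, target="play"):
    """Recursive ID3 sketch.
    examples: list of dicts mapping attribute name -> categorical value (plus the target label)
    features: list of attribute names still available for splitting
    choose_best: function (examples, features, target) -> the feature to split on
    """
    labels = [ex[target] for ex in examples]
    if len(set(labels)) == 1:                       # step 1: all examples share one class -> leaf
        return labels[0]
    if not features:                                # no features remain -> majority-class leaf
        return Counter(labels).most_common(1)[0][0]
    best = choose_best(examples, features, target)  # step 2: pick the most informative feature
    tree = {best: {}}
    for value in {ex[best] for ex in examples}:     # step 3: one branch per observed value
        subset = [ex for ex in examples if ex[best] == value]        # step 4: distribute examples
        tree[best][value] = id3(subset, [f for f in features if f != best],
                                choose_best, target)                 # step 5: recurse on each branch
    return tree
```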

16 Choosing the "Best" Feature
- Use Information Gain to find the "best" (most discriminating) feature
- Entropy E(I) of a set of instances I containing p positive and n negative examples:
  - E(I) = -[p/(p+n)]·log2[p/(p+n)] - [n/(p+n)]·log2[n/(p+n)]
- Gain(A, I) is the expected reduction in entropy due to splitting on feature (attribute) A:
  - Gain(A, I) = E(I) - Σj (|Ij|/|I|)·E(Ij), where Ij, the jth descendant of I, is the set of instances with value vj for A
- Example (root of the golf data):
[Figure: splitting S: [9+,5-] on "outlook" gives sunny [2+,3-], overcast [4+,0-], rainy [3+,2-]]
  - E(S) = -(9/14)·log2(9/14) - (5/14)·log2(5/14) = 0.940
  - The overcast branch is "yes", since all its examples are positive
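As a hedged illustration (not from the slides), the entropy and information-gain definitions translate directly into Python, and the resulting choose_best function can be plugged into the ID3 sketch above; the list-of-dicts data layout and the "play" target name are assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    """E(I) = -sum_c p_c * log2(p_c) over the class proportions in I."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in Counter(labels).values())

def information_gain(examples, feature, target="play"):
    """Gain(A, I) = E(I) - sum_j (|I_j|/|I|) * E(I_j), splitting I on feature A."""
    labels = [ex[target] for ex in examples]
    gain = entropy(labels)
    for value in {ex[feature] for ex in examples}:
        subset = [ex[target] for ex in examples if ex[feature] == value]
        gain -= (len(subset) / len(examples)) * entropy(subset)
    return gain

def choose_best(examples, features, target="play"):
    """Pick the feature with the largest information gain."""
    return max(features, key=lambda f: information_gain(examples, f, target))
```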

17 Decision Tree Learning - Example
- Splitting S: [9+,5-] (E = 0.940) on humidity:
  - high: [3+,4-] (E = 0.985); normal: [6+,1-] (E = 0.592)
  - Gain(S, humidity) = 0.940 - (7/14)·0.985 - (7/14)·0.592 = 0.151
- Splitting S: [9+,5-] (E = 0.940) on wind:
  - weak: [6+,2-] (E = 0.811); strong: [3+,3-] (E = 1.00)
  - Gain(S, wind) = 0.940 - (8/14)·0.811 - (6/14)·1.0 = 0.048
- So, splitting the examples on humidity provides more information gain than splitting on wind. In this case, however, you can verify that outlook has the largest information gain, so it will be selected as the root.

18 Decision Tree Learning - Example
- Partially learned decision tree: Outlook at the root splits S: [9+,5-] ({D1, D2, ..., D14}) into sunny [2+,3-] ({D1, D2, D8, D9, D11}), overcast [4+,0-] ({D3, D7, D12, D13}, a "yes" leaf), and rainy [3+,2-] ({D4, D5, D6, D10, D14})
- Which attribute should be tested at the sunny branch?
  - S_sunny = {D1, D2, D8, D9, D11}
  - Gain(S_sunny, humidity) = 0.970 - (3/5)·0.0 - (2/5)·0.0 = 0.970
  - Gain(S_sunny, temp) = 0.970 - (2/5)·0.0 - (2/5)·1.0 - (1/5)·0.0 = 0.570
  - Gain(S_sunny, wind) = 0.970 - (2/5)·1.0 - (3/5)·0.918 = 0.019

19 Dealing With Continuous Variables
- Partition a continuous attribute into a discrete set of intervals:
  - Sort the examples according to the continuous attribute A
  - Identify adjacent examples that differ in their target classification
  - Generate a set of candidate thresholds midway between them
  - Problem: may generate too many intervals
- Another solution:
  - Take a minimum threshold M of examples of the majority class in each adjacent partition; then merge adjacent partitions with the same majority class
  - Example: with minimum threshold M, adjacent partitions with the same majority class are merged
  - Final mapping: temperature <= 77.5 ==> "yes"; temperature > 77.5 ==> "no"
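A small Python sketch (an illustration, not the slides' code) of the first approach: sort on the continuous attribute and propose a midpoint threshold wherever the class label changes. The temperatures and labels below are made-up stand-ins.

```python
def candidate_thresholds(values, labels):
    """Sort (value, label) pairs and return midpoints where the class label changes."""
    pairs = sorted(zip(values, labels))
    return [
        (pairs[i][0] + pairs[i + 1][0]) / 2          # midway between adjacent, differing examples
        for i in range(len(pairs) - 1)
        if pairs[i][1] != pairs[i + 1][1]
    ]

# Hypothetical data: temperature readings with play / no-play labels.
temps = [64, 65, 68, 69, 70, 71, 72, 75, 80, 81, 83, 85]
plays = ["yes", "no", "yes", "yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(candidate_thresholds(temps, plays))   # -> [64.5, 66.5, 70.5, 73.5, 77.5, 80.5, 84.0]
```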

20 Improving on Information Gain
- Information gain tends to favor attributes with a large number of values
  - A finer split ==> lower entropy in each partition ==> larger Gain
- Quinlan suggests using the Gain Ratio instead
  - Penalizes attributes with a large number of values
  - GainRatio(A, S) = Gain(A, S) / SplitInfo(A, S), where SplitInfo(A, S) = -Σj (|Sj|/|S|)·log2(|Sj|/|S|)
- Example: "outlook" splits S: [9+,5-] into S1: [4+,0-], S2: [2+,3-], S3: [3+,2-]
  - SplitInfo(outlook, S) = -(4/14)·log2(4/14) - (5/14)·log2(5/14) - (5/14)·log2(5/14) = 1.577
  - GainRatio(outlook, S) = 0.246 / 1.577 = 0.156
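For illustration (not the slides' code), the split information and gain ratio for the outlook example can be checked with a few lines of Python; the 0.246 gain value is taken from the example above.

```python
import math

def split_info(subset_sizes):
    """SplitInfo = -sum_j (|S_j|/|S|) * log2(|S_j|/|S|)."""
    total = sum(subset_sizes)
    return -sum((n / total) * math.log2(n / total) for n in subset_sizes)

# "outlook" partitions the 14 golf examples into subsets of size 4, 5, and 5.
si = split_info([4, 5, 5])
gain_outlook = 0.246                               # information gain of outlook (from the slides)
print(round(si, 3), round(gain_outlook / si, 3))   # -> 1.577 0.156
```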

21 Over-fitting in Classification
- A generated tree may over-fit the training examples due to noise or too small a set of training data
- Two approaches to avoid over-fitting:
  - (Stop earlier): Stop growing the tree earlier
  - (Post-prune): Allow over-fitting and then post-prune the tree
- Approaches to determine the correct final tree size:
  - Separate training and testing sets, or use cross-validation
  - Use all the data for training, but apply a statistical test (e.g., chi-square) to estimate whether expanding or pruning a node is likely to improve performance over the entire distribution
  - Use the Minimum Description Length (MDL) principle: halt growth of the tree when the encoding is minimized
  - Rule post-pruning (C4.5): convert to rules before pruning

22 Pruning the Decision Tree
- A decision tree constructed from the training data may need to be pruned
  - Over-fitting may result in branches or leaves based on too few examples
  - Pruning is the process of removing branches and subtrees that are generated due to noise; this improves classification accuracy
- Subtree replacement: merge a subtree into a leaf node
  - Uses a set of data different from the training data
  - At a tree node, if the accuracy without splitting is higher than the accuracy with splitting, replace the subtree with a leaf node; label it with the majority class
- Example:
[Figure: a subtree that splits on color, with red and blue branches leading to "yes" and "no" leaves]
  - Suppose with the test set we find 3 red "no" examples and 1 blue "yes" example. We can replace the subtree with a single "no" node. After replacement there will be only 2 errors instead of 5.

23 Bayesian Classification
- A statistical classifier based on Bayes' theorem
- It uses probabilistic learning by calculating explicit probabilities for hypotheses
- A naïve Bayesian classifier, which assumes total independence between attributes, is commonly used and performs well with large data sets
- The model is incremental in the sense that each training example can incrementally increase or decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data
- Given a data sample X with an unknown class label, let H be the hypothesis that X belongs to a specific class C
- The conditional probability of hypothesis H given X, Pr(H|X), follows Bayes' theorem:
  - Pr(H|X) = Pr(X|H)·Pr(H) / Pr(X)
- Practical difficulty: requires initial knowledge of many probabilities, and significant computational cost

24 Naïve Bayesian Classifier
- Suppose we have m classes C1, C2, ..., Cm. Given an unknown sample X = (x1, x2, ..., xn), the classifier predicts that X belongs to the class with the highest conditional probability Pr(Ci|X)
- Maximizing Pr(Ci|X) = Pr(X|Ci)·Pr(Ci) / Pr(X) amounts to maximizing Pr(X|Ci)·Pr(Ci), since Pr(X) is the same for all classes
- Note: Pr(Ci) = si / s (the fraction of training samples in class Ci), and, assuming class-conditional independence, Pr(X|Ci) = Πk Pr(xk|Ci)
- This greatly reduces the computation cost: we only count the class distribution and the per-attribute value counts within each class
- "Naïve": class conditional independence
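A compact Python sketch of this counting scheme (illustrative only; it assumes categorical attributes, a list-of-dicts data layout, and no smoothing of zero counts):

```python
from collections import Counter, defaultdict

def train_naive_bayes(examples, target="play"):
    """Count class priors Pr(Ci) and per-attribute conditionals Pr(xk|Ci) from categorical data."""
    class_counts = Counter(ex[target] for ex in examples)
    cond_counts = defaultdict(Counter)              # class -> Counter of (attribute, value) counts
    for ex in examples:
        for attr, value in ex.items():
            if attr != target:
                cond_counts[ex[target]][(attr, value)] += 1
    return class_counts, cond_counts

def predict(x, class_counts, cond_counts):
    """Return argmax_Ci Pr(X|Ci)*Pr(Ci), with Pr(X|Ci) = product_k Pr(xk|Ci)."""
    total = sum(class_counts.values())
    best_class, best_score = None, -1.0
    for c, count in class_counts.items():
        score = count / total                       # prior Pr(Ci) = s_i / s
        for attr, value in x.items():
            score *= cond_counts[c][(attr, value)] / count   # Pr(xk|Ci); unseen values give 0
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```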

25 Naïve Bayesian Classifier - Example
- Given a training set, we can compute the required probabilities from simple counts
- For the query instance X:
  - Pr(X | "no")·Pr("no") = (3/5 · 2/5 · 4/5 · 3/5) · 5/14 = 0.04
  - Pr(X | "yes")·Pr("yes") = (2/9 · 4/9 · 3/9 · 3/9) · 9/14 = 0.007
- So X is classified as "no"

26 Classification Example - Bank Data
- We want to determine likely responders to a direct mail campaign for a new product, a "Personal Equity Plan" (PEP)
- The training data include records about how previous customers responded and whether they bought the product
- In this case the target class is "pep", with a binary value
- We want to build a model and apply it to new data (a customer list) in which the value of the class attribute is not available

27 Data Preparation
- Several steps to prepare the data for Weka and for See5:
  - Open the training data in Excel, remove the "id" column, and save the result as a comma-delimited file (e.g., "bank.csv")
  - Do the same with the new customer data, but also add a new column called "pep" as the last column; the value of this column for each record should be "?"
- Weka
  - Must convert the data to ARFF format
  - Attribute specification and data are in the same file
  - The data portion is just the comma-delimited data file without the label row
- See5/C5
  - Create a "name" file and a "data" file
  - The "name" file contains the attribute specification; the "data" file is the same as above
  - The first line of the "name" file must be the name(s) of the target class(es) - in this case "pep"

28 Data File Format for Weka
- Training data (label row followed by comma-delimited records):
  'age' 'sex' 'region' 'income' 'married' 'children' 'car' 'save_act' 'current_act' 'mortgage' 'pep'
  48,FEMALE,INNER_CITY,17546,NO,1,NO,NO,NO,NO,YES
- New cases (the "pep" value is "?"):
  'age' 'region' 'pep'
  23,MALE,INNER_CITY, ,YES,0,YES,YES,NO,YES,?
  30,MALE,RURAL, ,NO,1,NO,YES,NO,YES,?

29 Data File Format for See5/C5
- Name file for the training data:
  pep.
  age: continuous.
  sex: MALE,FEMALE.
  region: INNER_CITY,RURAL,TOWN,SUBURBAN.
  income: continuous.
  married: YES,NO.
  children: continuous.
  car: YES,NO.
  save_act: YES,NO.
  current_act: YES,NO.
  mortgage: YES,NO.
  pep: YES,NO.
- Data file for the training data:
  48,FEMALE,INNER_CITY,17546,NO,1,NO,NO,NO,NO,YES
  40,MALE,TOWN, ,YES,3,YES,NO,YES,YES,NO
  51,FEMALE,INNER_CITY, ,YES,0,YES,YES,YES,NO,NO
  ...
- New cases:
  23,MALE,INNER_CITY, ,YES,0,YES,YES,NO,YES,?
  30,MALE,RURAL, ,NO,1,NO,YES,NO,YES,?
  45,FEMALE,RURAL, ,NO,0,YES,YES,YES,NO,?
  ...
- Note: no "name file" is necessary for new cases, but the file name must use the same stem, with the suffix ".cases"

30 C4.5 Implementation in Weka
- To build a model (decision tree), use the weka.classifiers.j48.J48 class
- From the command line, or using the simple Java-based command-line interface (additional parameters can be specified for pruning, cross-validation, etc.):
  > java weka.classifiers.j48.J48 -t <training data>.arff -d <output>.model
- Decision tree output (pruned):
  children <= 2
  |   children <= 0
  |   |   married = YES
  |   |   |   mortgage = YES
  |   |   |   |   save_act = YES: NO (16.0/2.0)
  |   |   |   |   save_act = NO: YES (9.0/1.0)
  |   |   |   mortgage = NO: NO (59.0/6.0)
  |   |   married = NO
  |   |   |   mortgage = YES
  |   |   |   |   save_act = YES: NO (12.0)
  |   |   |   |   save_act = NO: YES (3.0)
  |   |   |   mortgage = NO: YES (29.0/2.0)
  |   children > 0
  |   |   income <= 29622
  |   |   |   children <= 1
  |   |   |   |   income <= : NO (5.0)
  |   |   |   |   income >
  |   |   |   |   |   current_act = YES: YES (28.0/1.0)
  |   |   |   |   |   current_act = NO
  |   |   |   |   |   |   income <= : NO (3.0)
  |   |   |   |   |   |   income > : YES (6.0)
  |   |   |   children > 1: NO (47.0/3.0)
  |   |   income > 29622: YES (48.0/2.0)
  children > 2
  |   income <= : NO (30.0/2.0)
  |   income > : YES (5.0)

31 C4.5 Implementation in Weka
- The model is now contained in the (binary) .model file
- The rest of the output contains statistical information about the model, reported both as "Error on training data" and as "Stratified cross-validation": correctly and incorrectly classified instances, mean absolute error, root mean squared error, relative absolute error, root relative squared error, the total number of instances (300), and the confusion matrix (a = YES, b = NO)

32 C4.5 Implementation in Weka
- Applying the model to new cases:
  > java weka.classifiers.j48.J48 -p -l <model file> -T <new cases>.arff
- The output gives the predicted class for each new instance (e.g., "1 NO 1.0 ?", "7 YES 1.0 ?") along with its predicted accuracy; the trailing "?" is the unknown actual class
- Since we removed the "id" field, we now need to map these predictions back to the original "new case" records (e.g., using Excel)

33 Classification Using See5/C5
- Note: See5/C5 also has options for creating the model as a set of decision rules (as opposed to a decision tree). Either model can be used to classify new unclassified cases.

34 Classification Using See5/C5
  Class specified by attribute 'pep'
  ** This demonstration version cannot process **
  ** more than 200 training or test cases.     **
  Read 200 cases (11 attributes) from bank-train.data
- Decision tree:
  income > :
  :...children > 0: YES (43/5)
  :   children <= 0:
  :   :...married = YES: NO (19/2)
  :       married = NO:
  :       :...mortgage = YES: NO (3)
  :           mortgage = NO: YES (5)
  income <= :
  :...children > 1: NO (50/4)
      children <= 1:
      :...children <= 0:
          :...save_act = YES: NO (27/5)
          :   save_act = NO:
          :   :...married = NO: YES (6)
          :       married = YES:
          :       :...mortgage = YES: YES (6/1)
          :           mortgage = NO: NO (12/2)
          children > 0:
          :...income <= : NO (5)
              income > :
              :...current_act = YES: YES (19)
                  current_act = NO:
                  :...car = YES: NO (2)
                      car = NO: YES (3)
- Evaluation on training data (200 cases):
  Decision Tree: Size 13, Errors 19 (9.5%)
  (a) (b)   <- classified as: (a) = class YES, (b) = class NO
  Time: 0.1 secs

35 See5/C5: Applying the Model to New Cases
- Need to use the executable file "sample.exe" (note that source code is available, so you can build the classifier into your own applications)
- From the command line:
  prompt> sample -f bank-train > bank-new.out
- The output is a classification file for the new cases (along with a predicted accuracy for each new case):
  Case No   Given Class   Predicted Class
  1         ?             NO  [0.79]
  2         ?             NO  [0.86]
  3         ?             NO  [0.79]
  4         ?             YES [0.87]
  5         ?             NO  [0.79]
  ...
  198       ?             YES [0.88]
  199       ?             NO  [0.79]
  200       ?             NO  [0.90]

36 Classification Using See5/C5
- Building the model based on decision rules:
  Rule 1: (31, lift 2.2)
    income >
    children > 0
    children <= 1
    current_act = YES
    -> class YES [0.970]
  Rule 2: (20, lift 2.1)
    income >
    children > 0
    children <= 1
    car = NO
    -> class YES [0.955]
  Rule 3: (17, lift 2.1)
    income >
    married = NO
    mortgage = NO
    -> class YES [0.947]
  Rule 4: (7, lift 2.0)
    married = NO
    children <= 0
    save_act = NO
    -> class YES [0.889]
  Rule 5: (43/5, lift 1.9)
    income >
    children > 0
    -> class YES [0.867]
  Rule 6: (7/1, lift 1.7)
    children <= 0
    save_act = NO
    mortgage = YES
    -> class YES [0.778]
  ...

37 What is Clustering in Data Mining?
- Cluster:
  - A collection of data objects that are "similar" to one another and thus can be treated collectively as one group
  - But, as a collection, they are sufficiently different from other groups
- Clustering
  - Unsupervised classification
  - No predefined classes
- Clustering is a process of partitioning a set of data (or objects) into a set of meaningful sub-classes, called clusters
- It helps users understand the natural grouping or structure in a data set

38 Requirements of Clustering Methods
- Scalability
- Dealing with different types of attributes
- Discovery of clusters with arbitrary shape
- Minimal requirements for domain knowledge to determine input parameters
- Able to deal with noise and outliers
- Insensitive to order of input records
- The curse of dimensionality
- Interpretability and usability

39 Applications of Clustering
- Clustering has wide applications in pattern recognition
- Spatial data analysis:
  - Create thematic maps in GIS by clustering feature spaces
  - Detect spatial clusters and explain them in spatial data mining
- Image processing
- Market research
- Information retrieval
  - Document or term categorization
  - Information visualization and IR interfaces
- Web mining
  - Cluster Web usage data to discover groups of similar access patterns
  - Web personalization

40 Clustering Methodologies
- Two general methodologies:
  - Partitioning-based algorithms
  - Hierarchical algorithms
- Partitioning-based
  - Divide a set of N items into K clusters (top-down)
- Hierarchical
  - Agglomerative: pairs of items or clusters are successively linked to produce larger clusters
  - Divisive: start with the whole set as a cluster and successively divide sets into smaller partitions

41 Distance or Similarity Measures
- Measuring distance
  - In order to group similar items, we need a way to measure the distance between objects (e.g., records)
  - Note: distance is the inverse of similarity
  - Often based on the representation of objects as "feature vectors"
[Tables: "An Employee DB" (records as feature vectors) and "Term Frequencies for Documents" (documents as term vectors) - which objects are more similar?]

42 Distance or Similarity Measures
- Properties of distance measures, for all objects A, B, and C:
  - dist(A, B) >= 0, and dist(A, B) = dist(B, A)
  - dist(A, A) = 0
  - dist(A, C) <= dist(A, B) + dist(B, C)
- Common distance measures between feature vectors X = (x1, ..., xn) and Y = (y1, ..., yn):
  - Manhattan distance: dist(X, Y) = |x1 - y1| + |x2 - y2| + ... + |xn - yn|
  - Euclidean distance: dist(X, Y) = sqrt((x1 - y1)^2 + ... + (xn - yn)^2)
  - Cosine similarity: sim(X, Y) = (Σi xi·yi) / (sqrt(Σi xi^2)·sqrt(Σi yi^2))
- These can be normalized to make values fall between 0 and 1.
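As an illustration (not code from the slides), the three measures translate directly into Python; the two sample vectors are made-up term-frequency vectors.

```python
import math

def manhattan(x, y):
    """Sum of absolute coordinate differences."""
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):
    """Square root of the sum of squared coordinate differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def cosine_similarity(x, y):
    """Dot product divided by the product of vector lengths (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))

a, b = [3, 0, 2, 1], [1, 1, 2, 0]   # e.g. term-frequency vectors for two documents
print(manhattan(a, b), round(euclidean(a, b), 3), round(cosine_similarity(a, b), 3))
# -> 4 2.449 0.764
```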

43 Distance or Similarity Measures
- Weighting attributes
  - In some cases we want some attributes to count more than others
  - Associate a weight with each attribute when calculating distance, e.g., dist(X, Y) = sqrt(w1·(x1 - y1)^2 + ... + wn·(xn - yn)^2)
- Nominal (categorical) attributes
  - Can use simple matching: distance = 0 if the values match, 1 otherwise
  - Or convert each nominal attribute to a set of binary attributes, then use the usual distance measure
  - If all attributes are nominal, we can normalize by dividing the number of matches by the total number of attributes
- Normalization:
  - We want values to fall between 0 and 1, e.g., x' = (x - min) / (max - min)
  - Other variations are possible
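For illustration (assumed helper code, not the slides'), min-max normalization and simple matching over nominal attributes might look like this:

```python
def min_max_normalize(values):
    """Rescale a list of numeric values so they fall between 0 and 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def nominal_distance(x, y):
    """Simple matching over nominal attribute tuples: fraction of attributes that disagree."""
    mismatches = sum(1 for a, b in zip(x, y) if a != b)
    return mismatches / len(x)

print(min_max_normalize([25, 30, 50]))                       # -> [0.0, 0.2, 1.0]
print(nominal_distance(("M", "manager"), ("F", "manager")))  # -> 0.5
```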

44 Distance or Similarity Measures
- Example (employee records, with numeric attributes normalized by their maximum spread):
  - Max distance for age: 25
  - dist(ID2, ID3) = SQRT( 0 + (0.04)^2 + (0.44)^2 ) = 0.44
  - dist(ID2, ID4) = SQRT( 1 + (0.72)^2 + (0.12)^2 ) = 1.24

45 Domain Specific Distance Functions
- For some data sets, we may need to use specialized distance functions
  - We may want a single attribute or a selected group of attributes to be used in the computation of distance - the same problem as "feature selection"
  - We may want to use special properties of one or more attributes in the data
  - Natural distance functions may exist in the data
- Example: Zip Codes
  - dist_zip(A, B) = 0, if the zip codes are identical
  - dist_zip(A, B) = 0.1, if the first 3 digits are identical
  - dist_zip(A, B) = 0.5, if the first digits are identical
  - dist_zip(A, B) = 1, if the first digits are different
- Example: Customer Solicitation
  - dist_solicit(A, B) = 0, if both A and B responded
  - dist_solicit(A, B) = 0.1, if both A and B were chosen but did not respond
  - dist_solicit(A, B) = 0.5, if both A and B were chosen, but only one responded
  - dist_solicit(A, B) = 1, if one was chosen, but the other was not
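The zip-code rule could be coded directly; an illustrative Python sketch (interpreting "first digits" as the leading digit):

```python
def dist_zip(a: str, b: str) -> float:
    """Tiered distance between two zip codes, following the rule on the slide."""
    if a == b:
        return 0.0
    if a[:3] == b[:3]:          # same first three digits -> very close
        return 0.1
    if a[0] == b[0]:            # same leading digit -> same broad region
        return 0.5
    return 1.0

print(dist_zip("60604", "60614"), dist_zip("60604", "61801"), dist_zip("60604", "90210"))
# -> 0.1 0.5 1.0
```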

46 Distance (Similarity) Matrix
- Similarity (Distance) Matrix
  - Based on the distance or similarity measure, we can construct a symmetric matrix of distance (or similarity) values
  - The (i, j) entry in the matrix is the distance (similarity) between items i and j
- Note that d_ij = d_ji (i.e., the matrix is symmetric), so we only need the lower triangle of the matrix. The diagonal is all 1's (for similarity) or all 0's (for distance).
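A quick Python sketch (illustration only) of building such a matrix with the Euclidean measure; only the lower triangle is computed and then mirrored. The feature vectors are made up.

```python
import math

def distance_matrix(items):
    """Full symmetric matrix of pairwise Euclidean distances between feature vectors."""
    n = len(items)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):                     # lower triangle only
            dij = math.sqrt(sum((a - b) ** 2 for a, b in zip(items[i], items[j])))
            d[i][j] = d[j][i] = dij            # mirror into the upper triangle
    return d

vectors = [[0, 1], [3, 5], [6, 1]]             # made-up feature vectors
for row in distance_matrix(vectors):
    print([round(v, 2) for v in row])
```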

47 Example: Term Similarities in Documents
- Term-Term Similarity Matrix

48 Similarity (Distance) Thresholds
- A similarity (distance) threshold may be used to mark pairs that are "sufficiently" similar
- Example: using a threshold value of 10 in the previous term-term similarity matrix

49 Graph Representation
- The similarity matrix can be visualized as an undirected graph
  - Each item is represented by a node, and edges represent the fact that two items are similar (a one in the thresholded similarity matrix)
- If no threshold is used, the matrix can be represented as a weighted graph
[Figure: a graph over the terms T1-T8 with edges between similar terms]

50 Simple Clustering Algorithms
- If we are interested only in the threshold (and not the degree of similarity or distance), we can use the graph directly for clustering
- Clique method (complete link)
  - All items within a cluster must be within the similarity threshold of all other items in that cluster
  - Clusters may overlap
  - Generally produces small but very tight clusters
- Single link method
  - Any item in a cluster must be within the similarity threshold of at least one other item in that cluster
  - Produces larger but weaker clusters
- Other methods
  - Star method: start with an item and place all related items in that cluster
  - String method: start with an item; place one related item in that cluster; then place another item related to the last item entered, and so on

51 Simple Clustering Algorithms
- Clique method
  - A clique is a completely connected subgraph of a graph
  - In the clique method, each maximal clique in the graph becomes a cluster
- Maximal cliques (and therefore the clusters) in the previous example are:
  {T1, T3, T4, T6}
  {T2, T4, T6}
  {T2, T6, T8}
  {T1, T5}
  {T7}
- Note that, for example, {T1, T3, T4} is also a clique, but it is not maximal.
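One way to reproduce this mechanically (not part of the slides) is to build the thresholded graph and enumerate its maximal cliques, e.g. with the networkx library; the edge list below is an assumption reverse-engineered from the clusters listed above.

```python
import networkx as nx

# Thresholded similarity graph (edges assumed to match the example's clusters).
G = nx.Graph()
G.add_nodes_from([f"T{i}" for i in range(1, 9)])
G.add_edges_from([
    ("T1", "T3"), ("T1", "T4"), ("T1", "T6"), ("T3", "T4"), ("T3", "T6"), ("T4", "T6"),
    ("T2", "T4"), ("T2", "T6"), ("T2", "T8"), ("T6", "T8"),
    ("T1", "T5"),
])

# Each maximal clique of the thresholded graph is one (possibly overlapping) cluster.
for clique in nx.find_cliques(G):
    print(sorted(clique))
```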

52 Simple Clustering Algorithms
- Single link method
  - 1. Select an item not yet in a cluster and place it in a new cluster
  - 2. Place all other items similar to it in that cluster
  - 3. Repeat step 2 for each item added to the cluster, until nothing more can be added
  - 4. Repeat steps 1-3 for each item that remains unclustered
- In this case the single link method produces only two clusters:
  {T1, T3, T4, T5, T6, T2, T8}
  {T7}
- Note that the single link method does not allow overlapping clusters, thus partitioning the set of items.
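Single link over a thresholded graph amounts to finding connected components. A self-contained Python sketch (the adjacency list is the same assumed graph as in the clique example):

```python
from collections import deque

# Adjacency list of the thresholded similarity graph (assumed edges, as above).
graph = {
    "T1": {"T3", "T4", "T6", "T5"}, "T2": {"T4", "T6", "T8"}, "T3": {"T1", "T4", "T6"},
    "T4": {"T1", "T3", "T6", "T2"}, "T5": {"T1"}, "T6": {"T1", "T3", "T4", "T2", "T8"},
    "T7": set(), "T8": {"T2", "T6"},
}

def single_link_clusters(graph):
    """Each connected component of the thresholded graph is one single-link cluster."""
    seen, clusters = set(), []
    for start in graph:
        if start in seen:
            continue
        queue, cluster = deque([start]), set()   # grow the cluster breadth-first
        while queue:
            item = queue.popleft()
            if item in cluster:
                continue
            cluster.add(item)
            queue.extend(graph[item] - cluster)
        seen |= cluster
        clusters.append(sorted(cluster))
    return clusters

print(single_link_clusters(graph))   # -> [['T1', 'T2', 'T3', 'T4', 'T5', 'T6', 'T8'], ['T7']]
```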

53 Clustering with Existing Clusters
- The notion of comparing item similarities can be extended to clusters themselves, by focusing on a representative vector for each cluster
  - Cluster representatives can be actual items in the cluster or other "virtual" representatives such as the centroid
  - This methodology reduces the number of similarity computations in clustering
  - Clusters are revised successively until a stopping condition is satisfied, or until no more changes to clusters can be made
- Partitioning methods
  - Reallocation method: start with an initial assignment of items to clusters and then move items from cluster to cluster to obtain an improved partitioning
  - Single pass method: simple and efficient, but produces large clusters and depends on the order in which items are processed
- Hierarchical agglomerative methods
  - Start with individual items and combine them into clusters
  - Then successively combine smaller clusters to form larger ones
  - Grouping of individual items can be based on any of the methods discussed earlier

54 K-Means Algorithm
- The basic algorithm (based on the reallocation method):
  - 1. Select K data points as the initial cluster representatives (centroids)
  - 2. For i = 1 to N, assign item x_i to the most similar centroid (this gives K clusters)
  - 3. For j = 1 to K, recalculate the cluster centroid C_j
  - 4. Repeat steps 2 and 3 until there is (little or) no change in the clusters
- Example: clustering terms
  - Initial (arbitrary) assignment: C1 = {T1, T2}, C2 = {T3, T4}, C3 = {T5, T6}
  - Cluster centroids are computed from the document-term matrix
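A compact Python sketch of these four steps for numeric feature vectors (an illustration with made-up points, not the slides' term-clustering example):

```python
import math
import random

def kmeans(points, k, iterations=100, seed=0):
    """Basic K-means: pick K initial centroids, then alternate assignment and centroid updates."""
    random.seed(seed)
    centroids = random.sample(points, k)                       # step 1: initial representatives
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:                                       # step 2: assign to nearest centroid
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        new_centroids = [
            tuple(sum(coords) / len(cl) for coords in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)                   # step 3: recompute each centroid
        ]
        if new_centroids == centroids:                         # step 4: stop when nothing changes
            break
        centroids = new_centroids
    return centroids, clusters

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]         # two obvious groups
centroids, clusters = kmeans(pts, k=2)
print(centroids)
print(clusters)
```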

55 Example: K-Means
- Example (continued)
  - Using the simple similarity measure, compute the new cluster-term similarity matrix
  - Then compute the new cluster centroids using the original document-term matrix
  - The process is repeated until no further changes are made to the clusters

56 K-Means Algorithm
- Strengths of k-means:
  - Relatively efficient: O(tkn), where n is the number of objects, k is the number of clusters, and t is the number of iterations; normally, k, t << n
  - Often terminates at a local optimum
- Weaknesses of k-means:
  - Applicable only when a mean is defined; what about categorical data?
  - Need to specify k, the number of clusters, in advance
  - Unable to handle noisy data and outliers
- Variations of k-means usually differ in:
  - Selection of the initial k means
  - Dissimilarity calculations
  - Strategies to calculate cluster means

57 Hierarchical Algorithms
- Use the distance matrix as the clustering criterion
- Does not require the number of clusters k as an input, but needs a termination condition

58 Hierarchical Agglomerative Clustering
- HAC starts with unclustered data and performs successive pairwise joins among items (or previous clusters) to form larger ones
  - This results in a hierarchy of clusters which can be viewed as a dendrogram
  - Useful in pruning search in a clustered item set, or in browsing clustering results
- Some commonly used HAC methods
  - Single link: at each step join the most similar pair of objects that are not yet in the same cluster
  - Complete link: use the least similar pair between each cluster pair to determine inter-cluster similarity - all items within one cluster are linked to each other within a similarity threshold
  - Ward's method: at each step join the cluster pair whose merger minimizes the increase in the total within-group error sum of squares (based on distance between centroids) - also called the minimum variance method
  - Group average (mean): use the average value of pairwise links within a cluster to determine inter-cluster similarity (i.e., all objects contribute to inter-cluster similarity)
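These linkage criteria are available off the shelf; for example, a short illustrative sketch using SciPy's hierarchical clustering routines (the data points and item names are assumptions, and the method argument can be "single", "complete", "average", or "ward"):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Made-up 2-D feature vectors for items A..F.
items = ["A", "B", "C", "D", "E", "F"]
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [8.0, 8.0], [8.2, 8.1], [7.9, 8.3]])

# Build the merge hierarchy (the dendrogram structure).
Z = linkage(X, method="ward")

# Cut the dendrogram into 2 flat clusters.
labels = fcluster(Z, t=2, criterion="maxclust")
for item, label in zip(items, labels):
    print(item, "-> cluster", label)
```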

59 Hierarchical Agglomerative Clustering
- Dendrogram for a hierarchy of clusters over the items A, B, C, D, E, F, G, H, I

