Decision Trees and an Introduction to Classification.


1 Decision Trees and an Introduction to Classification

2 Classification: Definition
- Given a collection of records (the training set), where each record contains a set of attributes and one of the attributes is the class.
- Find a model for the class attribute as a function of the values of the other attributes.
- Goal: previously unseen records should be assigned a class as accurately as possible.
  - Decision trees are evaluated on a test set.

3 Illustrating Classification Task

4 Classifier Evaluation
- Model performance is evaluated on a test set.
  - The test set has the class labels filled in, just like the training set.
  - The test set and training set are normally created by splitting the original data.
  - This reduces the amount of data available for training, which is bad, but it is the best way to get a good measure of the classifier's performance.
  - The assumption is that future data will follow the distribution of the test set, which is not always true.

5 Examples of Classification Tasks
- Predicting tumor cells as benign or malignant
- Classifying credit card transactions as legitimate or fraudulent
- Classifying secondary structures of proteins as alpha-helix, beta-sheet, or random coil
- Categorizing news stories as finance, weather, entertainment, sports, etc.

6 Introduction to Decision Trees
- Decision trees are powerful and popular for classification and prediction.
  - A decision tree is similar to a flowchart.
  - It represents sets of non-overlapping rules, and the rules can be expressed in English:
    IF Age <= 43 AND Sex = Male AND Credit Card Insurance = No THEN Life Insurance Promotion = No
  - Useful for exploring data to gain insight into the relationships of a large number of features (input variables) to a target (output) variable.
- You use mental decision trees often!
- Game: "20 Questions"

7 Introduction to Decision Trees
- A structure that can be used to divide a large collection of records into successively smaller sets of records by applying a sequence of simple tests.
- A decision tree model consists of a set of rules for dividing a large heterogeneous population into smaller groups that are more homogeneous with respect to a particular target variable.
- Decision trees partition the "space" of all possible examples and assign a class label to each partition.

8 Classification Techniques
- Decision tree based methods
- Rule-based methods
- Memory-based reasoning
- Neural networks
- Naïve Bayes and Bayesian belief networks
- Support vector machines

9 Example of a Decision Tree
[Figure: training data with categorical attributes (Refund, Marital Status), a continuous attribute (Taxable Income), and the class (Cheat). The model is a decision tree whose splitting attributes are Refund (Yes/No), then Marital Status (Single, Divorced vs. Married), then Taxable Income (< 80K vs. > 80K), ending in YES/NO leaves.]

10 Another Example of a Decision Tree
[Figure: a different tree over the same data, splitting first on Marital Status, then on Refund, then on Taxable Income.]
There could be more than one tree that fits the same data!

11 Decision Tree Classification Task
[Figure: the classification workflow — a tree induction algorithm learns a decision tree model from the training set, and the model is then applied to the test set.]

12–17 Apply Model to Test Data
[Figure sequence: starting from the root of the tree, the test record is routed through the Refund, Marital Status, and Taxable Income tests until it reaches a leaf; Cheat is finally assigned the value "No".]

18 Decision Tree Classification Task
[Figure: the same workflow, now emphasizing applying the induced decision tree to new data to predict the class.]

19 Decision Tree Induction
- Many algorithms:
  - Hunt's Algorithm (one of the earliest)
  - CART (Classification and Regression Trees)
  - ID3, C4.5
- We have easy access to several:
  - C4.5 (freely available; a copy is on storm)
  - C5.0 (the commercial version of C4.5, also on storm)
  - The free WEKA suite has J48, a Java reimplementation of C4.5

20 How Would You Design a DT Algorithm?
- Can you define the algorithm recursively?
  - First, do you know what recursion is?
  - Yes: at each invocation you only need to build one more level of the tree.
- What decisions do you need to make at each call?
  - Which feature f to split on
  - How to partition the examples based on f
    - For numerical features, what number to split on
    - For categorical features, which values go with each split
  - When to stop
- How might you make these decisions?

21 General Structure of Hunt's Algorithm
- Let D_t be the set of training records that reach a node t.
- General procedure:
  - If D_t contains only records that belong to the same class y_t, then t is a leaf node labeled y_t.
  - If D_t is an empty set, then t is a leaf node labeled by the default class y_d.
  - If D_t contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.
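The sketch below is a minimal, illustrative Python version of this recursive procedure, not taken from any particular library; it assumes categorical features and uses Gini impurity (introduced a few slides later) to pick the split attribute. All function and variable names are our own.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum_j p(j)^2."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def hunt(records, attributes, default):
    """records: list of (feature_dict, class_label); attributes: categorical feature names."""
    if not records:                                  # empty D_t -> leaf labeled with default class y_d
        return ("leaf", default)
    labels = [y for _, y in records]
    majority = Counter(labels).most_common(1)[0][0]
    if len(set(labels)) == 1 or not attributes:      # pure node (or nothing left to split on)
        return ("leaf", majority)

    # Greedily pick the attribute whose multi-way split has the lowest weighted Gini.
    def split_gini(a):
        groups = {}
        for x, y in records:
            groups.setdefault(x[a], []).append(y)
        return sum(len(ys) / len(records) * gini(ys) for ys in groups.values())

    best = min(attributes, key=split_gini)
    partitions = {}
    for x, y in records:
        partitions.setdefault(x[best], []).append((x, y))
    children = {v: hunt(subset, [a for a in attributes if a != best], majority)
                for v, subset in partitions.items()}
    return ("split", best, children)

# Tiny hypothetical example, loosely echoing the Refund / Marital Status slides:
data = [({"Refund": "Yes", "Marital": "Single"}, "No"),
        ({"Refund": "No",  "Marital": "Married"}, "No"),
        ({"Refund": "No",  "Marital": "Single"}, "Yes")]
print(hunt(data, ["Refund", "Marital"], default="No"))
```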

22 Hunt's Algorithm
[Figure: the tree grows in stages on the cheat data — first a single "Don't Cheat" leaf, then a split on Refund (Yes/No), then a split on Marital Status (Single, Divorced vs. Married) under Refund = No, and finally a split on Taxable Income (< 80K vs. >= 80K) under Single, Divorced.]

23 Tree Induction
- Greedy strategy:
  - Split the records based on an attribute test that optimizes a certain criterion.
  - Greedy because it is not globally optimal.
    - In the context of decision trees, there is no extra look-ahead.
    - What are the costs and benefits of looking ahead?
- Issues:
  - Determine how to split the records.
    - How to select the attribute to split? How would you do it?
    - How to determine the best split? How would you do it?
  - Determine when to stop splitting.
    - Why not go until you can't go any further?

24 How to Specify the Test Condition?
- Depends on attribute type:
  - Nominal/categorical (a limited number of predefined values)
  - Ordinal (like nominal, but there is an ordering)
  - Continuous (an ordering and many values: numbers)
- Depends on the number of ways to split:
  - 2-way split
  - Multi-way split

25 Splitting Based on Nominal Attributes
- Multi-way split: use as many partitions as there are distinct values, e.g., CarType -> {Family}, {Sports}, {Luxury}.
- Binary split: divide the values into two subsets and find the optimal partitioning, e.g., CarType -> {Family, Luxury} vs. {Sports}, or {Sports, Luxury} vs. {Family}.

26 Splitting Based on Ordinal Attributes
- Multi-way split: use as many partitions as there are distinct values, e.g., Size -> {Small}, {Medium}, {Large}.
- Binary split: divide the values into two subsets and find the optimal partitioning, e.g., Size -> {Small} vs. {Medium, Large}, or {Small, Medium} vs. {Large}.
- What about this split: {Small, Large} vs. {Medium}?

27 Splitting Based on Continuous Attributes
- Different ways of handling:
  - Discretization to form an ordinal categorical attribute.
    - Static: discretize once at the beginning.
    - Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering.
  - Binary decision: (A < v) or (A >= v).
    - Consider all possible splits and find the best cut.
    - Can be more compute intensive.

28 Splitting Based on Continuous Attributes

29 Tree Induction
- Greedy strategy:
  - Split the records based on an attribute test that optimizes a certain criterion.
- Issues:
  - Determine how to split the records.
    - How to specify the attribute test condition?
    - How to determine the best split?
  - Determine when to stop splitting.

30 How to Determine the Best Split
- Before splitting: 10 records of class 0, 10 records of class 1.
- Which test condition is the best? Why is student ID a bad feature to use?

31 How to Determine the Best Split
- Greedy approach: nodes with a homogeneous class distribution are preferred.
- Need a measure of node impurity:
  - Non-homogeneous: high degree of impurity
  - Homogeneous: low degree of impurity

32 Measures of Node Impurity
- Gini index
- Entropy
- Misclassification error

33 How to Find the Best Split
[Figure: a parent node with impurity M0 is split either by attribute A (children N1, N2 with combined impurity M12) or by attribute B (children N3, N4 with combined impurity M34).]
- Compare Gain = M0 – M12 vs. Gain = M0 – M34 and choose the split with the larger gain.

34 Measure of Impurity: GINI (at node t)
- Gini index for a given node t with classes j:
  Gini(t) = 1 - Σ_j [p(j|t)]^2
  (NOTE: the conditional probability p(j|t) is computed as the relative frequency of class j at node t.)
- Example: two classes C1 and C2, and node t has 5 C1 and 5 C2 examples. Compute Gini(t):
  - 1 – [p(C1|t)^2 + p(C2|t)^2] = 1 – [(5/10)^2 + (5/10)^2] = 1 – [¼ + ¼] = ½
  - Do you think this Gini value indicates a good split or a bad split? Is it an extreme value?
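As a quick illustration, here is a small Python helper (our own naming, not from any library) that computes the node Gini exactly as defined above:

```python
from collections import Counter

def gini_node(labels):
    """Gini(t) = 1 - sum_j p(j|t)^2, with p(j|t) the relative frequency of class j."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

# The slide's example: a node with 5 C1 and 5 C2 records.
print(gini_node(["C1"] * 5 + ["C2"] * 5))   # 0.5 -- the most impure value for 2 classes
```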

35 More on Gini
- The worst Gini corresponds to class probabilities of 1/n_c each, where n_c is the number of classes.
  - For 2-class problems the worst Gini is ½.
- How do we get the best Gini? Come up with an example for a node t with 10 examples from classes C1 and C2.
  - 10 C1 and 0 C2.
  - Now what is the Gini?
    - 1 – [(10/10)^2 + (0/10)^2] = 1 – [1 + 0] = 0
  - So 0 is the best Gini.
- So for 2-class problems, Gini varies from 0 (best) to ½ (worst).

36 Some More Examples
- Below are the Gini values for four nodes with different class distributions, ordered from best to worst; see the next slide for details.
  - Note that thus far we are only computing Gini for a single node. We still need to compute it for a split and then compute the change in Gini from the parent node.

37 Examples for Computing GINI
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Gini = 1 – P(C1)^2 – P(C2)^2 = 1 – 0 – 1 = 0
- P(C1) = 1/6, P(C2) = 5/6: Gini = 1 – (1/6)^2 – (5/6)^2 = 0.278
- P(C1) = 2/6, P(C2) = 4/6: Gini = 1 – (2/6)^2 – (4/6)^2 = 0.444

38 Splitting Based on GINI
- When a node p is split into k partitions (children), the quality of the split is computed as
  Gini_split = Σ_{i=1..k} (n_i / n) Gini(i)
  where n_i is the number of records at child i and n is the number of records at node p.
- This should make sense: it is a weighted average, where each weight is proportional to the number of examples at the child node.
  - If there are two child nodes and one has 5 examples while the other has 10, the first node has weight 5/15 and the second has weight 10/15.
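A minimal sketch of the weighted split computation, assuming each child node is given as a list of class labels (the helper names are ours):

```python
from collections import Counter

def gini_node(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_split(children):
    """children: list of label lists, one per child node; returns the weighted Gini."""
    n = sum(len(ys) for ys in children)
    return sum(len(ys) / n * gini_node(ys) for ys in children)

# Two children of sizes 5 and 10, so the weights are 5/15 and 10/15 as in the slide.
print(gini_split([["C1"] * 5, ["C1"] * 4 + ["C2"] * 6]))
```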

39 Computing GINI for a Binary Feature
[Figure: node B splits on Yes/No into nodes N1 and N2.]
- A binary feature splits the records into two partitions.
- The weighting gives more importance to larger partitions.
- Are we better off stopping at node B or splitting?
  - Compare Gini before and after the split. Do it!

40 Computing GINI for a Binary Feature
- The split produces two partitions: N1 receives 7 records and N2 receives 5.
- Gini(N1) = 1 – (5/7)^2 – (2/7)^2 ≈ 0.408
- Gini(N2) = 1 – (1/5)^2 – (4/5)^2 = 0.320
- Gini(children) = 7/12 × 0.408 + 5/12 × 0.320 ≈ 0.371
- Are we better off stopping at node B or splitting? Compare impurity before and after.

41 Categorical Attributes: Computing the Gini Index
- For each distinct value, gather the counts for each class in the dataset.
- Use the count matrix to make decisions.
- Exercise: compute the Ginis for the multi-way split and for the two-way splits (find the best partition of values).

42 Continuous Attributes: Computing the Gini Index
- Use binary decisions based on one value.
- Several choices for the splitting value:
  - The number of possible splitting values equals the number of distinct values.
- Each splitting value v has a count matrix associated with it:
  - Class counts in each of the partitions, A < v and A >= v.
- Simple method to choose the best v:
  - For each v, scan the database to gather the count matrix and compute its Gini index.
  - Computationally inefficient! It repeats work.

43 Continuous Attributes: Computing the Gini Index...
- For efficient computation, for each attribute:
  - Sort the records on the attribute's values.
  - Linearly scan these values, each time updating the count matrix and computing the Gini index.
  - Choose the split position that has the smallest Gini index.
[Figure: sorted attribute values with candidate split positions between adjacent values.]
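Below is one possible Python sketch of the sort-then-scan idea. The class counts are updated incrementally instead of rescanning the data for every candidate value; the sample values are only loosely modeled on the slides' taxable-income example, and taking the midpoint between adjacent distinct values as the candidate split is an assumption.

```python
from collections import Counter

def gini_counts(counts, n):
    return 1.0 - sum((c / n) ** 2 for c in counts.values()) if n else 0.0

def best_numeric_split(values, labels):
    """Return (split_value, weighted_gini) minimizing Gini for the test A < v."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    left, right = Counter(), Counter(lab for _, lab in pairs)
    best = (None, float("inf"))
    for i in range(1, n):
        v_prev, l_prev = pairs[i - 1]
        left[l_prev] += 1                      # move one record from the right side to the left
        right[l_prev] -= 1
        if pairs[i][0] == v_prev:              # only consider splits between distinct values
            continue
        split = (v_prev + pairs[i][0]) / 2     # midpoint as the candidate split position
        g = i / n * gini_counts(left, i) + (n - i) / n * gini_counts(right, n - i)
        if g < best[1]:
            best = (split, g)
    return best

print(best_numeric_split([60, 70, 75, 85, 90, 95, 100, 120, 125, 220],
                         ["N", "N", "N", "Y", "Y", "Y", "N", "N", "N", "N"]))
# -> (97.5, 0.3) for this illustrative data
```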

44 Alternative Splitting Criteria Based on INFO
- Entropy at a given node t:
  Entropy(t) = - Σ_j p(j|t) log2 p(j|t)
  (NOTE: p(j|t) is the relative frequency of class j at node t.)
  - Measures the homogeneity of a node.
    - Maximum (log2 n_c, where n_c is the number of classes) when records are equally distributed among all classes, implying the least information.
    - Minimum (0.0) when all records belong to one class, implying the most information.
  - Entropy-based computations are similar to the Gini index computations.
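A short Python illustration of the entropy formula (with 0 log2 0 treated as 0):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    probs = [c / n for c in Counter(labels).values()]
    return -sum(p * math.log2(p) for p in probs if p > 0)   # skip p = 0 terms

print(entropy(["C1"] * 3 + ["C2"] * 3))   # 1.0 -- evenly split 2-class node (maximum)
print(entropy(["C1"] * 1 + ["C2"] * 5))   # ~0.65, matching the worked example on the next slide
```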

45 Examples for Computing Entropy
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Entropy = – 0 log2 0 – 1 log2 1 = – 0 – 0 = 0
- P(C1) = 1/6, P(C2) = 5/6: Entropy = – (1/6) log2(1/6) – (5/6) log2(5/6) = 0.65
- P(C1) = 2/6, P(C2) = 4/6: Entropy = – (2/6) log2(2/6) – (4/6) log2(4/6) = 0.92
- P(C1) = 3/6 = 1/2, P(C2) = 3/6 = 1/2: Entropy = – (1/2) log2(1/2) – (1/2) log2(1/2) = –(1/2)(–1) – (1/2)(–1) = ½ + ½ = 1

46 How to Calculate log2 x
- Many calculators only have buttons for log10 x and loge x (note that "log" typically means log10).
- You can calculate the log for any base b as follows:
  - logb(x) = logk(x) / logk(b)
  - Thus log2(x) = log10(x) / log10(2).
  - Since log10(2) ≈ 0.301, just calculate the log base 10 and divide by 0.301 to get the log base 2.
  - You can use this for the homework if needed.
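For example, the change-of-base rule checked in Python:

```python
import math

x = 8
print(math.log10(x) / math.log10(2))   # 3.0 via the change-of-base rule
print(math.log2(x))                    # 3.0 -- the direct form gives the same value
```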

47 Splitting Based on INFO...
- Information gain when a parent node p is split into k partitions, where n_i is the number of records in partition i:
  Gain_split = Entropy(p) - Σ_{i=1..k} (n_i / n) Entropy(i)
  - Measures the reduction in entropy achieved by the split. Choose the split that achieves the most reduction (maximizes GAIN).
  - Used in ID3 and C4.5.
  - Disadvantage: tends to prefer splits that result in a large number of partitions, each small but pure.
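A small sketch of the gain computation, assuming the parent's labels and each child's labels are given as plain lists:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    """parent: list of labels; children: list of label lists produced by the split."""
    n = len(parent)
    return entropy(parent) - sum(len(ys) / n * entropy(ys) for ys in children)

parent = ["Y"] * 3 + ["N"] * 3
print(information_gain(parent, [["Y", "Y", "Y"], ["N", "N", "N"]]))   # 1.0 -- a perfect split
```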

48 Splitting Criteria Based on Classification Error
- Classification error at a node t:
  Error(t) = 1 - max_j p(j|t)
- Measures the misclassification error made by a node.
  - Maximum (1 - 1/n_c) when records are equally distributed among all classes, implying the least interesting information.
  - Minimum (0.0) when all records belong to one class, implying the most interesting information.
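A one-line illustration in Python:

```python
from collections import Counter

def classification_error(labels):
    """Error(t) = 1 - max_j p(j|t)."""
    n = len(labels)
    return 1.0 - max(Counter(labels).values()) / n

print(classification_error(["C1"] * 1 + ["C2"] * 5))   # 1/6, matching the next slide's example
```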

49 Examples for Computing Error
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Error = 1 – max(0, 1) = 1 – 1 = 0
- P(C1) = 1/6, P(C2) = 5/6: Error = 1 – max(1/6, 5/6) = 1 – 5/6 = 1/6
- P(C1) = 2/6, P(C2) = 4/6: Error = 1 – max(2/6, 4/6) = 1 – 4/6 = 1/3

50 Comparison Among Splitting Criteria
[Figure: for a 2-class problem, entropy, Gini, and misclassification error plotted as functions of p, the fraction of records in one class; all three are 0 at p = 0 and p = 1 and peak at p = 0.5.]

51 Misclassification Error vs. Gini
[Figure: a parent node with 10 records is split by attribute A into node N1 (Yes branch, 3 records, all of one class) and node N2 (No branch, 7 records: 4 of one class and 3 of the other).]
- What happens to Gini?
  - Gini(N1) = 1 – (3/3)^2 – (0/3)^2 = 0
  - Gini(N2) = 1 – (4/7)^2 – (3/7)^2 = 0.489
  - Gini(children) = 3/10 × 0 + 7/10 × 0.489 = 0.342
- Gini improves!

52 Misclassification Error vs. Gini
- What happens to the error rate?
  - What is the error rate at the root? 3/10.
  - After the split? 3/10 × 0 + 7/10 × 3/7 = 21/70 = 3/10.
- The error rate does not change! But the split is probably useful, because N1 is in good shape and maybe we can now split N2 further.

53 Discussion
- Error rate is often the metric used to evaluate a classifier (though not always).
  - So it seems reasonable to use error rate to determine the best split.
  - That is, why not just use a splitting metric that matches the ultimate evaluation metric?
  - But this is wrong!
    - The reason is related to the fact that decision trees use a greedy strategy, so we need a splitting metric that leads to globally better results.
    - The other metrics empirically outperform error rate, although there is no proof of this.

54 Decision Tree Based Classification
- Performance criteria:
  - Time to construct the classifier
  - Time to classify unknown records
  - Interpretability
  - Predictive ability
- How do you think DTs perform on these criteria?
  - Inexpensive to construct
  - Extremely fast at classifying unknown records
  - Easy to interpret
  - Accuracy comparable to other classification techniques for many simple data sets

55 Example: C4.5
- Simple depth-first construction.
- Uses information gain.
- Sorts continuous attributes at each node.
- Needs the entire dataset to fit in memory.
- Unsuitable for large datasets: would need out-of-core sorting.
- You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz

56 Practical Issues of Classification
- Underfitting and overfitting
- Missing values
- Costs of classification

57 Underfitting and Overfitting
[Figure: training and test error versus tree size; once the tree overfits, test error rises while training error keeps falling.]
- Underfitting: when the model is too simple, both training and test errors are large.
- A validation set is often used to determine the proper number of nodes.

58 Overfitting Due to Noise
[Figure: the decision boundary is distorted by a noise point.]

59 Overfitting Due to Insufficient Examples
- The lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region.
  - An insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.
- Solid circles: training data. Hollow circles: test data.

60 Notes on Overfitting
- Overfitting results in decision trees that are more complex than necessary.
- Training error no longer provides a good estimate of how well the tree will perform on previously unseen records.
- We need new ways of estimating errors.

61 Estimating Generalization Errors
- Re-substitution errors: error on the training set (Σ e(t)).
- Generalization errors: error on the test set (Σ e'(t)).
- Methods for estimating generalization errors:
  - Optimistic approach: e'(t) = e(t).
  - Pessimistic approach:
    - For each leaf node: e'(t) = e(t) + 0.5, where e(t) is the number of misclassified records at the leaf.
    - Total errors: e'(T) = e(T) + N × 0.5, where N is the number of leaf nodes.
    - For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances): training error = 10/1000 = 1%; generalization error = (10 + 30 × 0.5)/1000 = 2.5%.
  - Reduced error pruning (REP): uses a validation data set to estimate generalization error. Simple to do, but requires more data.
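The pessimistic estimate from the slide, as a tiny Python check:

```python
def pessimistic_error(train_errors, n_leaves, n_train):
    """e'(T) = (e(T) + 0.5 * N) / n_train, the slide's pessimistic estimate."""
    return (train_errors + 0.5 * n_leaves) / n_train

print(pessimistic_error(10, 30, 1000))   # 0.025 -> 2.5%, matching the slide's example
```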

62 Occam's Razor
- Given two models with similar generalization errors, one should prefer the simpler model.
- For complex models, there is a greater chance that the model was fitted accidentally to errors in the data.
- Therefore, one should include model complexity when evaluating a model.

63 Occam's Razor
- Some people argue against Occam's razor:
  - There will always be a possible world where the simplest model is not best (no free lunch).
  - One could argue, based solely on empirical evidence, that because the sun has risen every day, it is as likely as not that it will not rise tomorrow.
  - They would argue that induction does not work!
    - This may be silly, but there is something to it.
    - We do need an inductive, or extra-evidentiary, leap.
    - Induction is impossible without a bias (which is not the same meaning of "bias" as in statistics).
  - Occam's razor is one example of a bias.
  - Without a bias, how could you say anything about an instance you have never seen before? You could not generalize at all.

64 Minimum Description Length (MDL)
- Cost(Model, Data) = Cost(Data | Model) + Cost(Model)
  - Cost is the number of bits needed for encoding.
  - Search for the least costly model.
- Cost(Data | Model) encodes the misclassification errors.
- Cost(Model) uses node encoding (number of children) plus splitting-condition encoding.

65 How to Address Overfitting
- Pre-pruning (early stopping rule):
  - Stop the algorithm before it grows a fully-grown tree.
  - Typical stopping conditions for a node:
    - Stop if all instances belong to the same class.
    - Stop if all the attribute values are the same.
  - More restrictive conditions:
    - Stop if the number of instances is less than some user-specified threshold.
    - Stop if the class distribution of the instances is independent of the available features (e.g., using a chi-squared test).
    - Stop if expanding the current node does not improve the impurity measure (e.g., Gini or information gain).

66 How to Address Overfitting...
- Post-pruning:
  - Grow the decision tree to its entirety.
  - Trim the nodes of the decision tree in a bottom-up fashion.
  - If the generalization error improves after trimming, replace the sub-tree with a leaf node.
  - The class label of the new leaf node is determined from the majority class of instances in the sub-tree.
  - MDL can be used for post-pruning.

67 Other Issues
- Data fragmentation
- Search strategy
- Expressiveness
- Tree replication

68 Data Fragmentation
- The number of instances gets smaller as you traverse down the tree.
- The number of instances at the leaf nodes can become too small to make any statistically significant decision.

69 Search Strategy
- Finding an optimal decision tree is NP-hard.
- The algorithm presented so far uses a greedy, top-down, recursive partitioning strategy to induce a reasonable solution.

70 Expressiveness
- A decision tree provides an expressive representation for learning discrete-valued functions.
  - But decision trees do not generalize well to certain types of Boolean functions.
    - Example: the parity function:
      - Class = 1 if there is an even number of Boolean attributes with truth value True.
      - Class = 0 if there is an odd number of Boolean attributes with truth value True.
    - For accurate modeling, the tree must be complete.
- Not expressive enough for modeling continuous variables.
  - Particularly when the test condition involves only a single attribute at a time.

71 Decision Boundary
- The border between two neighboring regions of different classes is known as the decision boundary.
- The decision boundary is parallel to the axes because each test condition involves a single attribute at a time.

72 Oblique Decision Trees
[Figure: a single oblique split such as x + y < 1 separates Class = + from Class = –.]
- The test condition may involve multiple attributes.
- More expressive representation.
- Finding the optimal test condition is computationally expensive.

73 Tree Replication
- The same subtree can appear in multiple branches.

74 Model Evaluation
- Metrics for performance evaluation: how to evaluate the performance of a model?
- Methods for performance evaluation: how to obtain reliable estimates?

75 Metrics for Performance Evaluation
- Focus on the predictive capability of a model, rather than how fast it classifies or builds models, scalability, etc.
- Confusion matrix (rows = actual class, columns = predicted class):

                  Predicted Yes    Predicted No
  Actual Yes      a                b
  Actual No       c                d

  a: TP (true positive), b: FN (false negative), c: FP (false positive), d: TN (true negative)

76 Metrics for Performance Evaluation...
- Most widely used metric:
  Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)
- Error rate = 1 - accuracy
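In code, using the confusion-matrix cells (the counts here are the ones from model M1 on the cost slide a few slides below):

```python
def accuracy(tp, fn, fp, tn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + fn + fp + tn)

print(accuracy(tp=150, fn=40, fp=60, tn=250))        # 0.80
print(1 - accuracy(tp=150, fn=40, fp=60, tn=250))    # 0.20 error rate
```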

77 Limitation of Accuracy
- Consider a 2-class problem:
  - Number of class 0 examples = 9990
  - Number of class 1 examples = 10
- If the model predicts everything to be class 0, accuracy is 9990/10000 = 99.9%.
  - Accuracy is misleading because the model does not detect any class 1 example.

78 Cost Matrix
- C(i|j): the cost of misclassifying a class j example as class i.

                  Predicted Yes    Predicted No
  Actual Yes      C(Yes|Yes)       C(No|Yes)
  Actual No       C(Yes|No)        C(No|No)

79 Computing the Cost of Classification
- Cost matrix C(i|j):
                  Predicted +    Predicted -
  Actual +        -1             100
  Actual -        1              0
- Model M1 confusion matrix (Accuracy = 80%, Cost = 3910):
                  Predicted +    Predicted -
  Actual +        150            40
  Actual -        60             250
- Model M2 confusion matrix (Accuracy = 90%, Cost = 4255):
                  Predicted +    Predicted -
  Actual +        250            45
  Actual -        5              200
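A small sketch of the cost computation; the nested-dictionary layout of the matrices is our own choice, but the numbers are the slide's:

```python
def total_cost(confusion, costs):
    """confusion[actual][predicted] = count; costs[actual][predicted] = C(predicted|actual)."""
    return sum(confusion[a][p] * costs[a][p] for a in confusion for p in confusion[a])

costs = {"+": {"+": -1, "-": 100}, "-": {"+": 1, "-": 0}}
m1 = {"+": {"+": 150, "-": 40}, "-": {"+": 60, "-": 250}}
m2 = {"+": {"+": 250, "-": 45}, "-": {"+": 5, "-": 200}}
print(total_cost(m1, costs))   # 3910
print(total_cost(m2, costs))   # 4255
```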

80 Cost-Sensitive Measures
- Built from the confusion matrix (a = TP, b = FN, c = FP, d = TN):
  - Precision p = a / (a + c)
  - Recall r = a / (a + b)
  - F-measure F = 2rp / (r + p) = 2a / (2a + b + c)
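Assuming the standard definitions above, a short Python version (using the same hypothetical counts as the earlier accuracy example):

```python
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f_measure(tp, fn, fp):
    """F = 2rp / (r + p) = 2*TP / (2*TP + FN + FP)."""
    return 2 * tp / (2 * tp + fn + fp)

print(precision(tp=150, fp=60))      # ~0.714
print(recall(tp=150, fn=40))         # ~0.789
print(f_measure(150, 40, 60))        # 0.75
```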

81 Model Evaluation
- Metrics for performance evaluation: how to evaluate the performance of a model?
- Methods for performance evaluation: how to obtain reliable estimates?

82 Methods for Performance Evaluation
- How do we obtain a reliable estimate of performance?
- The measured performance of a model may depend on factors other than the learning algorithm:
  - Class distribution
  - Cost of misclassification
  - Size of the training and test sets

83 Learning Curve
- A learning curve shows how accuracy changes with varying training sample size.
- Requires a sampling schedule for creating the learning curve:
  - Arithmetic sampling (Langley et al.)
  - Geometric sampling (Provost et al.)

84 Methods of Estimation
- Holdout: reserve 2/3 of the data for training and 1/3 for testing.
- Random subsampling: repeated holdout.
- Cross-validation:
  - Partition the data into k disjoint subsets.
  - k-fold: train on k-1 partitions, test on the remaining one.
  - Leave-one-out: k = n.
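A minimal k-fold split using only the Python standard library; the fold construction below is just one simple way to do it (real projects would typically use an existing implementation such as scikit-learn's KFold):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Partition indices 0..n-1 into k disjoint folds of (nearly) equal size."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

n, k = 12, 4
for test_fold in k_fold_indices(n, k):
    train = [i for i in range(n) if i not in test_fold]   # train on the other k-1 folds
    print("test:", sorted(test_fold), "train size:", len(train))
```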

