Data Mining Decision Tree Induction


Classification Techniques
- Linear Models
- Support Vector Machines
- Decision Tree based Methods
- Rule-based Methods
- Memory based reasoning
- Neural Networks
- Naïve Bayes and Bayesian Belief Networks

Example of a Decision Tree
The training data has two categorical attributes (Refund, MarSt), one continuous attribute (TaxInc), and a class label. Model (decision tree), with Refund as the splitting attribute at the root:
- Refund = Yes -> NO
- Refund = No -> test MarSt
  - MarSt = Married -> NO
  - MarSt = Single or Divorced -> test TaxInc
    - TaxInc < 80K -> NO
    - TaxInc > 80K -> YES

Another Decision Tree Example
A different tree for the same data (categorical: Refund, MarSt; continuous: TaxInc; class label):
- MarSt = Married -> NO
- MarSt = Single or Divorced -> test Refund
  - Refund = Yes -> NO
  - Refund = No -> test TaxInc
    - TaxInc < 80K -> NO
    - TaxInc > 80K -> YES
More than one tree may perfectly fit the same data.

Decision Tree Classification Task

Apply Model to Test Data
Start from the root of the tree and, at each internal node, follow the branch that matches the test record's attribute value:
- Test Refund: the record has Refund = No, so follow the No branch to MarSt.
- Test MarSt: the record is Married, so follow the Married branch.
- The Married branch leads to a leaf labelled NO, so assign Cheat = "No" to the test record.

Decision Tree Terminology

Decision Tree Induction
Many algorithms exist:
- Hunt's Algorithm (one of the earliest)
- CART
- ID3, C4.5
- SLIQ, SPRINT
John Ross Quinlan is a computer science researcher in data mining and decision theory. He has contributed extensively to the development of decision tree algorithms, including inventing the canonical C4.5 and ID3 algorithms.

Decision Tree Classifier (Ross Quinlan)
Insects plotted by Abdomen Length and Antenna Length (grasshoppers vs. katydids) can be classified by the tree:
- Abdomen Length > 7.1? yes -> Katydid
- no -> Antenna Length > 6.0? yes -> Katydid; no -> Grasshopper

Decision trees predate computers
A dichotomous insect identification key is itself a decision tree: tests such as "Antennae shorter than body?", "3 tarsi?", and "Foretibia has ears?" separate Grasshopper, Cricket, Katydid, and Camel Cricket.

Definition
A decision tree is a classifier in the form of a tree structure:
- Decision node: specifies a test on a single attribute
- Leaf node: indicates the value of the target attribute
- Arc/edge: one outcome of the split on an attribute
- Path: a conjunction of tests leading to the final decision
Decision trees classify instances or examples by starting at the root of the tree and following the tests until a leaf node is reached.

Decision Tree Classification
Decision tree generation consists of two phases:
- Tree construction
  - At the start, all the training examples are at the root
  - Partition examples recursively based on selected attributes
  - This can also be called supervised segmentation, emphasizing that we are segmenting the instance space
- Tree pruning
  - Identify and remove branches that reflect noise or outliers

Decision Tree Representation
- Each internal node tests an attribute
- Each branch corresponds to an attribute value
- Each leaf node assigns a classification
Example (play-tennis tree): outlook = sunny -> test humidity (high -> no, normal -> yes); outlook = overcast -> yes; outlook = rain -> test wind (strong -> no, weak -> yes).

How do we Construct a Decision Tree?
Basic algorithm (a greedy algorithm):
- The tree is constructed in a top-down, recursive, divide-and-conquer manner
- At the start, all the training examples are at the root
- Examples are partitioned recursively based on selected attributes
- Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
Why do we call this a greedy algorithm? Because it makes locally optimal decisions at each node.
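For concreteness, here is a minimal Python sketch of this greedy, top-down procedure for categorical attributes (essentially ID3-style, using information gain). The helper names and the dictionary-based tree representation are illustrative assumptions, not part of the original slides.

```python
# Greedy top-down decision tree induction on categorical attributes (sketch).
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    n = len(labels)
    children = {}
    for row, y in zip(rows, labels):
        children.setdefault(row[attr], []).append(y)
    weighted = sum(len(ys) / n * entropy(ys) for ys in children.values())
    return entropy(labels) - weighted

def build_tree(rows, labels, attrs):
    # Stopping conditions: pure node, or no attributes / no rows left.
    if len(set(labels)) == 1:
        return labels[0]
    if not attrs or not rows:
        return Counter(labels).most_common(1)[0][0]   # majority class leaf
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))  # greedy choice
    node = {"attr": best, "children": {}}
    for value in set(row[best] for row in rows):
        subset = [(r, y) for r, y in zip(rows, labels) if r[best] == value]
        sub_rows, sub_labels = zip(*subset)
        node["children"][value] = build_tree(list(sub_rows), list(sub_labels),
                                             [a for a in attrs if a != best])
    return node

# Tiny illustrative usage (made-up records); prints a nested dict of tests and leaves.
rows = [{"weight": "heavy", "hair": "short"}, {"weight": "light", "hair": "long"},
        {"weight": "heavy", "hair": "long"},  {"weight": "light", "hair": "short"}]
labels = ["M", "F", "M", "M"]
print(build_tree(rows, labels, ["weight", "hair"]))
```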

When Do we Stop Partitioning?
- All samples for a node belong to the same class
- No remaining attributes (majority voting is used to assign the class)
- No samples left

How to Pick the Locally Optimal Split
Hunt's algorithm: recursively partition training records into successively purer subsets.
How do we measure purity/impurity?
- Entropy and the associated information gain
- Gini index
- Classification error rate (never used in practice, but good for understanding and simple exercises)

How to Determine Best Split
Before splitting: 10 records of class 0 and 10 records of class 1.
Which test condition is the best? Why is student ID a bad feature to use?

How to Determine Best Split
Greedy approach: nodes with a homogeneous class distribution are preferred, so we need a measure of node impurity.
- Non-homogeneous: high degree of impurity
- Homogeneous: low degree of impurity

Information Theory
Think of playing "20 questions": I am thinking of an integer between 1 and 1,000 -- what is it?
What question would you ask first, and why?
Entropy measures how much more information you need before you can identify the integer. Initially, there are 1,000 possible values, which we assume are equally likely.
What is the maximum number of questions you need to ask?
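The slide leaves the question open; as a quick aside (not on the slide), each yes/no question at best halves the remaining possibilities, so k questions distinguish at most 2^k equally likely values:

```python
# Smallest k with 2**k >= 1000: 10 questions suffice for an integer in 1..1000.
import math
print(math.ceil(math.log2(1000)))  # 10
print(2 ** 9, 2 ** 10)             # 512 < 1000 <= 1024
```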

Entropy
The entropy (disorder, impurity) of a set of examples S, relative to a binary classification, is:

    Entropy(S) = - p1 log2(p1) - p0 log2(p0)

where p1 is the fraction of positive examples in S and p0 is the fraction of negatives.
- If all examples are in one category, entropy is zero (we define 0 log(0) = 0).
- If examples are equally mixed (p1 = p0 = 0.5), entropy is at its maximum of 1.
For multi-class problems with c categories, entropy generalizes to:

    Entropy(S) = - sum_{i=1..c} p_i log2(p_i)

Entropy for Binary Classification
The entropy is 0 if the outcome is certain. The entropy is maximal if we have no knowledge of the system (i.e., every outcome is equally likely).
(Figure: entropy of a 2-class problem plotted as a function of the proportion of examples in one of the two classes.)

Information Gain in Decision Tree Induction
Information gain is the expected reduction in entropy caused by partitioning the examples according to a given attribute. Assume that splitting on attribute A partitions the current set S into child sets S_1, ..., S_k. The information gained by branching on A is:

    Gain(S, A) = Entropy(S) - sum_i (|S_i| / |S|) Entropy(S_i)

Note that each child's entropy is weighted by the fraction of the total examples that fall into that child. The same weighting applies to GINI and to the error rate.

Examples for Computing Entropy
NOTE: p(j | t) is computed as the relative frequency of class j at node t.
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Entropy = - 0 log2(0) - 1 log2(1) = - 0 - 0 = 0
- P(C1) = 1/6, P(C2) = 5/6
  Entropy = - (1/6) log2(1/6) - (5/6) log2(5/6) = 0.65
- P(C1) = 2/6, P(C2) = 4/6
  Entropy = - (2/6) log2(2/6) - (4/6) log2(4/6) = 0.92
- P(C1) = 3/6 = 1/2, P(C2) = 3/6 = 1/2
  Entropy = - (1/2) log2(1/2) - (1/2) log2(1/2) = (1/2) + (1/2) = 1
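A quick numeric check of these values (a sketch, not from the slides; the helper name binary_entropy is illustrative):

```python
# Binary entropy from the fraction p1 of one class at the node.
from math import log2

def binary_entropy(p1):
    p0 = 1.0 - p1
    return sum(-p * log2(p) for p in (p0, p1) if p > 0)  # treat 0*log(0) as 0

print(round(binary_entropy(0/6), 4))  # 0.0     (pure node)
print(round(binary_entropy(1/6), 4))  # 0.65
print(round(binary_entropy(2/6), 4))  # 0.9183  (the slide rounds to 0.92)
print(round(binary_entropy(3/6), 4))  # 1.0     (equally mixed)
```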

How to Calculate log2(x)
Many calculators only have buttons for log10(x) and loge(x) ("log" typically means log10). You can calculate the log for any base b as follows: logb(x) = logk(x) / logk(b). Thus log2(x) = log10(x) / log10(2). Since log10(2) ≈ 0.301, just calculate the log base 10 and divide by 0.301 to get the log base 2. You can use this for HW if needed.
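The same change-of-base trick, checked in Python (a small illustrative snippet, not from the slides):

```python
# Three equivalent ways to compute log base 2.
import math

x = 6.0
print(math.log10(x) / math.log10(2))  # change-of-base formula
print(math.log(x, 2))                 # log with an explicit base
print(math.log2(x))                   # built-in log base 2; all three agree
```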

Splitting Based on INFO...
Information Gain: the parent node p is split into k partitions, where n_i is the number of records in partition i (and n is the total number of records at p):

    GAIN_split = Entropy(p) - sum_{i=1..k} (n_i / n) Entropy(i)

- Uses a weighted average of the child nodes' entropies, where each weight is based on the number of examples in the child
- Used in the ID3 and C4.5 decision tree learners; WEKA's J48 is a Java version of C4.5
- Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure
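A small sketch (not from the slides) of that disadvantage: an ID-like attribute that gives every record its own partition achieves the maximum possible gain, even though it is useless for generalization.

```python
# Information gain of a sensible binary split vs. a "student id" style split.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain(labels, partitions):
    n = len(labels)
    return entropy(labels) - sum(len(p) / n * entropy(p) for p in partitions)

labels = ["+"] * 10 + ["-"] * 10
# A reasonable binary split with mostly-pure children:
print(gain(labels, [["+"] * 8 + ["-"] * 2, ["+"] * 2 + ["-"] * 8]))  # ~0.278
# One record per partition: gain equals the full parent entropy.
print(gain(labels, [[y] for y in labels]))                            # 1.0
```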

How to Split on Continuous Attributes?
For continuous attributes:
- Partition the continuous values of attribute A into a discrete set of intervals, or
- Create a new boolean attribute A_c by choosing a threshold c
How do we choose c? One method is to try all possible splits and keep the best one.
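One common way to enumerate "all possible splits" is to take midpoints between consecutive distinct sorted values; this is a sketch of an assumed approach, not necessarily the slides' exact method, and the income values are an illustrative list:

```python
# Candidate thresholds c for a boolean test A < c on a continuous attribute.
def candidate_thresholds(values):
    distinct = sorted(set(values))
    return [(a + b) / 2.0 for a, b in zip(distinct, distinct[1:])]

taxable_income = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]
print(candidate_thresholds(taxable_income))
# [65.0, 72.5, 80.0, 87.5, 92.5, 97.5, 110.0, 122.5, 172.5]
```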

Training data for the worked example (the last row, Comic, is a new instance to classify):

Person   Hair Length   Weight   Age   Class
Homer    0"            250      36    M
Marge    10"           150      34    F
Bart     2"            90       10    M
Lisa     6"            78       8     F
Maggie   4"            20       1     F
Abe      1"            170      70    M
Selma    8"            160      41    F
Otto     -             180      38    M
Krusty   -             200      45    M
Comic    8"            290      38    ?

(Hair lengths for Otto and Krusty are not shown; both are greater than 5".)

Let us try splitting on Hair Length (Hair Length <= 5?)
Entropy(4F,5M) = -(4/9)log2(4/9) - (5/9)log2(5/9) = 0.9911
yes branch: Entropy(1F,3M) = -(1/4)log2(1/4) - (3/4)log2(3/4) = 0.8113
no branch:  Entropy(3F,2M) = -(3/5)log2(3/5) - (2/5)log2(2/5) = 0.9710
Gain(Hair Length <= 5) = 0.9911 - (4/9 * 0.8113 + 5/9 * 0.9710) = 0.0911

Let us try splitting on Weight (Weight <= 160?)
Entropy(4F,5M) = -(4/9)log2(4/9) - (5/9)log2(5/9) = 0.9911
yes branch: Entropy(4F,1M) = -(4/5)log2(4/5) - (1/5)log2(1/5) = 0.7219
no branch:  Entropy(0F,4M) = -(0/4)log2(0/4) - (4/4)log2(4/4) = 0
Gain(Weight <= 160) = 0.9911 - (5/9 * 0.7219 + 4/9 * 0) = 0.5900

Let us try splitting on Age (Age <= 40?)
Entropy(4F,5M) = -(4/9)log2(4/9) - (5/9)log2(5/9) = 0.9911
yes branch: Entropy(3F,3M) = -(3/6)log2(3/6) - (3/6)log2(3/6) = 1
no branch:  Entropy(1F,2M) = -(1/3)log2(1/3) - (2/3)log2(2/3) = 0.9183
Gain(Age <= 40) = 0.9911 - (6/9 * 1 + 3/9 * 0.9183) = 0.0183
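A quick check of the three candidate splits above, working directly from the (females, males) counts in each branch (a sketch, not from the slides):

```python
# Information gain from class-count tuples at the parent and in each branch.
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c)

def gain(parent, branches):
    n = sum(parent)
    children = sum(sum(b) / n * entropy(b) for b in branches)
    return entropy(parent) - children

parent = (4, 5)  # 4 females, 5 males
print(round(gain(parent, [(1, 3), (3, 2)]), 4))  # Hair Length <= 5 -> 0.0911
print(round(gain(parent, [(4, 1), (0, 4)]), 4))  # Weight <= 160    -> ~0.5900
print(round(gain(parent, [(3, 3), (1, 2)]), 4))  # Age <= 40        -> 0.0183
```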

Of the 3 features we had, Weight was best. But while people who weigh over 160 are perfectly classified (as males), the under-160 people are not perfectly classified, so we simply recurse on that subset.
This time we find that we can split on Hair Length (Hair Length <= 2?), and we are done!

The final tree: we don't need to keep the data around, just the test conditions.
- Weight <= 160? no -> Male
- Weight <= 160? yes -> Hair Length <= 2? yes -> Male; no -> Female
How would these people be classified?

It is trivial to convert decision trees to rules.
Rules to classify Males/Females:
- If Weight greater than 160, classify as Male
- Else if Hair Length less than or equal to 2, classify as Male
- Else classify as Female
Note: we could avoid the use of "else if" by specifying all test conditions on the path from the root to the corresponding leaf.
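The same rule set written as code (a sketch; the function and parameter names are illustrative):

```python
# Decision tree rules as nested if/elif/else.
def classify(weight, hair_length):
    if weight > 160:
        return "Male"
    elif hair_length <= 2:
        return "Male"
    else:
        return "Female"

print(classify(weight=290, hair_length=8))  # the Comic instance -> "Male"
```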

Once we have learned the decision tree, we don't even need a computer! This decision tree is attached to a medical machine and is designed to help nurses decide what type of doctor to call.
(Figure caption: decision tree for a typical shared-care setting applying the system for the diagnosis of prostatic obstructions.)

The worked examples we have seen were performed on small datasets. However, with small datasets there is a great danger of overfitting the data: when you have few data points, there are many possible splitting rules that perfectly classify the training data but will not generalize to future datasets.
For example, the rule "Wears green?" (Yes -> Female, No -> Male) perfectly classifies the data; so does "Mother's name is Jacqueline?", and so does "Has blue shoes?".

GINI is Another Measure of Impurity
The Gini index for a given node t, with classes j:

    GINI(t) = 1 - sum_j [ p(j | t) ]^2

NOTE: p(j | t) is again computed as the relative frequency of class j at node t.
Compute the best split by finding the partition that yields the lowest GINI, where we again take the weighted average of the children's GINI values.
For a 2-class problem: best GINI = 0.0, worst GINI = 0.5.
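A minimal sketch (not from the slides) of the Gini index computed from class counts at a node:

```python
# Gini impurity from class counts; 0.0 is pure, 0.5 is the 2-class worst case.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(gini((0, 6)))  # 0.0   (pure node, best case)
print(gini((1, 5)))  # ~0.278
print(gini((3, 3)))  # 0.5   (equally mixed 2-class node, worst case)
```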

Splitting Criteria Based on Classification Error
Classification error at a node t:

    Error(t) = 1 - max_j p(j | t)

- Measures the misclassification error made by a node.
- Maximum (1 - 1/nc) when records are equally distributed among all nc classes, implying the least interesting information; this is 1/2 for 2-class problems.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.

Examples for Computing Error
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Error = 1 - max(0, 1) = 1 - 1 = 0
  Equivalently, predict the majority class and determine the fraction of errors.
- P(C1) = 1/6, P(C2) = 5/6
  Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6
- P(C1) = 2/6, P(C2) = 4/6
  Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3
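A quick check of these values (a sketch, not from the slides), again from class counts at the node:

```python
# Classification error rate: one minus the fraction in the majority class.
def error_rate(counts):
    n = sum(counts)
    return 1.0 - max(counts) / n

print(error_rate((0, 6)))  # 0.0
print(error_rate((1, 5)))  # ~0.1667 (= 1/6)
print(error_rate((2, 4)))  # ~0.3333 (= 1/3)
```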

Complete Example Using Error Rate
The initial sample has 3 records of C1 and 15 of C2. A 3-way split produces three child nodes of 6 records each. What is the error rate initially, what is it afterwards, and what is the decrease? As usual, you need to take the weighted average over the children (but there is a shortcut).

Error Rate Example Continued
Error rate before the split: 3/18.
Error rate after the split:
- Shortcut: number of errors = 0 + 1 + 2 = 3 out of 18 examples, so the error rate is 3/18.
- Weighted average method: 6/18 x 0 + 6/18 x 1/6 + 6/18 x 2/6 = 1/18 + 2/18 = 3/18.
So the decrease in error rate is 0: this split does not reduce the error rate at all.

Comparison Among Splitting Criteria
For a 2-class problem: (figure comparing entropy, Gini, and misclassification error as the class distribution at a node varies).

Discussion
Error rate is often the metric used to evaluate a classifier (but not always), so it seems reasonable to use error rate to determine the best split; that is, why not just use a splitting metric that matches the ultimate evaluation metric? But this is wrong! Because decision trees use a greedy strategy, we need a splitting metric that leads to globally better results, not just the largest immediate drop in error. The other metrics (entropy and Gini) empirically outperform error rate, although there is no proof of this.

How to Specify the Test Condition?
Depends on the attribute type:
- Nominal
- Ordinal
- Continuous
Depends on the number of ways to split:
- 2-way split
- Multi-way split

Splitting Based on Nominal Attributes
- Multi-way split: use as many partitions as distinct values, e.g., CarType: {Family}, {Sports}, {Luxury}
- Binary split: divide the values into two subsets and find the optimal partitioning, e.g., CarType: {Sports, Luxury} vs. {Family}, or {Family, Luxury} vs. {Sports}

Splitting Based on Ordinal Attributes
- Multi-way split: use as many partitions as distinct values, e.g., Size: {Small}, {Medium}, {Large}
- Binary split: divide the values into two subsets and find the optimal partitioning, e.g., Size: {Small, Medium} vs. {Large}, or {Medium, Large} vs. {Small}
- What about the split Size: {Small, Large} vs. {Medium}? (It violates the ordering of the attribute.)

Splitting Based on Continuous Attributes
Different ways of handling:
- Discretization to form an ordinal categorical attribute
  - Static: discretize once at the beginning
  - Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
- Binary decision: (A < v) or (A >= v)
  - Consider all possible splits and find the best cut
  - Can be more compute-intensive

Data Fragmentation
The number of instances gets smaller as you traverse down the tree, and the number of instances at the leaf nodes can be too small to make a statistically significant decision. Decision trees can therefore suffer from data fragmentation, especially when there are many features and not many examples.
True or False: all classification methods may suffer from data fragmentation. False: it does not apply to logistic regression or instance-based learning; it only applies to divide-and-conquer methods.

Expressiveness
Expressiveness relates to the flexibility of the classifier in forming decision boundaries. Linear models are not that expressive since they can only form linear boundaries, whereas decision tree models can form rectangular regions.
Which is more expressive and why? Decision trees, because they can form many regions; but decision trees do have the limitation of only forming axis-parallel boundaries.
Decision trees do not generalize well to certain types of functions (like parity, which depends on all features at once); accurate modeling of such functions requires a complete tree. They are also not expressive enough for modeling continuous variables, especially when more than one variable at a time is involved.

Decision Boundary
The border line between two neighboring regions of different classes is known as the decision boundary. For a decision tree, the decision boundary is parallel to the axes because each test condition involves a single attribute at a time.

Oblique Decision Trees
An oblique tree allows test conditions that involve more than one attribute, e.g., x + y < 1 (Class = + on one side of the line, Class = - on the other). This special type of decision tree avoids some weaknesses and increases the expressiveness of decision trees, but it is not what we mean when we refer to decision trees (e.g., on an exam).

Tree Replication
The same subtree can be replicated in several branches of a decision tree. This can be viewed as a weakness of decision trees, but it is really a minor issue.

Pros and Cons of Decision Trees
Advantages:
- Easy to understand; you can get a global view of what is going on and also explain individual decisions
- Rules can be generated from the tree
- Fast to build and apply
- Can handle redundant and irrelevant features and missing values
Disadvantages:
- Limited expressive power
- May suffer from overfitting; a validation set may be necessary to avoid it

More to Come on Decision Trees
We have covered most of the essential aspects of decision trees except pruning. We will cover pruning next and, more generally, overfitting avoidance. We will also cover evaluation, which applies not only to decision trees but to all predictive models.