Data Science Algorithms: The Basic Methods


Data Science Algorithms: The Basic Methods
Decision Trees
WFH: Data Mining, Chapter 4.3
Rodney Nielsen
Many of these slides were adapted from: I. H. Witten, E. Frank and M. A. Hall

Algorithms: The Basic Methods
Inferring rudimentary rules
Naïve Bayes, probabilistic model
Constructing decision trees
Constructing rules
Association rule learning
Linear models
Instance-based learning
Clustering

Decision Trees

Constructing Decision Trees
Strategy: top down, recursive divide-and-conquer fashion
First: select attribute for root node; create branch for each possible attribute value
Then: split instances into subsets, one for each branch extending from the node
Finally: repeat recursively for each branch, using only instances that reach the branch
Stop if all instances have the same class

DT: Algorithm
Learning: depth-first greedy search through the state space
Top down, recursive, divide and conquer:
Select attribute for node
Create branch for each value
Split instances into subsets, one for each branch
Repeat recursively, using only instances that reach the branch
Stop if all instances have same class
(This example assumes numeric features; consider the weather example)

DT: Algorithm
Classification: run through the tree according to the instance's attribute values
Note: this example assumes each xi is numeric
[Figure: tree with numeric threshold tests x1<α1, x2<α2, x3<α3, x4<α4 at the internal nodes]
(This example assumes numeric features; consider the weather example)
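As a concrete illustration of this classification pass, here is a minimal Python sketch for a tree of numeric threshold tests like the one in the figure; the tuple encoding and the example tree are illustrative assumptions, not taken from the original deck.

```python
def classify(node, x):
    """Run an instance through a tree of numeric threshold tests (x_i < alpha_i).

    Internal nodes are tuples: (feature_index, threshold, left_subtree, right_subtree);
    leaves are plain class labels."""
    while isinstance(node, tuple):
        i, alpha, left, right = node
        node = left if x[i] < alpha else right   # follow the branch the test selects
    return node

# Hypothetical tree: test x1 < 2.5, then x2 < 0.7 on the left branch
tree = (0, 2.5, (1, 0.7, "yes", "no"), "no")
print(classify(tree, [1.9, 0.4]))   # -> "yes"
```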

DT: ID3 Learning Algorithm
ID3(trainingData, attributes)  (recursive function)
  If (attributes = ∅) OR (all/most trainingData is in one class)
    return leaf node predicting the majority class
  x* ← best attribute to split on
  Nd ← create decision node splitting on x*
  For each possible value, vk, of x*
    addChild(ID3(trainingData subset with x* = vk, attributes - {x*}))  (assumes k-way split on categorical ftr)
  return Nd
Student Q: Since divide-and-conquer decision trees are recursive, would they be preferred for massive amounts of data?
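A minimal Python sketch of this recursive divide-and-conquer procedure, assuming categorical features stored as dicts; the data layout and the pluggable best_attribute heuristic (e.g. information gain, defined later in the deck) are illustrative assumptions.

```python
from collections import Counter

def id3(rows, attributes, target, best_attribute):
    """Sketch of ID3: rows is a list of dicts mapping attribute names to values."""
    labels = [r[target] for r in rows]
    majority = Counter(labels).most_common(1)[0][0]
    # Base cases: no attributes left, or all training data in one class
    if not attributes or len(set(labels)) == 1:
        return {"leaf": majority}
    x_star = best_attribute(rows, attributes, target)      # x* <- best attribute to split on
    node = {"split_on": x_star, "children": {}, "default": majority}
    remaining = [a for a in attributes if a != x_star]     # attributes - {x*}
    # k-way split: one child per observed value v_k of x*
    for v in set(r[x_star] for r in rows):
        subset = [r for r in rows if r[x_star] == v]
        node["children"][v] = id3(subset, remaining, target, best_attribute)
    return node
```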

DT: Attribute Selection
Evaluate each attribute
Use heuristic choice (generally based on statistics or information theory)
[Figure: candidate splits on x1, x2, x3 over a set of + / - training instances]

DT: Inductive Bias
Inductive bias: small vs. large trees
Occam's Razor
Student Q: Is it better to have a more extensive (larger) decision tree, or a less extensive (smaller) decision tree? Why do you think so?

Which Attribute to Select?
[Figure: tree stumps for each weather attribute, showing the class distribution at each branch; overall 64% yes / 36% no, with branch purities ranging from 50% to 100%]

Criterion for Attribute Selection
Which is the best attribute?
Heuristic: choose the attribute that produces the "purest" nodes
Want to get the smallest tree
Popular impurity criterion: information gain
Information gain increases as the purity of a subset/leaf increases
Strategy: choose the attribute that gives the greatest information gain
Student Q: Why should we seek purity in our decision nodes?

Computing Information Gain
Measure information in bits (all logarithms are base 2)
Given a probability distribution, the information required to predict/specify an event is the distribution's entropy
Entropy gives the information required in bits (can involve fractions of bits!)
Formula for computing the entropy:
entropy(p1, p2, …, pn) = -p1 log p1 - p2 log p2 - … - pn log pn
Specifying the outcome of a fair coin flip:
entropy(0.5, 0.5) = -0.5 log 0.5 - 0.5 log 0.5 = 1.0
Specifying the outcome from a roll of a die:
entropy(1/6, 1/6, 1/6, 1/6, 1/6, 1/6) = 6 × (-1/6 log 1/6) = 2.58
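A small sketch of the entropy computation with base-2 logarithms, reproducing the coin-flip and die-roll numbers above.

```python
import math

def entropy(*probs):
    """Entropy in bits of a discrete distribution; 0 log 0 is treated as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy(0.5, 0.5))        # fair coin: 1.0 bit
print(entropy(*[1/6] * 6))      # fair die: ~2.58 bits
```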

Entropy
[Figure: entropy H(X) of a two-class node as a function of the class split: 0 bits at [0,N] and [N,0], maximal at [N/2,N/2]]

Example: Attribute Outlook
Outlook = Sunny: Info([2,3]) = entropy(2/5, 3/5) = -2/5 log(2/5) - 3/5 log(3/5) = 0.971 bits
Outlook = Overcast: Info([4,0]) = entropy(1, 0) = -1 log(1) - 0 log(0) = 0.0 bits (note: log(0) is normally undefined; 0 log(0) is taken to be 0)
Outlook = Rainy: Info([3,2]) = entropy(3/5, 2/5) = -3/5 log(3/5) - 2/5 log(2/5) = 0.971 bits
Expected information for the attribute:
Info([2,3],[4,0],[3,2]) = (5/14) × 0.971 + (4/14) × 0 + (5/14) × 0.971 = 0.693 bits

Computing Information Gain
Information gain = information needed before splitting - information needed after splitting
Information gain for the attributes from the weather data:
gain(Outlook) = info([9,5]) - info([2,3],[4,0],[3,2]) = 0.940 - 0.693 = 0.247 bits
gain(Outlook) = 0.247 bits
gain(Temperature) = 0.029 bits
gain(Humidity) = 0.152 bits
gain(Windy) = 0.048 bits
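A sketch that reproduces these gains from class counts alone; the info and gain helpers are illustrative, with the counts taken from the slides.

```python
import math

def info(counts):
    """Entropy in bits of a class-count list, e.g. [9, 5]."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def gain(parent_counts, children_counts):
    """Information gain: info before the split minus weighted info after it."""
    n = sum(parent_counts)
    after = sum(sum(child) / n * info(child) for child in children_counts)
    return info(parent_counts) - after

# gain(Outlook) = info([9,5]) - info([2,3],[4,0],[3,2]) = 0.940 - 0.693 = 0.247 bits
print(gain([9, 5], [[2, 3], [4, 0], [3, 2]]))
```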

DT: Information Gain
Decrease in entropy as a result of partitioning the data
Ex: X = [6+, 7-], H(X) = -6/13 log2 6/13 - 7/13 log2 7/13 = 0.996
InfoGain = 0.996 - 3/13 (-1 log2 1 - 0 log2 0) - 10/13 (-0.4 log2 0.4 - 0.6 log2 0.6) = 0.249
InfoGain = 0.996 - 6/13 (-5/6 log2 5/6 - 1/6 log2 1/6) - 7/13 (-2/7 log2 2/7 - 5/7 log2 5/7) = 0.231
InfoGain = 0.996 - 2/13 (-1 log2 1 - 0 log2 0) - 4/13 (-3/4 log2 3/4 - 1/4 log2 1/4) - 7/13 (-2/7 log2 2/7 - 5/7 log2 5/7) = 0.281
[Figure: the three candidate splits x1, x2, x3 over the + / - instances]

DT: ID3 Learning Algorithm
ID3(trainingData, attributes)
  If (attributes = ∅) OR (all trainingData is in one class)
    return leaf node predicting the majority class
  x* ← best attribute to split on
  Nd ← create decision node splitting on x*
  attributesLeft ← attributes - {x*}  (assumes k-way split on categorical ftr)
  For each possible value, vk, of x*
    addChild(ID3(trainingData subset with x* = vk, attributesLeft))
  return Nd

DT: Inductive Bias
Inductive bias: small vs. large trees
Occam's Razor

Continuing to Split
Within the Outlook = Sunny branch:
gain(Temperature) = 0.571 bits
gain(Windy) = 0.020 bits
gain(Humidity) = 0.971 bits

Final Decision Tree
Note: not all leaves need to be pure; sometimes identical instances have different classes
ID3: splitting stops when the data can't be split any further

Wishlist for a Purity Measure
Properties we require from a purity measure:
When a node is pure, the measure should be zero
When impurity is maximal (i.e. all classes equally likely), the measure should be maximal
The measure should obey the multistage property (i.e. decisions can be made in several stages), e.g. measure([2,3,4]) = measure([2,7]) + (7/9) × measure([3,4])
Entropy is the only function that satisfies all three properties!
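A quick numeric check of the multistage property for entropy; the [2,3,4] split is just an illustrative example.

```python
import math

def info(counts):
    """Entropy in bits of a class-count list."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

# Multistage property: measure([2,3,4]) = measure([2,7]) + (7/9) * measure([3,4])
print(info([2, 3, 4]))                        # ~1.530 bits
print(info([2, 7]) + 7 / 9 * info([3, 4]))    # same value
```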

Highly-Branching Attributes
Problematic: attributes with a large number of values (extreme case: ID code)
Subsets are more likely to be pure if there is a large number of values
Information gain is biased towards choosing attributes with a large number of values
This may result in overfitting (selection of an attribute that performs well on training data, but is non-optimal for prediction)
Another problem: fragmentation

Weather Data with ID Code
[Table: the 14-instance weather data (Outlook, Temp., Humidity, Windy, Play) augmented with an ID code attribute that gives each instance a unique value, A through N]

Tree Stump for ID Code Attribute
Information gain is maximal for ID code (namely 0.940 bits)

Gain Ratio
Gain ratio: a modification of the information gain that reduces its bias
Gain ratio takes the number and size of branches into account when choosing an attribute
It corrects the information gain by taking the intrinsic information of a split into account
Intrinsic information: entropy of the distribution of instances into branches (i.e. how much info do we need to tell which branch an instance belongs to)

Computing the Gain Ratio
Example: intrinsic information for ID code
Info([1,1,…,1]) = 14 × (-1/14 × log 1/14) = 3.807 bits
The value of the attribute decreases as the intrinsic information gets larger
Intrinsic information is also called split information
Gain Ratio = Information Gain / Split Information
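A sketch computing the gain ratio for Outlook versus the ID code attribute from class counts; the helper names and the count encoding are illustrative assumptions.

```python
import math

def info(counts):
    """Entropy in bits of a class-count (or branch-size) list."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def gain_ratio(parent_counts, children_counts):
    """Gain ratio = information gain / split information (intrinsic information)."""
    n = sum(parent_counts)
    sizes = [sum(child) for child in children_counts]
    g = info(parent_counts) - sum(s / n * info(c) for s, c in zip(sizes, children_counts))
    return g / info(sizes)      # info(sizes) is the split information

# Outlook: 0.247 / 1.577 ~= 0.157
print(gain_ratio([9, 5], [[2, 3], [4, 0], [3, 2]]))
# ID code (14 singleton branches): 0.940 / 3.807 ~= 0.247
print(gain_ratio([9, 5], [[1, 0]] * 9 + [[0, 1]] * 5))
```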

Gain Ratios for Weather Data
Outlook:      Info: 0.693   Gain: 0.940 - 0.693 = 0.247   Split info: info([5,4,5]) = 1.577   Gain ratio: 0.247/1.577 = 0.157
Temperature:  Info: 0.911   Gain: 0.940 - 0.911 = 0.029   Split info: info([4,6,4]) = 1.557   Gain ratio: 0.029/1.557 = 0.019
Humidity:     Info: 0.788   Gain: 0.940 - 0.788 = 0.152   Split info: info([7,7]) = 1.000    Gain ratio: 0.152/1 = 0.152
Windy:        Info: 0.892   Gain: 0.940 - 0.892 = 0.048   Split info: info([8,6]) = 0.985    Gain ratio: 0.048/0.985 = 0.049

More on Gain Ratio
"Outlook" still comes out top
Problem with gain ratio: it may overcompensate
It may choose an attribute just because its intrinsic information is very low
Standard fix: only consider attributes with greater than average information gain

DT: Hypothesis Space
Unrestricted hypothesis space
[Figure: + / - training instances in the instance space]

Discussion
Top-down induction of decision trees: ID3, an algorithm developed by Ross Quinlan
Gain ratio is just one modification of this basic algorithm
C4.5: deals with numeric attributes, missing values, noisy data
Similar approach: CART
There are many other attribute selection criteria! (But little difference in the accuracy of the result)