Seminar on Machine Learning, Rada Mihalcea. Decision Trees, and a very short intro to Weka. January 27, 2003.

Last time: concept learning; some ML terminology; finding hypotheses: FIND-S, and Version Spaces with the Candidate Elimination algorithm.

Outline: decision tree representation; the ID3 learning algorithm; entropy and information gain; overfitting.

Decision Tree for PlayTennis (figure): the root tests Outlook (Sunny / Overcast / Rain); the Sunny branch tests Humidity (High -> No, Normal -> Yes); the Overcast branch predicts Yes; the Rain branch tests Wind (Strong -> No, Weak -> Yes).

Decision Tree for PlayTennis (figure, partial tree: Outlook at the root, Humidity under the Sunny branch with High -> No and Normal -> Yes). Each internal node tests an attribute; each branch corresponds to an attribute value; each leaf node assigns a classification.

Decision Tree for PlayTennis (figure: the same tree as above). Classifying a new instance with Outlook=Sunny, Temperature=Hot, Humidity=High, Wind=Weak: the tree routes it Sunny -> Humidity=High -> No, so PlayTennis=No.

Decision Tree for a Conjunction, Outlook=Sunny ∧ Wind=Weak (figure): the root tests Outlook; the Sunny branch tests Wind (Strong -> No, Weak -> Yes); the Overcast and Rain branches predict No.

Decision Tree for a Disjunction, Outlook=Sunny ∨ Wind=Weak (figure): the root tests Outlook; the Sunny branch predicts Yes; the Overcast and Rain branches each test Wind (Strong -> No, Weak -> Yes).

Decision Tree for XOR, Outlook=Sunny XOR Wind=Weak (figure): the root tests Outlook; the Sunny branch tests Wind (Strong -> Yes, Weak -> No); the Overcast and Rain branches each test Wind (Strong -> No, Weak -> Yes).

Decision Tree (figure: the PlayTennis tree again). Decision trees represent disjunctions of conjunctions: (Outlook=Sunny ∧ Humidity=Normal) ∨ (Outlook=Overcast) ∨ (Outlook=Rain ∧ Wind=Weak).

When to consider decision trees: instances describable by attribute-value pairs; a discrete-valued target function; a disjunctive hypothesis may be required; possibly noisy training data; missing attribute values. Examples: medical diagnosis, credit risk analysis, object classification for a robot manipulator (Tan 1993).

Top-Down Induction of Decision Trees (ID3): 1. A <- the "best" decision attribute for the next node. 2. Assign A as the decision attribute for the node. 3. For each value of A, create a new descendant. 4. Sort the training examples to the leaf nodes according to the attribute value of the branch. 5. If all training examples are perfectly classified (same value of the target attribute), stop; else iterate over the new leaf nodes.

Which attribute is "best"? (figure): splitting the sample [29+,35-] on A1 (True/False) gives [21+,5-] and [8+,30-]; splitting the same [29+,35-] on A2 gives [18+,33-] and [11+,2-].

Entropy: S is a sample of training examples; p+ is the proportion of positive examples and p- the proportion of negative examples. Entropy measures the impurity of S: Entropy(S) = -p+ log2(p+) - p- log2(p-).

Entropy: Entropy(S) is the expected number of bits needed to encode the class (+ or -) of a randomly drawn member of S (under the optimal, shortest-length code). Information theory: an optimal-length code assigns -log2(p) bits to a message having probability p, so the expected number of bits to encode the class of a random member of S is -p+ log2(p+) - p- log2(p-) (taking 0 log 0 = 0).
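
A minimal sketch of this definition in Python (the function and variable names are mine, not from the slides); it reproduces the entropy values used on the following slides.

from math import log2

def entropy(pos, neg):
    """Entropy of a sample with `pos` positive and `neg` negative examples."""
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        if count:                       # convention: 0 * log2(0) = 0
            p = count / total
            result -= p * log2(p)
    return result

print(round(entropy(29, 35), 2))        # 0.99, the [29+,35-] sample
print(round(entropy(9, 5), 2))          # 0.94, the PlayTennis sample [9+,5-]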

Information Gain: Gain(S,A) is the expected reduction in entropy due to sorting S on attribute A: Gain(S,A) = Entropy(S) - Σ_{v ∈ Values(A)} (|Sv|/|S|) Entropy(Sv). For the example above, Entropy([29+,35-]) = -(29/64) log2(29/64) - (35/64) log2(35/64) = 0.99. (Figure: the same A1 and A2 splits of [29+,35-] as on the previous slide.)

Information Gain (worked example). For A1: Entropy([21+,5-]) = 0.71 and Entropy([8+,30-]) = 0.74, so Gain(S,A1) = Entropy(S) - (26/64) Entropy([21+,5-]) - (38/64) Entropy([8+,30-]) = 0.27. For A2: Entropy([18+,33-]) = 0.94 and Entropy([11+,2-]) = 0.62, so Gain(S,A2) = Entropy(S) - (51/64) Entropy([18+,33-]) - (13/64) Entropy([11+,2-]) = 0.12. A1 is therefore the better split.
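
These numbers can be checked with a short information-gain helper (a sketch; the names are mine). It takes the (+,-) counts of the parent sample and of each child produced by the split.

from math import log2

def entropy(pos, neg):
    total = pos + neg
    return -sum((c / total) * log2(c / total) for c in (pos, neg) if c)

def information_gain(parent, children):
    """parent: (pos, neg) counts; children: list of (pos, neg) counts per branch."""
    n = sum(parent)
    return entropy(*parent) - sum((sum(child) / n) * entropy(*child) for child in children)

# A1 splits [29+,35-] into [21+,5-] and [8+,30-]; A2 splits it into [18+,33-] and [11+,2-]
print(round(information_gain((29, 35), [(21, 5), (8, 30)]), 2))   # 0.27
print(round(information_gain((29, 35), [(18, 33), (11, 2)]), 2))  # 0.12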

Training Examples:
Day  Outlook   Temp.  Humidity  Wind    PlayTennis
D1   Sunny     Hot    High      Weak    No
D2   Sunny     Hot    High      Strong  No
D3   Overcast  Hot    High      Weak    Yes
D4   Rain      Mild   High      Weak    Yes
D5   Rain      Cool   Normal    Weak    Yes
D6   Rain      Cool   Normal    Strong  No
D7   Overcast  Cool   Normal    Strong  Yes
D8   Sunny     Mild   High      Weak    No
D9   Sunny     Cool   Normal    Weak    Yes
D10  Rain      Mild   Normal    Weak    Yes
D11  Sunny     Mild   Normal    Strong  Yes
D12  Overcast  Mild   High      Strong  Yes
D13  Overcast  Hot    Normal    Weak    Yes
D14  Rain      Mild   High      Strong  No

Selecting the Next Attribute. S = [9+,5-], E(S) = 0.940. Humidity: High -> [3+,4-] (E = 0.985), Normal -> [6+,1-] (E = 0.592); Gain(S, Humidity) = 0.940 - (7/14)*0.985 - (7/14)*0.592 = 0.151. Wind: Weak -> [6+,2-] (E = 0.811), Strong -> [3+,3-] (E = 1.0); Gain(S, Wind) = 0.940 - (8/14)*0.811 - (6/14)*1.0 = 0.048.

Selecting the Next Attribute. S = [9+,5-], E(S) = 0.940. Outlook: Sunny -> [2+,3-] (E = 0.971), Overcast -> [4+,0-] (E = 0.0), Rain -> [3+,2-] (E = 0.971); Gain(S, Outlook) = 0.940 - (5/14)*0.971 - (4/14)*0.0 - (5/14)*0.971 = 0.247. Gain(S, Temperature) is computed the same way and comes out lowest, so Outlook has the highest gain and is chosen as the root.
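
A sketch that reproduces these numbers on the full training table (the data structure and function names are mine; the values are those of the Training Examples slide):

from math import log2
from collections import Counter

DATA = [  # (Outlook, Temp, Humidity, Wind, PlayTennis) for D1..D14
    ("Sunny", "Hot", "High", "Weak", "No"),      ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),  ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),   ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"), ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),  ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"), ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"), ("Rain", "Mild", "High", "Strong", "No"),
]
ATTRS = {"Outlook": 0, "Temperature": 1, "Humidity": 2, "Wind": 3}

def entropy(examples):
    counts = Counter(e[-1] for e in examples)
    n = len(examples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def gain(examples, attr_index):
    n = len(examples)
    remainder = 0.0
    for value in {e[attr_index] for e in examples}:
        subset = [e for e in examples if e[attr_index] == value]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(examples) - remainder

for name, idx in ATTRS.items():
    print(name, round(gain(DATA, idx), 3))
# Outlook 0.247, Temperature 0.029, Humidity 0.152 (the slides round it to 0.151), Wind 0.048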

ID3 Algorithm (figure: partially grown tree). The root Outlook splits [D1,...,D14] = [9+,5-] into S_sunny = [D1,D2,D8,D9,D11] = [2+,3-] (still ?), Overcast = [D3,D7,D12,D13] = [4+,0-] (Yes), and Rain = [D4,D5,D6,D10,D14] = [3+,2-] (still ?). At the Sunny node: Gain(S_sunny, Humidity) = 0.970 - (3/5)*0.0 - (2/5)*0.0 = 0.970; Gain(S_sunny, Temperature) = 0.970 - (2/5)*0.0 - (2/5)*1.0 - (1/5)*0.0 = 0.570; Gain(S_sunny, Wind) = 0.970 - (2/5)*1.0 - (3/5)*0.918 = 0.019. Humidity is chosen.

ID3 Algorithm (figure: the final tree). Outlook at the root; Sunny -> Humidity (High -> No, covering [D1,D2,D8]; Normal -> Yes, covering [D9,D11]); Overcast -> Yes, covering [D3,D7,D12,D13]; Rain -> Wind (Strong -> No, covering [D6,D14]; Weak -> Yes, covering [D4,D5,D10]).
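
For completeness, a compact recursive ID3 sketch (names are mine; majority vote is used when attributes run out). It assumes the DATA table and ATTRS index map defined in the sketch after the attribute-selection slide; run on that table it grows exactly the tree above.

from math import log2
from collections import Counter

def entropy(examples):
    counts = Counter(e[-1] for e in examples)
    n = len(examples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def gain(examples, idx):
    n = len(examples)
    split = {}
    for e in examples:
        split.setdefault(e[idx], []).append(e)
    return entropy(examples) - sum((len(s) / n) * entropy(s) for s in split.values())

def id3(examples, attrs):
    """attrs: dict mapping attribute name -> column index (as in the earlier sketch)."""
    labels = [e[-1] for e in examples]
    if len(set(labels)) == 1:                      # all examples agree: leaf
        return labels[0]
    if not attrs:                                  # no attributes left: majority leaf
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: gain(examples, attrs[a]))
    idx = attrs[best]
    node = {best: {}}
    for value in {e[idx] for e in examples}:
        subset = [e for e in examples if e[idx] == value]
        remaining = {a: i for a, i in attrs.items() if a != best}
        node[best][value] = id3(subset, remaining)
    return node

# With DATA and ATTRS from the earlier sketch, id3(DATA, ATTRS) yields (branch order may vary):
# {'Outlook': {'Sunny': {'Humidity': {'High': 'No', 'Normal': 'Yes'}},
#              'Overcast': 'Yes',
#              'Rain': {'Wind': {'Strong': 'No', 'Weak': 'Yes'}}}}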

Hypothesis Space Search by ID3 (figure: ID3 searches the space of decision trees, growing candidate trees over attributes A1..A4 from simple to more complex).

Hypothesis Space Search by ID3: the hypothesis space is complete, so the target function is surely in there. ID3 outputs a single hypothesis. There is no backtracking on selected attributes (greedy search), so it can get stuck in local minima (suboptimal splits). Statistically based search choices make it robust to noisy data. Its inductive bias is a search bias: prefer shorter trees over longer ones, and place high-information-gain attributes close to the root.

Converting a Tree to Rules (figure: the PlayTennis tree). R1: If (Outlook=Sunny) ∧ (Humidity=High) then PlayTennis=No. R2: If (Outlook=Sunny) ∧ (Humidity=Normal) then PlayTennis=Yes. R3: If (Outlook=Overcast) then PlayTennis=Yes. R4: If (Outlook=Rain) ∧ (Wind=Strong) then PlayTennis=No. R5: If (Outlook=Rain) ∧ (Wind=Weak) then PlayTennis=Yes.
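
A small sketch of the conversion, for trees in the nested-dict form produced by the id3 sketch above (my representation, not the slides'):

def tree_to_rules(tree, conditions=()):
    """Yield (conditions, label) pairs, one per root-to-leaf path."""
    if not isinstance(tree, dict):                 # leaf: emit one rule
        yield list(conditions), tree
        return
    (attribute, branches), = tree.items()
    for value, subtree in branches.items():
        yield from tree_to_rules(subtree, conditions + ((attribute, value),))

tree = {"Outlook": {"Sunny": {"Humidity": {"High": "No", "Normal": "Yes"}},
                    "Overcast": "Yes",
                    "Rain": {"Wind": {"Strong": "No", "Weak": "Yes"}}}}
for conds, label in tree_to_rules(tree):
    body = " AND ".join(f"{a}={v}" for a, v in conds)
    print(f"IF {body} THEN PlayTennis={label}")    # prints rules R1..R5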

Continuous-Valued Attributes. Create a discrete attribute to test a continuous one: for a threshold c, (Temperature > c) ∈ {true, false}. Where should the threshold be set? Example: Temperature 15°C, 18°C, 19°C, 22°C, 24°C, 27°C with PlayTennis No, No, Yes, Yes, Yes, No. Sort the examples by value and consider candidate thresholds between adjacent examples whose classification differs.
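
A sketch of that threshold-selection procedure (candidate thresholds midway between adjacent examples whose class differs). The names are mine and the data below is the illustrative temperature example from the slide:

from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n) for c in set(labels))

def best_threshold(values, labels):
    """Pick the threshold on a continuous attribute that maximizes information gain."""
    pairs = sorted(zip(values, labels))
    # candidate thresholds: midpoints where the class changes (assumes at least one change)
    candidates = [(a + b) / 2 for (a, la), (b, lb) in zip(pairs, pairs[1:]) if la != lb]
    def gain(t):
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        n = len(pairs)
        return entropy(labels) - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)
    return max(candidates, key=gain)

temps = [15, 18, 19, 22, 24, 27]
play  = ["No", "No", "Yes", "Yes", "Yes", "No"]
print(best_threshold(temps, play))   # 18.5, midway between 18°C (No) and 19°C (Yes)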

Attributes with Many Values. Problem: if an attribute has many values, maximizing information gain will select it; e.g., using a Date attribute perfectly splits the data into subsets of size 1. Use GainRatio instead of information gain as the criterion: GainRatio(S,A) = Gain(S,A) / SplitInformation(S,A), where SplitInformation(S,A) = -Σ_{i=1..c} (|Si|/|S|) log2(|Si|/|S|) and Si is the subset of S for which attribute A has value vi.
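
A self-contained gain-ratio sketch (function names are mine; examples are tuples whose last element is the class label, as in the earlier sketches):

from math import log2
from collections import Counter

def _entropy(examples):
    n = len(examples)
    return -sum((c / n) * log2(c / n) for c in Counter(e[-1] for e in examples).values())

def gain_ratio(examples, idx):
    """GainRatio(S, A) = Gain(S, A) / SplitInformation(S, A) for the attribute in column idx."""
    n = len(examples)
    branches = Counter(e[idx] for e in examples)
    split_info = -sum((c / n) * log2(c / n) for c in branches.values())
    info_gain = _entropy(examples) - sum(
        (c / n) * _entropy([e for e in examples if e[idx] == v]) for v, c in branches.items()
    )
    return info_gain / split_info if split_info else 0.0

# e.g. gain_ratio(DATA, 0) scores Outlook on the training table above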

Attributes with Cost. Consider medical diagnosis, where a blood test costs 1000 SEK, or robotics, where width_from_one_feet costs 23 seconds to measure. How do we learn a consistent tree with low expected cost? Replace Gain by Gain^2(S,A) / Cost(A) [Tan, Schlimmer 1990], or by (2^Gain(S,A) - 1) / (Cost(A) + 1)^w with w ∈ [0,1] [Nunez 1988].
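
Both criteria are easy to express as scoring functions over a precomputed information gain and an attribute cost (a sketch; the function names and the default w are mine):

def tan_schlimmer(gain_value, cost):
    # Gain^2(S, A) / Cost(A)                                   [Tan, Schlimmer 1990]
    return gain_value ** 2 / cost

def nunez(gain_value, cost, w=0.5):
    # (2^Gain(S, A) - 1) / (Cost(A) + 1)^w, with w in [0, 1]   [Nunez 1988]
    return (2 ** gain_value - 1) / (cost + 1) ** w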

Unknown Attribute Values. What if examples are missing values of A? Use the training example anyway and sort it through the tree: if node n tests A, assign the most common value of A among the other examples sorted to node n; or assign the most common value of A among the other examples with the same target value; or assign a probability pi to each possible value vi of A and pass a fraction pi of the example down each descendant. Classify new examples in the same fashion.
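
A sketch of the simplest of these strategies, filling a missing value with the most common value among the examples at the node (the None-for-missing convention and names are mine; examples are tuples as in the earlier sketches):

from collections import Counter

def fill_missing(examples, idx):
    """Replace None in column idx with the most common observed value at this node."""
    observed = [e[idx] for e in examples if e[idx] is not None]
    most_common = Counter(observed).most_common(1)[0][0]
    return [
        e if e[idx] is not None else e[:idx] + (most_common,) + e[idx + 1:]
        for e in examples
    ]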

Occam's Razor: prefer shorter hypotheses. Why prefer short hypotheses? Arguments in favor: there are fewer short hypotheses than long ones, so a short hypothesis that fits the data is unlikely to be a coincidence, while a long hypothesis that fits the data might be. Arguments against: there are many ways to define small sets of hypotheses (e.g., all trees with a prime number of nodes that use only attributes beginning with "Z"); what is so special about small sets based on the size of the hypothesis?

Overfitting. Consider the error of hypothesis h over the training data, error_train(h), and over the entire distribution D of data, error_D(h). A hypothesis h ∈ H overfits the training data if there is an alternative hypothesis h' ∈ H such that error_train(h) < error_train(h') and error_D(h) > error_D(h').

Overfitting in Decision Tree Learning

Avoiding Overfitting. How can we avoid overfitting? Either stop growing the tree when a data split is not statistically significant, or grow the full tree and then post-prune.

Reduced-Error Pruning. Split the data into a training set and a validation set. Do until further pruning is harmful: 1. evaluate the impact on the validation set of pruning each possible node (plus those below it); 2. greedily remove the one that most improves validation set accuracy. This produces the smallest version of the most accurate subtree.

Effect of Reduced Error Pruning

Rule Post-Pruning: 1. convert the tree to an equivalent set of rules; 2. prune each rule independently of the others; 3. sort the final rules into the desired sequence for use. This is the method used in C4.5.

Cross-Validation: estimate the accuracy of a hypothesis induced by a supervised learning algorithm; predict the accuracy of a hypothesis over future unseen instances; select the optimal hypothesis from a given set of alternatives (pruning decision trees, model selection, feature selection); combine multiple classifiers (boosting).

Holdout Method. Partition the data set D = {(v1,y1), ..., (vn,yn)} into a training set Dt and a validation (holdout) set Dh = D \ Dt. The holdout accuracy is acc_h = (1/h) Σ_{(vi,yi) ∈ Dh} δ(I(Dt, vi), yi), where h = |Dh|, I(Dt, vi) is the output on instance vi of the hypothesis induced by learner I trained on Dt, and δ(i,j) = 1 if i = j and 0 otherwise. Problems: it makes insufficient use of the data, and the training and validation sets are correlated.
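
A sketch of the holdout estimate (the learner interface `induce(training)` returning a `predict(x)` function is an assumption of mine, not something from the slides):

import random

def holdout_accuracy(data, induce, holdout_fraction=0.3, seed=0):
    """data: list of (x, y) pairs; induce: maps a training list to a predict(x) function."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    h = int(len(shuffled) * holdout_fraction)
    validation, training = shuffled[:h], shuffled[h:]
    predict = induce(training)
    return sum(1 for x, y in validation if predict(x) == y) / len(validation)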

Cross-Validation. k-fold cross-validation splits the data set D into k mutually exclusive subsets D1, D2, ..., Dk. The learning algorithm is trained and tested k times: each time it is trained on D \ Di and tested on Di (figure: the k folds, with a different fold held out in each round). The cross-validation accuracy is acc_cv = (1/n) Σ_{(vi,yi) ∈ D} δ(I(D \ Di, vi), yi), where Di is the fold containing (vi, yi).
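
A k-fold sketch matching this formula (same assumed `induce` interface as in the holdout sketch):

import random

def cross_validation_accuracy(data, induce, k=10, seed=0):
    """Accuracy averaged over all instances, each predicted by the model trained without its fold."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]        # k roughly equal, disjoint folds
    correct = 0
    for i, fold in enumerate(folds):
        training = [ex for j, f in enumerate(folds) if j != i for ex in f]
        predict = induce(training)
        correct += sum(1 for x, y in fold if predict(x) == y)
    return correct / len(shuffled)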

Cross-Validation (continued). It uses all the data for both training and testing. Complete k-fold cross-validation splits a dataset of size m in all (m choose m/k) possible ways of choosing m/k instances out of m. Leave-n-out cross-validation sets n instances aside for testing and uses the remaining ones for training; leave-one-out, which is widely used, is equivalent to k-fold cross-validation with k equal to the number of instances. In stratified cross-validation the folds are stratified so that they contain approximately the same proportion of labels as the original data set.

Bootstrap. Sample n instances uniformly from the data set with replacement. The probability that any given instance is not chosen after n samples is (1 - 1/n)^n ≈ e^{-1} ≈ 0.368. The bootstrap sample is used for training and the remaining instances are used for testing: acc_boot = (1/b) Σ_{i=1..b} (0.632 · ε0_i + 0.368 · acc_s), where ε0_i is the accuracy on the test data of the i-th bootstrap sample, acc_s is the accuracy estimate on the training set, and b is the number of bootstrap samples.
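
A sketch of the .632 bootstrap estimate as written above, under the same assumed `induce` interface; reading acc_s as the resubstitution accuracy on each bootstrap training sample is my interpretation of the slide:

import random

def bootstrap_632_accuracy(data, induce, b=30, seed=0):
    """0.632 bootstrap estimate; data is a list of (x, y) pairs."""
    rng = random.Random(seed)
    n = len(data)

    def accuracy(predict, examples):
        return sum(1 for x, y in examples if predict(x) == y) / len(examples)

    estimates = []
    for _ in range(b):
        sample = [data[rng.randrange(n)] for _ in range(n)]   # sample with replacement
        test = [ex for ex in data if ex not in sample]        # out-of-bag instances
        if not test:
            continue
        predict = induce(sample)
        acc_train = accuracy(predict, sample)                 # resubstitution accuracy
        estimates.append(0.632 * accuracy(predict, test) + 0.368 * acc_train)
    return sum(estimates) / len(estimates)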

Wrapper Model (figure: input features feed a feature subset search; each candidate subset is passed to a feature subset evaluation step that runs the induction algorithm, and the evaluation result drives the search).

Wrapper Model. Evaluate the accuracy of the inducer for a given subset of features by means of n-fold cross-validation: the training data is split into n folds, the induction algorithm is run n times, and the accuracy results are averaged to produce the estimated accuracy. Forward selection starts with the empty set of features and greedily adds the feature that improves the estimated accuracy the most. Backward elimination starts with the set of all features and greedily removes the worst feature.
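
A sketch of greedy forward selection (the `estimate_accuracy(feature_subset)` callback, standing in for the cross-validated inducer, is an assumption of mine):

def forward_selection(all_features, estimate_accuracy):
    """Greedily add the feature that most improves the estimated accuracy."""
    selected, best_acc = [], 0.0
    while True:
        candidates = [f for f in all_features if f not in selected]
        if not candidates:
            break
        scored = [(estimate_accuracy(selected + [f]), f) for f in candidates]
        acc, feature = max(scored, key=lambda t: t[0])
        if acc <= best_acc:            # no candidate improves the estimate: stop
            break
        selected.append(feature)
        best_acc = acc
    return selected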

Bagging. For each trial t = 1, 2, ..., T, create a bootstrap sample of size N and generate a classifier Ct from it. The final classifier C* outputs the class that receives the majority of votes among C1, ..., CT (figure: T bootstrap training sets, each training one classifier; a new instance is classified by all of them and the votes are combined).
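
A bagging sketch under the same assumed `induce` interface as the earlier evaluation sketches:

import random
from collections import Counter

def bagging(data, induce, trials=25, seed=0):
    """Train `trials` classifiers on bootstrap samples; return a majority-vote predictor."""
    rng = random.Random(seed)
    n = len(data)
    classifiers = []
    for _ in range(trials):
        sample = [data[rng.randrange(n)] for _ in range(n)]   # bootstrap sample of size N
        classifiers.append(induce(sample))
    def predict(x):
        votes = Counter(c(x) for c in classifiers)
        return votes.most_common(1)[0][0]
    return predict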

Bagging. Bagging requires "unstable" classifiers, for example decision trees or neural networks. "The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy." (Breiman 1996)