
1 Decision Trees Reading: Textbook, “Learning From Examples”, Section 3

2 Training data:
Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No

3 Decision Trees Target concept: “Good days to play tennis” Example: Classification?

4 How can good decision trees be automatically constructed? Would it be possible to use a “generate-and-test” strategy to find a correct decision tree? –I.e., systematically generate all possible decision trees, in order of size, until a correct one is generated.

5 Why should we care about finding the simplest (i.e., smallest) correct decision tree?

6 Decision Tree Induction Goal is, given set of training examples, construct decision tree that will classify those training examples correctly (and, hopefully, generalize) Original idea of decision trees developed in 1960s by psychologists Hunt, Marin, and Stone, as model of human concept learning. (CLS = “Concept Learning System”) In 1970s, AI researcher Ross Quinlan used this idea for AI concept learning: –ID3 (“Iterative Dichotomiser 3”), 1979

7 The Basic Decision Tree Learning Algorithm (ID3) 1.Determine which attribute is, by itself, the most useful one for distinguishing the two classes over all the training data. Put it at the root of the tree.

8 Outlook

9 The Basic Decision Tree Learning Algorithm (ID3) 1.Determine which attribute is, by itself, the most useful one for distinguishing the two classes over all the training data. Put it at the root of the tree. 2.Create branches from the root node for each possible value of this attribute. Sort training examples to the appropriate value.

10 Outlook
– Sunny: D1, D2, D8, D9, D11
– Overcast: D3, D7, D12, D13
– Rain: D4, D5, D6, D10, D14

11 The Basic Decision Tree Learning Algorithm (ID3) 1.Determine which attribute is, by itself, the most useful one for distinguishing the two classes over all the training data. Put it at the root of the tree. 2.Create branches from the root node for each possible value of this attribute. Sort training examples to the appropriate value. 3.At each descendant node, determine which attribute is, by itself, the most useful one for distinguishing the two classes for the corresponding training data. Put that attribute at that node.

12 Outlook
– Sunny → Humidity
– Overcast → Yes
– Rain → Wind

13 The Basic Decision Tree Learning Algorithm (ID3) 1.Determine which attribute is, by itself, the most useful one for distinguishing the two classes over all the training data. Put it at the root of the tree. 2.Create branches from the root node for each possible value of this attribute. Sort training examples to the appropriate value. 3.At each descendant node, determine which attribute is, by itself, the most useful one for distinguishing the two classes for the corresponding training data. Put that attribute at that node. 4.Go to 2, but for the current node. Note: This is greedy search with no backtracking

14 The Basic Decision Tree Learning Algorithm (ID3) 1.Determine which attribute is, by itself, the most useful one for distinguishing the two classes over all the training data. Put it at the root of the tree. 2.Create branches from the root node for each possible value of this attribute. Sort training examples to the appropriate value. 3.At each descendant node, determine which attribute is, by itself, the most useful one for distinguishing the two classes for the corresponding training data. Put that attribute at that node. 4.Go to 2, but for the current node. Note: This is greedy search with no backtracking
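The recursion implied by steps 1–4 can be written compactly. Below is a minimal, illustrative Python sketch of this greedy, no-backtracking procedure; the dict-based example format and the information_gain helper are assumptions for the sketch (one possible information_gain appears under slide 30 below), not part of any particular library.

```python
from collections import Counter

def id3(examples, attributes, target):
    """Greedy top-down decision-tree construction (no backtracking), mirroring steps 1-4."""
    labels = [ex[target] for ex in examples]
    if len(set(labels)) == 1:              # all examples have the same class: make a leaf
        return labels[0]
    if not attributes:                     # no attributes left to test: majority-class leaf
        return Counter(labels).most_common(1)[0][0]

    # Steps 1 and 3: pick the attribute most useful on this node's training examples.
    best = max(attributes, key=lambda a: information_gain(examples, a, target))

    # Step 2: one branch per observed value of the chosen attribute.
    tree = {best: {}}
    for value in set(ex[best] for ex in examples):
        subset = [ex for ex in examples if ex[best] == value]
        remaining = [a for a in attributes if a != best]
        # Step 4: recurse on the examples sorted down this branch.
        tree[best][value] = id3(subset, remaining, target)
    return tree
```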

15 How to determine which attribute is the best classifier for a set of training examples? E.g., why was Outlook chosen to be the root of the tree?

16 “Impurity” of a split Task: classify as Female or Male Instances: Jane, Mary, Alice, Bob, Allen, Doug Each instance has two binary attributes: “wears lipstick” and “has long hair”

17 “Impurity” of a split
Wears lipstick: T → {Jane, Mary, Alice}, F → {Bob, Allen, Doug} (pure split)
Has long hair: T → {Jane, Mary, Bob}, F → {Alice, Allen, Doug} (impure split)
For each node of the tree we want to choose the attribute that gives the purest split. But how do we measure the degree of impurity of a split?

18 Entropy Let S be a set of training examples, p+ = proportion of positive examples, p− = proportion of negative examples. Entropy measures the degree of uniformity or non-uniformity in a collection; roughly, it measures how predictable the collection is, based only on the distribution of + and − examples.
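For reference, the standard two-class entropy formula in this notation (with the convention 0 · log₂ 0 = 0) is:

\[
\mathrm{Entropy}(S) \;=\; -\,p_{+}\log_2 p_{+} \;-\; p_{-}\log_2 p_{-}
\]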

19 Entropy When is entropy zero? When is entropy maximum, and what is its value?

20

21 Entropy gives the minimum number of bits of information needed to encode the classification of an arbitrary member of S. –If p+ = 1, don’t need any bits (entropy 0) –If p+ = .5, need one bit per example (+ or −) –If p+ = .8, can encode the collection of {+,−} values using on average less than 1 bit per value. Can you explain how we might do this?
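To make the three bullet points concrete, here is a small Python check of the same numbers (the entropy helper is just the two-class formula above):

```python
import math

def entropy(p_pos):
    """Two-class entropy in bits; 0 * log2(0) is taken to be 0."""
    return -sum(p * math.log2(p) for p in (p_pos, 1.0 - p_pos) if p > 0)

print(entropy(1.0))   # 0.0   -> no bits needed
print(entropy(0.5))   # 1.0   -> one bit per example
print(entropy(0.8))   # ~0.72 -> less than 1 bit per example on average
```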

22 Entropy of each branch?
Wears lipstick: T → {Jane, Mary, Alice}, F → {Bob, Allen, Doug} (pure split)
Has long hair: T → {Jane, Mary, Bob}, F → {Alice, Allen, Doug} (impure split)

23 Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No
What is the entropy of the “Play Tennis” training set?
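For reference (this is the E(S) = .94 value quoted later in the deck): the table contains 9 Yes and 5 No examples, so

\[
\mathrm{Entropy}(S) \;=\; -\tfrac{9}{14}\log_2\tfrac{9}{14} \;-\; \tfrac{5}{14}\log_2\tfrac{5}{14} \;\approx\; 0.940.
\]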

24 Suppose you’re now given a new example. In absence of any additional information, what classification should you guess?

25 What is the average entropy of the “Humidity” attribute?
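One way to work this out from the table above: Humidity = High covers 7 examples (3 Yes, 4 No) and Humidity = Normal covers 7 examples (6 Yes, 1 No), so

\[
\mathrm{Entropy}(S_{\mathrm{High}}) \approx 0.985, \qquad \mathrm{Entropy}(S_{\mathrm{Normal}}) \approx 0.592,
\]
\[
\text{average entropy} = \tfrac{7}{14}(0.985) + \tfrac{7}{14}(0.592) \approx 0.789,
\qquad \mathrm{Gain}(S, \mathrm{Humidity}) \approx 0.940 - 0.789 = 0.151.
\]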

26

27 In-class exercise: Calculate information gain of the “Outlook” attribute.

28 Formal definition of Information Gain
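The formula that accompanies this definition in the textbook treatment is:

\[
\mathrm{Gain}(S, A) \;=\; \mathrm{Entropy}(S) \;-\; \sum_{v \,\in\, \mathrm{Values}(A)} \frac{|S_v|}{|S|}\,\mathrm{Entropy}(S_v),
\]

where S_v is the subset of examples in S for which attribute A has value v.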

29 Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No

30 Operation of ID3 1. Compute information gain for each attribute. Outlook Temperature Humidity Wind
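A minimal Python sketch of this step, assuming each training example is stored as a dict such as {"Outlook": "Sunny", ..., "PlayTennis": "No"} (the function names here are illustrative, not from any library):

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute, target="PlayTennis"):
    """Entropy(S) minus the size-weighted entropy of each attribute value's subset."""
    gain = entropy([ex[target] for ex in examples])
    for value in set(ex[attribute] for ex in examples):
        subset = [ex[target] for ex in examples if ex[attribute] == value]
        gain -= (len(subset) / len(examples)) * entropy(subset)
    return gain

# With the 14 PlayTennis rows loaded into `examples`:
#     for a in ("Outlook", "Temp", "Humidity", "Wind"):
#         print(a, round(information_gain(examples, a), 3))
# Outlook has the largest gain, which is why it was chosen as the root.
```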

31

32 ID3’s Inductive Bias Given a set of training examples, there are typically many decision trees consistent with that set. –E.g., what would be another decision tree consistent with the example training data? Of all these, which one does ID3 construct? –First acceptable tree found in greedy search

33 ID3’s Inductive Bias, continued Algorithm does two things: –Favors shorter trees over longer ones –Places attributes with highest information gain closest to root. What would be an algorithm that explicitly constructs the shortest possible tree consistent with the training data?

34 ID3’s Inductive Bias, continued ID3: Efficient approximation to “find shortest tree” method Why is this a good thing to do?

35 Overfitting ID3 grows each branch of the tree just deeply enough to perfectly classify the training examples. What if number of training examples is small? What if there is noise in the data? Both can lead to overfitting –First case can produce incomplete tree –Second case can produce too-complicated tree. But...what is bad about over-complex trees?

36 Overfitting, continued Formal definition of overfitting: –Given a hypothesis space H, a hypothesis h ∈ H is said to overfit the training data if there exists some alternative h’ ∈ H, such that TrainingError(h) < TrainingError(h’), but TestError(h’) < TestError(h).

37 Overfitting, continued
[Figure: accuracy vs. size of tree (number of nodes) for training data and test data, on a medical data set.]

38 Overfitting, continued How to avoid overfitting: –Stop growing the tree early, before it reaches point of perfect classification of training data. –Allow tree to overfit the data, but then prune the tree.

39 Pruning a Decision Tree Pruning: –Remove subtree below a decision node. –Create a leaf node there, and assign most common classification of the training examples affiliated with that node. –Helps reduce overfitting

40 Training data:
Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No
D15  Sunny     Hot   Normal    Strong  No

41 Example
Outlook
– Sunny → Humidity
   – High → No
   – Normal → Temperature
      – Hot → No
      – Mild → Yes
      – Cool → Yes
– Overcast → Yes
– Rain → Wind
   – Strong → No
   – Weak → Yes

42 Example
Outlook
– Sunny → Humidity
   – High → No
   – Normal → Temperature
      – Hot → No
      – Mild → Yes
      – Cool → Yes
– Overcast → Yes
– Rain → Wind
   – Strong → No
   – Weak → Yes

43 Example
Outlook
– Sunny → Humidity
   – High → No
   – Normal → Temperature
      – Hot → No
      – Mild → Yes
      – Cool → Yes
– Overcast → Yes
– Rain → Wind
   – Strong → No
   – Weak → Yes
Training examples sorted to the Temperature node (Outlook = Sunny, Humidity = Normal):
D9   Sunny  Cool  Normal  Weak    Yes
D11  Sunny  Mild  Normal  Strong  Yes
D15  Sunny  Hot   Normal  Strong  No

44 Example
Outlook
– Sunny → Humidity
   – High → No
   – Normal → Temperature
      – Hot → No
      – Mild → Yes
      – Cool → Yes
– Overcast → Yes
– Rain → Wind
   – Strong → No
   – Weak → Yes
Training examples sorted to the Temperature node (Outlook = Sunny, Humidity = Normal):
D9   Sunny  Cool  Normal  Weak    Yes
D11  Sunny  Mild  Normal  Strong  Yes
D15  Sunny  Hot   Normal  Strong  No
Majority: Yes

45 Example
Outlook
– Sunny → Humidity
   – High → No
   – Normal → Yes
– Overcast → Yes
– Rain → Wind
   – Strong → No
   – Weak → Yes

46 How to decide which subtrees to prune?

47 Need to divide data into:
– Training set
– Pruning (validation) set
– Test set

48 Reduced Error Pruning: –Consider each decision node as candidate for pruning. –For each node, try pruning node. Measure accuracy of pruned tree over pruning set. –Select single-node pruning that yields best increase in accuracy over pruning set. –If no increase, select one of the single-node prunings that does not decrease accuracy. –If all prunings decrease accuracy, then don’t prune. Otherwise, continue this process until further pruning is harmful.
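As a hedged Python sketch of that loop (accuracy_on, internal_nodes, and prune_at_node are assumed helpers for whatever tree representation is in use, not a specific library API):

```python
def reduced_error_prune(tree, pruning_set):
    """Greedily prune while accuracy on the pruning (validation) set does not drop."""
    pruned_something = True
    while pruned_something:
        pruned_something = False
        baseline = accuracy_on(tree, pruning_set)
        best_tree, best_acc = None, baseline
        for node in internal_nodes(tree):
            # Replace this node's subtree with a majority-class leaf and re-score.
            candidate = prune_at_node(tree, node)
            acc = accuracy_on(candidate, pruning_set)
            if acc >= best_acc:            # accept prunings that don't hurt accuracy
                best_tree, best_acc = candidate, acc
        if best_tree is not None:
            tree, pruned_something = best_tree, True
    return tree
```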

49 Simple validation Split training data into training set and validation set. Use training set to train model with a given set of parameters (e.g., # training epochs). Then use validation set to predict generalization accuracy. Finally, use separate test set to test final classifier.
[Figure: error rate vs. training time (or nodes pruned, or...) for the training and validation sets, with “stop training/pruning here” marked.]

50 Miscellaneous If you weren’t here last time, see me during the break Graduate students (545) sign up for paper presentations –This is optional for undergrads (445) –Two volunteers for Wednesday April 17 Coursepack on reserve in library Course mailing list: MLSpring2013@cs.pdx.edu

51 Today
Recap from last time: decision trees; ID3 algorithm for constructing decision trees; calculating information gain; overfitting; reduced error pruning
New material: continuous attribute values; gain ratio; UCI ML Repository; Optdigits data set; C4.5; evaluating classifiers; Homework 1

52 Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     Hot   High      Weak    No
D2   Sunny     Hot   High      Strong  No
D3   Overcast  Hot   High      Weak    Yes
D4   Rain      Mild  High      Weak    Yes
D5   Rain      Cool  Normal    Weak    Yes
D6   Rain      Cool  Normal    Strong  No
D7   Overcast  Cool  Normal    Strong  Yes
D8   Sunny     Mild  High      Weak    No
D9   Sunny     Cool  Normal    Weak    Yes
D10  Rain      Mild  Normal    Weak    Yes
D11  Sunny     Mild  Normal    Strong  Yes
D12  Overcast  Mild  High      Strong  Yes
D13  Overcast  Hot   Normal    Weak    Yes
D14  Rain      Mild  High      Strong  No
Exercise: What is the information gain of Wind? E(S) = .94

53 Continuous valued attributes Original decision trees: Two discrete aspects: –Target class (e.g., “PlayTennis”) has discrete values –Attributes (e.g., “Temperature”) have discrete values How to incorporate continuous-valued decision attributes? –E.g., Temperature ∈ [0,100]

54 Continuous valued attributes, continued Create new attributes, e.g., Temperature_c: true if Temperature >= c, false otherwise. How to choose c? –Find c that maximizes information gain.

55 Training data:
Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     85    High      Weak    No
D2   Sunny     72    High      Strong  No
D3   Overcast  62    High      Weak    Yes
D4   Rain      60    High      Weak    Yes
D5   Rain      20    Normal    Strong  No
D6   Rain      10    Normal    Weak    Yes

56 Sort examples according to the values of Temperature found in the training set:
Temperature: 10 20 60 62 72 85
PlayTennis: Yes No Yes Yes No No
Find adjacent examples that differ in target classification. Choose candidate c as the midpoint of the corresponding interval. –Can show that optimal c must always lie at such a boundary. Then calculate information gain for each candidate c. Choose the best one. Put the new attribute Temperature_c in the pool of attributes.
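A short Python sketch of this procedure, reusing the information_gain helper sketched under slide 30 (the dict-based example encoding is an assumption):

```python
def candidate_thresholds(examples, attribute, target="PlayTennis"):
    """Midpoints between adjacent sorted values whose class labels differ."""
    ordered = sorted(examples, key=lambda ex: ex[attribute])
    return [(a[attribute] + b[attribute]) / 2
            for a, b in zip(ordered, ordered[1:])
            if a[target] != b[target]]

def best_threshold(examples, attribute, target="PlayTennis"):
    """Pick the candidate c whose derived boolean attribute has the highest gain."""
    def gain_for(c):
        # Temporarily replace the numeric value with the boolean test "value >= c".
        binarized = [dict(ex, **{attribute: ex[attribute] >= c}) for ex in examples]
        return information_gain(binarized, attribute, target)
    return max(candidate_thresholds(examples, attribute, target), key=gain_for)
```

On the six-example table above this yields the candidates 15, 40, and 67 shown on the next slides.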

57 Example
Temperature: 10 20 60 62 72 85
PlayTennis: Yes No Yes Yes No No

58 Example
Temperature: 10 20 60 62 72 85
PlayTennis: Yes No Yes Yes No No

59 Example
Temperature: 10 20 60 62 72 85
PlayTennis: Yes No Yes Yes No No
Candidate thresholds: c = 15, c = 40, c = 67

60 Example
Temperature: 10 20 60 62 72 85
PlayTennis: Yes No Yes Yes No No
Candidate thresholds: c = 15, c = 40, c = 67
Define new attribute Temperature_15, with Values(Temperature_15) = {<15, >=15}

61 Training data:
Day  Outlook   Temp  Humidity  Wind    PlayTennis
D1   Sunny     >=15  High      Weak    No
D2   Sunny     >=15  High      Strong  No
D3   Overcast  >=15  High      Weak    Yes
D4   Rain      >=15  High      Weak    Yes
D5   Rain      >=15  Normal    Strong  No
D6   Rain      <15   Normal    Weak    Yes
What is Gain(S, Temperature_15)?

62 All nodes in the decision tree are of the form: A_i >= Threshold or A_i < Threshold

63 Alternative measures for selecting attributes Recall intuition behind information gain measure: –We want to choose attribute that does the most work in classifying the training examples by itself. –So measure how much information is gained (or how much entropy decreased) if that attribute is known.

64 However, information gain measure favors attributes with many values. Extreme example: Suppose that we add attribute “Date” to each training example. Each training example has a different date.

65 Day  Date  Outlook   Temp  Humidity  Wind    PlayTennis
D1   3/1   Sunny     Hot   High      Weak    No
D2   3/2   Sunny     Hot   High      Strong  No
D3   3/3   Overcast  Hot   High      Weak    Yes
D4   3/4   Rain      Mild  High      Weak    Yes
D5   3/5   Rain      Cool  Normal    Weak    Yes
D6   3/6   Rain      Cool  Normal    Strong  No
D7   3/7   Overcast  Cool  Normal    Strong  Yes
D8   3/8   Sunny     Mild  High      Weak    No
D9   3/9   Sunny     Cool  Normal    Weak    Yes
D10  3/10  Rain      Mild  Normal    Weak    Yes
D11  3/11  Sunny     Mild  Normal    Strong  Yes
D12  3/12  Overcast  Mild  High      Strong  Yes
D13  3/13  Overcast  Hot   Normal    Weak    Yes
D14  3/14  Rain      Mild  High      Strong  No
Gain(S, Outlook) = .94 − .694 = .246. What is Gain(S, Date)?
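For intuition: Date takes a different value on every training example, so each Date subset contains a single example and has entropy 0. The expected entropy after the split is therefore 0, and Gain(S, Date) = Entropy(S) ≈ 0.94, the largest gain any attribute can achieve on this data.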

66 Date will be chosen as the root of the tree. But of course the resulting tree will not generalize.

67 Gain Ratio Quinlan proposed another method of selecting attributes, called “gain ratio”: Suppose attribute A splits the training data S into m subsets S_1, S_2, ..., S_m. We can define the set of proportions {|S_1|/|S|, ..., |S_m|/|S|}; the Penalty Term is the entropy of this set. For example: What is the Penalty Term for the “Date” attribute? How about for “Outlook”?
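Written out in the usual textbook form (an expansion of the description above), the penalty term is the “split information” of A, and the gain ratio divides the information gain by it:

\[
\mathrm{SplitInformation}(S, A) = -\sum_{i=1}^{m}\frac{|S_i|}{|S|}\log_2\frac{|S_i|}{|S|},
\qquad
\mathrm{GainRatio}(S, A) = \frac{\mathrm{Gain}(S, A)}{\mathrm{SplitInformation}(S, A)}.
\]

For the 14-example table, Date splits S into 14 singleton subsets, giving a penalty term of log₂ 14 ≈ 3.81, while Outlook splits it into subsets of sizes 5, 4, and 5, giving a penalty term of about 1.58.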

68 Day  Date  Outlook   Temp  Humidity  Wind    PlayTennis
D1   3/1   Sunny     Hot   High      Weak    No
D2   3/2   Sunny     Hot   High      Strong  No
D3   3/3   Overcast  Hot   High      Weak    Yes
D4   3/4   Rain      Mild  High      Weak    Yes
D5   3/5   Rain      Cool  Normal    Weak    Yes
D6   3/6   Rain      Cool  Normal    Strong  No
D7   3/7   Overcast  Cool  Normal    Strong  Yes
D8   3/8   Sunny     Mild  High      Weak    No
D9   3/9   Sunny     Cool  Normal    Weak    Yes
D10  3/10  Rain      Mild  Normal    Weak    Yes
D11  3/11  Sunny     Mild  Normal    Strong  Yes
D12  3/12  Overcast  Mild  High      Strong  Yes
D13  3/13  Overcast  Hot   Normal    Weak    Yes
D14  3/14  Rain      Mild  High      Strong  No

69 UCI ML Repository
http://archive.ics.uci.edu/ml/
http://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
optdigits-pictures, optdigits.info, optdigits.names

70 Homework 1 How to download homework and data Demo of C4.5 Accounts on Linuxlab? How to get to Linux Lab Need help on Linux? Newer version C5.0: http://www.rulequest.com/see5-info.html

