
1 Machine Learning Decision Tree


3 Decision Tree
Outlook = Sunny → Humidity: High → No, Normal → Yes
Outlook = Overcast → Yes
Outlook = Rain → Wind: Strong → No, Weak → Yes


5 Classification by Decision Tree Induction
A decision tree is a flow-chart-like tree structure:
an internal node denotes an attribute,
a branch represents a value of the node's attribute,
and leaf nodes represent class labels or a class distribution.
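A minimal sketch of this structure in Python, assuming nominal attributes. The nested-dict representation (an internal node is {attribute: {value: subtree}}, a leaf is just a class label) and the classify name are choices made here for illustration, not the slides' notation.

```python
def classify(tree, example):
    """Walk from the root, following the branch that matches each attribute value."""
    while isinstance(tree, dict):
        (attribute, branches), = tree.items()   # one attribute per internal node
        tree = branches[example[attribute]]
    return tree                                 # a leaf is a plain class label

# The PlayTennis tree from slide 3:
play_tennis = {"Outlook": {
    "Sunny":    {"Humidity": {"High": "No", "Normal": "Yes"}},
    "Overcast": "Yes",
    "Rain":     {"Wind": {"Strong": "No", "Weak": "Yes"}},
}}
print(classify(play_tennis, {"Outlook": "Sunny", "Humidity": "Normal"}))  # Yes
```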

6 Decision trees
High income? yes → Criminal record?: yes → NO, no → YES
High income? no → NO

7 Constructing a decision tree, one step at a time
Training examples (attributes a = address, c = criminal record, i = income, and e, o, u; class Y/N):
+a, -c, +i, +e, +o, +u: Y
-a, +c, -i, +e, -o, -u: N
+a, -c, +i, -e, -o, -u: Y
-a, -c, +i, +e, -o, -u: Y
-a, +c, +i, -e, -o, -u: N
-a, -c, +i, -e, -o, +u: Y
+a, -c, -i, -e, +o, -u: N
+a, +c, +i, -e, +o, -u: N
Splitting first on address? gives two subsets of four examples each.
On the -a side a further split on criminal? is enough: the +c examples are all N and the -c examples are all Y.
On the +a side, criminal? = yes gives N, and the remaining three examples need one more split on income? (+i → Y, -i → N).
Address was maybe not the best attribute to start with…

8 A Worked Example
Table: ten weekends W1-W10, each described by Weather (Sunny, Windy, Rainy), Parents visiting (Yes, No) and Money (Rich, Poor), together with the decision (category) taken: Cinema, Tennis, Shopping or Stay in. For example, W1 = Sunny, Yes, Rich → Cinema.

9 Decision Tree Learning
Building a Decision Tree:
1. Test all attributes and select the one that would function best as the root.
2. Break up the training set into subsets based on the branches of the root node.
3. Test the remaining attributes to see which ones fit best underneath the branches of the root node.
4. Continue this process for all other branches until
   all examples of a subset are of one type,
   there are no examples left (return the majority classification of the parent), or
   there are no more attributes left (the default value should be the majority classification).
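A small sketch of step 2 above, partitioning a training set on a chosen attribute. Examples are represented as dicts; the three rows below are made-up toy data, not the slides' tables.

```python
from collections import defaultdict

def split_by_attribute(examples, attribute):
    """Group the examples into one subset per value of the given attribute."""
    subsets = defaultdict(list)
    for example in examples:
        subsets[example[attribute]].append(example)
    return dict(subsets)

# Toy rows for illustration only:
rows = [
    {"Weather": "Sunny", "Parents": "Yes", "Decision": "Cinema"},
    {"Weather": "Sunny", "Parents": "No",  "Decision": "Tennis"},
    {"Weather": "Rainy", "Parents": "No",  "Decision": "Stay in"},
]
print(split_by_attribute(rows, "Weather"))  # {'Sunny': [two rows], 'Rainy': [one row]}
```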

10 Decision Tree Learning
Determining which attribute is best (Entropy & Gain)
Entropy (E) is the minimum number of bits needed, on average, to encode the classification (yes or no) of an arbitrary example:
E(S) = - Σ_{i=1..c} p_i log2 p_i
where S is a set of training examples, c is the number of classes, and p_i is the proportion of the training set that is of class i. For our entropy equation we take 0 log2 0 = 0.
The information gain G(S, A), where A is an attribute, is
G(S, A) = E(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) · E(S_v)
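A hedged sketch of the two formulas above in Python. Examples are assumed to be dicts with a "class" key holding the label; the function names are mine, not the slides'.

```python
import math
from collections import Counter

def entropy(examples, target="class"):
    """E(S) = -sum_i p_i log2 p_i over the class proportions p_i (0 log 0 taken as 0)."""
    total = len(examples)
    counts = Counter(e[target] for e in examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c > 0)

def information_gain(examples, attribute, target="class"):
    """G(S, A) = E(S) - sum_v (|S_v| / |S|) E(S_v) over the values v of attribute A."""
    gain = entropy(examples, target)
    for value in set(e[attribute] for e in examples):
        subset = [e for e in examples if e[attribute] == value]
        gain -= (len(subset) / len(examples)) * entropy(subset, target)
    return gain
```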

11 Information Gain
Gain(S, A): expected reduction in entropy due to sorting S on attribute A
Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)
Entropy([29+, 35-]) = -(29/64) log2(29/64) - (35/64) log2(35/64) = 0.99
Candidate splits of S = [29+, 35-]:
A1 = ? : True → [21+, 5-], False → [8+, 30-]
A2 = ? : True → [18+, 33-], False → [11+, 2-]
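Reproducing this slide's numbers with a small two-class entropy helper (an illustrative sketch; H and the variable names are mine).

```python
import math

def H(p, n):
    """Entropy of a set with p positive and n negative examples."""
    total = p + n
    return -sum(x / total * math.log2(x / total) for x in (p, n) if x > 0)

e_S = H(29, 35)                                          # ≈ 0.99
gain_A1 = e_S - 26/64 * H(21, 5) - 38/64 * H(8, 30)      # ≈ 0.27
gain_A2 = e_S - 51/64 * H(18, 33) - 13/64 * H(11, 2)     # ≈ 0.12
print(round(e_S, 2), round(gain_A1, 2), round(gain_A2, 2))
```

So splitting on A1 gives the larger expected reduction in entropy.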

12 Training Examples
Day  Outlook   Temp. Humidity Wind   Play Tennis
D1   Sunny     Hot   High     Weak   No
D2   Sunny     Hot   High     Strong No
D3   Overcast  Hot   High     Weak   Yes
D4   Rain      Mild  High     Weak   Yes
D5   Rain      Cool  Normal   Weak   Yes
D6   Rain      Cool  Normal   Strong No
D7   Overcast  Cool  Normal   Strong Yes
D8   Sunny     Mild  High     Weak   No
D9   Sunny     Cool  Normal   Weak   Yes
D10  Rain      Mild  Normal   Weak   Yes
D11  Sunny     Mild  Normal   Strong Yes
D12  Overcast  Mild  High     Strong Yes
D13  Overcast  Hot   Normal   Weak   Yes
D14  Rain      Mild  High     Strong No

13 ID3 Algorithm
Start with all examples [D1,D2,…,D14], [9+, 5-], at the root and split on Outlook:
Sunny: Ssunny = [D1,D2,D8,D9,D11], [2+, 3-] → still needs a test
Overcast: [D3,D7,D12,D13], [4+, 0-] → Yes
Rain: [D4,D5,D6,D10,D14], [3+, 2-] → still needs a test
Which attribute should test the Sunny branch?
Gain(Ssunny, Humidity) = 0.970 - (3/5)·0.0 - (2/5)·0.0 = 0.970
Gain(Ssunny, Temp.) = 0.970 - (2/5)·0.0 - (2/5)·1.0 - (1/5)·0.0 = 0.570
Gain(Ssunny, Wind) = 0.970 - (2/5)·1.0 - (3/5)·0.918 = 0.019
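A quick check of these three gains with a two-class entropy helper (the same idea as the previous sketch; illustrative code, not from the slides).

```python
import math

def H(p, n):
    """Entropy of a set with p positive and n negative examples."""
    total = p + n
    return -sum(x / total * math.log2(x / total) for x in (p, n) if x > 0)

h_sunny = H(2, 3)                                                    # ≈ 0.97
gain_humidity = h_sunny - 3/5 * H(0, 3) - 2/5 * H(2, 0)              # ≈ 0.97
gain_temp = h_sunny - 2/5 * H(0, 2) - 2/5 * H(1, 1) - 1/5 * H(1, 0)  # ≈ 0.57
gain_wind = h_sunny - 2/5 * H(1, 1) - 3/5 * H(1, 2)                  # ≈ 0.02 (0.019 on the slide, which uses rounded entropies)
print(round(gain_humidity, 3), round(gain_temp, 3), round(gain_wind, 3))
# Humidity has the largest gain, so it tests the Sunny branch (see the next slide).
```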

14 ID3 Algorithm
The resulting tree:
Outlook = Sunny → Humidity: High → No [D1,D2,D8], Normal → Yes [D9,D11]
Outlook = Overcast → Yes [D3,D7,D12,D13]
Outlook = Rain → Wind: Strong → No [D6,D14], Weak → Yes [D4,D5,D10]

15 Converting a Tree to Rules
The same tree (Outlook at the root, Humidity under Sunny, Wind under Rain) can be written as rules:
R1: If (Outlook=Sunny) ∧ (Humidity=High) Then PlayTennis=No
R2: If (Outlook=Sunny) ∧ (Humidity=Normal) Then PlayTennis=Yes
R3: If (Outlook=Overcast) Then PlayTennis=Yes
R4: If (Outlook=Rain) ∧ (Wind=Strong) Then PlayTennis=No
R5: If (Outlook=Rain) ∧ (Wind=Weak) Then PlayTennis=Yes
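A minimal sketch of this conversion: trace every root-to-leaf path and emit one rule per leaf. The nested-dict tree (the same representation as in the slide 5 sketch) and the function name are illustrative choices, not the slides' own code.

```python
# The PlayTennis tree in the nested-dict representation.
tree = {"Outlook": {
    "Sunny":    {"Humidity": {"High": "No", "Normal": "Yes"}},
    "Overcast": "Yes",
    "Rain":     {"Wind": {"Strong": "No", "Weak": "Yes"}},
}}

def tree_to_rules(node, conditions=()):
    """Return (conditions, label) pairs, one per root-to-leaf path."""
    if not isinstance(node, dict):                  # leaf: one finished rule
        return [(conditions, node)]
    (attribute, branches), = node.items()
    rules = []
    for value, subtree in branches.items():
        rules.extend(tree_to_rules(subtree, conditions + ((attribute, value),)))
    return rules

for conds, label in tree_to_rules(tree):
    body = " AND ".join(f"{a}={v}" for a, v in conds)
    print(f"If ({body}) Then PlayTennis={label}")   # reproduces rules R1-R5
```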

16 Decision tree classifiers
Decision tree classifiers do not require any prior knowledge of the data distribution and work well on noisy data. They have been applied to classify medical patients by disease, equipment malfunctions by cause, and loan applicants by likelihood of payment.

17 Decision trees
The internal nodes are simple decision rules on one or more attributes, and the leaf nodes are predicted class labels.
Example: a small tree whose internal nodes test Salary < 1M, Prof = teaching and Age < 30, with leaves labelled Good or Bad.

18 Sample Experience Table
Table: examples D1-D13 with attributes Hour (8 AM, 9 AM, 10 AM), Weather (Sunny, Cloudy, Rainy), Accident (Yes/No) and Stall (Yes/No), and the target Commute length (Short, Medium, Long); e.g. D1 is an 8 AM, sunny commute classified as Long.

19 Decision Tree Analysis
For choosing the best course of action when future outcomes are uncertain.

20 Classification
Data: the data has k attributes A1, …, Ak. Each tuple (case or example) is described by values of the attributes and a class label.
Goal: to learn rules or to build a model that can be used to predict the classes of new (or future, or test) cases. The data used for building the model is called the training data.
Induction is different from deduction, and a DBMS does not support induction; the result of induction is higher-level information or knowledge: general statements about the data.
There are many approaches; refer to the lecture notes for CS3244 available at the Co-Op. We focus on three approaches here. Other approaches include instance-based learning, neural networks, concept learning (version space, Focus, AQ11, …), genetic algorithms and reinforcement learning.

21 Data and its format
Data: attribute-value pairs, with or without a class label
Data type: continuous or discrete, nominal
Data format: flat. If the data is not flat, what should we do?

22 Induction Algorithm We calculate the gain for each attribute and choose the attribute with the maximum gain as the node in the tree. After building the node, we calculate the gain for the remaining attributes within each branch and again choose the maximum.
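A tiny sketch of this selection rule; gain is any function gain(examples, attribute) -> float, for example the information_gain() outlined after slide 10 (the helper name is an assumption, not the slides' code).

```python
def best_attribute(examples, attributes, gain):
    """Return the attribute whose split yields the maximum gain on these examples."""
    return max(attributes, key=lambda a: gain(examples, a))
```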

23 Decision Tree Induction Algorithm
Create a root node for the tree.
If all cases are positive, return a single-node tree with label +.
If all cases are negative, return a single-node tree with label -.
Otherwise, choose an attribute for the root and, for each possible value of that attribute:
add a new tree branch for the value,
let the cases be the subset of the data that have this value,
and build a new subtree below the branch recursively from that subset, stopping when a leaf node is reached.
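Putting the pieces together, here is a hedged, runnable sketch of an ID3-style learner over nominal attributes. It repeats the entropy/gain helpers from the slide 10 sketch so that it runs on its own; examples are dicts with a "class" key and the tree uses the nested-dict representation from the slide 5 sketch. None of the names come from the slides.

```python
import math
from collections import Counter

def entropy(examples, target="class"):
    total = len(examples)
    return sum(-(c / total) * math.log2(c / total)
               for c in Counter(e[target] for e in examples).values())

def information_gain(examples, attribute, target="class"):
    gain = entropy(examples, target)
    for value in set(e[attribute] for e in examples):
        subset = [e for e in examples if e[attribute] == value]
        gain -= len(subset) / len(examples) * entropy(subset, target)
    return gain

def majority_class(examples, target="class"):
    return Counter(e[target] for e in examples).most_common(1)[0][0]

def id3(examples, attributes, target="class", default="?"):
    if not examples:                    # no cases left: use the parent's majority class
        return default
    labels = {e[target] for e in examples}
    if len(labels) == 1:                # all cases share one label
        return labels.pop()
    if not attributes:                  # no attributes left: majority class
        return majority_class(examples, target)
    best = max(attributes, key=lambda a: information_gain(examples, a, target))
    node = {best: {}}
    for value in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == value]
        node[best][value] = id3(subset, [a for a in attributes if a != best],
                                target, default=majority_class(examples, target))
    return node

# e.g. tree = id3(data, ["Outlook", "Temp.", "Humidity", "Wind"], target="Play Tennis")
```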

24 Impurity Measures
Information entropy: E(S) = - Σ_{i=1..c} p_i log2 p_i
It is zero when the set consists of only one class, and one when all classes are present in equal numbers (for two classes).
Other measures of impurity include the Gini index: Gini(S) = 1 - Σ_{i=1..c} p_i²
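A short side-by-side sketch of the two impurity measures over a list of class labels (helper names are mine, not the slides').

```python
import math
from collections import Counter

def proportions(labels):
    total = len(labels)
    return [c / total for c in Counter(labels).values()]

def entropy_impurity(labels):
    return sum(-p * math.log2(p) for p in proportions(labels) if p > 0)

def gini_impurity(labels):
    return 1 - sum(p * p for p in proportions(labels))

print(entropy_impurity(["+", "+", "-", "-"]))  # 1.0 (two classes in equal number)
print(gini_impurity(["+", "+", "-", "-"]))     # 0.5
print(entropy_impurity(["+", "+", "+"]))       # 0.0 (only one class)
```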

25 Review of log2
log2(0) is undefined (it tends to -∞); in the entropy formula we use the convention 0 · log2(0) = 0.
log2(1) = 0, log2(2) = 1, log2(4) = 2

26 Example
Table: six examples (No. 1-6) with boolean attributes A1 and A2 (T/F) and a +/- classification.

27 Example of Decision Tree
Table: objects described by Size (Small, Medium, Large), Shape (Brick, Wedge, Sphere, Pillar) and Color (Red, Blue, Green), with a Yes/No Choice label; e.g. a Medium Blue Brick → Yes, a Small Red Wedge → No.

28 Classification — A Two-Step Process
Model construction: describing a set of predetermined classes based on a training set; this step is also called learning. Each tuple/sample is assumed to belong to a predefined class. The model is represented as classification rules, decision trees, or mathematical formulae.
Model usage: classifying future test data/objects. First estimate the accuracy of the model: the known label of each test example is compared with the classified result from the model, and the accuracy rate is the percentage of test cases that are correctly classified. If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known.
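A small sketch of the accuracy estimate described above: the known label of each test example is compared with the model's prediction. Here predict is any function mapping an example to a label (for instance a trained tree wrapped in the classify() sketch from slide 5); the names are illustrative.

```python
def accuracy(predict, test_examples, target="class"):
    """Fraction of test examples whose known label matches the prediction."""
    correct = sum(1 for e in test_examples if predict(e) == e[target])
    return correct / len(test_examples)
```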

29 Classification Process (1): Model Construction
A learning algorithm takes the training data and produces a classifier (model), e.g. the rule: IF rank = 'professor' OR years > 6 THEN tenured = 'yes'

30 Classification Process (2): Use the Model in Prediction
The classifier is evaluated on testing data and then applied to unseen data, e.g. (Jeff, Professor, 4) → Tenured?

31 Example
Consider the problem of learning whether or not to go jogging on a particular day.
Attribute WEATHER, possible values: Warm, Cold, Raining
Attribute JOGGED_YESTERDAY, possible values: Yes, No

32 The Data
Table: training examples with columns WEATHER (W = warm, C = cold, R = raining), JOGGED_YESTERDAY (Y/N) and CLASSIFICATION (+/-).

33 Test Data
Table: test examples in the same format (WEATHER, JOGGED_YESTERDAY, CLASSIFICATION).

34 ID3: Some Issues
Sometimes we arrive at a node with no examples. This means that such an example has not been observed; we simply assign the majority vote of the parent as the label.
Sometimes we arrive at a node with both positive and negative examples and no attributes left. This means that there is noise in the data; we simply assign the majority vote of the examples as the label.

35 Problems with Decision Trees
ID3 is not optimal: it uses expected entropy reduction, not actual reduction.
It must use discrete (or discretized) attributes. What if we left for work at 9:30 AM? We could break the attribute down into smaller ranges of values…

36 Problems with Decision Trees
While decision trees classify quickly, the time for building a tree may be higher than for other types of classifier. Decision trees also suffer from errors propagating down the tree, which becomes a serious problem as the number of classes increases.

37 Decision Tree characteristics
The training data may contain missing attribute values. Decision tree methods can be used even when some training examples have unknown values.

38 Unknown Attribute Values
What if some examples are missing values of attribute A? Use the training example anyway and sort it through the tree:
if node n tests A, assign the most common value of A among the other examples sorted to node n; or
assign the most common value of A among the other examples with the same target value; or
assign probability p_i to each possible value v_i of A and pass the fraction p_i of the example down each descendant branch.
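A minimal sketch of the first strategy: fill a missing value of A with the most common value of A among the other examples sorted to node n. Treating a missing value as None is an assumption of this sketch, as are the helper names.

```python
from collections import Counter

def most_common_value(examples_at_node, attribute):
    """Most frequent known value of the attribute among the examples at this node."""
    known = [e[attribute] for e in examples_at_node if e.get(attribute) is not None]
    return Counter(known).most_common(1)[0][0]

def fill_missing(example, examples_at_node, attribute):
    """Return a copy of the example with the missing attribute value imputed."""
    if example.get(attribute) is None:
        example = {**example, attribute: most_common_value(examples_at_node, attribute)}
    return example
```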

39 Rule Generation Once a decision tree has been constructed, it is a simple matter to convert it into a set of rules. Converting to rules allows distinguishing among the different contexts in which a decision node is used.

40 Rule Generation Converting to rules improves readability.
Rules are often easier for people to understand. To generate rules, trace each path in the decision tree from root node to leaf node.

41 Rule Simplification
Once a rule set has been devised, individual rules are simplified by eliminating redundant and unnecessary rules. Then attempt to replace the rules that share the most common consequent by a default rule that is triggered when no other rule is triggered.

42 Attribute-Based Representations
Examples of decisions

43 Continuous Valued Attributes
Create a discrete attribute to test a continuous one, e.g. Temperature = 24.5°C, (Temperature > 20.0°C) ∈ {true, false}. Where should the threshold be set?
Temperature: 15°C, 18°C, 19°C, 22°C, 24°C, 27°C, each labelled PlayTennis = No or Yes.
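A small sketch of the usual way to generate candidate thresholds: midpoints between consecutive sorted values of the continuous attribute (the standard ID3/C4.5-style trick; the helper name is illustrative). Each candidate threshold can then be scored with information gain like any other boolean test.

```python
def candidate_thresholds(values):
    """Midpoints between consecutive distinct sorted values of a continuous attribute."""
    vs = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(vs, vs[1:])]

temps = [15, 18, 19, 22, 24, 27]
print(candidate_thresholds(temps))  # [16.5, 18.5, 20.5, 23.0, 25.5]
```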

44 Pruning Trees
There is another technique for reducing the number of attributes used in a tree: pruning. Two types of pruning: pre-pruning (forward pruning) and post-pruning (backward pruning).

45 Prepruning In prepruning, we decide during the building process when to stop adding attributes (possibly based on their information gain). However, this may be problematic. Why? Sometimes attributes individually do not contribute much to a decision, but combined they may have a significant impact.

46 Postpruning Postpruning waits until the full decision tree has been built and then prunes attributes. Two techniques: subtree replacement and subtree raising.

47 Subtree Replacement
An entire subtree is replaced by a single leaf node (in the diagram, a subtree of node A containing nodes 1-5).

48 Subtree Replacement
Node 6 (a leaf) replaces the subtree. This generalizes the tree a little more, but may increase accuracy.
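A rough sketch of subtree replacement in the nested-dict representation used earlier, phrased as reduced-error pruning: a subtree becomes a majority-class leaf whenever the leaf does at least as well on a held-out pruning set. The slides do not fix a pruning criterion, so the pruning-set test and all names here are assumptions.

```python
from collections import Counter

def majority_class(examples, target="class"):
    return Counter(e[target] for e in examples).most_common(1)[0][0]

def classify(tree, example):
    while isinstance(tree, dict):
        (attribute, branches), = tree.items()
        tree = branches.get(example[attribute])
        if tree is None:                  # unseen attribute value: count as a miss
            return None
    return tree

def prune(tree, pruning_set, target="class"):
    """Replace subtrees by majority-class leaves when that does not hurt pruning-set accuracy."""
    if not isinstance(tree, dict) or not pruning_set:
        return tree
    (attribute, branches), = tree.items()
    for value, subtree in branches.items():            # prune children bottom-up first
        subset = [e for e in pruning_set if e[attribute] == value]
        branches[value] = prune(subtree, subset, target)
    leaf = majority_class(pruning_set, target)          # candidate replacement leaf
    tree_hits = sum(classify(tree, e) == e[target] for e in pruning_set)
    leaf_hits = sum(leaf == e[target] for e in pruning_set)
    return leaf if leaf_hits >= tree_hits else tree
```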

49 Subtree Raising
An entire subtree is raised to replace its parent node (nodes A, B, C and leaves 1-5 in the diagram).

50 Subtree Raising
An entire subtree is raised onto another node (in the diagram the result is A with child C and leaves 1-3). This was not discussed in detail, as it is not clear whether subtree raising is really worthwhile (it is very time consuming).


