Decision Tree Learning


1 Decision Tree Learning
Chapter 3: Decision Tree Learning
- Decision tree representation
- ID3 learning algorithm
- Entropy, information gain
- Overfitting

2 Review example: Image Categorization (two phases)
[Figure: two-phase pipeline. Training: training images -> image features -> classifier training (using training labels) -> trained classifier. Testing: test image -> image features -> trained classifier -> prediction (e.g. "Outdoor").]

3 Inductive Learning
Learning a function from examples. Occam's Razor: prefer the simplest hypothesis consistent with the data. One of the most widely used inductive learning methods is Decision Tree Learning.

4 Decision Tree Example
[Tree diagram over a toy data set with attributes Color (red / green / blue), Shape (round / square), and Size (big / small); leaves marked + are filled blue, leaves marked - are filled red.]
- Each internal node corresponds to a test
- Each branch corresponds to a result of the test
- Each leaf node assigns a classification

5 PlayTennis: Training Examples
[Table of training examples D1-D14: each sample is described by attributes (Outlook, Temperature, Humidity, Wind) and the target attribute PlayTennis.]

6 A Decision Tree for the concept PlayTennis
[Tree: Outlook? with branches Sunny, Overcast, Rain; Sunny -> Humidity? (High -> No, Normal -> Yes); Overcast -> Yes; Rain -> Wind? (Strong -> No, Weak -> Yes).]
- Finding the most suitable attribute for the root
- Finding redundant attributes, like Temperature

7 Converting a tree to rules

8 Decision trees can represent any Boolean function
If (Outlook = Sunny AND Humidity = Normal) OR (Outlook = Overcast) OR (Outlook = Rain AND Wind = Weak) then Yes. A decision tree expresses "a disjunction of conjunctions of constraints on attribute values".
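As a minimal sketch, the rule above can be written directly as a Boolean function; Python is used for illustration, and the function name is my own choice:

```python
def play_tennis(outlook, humidity, wind):
    """The tree's concept as a disjunction of conjunctions of attribute tests."""
    return ((outlook == "Sunny" and humidity == "Normal")
            or outlook == "Overcast"
            or (outlook == "Rain" and wind == "Weak"))

print(play_tennis("Sunny", "High", "Strong"))     # False (No)
print(play_tennis("Overcast", "High", "Strong"))  # True  (Yes)
```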

9 Decision trees can represent any Boolean function
In the worst case a tree may need exponentially many nodes; XOR (parity) is an extreme case.

10 Decision tree, decision boundaries

11 Decision Regions

12 Decision Trees
- One of the most widely used and practical methods for inductive inference
- Approximates discrete-valued functions; can be extended to continuous-valued functions
- Can be used for classification (most common) or regression problems
- Regression: analyzing the relationship between two continuous variables, e.g. fitting a line to a set of points

13 Decision Trees for Regression
(Continuous values)

14 Divide and Conquer
Internal decision nodes:
- Univariate: uses a single attribute, xi
  - Discrete xi: n-way split for n possible values
  - Continuous xi: binary split, e.g. xi > wm
- Multivariate: uses more than one attribute
Leaves:
- Classification: class labels
- Regression: a numeric value
Once the tree is trained, a new instance is classified by starting at the root and following the path dictated by the test results for this instance (see the sketch below).
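A minimal sketch of that traversal; the nested-dict tree representation and the attribute names are assumptions chosen for illustration:

```python
# A trained univariate decision tree, represented (for illustration) as nested
# dicts: internal nodes are {attribute: {value: subtree, ...}}, leaves are labels.
tree = {"Outlook": {
    "Sunny":    {"Humidity": {"High": "No", "Normal": "Yes"}},
    "Overcast": "Yes",
    "Rain":     {"Wind": {"Strong": "No", "Weak": "Yes"}},
}}

def classify(node, instance):
    """Start at the root and follow the branch dictated by each test
    until a leaf (class label) is reached."""
    while isinstance(node, dict):
        attribute = next(iter(node))                 # attribute tested at this node
        node = node[attribute][instance[attribute]]  # follow the matching branch
    return node

print(classify(tree, {"Outlook": "Rain", "Humidity": "High", "Wind": "Weak"}))  # Yes
```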

15 Decision tree learning algorithm
Learning process: finding a tree from the training set. For a given training set there are many trees that encode it without any error. Finding the smallest tree is NP-complete (Quinlan 1986), hence we are forced to use some (local) search algorithm to find reasonable solutions.

16 ID3: The basic decision tree learning algorithm
Basic idea: a decision tree can be constructed by considering the attributes of instances one by one. Which attribute should be considered first? The height of the decision tree depends on the order in which attributes are considered. ==> Entropy

17 Which Attribute is ”best”?
[Figure: the same 64 examples [29+, 35-] split two ways. A1=? True -> [21+, 5-], False -> [8+, 30-]. A2=? True -> [18+, 33-], False -> [11+, 2-].]
Entropy: large entropy => more information

18 Entropy
Entropy(S) = -p+ log2(p+) - p- log2(p-)
- S is the set of training examples
- p+ is the proportion of positive examples
- p- is the proportion of negative examples
Exercise: calculate the entropy in two cases (note that 0 log2 0 = 0):
- p+ = 0.5, p- = 0.5
- p+ = 1, p- = 0
Question: why the minus signs in the equation?

19 Entropy
- A measure of uncertainty
- Example: the probability of heads for a fair coin versus a coin with two heads
- Information theory: high entropy = high uncertainty = a more random phenomenon = more information = more bits needed to encode it

20 Entropy
For multi-class problems with c categories, entropy generalizes to: Entropy(S) = -sum_{i=1..c} pi log2(pi)
Q: Why log2?
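A minimal sketch of the entropy computation (the counts-based interface is my own choice); it reproduces the binary exercise from slide 18 and the multi-class generalization:

```python
from math import log2

def entropy(counts):
    """Entropy of a class distribution given raw counts per class,
    using the convention 0*log2(0) = 0."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

print(entropy([1, 1]))    # p+ = p- = 0.5  -> 1.0 bit
print(entropy([1, 0]))    # p+ = 1, p- = 0 -> 0.0 bits
print(entropy([29, 35]))  # the [29+, 35-] set from slide 17 -> ~0.99
```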

21 Information Gain
Gain(S, A): the expected reduction in entropy due to sorting S on attribute A:
Gain(S, A) = Entropy(S) - sum over v in Values(A) of (|Sv| / |S|) * Entropy(Sv)
where Sv is the subset of S having value v for attribute A.
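A minimal sketch of Gain(S, A), reusing the entropy helper from the previous block; representing the training set as a list of (attribute-dict, label) pairs is an assumption for illustration:

```python
from collections import Counter

def information_gain(examples, attribute):
    """Gain(S, A) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v).
    `examples` is a list of (attributes_dict, label) pairs."""
    def label_entropy(subset):
        return entropy(list(Counter(label for _, label in subset).values()))
    gain = label_entropy(examples)
    for value in {attrs[attribute] for attrs, _ in examples}:
        subset = [(a, l) for a, l in examples if a[attribute] == value]
        gain -= len(subset) / len(examples) * label_entropy(subset)
    return gain
```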

22 Information Gain
Entropy(S) = ?, Gain(S, A1) = ?, Gain(S, A2) = ?
[Figure: the two splits from slide 17: A1=? True -> [21+, 5-], False -> [8+, 30-]; A2=? True -> [18+, 33-], False -> [11+, 2-]; both from S = [29+, 35-].]
Entropy([29+, 35-]) = -29/64 log2(29/64) - 35/64 log2(35/64) = 0.99

23 Information Gain
Entropy([18+, 33-]) = 0.94
Entropy([11+, 2-]) = 0.62
Gain(S, A2) = Entropy(S) - 51/64 * Entropy([18+, 33-]) - 13/64 * Entropy([11+, 2-]) = 0.11
Entropy([21+, 5-]) = 0.71
Entropy([8+, 30-]) = 0.74
Gain(S, A1) = Entropy(S) - 26/64 * Entropy([21+, 5-]) - 38/64 * Entropy([8+, 30-]) = 0.27
Gain(S, A1) > Gain(S, A2), so A1 is placed higher in the tree.
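Using the entropy helper sketched earlier, these numbers can be checked directly; computing from the raw counts gives Gain(S, A2) of about 0.12 (the slide's 0.11 reflects rounding of the intermediate entropies), and either way A1 wins:

```python
H_S = entropy([29, 35])                                                # ~0.99
gain_A1 = H_S - 26/64 * entropy([21, 5]) - 38/64 * entropy([8, 30])    # ~0.27
gain_A2 = H_S - 51/64 * entropy([18, 33]) - 13/64 * entropy([11, 2])   # ~0.12
print(round(gain_A1, 2), round(gain_A2, 2))  # A1 has the larger gain
```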

24 ID3 for the playTennis example

25 ID3: The Basic Decision Tree Learning Algorithm
What is the “best” attribute? [“best” = with highest information gain] Answer: Outlook

26 ID3 (Cont'd)
[Tree after splitting on Outlook: branches Sunny, Overcast, Rain, with the examples D1-D14 distributed among them; the Overcast branch is already pure (Yes).]
What are the "best" next attributes? Humidity (under Sunny) and Wind (under Rain).

27 PlayTennis Decision Tree
[Final tree: Outlook? Sunny -> Humidity? (High -> No, Normal -> Yes); Overcast -> Yes; Rain -> Wind? (Strong -> No, Weak -> Yes).]

28 Stopping criteria
- Each leaf node contains examples of one type, or
- The algorithm has run out of attributes

29 ID3
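A minimal recursive sketch of the ID3 procedure, assuming the information_gain helper from the slide 21 sketch; it builds the same nested-dict trees used in the earlier classify() sketch and omits details such as handling attribute values unseen in a subset:

```python
from collections import Counter

def id3(examples, attributes):
    """Minimal recursive ID3: `examples` are (attributes_dict, label) pairs.
    Returns a nested-dict tree, or a class label for a leaf."""
    labels = [label for _, label in examples]
    if len(set(labels)) == 1:            # all examples have the same class
        return labels[0]
    if not attributes:                   # ran out of attributes: majority class
        return Counter(labels).most_common(1)[0][0]
    # choose the attribute with the highest information gain
    best = max(attributes, key=lambda a: information_gain(examples, a))
    tree = {best: {}}
    for value in {attrs[best] for attrs, _ in examples}:
        subset = [(a, l) for a, l in examples if a[best] == value]
        remaining = [a for a in attributes if a != best]
        tree[best][value] = id3(subset, remaining)
    return tree
```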

30 Overfitting in Decision Trees
Why "over"-fitting? A model can become more complex than the true target function (concept) when it tries to fit noisy data as well.
[Plot: accuracy versus hypothesis complexity; accuracy on the training data keeps increasing while accuracy on the test data eventually drops.]

31 Overfitting in Decision Trees

32 Overfitting Example: Testing Ohm's Law, V = IR
- Experimentally measure 10 points (current I, voltage V)
- Fit a curve to the resulting data
- A 9th-degree polynomial gives a perfect fit to the training data (n points can be fit exactly with a degree n-1 polynomial)
- "Ohm was wrong, we have found a more accurate function!"
[Plot: voltage (V) versus current (I), with the 9th-degree polynomial passing through every measurement.]

33 Overfitting Example: Testing Ohm's Law, V = IR
Better generalization with a linear function that fits the training data less accurately.
[Plot: voltage (V) versus current (I), with a straight-line fit.]
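A small NumPy sketch of the same point; the resistance value, noise level, and measurement range are invented for illustration. The 9th-degree polynomial fits the 10 noisy points exactly but extrapolates wildly, while the straight line generalizes:

```python
import numpy as np

rng = np.random.default_rng(0)
I = np.linspace(0.1, 1.0, 10)                 # 10 current measurements (A)
V_true = 5.0 * I                              # Ohm's law with R = 5 ohms
V = V_true + rng.normal(0, 0.2, size=10)      # noisy voltage readings

p9 = np.polyfit(I, V, 9)   # 9th-degree polynomial: fits the 10 points exactly
p1 = np.polyfit(I, V, 1)   # linear fit: small training error, generalizes better

I_new = 1.5                                   # a current outside the training range
print(np.polyval(p9, I_new))  # wild extrapolation from the overfit polynomial
print(np.polyval(p1, I_new))  # close to the true 7.5 V
```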

34 Avoiding overfitting the data
How can we avoid overfitting? There are two approaches:
- Early stopping: stop growing the tree before it perfectly classifies the training data
- Pruning: grow the full tree, then prune it (reduced-error pruning, rule post-pruning)
The pruning approach has been found more useful in practice.

35 Other issues in Decision tree learning
- Incorporating continuous-valued attributes
- Alternative measures for selecting attributes
- Handling training examples with missing attribute values
- Handling attributes with different costs

36 Strengths and Advantages of Decision Trees
- Rule extraction from trees
- A decision tree can be used for feature extraction (e.g. seeing which features are useful)
- Interpretability: human experts may verify and/or discover patterns
- A compact and fast classification method

37 Your Assignments
HW1 is uploaded. Due date: 94/08/14.
Proposal: same due date, one page maximum. Include the following information:
- Project title
- Data set
- Project idea (approximately two paragraphs)
- Software you will need to write
- Papers to read (include 1-3 relevant papers)
- Teammate (if any) and work division; we expect projects done in a group to be more substantial than projects done individually

