1 Learning what questions to ask

2
• The job is to build a tree that represents the series of questions the classifier will ask of a data instance that is to be classified.
• Each node is a question about the value the instance to be classified has in a particular dimension.
• Discrete data: the fan-out of each node is determined by how many different values that dimension can take on.
• How would the decision tree classify this data instance?

  Outlook | Humidity | Wind | Play Tennis?
  Sunny   | Normal   | Weak | ???
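The slides contain no code, but a minimal sketch (an assumption, not part of the deck) of how a learned tree could be stored and queried makes the classification walk concrete. The nested-dict layout, the `classify` helper, and the tree shown (the familiar Play Tennis tree from the textbook example) are illustrative choices here.

```python
# Minimal sketch: a learned tree stored as nested dicts.
# Internal nodes are {"attribute": ..., "branches": {value: subtree}}; leaves are class labels.
tree = {
    "attribute": "Outlook",
    "branches": {
        "Sunny":    {"attribute": "Humidity",
                     "branches": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain":     {"attribute": "Wind",
                     "branches": {"Weak": "Yes", "Strong": "No"}},
    },
}

def classify(node, instance):
    """Walk the tree, asking one question per node, until a leaf (class label) is reached."""
    while isinstance(node, dict):
        answer = instance[node["attribute"]]
        node = node["branches"][answer]
    return node

print(classify(tree, {"Outlook": "Sunny", "Humidity": "Normal", "Wind": "Weak"}))  # -> "Yes"
```

The fan-out of each node is simply the number of keys in its "branches" dict, mirroring the slide's point about discrete data.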

3
• Training data is used to build the tree.
• How do we decide what question to ask first?
• Remember the curse of dimensionality: there might be just a few dimensions that are important, and the rest could be random.

4
• What question can I ask about the data that will give me the most information gain, i.e. get me closer to being able to classify?
• This means identifying the most important dimension (the most important question): What is the outlook? How humid is it? How windy is it?

5 Another statistical approach
• The approach comes out of information theory.
• From Wikipedia: developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data.
• Basically: how much information can I cram into a given signal (how many bits can I encode)?

6
• Starts with entropy: entropy is a measure of the homogeneity of the data.
• Purely random data (nothing but noise) has maximum entropy; linearly separable data has minimum entropy.
• What does that mean with discrete data?
• Given all instances with a sunny outlook: what if every "low humidity" instance were classified "yes, play tennis" and every "high humidity" instance were classified "no, do not play tennis"? High entropy or low?
• Given all instances with a sunny outlook: what if half were "yes, play tennis" and half "no, don't play," no matter what the humidity? High entropy or low?

7
• If we are going to measure it…
• …we want a statistical approach that yields…

8
• What if a sample is 20% / 80%?
• log2(0.2) = log(0.2)/log(2) = -2.321928
• log2(0.8) = -0.321928
• Entropy = -(0.2)(-2.321928) - (0.8)(-0.321928) = 0.721928
• What if it is 80% / 20%? Same, by symmetry.
• What if it is 50% / 50%? Highest entropy: 1.
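A tiny Python check of the slide's arithmetic; the `entropy` helper name is invented for illustration.

```python
from math import log2

def entropy(p_pos):
    """Binary entropy: H = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p_pos in (0.0, 1.0):
        return 0.0
    p_neg = 1.0 - p_pos
    return -p_pos * log2(p_pos) - p_neg * log2(p_neg)

print(entropy(0.2))   # ~0.7219, matching the slide's arithmetic
print(entropy(0.8))   # same, by symmetry
print(entropy(0.5))   # 1.0, the maximum for two classes
```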

9
• Can extend to more than two classes, not just positive and negative.
• If we set the log base to the number of classes, the maximum entropy is again 1; if we stick with base 2, the maximum grows to log2(c) for c classes.
• From the book: entropy is a measure of the expected encoding length, measured in bits.
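A sketch of the multi-class generalization with a configurable log base, assuming the usual definition H = -Σ p_i log_b(p_i); the function and variable names are made up for illustration.

```python
import math

def entropy(proportions, base=2):
    """Entropy of a discrete distribution; base=2 gives bits,
    base=len(proportions) rescales the maximum to 1."""
    return -sum(p * math.log(p, base) for p in proportions if p > 0)

uniform3 = [1/3, 1/3, 1/3]
print(entropy(uniform3, base=2))              # ~1.585 bits, i.e. log2(3)
print(entropy(uniform3, base=len(uniform3)))  # ~1.0 when the log base equals the number of classes
```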

10 Humidity question or Windy question?
• Information gain is simply the expected reduction in entropy caused by partitioning the examples according to this attribute:
  Gain(S, A) = Entropy(S) - Σ_v (|S_v| / |S|) * Entropy(S_v)
• The weight |S_v| / |S| scales the contribution of each answer according to its membership.
• If the entropy of S is 1 and each of the entropies for the answers is 1, then 1 - 1 = 0: the information gain is zero.
• If the entropy of S is 1 and each of the entropies for the answers is 0, then 1 - 0 = 1: the information gain is 1.
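A hedged sketch of the gain computation described above; the dict-of-lists grouping and the toy four-row data set are assumptions chosen so that both extreme cases on the slide (gain 0 and gain 1) appear.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy (base 2) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, attribute):
    """Entropy(S) minus the membership-weighted entropy of each answer's subset."""
    total = entropy(labels)
    n = len(examples)
    by_value = {}
    for x, y in zip(examples, labels):
        by_value.setdefault(x[attribute], []).append(y)
    remainder = sum(len(subset) / n * entropy(subset) for subset in by_value.values())
    return total - remainder

# Tiny hypothetical subset of the Play Tennis data, just to show the call:
X = [{"Humidity": "High", "Wind": "Weak"},
     {"Humidity": "High", "Wind": "Strong"},
     {"Humidity": "Normal", "Wind": "Weak"},
     {"Humidity": "Normal", "Wind": "Strong"}]
y = ["No", "No", "Yes", "Yes"]
print(information_gain(X, y, "Humidity"))  # 1.0 here: each answer's subset is pure
print(information_gain(X, y, "Wind"))      # 0.0 here: no reduction in entropy
```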

11
• What is the information gain?

12
• Recursive algorithm: ID3 (Iterative Dichotomizer 3)

  ID3(S, attributes yet to be processed):
      create a Root node for the tree
      base cases:
          if all of S have the same class, return the single-node tree Root with that label
          if attributes is empty, return Root with the label of the most common class in S
      otherwise:
          find the attribute A with the greatest information gain
          set A as the decision attribute for Root
          for each value v that A can take:
              add a new branch below Root
              determine S_v, the subset of S whose value for A is v
              if S_v is empty:
                  add a leaf with the label of the most common class in S
              else:
                  add the subtree ID3(S_v, attributes - {A}) to this branch
      return Root
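The pseudocode above can be rendered as a short runnable sketch. This is one possible rendering, not the lecture's reference implementation: the empty-S_v branch is skipped because the loop only visits values that actually occur in the data, and the toy rows at the bottom are invented just to show the call.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr], []).append(label)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def id3(rows, labels, attributes):
    """Return a nested-dict tree; leaves are class labels (mirrors the slide's pseudocode)."""
    if len(set(labels)) == 1:                      # all examples share one class
        return labels[0]
    if not attributes:                             # no attributes left to test
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: information_gain(rows, labels, a))
    node = {"attribute": best, "branches": {}}
    for value in {row[best] for row in rows}:      # only values that occur in the data
        sub_rows = [r for r in rows if r[best] == value]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == value]
        node["branches"][value] = id3(sub_rows, sub_labels,
                                      [a for a in attributes if a != best])
    return node

# Hypothetical toy data, just to show the call:
rows = [{"Outlook": "Sunny", "Wind": "Weak"},  {"Outlook": "Sunny", "Wind": "Strong"},
        {"Outlook": "Rain",  "Wind": "Weak"},  {"Outlook": "Rain",  "Wind": "Strong"}]
labels = ["Yes", "Yes", "Yes", "No"]
print(id3(rows, labels, ["Outlook", "Wind"]))
```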

13
• Which attribute next?

14
• Next attribute?

15
• Is there a branch for every answer?
• What if no training samples had "overcast" as their outlook?
• Could you classify a new, unknown test instance if it had "overcast" in that dimension?
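The slide leaves the question open. One common remedy (an assumption here, not something the slide prescribes) is to store the majority class at every decision node and fall back to it whenever the instance's value has no branch.

```python
def classify_with_fallback(node, instance):
    """Walk a nested-dict tree; if no branch exists for the instance's value,
    fall back to that node's stored majority class."""
    while isinstance(node, dict):
        value = instance[node["attribute"]]
        if value not in node["branches"]:
            return node["majority_class"]   # no training sample ever had this answer
        node = node["branches"][value]
    return node

# Hypothetical tree where no training sample was "Overcast":
tree = {"attribute": "Outlook", "majority_class": "Yes",
        "branches": {"Sunny": "No", "Rain": "Yes"}}
print(classify_with_fallback(tree, {"Outlook": "Overcast"}))  # -> "Yes"
```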

16
• The tree often perfectly classifies the training data.
• Not guaranteed, but usual: if we exhaust every dimension as we drill down, the last decision node might have answers that are still "impure," and it gets labeled with the most abundant class.
• For instance: on the cancer data, my tree had no leaves deeper than 4 levels.
• It basically memorizes the training data. Is this the best policy?
• What if we had a node that "should" be pure but had a single exception?

17
• Decision boundary
• Sometimes it is better to live with a little error than to try to get perfection.

18
• Wikipedia: In statistics, overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship.

19
• A Bayesian classifier finds the boundary that minimizes error.
• If we trim the decision tree's leaves, we get a similar effect; i.e. we don't try to memorize every single training sample.

20
• You don't know until you know.
• Withhold some data and use it to test.
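A minimal holdout sketch, assuming plain Python lists for the data; names such as `train_validation_split` are invented for illustration.

```python
import random

def train_validation_split(rows, labels, holdout_fraction=0.3, seed=0):
    """Withhold a fraction of the data for testing; train only on the rest."""
    indices = list(range(len(rows)))
    random.Random(seed).shuffle(indices)
    cut = int(len(indices) * holdout_fraction)
    held_out, kept = indices[:cut], indices[cut:]
    return ([rows[i] for i in kept], [labels[i] for i in kept],
            [rows[i] for i in held_out], [labels[i] for i in held_out])

def accuracy(predictions, truth):
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)

# Tiny dummy data, just to show the call:
rows = [{"x": i} for i in range(10)]
labels = ["Yes" if i < 5 else "No" for i in range(10)]
train_X, train_y, val_X, val_y = train_validation_split(rows, labels)
print(len(train_X), len(val_X))  # 7 3
```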

21
• Stop growing the tree early: set some threshold for allowable entropy.
• Post-pruning: build the full tree, then remove nodes as long as doing so improves performance.

22
• Remove each decision node in turn and check performance (on the withheld data).
• Removing a decision node means removing all subtrees below it and assigning the most common class.
• Permanently remove the decision node whose removal caused the greatest increase in accuracy.
• Rinse and repeat.
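A sketch of this reduced-error pruning loop over the nested-dict tree format used in the earlier sketches (which assumes each decision node also stores a `majority_class`). The stopping rule, prune only while validation accuracy strictly improves, is one reasonable reading of the slide, not a definitive implementation.

```python
import copy

def classify(node, instance):
    while isinstance(node, dict):
        node = node["branches"].get(instance[node["attribute"]], node["majority_class"])
    return node

def accuracy(tree, rows, labels):
    return sum(classify(tree, r) == l for r, l in zip(rows, labels)) / len(labels)

def decision_nodes(node, path=()):
    """Yield the path (sequence of branch values) to every internal node."""
    if isinstance(node, dict):
        yield path
        for value, child in node["branches"].items():
            yield from decision_nodes(child, path + (value,))

def pruned_copy(tree, path):
    """Copy the tree with the node at `path` collapsed to its majority class."""
    new_tree = copy.deepcopy(tree)
    if not path:
        return new_tree["majority_class"]
    node = new_tree
    for value in path[:-1]:
        node = node["branches"][value]
    node["branches"][path[-1]] = node["branches"][path[-1]]["majority_class"]
    return new_tree

def reduced_error_prune(tree, val_rows, val_labels):
    """Repeatedly remove the decision node whose removal most improves validation accuracy."""
    best_acc = accuracy(tree, val_rows, val_labels)
    while isinstance(tree, dict):
        candidates = [(accuracy(p, val_rows, val_labels), p)
                      for p in (pruned_copy(tree, path) for path in decision_nodes(tree))]
        acc, candidate = max(candidates, key=lambda c: c[0])
        if acc <= best_acc:          # no removal strictly helps any more: stop
            return tree
        best_acc, tree = acc, candidate
    return tree

# Hypothetical over-grown tree and a tiny validation set:
tree = {"attribute": "Wind", "majority_class": "Yes",
        "branches": {"Weak": "Yes",
                     "Strong": {"attribute": "Humidity", "majority_class": "Yes",
                                "branches": {"High": "No", "Normal": "Yes"}}}}
val_X = [{"Wind": "Weak", "Humidity": "High"}, {"Wind": "Strong", "Humidity": "High"}]
val_y = ["Yes", "Yes"]
print(reduced_error_prune(tree, val_X, val_y))  # collapses to a single leaf in this toy case
```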

23

24
• A decision tree can be rewritten as a series of rules, one per root-to-leaf path.
• In rule form, a node's test could be present in one rule and absent from another.
• Imagine a bifurcation where one track has only the first and last "node."
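A small sketch of turning root-to-leaf paths into rules, again over the hypothetical nested-dict trees from the earlier sketches; the rule text format is arbitrary.

```python
def tree_to_rules(node, conditions=()):
    """Turn every root-to-leaf path into an IF ... THEN ... rule."""
    if not isinstance(node, dict):            # leaf: emit one rule
        if conditions:
            return ["IF " + " AND ".join(conditions) + f" THEN {node}"]
        return [f"ALWAYS {node}"]
    rules = []
    for value, child in node["branches"].items():
        rules += tree_to_rules(child, conditions + (f'{node["attribute"]} = {value}',))
    return rules

tree = {"attribute": "Outlook",
        "branches": {"Sunny": {"attribute": "Humidity",
                               "branches": {"High": "No", "Normal": "Yes"}},
                     "Overcast": "Yes"}}
for rule in tree_to_rules(tree):
    print(rule)
# IF Outlook = Sunny AND Humidity = High THEN No
# IF Outlook = Sunny AND Humidity = Normal THEN Yes
# IF Outlook = Overcast THEN Yes
```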

25 Bootstrap aggregating (bagging)
• Helps to avoid overfitting.
• Usually applied to decision tree models, though not exclusively.

26
• A machine learning ensemble meta-algorithm.
• Create a bunch of models by bootstrap-sampling the training data.
• Let all the models vote.
[Figure: several trees of questions (Q1 ... Q4), each built from a different bootstrap sample, casting votes ("Pick me!")]
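A bagging sketch. It leans on scikit-learn's DecisionTreeClassifier as the base model and NumPy for the bootstrap sampling; neither library is mentioned in the slides, and the toy XOR-style data is invented just to exercise the vote.

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def bagged_trees(X, y, n_models=25, seed=0):
    """Train n_models trees, each on a bootstrap sample (same size, drawn with replacement)."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample of row indices
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def vote(models, x):
    """Let every model vote; the majority class wins."""
    predictions = [m.predict(x.reshape(1, -1))[0] for m in models]
    return Counter(predictions).most_common(1)[0][0]

# Toy data, just to show the calls:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 5)
y = np.array([0, 1, 1, 0] * 5)
models = bagged_trees(X, y)
print(vote(models, np.array([1, 0])))   # majority vote over 25 trees (should be 1 here)
```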

27
• A (random) forest is a bunch of trees.
• Each tree has access to a random subset of the attributes/dimensions.
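A sketch that follows the slide's wording: each tree gets a bootstrap sample plus a random subset of the attributes. Note that scikit-learn's RandomForestClassifier instead re-draws the candidate features at every split; the helper names and toy data here are made up.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def forest_with_feature_subsets(X, y, n_trees=25, n_features=2, seed=0):
    """Each tree sees a bootstrap sample of the rows and a random subset of the columns."""
    rng = np.random.default_rng(seed)
    forest = []
    for _ in range(n_trees):
        rows = rng.integers(0, len(X), size=len(X))              # bootstrap rows
        cols = rng.choice(X.shape[1], size=n_features, replace=False)  # random attributes
        tree = DecisionTreeClassifier().fit(X[np.ix_(rows, cols)], y[rows])
        forest.append((tree, cols))
    return forest

def forest_predict(forest, x):
    """Each tree votes using only the columns it was trained on."""
    votes = [tree.predict(x[cols].reshape(1, -1))[0] for tree, cols in forest]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]

# Toy data, just to show the calls:
X = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]] * 5)
y = np.array([0, 1, 1, 0] * 5)
forest = forest_with_feature_subsets(X, y, n_features=2)
print(forest_predict(forest, np.array([1, 0, 0])))   # majority vote across the forest
```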

28
• ID3 is a greedy algorithm: it tries to race to an answer.
• It finds the next question that best splits the data into classes by answer.
• Result: short trees are preferred.

29
• The simplest answer is often the best.
• But does this lead to the best classifier?
• The book has a philosophical discussion about this without resolving the issue.

30
• Many classifiers simply give an answer, with no reason behind it.
• Decision trees are one of the few classifiers that provide such insight.

31

