1 Machine Learning in Practice Lecture 18
Carolyn Penstein Rosé Language Technologies Institute/ Human-Computer Interaction Institute

2 Plan for the Day
Announcements
  Questions?
  Quiz Feedback
Rule Based Learning
  Revisit the Tic Tac Toe Problem
Start thinking about Optimization and Tuning

3 Quiz Feedback
Only one person got everything right
Were the readings confusing this time?
Association Rule Mining vs. Rule Learning

4 Rule Based Learning

5 Rules Versus Trees
Tree based learning is "divide-and-conquer": decisions are based on what will have the biggest overall effect on "purity" at the leaf nodes.
Rule based learning is "separate-and-conquer": it considers only one class at a time (usually starting with the smallest) and asks what separates this class from the default class.

6 Trees vs. Rules J48

7 Trees vs. Rules J48

8 Locally Optimal Solutions
Foreshadowing... (figure: optimal solution vs. locally optimal solution)

9 Covering Algorithms
Rule based algorithms are called covering algorithms.
Whereas tree based algorithms take all classes into account at the same time, covering algorithms consider only one class at a time.
Rule based algorithms look for a set of conditions that achieve high accuracy on one class at a time, as in the sketch below.
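
To make the separate-and-conquer loop concrete, here is a minimal sketch in the spirit of PRISM-style covering. It is an illustrative simplification, not Weka's actual JRip implementation, and the dict-based rule representation is an assumption:

```python
# Minimal separate-and-conquer (covering) sketch for one target class.
# Illustrative only: a PRISM-style simplification, not Weka's JRip.

def learn_rules_for_class(examples, target):
    """examples: list of (attributes_dict, label) pairs.
    Returns a list of rules; each rule maps attribute -> required value."""
    rules = []
    remaining = list(examples)
    while any(label == target for _, label in remaining):
        rule, covered = {}, remaining
        # Grow: greedily add the condition that maximizes rule accuracy.
        while True:
            pos = sum(1 for _, label in covered if label == target)
            if pos == len(covered):
                break  # rule is 100% accurate on what it covers
            best = None
            for attrs, _ in covered:
                for a, v in attrs.items():
                    if a in rule:
                        continue
                    sub = [(x, y) for x, y in covered if x[a] == v]
                    acc = sum(1 for _, y in sub if y == target) / len(sub)
                    if best is None or acc > best[0]:
                        best = (acc, a, v)
            if best is None:
                break  # no conditions left to add
            _, a, v = best
            rule[a] = v
            covered = [(x, y) for x, y in covered if x[a] == v]
        rules.append(rule)
        # Separate: drop the covered examples, then conquer what remains.
        remaining = [(x, y) for x, y in remaining
                     if not all(x.get(a) == v for a, v in rule.items())]
    return rules
```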

10 Accuracy versus Information Gain
Split 1: [A A A A B B B B B] → [A A A B] and [A B B B B]. Accuracy: 78%; Information: .76
Split 2: [A A A A B B B B B] → [B B B] and [A A A A B B]. Accuracy: 78%; Information: .61
* Note that lower resulting information means higher information gain.
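
The numbers on the slide can be reproduced directly from the standard entropy formula (weighted average entropy of the subsets a split produces; lower is better). A small check script, not from the slides:

```python
from math import log2

def entropy(counts):
    """Entropy in bits of a class distribution given as a list of counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def split_information(subsets):
    """Weighted average entropy of the subsets produced by a split."""
    total = sum(sum(s) for s in subsets)
    return sum(sum(s) / total * entropy(s) for s in subsets)

# Split 1: [A A A B] and [A B B B B] -> (3A,1B) and (1A,4B)
print(round(split_information([[3, 1], [1, 4]]), 2))  # 0.76
# Split 2: [B B B] and [A A A A B B] -> (0A,3B) and (4A,2B)
print(round(split_information([[0, 3], [4, 2]]), 2))  # 0.61
# Both splits classify 7 of 9 examples correctly by majority vote: 78%.
```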

11 Accuracy vs Information Gain

12 Rules Don't Need to be Applied in Order
Rules that predict the same class can be re-ordered without affecting performance.
If rules are treated as un-ordered, rules associated with different classes might match at the same time.
In that case you need a tie breaker: maybe rule accuracy, maybe the prior probabilities of each class. A sketch follows below.
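
A small sketch of one such tie-breaking scheme, reusing the dict-based rule representation from the covering sketch above; the fallback to class priors is an illustrative assumption, not a specific Weka behavior:

```python
def classify(instance, rules, priors):
    """rules: list of (conditions, predicted_class, accuracy) triples,
    where conditions maps attribute -> required value.
    priors: class -> prior probability, used when no rule matches."""
    matches = [(cls, acc) for conds, cls, acc in rules
               if all(instance.get(a) == v for a, v in conds.items())]
    if not matches:
        return max(priors, key=priors.get)       # default: most likely class
    return max(matches, key=lambda m: m[1])[0]   # tie-break on rule accuracy
```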

13 Rule Learning
Note that the rules below for each class consider different subsets of attributes.
Note that two conditions were necessary to most accurately predict yum – rule learning algorithms add conditions to rules until accuracy is high enough.
The more complex a rule becomes, the more likely it is to over-fit.

If chocolate cake and not vanilla ice cream then yum
If vanilla ice cream then good
If vanilla cake then ok

@relation is-yummy
@attribute ice-cream {chocolate, vanilla, coffee, rocky-road, strawberry}
@attribute cake {chocolate, vanilla}
@attribute yummy {yum,good,ok}
@data
chocolate,chocolate,yum
vanilla,chocolate,good
coffee,chocolate,yum
coffee,vanilla,ok
rocky-road,chocolate,yum
strawberry,vanilla,ok

14 Rule Induction by Pruning Rules from Trees
Rules can be read off of trees.
They will be overly complex, but they can be pruned in a "greedy" fashion using the same principles discussed here.
You might then get duplicate rules, so remove those.
In practice this is very inefficient.

15 Rules versus Trees
Decision tree learning is a divide and conquer approach: top-down, looking for attributes that achieve useful splits in the data.
Trees can be converted into sets of rules:

If you then Tutor
If not(you) and Imperative then Tutor
If not(you) and not(Imperative) and good then Tutor
If not(good) and WordCount > 2 and not(all-I) then Tutor
If all-I and not(So) then Student
If all-I and So then Tutor
If not(good) and WordCount <= 2 and not(on) then Student
If on then Tutor

16 Ordered Rules More Compact
If you then Tutor
If not(you) and Imperative then Tutor
else if good then Tutor
else if WordCount > 2 then
  if not(all-I) then Tutor
else if …..
If rules are applied in order, then you can use an if-then-else structure.
But then you're back to a tree representation.

17 Advantages of Classification Rules
If a and b then x
If c and d then x
Decision trees can't easily represent disjunctions.
Sometimes subtrees have to be repeated – this introduces a greater chance of error.
So rules are a more powerful representation, but more power can lead to more over-fitting!!!
(figure: decision tree over a, b, c, d predicting x)

18 Advantages of Classification Rules
If a and b then x
If c and d then x
Classification rules express disjunctions more concisely.
Decision lists are meant to be applied in order (so context is assumed).
Easy to encode "else" conditions.
The repeated-subtree effect is sketched below.
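
To make the repetition concrete, here is illustrative pseudocode (not from the slides) for the two rules encoded as a single decision tree over binary attributes; the subtree that tests c and d has to appear twice. The predict() helper is hypothetical:

```python
# One possible tree encoding of:  if a and b then x;  if c and d then x
if a:
    if b:
        predict("x")
    else:
        if c:                      # c-and-d subtree, first copy
            predict("x") if d else predict("not-x")
        else:
            predict("not-x")
else:
    if c:                          # the same c-and-d subtree, repeated
        predict("x") if d else predict("not-x")
    else:
        predict("not-x")
```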

19 Rules Versus Trees
Because both algorithms make one selection at a time, they will prefer different choices since the criteria are different.
Rule learning is more prone to over-fitting:
  Rule representations have more power (e.g., disjunctions)
  Rule learning algorithms tend to make decisions based on more local information
Even when Information Gain is used for choosing between options, the set of options considered is different.

20 Pruning Rules
Just as trees are grown and then pruned, rules are also grown and then pruned.
Rather than one growth stage followed by one pruning stage, you alternate growth and pruning.
With rules, only reduced error pruning is used; trees can be pruned using reduced error pruning or by estimating error on the training data using confidence intervals.
Rules have only one pruning operation; trees have two pruning operations.

21 Rule Learning Manipulations
Pruning paradigms: how would this rule perform over the whole set by itself, versus how would this rule perform after other rules have fired?
Do you start with a default? If so, what is that default?
Pruning rule: remove the condition whose removal improves the performance of the rule the most over a validation set (or remove conditions in reverse order). A sketch follows below.
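
A minimal sketch of that pruning step, again using the hypothetical dict-based rule representation from the covering sketch:

```python
def prune_rule(rule, target, validation):
    """Greedily drop conditions while doing so improves the rule's
    accuracy on a held-out validation set. rule maps attr -> value."""
    def accuracy(r):
        covered = [y for x, y in validation
                   if all(x.get(a) == v for a, v in r.items())]
        if not covered:
            return 0.0
        return sum(1 for y in covered if y == target) / len(covered)

    rule = dict(rule)
    improved = True
    while improved and rule:
        improved = False
        base = accuracy(rule)
        for a in list(rule):
            candidate = {k: v for k, v in rule.items() if k != a}
            if accuracy(candidate) > base:   # removal helps on validation
                rule, improved = candidate, True
                break
    return rule  # stops when no single-condition removal helps
```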

22 Tic Tac Toe

23 Tic Tac Toe
(board: O X X / X O O / X O X)

24 Tic Tac Toe: Remember this?
Decision Trees: .67 Kappa
SMO: .96 Kappa
Naïve Bayes: .28 Kappa
(board: O X X / X O O / X O X)
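
As a reminder of what these numbers mean: kappa is agreement with the gold labels corrected for chance agreement. A quick sketch of the calculation; the confusion matrix values are made up for illustration:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = actual class, columns = predicted class)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    expected = sum(sum(confusion[i]) * sum(row[i] for row in confusion)
                   for i in range(len(confusion))) / total ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical 2-class confusion matrix, just to show the arithmetic:
print(round(cohens_kappa([[40, 10], [5, 45]]), 2))  # 0.7
```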

25 Decision Trees

26 How do you think the rule model would be different?
(figure: the decision tree model from the previous slide)

27 Rules from JRIP
.95 Kappa! * When will it fail?

28 Optimization

29 Why Trees and Rules are Sometimes Counter Intuitive
All machine learning algorithms are designed to avoid doing an exhaustive search of the vector space.
In order to reduce search time, they make simplifying assumptions that sometimes lead to counter-intuitive results.
We have talked about some variations on basic tree and rule learning; these affect which options are visible at each point in the search.

30 Locally Optimal Solutions

31 Why Trees and Rules are Sometimes Counter Intuitive
The simplifying assumptions bias the search to favor certain regions of the hypothesis space.
Different algorithms have different biases, so they look at a different subset of solutions.
When this bias leads the algorithm to an optimal or near optimal solution, it is a useful bias.
Whether it does depends largely on quirky characteristics of your data set.

32 Why Trees and Rules are Sometimes Counter Intuitive
Simplifying assumptions increase efficiency but may decrease the quality of the derived solutions ("tunnel vision").
Spurious regularities in the data lead to unpredictable results.
Tuning the parameters of an algorithm changes its bias (e.g., binary splits vs. not).
You have to guard against overfitting!

33 Optimizing Parameter Settings
Use a modified form of cross-validation: within each fold, hold out part of the training data as a validation set.
Iterate over settings, compare performance over the validation set, and pick the optimal setting; then test on the test set.
Still N folds, but each fold has less training data than with standard cross-validation.
Or you can have a hold-out validation set that you use for all folds.
A sketch of the procedure follows below.
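
A minimal sketch of the fold structure, assuming caller-supplied train(setting, examples) and score(model, examples) helpers rather than any particular toolkit:

```python
import random

def tuned_cross_validation(data, settings, train, score, n_folds=5, seed=0):
    """For each outer fold: tune on a validation split of the training
    data, then evaluate the chosen setting on the untouched test fold."""
    data = list(data)
    random.Random(seed).shuffle(data)
    folds = [data[i::n_folds] for i in range(n_folds)]
    outer_scores = []
    for i in range(n_folds):
        test = folds[i]
        rest = [x for j, fold in enumerate(folds) if j != i for x in fold]
        # Hold out the last fifth of the remaining data for validation.
        cut = len(rest) * 4 // 5
        train_set, validation = rest[:cut], rest[cut:]
        # Tune: pick the setting that scores best on the validation set...
        best = max(settings,
                   key=lambda s: score(train(s, train_set), validation))
        # ...then estimate generalization on the test fold.
        outer_scores.append(score(train(best, train_set), test))
    return sum(outer_scores) / n_folds
```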

34 Optimizing Parameter Settings
This approach assumes that you want to estimate the generalization you will get from your learning and tuning approach together.
If you just want to know the best performance you can get on *this* set by tuning, you can just use standard cross-validation.

35 Take Home Message
Tree Based and Rule Based Learners are similar:
  Rules are readable
  Greedy algorithms
  Locally optimal solution
Tree Based and Rule Based Learners are different:
  Information gain versus Accuracy
  Representational power wrt disjunctions

