
Machine Learning in Practice Lecture 3 Carolyn Penstein Rosé Language Technologies Institute/ Human-Computer Interaction Institute.


1 Machine Learning in Practice Lecture 3 Carolyn Penstein Rosé Language Technologies Institute/ Human-Computer Interaction Institute

2 Plan for Today
  Announcements: Assignment 2, Quiz 1
  Weka helpful hints
  Topic of the day: Input and Output
    More on cross-validation
    ARFF format

3 Weka Helpful Hints

4 Increase Heap Size
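The slide's advice in command form: when Weka runs out of memory on larger data sets, the usual fix is to raise the Java maximum heap size at launch. A minimal sketch, assuming a 2 GB limit is wanted; the path to weka.jar depends on your installation.

```shell
# Launch Weka with a 2 GB maximum heap via the JVM -Xmx option.
# Adjust the path to weka.jar for your installation.
java -Xmx2048m -jar weka.jar
```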

5 Weka Helpful Hint: Documentation!! Click on the More button!

6 Output Predictions Option

7 Important note: because Weka randomizes the data for cross-validation, the only circumstance under which you can match the instance numbers to positions in your data is when you use separate train and test sets, so that the order is preserved.

8 View Classifier Errors

9 Input and Output

10 Representations
  Concept: the rule you want to learn
  Instance: one data point from your training or testing data (a row in the table)
  Attribute: one of the features an instance is composed of (a column in the table)
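The ARFF format listed in the plan encodes exactly this table view: one @attribute declaration per column, and one data row per instance. A minimal sketch; the relation and attribute names below are invented for illustration (they echo the math-problems example later in the lecture).

```
@relation math_time

@attribute gender {male,female}
@attribute age numeric
@attribute minutes_per_day numeric

@data
male,21,45
female,19,60
```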

11 Numeric versus Nominal Attributes
  What kind of reasoning does your representation enable?
  Numeric attributes allow instances to be ordered
  Numeric attributes allow you to measure distance between instances
  Sometimes numeric attributes make too fine-grained a distinction
  [Figure: number line showing the values .2 .25 .28 .31 .35 .45 .47 .52 .6 .63]

12 Numeric versus Nominal Attributes
  [Figure: number line showing the values .2 .25 .28 .31 .35 .45 .47 .52 .6 .63]
  Numeric attributes can be discretized into nominal values
    Then you lose ordering and distance
    Another option is applying a function that maps a range of values into a single numeric attribute
  Nominal attributes can be mapped onto numbers
    e.g., decide that blue=1 and green=2
    But are inferences made on that basis valid?

13 Numeric versus Nominal Attributes
  [Figure: the same number line with the values discretized to .2 .3 .5 .6]
  Numeric attributes can be discretized into nominal values
    Then you lose ordering and distance
    Another option is applying a function that maps a range of values into a single numeric attribute
  Nominal attributes can be mapped onto numbers
    e.g., decide that blue=1 and green=2
    But are inferences made on that basis valid?
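Both mappings from the slide can be sketched in a few lines of Python. This is an illustration only: the cut points and the color codes are invented, not taken from the lecture data.

```python
# Discretizing a numeric attribute into nominal bins (assumed cut
# points .3 and .5), and mapping nominal values onto numbers.
values = [.2, .25, .28, .31, .35, .45, .47, .52, .6, .63]

def discretize(x, cut_points=(.3, .5)):
    """Map a numeric value to a nominal bin label."""
    if x < cut_points[0]:
        return "low"
    elif x < cut_points[1]:
        return "mid"
    return "high"

bins = [discretize(v) for v in values]
# Ordering and distance are now lost: "low" and "high" are just labels.

# Mapping nominal to numeric, e.g. blue=1 and green=2, invites a
# learner to treat green as "greater than" blue -- an inference that
# may not be valid for the underlying concept.
color_codes = {"blue": 1, "green": 2}
```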

14 Example!
  Problem: learn a rule that predicts how much time a person spends doing math problems each day
  Attributes: you know gender, age, socio-economic status of parents, and chosen field, if any
  How would you represent age, and why?
  What would you expect the target rule to look like?

15 Styles of Learning
  Classification: learn rules from labeled instances that allow you to assign new instances to a class
  Association: look for relationships between features, not just rules that predict a class from an instance (more general)
  Clustering: look for instances that are similar (involves comparisons across multiple features)
  Numeric prediction: regression models

16 Food Web http://www.cas.psu.edu/DOCS/WEBCOURSE/WETLAND/WET1/identify.html

17 Food Web What else would be affected if wheat were to disappear?

18 Food Web How would you represent this data?

19 Food Web What would the learned rule look like?

22 Food Web What if you wanted a more general rule, i.e., Affects(Entity1, Entity2)?

24 Food Web What if you wanted a more general rule, i.e., Affects(Entity1, Entity2)? 122 rows altogether! Now let's look at the learned rule… Does it have to be this complicated?

27 Food Web What would your representation for Affects(Entity1, Entity2) look like?
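One way to realize the pairwise representation asked about above is to generate one instance per ordered pair of entities, labeled by whether a food-chain path connects them. A minimal sketch in Python; the four-entity web fragment below is made up for illustration, not the web pictured on the slides (which yields 122 rows).

```python
# Build a table of Affects(Entity1, Entity2) instances from an
# assumed "eats" relation: predator -> set of prey.
from itertools import permutations

eats = {
    "hawk": {"snake"},
    "snake": {"mouse"},
    "mouse": {"wheat"},
}
entities = ["hawk", "snake", "mouse", "wheat"]

def affects(e1, e2):
    """True if e2 is reachable from e1 by following the eats relation."""
    frontier = set(eats.get(e1, ()))
    seen = set()
    while frontier:
        prey = frontier.pop()
        if prey == e2:
            return True
        if prey not in seen:
            seen.add(prey)
            frontier |= set(eats.get(prey, ()))
    return False

# One row per ordered pair of distinct entities: (entity1, entity2, label).
instances = [(a, b, affects(a, b)) for a, b in permutations(entities, 2)]
```

With four entities this gives 12 rows; the class label is the thing being learned, while the attributes would describe the two entities in each pair.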

30 More on Cross-Validation

31 Cross-Validation Exercise: What is the same? What is different? What surprises you? [Figure: models learned on folds 1 through 5]
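As a reminder of what the exercise compares: in k-fold cross-validation each fold serves once as the test set while a model is retrained on the rest, so the five learned trees can differ from one another and from a tree trained on the whole set. A minimal sketch of the fold split in plain Python, not Weka's actual implementation; the data size and seed are arbitrary.

```python
# Partition instance indices into k folds after shuffling, mirroring
# the randomization Weka applies before folding.
import random

def k_fold_indices(n, k=5, seed=1):
    """Shuffle indices 0..n-1, then deal them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(20, k=5)
# To evaluate fold i: test on folds[i], train on all other folds.
```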

32 Compare Folds with Tree Trained on Whole Set [Figure: models learned on folds 1 through 5]

33 Train Versus Test [Figure: performance on training data vs. performance on testing data]

34 Which Model Do You Think Will Perform Best on the Test Set? [Figure: models learned on folds 1 through 5]

35 Fold 1

36 Fold 2

37 Fold 3

38 Fold 4

39 Fold 5

40 Total Performance What do you notice?

41 Total Performance: Average Kappa = .5
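Kappa corrects raw agreement for the agreement expected by chance. A minimal sketch of the computation; the confusion matrix in the example is invented, chosen so that it happens to yield the kappa of .5 reported above.

```python
# Cohen's kappa from a confusion matrix:
#   kappa = (observed agreement - chance agreement) / (1 - chance agreement)
def cohens_kappa(confusion):
    """confusion[i][j] = count of true class i predicted as class j."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Chance agreement: product of row and column marginals per class.
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / total ** 2
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa([[15, 5], [5, 15]])  # -> 0.5 for this invented matrix
```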

42 Starting to Think about Error Analysis
  Step 1: look at the confusion matrix
  Where are most of the errors occurring?
  What are possible explanations for systematic errors you see?
    Are the instances in the confusable classes too similar to each other? If so, how can we distinguish them?
    Are we paying attention to the wrong features?
    Are we missing features that would let us see commonalities within classes?
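Step 1 above can be sketched directly: tally (true, predicted) pairs and inspect the off-diagonal cells, which are the systematic errors. The labels below are invented for illustration.

```python
# Build a confusion tally from true and predicted labels, then pull
# out the error cells (true class != predicted class).
from collections import Counter

true_labels = ["a", "a", "b", "b", "b", "c"]
pred_labels = ["a", "b", "b", "b", "c", "c"]

confusion = Counter(zip(true_labels, pred_labels))
# Off-diagonal cells, e.g. ("b", "c"): b instances mislabeled as c.
errors = {pair: n for pair, n in confusion.items() if pair[0] != pair[1]}
```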

43 What went wrong on Fold 3? [Figure: models learned on folds 1 through 5]

44 [Figure: training set performance vs. testing set performance] Hypotheses?

45 What went wrong on Fold 3? [Figure: training set performance vs. testing set performance] Hypotheses?

46 What’s the difference?

47 Hypothesis: Problem with first cut

48 Some Examples

49 What do you conclude?

50 The problem with Fold 3 was probably just a sampling fluke: the distribution of classes differed between the train and test sets.
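A quick way to test the sampling-fluke hypothesis is to compare class counts across folds. A minimal sketch with invented labels, split into contiguous unshuffled folds to make the skew obvious; stratified cross-validation, which Weka uses by default, is designed to prevent exactly this mismatch.

```python
# Count class labels within each fold to spot distribution mismatches.
from collections import Counter

labels = ["pos"] * 12 + ["neg"] * 8  # invented, deliberately unshuffled

def fold_distributions(labels, k=5):
    """Class counts inside each contiguous fold of equal size."""
    size = len(labels) // k
    return [Counter(labels[i * size:(i + 1) * size]) for i in range(k)]

dists = fold_distributions(labels)
# Here the early folds are all "pos" and the late folds all "neg":
# a fold whose test distribution differs sharply from its training
# distribution will score poorly for reasons unrelated to the learner.
```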

