COSC 4368 Intro Supervised Learning Organization: Presentation transcript

1 COSC 4368 Intro Supervised Learning Organization
- Introduction to Machine Learning
- Reinforcement Learning
- Introduction to Supervised Classification: Basics, Overfitting, Model Evaluation
- Neural Networks
- Support Vector Machines
- Deep Learning (brief)

2 3a. Supervised Learning Basics
Given a collection of records (the training set), where each record contains a set of attributes and one of the attributes is the class, find a model for the class attribute as a function of the values of the other attributes. Goal: previously unseen records should be assigned a class as accurately as possible. A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets: the training set is used to build the model and the test set is used to validate it.
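A minimal sketch of this train/test protocol using scikit-learn; the dataset and the choice of classifier are stand-ins for illustration, not from the slides:

```python
# Minimal sketch of the train/test protocol with scikit-learn.
# The dataset (load_breast_cancer) is a stand-in for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# Build the model on the training set only ...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# ... and estimate accuracy on records the model has never seen.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```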

3 Illustrating Classification Task

4 Examples of Classification Task
- Predicting tumor cells as benign or malignant
- Classifying credit card transactions as legitimate or fraudulent
- Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil
- Categorizing news stories as finance, weather, entertainment, sports, etc.

5 Classification Techniques
- Decision Tree based Methods (very briefly covered)
- Rule-based Methods
- Memory based reasoning, instance-based learning
- Neural Networks (covered)
- Naïve Bayes and Bayesian Belief Networks
- Support Vector Machines (covered)
- Ensemble Methods

6 Examples of Decision Trees
The slide shows the training data (categorical attributes Refund and MarSt, the continuous attribute TaxInc, and the class Cheat) next to the model, a decision tree over these splitting attributes:

Refund?
  Yes -> NO
  No  -> MarSt?
           Married          -> NO
           Single, Divorced -> TaxInc?
                                 < 80K  -> NO
                                 >= 80K -> YES
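The same tree can be written as plain Python; the dictionary record format is an assumption for illustration:

```python
# The decision tree above as a plain-Python classifier.
# The record format (a dict keyed by attribute name) is an assumption.
def classify(record):
    if record["Refund"] == "Yes":
        return "No"                     # Refund = Yes -> Cheat = No
    if record["MarSt"] == "Married":
        return "No"                     # Married      -> Cheat = No
    # Single or Divorced: split on taxable income at 80K
    return "No" if record["TaxInc"] < 80_000 else "Yes"
```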

7 Decision Tree Classification Task

8 Another Example of Decision Tree
The slide shows a second tree over the same training data (categorical attributes MarSt and Refund, the continuous attribute TaxInc, and the class):

MarSt?
  Married          -> NO
  Single, Divorced -> Refund?
                        Yes -> NO
                        No  -> TaxInc?
                                 < 80K  -> NO
                                 >= 80K -> YES

There could be more than one tree that fits the same data!

9-14 Apply Model to Test Data
Start from the root of the tree and route the test record down one split at a time: Refund = No -> MarSt = Married -> assign Cheat to "No". (The original slides 9-14 step through this traversal one node per slide.)
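Tracing this test record through the classify() sketch above (the attribute values are the ones shown on the slide):

```python
# The test record from slides 9-14: Refund = No, Married, income 80K.
record = {"Refund": "No", "MarSt": "Married", "TaxInc": 80_000}
print(classify(record))   # -> "No": Cheat is assigned "No"
```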

15 Decision Tree Classification Task

16 3b. Underfitting and Overfitting
500 circular and 500 triangular data points.
Circular points: 0.5 <= sqrt(x1^2 + x2^2) <= 1
Triangular points: sqrt(x1^2 + x2^2) < 0.5 or sqrt(x1^2 + x2^2) > 1
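A sketch of how such a dataset could be generated; the sampling box and the rejection-sampling approach are assumptions, since the slide only specifies the two regions:

```python
# Generate the slide's two classes by rejection sampling:
# "circular" points lie in the ring 0.5 <= r <= 1,
# "triangular" points lie outside it (r < 0.5 or r > 1).
import numpy as np

rng = np.random.default_rng(0)

def sample(n, keep):
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1.5, 1.5, size=2)   # candidate point in a box
        if keep(np.hypot(x[0], x[1])):       # r = sqrt(x1^2 + x2^2)
            pts.append(x)
    return np.array(pts)

circles   = sample(500, lambda r: 0.5 <= r <= 1)
triangles = sample(500, lambda r: r < 0.5 or r > 1)
```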

17 Underfitting and Overfitting
Complexity of a decision tree := the number of nodes it uses.
The complexity of the classification function to be learned determines how complex a tree is needed.
Underfitting: when the model is too simple, both training and test errors are large.

18 Overfitting due to Noise
Decision boundary is distorted by noise point

19 Overfitting due to Insufficient Examples
Lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region. An insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.

20 Notes on Overfitting
- Overfitting results in decision trees/models that are more complex than necessary.
- More complex models tend to be more sensitive to noise, missing examples, etc. If you use complex models, many representative training examples are needed to avoid overfitting; accordingly, one way to reduce overfitting is to use more training examples.
- If you have only a few training examples, use less complex models! Models that are less sensitive to minor changes in the training data are less prone to overfitting.
- Training error no longer provides a good estimate of how well the tree will perform on previously unseen records, so we need new ways of estimating errors (see the sketch after this list).
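A small experiment illustrating these notes; the dataset and depth schedule are stand-ins chosen for illustration:

```python
# As tree depth (model complexity) grows, training error keeps falling
# while test error eventually rises again: overfitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (1, 3, 5, 10, None):    # None = grow until leaves are pure
    t = DecisionTreeClassifier(max_depth=depth, random_state=0)
    t.fit(X_tr, y_tr)
    print(f"depth={depth}: train error={1 - t.score(X_tr, y_tr):.3f}, "
          f"test error={1 - t.score(X_te, y_te):.3f}")
```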

21 Occam's Razor
Given two models with similar generalization errors, one should prefer the simpler model over the more complex one. For complex models, there is a greater chance that the model was fitted accidentally to noise in the data; simple models are usually more robust with respect to noise. Therefore, one should include model complexity when evaluating a model.

22 3c. Model Evaluation
- Metrics for Performance Evaluation: how to evaluate the performance of a model?
- Methods for Performance Evaluation: how to obtain reliable estimates?
- Learning Curves

23 Metrics for Performance Evaluation
Focus on the predictive capability of a model rather than how long it takes to classify or build models, scalability, etc.
Confusion Matrix:

                        PREDICTED CLASS
                        Class=Yes   Class=No
ACTUAL    Class=Yes     a (TP)      b (FN)
CLASS     Class=No      c (FP)      d (TN)

a: TP (true positive), b: FN (false negative), c: FP (false positive), d: TN (true negative)
Important: if there are problems with obtaining a "good" classifier, inspect the confusion matrix!
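Computing a confusion matrix in this a/b/c/d layout with scikit-learn; the label vectors are made-up examples:

```python
# Confusion matrix in the slide's layout: rows = actual, columns = predicted.
from sklearn.metrics import confusion_matrix

y_true = ["Yes", "Yes", "No", "No", "Yes", "No"]
y_pred = ["Yes", "No",  "No", "Yes", "Yes", "No"]

# labels=["Yes", "No"] puts Class=Yes in the first row/column, so the
# result reads [[a, b], [c, d]] = [[TP, FN], [FP, TN]].
cm = confusion_matrix(y_true, y_pred, labels=["Yes", "No"])
print(cm)   # [[2 1]
            #  [1 2]]
```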

24 Metrics for Performance Evaluation…
Most widely-used metric: accuracy, the fraction of records that are classified correctly. In terms of the confusion matrix above:

Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)
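Continuing from the confusion-matrix snippet above:

```python
# Accuracy from the confusion matrix cm computed above.
(a, b), (c, d) = cm
print("accuracy:", (a + d) / (a + b + c + d))   # (TP + TN) / N
```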

25 Cost Matrix
C(i|j): cost of misclassifying a class j example as class i

                        PREDICTED CLASS
C(i|j)                  Class=Yes    Class=No
ACTUAL    Class=Yes     C(Yes|Yes)   C(No|Yes)
CLASS     Class=No      C(Yes|No)    C(No|No)

26 Computing Cost of Classification
Cost matrix C(i|j):
             PREDICTED +   PREDICTED -
ACTUAL +         -1            100
ACTUAL -          1              0

Model M1:
             PREDICTED +   PREDICTED -
ACTUAL +        150             40
ACTUAL -         60            250
Accuracy = 80%, Cost = 3910

Model M2:
             PREDICTED +   PREDICTED -
ACTUAL +        250             45
ACTUAL -          5            200
Accuracy = 90%, Cost = 4255
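These figures can be reproduced directly; a sketch with NumPy, using the matrices as laid out above:

```python
# Cost = sum over confusion-matrix cells of count * C(prediction | actual).
import numpy as np

cost = np.array([[-1, 100],     # actual +: C(+|+), C(-|+)
                 [  1,   0]])   # actual -: C(+|-), C(-|-)

m1 = np.array([[150,  40],
               [ 60, 250]])
m2 = np.array([[250,  45],
               [  5, 200]])

for name, m in (("M1", m1), ("M2", m2)):
    acc = np.trace(m) / m.sum()                # (a + d) / N
    print(name, "accuracy:", acc, "cost:", (m * cost).sum())
# M1 accuracy: 0.8 cost: 3910
# M2 accuracy: 0.9 cost: 4255
```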

27 Cost vs Accuracy
Count matrix:
                        PREDICTED CLASS
                        Class=Yes   Class=No
ACTUAL    Class=Yes     a           b
CLASS     Class=No      c           d

Cost matrix:
                        PREDICTED CLASS
                        Class=Yes   Class=No
ACTUAL    Class=Yes     p           q
CLASS     Class=No      q           p

N = a + b + c + d
Accuracy = (a + d) / N
Cost = p(a + d) + q(b + c)
     = p(a + d) + q(N - a - d)
     = qN - (q - p)(a + d)
     = N [q - (q - p) * (a + d)/N]
     = N [q - (q - p) * Accuracy]

Accuracy is proportional to cost if
1. C(Yes|No) = C(No|Yes) = q
2. C(Yes|Yes) = C(No|No) = p
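A quick numeric check of the identity, with hypothetical values for p, q, and the counts:

```python
# Verify Cost = N * [q - (q - p) * Accuracy] on made-up numbers.
p, q = 1, 5                          # hypothetical diagonal / off-diagonal costs
a, b, c, d = 40, 10, 5, 45           # made-up confusion-matrix counts
N = a + b + c + d
accuracy = (a + d) / N
direct  = p * (a + d) + q * (b + c)  # cost computed cell by cell
derived = N * (q - (q - p) * accuracy)
assert abs(direct - derived) < 1e-9  # both equal 160 here
```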

28 Methods for Estimating Model Accuracy
- Holdout: reserve 2/3 for training and 1/3 for testing
- Random subsampling: repeated holdout
- k-fold cross validation: partition the data into k disjoint subsets; train on k-1 partitions, test on the remaining one
- Leave-one-out: k = n
- Class-stratified k-fold cross validation (most popular! see the sketch after this list)
- Stratified sampling: oversampling vs. undersampling
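A sketch of stratified 10-fold cross validation with scikit-learn; the dataset and classifier are stand-ins:

```python
# Stratified k-fold CV: each fold preserves the class proportions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print("accuracy per fold:", scores)
print("mean accuracy over 10 folds:", scores.mean())
```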

29 Learning Curve
A learning curve shows how accuracy changes with varying sample size. It requires a sampling schedule for creating the curve, e.g.:
- Arithmetic sampling (Langley et al.)
- Geometric sampling (Provost et al.)
Effect of small sample size: bias in the estimate, variance of the estimate.
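A sketch of computing a learning curve with scikit-learn, using a geometric sampling schedule; the dataset and model are stand-ins:

```python
# Cross-validated accuracy as a function of training-set size,
# with a geometric schedule of sizes (cf. Provost et al.).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
sizes, train_scores, test_scores = learning_curve(
    DecisionTreeClassifier(random_state=0), X, y, cv=5,
    train_sizes=np.geomspace(0.1, 1.0, 5))      # geometric schedule
for n, s in zip(sizes, test_scores.mean(axis=1)):
    print(f"{n:4d} training examples -> cv accuracy {s:.3f}")
```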

