
1 Evaluating Results of Learning Blaž Zupan www.ailab.si/blaz/predavanja/uisp

2 Evaluating ML Results
Criteria
–Accuracy of induced concepts (predictive accuracy)
  accuracy = probability of correct classification
  error rate = 1 - accuracy
–Comprehensibility
Both are important
–but comprehensibility is hard to measure
–accuracy is usually studied
Kinds of accuracy
–Accuracy on learning data
–Accuracy on new data (much more important)
–Major topic: estimating accuracy on new data

3 Usual Procedure to Estimate Accuracy
[Diagram: all available data is split into a learning set (training set) and a test set (holdout set); the learning system induces a classifier from the learning set, and the classifier's accuracy is measured on the test set. Labels: internal validation, external validation.]
Main idea: accuracy on test data approximates accuracy on new data
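A minimal sketch of this holdout procedure, not part of the original slides: it assumes a feature matrix X and label vector y are already available and uses scikit-learn with a decision tree purely as an illustrative learning system.

```python
# Holdout (train/test split) evaluation sketch; X, y and the choice of
# DecisionTreeClassifier are illustrative assumptions, not from the slides.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def holdout_accuracy(X, y, test_size=0.3, seed=0):
    # Split all available data into a learning set and a test (holdout) set.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)
    # Induce a classifier on the learning set only.
    model = DecisionTreeClassifier(random_state=seed).fit(X_train, y_train)
    # Accuracy on the holdout set approximates accuracy on new data.
    return accuracy_score(y_test, model.predict(X_test))
```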

4 Problems
Common mistake
–estimating accuracy on new data by accuracy on the learning data (resubstitution accuracy)
Size of the data set
–hopefully the test set is representative of new data
–no problem when available data abounds
Scarce data: major problem
–much data is needed for successful learning
–much data is needed for a reliable accuracy estimate

5 Estimating Accuracy from Test Set
Consider
–The induced classifier classifies a = 75% of test cases correctly
–So we expect accuracy on new data to be close to 75%.
But: How close? How confident are we in this estimate? (this depends on the size of the test data set)

6 Confidence Intervals
Can be used to assess the confidence for our accuracy estimates
[Figure: a 95% confidence interval drawn around the success rate on test data, on a 0%–100% scale.]
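A minimal sketch of such an interval, not from the slides: it uses the normal approximation to the binomial for the accuracy of a classifier tested on n cases (other intervals, e.g. Wilson, are equally common).

```python
# 95% confidence interval for an accuracy estimate from a test set,
# using the normal approximation (an assumption; z = 1.96 for 95%).
import math

def accuracy_confidence_interval(correct, n, z=1.96):
    a = correct / n                         # observed accuracy on the test set
    half = z * math.sqrt(a * (1 - a) / n)   # half-width of the interval
    return max(0.0, a - half), min(1.0, a + half)

# Example: 75 of 100 test cases correct -> roughly (0.665, 0.835);
# with 750 of 1000 correct the interval narrows to roughly (0.723, 0.777).
print(accuracy_confidence_interval(75, 100))
print(accuracy_confidence_interval(750, 1000))
```

The example illustrates the slide's point: the same observed accuracy gives a much tighter interval when the test set is larger.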

7 Evaluation Schemes (sampling methods)

8 3-Fold Cross Validation
[Diagram: the dataset is reordered arbitrarily and split into three folds; train & test #1, train & test #2, train & test #3, with each fold serving once as the test set and the remaining data as the training set.]
Evaluate statistics for each iteration and then compute the average

9 k-Fold Cross Validation
Split the data into k sets of approximately equal size (and class distribution, if stratified)
For i = 1 to k:
–Use the i-th subset for testing and the remaining (k-1) subsets for training
Compute the average accuracy
k-fold CV can be repeated several times, say, 100 times
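A short sketch of stratified k-fold cross-validation, not from the slides: X and y are assumed to be NumPy arrays, and the decision tree is only a placeholder learner.

```python
# Stratified k-fold cross-validation sketch; learner and data names are
# illustrative assumptions.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def kfold_accuracy(X, y, k=10, seed=0):
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        # Train on k-1 folds, test on the held-out fold.
        model = DecisionTreeClassifier(random_state=seed).fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(scores))   # average accuracy over the k folds
```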

10 Random Sampling (70/30)
Randomly split the data into, say,
–70% of the data for training
–30% of the data for testing
Learn on the training data, test on the test data
Repeat the procedure, say, 100 times, and compute the average accuracy and its confidence intervals
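A sketch of this repeated 70/30 sampling, not part of the slides: it averages accuracy over the repetitions and reports a simple empirical 95% interval (one of several reasonable ways to summarise the spread).

```python
# Repeated 70/30 random sampling sketch; X, y and the decision tree learner
# are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def repeated_holdout(X, y, repeats=100, test_size=0.30):
    scores = []
    for seed in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=seed, stratify=y)
        model = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
        scores.append(accuracy_score(y_te, model.predict(X_te)))
    scores = np.array(scores)
    low, high = np.percentile(scores, [2.5, 97.5])   # empirical 95% interval
    return scores.mean(), (low, high)
```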

11 Statistics: calibration, discrimination

12 Calibration and Discrimination
Calibration
–how accurate are the probabilities assigned by the induced model
–classification accuracy, sensitivity, specificity, ...
Discrimination
–how well the model distinguishes between positive and negative cases
–area under ROC

13 Test Statistics: Contingency Table of Classification Results

                     actual positive       actual negative
predicted positive   true positive (TP)    false positive (FP)
predicted negative   false negative (FN)   true negative (TN)

14 Classification Accuracy
CA = (TP + TN) / N
Proportion of correctly classified examples

15 Sensitivity
Sensitivity = TP / (TP + FN)
Proportion of correctly detected positive examples
In medicine (+, -: presence and absence of a disease):
–the chance that our model correctly identifies a patient with a disease

16 Specificity
Specificity = TN / (FP + TN)
Proportion of correctly detected negative examples
In medicine:
–the chance that our model correctly identifies a patient without a disease
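A small sketch tying the last three slides together, not from the original deck: it computes classification accuracy, sensitivity and specificity directly from the contingency-table counts (the example counts are invented).

```python
# Accuracy, sensitivity and specificity from contingency-table counts.
def classification_statistics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    return {
        "accuracy":    (tp + tn) / n,    # CA = (TP + TN) / N
        "sensitivity": tp / (tp + fn),   # correctly detected positives
        "specificity": tn / (fp + tn),   # correctly detected negatives
    }

# Hypothetical counts: 40 TP, 10 FP, 5 FN, 45 TN
print(classification_statistics(40, 10, 5, 45))
# {'accuracy': 0.85, 'sensitivity': 0.888..., 'specificity': 0.818...}
```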

17 Other Statistics From DL Sackett et al.: Evidence-Based Medicine, Churchill-Livingstone, 2000.

18 ROC Curves
ROC = Receiver Operating Characteristics
Used since the 1970s to evaluate medical prognostic models
Recently popular within ML [rediscovery?]
[Figure: ROC plot with 1-specificity (FP rate) on the x-axis and sensitivity (TP rate) on the y-axis, 0% to 100%; curves labelled "a very good model" and "not so good model".]

19 ROC Curve
[Figure: ROC curve with classification thresholds marked along it, from T = 0 through T = 0.5 to T = ∞.]

20 ROC Curve (Recipe)
1. Draw a grid:
–step 1/N horizontally
–step 1/F vertically
2. Sort results by descending predicted probabilities
3. Start at (0,0)
4. From the table, select the top row(s) with the highest probability
5. Let the selected rows include p positive and n negative examples: move to a point p grid points up and n right
6. Remove the selected rows
7. If any rows remain, go to 4
A sketch implementing this recipe follows below.
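A minimal sketch of the recipe, not from the slides: it sorts examples by descending predicted probability of the positive class and steps up for positives and right for negatives, handling ties as a single diagonal step. The function and variable names are illustrative.

```python
# ROC points following the recipe above: up by 1/P per positive, right by
# 1/N per negative, with equal-probability rows taken together.
def roc_points(y_true, prob_positive):
    P = sum(1 for y in y_true if y == 1)     # number of positive examples
    N = len(y_true) - P                      # number of negative examples
    order = sorted(zip(prob_positive, y_true), key=lambda t: -t[0])
    points, tp, fp = [(0.0, 0.0)], 0, 0      # start at (0, 0)
    i = 0
    while i < len(order):
        # take all rows sharing the current (highest remaining) probability
        j = i
        while j < len(order) and order[j][0] == order[i][0]:
            j += 1
        tp += sum(1 for _, y in order[i:j] if y == 1)   # p positives: move up
        fp += sum(1 for _, y in order[i:j] if y == 0)   # n negatives: move right
        points.append((fp / N, tp / P))                 # (FP rate, TP rate)
        i = j
    return points

# Example: a perfectly ranked set traces the left and top edges of the plot.
print(roc_points([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))
```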

21–26 ROC Curve (Recipe)
[Slides 21–26 are image-only and contain no transcribed text.]

27 Area Under ROC
[Figure: ROC plot with FP rate on the x-axis and TP rate on the y-axis, 0% to 100%.]
For every negative example we sum up the number of positive examples with a higher estimate, and normalize this score by the product of the numbers of positive and negative examples.
A_ROC = P[ P+(positive example) > P+(negative example) ]
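A sketch of this pairwise definition, not part of the slides: for every (positive, negative) pair it counts how often the positive example receives the higher predicted probability (ties counted as half) and divides by the number of pairs.

```python
# AUC by pairwise counting, as described above; names are illustrative.
def area_under_roc(y_true, prob_positive):
    positives = [p for p, y in zip(prob_positive, y_true) if y == 1]
    negatives = [p for p, y in zip(prob_positive, y_true) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in positives for pn in negatives)
    return wins / (len(positives) * len(negatives))

# Example: a perfect ranking gives 1.0; a random ranking is about 0.5.
print(area_under_roc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))   # 1.0
```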

28 Area Under ROC
Is expected to range from 0.5 to 1.0
The score is not affected by class distributions
Characteristic landmarks
–0.5: random classifier
–below 0.7: poor classification
–0.7 to 0.8: ok, reasonable classification
–0.8 to 0.9: here is where very good predictive models start

29 Final Thoughts
Never test on the learning set
Use some sampling procedure for testing
At the end, evaluate both
–predictive performance
–semantic content
Bottom line: good models are those that are useful in practice

