
1 Evaluation of learned models Kurt Driessens again with slides stolen from Evgueni Smirnov and Hendrik Blockeel

2 Overview Motivation Metrics for Classifier Evaluation Methods for Classifier Evaluation & Comparison Costs in Data Mining – Cost-Sensitive Classification and Learning – Lift Charts – ROC Curves

3 Motivation It is important to evaluate a classifier's generalization performance in order to: – Determine whether to employ the classifier. (For example: when learning the effectiveness of medical treatments from limited-size data, it is important to estimate the accuracy of the classifiers.) – Optimize the classifier. (For example: when post-pruning decision trees we must evaluate the accuracy of the decision trees at each pruning step.)

4 Model's Evaluation in the KDD Process [Diagram: Data → (Selection) → Target data → (Preprocessing & cleaning) → Processed data → (Transformation & feature selection) → Transformed data → (Data Mining) → Patterns → (Interpretation / Evaluation) → Knowledge]

5 How to Evaluate the Classifier's Generalization Performance? Assume that we test a classifier on some test set and derive at the end the following confusion matrix (rows are the actual classes, columns the predicted classes):

                Predicted Pos   Predicted Neg   Total
  Actual +           TP              FN            P
  Actual -           FP              TN            N

6 Metrics for Classifier's Evaluation Using the confusion matrix above (P actual positives, N actual negatives):
  Accuracy = (TP+TN)/(P+N)
  Error = (FP+FN)/(P+N)
  Precision = TP/(TP+FP)
  Recall / TP rate = TP/P
  FP rate = FP/N
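As a small illustration (not part of the original slides), here is a minimal Python sketch that computes these metrics from hypothetical confusion-matrix counts:

```python
# Minimal sketch: the metrics of slide 6 computed from made-up counts TP, FN, FP, TN.
def classification_metrics(tp, fn, fp, tn):
    p = tp + fn          # actual positives
    n = fp + tn          # actual negatives
    return {
        "accuracy":  (tp + tn) / (p + n),
        "error":     (fp + fn) / (p + n),
        "precision": tp / (tp + fp),
        "recall":    tp / p,     # TP rate
        "fp_rate":   fp / n,
    }

# Example with made-up counts (accuracy 0.7, precision 0.75, recall 0.6):
print(classification_metrics(tp=60, fn=40, fp=20, tn=80))
```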

7 How to Estimate the Metrics? We can use: – Training data; – Independent test data; – Hold-out method; – k-fold cross-validation method; – Leave-one-out method; – Bootstrap method; – And many more…

8 Estimation with Training Data The accuracy/error estimates on the training data are not good indicators of performance on future data. – Q: Why? – A: Because new data will probably not be exactly the same as the training data! The accuracy/error estimates on the training data measure the degree to which the classifier overfits.

9 Estimation with Independent Test Data Estimation with independent test data is used when we have plenty of data and there is a natural way of forming training and test data. For example: Quinlan in 1987 reported experiments in a medical domain for which the classifiers were trained on data from 1985 and tested on data from 1986.

10 Hold-out Method The hold-out method splits the data into training data and test data (usually 2/3 for training, 1/3 for testing). We then build a classifier using the training data and test it using the test data. The hold-out method is typically used when we have thousands of instances, including plenty from each class.
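A minimal sketch of the hold-out split, using only the Python standard library; `examples` and `labels` are hypothetical parallel lists:

```python
# Hold-out split sketch: shuffle, then take roughly 2/3 for training and 1/3 for testing.
import random

def holdout_split(examples, labels, train_fraction=2/3, seed=0):
    idx = list(range(len(examples)))
    random.Random(seed).shuffle(idx)
    cut = int(train_fraction * len(idx))
    train_idx, test_idx = idx[:cut], idx[cut:]
    train = ([examples[i] for i in train_idx], [labels[i] for i in train_idx])
    test  = ([examples[i] for i in test_idx],  [labels[i] for i in test_idx])
    return train, test
```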

11 Classification: Train, Validation, Test Split [Diagram: the data with known results is split into a training set and a validation set; the classifier builder learns on the training set and is evaluated on the validation set; the resulting model is then evaluated once more on a separate final test set.] The test data can't be used for parameter tuning!

12 Making the Most of the Data Once evaluation is complete, all the data can be used to build the final classifier. Generally, the larger the training data the better the classifier (but returns diminish). The larger the test data the more accurate the error estimate.

13 Stratification The holdout method reserves a certain amount for testing and uses the remainder for training. – Usually: one third for testing, the rest for training. For “unbalanced” datasets, samples might not be representative. – Few or none instances of some classes. Stratified sampling: advanced version of balancing the data. – Make sure that each class is represented with approximately equal proportions in both subsets.

14 Repeated Holdout Method In general, estimates can be made more reliable by repeated sampling – Each iteration, a certain proportion is randomly selected for training (possibly with stratification). – The error rates on the different iterations are averaged to yield an overall error rate. This is called the repeated holdout method.

15 Repeated Holdout Method, 2 Random sampling ≠ optimal – the different test sets overlap – we would like all our instances from the data to be tested at least once Can we prevent overlapping?

16 k-Fold Cross-Validation k-fold cross-validation avoids overlapping test sets: – First step: the data is split into k subsets of equal size; – Second step: each subset in turn is used for testing and the remainder for training. The subsets are stratified before the cross-validation. The estimates are averaged to yield an overall estimate.
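The procedure could be sketched roughly as follows (without stratification); `train_fn` and `error_fn` are hypothetical callables standing in for the learner and the error measure being evaluated:

```python
# k-fold cross-validation sketch: each fold is used once for testing, the rest for training.
import random

def k_fold_cv(data, train_fn, error_fn, k=10, seed=0):
    data = list(data)
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]          # k roughly equal-sized subsets
    errors = []
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = train_fn(train)
        errors.append(error_fn(model, test))
    return sum(errors) / k                          # averaged overall estimate
```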

17 More on Cross-Validation Standard method for evaluation: stratified 10-fold cross-validation. Why 10? Extensive experiments have shown that this is the best choice to get an accurate estimate. Stratification reduces the estimate's variance. Even better: repeated stratified cross-validation: – E.g. ten-fold cross-validation is repeated ten times and results are averaged (reduces the variance).

18 Leave-One-Out Cross-Validation Leave-One-Out is a particular form of cross-validation: – Set number of folds to number of training instances; – I.e., for n training instances, build classifier n times. Makes best use of the data. Involves no random sub-sampling. Very computationally expensive.

19 Leave-One-Out Cross-Validation and Stratification A disadvantage of Leave-One-Out-CV is that stratification is not possible: – It guarantees a non-stratified sample because there is only one instance in the test set! Extreme example - random dataset split equally into two classes: – Best inducer predicts majority class; – 50% accuracy on fresh data; – Leave-One-Out-CV estimate is 100% error!

20 Bootstrap Method Cross validation uses sampling without replacement: – The same instance, once selected, can not be selected again for a particular training/test set The bootstrap uses sampling with replacement to form the training set: – Sample a dataset of n instances n times with replacement to form a new dataset of n instances; – Use this data as the training set; – Use the instances from the original dataset that don’t occur in the new training set for testing.

21 Bootstrap Method The bootstrap method is also called the 0.632 bootstrap: – A particular instance has a probability of 1 - 1/n of not being picked in one draw; – Thus its probability of ending up in the test data (never being picked in n draws) is: (1 - 1/n)^n ≈ e^-1 ≈ 0.368 – This means the training data will contain approximately 63.2% of the instances and the test data will contain approximately 36.8% of the instances.

22 Estimating Error with the Bootstrap Method The error estimate on the test data will be very pessimistic because the classifier is trained on only approx. 63.2% of the instances. – Therefore, combine it with the training error: err = 0.632 · err_test + 0.368 · err_train – The training error gets less weight than the error on the test data. – Repeat the process several times with different replacement samples; average the results.
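Putting slides 20-22 together, a rough sketch of the 0.632 bootstrap could look as follows; `train_fn` and `error_fn` are again hypothetical placeholders:

```python
# 0.632 bootstrap sketch: resample n instances with replacement, train on the sample,
# test on the left-out instances, and combine the two errors with weights 0.632/0.368.
import random

def bootstrap_632(data, train_fn, error_fn, repetitions=30, seed=0):
    rng = random.Random(seed)
    data = list(data)
    n = len(data)
    estimates = []
    for _ in range(repetitions):
        chosen = [rng.randrange(n) for _ in range(n)]        # indices, with replacement
        train = [data[i] for i in chosen]
        test = [data[i] for i in range(n) if i not in set(chosen)]  # out-of-sample
        model = train_fn(train)
        estimates.append(0.632 * error_fn(model, test) + 0.368 * error_fn(model, train))
    return sum(estimates) / repetitions              # average over the repetitions
```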

23 Confidence Intervals for Performance Assume that the error error_S(h) of the classifier h estimated by 10-fold cross-validation is 25%. How close is the estimated error error_S(h) to the true error error_D(h)?

24 Confidence intervals (2) If the test data contain n examples, drawn independently of each other, with n ≥ 30, then with approximately N% probability error_D(h) lies in the interval

  error_S(h) ± z_N · sqrt( error_S(h) · (1 - error_S(h)) / n )

where
  N%:  50%   68%   80%   90%   95%   98%   99%
  z_N: 0.67  1.00  1.28  1.64  1.96  2.33  2.58
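For instance, a small sketch that evaluates this interval for the 25% error of slide 23; the test-set size n = 100 is a made-up example value:

```python
# Confidence interval sketch using the z_N table of slide 24.
from math import sqrt

Z = {50: 0.67, 68: 1.00, 80: 1.28, 90: 1.64, 95: 1.96, 98: 2.33, 99: 2.58}

def error_confidence_interval(error, n, confidence=95):
    half_width = Z[confidence] * sqrt(error * (1 - error) / n)
    return error - half_width, error + half_width

print(error_confidence_interval(0.25, n=100))   # roughly (0.165, 0.335)
```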

25 Comparison of hypotheses Given two hypotheses, which one has the lower true error? Statistical hypothesis test: – the null hypothesis claims that both are equally good; – if that claim is rejected, accept that one is better. Two cases: – compare 2 hypotheses on (possibly) different test sets; – compare 2 hypotheses on the same test set.

26 Different Test Sets To compare h_1 and h_2, estimate p_1 - p_2 from samples S_1 (giving estimate p'_1, of size n_1) and S_2 (giving p'_2, of size n_2): – if "very likely" p_1 - p_2 > 0 (i.e., the confidence interval is entirely to the right of 0): h_1 is better; – similarly, if < 0: h_2 is better; – otherwise, no difference is demonstrated. Formula for the confidence interval of the difference:

  (p'_1 - p'_2) ± z_N · sqrt( p'_1(1 - p'_1)/n_1 + p'_2(1 - p'_2)/n_2 )

27 Same Test Set When comparing hypotheses on the same data set, a more powerful procedure is possible: – it uses more information from the test; – the possible influence of easy/difficult examples is removed. More informative method: – for each single example, compare h_1 and h_2; – how often was h_1 correct and h_2 wrong on the same example, vs. the other way around? – McNemar's test.

28 McNemar's test Consider the table:

                h_2 correct   h_2 wrong
  h_1 correct        A             B
  h_1 wrong          C             D

If h_1 is equally good as h_2: – for each instance where h_1 and h_2 differ, the probability is 0.5 that either one is correct; – hence we expect B ≈ C ≈ (B+C)/2; – B and C follow a binomial (approximately normal) distribution. Reject equality if B deviates too much from (B+C)/2.
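A minimal sketch of the test statistic (with the usual continuity correction, which the slide does not spell out); B and C are the disagreement counts from the table above:

```python
# McNemar's test sketch: chi-square statistic with 1 degree of freedom for B vs. C.
def mcnemar_statistic(b, c):
    return (abs(b - c) - 1) ** 2 / (b + c)

# Reject "h1 and h2 are equally good" at the 5% level if the statistic exceeds 3.84.
print(mcnemar_statistic(b=10, c=0) > 3.84)   # True: reject equality
```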

29 Example comparison Suppose h_1 and h_2 are compared on 100 examples. Method with independent test sets: – 55-45 in favour of h_2 (out of 100); – not very convincing. Method with the same test set: – much more convincing: 10-0 in favour of h_2. So h_2 is clearly better than h_1, which might not be discovered using the "conservative" comparison.

30 Metric Evaluation Summary: 1. Use test sets and the hold-out method for "large" data; 2. Use the cross-validation method for "middle-sized" data; 3. Use the leave-one-out and bootstrap methods for small data. Don't use test data for parameter tuning - use separate validation data. Comparing two classifiers to each other can use more advanced statistics: t-test, McNemar, …

31 Drawbacks of Accuracy Evaluation based on accuracy is not always appropriate Shortcomings: – can sometimes be misleading – unstable when class distribution may change – assumes symmetric misclassification costs

32 1: Accuracy can be misleading E.g., "99% correct prediction": is this good? – Yes, if 50% "+" and 50% "-"; – No, if 1% "+" and 99% "-": always predicting "neg" already gives 99% accuracy. Accuracy is a relative measure: – it should be compared with the "base accuracy" of always predicting the majority class: base accuracy = max{P, N} / T; – even then, it may be misleading...

33 Which of these classifiers is best? [Figure: two classifiers on a dataset where all examples are negative, except a small blue region of positives. One classifier, IF green area THEN pos, is 92% correct; the other, IF false THEN pos (never predict positive), is 96% correct.]

34 An alternative measure: correlation Confusion matrix (note: + / - are actual values, pos / neg are predictions):

             +       -      Sum
  Pos        a       b      T_pos
  Neg        c       d      T_neg
  Sum       T_+     T_-       T

– e.g., correlation φ = (a·d - b·c) / sqrt(T_pos · T_neg · T_+ · T_-)
– close to 1: high correlation between predictions and classes; close to 0: no correlation (close to -1: predicting the opposite)
– Avoids the unintuitive results just mentioned.
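A small sketch of this correlation for a 2x2 confusion matrix with cells a, b, c, d as above; the example counts are made up:

```python
# Correlation (phi) sketch: (ad - bc) divided by the square root of the product
# of the four marginal totals.
from math import sqrt

def phi_correlation(a, b, c, d):
    t_pos, t_neg = a + b, c + d          # row sums (predictions)
    t_plus, t_minus = a + c, b + d       # column sums (actual values)
    return (a * d - b * c) / sqrt(t_pos * t_neg * t_plus * t_minus)

print(phi_correlation(a=40, b=10, c=10, d=40))   # 0.6
```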

35 2: Accuracy is sensitive to class distributions If the class distribution in the test set differs from that in the training set, accuracy will also differ. E.g.: – Suppose a classifier has TP = 0.8, TN = 0.6; – Tested on a test set with T_+/T = 0.5, T_-/T = 0.5: Acc = 0.7; – Employed in an environment with T_+/T = 0.3, T_-/T = 0.7: Acc = 0.66.

36 3: Accuracy ignores misclassification costs Accuracy ignores possibility of different misclassification costs – sometimes, incorrectly predicting "pos" costs more/less than incorrectly predicting "neg” E.g.: not treating an ill patient vs. treating a healthy patient refusing credit to client who would have paid back vs. assigning credit to client who won't pay back Need to distinguish probability of making different types of errors

37 Misclassification Costs Solution: distinguish "predictive accuracy" for different classes. – Acc: probability that some instance is classified correctly. – Decomposed into: TP, the "true positive" rate: (estimated) probability that a positive instance is classified correctly; TN, the "true negative" rate: (estimated) probability that a negative instance is classified correctly. – We also define FP = 1 - TN, the "false positive rate": estimated probability that a negative is classified as positive; analogously, FN = 1 - TP.

38 Misclassification Costs (2) Consider costs C_FP and C_FN = the cost of a false positive resp. a false negative. Expected cost of a single prediction:
  C = C_FP · P(pos|-) · P(-) + C_FN · P(neg|+) · P(+)
– estimated by C = C_FP · FP · T_-/T + C_FN · FN · T_+/T
Note: – Acc is a weighted average of TP and TN: Acc = TP · T_+/T + TN · T_-/T – C is not computable from Acc alone.
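As a sketch, the estimated expected cost could be computed like this; all numbers in the example call are made up:

```python
# Expected-cost sketch: C = C_FP * FP * T-/T + C_FN * FN * T+/T,
# where FP and FN are the false positive/negative *rates*.
def expected_cost(c_fp, c_fn, fp_rate, fn_rate, pos_fraction):
    neg_fraction = 1 - pos_fraction      # T-/T
    return c_fp * fp_rate * neg_fraction + c_fn * fn_rate * pos_fraction

# Cheap false alarms, expensive misses, 10% positives:
print(expected_cost(c_fp=1, c_fn=10, fp_rate=0.2, fn_rate=0.1, pos_fraction=0.1))  # 0.28
```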

39 Cost Sensitive Learning Simple methods for cost sensitive learning: – Resampling of instances according to costs; – Weighting of instances according to costs. In Weka, cost-sensitive classification and learning can be applied to any classifier using the meta scheme CostSensitiveClassifier.

40 Lift Charts In practice, decisions are usually made by comparing possible scenarios, taking different costs into account. E.g. a promotional mailout to 1,000,000 households: - If we mail to all households, we get a 0.1% response rate (1,000 respondents). - A data mining tool identifies (a) a subset of 100,000 households with a 0.4% response rate (400); or (b) a subset of 400,000 households with a 0.2% response rate (800). - Depending on the costs we can make the final decision using lift charts! - A lift chart allows a visual comparison.

41 Generating a Lift Chart Instances are sorted according to their predicted probability of being a true positive:

  Rank   Predicted probability   Actual class
   1             0.95                Pos
   2             0.93                Pos
   3             0.93                Neg
   4             0.88                Pos
  ...             ...                ...

In the lift chart, the x axis is the sample size and the y axis is the number of true positives.
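A minimal sketch of how the points of such a lift chart could be generated, reusing the four example instances above:

```python
# Lift-chart sketch: sort by predicted probability, then record the cumulative
# number of true positives for each sample size.
def lift_chart_points(predicted_probs, actual_classes):
    ranked = sorted(zip(predicted_probs, actual_classes),
                    key=lambda pair: pair[0], reverse=True)
    points, true_positives = [], 0
    for sample_size, (_, actual) in enumerate(ranked, start=1):
        if actual == "Pos":
            true_positives += 1
        points.append((sample_size, true_positives))   # (x, y) of the lift chart
    return points

print(lift_chart_points([0.95, 0.93, 0.93, 0.88], ["Pos", "Pos", "Neg", "Pos"]))
# [(1, 1), (2, 2), (3, 2), (4, 3)]
```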

42 Hypothetical Lift Chart

43 ROC diagrams ROC = "Receiver operating characteristic". An ROC diagram allows us to see: – how well a classifier will perform given certain misclassification costs and class distribution; – in which environments one classifier is better than another. It explicitly aims at solving problems 2 and 3 mentioned before.

44 ROC diagram (2) The ROC diagram plots the TP rate versus the FP rate. From the confusion matrix (with a, b, c, d as on slide 34): – TP rate = a/(a+c) = a/T_+ – FP rate = b/(b+d) = b/T_-

45 Classifier in ROC diagram 1 classifier = 1 point in the ROC diagram (FP rate on the x axis, TP rate on the y axis). [Figure: the diagonal corresponds to random prediction; the upper-left corner to perfect prediction ("no negatives returned, no positives forgotten"); "if true then pos" and "if false then pos" sit in the upper-right and lower-left corners; three example confusion matrices are plotted as points.]

46 Dominance in the ROC Space Classifier A dominates classifier B if and only if TPr_A > TPr_B and FPr_A < FPr_B.

47 ROC Convex Hull (ROCCH) Determined by the dominant classifiers: – Classifiers below the ROCCH are always sub-optimal; – Any point of the line segment connecting two classifiers can be achieved by randomly choosing between them; – The classifiers on the ROCCH can be combined to form a hybrid.

48 Rank classifiers Rank classifiers assign a rank to their predictions: some predictions are more certain than others -> higher rank. E.g. (1) decision trees: – use the purity of the leaf used for a prediction to rank it; – e.g., a leaf with 90% positives is ranked higher than a leaf with 80% positives. (2) neural nets: – criterion: output >= 0.5 -> pos; – but an output of 0.9 is more certainly positive than 0.51; – raising/lowering the 0.5 threshold makes TP and FP go down or up.

49 Rank classifiers yield a ROC curve: each specific threshold = 1 point on that curve. [Figure: ROC diagram with the ranker's curve and two fixed classifiers; with a high threshold the ranker is worse than Red, with a low threshold it is better than Blue.]
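A rough sketch of how a ranker's scores can be turned into ROC points by sweeping the threshold over all score values; `scores` and `actual_classes` are hypothetical parallel lists:

```python
# ROC-curve sketch: one (FP rate, TP rate) point per threshold value.
def roc_points(scores, actual_classes):
    p = sum(1 for c in actual_classes if c == "Pos")
    n = len(actual_classes) - p
    points = [(0.0, 0.0)]                            # nothing predicted positive
    for threshold in sorted(set(scores), reverse=True):
        tp = sum(1 for s, c in zip(scores, actual_classes)
                 if s >= threshold and c == "Pos")
        fp = sum(1 for s, c in zip(scores, actual_classes)
                 if s >= threshold and c == "Neg")
        points.append((fp / n, tp / p))              # (FP rate, TP rate)
    return points

print(roc_points([0.95, 0.93, 0.93, 0.88], ["Pos", "Pos", "Neg", "Pos"]))
```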

50 ROC for one Classifier [Figure: five example ROC curves.] Good separation between the classes: convex curve. Reasonable separation: mostly convex. Fairly poor separation: mostly convex. Poor separation: large and small concavities. Random performance: the diagonal.

51 The AUC Metric The area under the ROC curve (AUC) assesses the ranking in terms of separation of the classes. AUC estimates the probability that a randomly chosen positive instance will be ranked before a randomly chosen negative instance.
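A small sketch of AUC computed directly from this probabilistic reading (ties counted as one half); the example reuses the scored instances of slide 41:

```python
# AUC sketch: the fraction of (positive, negative) pairs in which the positive
# instance receives the higher score.
def auc(scores, actual_classes):
    pos = [s for s, c in zip(scores, actual_classes) if c == "Pos"]
    neg = [s for s, c in zip(scores, actual_classes) if c == "Neg"]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.95, 0.93, 0.93, 0.88], ["Pos", "Pos", "Neg", "Pos"]))   # 0.5
```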

52 Note To generate ROC curves or lift charts we need to use some of the evaluation methods considered in this lecture. ROC curves and lift charts can be used for internal optimization of classifiers.

53 Costs in ROC diagram Given misclassification costs: – C_FP: cost of a false positive; – C_FN: cost of a false negative (an undetected "+"). Average cost is
  C = C_FP · FP · T_-/T + C_FN · (1 - TP) · T_+/T
Lines of equal cost can be drawn in the ROC diagram (straight lines). The slope of such a line is (C_FP · T_-/T) / (C_FN · T_+/T).
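As a sketch, this cost formula can be used to pick the cheapest among a set of ROC points (classifiers or ranker thresholds); the points and cost parameters below are made up:

```python
# Cost-based choice sketch: evaluate the average-cost formula of slide 53 at each
# ROC point (FP rate, TP rate) and keep the cheapest one.
def cheapest_roc_point(points, c_fp, c_fn, pos_fraction):
    neg_fraction = 1 - pos_fraction
    def cost(point):
        fp_rate, tp_rate = point
        return c_fp * fp_rate * neg_fraction + c_fn * (1 - tp_rate) * pos_fraction
    return min(points, key=cost)

print(cheapest_roc_point([(0.1, 0.4), (0.3, 0.8), (0.6, 0.95)],
                         c_fp=1, c_fn=5, pos_fraction=0.5))   # (0.6, 0.95)
```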

54 [Figure: ROC diagram (FP rate vs. TP rate) with parallel iso-cost lines, labelled "increasing cost".]

55 [Figure: ROC diagram comparing several classifiers and a ranker.] With a high cost of false positives, Red is better; with a low cost of false positives, the ranker with a low threshold is better. Blue and Green are never better than the Ranker or Red.

56 Iso-Accuracy Lines Remember: accuracy is a weighted average of TP and TN:
  Acc = TP · T_+/T + TN · T_-/T = TP · T_+/T + (1 - FP) · T_-/T
For constant accuracy this gives straight lines TP = (N/P) · FP + constant. Higher iso-accuracy lines are better.

57 Example For uniform class distribution, C4.5 is optimal and achieves about 82% accuracy. With 4 times as many positives as negatives SVM is optimal and achieves about 84% accuracy. With 4 times as many negatives as positives CN2 is optimal and achieves about 86% accuracy.

58 Summary Metrics for Classifier’s Evaluation Methods for Classifier’s Evaluation & Comparison Costs in Data Mining – Cost-Sensitive Classification and Learning – Lift Charts – ROC Curves

59 Evaluation of regression models Predicting numbers: no "right or wrong" predictions, only degrees of error. Possible measures: – Sum of squared errors SSE: an absolute measure. – Relative error RE: measures the improvement over a trivial model: RE = SSE(hypothesis) / SSE(trivial hypothesis); the trivial hypothesis is e.g. always predicting the mean; RE is normally between 0 and 1. – Spearman correlation r: measures how well predictions and actual values correlate; less sensitive to the actual size of the errors.
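A minimal sketch of these three measures (the Spearman implementation below ignores ties for brevity); the example data are made up:

```python
# Regression-evaluation sketch: SSE, relative error w.r.t. the "always predict
# the mean" hypothesis, and Spearman correlation of predictions with actual values.
def sse(predictions, actuals):
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals))

def relative_error(predictions, actuals):
    mean = sum(actuals) / len(actuals)
    trivial = [mean] * len(actuals)                 # trivial hypothesis
    return sse(predictions, actuals) / sse(trivial, actuals)

def spearman(predictions, actuals):
    def ranks(values):                              # no tie handling in this sketch
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rp, ra = ranks(predictions), ranks(actuals)
    mean = (len(rp) - 1) / 2                        # mean of the ranks 0..n-1
    cov = sum((x - mean) * (y - mean) for x, y in zip(rp, ra))
    var = sum((x - mean) ** 2 for x in rp)          # both rank lists share this variance
    return cov / var

print(relative_error([1.1, 1.9, 3.2], [1.0, 2.0, 3.0]))   # 0.03
print(spearman([1.1, 1.9, 3.2], [1.0, 2.0, 3.0]))         # 1.0: same ordering
```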

