
1 CpSc 810: Machine Learning Evaluation of Classifier

2 Copyright Notice: Most slides in this presentation are adapted from the textbook slides and various other sources. The copyright belongs to the original authors. Thanks!

3 Classifier Accuracy Measures

Confusion matrix (actual class in rows, predicted class in columns):

  actual \ predicted | C1                  | C2
  C1                 | True positive (TP)  | False negative (FN)
  C2                 | False positive (FP) | True negative (TN)

Example (buy_computer data, 10,000 examples):

  classes            | buy_computer = yes | buy_computer = no | total
  buy_computer = yes | 6954               | 46                | 7000
  buy_computer = no  | 412                | 2588              | 3000
  total              | 7366               | 2634              | 10000

4 Classifier Accuracy Measures
- Sensitivity: the percentage of correctly predicted positive data over the total number of positive data.
- Specificity: the percentage of correctly identified negative data over the total number of negative data.
- Accuracy: the percentage of correctly predicted positive and negative data over the sum of positive and negative data.

5 Classifier Accuracy Measures
- Precision: the percentage of correctly predicted positive data over the total number of predicted positive data.
- The F-measure (also called the F-score) combines precision and recall into a single number: it is the (weighted) harmonic mean of the two, so it takes both the precision and the recall of the test into account when computing the score.
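
As a concrete illustration, here is a minimal Python sketch of these measures applied to the buy_computer confusion matrix from slide 3, taking buy_computer = yes as the positive class (so TP = 6954, FN = 46, FP = 412, TN = 2588); the variable names are just for this example:

    # Counts from the buy_computer confusion matrix above (positive class: buy_computer = yes)
    TP, FN, FP, TN = 6954, 46, 412, 2588

    sensitivity = TP / (TP + FN)                      # = recall = true positive rate
    specificity = TN / (TN + FP)                      # true negative rate
    accuracy    = (TP + TN) / (TP + TN + FP + FN)
    precision   = TP / (TP + FP)
    recall      = sensitivity
    f_score     = 2 * precision * recall / (precision + recall)   # F1: harmonic mean of the two

    print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}  "
          f"accuracy={accuracy:.3f}  precision={precision:.3f}  F1={f_score:.3f}")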

6 ROC curves
- ROC = Receiver Operating Characteristic
- Started in electronic signal detection theory (1940s-1950s)
- Has become a very popular method in machine learning for assessing classifiers

7 ROC curves: simplest case
- Consider a diagnostic test for a disease
- The test has 2 possible outcomes: 'positive' = suggesting presence of the disease; 'negative' = suggesting absence of the disease
- An individual can test either positive or negative for the disease

8 Hypothesis testing refresher
- Two 'competing theories' regarding a population parameter: the NULL hypothesis H0 and the ALTERNATIVE hypothesis HA
- H0: NO DIFFERENCE; any observed deviation from what we expect to see is due to chance variability
- HA: THE DIFFERENCE IS REAL

9 Test Statistic
- Measure how far the observed data are from what is expected under the NULL hypothesis by computing the value of a test statistic (TS) from the data
- The particular TS computed depends on the parameter; for example, to test the population mean μ, the TS is the sample mean (or the standardized sample mean)
- The NULL is rejected if the TS falls in a user-specified 'rejection region'
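
For instance, a small sketch of a one-sample z-test (the data and the assumed population standard deviation below are made up purely for illustration): the standardized sample mean is the test statistic, and the null is rejected when it falls in the two-sided 5% rejection region.

    import math

    # Hypothetical sample; test H0: mu = 50 against HA: mu != 50 at the 5% level
    sample = [52.1, 49.8, 53.4, 51.0, 50.7, 52.9, 48.5, 51.8]
    mu0, sigma = 50.0, 2.0          # sigma assumed known, as required for a z-test
    n = len(sample)
    xbar = sum(sample) / n

    z = (xbar - mu0) / (sigma / math.sqrt(n))   # standardized sample mean = test statistic
    reject = abs(z) > 1.96                      # two-sided rejection region for alpha = 0.05
    print(f"z = {z:.3f}, reject H0: {reject}")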

10 True disease state vs. Test result

  Disease \ Test       | not rejected                 | rejected
  No disease (D = 0)   | correct: specificity         | Type I error (False +): α
  Disease (D = 1)      | Type II error (False -): β   | correct: Power = 1 - β; sensitivity

11 Specific Example (figure): distributions of the test result for patients with the disease and patients without the disease.

12 Test Result (figure): a threshold on the test result splits the patients; those below the threshold are called "negative", those above are called "positive".

13 Test Result (figure): patients with the disease whose test result falls above the threshold are the True Positives.

14 Test Result (figure): patients without the disease whose test result falls above the threshold are the False Positives.

15 Test Result (figure): patients without the disease whose test result falls below the threshold are the True Negatives.

16 Test Result (figure): patients with the disease whose test result falls below the threshold are the False Negatives.

17 Test Result (figure): moving the threshold to the right (the '-' region grows, the '+' region shrinks).

18 Test Result (figure): moving the threshold to the left (the '-' region shrinks, the '+' region grows).

19 ROC curve (figure): True Positive Rate (sensitivity), 0% to 100%, plotted against False Positive Rate (1 - specificity), 0% to 100%.
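
A minimal sketch of how the points on such a curve can be computed, assuming a list of continuous test scores (higher = more indicative of disease) and disease labels; the function name and toy data are made up for this example:

    def roc_points(scores, labels):
        """Sweep the decision threshold and return the (FPR, TPR) point for each setting."""
        P = sum(labels)                    # number of diseased individuals (label 1)
        N = len(labels) - P                # number of non-diseased individuals (label 0)
        points = []
        # Candidate thresholds: every observed score, plus one below all scores
        for thr in sorted(set(scores), reverse=True) + [float("-inf")]:
            tp = sum(1 for s, y in zip(scores, labels) if s > thr and y == 1)
            fp = sum(1 for s, y in zip(scores, labels) if s > thr and y == 0)
            points.append((fp / N, tp / P))
        return points

    scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]   # toy data
    labels = [1,   1,   0,   1,   0,    1,   0,   0]
    print(roc_points(scores, labels))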

20 ROC curve comparison (figure): a good test and a poor test, each plotted as True Positive Rate (0-100%) against False Positive Rate (0-100%).

21 ROC curve extremes (figure): the best test, where the two score distributions don't overlap at all, and the worst test, where the distributions overlap completely.

22 Area under ROC curve (AUC)
- Overall measure of test performance
- Comparisons between two tests are based on the difference between their (estimated) AUCs
- For continuous data, AUC is equivalent to the Mann-Whitney U-statistic (a nonparametric test of the difference in location between two populations)

23 AUC for ROC curves (figure): example ROC curves with AUC = 100%, 90%, 65%, and 50%.

24 Interpretation of AUC
- AUC can be interpreted as the probability that the test result from a randomly chosen diseased individual is more indicative of disease than that from a randomly chosen nondiseased individual: P(X_i > X_j | D_i = 1, D_j = 0)
- So it can be thought of as a nonparametric measure of the distance between the disease and nondisease test results
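
A sketch of that interpretation in code (a toy example, not from the slides): estimate P(X_i > X_j | D_i = 1, D_j = 0) by comparing every (diseased, non-diseased) pair of test results, counting ties as 1/2 in the Mann-Whitney convention.

    def auc_probability(scores, labels):
        """Fraction of (diseased, non-diseased) pairs whose scores are correctly ordered."""
        pos = [s for s, y in zip(scores, labels) if y == 1]   # diseased test results
        neg = [s for s, y in zip(scores, labels) if y == 0]   # non-diseased test results
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
    labels = [1,   1,   0,   1,   0,    1,   0,   0]
    print(auc_probability(scores, labels))    # 0.8125 for this toy data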

25 Predictor Error Measures
- Measure predictor accuracy: how far off the predicted value is from the actual known value
- Loss function: measures the error between y_i and the predicted value y_i'
  - Absolute error: |y_i - y_i'|
  - Squared error: (y_i - y_i')^2

26 Predictor Error Measures
- Test error (generalization error): the average loss over the test set
  - Mean absolute error: (1/d) Σ_i |y_i - y_i'|
  - Mean squared error: (1/d) Σ_i (y_i - y_i')^2
  - Relative absolute error: Σ_i |y_i - y_i'| / Σ_i |y_i - ȳ|
  - Relative squared error: Σ_i (y_i - y_i')^2 / Σ_i (y_i - ȳ)^2   (ȳ is the mean of the actual values)
- The mean squared error exaggerates the presence of outliers
- The (square) root mean squared error is popular in practice; similarly, the root relative squared error
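
A sketch of these measures in Python, following the definitions above (y holds the actual values, y_pred the predictions; the helper name and the toy numbers are made up):

    import math

    def error_measures(y, y_pred):
        d = len(y)
        y_mean = sum(y) / d
        mae = sum(abs(a - p) for a, p in zip(y, y_pred)) / d            # mean absolute error
        mse = sum((a - p) ** 2 for a, p in zip(y, y_pred)) / d          # mean squared error
        rae = (sum(abs(a - p) for a, p in zip(y, y_pred))
               / sum(abs(a - y_mean) for a in y))                       # relative absolute error
        rse = (sum((a - p) ** 2 for a, p in zip(y, y_pred))
               / sum((a - y_mean) ** 2 for a in y))                     # relative squared error
        return {"MAE": mae, "MSE": mse, "RMSE": math.sqrt(mse), "RAE": rae, "RSE": rse}

    print(error_measures([3.0, 5.0, 7.0, 9.0], [2.5, 5.5, 6.0, 9.5]))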

27 Evaluating the Accuracy of a Classifier or Predictor (I)
- Holdout method: the given data is randomly partitioned into two independent sets
  - Training set (e.g., 2/3) for model construction
  - Test set (e.g., 1/3) for accuracy estimation
- Random sampling: a variation of holdout; repeat holdout k times, accuracy = average of the accuracies obtained
- Cross-validation (k-fold, where k = 10 is most popular)
  - Randomly partition the data into k mutually exclusive subsets of approximately equal size
  - At the i-th iteration, use D_i as the test set and the remaining subsets as the training set
  - Leave-one-out: k folds where k = number of examples, for small data sets
  - Stratified cross-validation: folds are stratified so that the class distribution in each fold is approximately the same as in the initial data
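
A minimal k-fold cross-validation sketch (train and evaluate are placeholders for whichever learner and accuracy measure are being assessed; nothing here is tied to a specific library):

    import random

    def k_fold_cross_validation(data, k, train, evaluate, seed=0):
        """Average accuracy over k folds: each fold is used exactly once as the test set."""
        examples = list(data)
        random.Random(seed).shuffle(examples)
        folds = [examples[i::k] for i in range(k)]    # k mutually exclusive, roughly equal subsets
        accuracies = []
        for i in range(k):
            test_set = folds[i]
            training_set = [x for j in range(k) if j != i for x in folds[j]]
            model = train(training_set)
            accuracies.append(evaluate(model, test_set))
        return sum(accuracies) / k

    # Usage (hypothetical): k_fold_cross_validation(labelled_data, 10, train_fn, accuracy_fn)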

28 Evaluating the Accuracy of a Classifier or Predictor (II)
- Bootstrap
  - Works well with small data sets
  - Samples the given training examples uniformly with replacement, i.e., each time an example is selected, it is equally likely to be selected again and re-added to the training set
- There are several bootstrap methods; a common one is the .632 bootstrap
  - Suppose we are given a data set of d examples. The data set is sampled d times, with replacement, resulting in a training set of d samples. The examples that did not make it into the training set form the test set.
  - About 63.2% of the original data will end up in the bootstrap sample, and the remaining 36.8% will form the test set (since (1 - 1/d)^d ≈ e^-1 ≈ 0.368)
  - Repeat the sampling procedure k times; the overall accuracy of the model is the average Acc(M) = (1/k) Σ_{i=1..k} [0.632 · Acc(M_i)_test_set + 0.368 · Acc(M_i)_train_set]
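
A sketch of the .632 bootstrap as described above (again with placeholder train/evaluate functions supplied by the caller):

    import random

    def bootstrap_632(data, k, train, evaluate, seed=0):
        """Estimate accuracy with the .632 bootstrap over k resampling rounds."""
        rng = random.Random(seed)
        d = len(data)
        total = 0.0
        for _ in range(k):
            drawn = [rng.randrange(d) for _ in range(d)]          # sample d indices with replacement
            training_set = [data[i] for i in drawn]
            in_training = set(drawn)
            test_set = [data[i] for i in range(d) if i not in in_training]   # the ~36.8% left out
            model = train(training_set)
            total += 0.632 * evaluate(model, test_set) + 0.368 * evaluate(model, training_set)
        return total / k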

29 More About Bootstrap
- The bootstrap method attempts to determine the probability distribution from the data itself, without recourse to the CLT (Central Limit Theorem).
- The bootstrap method is not a way of reducing the error! It only tries to estimate it.
- Basic idea of the bootstrap: originally, from some list of data, one computes an object (a statistic). Create an artificial list by randomly drawing elements from that list; some elements will be picked more than once. Compute a new object. Repeat 100-1000 times and look at the distribution of these objects.
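
A sketch of that basic idea, here using the sample mean as the 'object' computed from each artificial list (any statistic could be substituted; the data is made up):

    import random
    import statistics

    def bootstrap_distribution(data, statistic, n_boot=1000, seed=0):
        """Recompute `statistic` on n_boot resamples drawn with replacement from `data`."""
        rng = random.Random(seed)
        d = len(data)
        return [statistic([data[rng.randrange(d)] for _ in range(d)]) for _ in range(n_boot)]

    data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4, 5.8, 4.7]     # made-up measurements
    boot_means = bootstrap_distribution(data, statistics.mean)
    # The spread of the bootstrap distribution estimates the error of the original statistic
    print(statistics.mean(boot_means), statistics.stdev(boot_means))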

30 More About Bootstrap
- How many bootstraps? There is no clear answer: there are lots of theorems on asymptotic convergence, but no real estimates. Rule of thumb: try it 100 times, then 1000 times, and see whether your answers have changed by much. In any case there are N^N possible resamples.
- Is it reliable? A very good question! The jury is still out on how far it can be applied, but for now nobody is going to shoot you down for using it. Agreement is good for Normal (Gaussian) distributions; skewed distributions tend to be more problematic, particularly in the tails (the bootstrap underestimates the errors).

31 Sampling
- Sampling is the main technique employed for data selection. It is often used for both the preliminary investigation of the data and the final data analysis.
- Statisticians sample because obtaining the entire set of data of interest is too expensive or time consuming.
- Sampling is used in data mining because processing the entire set of data of interest is too expensive or time consuming.

32 Sampling …
- The key principle for effective sampling: using a sample will work almost as well as using the entire data set, if the sample is representative.
- A sample is representative if it has approximately the same property (of interest) as the original set of data.

33 Sample Size (figure): the same data set drawn at 8000 points, 2000 points, and 500 points.

34 Types of Sampling
- Simple random sampling: there is an equal probability of selecting any particular item
  - Sampling without replacement: as each item is selected, it is removed from the population
  - Sampling with replacement: objects are not removed from the population as they are selected, so the same object can be picked more than once
- Stratified sampling: split the data into several partitions, then draw random samples from each partition
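
A sketch of these schemes using only Python's standard library (the toy data set and the 10% sampling rate are made up for illustration):

    import random
    from collections import defaultdict

    rng = random.Random(0)
    data = [("yes", i) for i in range(70)] + [("no", i) for i in range(30)]   # toy labelled data

    # Simple random sampling without replacement: selected items leave the population
    without_replacement = rng.sample(data, 10)

    # Simple random sampling with replacement: the same object can be picked more than once
    with_replacement = [rng.choice(data) for _ in range(10)]

    # Stratified sampling: partition by class label, then draw a random sample from each partition
    partitions = defaultdict(list)
    for label, value in data:
        partitions[label].append((label, value))
    stratified = [item
                  for items in partitions.values()
                  for item in rng.sample(items, max(1, len(items) // 10))]    # ~10% per stratum

    print(len(without_replacement), len(with_replacement), len(stratified))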

