Outliers Split-sample Validation

Multinomial Logistic Regression: Detecting Outliers and Validating Analysis

Outliers Multinomial logistic regression in SPSS does not compute any diagnostic statistics. In the absence of diagnostic statistics, SPSS recommends using the Logistic Regression procedure to calculate and examine diagnostic measures. A multinomial logistic regression for three groups compares group 1 to group 3 and group 2 to group 3. To test for outliers, we will run two binary logistic regressions, using case selection to compare group 1 to group 3 and then group 2 to group 3. From both of these analyses we will compile a list of cases with studentized residuals larger than ±2.0, and re-run the multinomial solution without these cases. If omitting these cases improves the accuracy rate by less than 2%, we will interpret the model that includes all cases.
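A minimal sketch of the flagging rule described above (plain Python rather than SPSS; the residual values are hypothetical):

```python
# Flag cases whose studentized residual falls outside +/-2.0, the cutoff
# used in the outlier strategy described above (illustrative, not SPSS).

def flag_outliers(residuals, cutoff=2.0):
    """Return the indices of cases with |studentized residual| > cutoff."""
    return [i for i, r in enumerate(residuals) if abs(r) > cutoff]

# Hypothetical residuals for four cases
print(flag_outliers([0.5, -2.3, 1.9, 2.7]))  # cases at index 1 and 3 are flagged
```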

Example To demonstrate the process for detecting outliers, we will examine the relationship between the independent variables "age" [age], "highest year of school completed" [educ], and "confidence in banks and financial institutions" [confinan] and the dependent variable "opinion about spending on social security" [natsoc]. Opinion about spending on social security contains three categories: 1 = too little, 2 = about right, 3 = too much. With all cases, including those that might be identified as outliers, the accuracy rate was 63.7%. We note this so that we can compare it with the classification accuracy after removing outliers and determine which model to interpret.

Request multinomial logistic regression for baseline model Select the Regression | Multinomial Logistic… command from the Analyze menu.

Selecting the dependent variable Second, click on the right arrow button to move the dependent variable to the Dependent text box. First, highlight the dependent variable natsoc in the list of variables.

Selecting metric independent variables Metric independent variables are specified as covariates in multinomial logistic regression. Metric variables can be either interval or, by convention, ordinal. Move the metric independent variables, age, educ and confinan to the Covariate(s) list box.

Specifying statistics to include in the output While we will accept most of the SPSS defaults for the analysis, we need to specifically request the classification table. Click on the Statistics… button to make a request.

Requesting the classification table First, keep the SPSS defaults for Model and Parameters. Second, mark the checkbox for the Classification table. Third, click on the Continue button to complete the request.

Completing the multinomial logistic regression request Click on the OK button to request the output for the multinomial logistic regression. The multinomial logistic procedure supports additional commands to specify the model computed for the relationships (we will use the default main effects model), additional specifications for computing the regression, and saving classification results. We will not make use of these options.

Classification accuracy for all cases With all cases, including those that might be identified as outliers, the accuracy rate was 63.7%. We will compare the classification accuracy of the model with all cases to the classification accuracy of the model excluding outliers.

Outliers for the comparison of groups 1 and 3 Since multinomial logistic regression does not identify outliers, we will use binary logistic regressions to identify them. Choose the Select Cases… command from the Data menu to include only groups 1 and 3 in the analysis.

Selecting groups 1 and 3 First, mark the If condition is satisfied option button. Second, click on the IF… button to specify the condition.

Formula for selecting groups 1 and 3 To include only groups 1 and 3 in the analysis, we enter the formula to include cases that had a value of 1 for natsoc or a value of 3 for natsoc. After completing the formula, click on the Continue button to close the dialog box.

Completing the selection of groups 1 and 3 To activate the selection, click on the OK button.

Binary logistic regression comparing groups 1 and 3 Select the Regression | Binary Logistic… command from the Analyze menu.

Dependent and independent variables for the comparison of groups 1 and 3 First, move the dependent variable natsoc to the Dependent variable text box. Second, move the independent variables age, educ, and confinan to the Covariates list box. Third, click on the Save… button to request the inclusion of studentized residuals in the data set.

Including studentized residuals in the comparison of groups 1 and 3 First, mark the checkbox for Studentized residuals in the Residuals panel. Second, click on the Continue button to complete the specifications.

Outliers for the comparison of groups 1 and 3 Click on the OK button to request the output for the logistic regression.

Locating the case ids for outliers for groups 1 and 3 In order to exclude outliers from the multinomial logistic regression, we must identify their case ids. Choose the Select Cases… command from the Data menu to identify cases that are outliers.

Replace the selection criteria To replace the formula that selected cases in group 1 and 3 for the dependent variable, click on the IF… button.

Formula for identifying outliers Type in the formula for including outliers. Note that we are including outliers because we want to identify them. This is different from previous procedures, where we included the cases that were not outliers in the analysis. Click on the Continue button to close the dialog box.

Completing the selection of outliers To activate the selection, click on the OK button.

Locating the outliers in the data editor We used Select Cases to specify a criterion for including cases that were outliers. Select Cases will assign a 1 (true) to the filter_$ variable if a case satisfies the criterion. To locate the cases that have a filter_$ value of 1, we can sort the data set in descending order of the values for the filter variable. Click on the column header for filter_$ and select Sort Descending from the drop-down menu.

The outliers in the data editor At the top of the sorted column for filter_$, we see four 1's, indicating that 4 cases met the criterion for being considered outliers.

Outliers for the comparison of groups 2 and 3 The process for identifying outliers is repeated for the other comparison done by the multinomial logistic regression, group 2 versus group 3. Since multinomial logistic regression does not identify outliers, we will use binary logistic regressions to identify them. Choose the Select Cases… command from the Data menu to include only groups 2 and 3 in the analysis.

Selecting groups 2 and 3 First, mark the If condition is satisfied option button. Second, click on the IF… button to change the condition.

Formula for selecting groups 2 and 3 To include only groups 2 and 3 in the analysis, we enter the formula to include cases that had a value of 2 for natsoc or a value of 3 for natsoc. After completing the formula, click on the Continue button to close the dialog box.

Completing the selection of groups 2 and 3 To activate the selection, click on the OK button.

Binary logistic regression comparing groups 2 and 3 Select the Regression | Binary Logistic… command from the Analyze menu.

Outliers for the comparison of groups 2 and 3 The specifications for the analysis are the same as the ones we used for detecting outliers for groups 1 and 3. Click on the OK button to request the output for the logistic regression.

Locating the case ids for outliers for groups 2 and 3 In order to exclude outliers from the multinomial logistic regression, we must identify their case ids. Choose the Select Cases… command from the Data menu to identify cases that are outliers.

Replace the selection criteria To replace the formula that selected cases in group 2 and 3 for the dependent variable, click on the IF… button.

Formula for identifying outliers Type in the formula for including outliers. Note that we use the second version of the studentized residual, sre_2, which was created by this second logistic regression. Click on the Continue button to close the dialog box.

Completing the selection of outliers To activate the selection, click on the OK button.

Locating the outliers in the data editor We used Select Cases to specify a criterion for including cases that were outliers. Select Cases will assign a 1 (true) to the filter_$ variable if a case satisfies the criterion. To locate the cases that have a filter_$ value of 1, we can sort the data set in descending order of the values for the filter variable. Click on the column header for filter_$ and select Sort Descending from the drop-down menu.

The outliers in the data editor At the top of the sorted column for filter_$, we see that we have two outliers. These two outliers were among the four outliers identified in the analysis of groups 1 and 3.

The caseid of the outliers Since the studentized residuals were only calculated for a subset of the cases, the cases not included were assigned missing values and would be excluded from the analysis if the selection criterion were based on the studentized residuals. We will use caseid in the selection criteria instead. The case ids for the outliers are "20002045", "20002413", "20000012", and "20000816". These are the cases that we will omit from the multinomial logistic regression.

Excluding the outliers from the multinomial logistic regression To exclude the outliers from the analysis, we will use the Select Cases… command again.

Changing the condition for the selection Click on the IF… button to change the condition.

Excluding cases identified as outliers To include all of the cases except the outliers, we set caseid not equal to each of the subjects' ids. Note that the ids are put in quotation marks because caseid is string data in this data set. After completing the formula, click on the Continue button to close the dialog box.

Completing the exclusion of the outliers To activate the exclusion, click on the OK button.

Multinomial logistic regression excluding the outliers Select the Regression | Multinomial Logistic… command from the Analyze menu.

Running the multinomial logistic regression without the outliers The specifications for the analysis are the same as the ones we used for the multinomial logistic regression with all cases. Click on the OK button to request the output for the logistic regression.

Classification accuracy after omitting outliers With all cases, the classification accuracy rate for the multinomial logistic regression model was 63.7%. After omitting the outliers, the accuracy rate improved to 65.3%. Since the increase in accuracy was less than 2%, the multinomial logistic regression model with all cases will be interpreted.
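The decision rule applied on this slide can be written out as a small check (illustrative Python using the accuracy figures from the text; the function name is ours):

```python
def interpret_model(accuracy_all, accuracy_no_outliers, threshold=2.0):
    """Interpret the reduced model only if omitting outliers gains
    at least `threshold` percentage points of classification accuracy."""
    gain = accuracy_no_outliers - accuracy_all
    return "omit outliers" if gain >= threshold else "all cases"

print(interpret_model(63.7, 65.3))  # gain of 1.6 points -> "all cases"
```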

75/25% Cross-validation Strategy In this validation strategy, the cases are randomly divided into two subsets: a training sample containing 75% of the cases and a holdout sample containing the remaining 25% of the cases. The training sample is used to derive the multinomial logistic regression model. The holdout sample is then classified using the coefficients from the training model. The classification accuracy for the holdout sample estimates how well the model based on the training sample will perform for the population represented by the data set. While the classification accuracy for the validation sample is expected to be lower than the classification accuracy for the training sample, the difference (shrinkage) should be no larger than 2%. In addition to satisfying the classification accuracy criterion, we will require that the significance of the overall relationship and of the relationships with individual predictors for the training sample match the significance results for the model using the full data set.

75/25% Cross-validation Strategy SPSS does not classify cases that are not included in the training sample, so we will have to manually compute the classifications for the holdout sample if we want to use this strategy. We will run the analysis for the training sample, use the coefficients from the training sample analysis to compute classification scores (log of the odds) for each group, compute the probabilities that correspond to each group defined by the dependent variable, and classify the case in the group with the highest probability.

Restoring the outliers to the data set To restore the outliers to the analysis, we will use the Select Cases… command again.

Restoring the outliers to the data set Mark the All cases option button to include the outliers back in the data set. To activate the selection, click on the OK button.

Re-running the multinomial logistic regression with all cases Select the Regression | Multinomial Logistic… command from the Analyze menu.

Requesting the multinomial logistic regression again The specifications for the analysis are the same as the ones we have been using all along. Click on the OK button to request the output for the multinomial logistic regression.

Overall Relationship The presence of a relationship between the dependent variable and combination of independent variables is based on the statistical significance of the final model chi-square in the SPSS table titled "Model Fitting Information". In this analysis, the probability of the model chi-square (15.515) was p=0.017, less than or equal to the level of significance of 0.05. The null hypothesis that there was no difference between the model without independent variables and the model with independent variables was rejected. The existence of a relationship between the independent variables and the dependent variable was supported.

Individual relationships - 1 The statistical significance of the relationship between confidence in banks and financial institutions and opinion about spending on social security is based on the statistical significance of the chi-square statistic in the SPSS table titled "Likelihood Ratio Tests". For this relationship, the probability of the chi-square statistic (9.445) was p=0.009, less than or equal to the level of significance of 0.05. The null hypothesis that all of the b coefficients associated with confidence in banks and financial institutions were equal to zero was rejected. The existence of a relationship between confidence in banks and financial institutions and opinion about spending on social security was supported.

Individual relationships - 2 In the comparison of survey respondents who thought we spend too little money on social security to survey respondents who thought we spend too much money on social security, the probability of the Wald statistic (6.263) for the variable confidence in banks and financial institutions [confinan] was 0.012. Since the probability was less than or equal to the level of significance of 0.05, the null hypothesis that the b coefficient for confidence in banks and financial institutions was equal to zero for this comparison was rejected.

Individual relationships - 3 The value of Exp(B) was 0.121 which implies that for each unit increase in confidence in banks and financial institutions the odds decreased by 87.9% (0.121 - 1.0 = -0.879). The relationship stated in the problem is supported. Survey respondents who had more confidence in banks and financial institutions were less likely to be in the group of survey respondents who thought we spend too little money on social security, rather than the group of survey respondents who thought we spend too much money on social security. For each unit increase in confidence in banks and financial institutions, the odds of being in the group of survey respondents who thought we spend too little money on social security decreased by 87.9%.
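The percent-change arithmetic for Exp(B) values can be verified directly (an illustrative Python sketch; the helper name is ours):

```python
def pct_change_in_odds(exp_b):
    """Percent change in the odds for each one-unit increase in the predictor."""
    return (exp_b - 1.0) * 100.0

print(round(pct_change_in_odds(0.121), 1))  # -87.9: odds decrease by 87.9%
print(round(pct_change_in_odds(0.098), 1))  # -90.2: odds decrease by 90.2%
```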

Individual relationships - 4 In the comparison of survey respondents who thought we spend about the right amount of money on social security to survey respondents who thought we spend too much money on social security, the probability of the Wald statistic (7.276) for the variable confidence in banks and financial institutions [confinan] was 0.007. Since the probability was less than or equal to the level of significance of 0.05, the null hypothesis that the b coefficient for confidence in banks and financial institutions was equal to zero for this comparison was rejected.

Individual relationships - 5 The value of Exp(B) was 0.098 which implies that for each unit increase in confidence in banks and financial institutions the odds decreased by 90.2% (0.098 - 1.0 = -0.902). The relationship stated in the problem is supported. Survey respondents who had more confidence in banks and financial institutions were less likely to be in the group of survey respondents who thought we spend about the right amount of money on social security, rather than the group of survey respondents who thought we spend too much money on social security. For each unit increase in confidence in banks and financial institutions, the odds of being in the group of survey respondents who thought we spend about the right amount of money on social security decreased by 90.2%.

Classification Accuracy - 1 The independent variables could be characterized as useful predictors distinguishing survey respondents who thought we spend too little money on social security, survey respondents who thought we spend about the right amount of money on social security, and survey respondents who thought we spend too much money on social security if the classification accuracy rate was substantially higher than the accuracy attainable by chance alone. Operationally, the classification accuracy rate should be 25% or more higher than the proportional by chance accuracy rate. The proportional by chance accuracy rate was computed by calculating the proportion of cases in each group from the number of cases in each group in the 'Case Processing Summary', and then squaring and summing the proportions (0.614² + 0.345² + 0.041² = 0.498).
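The proportional by chance accuracy computation can be reproduced as follows (illustrative Python, using the group proportions from the Case Processing Summary):

```python
def proportional_by_chance_accuracy(proportions):
    """Square and sum the proportion of cases in each group."""
    return sum(p ** 2 for p in proportions)

chance = proportional_by_chance_accuracy([0.614, 0.345, 0.041])
criterion = 1.25 * chance   # the "25% higher than chance" benchmark
print(round(chance, 3))     # 0.498
print(round(criterion, 3))  # 0.622
```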

Classification Accuracy - 2 The classification accuracy rate was 63.7%, which was greater than or equal to the proportional by chance accuracy criterion of 62.2% (1.25 x 49.8% = 62.2%). The criterion for classification accuracy is satisfied.

Validation analysis: set the random number seed To set the random number seed, select the Random Number Seed… command from the Transform menu. If the cases have been sorted in a different order when checking outliers, they should be resorted by caseid, or the assignment of random numbers will not match mine.

Set the random number seed First, click on the Set seed to option button to activate the text box. Second, type in the random seed stated in the problem. For this example, assume it is 892776. Third, click on the OK button to complete the dialog box. Note that SPSS does not provide you with any feedback about the change.

Validation analysis: compute the split variable To enter the formula for the variable that will split the sample in two parts, click on the Compute… command.

The formula for the split variable First, type the name for the new variable, split, into the Target Variable text box. Second, the formula for the value of split is shown in the text box. The uniform(1) function generates a random decimal number between 0 and 1. The random number is compared to the value 0.75. If the random number is less than or equal to 0.75, the value of the formula will be 1, the SPSS numeric equivalent to true. If the random number is larger than 0.75, the formula will return a 0, the SPSS numeric equivalent to false. Third, click on the OK button to complete the dialog box.
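The same split can be sketched with Python's standard random module. Note that Python's random number stream differs from SPSS's, so seeding with 892776 here will not reproduce the SPSS assignment; the sketch only illustrates the logic, and the number of cases is hypothetical.

```python
import random

random.seed(892776)  # illustrative only: Python's stream differs from SPSS's

n_cases = 1000
# 1 (true) if the uniform draw is <= 0.75, else 0 (false), as on the slide
split = [1 if random.uniform(0, 1) <= 0.75 else 0 for _ in range(n_cases)]
print(sum(split) / n_cases)  # roughly 0.75 of cases fall in the training sample
```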

Selecting the training sample - 1 To select the cases that we will use for the training sample, we will use the Select Cases… command again.

Selecting the training sample - 2 First, mark the If condition is satisfied option button. Second, click on the IF… button to specify the condition.

Selecting the training sample - 3 To include the cases for the training sample, we enter the selection criterion: "split = 1". After completing the formula, click on the Continue button to close the dialog box.

Selecting the training sample - 4 To activate the selection, click on the OK button.

Re-running the multinomial logistic regression with the training sample Select the Regression | Multinomial Logistic… command from the Analyze menu.

Requesting the multinomial logistic regression again The specifications for the analysis are the same as the ones we have been using all along. Click on the OK button to request the output for the multinomial logistic regression.

Comparing the training model to the full model - 1 In the cross-validation analysis, the relationship between the independent variables and the dependent variable was statistically significant. The probability for the model chi-square (17.487) testing the overall relationship was p = 0.008. The significance of the overall relationship between the independent variables and the dependent variable supports the interpretation of the model using the full data set.

Comparing the training model to the full model - 2 The pattern of significance of individual predictors for the training model does not match the pattern for the full data set. Age is not significant in either model, and confinan is statistically significant in both. Educ, however, is statistically significant in the training sample but not in the full model. Though we have a reason to declare the question false, we will continue on to demonstrate the statistical method.

Comparing the training model to the full model - 3 The statistical significance and direction of the relationship between confinan and the dependent variable for the training model agrees with the findings for the model using the full data set.

Classification accuracy of the training sample The classification accuracy for the training sample is 66.2%. The final consideration in the validation analysis is to see whether or not the shrinkage in classification accuracy for the holdout sample is less than 2%. Unfortunately, SPSS does not calculate classifications for the cases in the holdout validation sample, so we must manually calculate the values for classification of the cases. The steps and calculations on the following slides are needed to classify the holdout cases and compute classification accuracy in a crosstabs table.

Classification accuracy of the holdout sample SPSS does not calculate classifications for the cases in the holdout validation sample, so we must manually calculate the values for classification of the cases.

The log of the odds for the first group To classify cases, we first calculate the log of the odds for membership in each group: G1, G2, and G3. To calculate the log of the odds for the first group (G1), we multiply the coefficients for the first group from the table of parameter estimates by the variables: COMPUTE G1 = 6.573629842223 + 0.009441308512708 * AGE + 0.155649871298 * EDUC - 2.496600350832 * CONFINAN. To see all of the decimal places for a number in the SPSS output, double-click on a cell to highlight it and the full number will appear.

The log of the odds for the second group To calculate the log of the odds for the second group (G2), we multiply the coefficients for the second group from the table of parameter estimates by the variables: COMPUTE G2 = 3.664294481189 + 0.02905602322394 * AGE + 0.3303189055983 * EDUC - 2.947458591882 * CONFINAN.

The log of the odds for the third group The third group (G3) is the reference group and does not appear in the table of parameter estimates. By definition, the log of the odds for the reference group is equal to zero (0). We create the variable for G3 with the command: COMPUTE G3 = 0.

The probabilities for each group Having computed the log of the odds for each group, we convert the log of the odds back to a probability value with the following formulas: COMPUTE P1 = EXP(G1) / (EXP(G1) + EXP(G2) + EXP(G3)). COMPUTE P2 = EXP(G2) / (EXP(G1) + EXP(G2) + EXP(G3)). COMPUTE P3 = EXP(G3) / (EXP(G1) + EXP(G2) + EXP(G3)). EXECUTE.

Group classification Each case is predicted to be a member of the group to which it has the highest probability of belonging. We can accomplish this using "IF" statements in SPSS: IF (P1 > P2 AND P1 > P3) PREDGRP = 1. IF (P2 > P1 AND P2 > P3) PREDGRP = 2. IF (P3 > P1 AND P3 > P2) PREDGRP = 3. EXECUTE.
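Taken together, the COMPUTE and IF steps above amount to a softmax classification. A self-contained Python sketch using the coefficients shown on the preceding slides (the case values age=40, educ=12, confinan=2 are hypothetical):

```python
import math

def classify(age, educ, confinan):
    """Score each group with the training-sample coefficients, convert the
    log-odds to probabilities, and predict the highest-probability group."""
    g1 = (6.573629842223 + 0.009441308512708 * age
          + 0.155649871298 * educ - 2.496600350832 * confinan)
    g2 = (3.664294481189 + 0.02905602322394 * age
          + 0.3303189055983 * educ - 2.947458591882 * confinan)
    g3 = 0.0  # reference group: log of the odds fixed at zero
    denom = math.exp(g1) + math.exp(g2) + math.exp(g3)
    probs = [math.exp(g) / denom for g in (g1, g2, g3)]
    return probs.index(max(probs)) + 1, probs  # groups are numbered 1 to 3

pred, probs = classify(age=40, educ=12, confinan=2)
print(pred)                  # predicted group for this hypothetical case
print(round(sum(probs), 6))  # 1.0 -- the probabilities sum to one
```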

Selecting the holdout sample - 1 Our calculations predicted group membership for all cases in the data set, including the training sample. To compute the classification accuracy for the holdout sample, we must explicitly include only the holdout cases in the calculations. To select the cases for the holdout group, we will use the Select Cases… command again.

Selecting the holdout sample - 2 First, mark the If condition is satisfied option button. Second, click on the IF… button to specify the condition.

Selecting the holdout sample - 3 To include the cases in the 25% holdout sample, we enter the criterion: "split = 0". After completing the formula, click on the Continue button to close the dialog box.

Selecting the holdout sample - 4 To activate the selection, click on the OK button.

The crosstabs classification accuracy table - 1 The classification accuracy table is a table of predicted group membership versus actual group membership. SPSS can create it as a cross-tabulated table. Select the Descriptive Statistics | Crosstabs… command from the Analyze menu.

The crosstabs classification accuracy table - 2 To mimic the appearance of classification tables in SPSS, we will put the original variable, natsoc, in the rows of the table and the predicted group variable, predgrp, in the columns. After specifying the row and column variables, we click on the Cells… button to request percentages.

The crosstabs classification accuracy table - 3 The classification accuracy rate will be the sum of the total percentages on the main diagonal. First, to obtain these percentages, mark the check box for Total in the Percentages panel. Second, click on the Continue button to close the dialog box.

The crosstabs classification accuracy table - 4 To complete the request for the cross-tabulated table, click on the OK button.

The crosstabs classification accuracy table - 5 The classification accuracy rate is the sum of the total percentages on the main diagonal: 51.2% + 12.2% = 63.4%. The criterion for supporting the classification accuracy of the model is an accuracy rate for the holdout sample with no more than 2% shrinkage from the accuracy rate for the training sample. The accuracy rate for the training sample was 66.2%, so the shrinkage was 66.2% - 63.4% = 2.8%. The shrinkage in the accuracy rate for the holdout sample does not satisfy the requirement, so the classification accuracy for the analysis of the full data set was not supported.
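The shrinkage check on this slide reduces to a few lines (illustrative Python, using the figures from the text):

```python
# Holdout accuracy: sum of the total percentages on the main diagonal
holdout_accuracy = 51.2 + 12.2     # from the crosstabs table
training_accuracy = 66.2           # from the training-sample model
shrinkage = training_accuracy - holdout_accuracy

print(round(holdout_accuracy, 1))  # 63.4
print(round(shrinkage, 1))         # 2.8 -> exceeds the 2% limit
print("validated" if shrinkage <= 2.0 else "not validated")
```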