Multiple Regression – Assumptions and Outliers

This presentation covers four topics: multiple regression and assumptions, multiple regression and outliers, a strategy for solving problems, and practice problems.

Multiple Regression and Assumptions Multiple regression is most effective at identifying the relationship between a dependent variable and a combination of independent variables when its underlying assumptions are satisfied: each of the metric variables is normally distributed, the relationships between metric variables are linear, and the relationship between metric and dichotomous variables is homoscedastic. Failing to satisfy the assumptions does not mean that our answer is wrong; it means that our solution may under-report the strength of the relationships.

Multiple Regression and Outliers Outliers can distort the regression results. When an outlier is included in the analysis, it pulls the regression line towards itself. This can result in a solution that is more accurate for the outlier, but less accurate for all of the other cases in the data set. We will check for univariate outliers on the dependent variable and multivariate outliers on the independent variables.

Relationship between assumptions and outliers The problems of satisfying assumptions and detecting outliers are intertwined. For example, if a case has a value on the dependent variable that is an outlier, it will affect the skew, and hence, the normality of the distribution. Removing an outlier may improve the distribution of a variable. Transforming a variable may reduce the likelihood that the value for a case will be characterized as an outlier.

Order of analysis is important The order in which we check assumptions and detect outliers will affect our results, because we may get a different subset of cases in the final analysis. In order to maximize the number of cases available to the analysis, we will evaluate assumptions first. We will substitute any transformations of variables that enable us to satisfy the assumptions, and we will use any transformed variables that are required in our analysis to detect outliers.

Strategy for solving problems Our strategy for solving problems about violations of assumptions and outliers includes the following steps:
1. Run the type of regression specified in the problem statement on the variables, using the full data set.
2. Test the dependent variable for normality. If it satisfies the criteria for normality only when transformed, substitute the transformed variable in the remaining tests that call for the dependent variable.
3. Test for normality, linearity, and homoscedasticity using the scripts, and decide which transformations should be used.
4. Substitute the transformations and run the regression entering all independent variables, saving the studentized residuals and Mahalanobis distance scores. Compute the probabilities for D².
5. Remove the outliers (studentized residual greater than 3 or Mahalanobis D² with p <= 0.001) and run the regression with the method and variables specified in the problem.
6. Compare the R² for the analysis using transformed variables and omitting outliers (step 5) to the R² obtained for the model using all data and the original variables (step 1).

Transforming dependent variables We will use the following logic to transform the dependent variable. If the dependent variable is not normally distributed:
- Try the logarithmic, square root, and inverse transformations.
- Use the first transformed variable that satisfies the normality criteria.
- If no transformation satisfies the normality criteria, use the untransformed variable and add a caution for the violation of the assumption.
If a transformation satisfies normality, use the transformed variable in the tests of the independent variables.

Transforming independent variables - 1 If an independent variable is normally distributed and linearly related to the dependent variable, use it as is. If an independent variable is normally distributed but not linearly related to the dependent variable:
- Try the logarithmic, square root, square, and inverse transformations.
- Use the first transformed variable that satisfies the linearity criteria and does not violate the normality criteria.
- If no transformation satisfies the linearity criteria without violating the normality criteria, use the untransformed variable and add a caution for the violation of the assumption.

Transforming independent variables - 2 If an independent variable is linearly related to the dependent variable but not normally distributed:
- Try the logarithmic, square root, and inverse transformations.
- Use the first transformed variable that satisfies the normality criteria and retains a statistically significant correlation with the dependent variable.
- If no transformation satisfies the normality criteria with a significant correlation, use the untransformed variable and add a caution for the violation of the assumption.

Transforming independent variables - 3 If an independent variable is neither linearly related to the dependent variable nor normally distributed:
- Try the logarithmic, square root, square, and inverse transformations.
- Use the first transformed variable that satisfies the normality criteria and has a significant correlation with the dependent variable.
- If no transformation satisfies the normality criteria with a significant correlation, use the untransformed variable and add a caution for the violation of the assumption.
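
For reference, the candidate transformations can be computed directly in SPSS syntax. This is a minimal sketch for a hypothetical variable x; the course scripts may use slightly different formulas, such as adding a constant before the logarithm and inverse to avoid zero values, or reflecting negatively skewed variables first.

* Candidate transformations for a hypothetical variable x.
COMPUTE logx = LG10(x + 1).
COMPUTE sqrtx = SQRT(x).
COMPUTE sqx = x**2.
COMPUTE invx = 1/(x + 1).
EXECUTE.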

Impact of transformations and omitting outliers We evaluate the regression assumptions and detect outliers with a view toward strengthening the relationship. This may not happen: the resulting regression may be the same, weaker, or stronger, and we cannot be certain of the impact until we run the regression again. In the end, we may opt not to exclude outliers and not to employ transformations; the analysis informs us of the consequences of doing either.

Notes Whenever you start a new problem, make sure you have removed any variables created for a previous analysis and have included all cases back in the data set. I have added the square transformation to the checkboxes for transformations in the normality script; since the square is an option for linearity, we need to be able to evaluate its impact on normality. If you change the options for output in pivot tables from labels to names, you will get an error message when you use the linearity script. To solve the problem, change the option for output in pivot tables back to labels.

Problem 1 In the dataset GSS2000.sav, is the following statement true, false, or an incorrect application of a statistic? Assume that there is no problem with missing data. Use a level of significance of 0.01 for the regression analysis. Use a level of significance of 0.01 for evaluating assumptions. The research question requires us to identify the best subset of predictors of "total family income" [income98] from the list: "sex" [sex], "how many in family earned money" [earnrs], and "income" [rincom98]. After substituting transformed variables to satisfy regression assumptions and removing outliers, the total proportion of variance explained by the regression analysis increased by 10.8%.
1. True
2. True with caution
3. False
4. Inappropriate application of a statistic

Dissecting problem 1 - 1 The problem may give us different levels of significance for the analysis. In this problem, we are told to use 0.01 as alpha for the regression analysis as well as for testing assumptions.

Dissecting problem 1 - 2 The method for selecting variables is derived from the research question. In this problem we are asked to identify the best subset of predictors, so we do a stepwise multiple regression.

Dissecting problem 1 - 3 The purpose of testing for assumptions and outliers is to identify a stronger model. The main question to be answered in this problem is whether or not the use of transformed variables to satisfy assumptions and the removal of outliers improve the overall relationship between the independent variables and the dependent variable, as measured by R². Specifically, the question asks whether or not the R² for a regression analysis after substituting transformed variables and eliminating outliers is 10.8% higher than the R² for a regression analysis using the original form of all variables and including all cases.

R² before transformations or removing outliers To start out, we run a stepwise multiple regression analysis with income98 as the dependent variable and sex, earnrs, and rincom98 as the independent variables. We select stepwise as the method to select the best subset of predictors.
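
A syntax equivalent of this baseline run is sketched below, assuming SPSS REGRESSION syntax with the stepwise entry criteria left at their defaults.

REGRESSION
  /DEPENDENT income98
  /METHOD=STEPWISE sex earnrs rincom98.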

R² before transformations or removing outliers Prior to any transformations of variables to satisfy the assumptions of multiple regression or removal of outliers, the proportion of variance in the dependent variable explained by the independent variables (R²) was 51.1%. This is the benchmark that we will use to evaluate the utility of transformations and the elimination of outliers.

R² before transformations or removing outliers For this particular question, we are not interested in the statistical significance of the overall relationship prior to transformations and removing outliers. In fact, it is possible that the relationship is not statistically significant due to variables that are not normal, relationships that are not linear, and the inclusion of outliers.

Normality of the dependent variable: total family income In evaluating assumptions, the first step is to examine the normality of the dependent variable. If it is not normally distributed, or cannot be normalized with a transformation, it can affect the relationships with all other variables. To test the normality of the dependent variable, run the script NormalityAssumptionAndTransformations.SBS. First, move the dependent variable INCOME98 to the list box of variables to test. Second, click on the OK button to produce the output.

Normality of the dependent variable: total family income The dependent variable "total family income" [income98] satisfies the criteria for a normal distribution. The skewness (-0.628) and kurtosis (-0.248) were both between -1.0 and +1.0. No transformation is necessary.
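
The skewness and kurtosis statistics the script reports can also be requested directly. A minimal sketch in SPSS syntax:

DESCRIPTIVES VARIABLES=income98
  /STATISTICS=MEAN STDDEV SKEWNESS KURTOSIS.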

Linearity and independent variable: how many in family earned money To evaluate the linearity of the relationship between number of earners and total family income, run the script for the assumption of linearity, LinearityAssumptionAndTransformations.SBS. First, move the dependent variable INCOME98 to the text box for the dependent variable. Second, move the independent variable, EARNRS, to the list box for independent variables. Third, click on the OK button to produce the output.

Linearity and independent variable: how many in family earned money The independent variable "how many in family earned money" [earnrs] satisfies the criteria for the assumption of linearity with the dependent variable "total family income" [income98], but does not satisfy the assumption of normality. The evidence of linearity in the relationship between the independent variable "how many in family earned money" [earnrs] and the dependent variable "total family income" [income98] was the statistical significance of the correlation coefficient (r = 0.505). The probability for the correlation coefficient was <0.001, less than or equal to the level of significance of 0.01. We reject the null hypothesis that r = 0 and conclude that there is a linear relationship between the variables.
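
The evidence of linearity is simply the significance test of Pearson's r, which can be reproduced directly; the default output of this command includes r, the two-tailed probability, and the number of cases. A sketch in SPSS syntax:

CORRELATIONS
  /VARIABLES=income98 earnrs.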

Normality of independent variable: how many in family earned money After evaluating the dependent variable, we examine the normality of each metric variable and the linearity of its relationship with the dependent variable. To test the normality of the number of earners in the family, run the script NormalityAssumptionAndTransformations.SBS. First, move the independent variable EARNRS to the list box of variables to test. Second, click on the OK button to produce the output.

Normality of independent variable: how many in family earned money The independent variable "how many in family earned money" [earnrs] satisfies the criteria for the assumption of linearity with the dependent variable "total family income" [income98], but does not satisfy the assumption of normality. In evaluating normality, the skewness (0.742) was between -1.0 and +1.0, but the kurtosis (1.324) was outside the range from -1.0 to +1.0.

Normality of independent variable: how many in family earned money The logarithmic transformation improves the normality of "how many in family earned money" [earnrs] without a reduction in the strength of the relationship to "total family income" [income98]. In evaluating normality, the skewness (-0.483) and kurtosis (-0.309) were both within the range of acceptable values from -1.0 to +1.0. The correlation coefficient for the transformed variable is 0.536. The square root transformation also has values of skewness and kurtosis in the acceptable range. However, by our order of preference for which transformation to use, the logarithm is preferred.

Transformation for how many in family earned money The independent variable, how many in family earned money, had a linear relationship to the dependent variable, total family income. The logarithmic transformation improves the normality of "how many in family earned money" [earnrs] without a reduction in the strength of the relationship to "total family income" [income98]. We will substitute the logarithmic transformation of how many in family earned money in the regression analysis.

Normality of independent variable: respondent’s income After evaluating the dependent variable, we examine the normality of each metric variable and the linearity of its relationship with the dependent variable. To test the normality of respondent’s income, run the script NormalityAssumptionAndTransformations.SBS. First, move the independent variable RINCOM98 to the list box of variables to test. Second, click on the OK button to produce the output.

Normality of independent variable: respondent’s income The independent variable "income" [rincom98] satisfies the criteria for both the assumption of normality and the assumption of linearity with the dependent variable "total family income" [income98]. In evaluating normality, the skewness (-0.686) and kurtosis (-0.253) were both within the range of acceptable values from -1.0 to +1.0.

Linearity and independent variable: respondent’s income To evaluate the linearity of the relationship between respondent’s income and total family income, run the script for the assumption of linearity, LinearityAssumptionAndTransformations.SBS. First, move the dependent variable INCOME98 to the text box for the dependent variable. Second, move the independent variable, RINCOM98, to the list box for independent variables. Third, click on the OK button to produce the output.

Linearity and independent variable: respondent’s income The evidence of linearity in the relationship between the independent variable "income" [rincom98] and the dependent variable "total family income" [income98] was the statistical significance of the correlation coefficient (r = 0.577). The probability for the correlation coefficient was <0.001, less than or equal to the level of significance of 0.01. We reject the null hypothesis that r = 0 and conclude that there is a linear relationship between the variables.

Homoscedasticity: sex To evaluate the homoscedasticity of the relationship between sex and total family income, run the script for the assumption of homogeneity of variance, HomoscedasticityAssumptionAndTransformations.SBS. First, move the dependent variable INCOME98 to the text box for the dependent variable. Second, move the independent variable, SEX, to the list box for independent variables. Third, click on the OK button to produce the output.

Homoscedasticity: sex Based on the Levene test, the variance in "total family income" [income98] is homogeneous for the categories of "sex" [sex]. The probability associated with the Levene statistic (0.031) is greater than the level of significance of 0.01, so we fail to reject the null hypothesis and conclude that the homoscedasticity assumption is satisfied.
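
The Levene statistic the script reports can be reproduced with the one-way ANOVA procedure. A minimal sketch in SPSS syntax:

ONEWAY income98 BY sex
  /STATISTICS HOMOGENEITY.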

Adding a transformed variable Since this analysis will substitute the logarithmic transformation of how many in family earned money, we use a script, such as the normality script, to add the transformed variable to the data set. First, move the variable that we want to transform to the list box of variables to test. Second, mark the checkbox for the transformation we want to add to the data set, and clear the other checkboxes. Third, clear the checkbox for Delete transformed variables from the data; this will save the transformed variable. Fourth, click on the OK button to produce the output.

The transformed variable in the data editor If we scroll to the extreme right in the data editor, we see that the transformed variable has been added to the data set. Whenever we add transformed variables to the data set, we should be sure to delete them before starting another analysis.

The regression to identify outliers We use the regression procedure to identify both univariate and multivariate outliers. We start with the same dialog we used for the last analysis, in which income98 was the dependent variable and sex, earnrs, and rincom98 were the independent variables. First, we substitute the logarithmic transformation of earnrs, logearn, into the list of independent variables. Second, we change the method of entry from Stepwise to Enter so that all variables will be included in the detection of outliers. Third, we want to save the calculated values of the outlier statistics to the data set, so click on the Save… button to specify what we want to save.

Saving the measures of outliers First, mark the checkbox for Studentized residuals in the Residuals panel; studentized residuals are z-scores computed for a case based on the data for all other cases in the data set. Second, mark the checkbox for Mahalanobis in the Distances panel; this will compute Mahalanobis distances for the set of independent variables. Third, click on the OK button to complete the specifications.
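
The outlier-detection run, expressed as syntax (a sketch; logearn is the saved logarithmic transformation of earnrs):

REGRESSION
  /DEPENDENT income98
  /METHOD=ENTER sex logearn rincom98
  /SAVE SRESID MAHAL.

SRESID saves the studentized residuals and MAHAL the Mahalanobis distances as new columns in the data set.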

The variables for identifying outliers The values for identifying univariate outliers on the dependent variable are in a column which SPSS has named sre_1. The values for identifying multivariate outliers on the independent variables are in a column which SPSS has named mah_1.

Computing the probability for Mahalanobis D² To compute the probability of D², we will use an SPSS function in a Compute command. First, select the Compute… command from the Transform menu.

Formula for probability for Mahalanobis D² First, in the target variable text box, type the name "p_mah_1" as an acronym for the probability of mah_1, the Mahalanobis D² score. Second, to complete the specifications for the CDF.CHISQ function, type the name of the variable containing the D² scores, mah_1, followed by a comma, followed by the number of variables used in the calculations, 3. Since the CDF function (cumulative distribution function) computes the cumulative probability from the left end of the distribution up through a given value, we subtract it from 1 to obtain the probability in the upper tail of the distribution. Third, click on the OK button to signal completion of the compute variable dialog.
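
The completed dialog is equivalent to the following syntax; the second argument of CDF.CHISQ is the degrees of freedom, which equals the number of independent variables (3):

COMPUTE p_mah_1 = 1 - CDF.CHISQ(mah_1, 3).
EXECUTE.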

Multivariate outliers Using the probabilities computed in p_mah_1 to identify outliers, scroll down through the list of cases to see whether we can find cases with a probability less than 0.001. There are no outliers for the set of independent variables.

Univariate outliers Similarly, we can scroll down the values of sre_1, the studentized residuals, looking for values larger than +3.0 or smaller than -3.0. Based on this criterion, there are 4 cases that have a score on the dependent variable that is sufficiently unusual to be considered an outlier (case 20000357: studentized residual = 3.08; case 20000416: studentized residual = 3.57; case 20001379: studentized residual = 3.27; case 20002702: studentized residual = -3.23).

Omitting the outliers To omit the outliers from the analysis, we select the cases that are not outliers. First, select the Select Cases… command from the Data menu.

Specifying the condition to omit outliers First, mark the If condition is satisfied option button to indicate that we will enter a specific condition for including cases. Second, click on the If… button to specify the criteria for inclusion in the analysis.

The formula for omitting outliers To eliminate the outliers, we request the cases that are not outliers. The formula specifies that we should include cases if the studentized residual (regardless of sign) is less than 3 and the probability for Mahalanobis D² is higher than the level of significance, 0.001. After typing in the formula, click on the Continue button to close the dialog box.
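
The same selection written as syntax (a sketch; filter_$ is the filter variable SPSS creates, and cases missing either statistic drop out of the filter as well):

USE ALL.
COMPUTE filter_$ = (ABS(sre_1) < 3 AND p_mah_1 > 0.001).
FILTER BY filter_$.
EXECUTE.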

Completing the request for the selection To complete the request, we click on the OK button.

The omitted outliers SPSS identifies the excluded cases by drawing a slash mark through the case number. Most of the slashes are for cases with missing data, but the outliers we identified are also among the cases that will be omitted.

Running the regression without outliers We run the regression again, excluding the outliers. Select the Regression | Linear command from the Analyze menu.

Opening the save options dialog We specify the dependent and independent variables, substituting any transformed variables required by assumptions. When we used regression to detect outliers, we entered all variables. Now we are testing the relationship specified in the problem, so we change the method to Stepwise. On our last run, we instructed SPSS to save studentized residuals and Mahalanobis distance. To prevent these values from being calculated again, click on the Save… button.
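
The final run might look like the following in syntax form. This is a sketch: the /CRITERIA values shown are an assumption chosen to match the 0.01 alpha for this problem, whereas the SPSS defaults are PIN(.05) and POUT(.10).

REGRESSION
  /DESCRIPTIVES MEAN STDDEV N
  /CRITERIA=PIN(.01) POUT(.02)
  /DEPENDENT income98
  /METHOD=STEPWISE sex logearn rincom98.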

Clearing the request to save outlier data First, clear the checkbox for Studentized residuals. Second, clear the checkbox for Mahalanobis distance. Third, click on the OK button to complete the specifications.

Opening the statistics options dialog Once we have removed outliers, we need to check the sample size requirement for regression. Since we will need the descriptive statistics for this, click on the Statistics… button.

Requesting descriptive statistics First, mark the checkbox for Descriptives. Second, click on the Continue button to complete the specifications.

Requesting the output Having specified the output needed for the analysis, we click on the OK button to obtain the regression output.

Sample size requirement The minimum ratio of valid cases to independent variables for stepwise multiple regression is 5 to 1. After removing 4 outliers, there are 159 valid cases and 3 independent variables. The ratio of cases to independent variables for this analysis is 53.0 to 1, which satisfies the minimum requirement. In addition, the ratio of 53.0 to 1 satisfies the preferred ratio of 50 to 1.

Significance of regression relationship The probability of the F statistic (84.107) for the regression relationship which includes these variables is <0.001, less than or equal to the level of significance of 0.01. We reject the null hypothesis that there is no relationship between the best subset of independent variables and the dependent variable (R² = 0). We support the research hypothesis that there is a statistically significant relationship between the best subset of independent variables and the dependent variable.

Increase in proportion of variance Prior to any transformations of variables to satisfy the assumptions of multiple regression or removal of outliers, the proportion of variance in the dependent variable explained by the independent variables (R²) was 51.1%. After transformed variables were substituted to satisfy assumptions and outliers were removed from the sample, the proportion of variance explained by the regression analysis was 61.9%, a difference of 10.8%. The answer to the question is true with caution. A caution is added because of the inclusion of ordinal level variables.

Problem 2 In the dataset GSS2000.sav, is the following statement true, false, or an incorrect application of a statistic? Assume that there is no problem with missing data. Use a level of significance of 0.05 for the regression analysis. Use a level of significance of 0.01 for evaluating assumptions. The research question requires us to examine the relationship of "age" [age], "highest year of school completed" [educ], and "sex" [sex] to the dependent variable "occupational prestige score" [prestg80]. After substituting transformed variables to satisfy regression assumptions and removing outliers, the proportion of variance explained by the regression analysis increased by 3.6%.
1. True
2. True with caution
3. False
4. Inappropriate application of a statistic

Dissecting problem 2 - 1 The problem may give us different levels of significance for the analysis. In this problem, we are told to use 0.05 as alpha for the regression analysis and the more conservative 0.01 as the alpha in testing assumptions.

Dissecting problem 2 - 2 The method for selecting variables is derived from the research question. If we are asked to examine a relationship without any statement about control variables or the best subset of variables, we do a standard multiple regression.

Dissecting problem 2 - 3 The purpose of testing for assumptions and outliers is to identify a stronger model. The main question to be answered in this problem is whether or not the use of transformed variables to satisfy assumptions and the removal of outliers improve the overall relationship between the independent variables and the dependent variable, as measured by R². Specifically, the question asks whether or not the R² for a regression analysis after substituting transformed variables and eliminating outliers is 3.6% higher than the R² for a regression analysis using the original form of all variables and including all cases.

R² before transformations or removing outliers To start out, we run a standard multiple regression analysis with prestg80 as the dependent variable and age, educ, and sex as the independent variables.
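
For this problem the baseline is a standard regression, so all three predictors are entered directly. A sketch in SPSS syntax:

REGRESSION
  /DEPENDENT prestg80
  /METHOD=ENTER age educ sex.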

R² before transformations or removing outliers Prior to any transformations of variables to satisfy the assumptions of multiple regression or removal of outliers, the proportion of variance in the dependent variable explained by the independent variables (R²) was 27.1%. This is the benchmark that we will use to evaluate the utility of transformations and the elimination of outliers. For this particular question, we are not interested in the statistical significance of the overall relationship prior to transformations and removing outliers. In fact, it is possible that the relationship is not statistically significant due to variables that are not normal, relationships that are not linear, and the inclusion of outliers.

Normality of the dependent variable In evaluating assumptions, the first step is to examine the normality of the dependent variable. If it is not normally distributed, or cannot be normalized with a transformation, it can affect the relationships with all other variables. To test the normality of the dependent variable, run the script NormalityAssumptionAndTransformations.SBS. First, move the dependent variable PRESTG80 to the list box of variables to test. Second, click on the OK button to produce the output.

Normality of the dependent variable The dependent variable "occupational prestige score" [prestg80] satisfies the criteria for a normal distribution. The skewness (0.401) and kurtosis (-0.630) were both between -1.0 and +1.0. No transformation is necessary.

Normality of independent variable: Age After evaluating the dependent variable, we examine the normality of each metric variable and the linearity of its relationship with the dependent variable. To test the normality of age, run the script NormalityAssumptionAndTransformations.SBS. First, move the independent variable AGE to the list box of variables to test. Second, click on the OK button to produce the output.

Normality of independent variable: Age The independent variable "age" [age] satisfies the criteria for the assumption of normality, but does not satisfy the assumption of linearity with the dependent variable "occupational prestige score" [prestg80]. In evaluating normality, the skewness (0.595) and kurtosis (-0.351) were both within the range of acceptable values from -1.0 to +1.0.

Linearity and independent variable: Age To evaluate the linearity of the relationship between age and occupational prestige, run the script for the assumption of linearity, LinearityAssumptionAndTransformations.SBS. First, move the dependent variable PRESTG80 to the text box for the dependent variable. Second, move the independent variable, AGE, to the list box for independent variables. Third, click on the OK button to produce the output.

Linearity and independent variable: Age The evidence of nonlinearity in the relationship between the independent variable "age" [age] and the dependent variable "occupational prestige score" [prestg80] was the lack of statistical significance of the correlation coefficient (r = 0.024). The probability for the correlation coefficient was 0.706, greater than the level of significance of 0.01. We cannot reject the null hypothesis that r = 0, and cannot conclude that there is a linear relationship between the variables. Since none of the transformations to improve linearity were successful, it is an indication that the problem may be a weak relationship, rather than a curvilinear relationship correctable by using a transformation. A weak relationship is not a violation of the assumption of linearity, and does not require a caution.

Transformation for Age The independent variable age satisfied the criteria for normality. The independent variable age did not have a linear relationship to the dependent variable occupational prestige. However, none of the transformations linearized the relationship. No transformation will be used - it would not help linearity and is not needed for normality.

Linearity and independent variable: Highest year of school completed To evaluate the linearity of the relationship between highest year of school and occupational prestige, run the script for the assumption of linearity, LinearityAssumptionAndTransformations.SBS. First, move the dependent variable PRESTG80 to the text box for the dependent variable. Second, move the independent variable, EDUC, to the list box for independent variables. Third, click on the OK button to produce the output.

Linearity and independent variable: Highest year of school completed The independent variable "highest year of school completed" [educ] satisfies the criteria for the assumption of linearity with the dependent variable "occupational prestige score" [prestg80], but does not satisfy the assumption of normality. The evidence of linearity in the relationship between the independent variable "highest year of school completed" [educ] and the dependent variable "occupational prestige score" [prestg80] was the statistical significance of the correlation coefficient (r = 0.495). The probability for the correlation coefficient was <0.001, less than or equal to the level of significance of 0.01. We reject the null hypothesis that r = 0 and conclude that there is a linear relationship between the variables.

Normality of independent variable: Highest year of school completed To test the normality of EDUC, highest year of school completed, run the script NormalityAssumptionAndTransformations.SBS. First, move the independent variable EDUC to the list box of variables to test. Second, click on the OK button to produce the output.

Normality of independent variable: Highest year of school completed In evaluating normality, the skewness (-0.137) was between -1.0 and +1.0, but the kurtosis (1.246) was outside the range from -1.0 to +1.0. None of the transformations for normalizing the distribution of "highest year of school completed" [educ] were effective.

Transformation for highest year of school The independent variable, highest year of school, had a linear relationship to the dependent variable, occupational prestige. The independent variable, highest year of school, did not satisfy the criteria for normality. None of the transformations for normalizing the distribution of "highest year of school completed" [educ] were effective. No transformation will be used - it would not help normality and is not needed for linearity. A caution should be added to any findings.

Homoscedasticity: sex To evaluate the homoscedasticity of the relationship between sex and occupational prestige, run the script for the assumption of homogeneity of variance, HomoscedasticityAssumptionAndTransformations.SBS. First, move the dependent variable PRESTG80 to the text box for the dependent variable. Second, move the independent variable, SEX, to the list box for independent variables. Third, click on the OK button to produce the output.

Homoscedasticity: sex Based on the Levene Test, the variance in "occupational prestige score" [prestg80] is homogeneous for the categories of "sex" [sex]. The probability associated with the Levene Statistic (0.808) is greater than the level of significance, so we fail to reject the null hypothesis and conclude that the homoscedasticity assumption is satisfied. Even if we violate the assumption, we would not do a transformation since it could impact the relationships of the other independent variables with the dependent variable.

Adding a transformed variable Even though we do not need a transformation for any of the variables in this analysis, we will demonstrate how to use a script, such as the normality script, to add a transformed variable to the data set, e.g. a logarithmic transformation for highest year of school. First, move the variable that we want to transform to the list box of variables to test. Second, mark the checkbox for the transformation we want to add to the data set, and clear the other checkboxes. Third, clear the checkbox for Delete transformed variables from the data. This will save the transformed variable. Fourth, click on the OK button to produce the output.

The transformed variable in the data editor If we scroll to the extreme right in the data editor, we see that the transformed variable has been added to the data set. Whenever we add transformed variables to the data set, we should be sure to delete them before starting another analysis.

The regression to identify outliers We can use the regression procedure to identify both univariate and multivariate outliers. We start with the same dialog we used for the last analysis, in which prestg80 was the dependent variable and age, educ, and sex were the independent variables. If we needed to use any transformed variables, we would substitute them now. We will save the calculated values of the outlier statistics to the data set. Click on the Save… button to specify what we want to save.

Saving the measures of outliers First, mark the checkbox for Studentized residuals in the Residuals panel; studentized residuals are z-scores computed for a case based on the data for all other cases in the data set. Second, mark the checkbox for Mahalanobis in the Distances panel; this will compute Mahalanobis distances for the set of independent variables. Third, click on the OK button to complete the specifications.

The variables for identifying outliers The values for identifying univariate outliers on the dependent variable are in a column which SPSS has named sre_1. The values for identifying multivariate outliers on the independent variables are in a column which SPSS has named mah_1.

Computing the probability for Mahalanobis D² To compute the probability of D², we will use an SPSS function in a Compute command. First, select the Compute… command from the Transform menu.

Formula for probability for Mahalanobis D² First, in the target variable text box, type the name "p_mah_1" as an acronym for the probability of mah_1, the Mahalanobis D² score. Second, to complete the specifications for the CDF.CHISQ function, type the name of the variable containing the D² scores, mah_1, followed by a comma, followed by the number of variables used in the calculations, 3. Since the CDF function (cumulative distribution function) computes the cumulative probability from the left end of the distribution up through a given value, we subtract it from 1 to obtain the probability in the upper tail of the distribution. Third, click on the OK button to signal completion of the compute variable dialog.

The multivariate outlier Using the probabilities computed in p_mah_1 to identify outliers, scroll down through the list of cases to see the one case with a probability less than 0.001. There is 1 case that has a combination of scores on the independent variables that is sufficiently unusual to be considered an outlier (case 20001984: Mahalanobis D² = 16.97, p = 0.0007).

The univariate outlier Similarly, we can scroll down the values of sre_1, the studentized residuals, to see the one outlier with a value larger than 3.0. There is 1 case that has a score on the dependent variable that is sufficiently unusual to be considered an outlier (case 20000391: studentized residual = 4.14).

Omitting the outliers To omit the outliers from the analysis, we select the cases that are not outliers. First, select the Select Cases… command from the Data menu.

Specifying the condition to omit outliers First, mark the If condition is satisfied option button to indicate that we will enter a specific condition for including cases. Second, click on the If… button to specify the criteria for inclusion in the analysis.

The formula for omitting outliers To eliminate the outliers, we request the cases that are not outliers. The formula specifies that we should include cases if the studentized residual (regardless of sign) is less than 3 and the probability for Mahalanobis D² is higher than the level of significance, 0.001. After typing in the formula, click on the Continue button to close the dialog box.

Completing the request for the selection To complete the request, we click on the OK button.

The omitted multivariate outlier SPSS identifies the excluded cases by drawing a slash mark through the case number. Most of the slashes are for cases with missing data, but we also see that the case with the low probability for Mahalanobis distance is included in those that will be omitted.

Running the regression without outliers We run the regression again, excluding the outliers. Select the Regression | Linear command from the Analyze menu.

Opening the save options dialog We specify the dependent and independent variables. If we wanted to use any transformed variables, we would substitute them now. On our last run, we instructed SPSS to save studentized residuals and Mahalanobis distances. To prevent these values from being calculated again, click on the Save… button.

Clearing the request to save outlier data First, clear the checkbox for Studentized residuals. Second, clear the checkbox for Mahalanobis distance. Third, click on the OK button to complete the specifications.

Opening the statistics options dialog Once we have removed outliers, we need to check the sample size requirement for regression. Since we will need the descriptive statistics for this, click on the Statistics… button.

Requesting descriptive statistics First, mark the checkbox for Descriptives. Second, click on the Continue button to complete the specifications.

Requesting the output Having specified the output needed for the analysis, we click on the OK button to obtain the regression output.

Sample size requirement The minimum ratio of valid cases to independent variables for multiple regression is 5 to 1. After removing 2 outliers, there are 252 valid cases and 3 independent variables. The ratio of cases to independent variables for this analysis is 84.0 to 1, which satisfies the minimum requirement. In addition, the ratio of 84.0 to 1 satisfies the preferred ratio of 15 to 1.

Significance of regression relationship The probability of the F statistic (36.639) for the overall regression relationship is <0.001, less than or equal to the level of significance of 0.05. We reject the null hypothesis that there is no relationship between the set of independent variables and the dependent variable (R² = 0). We support the research hypothesis that there is a statistically significant relationship between the set of independent variables and the dependent variable.

Increase in proportion of variance Prior to any transformations of variables to satisfy the assumptions of multiple regression or removal of outliers, the proportion of variance in the dependent variable explained by the independent variables (R²) was 27.1%. No transformed variables were substituted to satisfy assumptions, but outliers were removed from the sample. The proportion of variance explained by the regression analysis after removing outliers was 30.7%, a difference of 3.6%. The answer to the question is true with caution. A caution is added because of a violation of regression assumptions.

Impact of assumptions and outliers - 1 The following is a guide to the decision process for answering problems about the impact of assumptions and outliers on an analysis:
- Is the dependent variable metric and are the independent variables metric or dichotomous? If no, the answer is an inappropriate application of a statistic.
- Is the ratio of cases to independent variables at least 5 to 1? If no, the answer is an inappropriate application of a statistic.
- If yes to both, run the baseline regression and record its R² for future reference, using the method for including variables identified in the research question.

Impact of assumptions and outliers - 2
- Is the dependent variable normally distributed? If not, try: 1. logarithmic transformation, 2. square root transformation, 3. inverse transformation. If unsuccessful, add a caution.
- Are the metric independent variables normally distributed and linearly related to the dependent variable? If not, try: 1. logarithmic transformation, 2. square root transformation, (3. square transformation, for linearity), 4. inverse transformation. If unsuccessful, add a caution.
- Is the dependent variable homoscedastic for the categories of the dichotomous independent variables? If not, add a caution.

Impact of assumptions and outliers - 3
- Substituting any transformed variables, run the regression using direct entry to include all variables, and request the statistics for detecting outliers.
- Are there univariate outliers (on the dependent variable) or multivariate outliers (on the independent variables)? If yes, remove the outliers from the data.
- Is the ratio of cases to independent variables still at least 5 to 1? If no, the answer is an inappropriate application of a statistic.
- If yes, run the regression again using the transformed variables and eliminating the outliers.

Impact of assumptions and outliers - 4
- Is the probability of the ANOVA test of the regression less than or equal to the level of significance? If no, the answer is false.
- Is the stated increase in R² correct? If no, the answer is false.
- Does the sample satisfy the preferred ratio of cases to independent variables, 15 to 1 (50 to 1 for stepwise)? If no, the answer is true with caution.

Impact of assumptions and outliers - 5
- Were other cautions added for ordinal variables or violations of assumptions? If yes, the answer is true with caution; if no, the answer is true.