
Multiple Linear Regression: An introduction, some assumptions, and then model reduction


2 Multiple Linear Regression: An introduction, some assumptions, and then model reduction

3 First, what is multiple linear regression?
• First, some terminology… these three equations all say the same thing:
  Y′ = a + bX
  Y′ = mX + b
  Y′ = β₀ + β₁X
• β₀, β₁, and so on are called beta coefficients
• a = b = β₀ = INTERCEPT
• b = m = β₁ = SLOPE
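As a quick illustration (not part of the original slides), numpy's polyfit recovers exactly these two quantities for a simple regression; the data values here are invented:

```python
import numpy as np

# Invented example data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Degree-1 fit: coefficients come back highest power first, so slope then intercept
slope, intercept = np.polyfit(x, y, 1)
print(intercept, slope)  # beta_0 (intercept) and beta_1 (slope) in the slide's notation
```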

4 First, what is multiple linear regression?
• Simple linear regression uses just one predictor or independent variable
• Multiple linear regression just adds more IVs (or predictors)
• Each IV or predictor brings another beta coefficient with it:
  Y′ = β₀ + β₁X₁ + β₂X₂
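A minimal sketch of fitting such a two-predictor model with statsmodels (the variable names and data are invented for illustration, anticipating the height-and-sex example that follows):

```python
import numpy as np
import statsmodels.api as sm

# Invented data: height (cm) and sex (1 = female, 2 = male) predicting weight (kg)
height = np.array([160, 163, 168, 171, 175, 180, 184, 188])
sex    = np.array([1,   1,   1,   1,   2,   2,   2,   2])
weight = np.array([54,  56,  61,  63,  74,  78,  82,  86])

X = sm.add_constant(np.column_stack([height, sex]))  # prepend the intercept column
model = sm.OLS(weight, X).fit()

print(model.params)    # beta_0, beta_1 (height), beta_2 (sex)
print(model.rsquared)  # R² for the fitted model
```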

5 Now, an example…
• So, now we can add the sex variable to our prediction equation from last week
• Here is the one with just height in the model… note R² = .65 for the simple model

6 Now, an example…
• But if we add sex…
• The slope of each line is the same, but it now fits both values of sex by adjusting the height of the line
• R² = .99, a nice improvement in R²!

7 Now, an example…
• In terms of the equation, this is achieved by…
  When sex = 1 (female): Y′ = β₀ + β₁X₁ + (β₂ · 1)
  When sex = 2 (male):  Y′ = β₀ + β₁X₁ + (β₂ · 2)

8 Now, an example…
• This is called “dummy coding” when the second variable is dichotomous (as sex is); a small coded sketch follows below
• The principle is similar when the second variable is continuous
• Adding more variables simply captures more variance on the dependent variable (potentially, of course)
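To make the dummy coding concrete, here is a sketch using invented coefficient values (b0, b1, b2 are stand-ins, not estimates from the slides): the fitted model draws two parallel lines, one per sex code, separated by exactly β₂:

```python
# Invented coefficients for illustration only
b0, b1, b2 = -40.0, 0.6, 5.0

def predict(height_cm, sex_code):
    """Dummy-coded model: Y' = b0 + b1*height + b2*sex."""
    return b0 + b1 * height_cm + b2 * sex_code

# Same slope in height; the sex term only shifts the line up or down
print(predict(170, 1))  # female line at height 170 -> 67.0
print(predict(170, 2))  # male line, exactly b2 higher -> 72.0
```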

9 Note on graphs/charts for MLR
• I showed you the example in 2D, but with multiple regression an accurate chart is only possible in the number of dimensions equal to the total number of variables in the model (dependent plus independent)
  Y′ = β₀ + β₁X₁ + β₂X₂, so three dimensions would be needed here

10 [3D chart: the regression surface Y′ = β₀ + β₁X₁ + β₂X₂ plotted over axes X₁, X₂, and Y′, with example points Y′ = .5 at (x₁ = 0, x₂ = 0), Y′ = 1 at (1, 0), Y′ = 1.5 at (0, 1), and Y′ = 2 at (1, 1)]
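Such a surface can be drawn with matplotlib's 3D toolkit; the coefficients below are the ones implied by the slide's example points (β₀ = .5, β₁ = .5, β₂ = 1):

```python
import numpy as np
import matplotlib.pyplot as plt

# Coefficients implied by the slide's example points
b0, b1, b2 = 0.5, 0.5, 1.0

x1, x2 = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
y = b0 + b1 * x1 + b2 * x2  # the regression surface

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(x1, x2, y, alpha=0.5)
ax.set_xlabel("X1")
ax.set_ylabel("X2")
ax.set_zlabel("Y'")
plt.show()
```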

11 Assumptions of MLR
• Four assumptions of MLR (known by the acronym “LINE”)
• Linearity: the residuals (differences between the obtained and predicted DV scores) should have a straight-line relationship with predicted DV scores
• Independence: the observations on the DV are uncorrelated with each other
• Normality: the observations on the DV are normally distributed for each combination of values for the IVs
• Equality of variance: the variance of the residuals about predicted DV scores should be the same for all predicted scores (homoscedasticity… remember the cone-shaped pattern?)
• We will not test MLR assumptions in this class (enough that you do them for SLR), but a diagnostic sketch follows below for reference
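Although the slides leave assumption checks out of scope, the usual residuals-versus-predicted check is quick to plot; this sketch refits the invented height/sex example from earlier:

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Refit the invented height/sex model from the earlier sketch
height = np.array([160, 163, 168, 171, 175, 180, 184, 188])
sex    = np.array([1, 1, 1, 1, 2, 2, 2, 2])
weight = np.array([54, 56, 61, 63, 74, 78, 82, 86])
model = sm.OLS(weight, sm.add_constant(np.column_stack([height, sex]))).fit()

plt.scatter(model.fittedvalues, model.resid)
plt.axhline(0, linestyle="--")
plt.xlabel("Predicted DV scores")
plt.ylabel("Residuals")
# Linearity and equal variance look plausible if points scatter evenly around
# zero, with no curve and no cone-shaped fan-out.
plt.show()
```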

12 Items to consider - 1
• Sample size & number of predictors:
• A crucial aspect of the worth of a prediction equation is whether it will generalize to other samples
• With multiple regression (based on multiple correlation), minimizing the prediction errors of the regression line amounts to maximizing the correlation for that sample
• So one would expect that on another sample, the correlation (and thus R²) would shrink

13 Items to consider - 1
• Sample size & number of predictors:
• Our problem is reducing the risk of shrinkage
• The two most important factors:
  - Sample size (n)
  - Number of predictors (independent variables) (k)
• Expect big shrinkage with n:k ratios less than 5:1
• Guttman (1941): 136 subjects, 84 predictors, obtained multiple r = .73; on a new independent sample, r = .04!
• Stevens (1986): n:k should be 15:1 or greater in social science research
• Tabachnick & Fidell (1996): n > 50 + 8k (both rules are encoded in the sketch below)
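These rules of thumb are easy to encode (a small illustration, not from the slides):

```python
def sample_size_ok(n, k):
    """Check the sample-size rules of thumb for n cases and k predictors."""
    stevens    = n >= 15 * k      # Stevens (1986): n:k at least 15:1
    tabachnick = n > 50 + 8 * k   # Tabachnick & Fidell (1996): n > 50 + 8k
    return stevens, tabachnick

print(sample_size_ok(120, 4))   # (True, True)
print(sample_size_ok(136, 84))  # (False, False) -- the Guttman example fails badly
```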

14 Items to consider - 1
• Sample size & number of predictors:
• What to do if you violate these rules:
• Report adjusted R² in addition to R² when your sample size is too small, or close to it… small samples (and/or too many predictors) tend to result in overestimating r (and consequently R²); the standard adjustment is sketched below
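The slides do not give the formula, but the standard adjusted R² for n cases and k predictors is R²_adj = 1 − (1 − R²)(n − 1)/(n − k − 1); this is also what statsmodels reports as `rsquared_adj`:

```python
def adjusted_r2(r2, n, k):
    """Standard adjusted R² for n cases and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# The smaller the sample, the larger the downward adjustment:
print(adjusted_r2(0.65, 20, 2))   # ~0.61
print(adjusted_r2(0.65, 200, 2))  # ~0.646, barely changed
```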

15 Items to consider - 2
• List data – check for errors, outliers, influential data points
• MLR is very sensitive to outliers, but outliers and influential points are not necessarily the same thing
• Need to sort out whether an outlier is influential
• Check for outliers in the initial data screening process (scatterplot), or via Cook’s distance (see regression options in SPSS)
• Cook’s distance: a measure of the change in regression coefficients that would occur if the case were omitted… it reveals the cases most influential for the regression equation
• A Cook’s distance of 1 is generally considered large (a Python equivalent of the SPSS option is sketched below)
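The slides point to SPSS; in statsmodels the same diagnostic is available from a fitted result. This sketch again refits the invented height/sex example:

```python
import numpy as np
import statsmodels.api as sm

# Refit the invented height/sex model from the earlier sketch
height = np.array([160, 163, 168, 171, 175, 180, 184, 188])
sex    = np.array([1, 1, 1, 1, 2, 2, 2, 2])
weight = np.array([54, 56, 61, 63, 74, 78, 82, 86])
model = sm.OLS(weight, sm.add_constant(np.column_stack([height, sex]))).fit()

cooks_d, p_values = model.get_influence().cooks_distance

# Flag cases whose Cook's distance exceeds the rule-of-thumb cutoff of 1
for i, d in enumerate(cooks_d):
    if d > 1:
        print(f"Case {i} is highly influential (Cook's distance = {d:.2f})")
```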

16 Items to consider - 2
• Outliers
• What to do with outliers, when found, is a highly controversial topic:
  - Leave them in
  - Delete them
  - Transform them

