Multiple Linear Regression


1 Multiple Linear Regression
Review, January, 2018

2 Squared Correlation Coefficients

3 Squared Semipartial Correlation
The proportion of all the variance in Y that is associated with one predictor but not with any of the other predictors. Equivalently, the decrease in R2 that results from removing that predictor from the model.
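The "decrease in R2" definition above can be checked directly. Below is a minimal Python sketch (numpy assumed, data simulated, all variable names invented); it also verifies the equivalent correlation-based description that appears on slide 4:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)          # predictors are correlated
y = 1.0 * x1 + 0.7 * x2 + rng.normal(size=n)

def r_squared(y, predictors):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ b).var() / y.var()

r2_full = r_squared(y, [x1, x2])
sr2_x1 = r2_full - r_squared(y, [x2])       # drop in R^2 when X1 is removed

# Same quantity as a correlation: ALL of Y with the part of X1
# that is unrelated to X2.
x1_resid = x1 - np.polyval(np.polyfit(x2, x1, 1), x2)
sr_x1 = np.corrcoef(y, x1_resid)[0, 1]      # sr_x1**2 equals sr2_x1
```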

4 sri Predict X1 from X2 and take the residual: sri is the simple correlation between ALL of Y and that part of X1 that is not related to any of the other predictors.

5 Squared Partial Correlation
Of the variance in Y that is not associated with any of the other predictors, what proportion is associated with Xi?

6 sr2 Related to pr2 pri2 = sri2 / (1 − R2 of the model without Xi), so sri2 can never exceed pri2.
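The standard relation between the two squared coefficients, pri2 = sri2 / (1 − R2 of the model without Xi), can be checked numerically. A minimal sketch on simulated data (numpy assumed, names invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x2 = rng.normal(size=n)
x1 = 0.4 * x2 + rng.normal(size=n)
y = x1 + 0.5 * x2 + rng.normal(size=n)

def r2(y, cols):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(n)] + cols)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ b).var() / y.var()

r2_full = r2(y, [x1, x2])
r2_reduced = r2(y, [x2])        # model without X1
sr2 = r2_full - r2_reduced      # squared semipartial for X1
pr2 = sr2 / (1 - r2_reduced)    # squared partial for X1
```

Because 1 − R2 of the reduced model is less than 1, pr2 is always at least as large as sr2.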

7 pri Predict Y from X2; predict X1 from X2; correlate the two sets of residuals.
pri is the r between Y partialled for all other predictors and Xi partialled for all other predictors.
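The recipe above (residualize both Y and X1 on X2, then correlate the residuals) can be sketched in a few lines of Python; a simulated illustration with numpy assumed and all names invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x2 = rng.normal(size=n)
x1 = 0.6 * x2 + rng.normal(size=n)
y = 0.8 * x1 + 0.5 * x2 + rng.normal(size=n)

def residualize(v, on):
    """Remove from v the part predictable from `on` (simple OLS with intercept)."""
    slope, intercept = np.polyfit(on, v, 1)
    return v - (slope * on + intercept)

# Partial correlation of X1 with Y, holding X2 constant
pr1 = np.corrcoef(residualize(y, x2), residualize(x1, x2))[0, 1]
```

The same value falls out of the textbook formula pr1 = (ry1 − ry2 r12) / sqrt((1 − ry22)(1 − r122)).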

8 A Demonstration Partial.sas – run this SAS program to obtain an illustration of the partial nature of the coefficients obtained in a multiple regression analysis.

9 More Details Multiple R2 and Partial Correlation/Regression Coefficients

10 Relative Weights Analysis
Partial regression coefficients exclude variance that is shared among predictors, so it is possible to have a large R2 even though none of the predictors has a substantial partial coefficient. There are now methods by which one can partition R2 into pseudo-orthogonal portions, each portion representing the relative contribution of one predictor variable.
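One common way to do this partitioning is Johnson's (2000) relative weights method. The sketch below is a simplified illustration on simulated data (numpy assumed), not the program behind the table on the next slide: the predictors are mapped to their closest orthogonal counterparts via the symmetric square root of the predictor correlation matrix, and R2 is then split into nonnegative per-predictor portions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 1000, 3
X = rng.normal(size=(n, p))
X[:, 1] += 0.5 * X[:, 0]                        # make predictors correlated
y = X @ np.array([0.6, 0.3, 0.2]) + rng.normal(size=n)

R = np.corrcoef(X, rowvar=False)                # predictor intercorrelations
rxy = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])

vals, vecs = np.linalg.eigh(R)
lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T    # R^(1/2): loadings of X on orthogonal Z
beta_star = np.linalg.solve(lam, rxy)           # betas of Y on the orthogonal Z
raw = (lam ** 2) @ (beta_star ** 2)             # raw relative weights, sum to R^2
r2 = rxy @ np.linalg.solve(R, rxy)              # multiple R^2
rescaled = raw / r2                             # rescaled weights, sum to 1
```

The raw weights are sums of squared terms, so each portion is nonnegative even when the predictors are highly redundant.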

11 Proportions of Variance
Predictor    r2      sr2     Raw Relative Weight    Rescaled
Teach        .646*   .183*   .344*                  .456
Knowledge    .465*   .071*   .238*                  .316
Exam         .355*   .004    .124*                  .164
Grade        .090*   .007    .027                   .035
Enroll       .057    .010    .022                   .029

12 If the predictors were orthogonal, the sum of r2 would be equal to R2, and
the values of r2 would be identical to the values of sr2. The sum of the sr2 here is .275, and R2 = .755, so .755 − .275 = .480, i.e., 48% of the variance in Overall is excluded from the squared semipartials due to redundancy.

13 Notice That The sum of the raw relative weights = .755 = the value of R2. The sum of the rescaled relative weights is 100%. The sr2 for Exam is not significant, but its raw relative weight is significant.

14 Predictors Independent of Each Other
[Venn diagram: X1 overlaps Y in region a, X2 overlaps Y in region c; X1 and X2 do not overlap each other; the remaining region of Y, b, is error.]

15 Redundancy For each X, sri and βi will be smaller than ryi, and the sum of the squared semipartial r's (a + c) will be less than the multiple R2 (a + b + c).

16 Classical Suppression
ry1 = .38, ry2 = 0, r12 = .45. The sign of β and sr for the classical suppressor variable will be opposite that of its only non-zero zero-order correlation, r12. Notice also that for both predictor variables the absolute value of β exceeds that of the predictor's r with Y.
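Both claims on this slide can be verified with the standardized-beta formulas for a two-predictor model; a small Python sketch using the slide's values:

```python
def betas(ry1, ry2, r12):
    """Standardized regression coefficients for a two-predictor model."""
    d = 1 - r12 ** 2
    return (ry1 - ry2 * r12) / d, (ry2 - ry1 * r12) / d

# Slide 16's classical-suppression example: ry1 = .38, ry2 = 0, r12 = .45
b1, b2 = betas(0.38, 0.0, 0.45)
# b1 exceeds ry1 = .38, and b2 is negative even though ry2 = 0
```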

17 Net Suppression ry1 = .65, ry2 = .25, and r12 = .70.
Note that β2 has a sign opposite that of ry2. It is always the X which has the smaller ryi that ends up with a β of opposite sign. Each β falls outside of the range from 0 to ryi, which is always true with any sort of suppression.

18 Net Suppression If X1 and X2 were independent,
R2 would simply be ry12 + ry22 = .4225 + .0625 = .485. With r12 = .70 it is actually about .505, larger than .485: the signature of suppression.
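The comparison between the obtained R2 and the R2 that independent predictors would give can be sketched from the two-predictor formula, using slide 17's values:

```python
def r2_two_predictors(ry1, ry2, r12):
    """Multiple R^2 for a two-predictor model from the zero-order correlations."""
    return (ry1 ** 2 + ry2 ** 2 - 2 * ry1 * ry2 * r12) / (1 - r12 ** 2)

ry1, ry2, r12 = 0.65, 0.25, 0.70      # slide 17's net-suppression example
actual = r2_two_predictors(ry1, ry2, r12)
if_independent = ry1 ** 2 + ry2 ** 2  # what R^2 would be with r12 = 0
```

Suppression shows up as the actual R2 exceeding the sum of the squared zero-order correlations.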

19 Reversal Paradox Aka, Simpson’s Paradox
Treating severity of fire as the covariate: when we control for severity of fire, the more fire fighters we send, the less the amount of damage suffered in the fire. That is, for the conditional distributions (where severity of fire is held constant at some set value), sending more fire fighters reduces the amount of damage.

20 Cooperative Suppression
Two X’s correlate negatively with one another but positively with Y (or positively with one another and negatively with Y). Each predictor suppresses variance in the other that is irrelevant to Y. Both predictors’ β, pr, and sr increase in absolute magnitude (and retain the same sign as ryi).

21 Cooperative Suppression
Y = how much the students in an introductory psychology class will learn. Subjects are graduate teaching assistants. X1 is a measure of the graduate student’s level of mastery of general psychology. X2 is an SOIS rating of how well the teacher presents simple, easy-to-understand explanations.

22 Cooperative Suppression
ry1 = .30, ry2 = .25, and r12 = −.35 (negative, as cooperative suppression requires). If X1 and X2 were independent, R2 would be ry12 + ry22 = .1525; here it is larger, and each β exceeds its ryi.
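The cooperative pattern, both betas growing in absolute value while keeping the sign of their ryi, can be checked with the same two-predictor formulas; a sketch with this slide's values, taking r12 as −.35 per the definition on slide 20:

```python
def betas(ry1, ry2, r12):
    """Standardized regression coefficients for a two-predictor model."""
    d = 1 - r12 ** 2
    return (ry1 - ry2 * r12) / d, (ry2 - ry1 * r12) / d

# Slide 22's cooperative-suppression example
b1, b2 = betas(0.30, 0.25, -0.35)
# both betas stay positive and exceed their zero-order correlations
```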

23 Summary When i falls outside the range of 0  ryi, suppression is taking place If one ryi is zero or close to zero, it is classic suppression, and the sign of the  for the X with a nearly zero ryi may be opposite the sign of ryi.

24 Summary When neither X has ryi close to zero, but one has a β opposite in sign from its ryi and the other has a β greater in absolute magnitude but of the same sign as its ryi, net suppression is taking place. If both X’s have |βi| > |ryi|, with each βi of the same sign as its ryi, then cooperative suppression is taking place.
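The diagnostic rule running through both summary slides (suppression whenever a β escapes the interval between 0 and its ryi) can be expressed as a small function; a sketch for the two-predictor case, applied to the three examples from the earlier slides:

```python
def betas(ry1, ry2, r12):
    """Standardized regression coefficients for a two-predictor model."""
    d = 1 - r12 ** 2
    return (ry1 - ry2 * r12) / d, (ry2 - ry1 * r12) / d

def suppression(ry1, ry2, r12):
    """True if either beta falls outside the range from 0 to its ryi."""
    b1, b2 = betas(ry1, ry2, r12)
    b1_inside = (0 <= b1 <= ry1) or (ry1 <= b1 <= 0)
    b2_inside = (0 <= b2 <= ry2) or (ry2 <= b2 <= 0)
    return not (b1_inside and b2_inside)
```

All three examples (classical, net, cooperative) trigger the rule, while an ordinary redundant pair does not.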

25 Psychologist Investigating Suppressor Effects in a Five Predictor Model

