
Slide 1: G89.2229 Lect 4M
Interpreting multiple regression weights: suppression and spuriousness
Partial and semi-partial correlations
Multiple regression in matrix terms
G89.2229 Multiple Regression, Week 4 (Monday)

Slide 2: Suppression
Sometimes the semipartial effect for $X_1$ (i.e., $b_1$) in $Y = b_0 + b_1 X_1 + b_2 X_2 + e$ is larger in absolute magnitude than the bivariate effect in $Y = b_0 + b_1 X_1 + e$. This has been called suppression.
Example:
»$X_1$ is stress
»$Y$ is distress
»$X_2$ is coping
The classic pattern arises when one of the three correlations is negative.
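A minimal numpy sketch of suppression under an assumed structural model (the coefficients 0.6, 1.0, and -0.8 are illustrative, not from the lecture): coping rises with stress but reduces distress, so controlling for coping enlarges the stress coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed structural model: coping rises with stress, coping reduces distress
stress = rng.normal(size=n)
coping = 0.6 * stress + rng.normal(size=n)                    # r(stress, coping) > 0
distress = 1.0 * stress - 0.8 * coping + rng.normal(size=n)   # negative coping path

# Bivariate slope of distress on stress
b_bivariate = np.polyfit(stress, distress, 1)[0]              # about 0.52

# Multiple-regression (semipartial) slope of stress, controlling coping
X = np.column_stack([np.ones(n), stress, coping])
b_multiple = np.linalg.lstsq(X, distress, rcond=None)[0][1]   # about 1.0

print(b_bivariate, b_multiple)  # |b1| grows once coping enters the model
```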

Slide 3: Spurious effect
Consider a path model that resembles the mediation model. Suppose there is a bivariate association between X and Y, but when W is considered, the semipartial effect b is zero.
The original association is often said to be "spurious": it is explained by the common cause, W.
[Path diagram: W has paths a to X and c to Y; b is the direct X → Y path; $e_x$ and $e_y$ are the residuals.]
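A companion sketch of spuriousness, again with assumed illustrative coefficients: X and Y share the common cause W and have no direct path (b = 0 by construction), yet the bivariate slope looks clearly nonzero until W is controlled.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed common-cause model: b = 0 by construction
W = rng.normal(size=n)
X = 0.7 * W + rng.normal(size=n)   # path a
Y = 0.7 * W + rng.normal(size=n)   # path c; no direct X -> Y path

b_bivariate = np.polyfit(X, Y, 1)[0]                     # about 0.33: looks real
D = np.column_stack([np.ones(n), X, W])
b_semipartial = np.linalg.lstsq(D, Y, rcond=None)[0][1]  # about 0: spurious

print(b_bivariate, b_semipartial)
```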

Slide 4: Interpreting Fitted Variance in Multiple Regression
Let $V(Y) = a + b + c + e = 1$.
Squared multiple correlation: $R^2_{Y.12} = a + b + c$
The squared correlations are $r^2_{Y1} = a + c$ and $r^2_{Y2} = b + c$
The squared semipartial correlations are $sr^2_{Y1.2} = a$ and $sr^2_{Y2.1} = b$
The squared partial correlations are $pr^2_{Y1.2} = a/(a + e)$ and $pr^2_{Y2.1} = b/(b + e)$
[Venn diagram: regions a (unique to $X_1$), b (unique to $X_2$), c (shared by $X_1$ and $X_2$), and e (residual) partition $V(Y)$.]
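These identities can be checked numerically. A sketch with simulated data (the generating model is an assumption for illustration), using $sr^2_{Y1.2} = R^2_{Y.12} - r^2_{Y2}$ and $pr^2_{Y1.2} = sr^2_{Y1.2}/(1 - r^2_{Y2})$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Illustrative data with correlated predictors (assumed generating model)
X1 = rng.normal(size=n)
X2 = 0.5 * X1 + rng.normal(size=n)
Y = 0.4 * X1 + 0.3 * X2 + rng.normal(size=n)

def r2(design, y):
    """Squared multiple correlation from an OLS fit of y on the design matrix."""
    b, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ b
    return 1 - resid.var() / y.var()

ones = np.ones(n)
R2_full = r2(np.column_stack([ones, X1, X2]), Y)  # a + b + c
r2_2 = r2(np.column_stack([ones, X2]), Y)         # b + c
sr2_1 = R2_full - r2_2                            # a: X1's unique share
pr2_1 = sr2_1 / (1 - r2_2)                        # a / (a + e)

print(R2_full, sr2_1, pr2_1)
```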

Slide 5: Regression with many explanatory (X) variables
Example:
»Okazaki (1997) surveyed UCLA students and found that Asian-American students had higher levels of depression than Anglo-American students
»She built a regression model with multiple explanatory variables:
  Gender (in case sample selection produced a spurious effect)
  Independent and interdependent self-construal (a mediation hypothesis)
  Fear of negative evaluation, and social avoidance (to see if the depression effect was specific or generalized)
»The equation has five X variables

Slide 6: Multiple regression equations get complicated
With five predictors:
$Y = b_0 + b_1 X_1 + b_2 X_2 + b_3 X_3 + b_4 X_4 + b_5 X_5 + e$
or:
$Y = b_0 + \sum_{j=1}^{5} b_j X_j + e$
While the prediction equation is fairly easy to write, it is much more complicated to write equations for:
»Estimates of the b coefficients
»Estimates of the standard errors of the b coefficients

Slide 7: Matrix notation actually simplifies equations
Instead of the scalar equation $Y = b_0 + b_1 X_1 + \ldots + b_5 X_5 + e$,
consider $Y = \mathbf{X}'\mathbf{B} + e$.
»$\mathbf{X}'$ is called a row vector
»$\mathbf{B}$ is a column vector
»The same notation is used regardless of the number of Xs

Slide 8: Math tools: Vectors and Matrices
It is often convenient to use lists of numbers for each person (or each variable).
»Lists are called vectors
»Lists of vectors are arrays called matrices
E.g., the list of predictor variables for person i is the row vector $\mathbf{x}_i' = [1\ X_{i1}\ X_{i2}\ \ldots\ X_{ip}]$; the $\mathbf{X}$ matrix stacks all n such rows, as sketched below.
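A small numpy sketch of these objects (the numeric values are hypothetical):

```python
import numpy as np

# The predictor list for person 1, with a leading 1 for the intercept
x_1 = np.array([1.0, 3.2, 0.5])     # hypothetical values for p = 2 variables

# The X matrix stacks one such row per person (n = 4 here)
X = np.array([[1.0, 3.2, 0.5],
              [1.0, 2.7, 1.1],
              [1.0, 4.0, 0.2],
              [1.0, 3.5, 0.9]])

print(X.shape)                      # (4, 3): n rows, p + 1 columns
print(np.array_equal(X[0], x_1))    # True: the first row is person 1's vector
```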

Slide 9: Vector definition & operations
Definition: a vector is an ordered list of numbers: $\mathbf{a}^T = [a_1\ a_2\ \ldots\ a_p]$
Transpose
»If $\mathbf{a}$ is a vector with p elements in a column, then $\mathbf{a}^T$ is a vector with the same elements arranged in a row.
Vector addition
»If $\mathbf{a}$ and $\mathbf{b}$ are two vectors with p elements, then $\mathbf{a} + \mathbf{b}$ is a new vector whose elements are the respective element sums:
»$[\mathbf{a}+\mathbf{b}]_i = a_i + b_i$
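In numpy terms (with illustrative values), a sketch of transpose and addition:

```python
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])   # a column vector (3 x 1)
print(a.T)                            # its transpose: a row vector (1 x 3)

b = np.array([[4.0], [5.0], [6.0]])
print(a + b)                          # respective element sums: [[5], [7], [9]]
```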

Slide 10: Vector Operations, Continued
Vector multiplication
»If $\mathbf{a}$ and $\mathbf{b}$ are two vectors with p elements, then $\mathbf{a}^T\mathbf{b} = \sum a_i b_i = a_1 b_1 + a_2 b_2 + \ldots + a_p b_p$
Example
»$\mathbf{a}^T = [0\ 0\ {-1}\ 1]$
»$\mathbf{b} = [b_1\ b_2\ b_3\ b_4]^T$
»Then $\mathbf{a}^T\mathbf{b} = b_4 - b_3$
This is an example of a contrast vector.
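A quick check of the contrast-vector example with made-up values for b:

```python
import numpy as np

# The contrast vector from the slide picks out a difference between elements
a = np.array([0.0, 0.0, -1.0, 1.0])
b = np.array([2.0, 5.0, 1.0, 4.0])   # arbitrary illustrative values

print(a @ b)          # 3.0
print(b[3] - b[2])    # the same: b4 - b3 (0-based indices 3 and 2)
```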

Slide 11: Matrix operations
Matrix definitions
»A matrix can be viewed as a collection of vectors
»E.g., a data matrix is made up of n rows of p variables
»The transpose of a matrix makes rows columns and vice versa
Matrix addition
»$[\mathbf{A}+\mathbf{B}]_{ij} = a_{ij} + b_{ij}$
Matrix subtraction
»$[\mathbf{A}-\mathbf{B}]_{ij} = a_{ij} - b_{ij}$

Slide 12: Matrix Multiplication
An identity matrix, $\mathbf{I}$, is a square matrix with ones on the diagonal and zeros off the diagonal.
»$\mathbf{A}\mathbf{I} = \mathbf{A}$
If a matrix $\mathbf{A}$ is square and full rank (nonsingular), then its inverse $\mathbf{A}^{-1}$ exists such that
»$\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$
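A sketch verifying both facts on a small nonsingular matrix (the values are arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # square and nonsingular (determinant = 5)
I = np.eye(2)

print(np.allclose(A @ I, A))                   # A * I = A
print(np.allclose(A @ np.linalg.inv(A), I))    # A * A^{-1} = I
```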

Slide 13: Matrix Inverse
In general, the elements of an inverse are not intuitive; as the number of rows and columns grows, they become more complex.
For a 2x2 matrix $\mathbf{A}$:
$\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad \mathbf{A}^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$
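A sketch checking the 2x2 formula against numpy's general-purpose inverse (the values of a, b, c, d are arbitrary):

```python
import numpy as np

a, b, c, d = 3.0, 1.0, 2.0, 4.0
A = np.array([[a, b],
              [c, d]])

det = a * d - b * c                        # must be nonzero for A^{-1} to exist
A_inv = (1.0 / det) * np.array([[ d, -b],
                                [-c,  a]])

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
print(A @ A_inv)                             # the identity, up to rounding
```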

Slide 14: Some facts about matrix multiplication
In general, $\mathbf{A}\mathbf{B} \neq \mathbf{B}\mathbf{A}$
»The commutative principle does not hold
When $\mathbf{A}$ and $\mathbf{B}$ are square and full rank:
»$(\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1}$
The distributive principle holds:
»$\mathbf{A}(\mathbf{B}+\mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C}$
A matrix $\mathbf{A}$ can be multiplied by a single number, called a scalar, which sets the unit of the new matrix:
»$k\mathbf{A} = [k\,a_{ij}]$
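A sketch of the first two facts on random 2x2 matrices (with continuous random entries, A and B are full rank with probability 1):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2))

print(np.allclose(A @ B, B @ A))   # False (almost surely): not commutative

# The inverse of a product reverses the order of the factors
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))       # True
```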

Slide 15: Regression equations in Matrix Terms
Basic regression equation
»For a randomly chosen observation: $Y = \mathbf{x}^T\mathbf{B} + e$
»For a sample of n subjects: $\mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{e}$, so $\mathbf{e} = \mathbf{Y} - \mathbf{X}\mathbf{B}$
Written out for two predictors and n = 3:
$\begin{bmatrix} Y_1 \\ Y_2 \\ Y_3 \end{bmatrix} = \begin{bmatrix} 1 & X_{11} & X_{12} \\ 1 & X_{21} & X_{22} \\ 1 & X_{31} & X_{32} \end{bmatrix} \begin{bmatrix} B_0 \\ B_1 \\ B_2 \end{bmatrix} + \begin{bmatrix} e_1 \\ e_2 \\ e_3 \end{bmatrix}$

Slide 16: Least Squares Estimates of B
The OLS estimates of $\mathbf{B}$ make $\mathbf{e}^T\mathbf{e}$ as small as possible.
»This happens when the geometric representation of $\mathbf{e}$ is shortest.
»$\mathbf{e}$ will be shortest when it is orthogonal to the predictors, $\mathbf{X}$:
$\mathbf{X}^T\mathbf{e} = \mathbf{X}^T(\mathbf{Y} - \mathbf{X}\mathbf{B}) = \mathbf{X}^T\mathbf{Y} - \mathbf{X}^T\mathbf{X}\mathbf{B} = \mathbf{0}$
so $\mathbf{X}^T\mathbf{Y} = \mathbf{X}^T\mathbf{X}\mathbf{B}$ (the normal equations).
When $(\mathbf{X}^T\mathbf{X})^{-1}$ exists:
$\hat{\mathbf{B}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}$
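A closing sketch that fits the two-predictor model from slide 15 by the normal equations (simulated data; the true coefficients 2.0, 0.5, and -0.3 are assumptions for illustration) and confirms the orthogonality argument:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000

# Simulated two-predictor data
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 2.0 + 0.5 * X1 - 0.3 * X2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), X1, X2])

# B-hat = (X'X)^{-1} X'Y, computed via solve() for numerical stability
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(B_hat)                  # close to [2.0, 0.5, -0.3]

# The residuals are orthogonal to the predictors, as the slide argues
e = Y - X @ B_hat
print(np.round(X.T @ e, 8))   # essentially [0, 0, 0]
```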

