3-variable Regression: Derive OLS Estimators of 3-variable Regression


1 3-variable Regression
Derive OLS estimators of 3-variable regression
Properties of 3-variable OLS estimators

2 Derive OLS estimators of multiple regression
Y = 0 + 1X1 + 2X  ^  = Y - 0 - 1X1 - 2X2 OLS is to minimize the SSR( 2) ^ min. RSS = min.  2 = min. (Y - 0 - 1X1 - 2X2)2 RSS 0 =2  ( Y - 0- 1X1 - 2X2)(-1) = 0 1 =2  ( Y - 0- 1X1 - 2X2)(-X1) = 0 2 =2  ( Y - 0- 1X1 - 2X2)(-X2) = 0

3 Rearranging the three first-order conditions yields the normal equations:

$$n\hat\beta_0 + \hat\beta_1 \sum X_1 + \hat\beta_2 \sum X_2 = \sum Y$$
$$\hat\beta_0 \sum X_1 + \hat\beta_1 \sum X_1^2 + \hat\beta_2 \sum X_1 X_2 = \sum X_1 Y$$
$$\hat\beta_0 \sum X_2 + \hat\beta_1 \sum X_1 X_2 + \hat\beta_2 \sum X_2^2 = \sum X_2 Y$$

Rewriting in matrix form:

$$\begin{bmatrix} n & \sum X_1 & \sum X_2 \\ \sum X_1 & \sum X_1^2 & \sum X_1 X_2 \\ \sum X_2 & \sum X_1 X_2 & \sum X_2^2 \end{bmatrix} \begin{bmatrix} \hat\beta_0 \\ \hat\beta_1 \\ \hat\beta_2 \end{bmatrix} = \begin{bmatrix} \sum Y \\ \sum X_1 Y \\ \sum X_2 Y \end{bmatrix}$$

In matrix notation, $(X'X)\hat\beta = X'Y$; the same form covers both the 2-variable and the 3-variable case.
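The normal equations can be assembled exactly as written, from the raw summations, and solved as a 3x3 linear system. A sketch on simulated data (names and coefficients are illustrative):

```python
# Build (X'X) and X'Y from the slide's summations and solve for beta-hat.
import numpy as np

rng = np.random.default_rng(1)
n = 100
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + rng.normal(size=n)

XtX = np.array([
    [n,         X1.sum(),      X2.sum()],
    [X1.sum(),  (X1**2).sum(), (X1*X2).sum()],
    [X2.sum(),  (X1*X2).sum(), (X2**2).sum()],
])
XtY = np.array([Y.sum(), (X1*Y).sum(), (X2*Y).sum()])

beta_hat = np.linalg.solve(XtX, XtY)   # solves (X'X) beta = X'Y
print(beta_hat)                        # [b0, b1, b2]
```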

4 Cramer's rule: each coefficient is a ratio of two determinants. The denominator is the determinant of the $X'X$ matrix above; the numerator replaces the column of the coefficient being solved for with the right-hand-side vector. In deviation form (lowercase letters denote deviations from sample means, e.g. $x_1 = X_1 - \bar X_1$), this simplifies to:

$$\hat\beta_1 = \frac{(\sum y x_1)(\sum x_2^2) - (\sum y x_2)(\sum x_1 x_2)}{(\sum x_1^2)(\sum x_2^2) - (\sum x_1 x_2)^2}$$

$$\hat\beta_2 = \frac{(\sum y x_2)(\sum x_1^2) - (\sum y x_1)(\sum x_1 x_2)}{(\sum x_1^2)(\sum x_2^2) - (\sum x_1 x_2)^2}$$

$$\hat\beta_0 = \bar Y - \hat\beta_1 \bar X_1 - \hat\beta_2 \bar X_2$$
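The deviation-form formulas can be checked directly against a general-purpose least-squares solver. A sketch with simulated data (variable names and coefficients are illustrative):

```python
# Compute beta-hat from the deviation-form (Cramer's rule) formulas and
# compare with numpy's least-squares solver.
import numpy as np

rng = np.random.default_rng(2)
n = 100
X1 = rng.normal(size=n)
X2 = 0.3 * X1 + rng.normal(size=n)   # mildly correlated regressors
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + rng.normal(size=n)

# deviations from sample means
y  = Y  - Y.mean()
x1 = X1 - X1.mean()
x2 = X2 - X2.mean()

den = (x1**2).sum() * (x2**2).sum() - (x1*x2).sum()**2
b1 = ((y*x1).sum() * (x2**2).sum() - (y*x2).sum() * (x1*x2).sum()) / den
b2 = ((y*x2).sum() * (x1**2).sum() - (y*x1).sum() * (x1*x2).sum()) / den
b0 = Y.mean() - b1 * X1.mean() - b2 * X2.mean()

X = np.column_stack([np.ones(n), X1, X2])
print(np.array([b0, b1, b2]))
print(np.linalg.lstsq(X, Y, rcond=None)[0])   # should match
```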

5 Or in matrix form:

$$(X'X)\,\hat\beta = X'Y \qquad (3\times3)(3\times1) = (3\times1)$$

$$\Longrightarrow \hat\beta = (X'X)^{-1}(X'Y)$$

The variance-covariance matrix of the estimators is

$$\operatorname{Var\text{-}cov}(\hat\beta) = \sigma^2 (X'X)^{-1}, \qquad \hat\sigma^2 = \frac{\sum \hat u^2}{n-3}$$

Written out:

$$\operatorname{Var\text{-}cov}(\hat\beta) = \begin{bmatrix} \operatorname{Var}(\hat\beta_0) & \operatorname{Cov}(\hat\beta_0,\hat\beta_1) & \operatorname{Cov}(\hat\beta_0,\hat\beta_2) \\ \operatorname{Cov}(\hat\beta_1,\hat\beta_0) & \operatorname{Var}(\hat\beta_1) & \operatorname{Cov}(\hat\beta_1,\hat\beta_2) \\ \operatorname{Cov}(\hat\beta_2,\hat\beta_0) & \operatorname{Cov}(\hat\beta_2,\hat\beta_1) & \operatorname{Var}(\hat\beta_2) \end{bmatrix} = \sigma^2 (X'X)^{-1}$$

6 n X X2 X X X1X2 X X2X X22 = 2 ^ -1 2 = ^ u2 n-3 and = 2 n- k -1 k=2 # of independent variables ( not including the constant term)

7 Properties of multiple OLS estimators
1. The regression line (surface) passes through the means of $Y$, $X_1$, $X_2$ (regression through the means; the model is linear in parameters):
$$\bar Y = \hat\beta_0 + \hat\beta_1 \bar X_1 + \hat\beta_2 \bar X_2 \ \Longrightarrow\ \hat\beta_0 = \bar Y - \hat\beta_1 \bar X_1 - \hat\beta_2 \bar X_2$$
2. The fitted values satisfy $\hat Y = \bar Y + \hat\beta_1 x_1 + \hat\beta_2 x_2$, or in deviation form $\hat y = \hat\beta_1 x_1 + \hat\beta_2 x_2$; the estimators are unbiased, $E(\hat\beta_i) = \beta_i$.
3. $\sum \hat u = 0$: the residuals have zero mean, mirroring the zero-mean-of-error assumption.
4. $\sum \hat u X_1 = \sum \hat u X_2 = 0$: the residuals are uncorrelated with each regressor $X_k$ (under the classical assumptions of a random sample and constant error variance, $\operatorname{Var}(u) = \sigma^2$).
5. $\sum \hat u \hat Y = 0$: the residuals are uncorrelated with the fitted values.
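Properties 1, 3, 4, and 5 are algebraic identities of the OLS fit and are easy to confirm numerically. A minimal check on simulated data (the checks follow the list above):

```python
# Numerical check of the residual identities above (simulated data):
# zero mean, orthogonality to each regressor, orthogonality to fitted values.
import numpy as np

rng = np.random.default_rng(4)
n = 100
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), X1, X2])
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
Y_fit = X @ beta_hat
u_hat = Y - Y_fit

print(u_hat.sum())              # ~ 0: residuals have zero mean
print(u_hat @ X1, u_hat @ X2)   # ~ 0: orthogonal to each regressor
print(u_hat @ Y_fit)            # ~ 0: orthogonal to fitted values
print(Y_fit.mean() - Y.mean())  # ~ 0: fitted values preserve the mean of Y
```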

8 Properties of multiple OLS estimators
6. As $X_1$ and $X_2$ become closely (linearly) related, $\operatorname{var}(\hat\beta_1)$ and $\operatorname{var}(\hat\beta_2)$ grow large, becoming infinite under exact collinearity, so the true values of $\beta_1$ and $\beta_2$ are difficult to pin down. All the normality assumptions of the two-variable regression also apply to multiple regression, but with one additional assumption: no exact linear relationship among the independent variables (no perfect collinearity, i.e., $X_k \neq \lambda X_j$).
7. The greater the variation in the sample values of $X_1$ or $X_2$, the smaller the variances of $\hat\beta_1$ and $\hat\beta_2$, and the more precise the estimates.
8. The OLS estimators are BLUE, i.e., best linear unbiased estimators (Gauss-Markov theorem).
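Property 6 can be seen by simulation: as $X_2$ is made progressively more collinear with $X_1$, the standard error of $\hat\beta_1$ inflates. A sketch where the mixing parameter rho and all data are illustrative assumptions:

```python
# Illustrate variance inflation under near-collinearity (simulated data).
import numpy as np

rng = np.random.default_rng(5)
n = 200
X1 = rng.normal(size=n)

for rho in (0.0, 0.9, 0.99, 0.999):
    # X2 mixes X1 with independent noise; rho controls the collinearity
    X2 = rho * X1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
    Y = 1.0 + 2.0 * X1 - 0.5 * X2 + rng.normal(size=n)

    X = np.column_stack([np.ones(n), X1, X2])
    beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
    u_hat = Y - X @ beta_hat
    sigma2_hat = (u_hat @ u_hat) / (n - 3)
    se_beta1 = np.sqrt(sigma2_hat * np.linalg.inv(X.T @ X)[1, 1])
    print(f"rho = {rho}: se(beta1) = {se_beta1:.3f}")   # grows with rho
```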

9 The adjusted $R^2$ ($\bar R^2$) as an indicator of overall fit

$$R^2 = \frac{ESS}{TSS} = 1 - \frac{RSS}{TSS} = 1 - \frac{\sum \hat u^2}{\sum y^2}$$

$$\bar R^2 = 1 - \frac{\sum \hat u^2 / (n-k)}{\sum y^2 / (n-1)}$$

where k is the # of independent variables plus the constant term, and n is the # of observations. Equivalently, $\bar R^2 = 1 - \hat\sigma^2 / S_Y^2$, and counting k as the number of slope coefficients only:

$$\bar R^2 = 1 - \frac{\sum \hat u^2 / (n-k-1)}{\sum y^2 / (n-1)} = 1 - (1 - R^2)\,\frac{n-1}{n-k-1}$$

Hence $\bar R^2 \le R^2$. While $0 \le R^2 \le 1$, the adjusted $\bar R^2$ can be negative.

Note: Don't misuse the adjusted $R^2$; read Studenmund (2001), pp. 53-55.
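A sketch computing both statistics from the formulas above and confirming the shortcut relation between them (simulated data, illustrative coefficients):

```python
# Compute R^2 and adjusted R^2, then verify
# R2_adj = 1 - (1 - R2) * (n - 1) / (n - k - 1).
import numpy as np

rng = np.random.default_rng(6)
n, k = 100, 2                    # k = number of slope coefficients
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), X1, X2])
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
u_hat = Y - X @ beta_hat

rss = u_hat @ u_hat
tss = ((Y - Y.mean())**2).sum()

r2 = 1 - rss / tss
r2_adj = 1 - (rss / (n - k - 1)) / (tss / (n - 1))

print(r2, r2_adj)                              # r2_adj <= r2
print(1 - (1 - r2) * (n - 1) / (n - k - 1))    # same value as r2_adj
```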

10 The meaning of partial regression coefficients
Y = 0 + 1X1 + 2X2 +  (suppose this is a true model) Y X1 = 1 : 1 measures the change in the mean values of Y, per unit change in X1, holding X2 constant. or The ‘direct’ effect of a unit change in X1 on the mean value of Y, net of X2 Y X2 = 2 holding X1 constant, the direct effect of a unit change in X2 on the mean value of Y. Holding constant: To assess the true contribution of X1 to the change in Y, we control the influence of X2.

11 Y Y  ^ C X1 X2 Y = 0 + 1 X1 + 2 X2 +  TSS n-1

12 Suppose $X_3$ is not an explanatory variable but is included in the regression.

13 Partial effect: holding other variables constant

An illustration: Y = unemployment rate (%), $X_1$ = actual inflation rate (%), $X_2$ = expected inflation rate (%).

From the simple regression $Y = \hat\beta_0 + \hat\beta_1 X_1 + \hat u_1$: $\dfrac{\Delta Y}{\Delta X_1} = \hat\beta_1$, the direct effect from $X_1$.

From the simple regression $Y = \hat\beta_0 + \hat\beta_2 X_2 + \hat u_2$: $\dfrac{\Delta Y}{\Delta X_2} = \hat\beta_2$, the direct effect from $X_2$.

The regressors are themselves related: $X_2 = b_{20} + b_{21} X_1 + v_1$ and $X_1 = b_{10} + b_{12} X_2 + v_2$, so $\dfrac{\Delta X_2}{\Delta X_1} = b_{21}$, the channel for the indirect effect through $X_2$.

14 Total effect from $X_1$:

$$\hat\beta_1 + \hat\beta_2\, b_{21} = (\ \ ) + (\ \ )(\ \ ) = \text{'direct'} + \text{'indirect'} = \frac{\Delta Y}{\Delta X_1} = \hat\beta_1'$$

where $\hat\beta_1'$ is the slope of the simple regression $Y = \hat\beta_0' + \hat\beta_1' X_1 + \varepsilon$. This slope implicitly reflects the hidden true model, which includes $X_2$.
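The identity on this slide (simple-regression slope = direct + indirect effect) holds exactly in-sample and can be confirmed numerically; the symmetric check with $\hat\beta_2' = \hat\beta_2 + \hat\beta_1 b_{12}$ covers slide 17. A sketch with simulated data (all names and values are illustrative):

```python
# Verify: slope of Y on X1 alone = beta1 + beta2 * b21, where b21 is the
# slope from regressing X2 on X1. Simulated data.
import numpy as np

rng = np.random.default_rng(8)
n = 500
X1 = rng.normal(size=n)
X2 = 0.8 * X1 + rng.normal(size=n)           # X2 related to X1 (b21 ~ 0.8)
Y = 1.0 + 2.0 * X1 - 0.5 * X2 + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x (with a constant term)."""
    x_dev = x - x.mean()
    return (x_dev @ y) / (x_dev @ x_dev)

# multiple regression: direct effects beta1, beta2
X = np.column_stack([np.ones(n), X1, X2])
_, b1, b2 = np.linalg.solve(X.T @ X, X.T @ Y)

b21 = slope(X1, X2)          # auxiliary regression of X2 on X1
b1_prime = slope(X1, Y)      # simple regression of Y on X1

print(b1_prime, b1 + b2 * b21)   # identical up to rounding
```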

15 [Figure: diagram of the simple regression $Y = \beta_0 + \beta_1' X_1 + \varepsilon'$ with constant C; the error term $\varepsilon'$ includes $X_2$.]

16 [Figure: diagram of the auxiliary regression $X_2 = b_{20} + b_{21} X_1 + \varepsilon''$ with constant C.]

17 Total effect from $X_2$:

$$\hat\beta_2 + \hat\beta_1\, b_{12} = (\ \ ) + (\ \ )(\ \ ) = \text{'direct'} + \text{'indirect'} = \frac{\Delta Y}{\Delta X_2} = \hat\beta_2'$$

where $\hat\beta_2'$ is the slope of the simple regression $Y = \hat\beta_0' + \hat\beta_2' X_2 + \varepsilon$. This slope implicitly reflects the hidden true model, which includes $X_1$.

18 [Figure: diagram of the auxiliary regression $X_1 = b_{10} + b_{12} X_2 + \varepsilon'''$ with constant C.]

19 [Figure: diagram of the simple regression $Y = \beta_0 + \beta_2' X_2 + \varepsilon''''$ with constant C; the error term $\varepsilon''''$ includes $X_1$.]

