
1 The Multiple Regression Model

2 Two Explanatory Variables
yt = 1 + 2xt2 + 3xt3 + εt xt affect yt separately yt xt2 = 2 xt3 yt = 3 But least squares estimation of 2 now depends upon both xt2 and xt3 .

3 Correlated Variables
yt = β1 + β2xt2 + β3xt3 + εt
yt = output, xt2 = capital, xt3 = labor. Suppose there are always 5 workers per machine. If the number of workers per machine is never varied, it becomes impossible to tell whether the machines or the workers are responsible for changes in output.

4 The General Model
yt = β1 + β2xt2 + β3xt3 + . . . + βKxtK + εt
The parameter β1 is the intercept (constant) term. The variable attached to β1 is xt1 = 1. Usually the number of explanatory variables is said to be K − 1 (ignoring xt1 = 1), while the number of parameters is K (namely β1, . . . , βK).

5 Statistical Properties of εt
1. E(εt) = 0
2. var(εt) = σ²
3. cov(εt, εs) = 0 for t ≠ s
4. εt ~ N(0, σ²)

6 Statistical Properties of yt
1. E (yt) = 1 + 2xt KxtK 2. var(yt) = var(εt) = 2 cov(yt ,ys) = cov(εt , εs)= 0 t  s 4. yt ~ N(1+2xt KxtK, 2)

7 Assumptions
1. yt = β1 + β2xt2 + . . . + βKxtK + εt
2. E(yt) = β1 + β2xt2 + . . . + βKxtK
3. var(yt) = var(εt) = σ²
4. cov(yt, ys) = cov(εt, εs) = 0 for t ≠ s
5. The values of xtk are not random.
6. yt ~ N(β1 + β2xt2 + . . . + βKxtK, σ²)

8 Least Squares Estimation
yt = 1 + 2xt2 + 3xt3 + εt T S  S(1, 2, 3) = yt12xt23xt3 t=1 yt = yt  y * Define: xt2 = xt2  x2 * xt3 = xt3  x3 *

9 Least Squares Estimators
b1 = ȳ − b2x̄2 − b3x̄3
b2 = [Σyt*xt2* · Σxt3*² − Σyt*xt3* · Σxt2*xt3*] / [Σxt2*² · Σxt3*² − (Σxt2*xt3*)²]
b3 = [Σyt*xt3* · Σxt2*² − Σyt*xt2* · Σxt2*xt3*] / [Σxt2*² · Σxt3*² − (Σxt2*xt3*)²]
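A sketch of these deviation-form formulas on simulated data (the numbers are illustrative); it also checks that they agree with a generic least squares solve on the full design matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 52
x2 = rng.uniform(4, 7, T)
x3 = rng.uniform(0.5, 3, T)
y = 10 - 2 * x2 + 3 * x3 + rng.normal(0, 2, T)

# Deviations from the sample means, as defined on the previous slide.
ys, x2s, x3s = y - y.mean(), x2 - x2.mean(), x3 - x3.mean()

den = (x2s**2).sum() * (x3s**2).sum() - ((x2s * x3s).sum())**2
b2 = ((ys * x2s).sum() * (x3s**2).sum() - (ys * x3s).sum() * (x2s * x3s).sum()) / den
b3 = ((ys * x3s).sum() * (x2s**2).sum() - (ys * x2s).sum() * (x2s * x3s).sum()) / den
b1 = y.mean() - b2 * x2.mean() - b3 * x3.mean()

# The same estimates come out of a generic least squares solve on [1, x2, x3].
X = np.column_stack([np.ones(T), x2, x3])
print(np.allclose([b1, b2, b3], np.linalg.lstsq(X, y, rcond=None)[0]))  # True
```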

10 Dangers of Extrapolation
Statistical models are generally good only within the relevant range. Extending them to extreme values outside the range of the original data often leads to poor and sometimes ridiculous results. If height were truly normally distributed, with the normal ranging from minus infinity to plus infinity, pity the man who is minus three feet tall.

11 Interpretation of Coefficients
bj is an estimate of the mean change in y in response to a one-unit change in xj when all other independent variables are held constant; hence bj is called a partial (regression) coefficient. Note that regression analysis cannot be interpreted as a procedure for establishing a cause-and-effect relationship between variables.

12 Universal Set
[Venn diagram: the universal set with regions x3 only (x3 \ x2), x2 only (x2 \ x3), and the overlap x2 ∩ x3.]

13 Error Variance Estimation
Unbiased estimator of the error variance:
σ̂² = Σε̂t² / (T − K)
Transform to a chi-square distribution:
(T − K)σ̂² / σ² ~ χ²(T−K)
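A short sketch of the estimator on simulated data (true σ = 2 by construction, so σ̂² should land near 4).

```python
import numpy as np

rng = np.random.default_rng(2)
T, K = 52, 3
X = np.column_stack([np.ones(T), rng.uniform(4, 7, T), rng.uniform(0.5, 3, T)])
y = X @ np.array([10.0, -2.0, 3.0]) + rng.normal(0, 2, T)   # true sigma = 2, so sigma^2 = 4

b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b                          # least squares residuals, the estimated eps_t
sigma2_hat = (resid ** 2).sum() / (T - K)  # unbiased estimator of sigma^2
print(sigma2_hat)                          # should land near 4
# (T - K) * sigma2_hat / sigma^2 follows a chi-square distribution with T - K degrees of freedom.
```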

14 Gauss-Markov Theorem Under the first five assumptions of the multiple regression model, the ordinary least squares estimators have the smallest variance of all linear and unbiased estimators. This means that the least squares estimators are the Best Linear Unbiased Estimators (BLUE).

15 Variances
yt = β1 + β2xt2 + β3xt3 + εt
var(b2) = σ² / [(1 − r23²) Σ(xt2 − x̄2)²]
var(b3) = σ² / [(1 − r23²) Σ(xt3 − x̄3)²]
where r23 = Σ(xt2 − x̄2)(xt3 − x̄3) / √[Σ(xt2 − x̄2)² Σ(xt3 − x̄3)²]
When r23 = 0 these reduce to the simple regression formulas.

16 Variance Decomposition
The variance of an estimator is smaller when:
1. The error variance, σ², is smaller.
2. The sample size, T, is larger (the sum Σ(xt2 − x̄2)² over t = 1, . . . , T grows with T).
3. The variable values are more spread out (larger Σ(xt2 − x̄2)²).
4. The correlation r23 is closer to zero (smaller r23²).

17 Covariances
yt = β1 + β2xt2 + β3xt3 + εt
cov(b2, b3) = −r23 σ² / [(1 − r23²) √(Σ(xt2 − x̄2)² Σ(xt3 − x̄3)²)]
where r23 = Σ(xt2 − x̄2)(xt3 − x̄3) / √[Σ(xt2 − x̄2)² Σ(xt3 − x̄3)²]

18 Covariance Decomposition
The covariance between any two estimators is larger in absolute value when:
1. The error variance, σ², is larger.
2. The sample size, T, is smaller.
3. The values of the variables are less spread out.
4. The correlation, r23, is high.

19 Var-Cov Matrix
yt = β1 + β2xt2 + β3xt3 + εt
The least squares estimators b1, b2, and b3 have covariance matrix:
cov(b1, b2, b3) =
[ var(b1)      cov(b1,b2)   cov(b1,b3) ]
[ cov(b1,b2)   var(b2)      cov(b2,b3) ]
[ cov(b1,b3)   cov(b2,b3)   var(b3)    ]
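A standard, equivalent way to compute this whole matrix at once is σ̂²(X′X)⁻¹. The sketch below (simulated data, illustrative values) builds it and checks the slide-15 element-wise formula for var(b2) against the corresponding diagonal entry.

```python
import numpy as np

rng = np.random.default_rng(3)
T, K = 52, 3
x2, x3 = rng.uniform(4, 7, T), rng.uniform(0.5, 3, T)
X = np.column_stack([np.ones(T), x2, x3])
y = X @ np.array([10.0, -2.0, 3.0]) + rng.normal(0, 2, T)

b = np.linalg.lstsq(X, y, rcond=None)[0]
sigma2_hat = ((y - X @ b) ** 2).sum() / (T - K)

# Estimated variance-covariance matrix of (b1, b2, b3).
cov_b = sigma2_hat * np.linalg.inv(X.T @ X)
print(np.sqrt(np.diag(cov_b)))                  # standard errors se(b1), se(b2), se(b3)

# The element-wise formula for var(b2) from the "Variances" slide gives the same (2,2) entry.
r23 = np.corrcoef(x2, x3)[0, 1]
var_b2 = sigma2_hat / ((1 - r23**2) * ((x2 - x2.mean())**2).sum())
print(np.isclose(var_b2, cov_b[1, 1]))          # True
```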

20 Normal yt = 1 + 2x2t + 3x3t +. . .+ KxKt + εt
yt ~N (1 + 2x2t + 3x3t KxKt), 2 εt ~ N(0, 2) This implies and is implied by: Since bk is a linear function of the yt: bk ~ N k, var(bk) z = ~ N(0,1) for k = 1,2,...,K bk  k var(bk)

21 Student-t
Since the population variance of bk, var(bk), is generally unknown, we estimate it by replacing σ² with σ̂²; the square root of this estimated variance is the standard error se(bk).
t = (bk − βk) / se(bk)
t has a Student-t distribution with df = (T − K).

22 Interval Estimation
P(−tc < (bk − βk)/se(bk) < tc) = 1 − α
where tc is the critical value for (T − K) degrees of freedom such that P(t ≥ tc) = α/2.
P(bk − tc se(bk) < βk < bk + tc se(bk)) = 1 − α
Interval endpoints: [bk − tc se(bk), bk + tc se(bk)]

23 Student-t Test
yt = β1 + β2Xt2 + β3Xt3 + β4Xt4 + εt
Student-t tests can be used to test any linear combination of the regression coefficients:
H0: β1 = 0
H0: β2 + β3 + β4 = 1
H0: 3β2 − 7β3 = 21
H0: β2 − β3 ≤ 5
Every such t-test has exactly T − K degrees of freedom, where K = the number of coefficients estimated (including the intercept).

24 One Tail Test
yt = β1 + β2Xt2 + β3Xt3 + β4Xt4 + εt
H0: β3 ≤ 0    H1: β3 > 0
t = b3 / se(b3) ~ t(T−K),  df = T − K = T − 4
Reject H0 if t falls in the right tail beyond the critical value tc.

25 Two Tail Test
yt = β1 + β2Xt2 + β3Xt3 + β4Xt4 + εt
H0: β2 = 0    H1: β2 ≠ 0
t = b2 / se(b2) ~ t(T−K),  df = T − K = T − 4
Reject H0 if |t| > tc (rejection regions in both tails, beyond −tc and tc).

26 (yt y)2 R2 = = 0 < R2 < 1 Goodness - of - Fit ^ SSR SST
Coefficient of Determination SST R2 = = (yt y)2 t = 1 T ^ SSR 0 < R2 < 1

27 Adjusted R-Squared
Adjusted Coefficient of Determination
Original: R² = SSR/SST = 1 − SSE/SST
Adjusted: R̄² = 1 − [SSE/(T − K)] / [SST/(T − 1)]
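A sketch of both measures on simulated data; with an intercept in the model, SSR/SST and 1 − SSE/SST coincide.

```python
import numpy as np

rng = np.random.default_rng(6)
T, K = 52, 3
X = np.column_stack([np.ones(T), rng.uniform(4, 7, T), rng.uniform(0.5, 3, T)])
y = X @ np.array([10.0, -2.0, 3.0]) + rng.normal(0, 2, T)

b = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ b
SST = ((y - y.mean()) ** 2).sum()
SSR = ((y_hat - y.mean()) ** 2).sum()
SSE = ((y - y_hat) ** 2).sum()

R2 = SSR / SST                                    # same as 1 - SSE/SST when the model has an intercept
R2_adj = 1 - (SSE / (T - K)) / (SST / (T - 1))    # adjusted for degrees of freedom
print(R2, 1 - SSE / SST, R2_adj)
```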

28 Computer Output
t = b2 / se(b2) = −6.642 / 3.191 = −2.081
Table: Summary of Least Squares Results
Variable       Coefficient   Std Error   t-value   p-value
constant          104.79       6.48       16.17       –
price             −6.642       3.191      −2.081      –
advertising        2.984       0.167      17.868      –

29 Reporting Your Results
Reporting standard errors:
ŷt = 104.79 − 6.642 Xt2 + 2.984 Xt3
        (6.48)   (3.191)    (0.167)    (s.e.)
Reporting t-statistics:
ŷt = 104.79 − 6.642 Xt2 + 2.984 Xt3
       (16.17)  (−2.081)   (17.868)    (t)

30 yt = 1 + 2Xt2 + 3Xt3 + 4Xt4 + εt H0: 2 = 0 H1: 2 = 0 H0: yt =  3Xt3 + 4Xt4 + εt H1: yt = 1 + 2Xt2 + 3Xt3 + 4Xt4 + εt H0: Restricted Model H1: Unrestricted Model

31 Single Restriction F-Test
yt = 1 + 2Xt2 + 3Xt3 + 4Xt4 + εt H0: 2 = 0 H1: 2 = 0 Under H0 (SSER  SSEU)/J SSEU/(TK) F = ~ FJ, T-K (  )/1 /(52 3) = dfn = J = 1 = dfd = TK = 49 By definition this is the t-statistic squared: t =  2.081 F = t2 = 

32 yt = 1 + 2Xt2 + 3Xt3 + 4Xt4 + εt H0: 2 = 0, 4 = 0 H1: H0 not true H0: yt =  3Xt εt H1: yt = 1 + 2Xt2 + 3Xt3 + 4Xt4 + εt H0: Restricted Model H1: Unrestricted Model

33 Multiple Restriction F-Test
yt = 1 + 2Xt2 + 3Xt3 + 4Xt4 + εt (SSER  SSEU)/J SSEU/(TK) F = H0: 2 = 0, 4 = 0 H1: H0 not true Under H0 ~ F J, T-K First run the restricted regression by dropping Xt2 and Xt4 to get SSER. dfn = J = 2 (J: The number of hypothesis) dfd = TK = 49 Next run unrestricted regression to get SSEU .

34 F-Tests
F-tests of this type are always right-tailed, even for left-sided or two-sided hypotheses, because any deviation from the null makes the F value larger (pushing it to the right).
F = [(SSER − SSEU)/J] / [SSEU/(T − K)] ~ F(J, T−K)
[Figure: the F(J, T−K) density f(F), with the rejection region of area α to the right of the critical value Fc.]

35 F-Test of Entire Equation
yt = 1 + 2Xt2 + 3Xt3 + εt We ignore 1. Why? H0: 2 = 3 = 0 H1: H0 not true (SSER  SSEU)/J SSEU/(TK) F = dfn = J = 2 (  )/2 /(52 3) = dfd = TK = 49 = 0.05 = Reject H0! F2, 49, = 3.187

36 ANOVA Table
[Table 8.3, Analysis of Variance Table: Source (Regression, Error, Total), DF, Sum of Squares, Mean Square, F-Value, and p-value.]
R² = SSR/SST = 0.867

37 Nonsample Information
A certain production process is known to be Cobb-Douglas with constant returns to scale:
ln(yt) = β1 + β2 ln(Xt2) + β3 ln(Xt3) + β4 ln(Xt4) + εt,  where β2 + β3 + β4 = 1
Solve the restriction for one parameter, β4 = 1 − β2 − β3, and substitute:
ln(yt /Xt4) = β1 + β2 ln(Xt2/Xt4) + β3 ln(Xt3/Xt4) + εt
i.e.  yt* = β1 + β2 Xt2* + β3 Xt3* + εt
Run least squares on the transformed model and interpret the coefficients the same way as in the original model.
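A sketch of the restricted estimation on simulated Cobb-Douglas data (the parameter values 0.5, 0.3, 0.5, 0.2 are illustrative): fit the transformed model, then recover the fourth coefficient from the restriction.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 100
X2, X3, X4 = rng.uniform(1.0, 10.0, (3, T))        # simulated inputs
beta1, beta2, beta3 = 0.5, 0.3, 0.5
beta4 = 1 - beta2 - beta3                          # constant returns to scale imposed on the truth
lny = (beta1 + beta2 * np.log(X2) + beta3 * np.log(X3)
       + beta4 * np.log(X4) + rng.normal(0, 0.1, T))

# Transformed model: regress ln(y/X4) on ln(X2/X4) and ln(X3/X4).
y_star = lny - np.log(X4)
Z = np.column_stack([np.ones(T), np.log(X2 / X4), np.log(X3 / X4)])
b1, b2, b3 = np.linalg.lstsq(Z, y_star, rcond=None)[0]
b4 = 1 - b2 - b3                                   # recover the fourth coefficient from the restriction
print(b1, b2, b3, b4)                              # close to 0.5, 0.3, 0.5, 0.2
```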

38 Collinear Variables
The term "independent variables" means that an explanatory variable is independent of the error term, but not necessarily independent of the other explanatory variables. Since economists typically have no control over the implicit experimental design, explanatory variables tend to move together, which often makes sorting out their separate influences rather problematic.

39 Effects of Collinearity
A high degree of collinearity will produce:
1. no least squares output when collinearity is exact;
2. large standard errors and wide confidence intervals;
3. insignificant t-values even with a high R² and a significant F-value;
4. estimates that are sensitive to the deletion or addition of a few observations or of insignificant variables;
5. OLS estimators that retain all their desired properties (BLUE and consistency), but inferential procedures that may be uninformative.

40 Identifying Collinearity
Evidence of high collinearity includes (see the sketch after this list):
1. a high pairwise correlation between two explanatory variables (greater than 0.8 or 0.9);
2. a high R-squared (call it Rj²) when regressing one explanatory variable (Xj) on the other explanatory variables; equivalently, a large variance inflation factor, VIF(bj) = 1 / (1 − Rj²), say greater than 10;
3. a high R² and a statistically significant F-value when the individual t-values are statistically insignificant.
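A sketch of the first two diagnostics on simulated data in which x3 is deliberately built to track x2; the vif helper is an illustrative function, not a library routine.

```python
import numpy as np

rng = np.random.default_rng(10)
T = 52
x2 = rng.uniform(4, 7, T)
x3 = 0.9 * x2 + rng.normal(0, 0.15, T)      # deliberately collinear with x2

def vif(xj, others):
    """VIF(bj) = 1 / (1 - Rj^2), where Rj^2 comes from regressing xj on the other regressors."""
    Z = np.column_stack([np.ones(len(xj))] + list(others))
    resid = xj - Z @ np.linalg.lstsq(Z, xj, rcond=None)[0]
    Rj2 = 1 - (resid ** 2).sum() / ((xj - xj.mean()) ** 2).sum()
    return 1 / (1 - Rj2)

print(np.corrcoef(x2, x3)[0, 1])            # pairwise correlation well above 0.9
print(vif(x2, [x3]), vif(x3, [x2]))         # VIFs far above the rule-of-thumb cutoff of 10
```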

41 Mitigating Collinearity
High collinearity is not a violation of any least squares assumption, but rather a lack of adequate information in the sample:
1. Collect more data with better information.
2. Impose economic restrictions as appropriate.
3. Impose statistical restrictions when justified.
4. Delete the variable that is highly collinear with the other explanatory variables.

42 Prediction yt = 1 + 2Xt2 + 3Xt3 + εt y0 = b1 + b2X02 + b3X03
Given a set of values for the explanatory variables, (1 X02 X03), the best linear unbiased predictor of y is given by: y0 = b1 + b2X02 + b3X03 ^ This predictor is unbiased in the sense that the average value of the forecast error is zero.
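A final sketch of the predictor on simulated data; the values X02 = 5.5 and X03 = 1.2 are hypothetical inputs chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
T = 52
x2, x3 = rng.uniform(4, 7, T), rng.uniform(0.5, 3, T)
y = 10 - 2 * x2 + 3 * x3 + rng.normal(0, 2, T)

X = np.column_stack([np.ones(T), x2, x3])
b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

# Best linear unbiased predictor at the chosen values (1, X02, X03).
X02, X03 = 5.5, 1.2                         # hypothetical values of the explanatory variables
y0_hat = b1 + b2 * X02 + b3 * X03
print(y0_hat)                               # point prediction of y0
```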

