Two Explanatory Variables

yt = β1 + β2xt2 + β3xt3 + εt

xt2 and xt3 affect yt separately:

∂yt/∂xt2 = β2        ∂yt/∂xt3 = β3

But least squares estimation of β2 now depends upon both xt2 and xt3.
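A minimal sketch of fitting this two-regressor model by least squares, on simulated data (the variable names, sample size, and true coefficient values below are illustrative, not from the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 60
    x2, x3 = rng.normal(size=(2, T))
    eps = rng.normal(size=T)
    y = 1.0 + 0.5 * x2 - 0.3 * x3 + eps        # true betas chosen for illustration

    X = np.column_stack([np.ones(T), x2, x3])  # first column is xt1 = 1
    b, *_ = np.linalg.lstsq(X, y, rcond=None)  # b = (X'X)^(-1) X'y
    print(b)                                   # estimates of beta1, beta2, beta3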
Correlated Variables

yt = β1 + β2xt2 + β3xt3 + εt

yt = output, xt2 = capital, xt3 = labor.

Always 5 workers per machine. If the number of workers per machine is never varied, it becomes impossible to tell whether the machines or the workers are responsible for changes in output.
The General Model

yt = β1 + β2xt2 + β3xt3 + . . . + βKxtK + εt

The parameter β1 is the intercept (constant) term. The variable attached to β1 is xt1 = 1. Usually, the number of explanatory variables is said to be K − 1 (ignoring xt1 = 1), while the number of parameters is K.
Statistical Properties of εt

1. E(εt) = 0
2. var(εt) = σ²
3. cov(εt, εs) = 0 for t ≠ s
4. εt ~ N(0, σ²)
Dangers of Extrapolation

Statistical models generally are good only within the relevant range. This means that extending them to extreme data values outside the range of the original data often leads to poor and sometimes ridiculous results. If height is normally distributed and the normal ranges from minus infinity to plus infinity, pity the man minus three feet tall.
Interpretation of Coefficients

bj represents an estimate of the mean change in y in response to a one-unit change in xj, holding all other independent variables constant. Hence, bj is called a partial coefficient. Note that regression analysis cannot be interpreted as a procedure for establishing a cause-and-effect relationship between variables.
Error Variance Estimation

Unbiased estimator of the error variance:

σ̂² = Σ ε̂t² / (T − K)

Transform to a chi-square distribution:

(T − K) σ̂² / σ²  ~  χ²(T−K)
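A sketch of the unbiased variance estimate on the same kind of simulated data as above (illustrative names; K counts all estimated coefficients, including the intercept):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 60
    x2, x3 = rng.normal(size=(2, T))
    y = 1.0 + 0.5 * x2 - 0.3 * x3 + rng.normal(size=T)

    X = np.column_stack([np.ones(T), x2, x3])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e_hat = y - X @ b                          # residuals
    K = X.shape[1]
    sigma2_hat = e_hat @ e_hat / (T - K)       # divide by T - K, not T
    print(sigma2_hat)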
Gauss-Markov Theorem

Under the first five assumptions of the multiple regression model, the ordinary least squares estimators have the smallest variance of all linear and unbiased estimators. This means that the least squares estimators are the Best Linear Unbiased Estimators (BLUE).
Variance Decomposition

var(b2) = σ² / [(1 − r²23) Σt=1..T (xt2 − x̄2)²]

The variance of an estimator is smaller when:
1. The error variance, σ², is smaller.
2. The sample size, T, is larger: Σt (xt2 − x̄2)² grows with T.
3. The variable values are more spread out: Σt (xt2 − x̄2)² is larger.
4. The correlation between the explanatory variables, r23, is close to zero.
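As a numerical check, the expression σ²/[(1 − r²23) Σ(xt2 − x̄2)²] can be compared with the (b2, b2) element of σ²(X'X)⁻¹; the data below are simulated purely for illustration and σ² is treated as known:

    import numpy as np

    rng = np.random.default_rng(1)
    T = 500
    x2 = rng.normal(size=T)
    x3 = 0.6 * x2 + rng.normal(size=T)       # correlated regressors
    sigma2 = 2.0                             # true error variance, assumed known here

    X = np.column_stack([np.ones(T), x2, x3])
    V = sigma2 * np.linalg.inv(X.T @ X)      # exact var-cov matrix of b

    r23 = np.corrcoef(x2, x3)[0, 1]
    var_b2 = sigma2 / ((1 - r23**2) * np.sum((x2 - x2.mean())**2))
    print(np.isclose(V[1, 1], var_b2))       # True: the two expressions agree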
Covariance Decomposition

The covariance between any two estimators is larger in absolute value when:
1. The error variance, σ², is larger.
2. The sample size, T, is smaller.
3. The values of the variables are less spread out.
4. The correlation, r23, is high.
Var-Cov Matrix

yt = β1 + β2xt2 + β3xt3 + εt

The least squares estimators b1, b2, and b3 have covariance matrix:

cov(b1, b2, b3) =
⎡ var(b1)      cov(b1,b2)   cov(b1,b3) ⎤
⎢ cov(b1,b2)   var(b2)      cov(b2,b3) ⎥
⎣ cov(b1,b3)   cov(b2,b3)   var(b3)    ⎦
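In practice σ² is replaced by σ̂², giving the estimated covariance matrix σ̂²(X'X)⁻¹. A sketch with simulated data (names are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 60
    x2, x3 = rng.normal(size=(2, T))
    y = 1.0 + 0.5 * x2 - 0.3 * x3 + rng.normal(size=T)

    X = np.column_stack([np.ones(T), x2, x3])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    sigma2_hat = e @ e / (T - X.shape[1])          # unbiased error variance
    cov_b = sigma2_hat * np.linalg.inv(X.T @ X)    # the 3x3 matrix above
    print(np.sqrt(np.diag(cov_b)))                 # standard errors se(bk)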
Normal

yt = β1 + β2xt2 + β3xt3 + . . . + βKxtK + εt

εt ~ N(0, σ²)

This implies and is implied by:

yt ~ N(β1 + β2xt2 + . . . + βKxtK, σ²)

Since bk is a linear function of the yt:

bk ~ N(βk, var(bk))

z = (bk − βk) / √var(bk)  ~  N(0, 1)   for k = 1, 2, ..., K
Student-t

Since generally the population variance of bk, var(bk), is unknown, we estimate it with v̂ar(bk), which uses σ̂² instead of σ².

t = (bk − βk) / √v̂ar(bk) = (bk − βk) / se(bk)

t has a Student-t distribution with df = (T − K).
Interval Estimation

P(−tc < (bk − βk)/se(bk) < tc) = 1 − α

tc is the critical value for (T − K) degrees of freedom such that P(t ≥ tc) = α/2.

P(bk − tc se(bk) < βk < bk + tc se(bk)) = 1 − α

Interval endpoints:  [ bk − tc se(bk) ,  bk + tc se(bk) ]
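A sketch of this interval for β2, using scipy's Student-t quantile for the critical value (simulated data; the 95% level is chosen for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    T = 60
    x2, x3 = rng.normal(size=(2, T))
    y = 1.0 + 0.5 * x2 - 0.3 * x3 + rng.normal(size=T)

    X = np.column_stack([np.ones(T), x2, x3])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    K = X.shape[1]
    e = y - X @ b
    se = np.sqrt((e @ e / (T - K)) * np.diag(np.linalg.inv(X.T @ X)))

    alpha = 0.05
    tc = stats.t.ppf(1 - alpha / 2, df=T - K)     # critical value t_c
    print(b[1] - tc * se[1], b[1] + tc * se[1])   # endpoints for beta2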
Student-t Test

yt = β1 + β2Xt2 + β3Xt3 + β4Xt4 + εt

Student-t tests can be used to test any linear combination of the regression coefficients:

H0: β1 = 0
H0: β2 + β3 + β4 = 1
H0: 3β2 − 7β3 = 21
H0: β2 − β3 ≤ 5

Every such t-test has exactly T − K degrees of freedom, where K = number of coefficients estimated (including the intercept).
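A sketch of one such test in the simpler three-parameter model used in the earlier examples, here H0: β2 + β3 = 0 (the hypothesis and data are illustrative; the same Rb arithmetic covers any linear combination):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    T = 60
    x2, x3 = rng.normal(size=(2, T))
    y = 1.0 + 0.5 * x2 - 0.3 * x3 + rng.normal(size=T)

    X = np.column_stack([np.ones(T), x2, x3])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    K = X.shape[1]
    e = y - X @ b
    V = (e @ e / (T - K)) * np.linalg.inv(X.T @ X)  # estimated var-cov of b

    R = np.array([0.0, 1.0, 1.0])                   # weights for beta2 + beta3
    t_stat = (R @ b - 0.0) / np.sqrt(R @ V @ R)     # (Rb - r) / se(Rb)
    p = 2 * stats.t.sf(abs(t_stat), df=T - K)       # two-sided p-value
    print(t_stat, p)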
Multiple Restriction F-Test

yt = β1 + β2Xt2 + β3Xt3 + β4Xt4 + εt

H0: β2 = 0, β4 = 0
H1: H0 not true

F = [(SSER − SSEU)/J] / [SSEU/(T − K)]  ~  F(J, T−K) under H0

First run the restricted regression by dropping Xt2 and Xt4 to get SSER. Next run the unrestricted regression to get SSEU.

dfn = J = 2 (J: the number of restrictions)
dfd = T − K = 49
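A sketch of the restricted-vs-unrestricted comparison, again in the three-parameter simulated model (here H0 drops x2 and x3, so J = 2; names and data are illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    T = 60
    x2, x3 = rng.normal(size=(2, T))
    y = 1.0 + 0.5 * x2 - 0.3 * x3 + rng.normal(size=T)

    def sse(X, y):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ b
        return e @ e

    X_u = np.column_stack([np.ones(T), x2, x3])   # unrestricted model
    X_r = np.ones((T, 1))                         # restricted: intercept only
    J, K = 2, X_u.shape[1]
    F = ((sse(X_r, y) - sse(X_u, y)) / J) / (sse(X_u, y) / (T - K))
    p = stats.f.sf(F, J, T - K)                   # right-tail p-value
    print(F, p)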
F-Tests

F = [(SSER − SSEU)/J] / [SSEU/(T − K)]  ~  F(J, T−K)

[Figure: density f(F) with the rejection region in the right tail beyond the critical value Fc.]

F-tests of this type are always right-tailed, even for left-sided or two-sided hypotheses, because any deviation from the null will make the F value bigger (move rightward).
ANOVA Table

[Table 8.3 Analysis of Variance: rows Regression, Error, Total; columns DF, Sum of Squares, Mean Square, F-Value; a p-value is reported for the F-test.]

R² = SSR/SST = 0.867
Nonsample Information

A certain production process is known to be Cobb-Douglas with constant returns to scale:

ln(yt) = β1 + β2 ln(Xt2) + β3 ln(Xt3) + β4 ln(Xt4) + εt

where β2 + β3 + β4 = 1. Solving the restriction for β4:

β4 = 1 − β2 − β3

Substituting gives the transformed model:

ln(yt/Xt4) = β1 + β2 ln(Xt2/Xt4) + β3 ln(Xt3/Xt4) + εt

that is, yt* = β1 + β2Xt2* + β3Xt3* + εt with yt* = ln(yt/Xt4), Xt2* = ln(Xt2/Xt4), and Xt3* = ln(Xt3/Xt4).

Run least squares on the transformed model. Interpret the coefficients the same as in the original model.
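A sketch of this transformation on simulated Cobb-Douglas data with true shares 0.3, 0.5, 0.2 (all values illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 80
    X2, X3, X4 = np.exp(rng.normal(size=(3, T)))    # positive input levels
    y = np.exp(1 + rng.normal(scale=0.1, size=T)) * X2**0.3 * X3**0.5 * X4**0.2

    # Transformed model: ln(y/X4) = b1 + b2 ln(X2/X4) + b3 ln(X3/X4) + e
    ys = np.log(y / X4)
    Z = np.column_stack([np.ones(T), np.log(X2 / X4), np.log(X3 / X4)])
    b, *_ = np.linalg.lstsq(Z, ys, rcond=None)
    b4 = 1 - b[1] - b[2]               # recovered from the restriction
    print(b[1], b[2], b4)              # roughly 0.3, 0.5, 0.2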
Collinear Variables

The term "independent variables" means an explanatory variable is independent of the error term, but not necessarily independent of other explanatory variables. Since economists typically have no control over the implicit experimental design, explanatory variables tend to move together, which often makes sorting out their separate influences rather problematic.
Effects of Collinearity

A high degree of collinearity will produce:
1. no least squares output when collinearity is exact.
2. large standard errors and wide confidence intervals.
3. insignificant t-values even with a high R² and a significant F-value.
4. estimates sensitive to the deletion or addition of a few observations or insignificant variables.
5. The OLS estimators retain all their desired properties (BLUE and consistency), but the sample may be uninformative about the separate influences of the collinear variables.
Identifying Collinearity

Evidence of high collinearity includes:
1. a high pairwise correlation between two explanatory variables (greater than .8 or .9).
2. a high R-squared (called Rj²) when regressing one explanatory variable (Xj) on the other explanatory variables. Equivalently, a large variance inflation factor, VIF(bj) = 1/(1 − Rj²), with VIF > 10 taken as a warning sign (see the sketch after this list).
3. a high R² and a statistically significant F-value when the t-values are statistically insignificant.
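A sketch of the Rj²-based VIF computation on two deliberately collinear simulated variables (the cutoff of 10 is the rule of thumb above; data are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 100
    x2 = rng.normal(size=T)
    x3 = 0.95 * x2 + 0.1 * rng.normal(size=T)    # nearly collinear with x2

    def vif(target, others):
        # Regress one explanatory variable on the others; VIF = 1/(1 - Rj^2)
        X = np.column_stack([np.ones(len(target))] + list(others))
        b, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ b
        r2 = 1 - resid @ resid / np.sum((target - target.mean())**2)
        return 1 / (1 - r2)

    print(vif(x2, [x3]), vif(x3, [x2]))          # values > 10 signal trouble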
Mitigating Collinearity

High collinearity is not a violation of any least squares assumption, but rather a lack of adequate information in the sample:
1. Collect more data with better information.
2. Impose economic restrictions as appropriate.
3. Impose statistical restrictions when justified.
4. Delete the variable which is highly collinear with the other explanatory variables.
Prediction

yt = β1 + β2Xt2 + β3Xt3 + εt

Given a set of values for the explanatory variables, (1, X02, X03), the best linear unbiased predictor of y is given by:

ŷ0 = b1 + b2X02 + b3X03

This predictor is unbiased in the sense that the average value of the forecast error is zero.
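A sketch of forming this prediction at hypothetical values of X02 and X03 (data and values illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 60
    x2, x3 = rng.normal(size=(2, T))
    y = 1.0 + 0.5 * x2 - 0.3 * x3 + rng.normal(size=T)

    X = np.column_stack([np.ones(T), x2, x3])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)

    x0 = np.array([1.0, 0.8, -1.2])    # (1, X02, X03), chosen for illustration
    y0_hat = x0 @ b                    # y0_hat = b1 + b2*X02 + b3*X03
    print(y0_hat)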