COMPARING LINEAR AND LOGARITHMIC SPECIFICATIONS

Presentation transcript:

1. When alternative specifications of a regression model have the same dependent variable, R² can be used to compare their goodness of fit.

2. However, when the dependent variables differ, this comparison is not legitimate.

3. In the case of the linear model, R² measures the proportion of the variance in Y explained by the model. In the case of the semilogarithmic model, it measures the proportion of the variance of the logarithm of Y explained by the model.

4. Clearly these are related, but they are not the same, and direct comparisons are not valid.

5. However, the goodness of fit of models with linear and logarithmic versions of the same dependent variable can be compared indirectly by subjecting the dependent variable to the Box–Cox transformation and fitting the model shown.

Box–Cox transformation: Y(λ) = (Y^λ − 1) / λ, with the model Y(λ) = β1 + β2X + u.

6. This is a family of specifications that depends on the parameter λ. The determination of λ is an empirical matter, like the determination of the other parameters.
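As an illustrative sketch (not from the slides), the transformation and its behaviour at particular values of λ can be written out in Python on a few synthetic values:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam, with the log limit at lam = 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1) / lam

y = np.array([1.0, 2.0, 5.0])
print(boxcox(y, 1))   # y - 1: the linear case, up to an intercept shift
print(boxcox(y, 0))   # ln y: the logarithmic case
```

Any intermediate λ gives a specification between the two, which is exactly what makes the family useful for choosing between them.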

7. The model is nonlinear in parameters, and so a nonlinear regression method should be used. In practice, maximum likelihood estimation is used.

8. The reason that this transformation is of interest in the present context is that specifications with linear and logarithmic dependent variables are special cases: Y(λ) = Y − 1 when λ = 1, and Y(λ) tends to ln Y as λ tends to 0.

9. Putting λ = 1 gives the linear model. The dependent variable is then Y − 1, rather than Y, but subtracting a constant from the dependent variable does not affect the regression results, except for the estimate of the intercept.

10. Putting λ = 0 gives the (semi-)logarithmic model. Of course, one cannot talk about putting λ exactly equal to 0, because then the dependent variable becomes zero divided by zero. We are talking about the limiting form as λ tends to zero, and we have used L'Hôpital's rule.
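A quick numerical check (an illustrative sketch, not part of the slides) shows the limit at work: as λ shrinks, (Y^λ − 1)/λ approaches ln Y.

```python
import numpy as np

y = 5.0
for lam in (0.1, 0.01, 0.001):
    # (y**lam - 1)/lam approaches ln(5) as lam tends to zero
    print(lam, (y ** lam - 1) / lam)
```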

11. So one could fit the general model and see whether λ is close to 0 or close to 1. Of course, 'close' has no meaning in econometrics. To approach this issue technically, one should test the hypotheses λ = 0 and λ = 1.

12. The outcome might be that one is rejected and the other not, but of course it is possible that neither is rejected, or that both are rejected, given your chosen significance level.

13. If you are interested only in comparing the fits of the linear and logarithmic specifications, there is a short-cut procedure that involves only standard least squares regressions.

14. The first step is to divide the observations on the dependent variable by their geometric mean. We will call the transformed variable Y*: Y*_i = Y_i / (geometric mean of Y).

15. You now regress Y* and ln Y*, leaving the right side of the equation unchanged. (The parameters have been given prime marks to emphasize that the coefficients will not be estimates of the original β1 and β2.)

16. The residual sums of squares are now directly comparable. The specification with the smaller RSS therefore provides the better fit.
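The whole short-cut procedure can be sketched in Python on simulated data (the variable names S and EXP mirror the earnings-function example that follows, but the data here are synthetic, not EAEF Data Set 21):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
S = rng.uniform(8, 20, n)      # hypothetical years of schooling
EXP = rng.uniform(0, 30, n)    # hypothetical years of work experience
# generate earnings from a logarithmic model, so the log fit should win
y = np.exp(0.5 + 0.10 * S + 0.03 * EXP + rng.normal(0, 0.4, n))
X = np.column_stack([np.ones(n), S, EXP])

gm = np.exp(np.mean(np.log(y)))   # geometric mean of y
y_star = y / gm                   # scaled dependent variable Y*

def rss(dep):
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return np.sum((dep - X @ beta) ** 2)

rss_linear = rss(y_star)          # regress Y* on the regressors
rss_log = rss(np.log(y_star))     # regress ln Y* on the same regressors
print(rss_log < rss_linear)       # smaller RSS indicates the better fit
```

Because the data were generated from a logarithmic model, the logarithmic specification should produce the smaller RSS here.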

17. We will use the transformation to compare the fits of the linear and semilogarithmic versions of a simple earnings function, using EAEF Data Set 21.

18. The first step is to calculate the geometric mean of the dependent variable. The easiest way to do this is to take the exponential of the mean of the logarithm of the dependent variable.

19. The sum of the logarithms of Y is equal to the logarithm of the product of the Y observations.

20. Now we use the rule that a·log X is the same as log(X^a).

21. And finally we use the fact that the exponential of the logarithm of X reduces to X.
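The three steps above amount to the identity GM(Y) = exp(mean(ln Y)). A minimal numeric check (illustrative, not from the slides):

```python
import numpy as np

y = np.array([2.0, 8.0, 4.0])
gm_direct = np.prod(y) ** (1 / len(y))      # (y1 * y2 * y3)^(1/3)
gm_via_logs = np.exp(np.mean(np.log(y)))    # exp of the mean of the logs
print(gm_direct, gm_via_logs)               # both equal 4.0
```

The log route is also the numerically safer one in practice, since the direct product can overflow for large samples.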

22. LGEARN has already been defined as the logarithm of EARNINGS. We find its mean. In Stata this is done with the 'sum' command.

. sum LGEARN

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
      LGEARN |       540    2.791993    .5885545   .7561221   4.789074

23. We then define EARNSTAR, dividing EARNINGS by the exponential of the mean of LGEARN.

. gen EARNSTAR = EARNINGS/exp(2.79)

24. We also define LGEARNST, the logarithm of EARNSTAR.

. gen LGEARNST = ln(EARNSTAR)

25. Here is the regression of EARNSTAR on S and EXP. The residual sum of squares is 336.3.

. reg EARNSTAR S EXP

      Source |       SS       df       MS              Number of obs =     540
-------------+------------------------------           F(  2,   537) =   67.54
       Model |  84.5951503     2  42.2975751           Prob > F      =  0.0000
    Residual |  336.283886   537  .626226977           R-squared     =  0.2010
-------------+------------------------------           Adj R-squared =  0.1980
       Total |  420.879037   539  .780851645           Root MSE      =  .79135

------------------------------------------------------------------------------
    EARNSTAR |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
           S |    .164165   .0143224    11.46   0.000     .1360303    .1922998
         EXP |   .0344763   .0078777     4.38   0.000     .0190014    .0499511
       _cons |  -1.623491   .2618984    -6.20   0.000    -2.137962    -1.10902
------------------------------------------------------------------------------

26. We run the parallel regression for LGEARNST. The residual sum of squares is 135.7. Thus we conclude that the semilogarithmic version gives the better fit.

. reg LGEARNST S EXP

      Source |       SS       df       MS              Number of obs =     540
-------------+------------------------------           F(  2,   537) =  100.86
       Model |  50.9842592     2  25.4921296           Prob > F      =  0.0000
    Residual |  135.723388   537  .252743739           R-squared     =  0.2731
-------------+------------------------------           Adj R-squared =  0.2704
       Total |  186.707647   539  .346396377           Root MSE      =  .50274

------------------------------------------------------------------------------
    LGEARNST |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
           S |   .1235911   .0090989    13.58   0.000     .1057173     .141465
         EXP |   .0350826   .0050046     7.01   0.000     .0252515    .0449137
       _cons |   -2.28268   .1663823   -13.72   0.000     -2.60952    -1.95584
------------------------------------------------------------------------------

27. Here is the output for the full Box–Cox regression. The parameter that we have denoted λ (lambda) is called theta by Stata. It is estimated at −0.13. Since this is closer to 0 than to 1, it indicates that the dependent variable should be logarithmic rather than linear.

. boxcox EARNINGS S EXP

                                                  Number of obs   =        540
                                                  LR chi2(2)      =     172.45
Log likelihood = -1897.7671                       Prob > chi2     =      0.000

------------------------------------------------------------------------------
    EARNINGS |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      /theta |   -.129785   .0506665    -2.56   0.010    -.2290894   -.0304805
------------------------------------------------------------------------------

---------------------------------------------------------
   Test          Restricted     LR statistic     P-value
   H0:         log likelihood       chi2      Prob > chi2
---------------------------------------------------------
  theta = -1     -2048.3423       301.15          0.000
  theta =  0     -1901.0466         6.56          0.010
  theta =  1     -2146.0312       496.53          0.000
---------------------------------------------------------
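For readers without Stata, the maximum-likelihood estimation of λ can be sketched as a grid search over the concentrated (profile) log-likelihood. This is an illustrative reimplementation on simulated data, not Stata's boxcox command; the Jacobian term (λ − 1)·Σ ln Y is what makes fits at different λ comparable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
S = rng.uniform(8, 20, n)                       # hypothetical regressor
log_y = 0.5 + 0.1 * S + rng.normal(0, 0.4, n)   # true model is logarithmic
y = np.exp(log_y)
X = np.column_stack([np.ones(n), S])

def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y ** lam - 1) / lam

def profile_loglik(lam):
    z = boxcox(y, lam)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    rss = np.sum((z - X @ beta) ** 2)
    # concentrated log-likelihood (additive constants dropped)
    return -n / 2 * np.log(rss / n) + (lam - 1) * np.log(y).sum()

grid = np.linspace(-1, 1, 201)
lam_hat = grid[np.argmax([profile_loglik(l) for l in grid])]
print(round(lam_hat, 2))   # should land near 0 for this log-generated data
```

Stata maximizes the same likelihood with a proper optimizer rather than a grid, and reports standard errors and the LR tests shown above.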

28. However, even the value of 0 does not lie in the 95 percent confidence interval. (The log-likelihood tests will be explained in Chapter 10.)

29. Copyright Christopher Dougherty 2012. These slideshows may be downloaded by anyone, anywhere, for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 4.2 of C. Dougherty, Introduction to Econometrics, fourth edition, 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre: http://www.oup.com/uk/orc/bin/9780199567089/. Individuals studying econometrics on their own who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics (http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx) or the University of London International Programmes distance learning course EC2020 Elements of Econometrics (www.londoninternational.ac.uk/lse). 2012.11.03

