
Slide 1: Christopher Dougherty EC220 - Introduction to econometrics (chapter 3). Slideshow: prediction. Original citation: Dougherty, C. (2012) EC220 - Introduction to econometrics (chapter 3). [Teaching Resource] © 2012 The Author. This version available at: http://learningresources.lse.ac.uk/129/ Available in LSE Learning Resources Online: May 2012. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. This license allows the user to remix, tweak, and build upon the work even for commercial purposes, as long as the user credits the author and licenses their new creations under the identical terms. http://creativecommons.org/licenses/by-sa/3.0/

Slide 2: In the previous sequence, we saw how to predict the price of a good or asset given the composition of its characteristics. In this sequence, we discuss the properties of such predictions.

Slide 3: Suppose that, given a sample of n observations, we have fitted a pricing model with k – 1 characteristics, as shown.
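The fitted equation referred to on this slide is not reproduced in the transcript; in the notation used throughout, it takes the standard linear form

    \hat{Y} = b_1 + b_2 X_2 + b_3 X_3 + \dots + b_k X_k

where b_1, ..., b_k are the OLS estimates obtained from the n observations.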

Slide 4: Suppose now that one encounters a new variety of the good with characteristics {X2*, X3*, ..., Xk*}. Given the sample regression result, it is natural to predict that the price of the new variety should be given by substituting these characteristics into the fitted equation.
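The prediction itself is not reproduced in the transcript; it is the fitted equation evaluated at the new characteristics:

    \hat{Y}^* = b_1 + b_2 X_2^* + b_3 X_3^* + \dots + b_k X_k^*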

Slide 5: What can one say about the properties of this prediction? First, it is natural to ask whether it is fair, in the sense of not systematically overestimating or underestimating the actual price. Second, we will be concerned about the likely accuracy of the prediction.

Slide 6: We will start by supposing that the good has only one relevant characteristic and that we have fitted the simple regression model shown. Hence, given a new variety of the good with characteristic {X*}, the model gives us the predicted price.
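In this one-characteristic case the fitted model and the corresponding prediction (not shown in the transcript) are

    \hat{Y} = b_1 + b_2 X,    \qquad    \hat{Y}^* = b_1 + b_2 X^*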

Slide 7: We will define the prediction error of the model, PE, as the difference between the actual price and the predicted price.
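In symbols,

    PE = Y^* - \hat{Y}^*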

Slide 8: We will assume that the model applies to the new good and therefore the actual price is generated as shown, where u* is the value of the disturbance term for the new good.
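That is, the actual price of the new variety is assumed to be generated by the same relationship as the sample observations:

    Y^* = \beta_1 + \beta_2 X^* + u^*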

Slide 9: Then the prediction error is as shown.
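Substituting the expressions for Y^* and \hat{Y}^* gives

    PE = Y^* - \hat{Y}^* = (\beta_1 - b_1) + (\beta_2 - b_2) X^* + u^*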

Slide 10: We take expectations.

Slide 11: β1 and β2 are assumed to be fixed parameters, so they are not affected by taking expectations. Likewise, X* is assumed to be a fixed quantity and unaffected by taking expectations. However, u*, b1, and b2 are random variables.

Slide 12: E(u*) = 0 because u* is randomly drawn from the distribution for u, which we have assumed has zero population mean. Under the usual OLS assumptions, b1 will be an unbiased estimator of β1 and b2 an unbiased estimator of β2.

Slide 13: Hence the expectation of the prediction error is zero. The result generalizes easily to the case where there are multiple characteristics and the new good embodies a new combination of them.
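Combining the last three slides,

    E(PE) = (\beta_1 - E(b_1)) + (\beta_2 - E(b_2)) X^* + E(u^*)
          = (\beta_1 - \beta_1) + (\beta_2 - \beta_2) X^* + 0 = 0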

Slides 14 and 15: The population variance of the prediction error is given by the expression shown. Unsurprisingly, this implies that the further the value of X* is from the sample mean, the larger will be the population variance of the prediction error.
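The expression referred to is the standard prediction-error variance for the simple regression model:

    \sigma_{PE}^2 = \sigma_u^2 \left( 1 + \frac{1}{n} + \frac{(X^* - \bar{X})^2}{\sum_{i=1}^{n} (X_i - \bar{X})^2} \right)

The third term inside the brackets grows with the squared distance of X* from the sample mean, which is the point made above.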

Slide 16: It also implies, again unsurprisingly, that the larger the sample, the smaller will be the population variance of the prediction error, with a lower limit of σu².

Slide 17: Provided that the regression model assumptions are valid, b1 and b2 will tend to their true values as the sample becomes large, so the only source of error in the prediction will be u*, and by definition this has population variance σu².
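In terms of the variance expression above, the second and third terms vanish as the sample grows:

    \sigma_{PE}^2 = \sigma_u^2 \left( 1 + \frac{1}{n} + \frac{(X^* - \bar{X})^2}{\sum_i (X_i - \bar{X})^2} \right) \;\longrightarrow\; \sigma_u^2 \quad \text{as } n \to \infty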

Slide 18: The standard error of the prediction error is calculated using the square root of the expression for the population variance, replacing the variance of u with the estimate obtained when fitting the model in the sample period.

Slide 19: Hence we are able to construct a confidence interval for a prediction. tcrit is the critical value of t, given the significance level selected and the number of degrees of freedom, and s.e. is the standard error of the prediction.
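Combining these elements, the interval is

    \hat{Y}^* - t_{crit} \times \text{s.e.}(PE) \;\le\; Y^* \;\le\; \hat{Y}^* + t_{crit} \times \text{s.e.}(PE),
    \qquad \text{s.e.}(PE) = s_u \sqrt{1 + \frac{1}{n} + \frac{(X^* - \bar{X})^2}{\sum_i (X_i - \bar{X})^2}}

where s_u² is the estimate of σu² from the fitted regression and tcrit has n − 2 degrees of freedom in the simple model (n − k with k parameters in general).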

Slide 20: The confidence interval has been drawn as a function of X*. As we noted from the mathematical expression, it becomes wider, the greater the distance from X* to the sample mean.

Slide 21: With multiple explanatory variables, the expression for the prediction variance becomes complex. One point to note is that multicollinearity may not have an adverse effect on prediction precision, even though the estimates of the coefficients have large variances.

Slide 22: For simplicity, suppose that there are two explanatory variables, that both have positive true coefficients, and that they are positively correlated, the model being as shown, and that we are predicting the value of Y*, given values X2* and X3*.

Slide 23: Since X2 and X3 are positively correlated and β2 and β3 are both positive, it can be shown that cov(b2, b3) < 0. So if the effect of X2 is overestimated, with b2 > β2, the effect of X3 will almost certainly be underestimated, with b3 < β3. As a consequence, the effects of the errors may to some extent cancel out, and the linear combination (b2X2* + b3X3*) may be a relatively good estimate of (β2X2* + β3X3*). Similarly for other combinations.
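The cancellation can be seen from the variance of the systematic part of the prediction. For the two-variable model Y = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + u,

    \text{var}(b_2 X_2^* + b_3 X_3^*) = (X_2^*)^2 \,\text{var}(b_2) + (X_3^*)^2 \,\text{var}(b_3) + 2 X_2^* X_3^* \,\text{cov}(b_2, b_3)

With X_2^* and X_3^* positive and cov(b_2, b_3) < 0, the covariance term offsets part of the two large variance terms.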

Slide 24: This will be illustrated with a simulation, with the model and data shown. We fit the model and make the prediction Y* = b1 + b2X2* + b3X3*.

Slide 25: Since X2 and X3 are virtually identical, this may be approximated as Y* = b1 + (b2 + b3)X2*. Thus the predictive accuracy depends on how close (b2 + b3) is to (β2 + β3), that is, to 5.

Slide 26: The figure shows the distributions of b2 and b3 for 10 million samples. Their distributions have relatively large variances around their true values, as should be expected given the multicollinearity. The actual standard deviation of each distribution is 0.45.

Slide 27: The figure also shows the distribution of their sum. As anticipated, it is distributed around 5, but with a much lower standard deviation, 0.04, despite the multicollinearity affecting the point estimates of the individual coefficients.
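The exact data-generating process used for the slides' simulation is not reproduced in this transcript, so the following is only a minimal sketch of the same experiment. The true coefficients are assumed to be β2 = 2 and β3 = 3 (so that β2 + β3 = 5, as stated above), X3 is taken to be X2 plus a small amount of noise, and 10,000 replications are used rather than 10 million; only the qualitative result matters, namely that b2 and b3 individually have large standard deviations while their sum is tightly distributed around 5.

    # Minimal Monte Carlo sketch (assumed values, not the slides' actual design).
    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 20, 10_000
    beta1, beta2, beta3 = 10.0, 2.0, 3.0        # assumed; only beta2 + beta3 = 5 comes from the text

    X2 = np.linspace(1.0, 20.0, n)
    X3 = X2 + rng.normal(0.0, 0.1, n)           # X3 virtually identical to X2 (severe multicollinearity)
    X = np.column_stack([np.ones(n), X2, X3])   # regressors held fixed across replications

    b2_draws = np.empty(reps)
    b3_draws = np.empty(reps)
    for r in range(reps):
        u = rng.normal(0.0, 1.0, n)             # fresh disturbances each replication
        Y = beta1 + beta2 * X2 + beta3 * X3 + u
        b = np.linalg.lstsq(X, Y, rcond=None)[0]  # OLS estimates (b1, b2, b3)
        b2_draws[r], b3_draws[r] = b[1], b[2]

    print("sd(b2)      =", b2_draws.std())               # large, because of the multicollinearity
    print("sd(b3)      =", b3_draws.std())               # large, because of the multicollinearity
    print("sd(b2 + b3) =", (b2_draws + b3_draws).std())  # much smaller: the errors offset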

Slide 28: Copyright Christopher Dougherty 2011. These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 3.6 of C. Dougherty, Introduction to Econometrics, fourth edition 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre http://www.oup.com/uk/orc/bin/9780199567089/. Individuals studying econometrics on their own and who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx or the University of London International Programmes distance learning course 20 Elements of Econometrics www.londoninternational.ac.uk/lse.

