PREDICTION

1. In the previous sequence, we saw how to predict the price of a good or asset given the composition of its characteristics. In this sequence, we discuss the properties of such predictions.

   True model:   P = β1 + β2X2 + ... + βkXk + u
   Fitted model: P̂ = b1 + b2X2 + ... + bkXk

2. Suppose that, given a sample of n observations, we have fitted a pricing model with k − 1 characteristics, as shown.

3. Suppose now that one encounters a new variety of the good with characteristics {X2*, X3*, ..., Xk*}. Given the sample regression result, it is natural to predict that the price of the new variety should be

   Prediction: P̂* = b1 + b2X2* + b3X3* + ... + bkXk*

4. What can one say about the properties of this prediction? First, it is natural to ask whether it is fair, in the sense of not systematically overestimating or underestimating the actual price. Second, we will be concerned about the likely accuracy of the prediction.

5. We will consider the case where the good has only one relevant characteristic and suppose that we have fitted the simple regression model shown. Hence, given a new variety of the good with characteristic X = X*, the model gives us the predicted price.

   True model:   P = β1 + β2X + u
   Fitted model: P̂ = b1 + b2X
   Prediction:   P̂* = b1 + b2X*

6. We will assume that the model applies to the new good, and therefore that the actual price, conditional on X = X*, is generated as shown, where u* is the value of the disturbance term for the new good.

   Actual value: P* = β1 + β2X* + u*
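The setup on this slide can be sketched in code. This is a minimal illustration, not from the slides: the parameter values (β1 = 10, β2 = 2, σu = 1, n = 50, X* = 7) are assumed for the example.

```python
# Sketch: fit P = b1 + b2*X by OLS on a simulated sample, then predict
# the price of a new variety with characteristic X = X*.
# All numeric values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 50
beta1, beta2, sigma_u = 10.0, 2.0, 1.0       # assumed true parameters
X = rng.uniform(0, 10, n)
P = beta1 + beta2 * X + rng.normal(0, sigma_u, n)   # true model

# OLS estimates b1, b2 (slope = sample cov / sample var)
b2 = np.cov(X, P, bias=True)[0, 1] / np.var(X)
b1 = P.mean() - b2 * X.mean()

X_star = 7.0
P_hat_star = b1 + b2 * X_star                 # predicted price
# The actual price of the new good would be beta1 + beta2*X_star + u*,
# with u* a fresh draw of the disturbance term.
```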

7. We will define the prediction error of the model, PE, as the difference between the actual price and the predicted price:

   PE = P* − P̂*

8. Substituting for the actual and predicted prices, the prediction error is as shown.

   PE = (β1 + β2X* + u*) − (b1 + b2X*) = (β1 − b1) + (β2 − b2)X* + u*

9. We take expectations.

   E(PE) = E(β1 − b1) + E[(β2 − b2)X*] + E(u*)

10. β1 and β2 are assumed to be fixed parameters, so they are not affected by taking expectations. Likewise, X* is assumed to be a fixed quantity and unaffected by taking expectations. However, u*, b1, and b2 are random variables.

   E(PE) = (β1 − E(b1)) + (β2 − E(b2))X* + E(u*)

11. E(u*) = 0 because u* is randomly drawn from the distribution for u, which we have assumed to have zero population mean. Under the usual OLS assumptions, b1 will be an unbiased estimator of β1 and b2 an unbiased estimator of β2.

12. Hence the expectation of the prediction error is zero. The result generalizes easily to the case where there are multiple characteristics and the new good embodies a new combination of them.

   E(PE) = (β1 − β1) + (β2 − β2)X* + 0 = 0
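The unbiasedness result can be checked by Monte Carlo: across many samples, the prediction errors should average out to zero. The parameter values and the number of replications below are illustrative assumptions.

```python
# Sketch: Monte Carlo check that E(PE) = 0. For each replication, fit the
# model by OLS, predict at X*, draw the new good's actual price, and
# record the prediction error PE = P* - P_hat*.
import numpy as np

rng = np.random.default_rng(1)
beta1, beta2, sigma_u = 10.0, 2.0, 1.0   # assumed true parameters
n, X_star, reps = 30, 7.0, 5000

errors = []
for _ in range(reps):
    X = rng.uniform(0, 10, n)
    P = beta1 + beta2 * X + rng.normal(0, sigma_u, n)
    b2 = np.cov(X, P, bias=True)[0, 1] / np.var(X)
    b1 = P.mean() - b2 * X.mean()
    P_star = beta1 + beta2 * X_star + rng.normal(0, sigma_u)  # actual price
    errors.append(P_star - (b1 + b2 * X_star))                # PE

print(np.mean(errors))   # should be close to zero
```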

13. The population variance of the prediction error is given by the expression shown. Unsurprisingly, this implies that the further X* lies from the sample mean, the larger will be the population variance of the prediction error.

   Variance of prediction error: σ²PE = σ²u [1 + 1/n + (X* − X̄)² / Σ(Xi − X̄)²]

14. It also implies, again unsurprisingly, that the larger the sample, the smaller will be the population variance of the prediction error, with a lower limit of σ²u.

15. Provided that the regression model assumptions are valid, b1 and b2 will tend to their true values as the sample becomes large, so the only source of error in the prediction will be u*, and by definition this has population variance σ²u.

16. The standard error of the prediction error is calculated as the square root of the expression for the population variance, replacing the variance of u with the estimate obtained when fitting the model to the sample:

   s.e.(PE) = √( s²u [1 + 1/n + (X* − X̄)² / Σ(Xi − X̄)²] )

17. Hence we are able to construct a confidence interval for a prediction. t_crit is the critical level of t, given the significance level selected and the number of degrees of freedom, and s.e. is the standard error of the prediction.

   P̂* − t_crit × s.e.  ≤  P*  ≤  P̂* + t_crit × s.e.
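The variance formula and the confidence interval can be computed directly. This is a sketch under assumed values (β1 = 10, β2 = 2, σu = 1, n = 30, X* = 7, none of which come from the slides); t_crit = 2.048 is the 5% two-sided critical value for n − 2 = 28 degrees of freedom.

```python
# Sketch: estimate s.e.(PE) using the slide's formula
#   var(PE) = s_u^2 * [1 + 1/n + (X* - Xbar)^2 / sum((Xi - Xbar)^2)]
# and build the confidence interval for the prediction.
import numpy as np

rng = np.random.default_rng(2)
n, X_star = 30, 7.0                         # assumed sample size and X*
X = rng.uniform(0, 10, n)
P = 10.0 + 2.0 * X + rng.normal(0, 1.0, n)  # assumed true model

b2 = np.cov(X, P, bias=True)[0, 1] / np.var(X)
b1 = P.mean() - b2 * X.mean()
resid = P - (b1 + b2 * X)
s2_u = resid @ resid / (n - 2)              # estimate of sigma_u^2

var_pe = s2_u * (1 + 1/n + (X_star - X.mean())**2 / ((X - X.mean())**2).sum())
se_pe = np.sqrt(var_pe)

t_crit = 2.048                              # 5% two-sided, 28 degrees of freedom
P_hat = b1 + b2 * X_star
lower, upper = P_hat - t_crit * se_pe, P_hat + t_crit * se_pe
```

Note that var_pe always exceeds s²u, since the bracketed factor is greater than one; the extra terms shrink as n grows, matching slide 14.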

18. The confidence interval has been drawn as a function of X*. As we noted from the mathematical expression, it becomes wider, the greater the distance of X* from the sample mean.

   [Figure: upper and lower limits of the confidence interval for P*, plotted against X*]

19. With multiple explanatory variables, the expression for the prediction variance becomes complex. One point to note is that multicollinearity may not have an adverse effect on prediction precision, even if the estimates of the coefficients have large variances.

20. For simplicity, suppose that there are two explanatory variables, that both have positive true coefficients, and that they are positively correlated, the model being as shown, and that we are predicting the value of Y*, given values X2* and X3*.

   Model:      Y = β1 + β2X2 + β3X3 + u
   Prediction: Ŷ* = b1 + b2X2* + b3X3*

   Suppose X2 and X3 are positively correlated, β2 > 0, β3 > 0. Then cov(b2, b3) < 0. If b2 is overestimated, b3 is likely to be underestimated. (b2X2* + b3X3*) may be a good estimate of (β2X2* + β3X3*). Similarly for other combinations.

21. Then if the effect of X2 is overestimated, so that b2 > β2, the effect of X3 is likely to be underestimated, with b3 < β3. As a consequence, the effects of the errors may to some extent cancel out, with the result that the linear combination (b2X2* + b3X3*) may be close to (β2X2* + β3X3*).

22. This will be illustrated with a simulation, with the model and data shown. We fit the model and make the prediction Ŷ* = b1 + b2X2* + b3X3*.

23. Since X2 and X3 are virtually identical, this may be approximated as Ŷ* ≈ b1 + (b2 + b3)X2*. Thus the predictive accuracy depends on how close (b2 + b3) is to (β2 + β3), that is, to 5.

24. The figure shows the distributions of b2 and b3 for 10 million samples. Their distributions have relatively wide variances around their true values, as should be expected, given the multicollinearity. The actual standard deviations of their distributions are both 0.45.

25. The figure also shows the distribution of their sum. As anticipated, it is distributed around 5, but with a much lower standard deviation, 0.04, despite the multicollinearity affecting the point estimates of the individual coefficients.
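A small-scale version of this simulation can be sketched as follows. The slides do not report the individual coefficient values or the sample design, so β2 = 2, β3 = 3 (summing to 5), the near-duplicate construction of X3, and the reduced replication count are all assumptions for illustration; the exact standard deviations will therefore differ from the slides' 0.45 and 0.04, but the qualitative pattern is the same.

```python
# Sketch: X3 is almost identical to X2, so b2 and b3 are individually
# imprecise, but their sum estimates (beta2 + beta3) = 5 precisely.
import numpy as np

rng = np.random.default_rng(3)
n, reps = 30, 2000                        # reduced from 10 million for speed
beta1, beta2, beta3 = 10.0, 2.0, 3.0      # assumed; beta2 + beta3 = 5

b2s, b3s = [], []
for _ in range(reps):
    X2 = rng.uniform(0, 10, n)
    X3 = X2 + rng.normal(0, 0.05, n)      # near-perfect collinearity
    Y = beta1 + beta2 * X2 + beta3 * X3 + rng.normal(0, 1.0, n)
    Xmat = np.column_stack([np.ones(n), X2, X3])
    b = np.linalg.lstsq(Xmat, Y, rcond=None)[0]   # OLS via least squares
    b2s.append(b[1])
    b3s.append(b[2])

b2s, b3s = np.array(b2s), np.array(b3s)
print(b2s.std(), b3s.std(), (b2s + b3s).std())
# individual standard deviations are large; that of the sum is far smaller
```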

26. Copyright Christopher Dougherty 2012. These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 3.6 of C. Dougherty, Introduction to Econometrics, fourth edition 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre http://www.oup.com/uk/orc/bin/9780199567089/. Individuals studying econometrics on their own who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx or the University of London International Programmes distance learning course EC2020 Elements of Econometrics www.londoninternational.ac.uk/lse. 2012.12.03

