
1 Christopher Dougherty EC220 - Introduction to econometrics (chapter 10) Slideshow: maximum likelihood estimation of regression coefficients Original citation: Dougherty, C. (2012) EC220 - Introduction to econometrics (chapter 10). [Teaching Resource] © 2012 The Author This version available at: http://learningresources.lse.ac.uk/136/ Available in LSE Learning Resources Online: May 2012 This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. This license allows the user to remix, tweak, and build upon the work even for commercial purposes, as long as the user credits the author and licenses their new creations under the identical terms. http://creativecommons.org/licenses/by-sa/3.0/ http://learningresources.lse.ac.uk/

2 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS We will now apply the maximum likelihood principle to regression analysis, using the simple linear model Y = β1 + β2X + u. [Figure: the line Y = β1 + β2X plotted in the (X, Y) plane, with the intercept β1 and the fitted value β1 + β2Xi marked at X = Xi.]

3 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS The black marker shows the value that Y would have if X were equal to Xi and if there were no disturbance term.

4 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS However, we will assume that there is a disturbance term in the model and that it has a normal distribution as shown. [Figure: the same diagram with a normal density curve centered on β1 + β2Xi.]
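The assumption being described can be written out explicitly; the following is a reconstruction of the standard assumption rather than a transcription of the slide:

\[
u_i \sim N(0, \sigma^2), \qquad \text{so that} \qquad Y_i \mid X_i \sim N(\beta_1 + \beta_2 X_i,\ \sigma^2).
\]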

5 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Relative to the black marker, the curve represents the ex ante distribution for u, that is, its potential distribution before the observation is generated. Ex post, of course, it is fixed at some specific value.

6 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Relative to the horizontal axis, the curve also represents the ex ante distribution for Y for that observation, that is, conditional on X = Xi.

7 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Potential values of Y close to β1 + β2Xi will have relatively large densities...

8 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS... while potential values of Y relatively far from β1 + β2Xi will have small ones.

9 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS The mean value of the distribution of Yi is β1 + β2Xi. Its standard deviation is σ, the standard deviation of the disturbance term.

10 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Hence the density function for the ex ante distribution of Yi is as shown.
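The density "shown" on the slide is presumably the normal density with this mean and standard deviation, which in the present notation is

\[
f(Y_i) = \frac{1}{\sigma\sqrt{2\pi}}\,
\exp\!\left(-\frac{(Y_i - \beta_1 - \beta_2 X_i)^2}{2\sigma^2}\right).
\]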

11 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS The joint density function for the observations on Y is the product of their individual densities.
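Written out, this joint density is

\[
f(Y_1, \dots, Y_n) = \prod_{i=1}^{n}
\frac{1}{\sigma\sqrt{2\pi}}\,
\exp\!\left(-\frac{(Y_i - \beta_1 - \beta_2 X_i)^2}{2\sigma^2}\right).
\]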

12 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Now, taking β1, β2 and σ as our choice variables, and taking the data on Y and X as given, we can re-interpret this function as the likelihood function for β1, β2, and σ.

13 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS We will choose β1, β2, and σ so as to maximize the likelihood, given the data on Y and X. As usual, it is easier to do this indirectly, maximizing the log-likelihood instead.
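Regarded as a function of the parameters, the same expression is the likelihood, and the log-likelihood to be maximized is therefore

\[
\log L(\beta_1, \beta_2, \sigma)
= \sum_{i=1}^{n} \log\!\left[
\frac{1}{\sigma\sqrt{2\pi}}\,
\exp\!\left(-\frac{(Y_i - \beta_1 - \beta_2 X_i)^2}{2\sigma^2}\right)
\right].
\]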

14 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS As usual, the first step is to decompose the expression as the sum of the logarithms of the factors.

15 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Then we split the logarithm of each factor into two components. The first component is the same in each case.

16 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Hence the log-likelihood simplifies as shown.
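The simplification described over the last three slides presumably takes the following form: each logarithm splits into a common constant term and a squared-deviation term, so that

\[
\log L = n \log \frac{1}{\sigma\sqrt{2\pi}} - \frac{Z}{2\sigma^2},
\qquad\text{where } Z = \sum_{i=1}^{n} (Y_i - \beta_1 - \beta_2 X_i)^2.
\]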

17 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS To maximize the log-likelihood, we need to minimize Z. But choosing estimators of β1 and β2 to minimize Z is exactly what we did when we derived the least squares regression coefficients.

18 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Thus, for this regression model, the maximum likelihood estimators of β1 and β2 are identical to the least squares estimators.
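This equivalence is easy to check numerically. The Python sketch below is not part of the original slides; the simulated data and variable names are purely illustrative. It maximizes the log-likelihood with a general-purpose optimizer and compares the result with the closed-form least squares estimates:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.5, n)   # hypothetical data: b1=2, b2=0.5, sigma=1.5

def neg_log_likelihood(params):
    b1, b2, log_sigma = params                # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    z = np.sum((y - b1 - b2 * x) ** 2)        # Z: sum of squared deviations from the line
    return n * np.log(sigma) + 0.5 * n * np.log(2.0 * np.pi) + z / (2.0 * sigma ** 2)

result = minimize(neg_log_likelihood, x0=[0.0, 0.0, 0.0])
b1_ml, b2_ml = result.x[0], result.x[1]
sigma2_ml = np.exp(result.x[2]) ** 2          # ML variance estimate, equal to Z/n at the optimum

# Closed-form least squares estimates for comparison
b2_ols = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1_ols = y.mean() - b2_ols * x.mean()

print(b1_ml, b2_ml)     # agrees with the OLS values below to optimizer tolerance
print(b1_ols, b2_ols)
print(sigma2_ml)        # sum of squared residuals divided by n
```

Because Z is the only part of the log-likelihood that involves β1 and β2, the two sets of estimates coincide up to the optimizer's tolerance.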

19 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS As a consequence, Z will be the sum of the squares of the least squares residuals.

20 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS To obtain the maximum likelihood estimator of σ, it is convenient to rearrange the log-likelihood function as shown.
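The rearranged form referred to here is, in standard notation,

\[
\log L = -n \log \sigma - \frac{n}{2}\log 2\pi - \frac{Z}{2\sigma^2}.
\]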

21 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Differentiating it with respect to σ, we obtain the expression shown.
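With Z evaluated at the least squares estimates, so that it no longer depends on the remaining parameter, the derivative with respect to σ is

\[
\frac{\partial \log L}{\partial \sigma} = -\frac{n}{\sigma} + \frac{Z}{\sigma^{3}}.
\]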

22 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS The first-order condition for a maximum requires this to be equal to zero. Hence the maximum likelihood estimator of the variance is the sum of the squares of the residuals divided by n.
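Setting the derivative equal to zero and solving gives

\[
\hat{\sigma}^2 = \frac{Z}{n} = \frac{1}{n}\sum_{i=1}^{n} e_i^2,
\]

where the e_i are the least squares residuals.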

23 MAXIMUM LIKELIHOOD ESTIMATION OF REGRESSION COEFFICIENTS Note that this estimator is biased in finite samples. To obtain an unbiased estimator, we should divide by n – k, where k is the number of parameters, in this case 2. However, the bias disappears as the sample size becomes large.
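The size of the bias can be made explicit for this model: since E(Z) = (n – 2)σ²,

\[
E\!\left(\hat{\sigma}^2\right) = \frac{n-2}{n}\,\sigma^2,
\]

so dividing Z by n – 2 instead of n gives an unbiased estimator, and the factor (n – 2)/n tends to 1 as n increases.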

24 Copyright Christopher Dougherty 2011. These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 10.6 of C. Dougherty, Introduction to Econometrics, fourth edition 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre http://www.oup.com/uk/orc/bin/9780199567089/. Individuals studying econometrics on their own and who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx or the University of London International Programmes distance learning course 20 Elements of Econometrics www.londoninternational.ac.uk/lse.

