BINARY CHOICE MODELS: LOGIT ANALYSIS

EC220 - Introduction to econometrics (chapter 10)

BINARY CHOICE MODELS: LOGIT ANALYSIS [Figure: the fitted line p = b1 + b2X of the linear probability model plotted against X, extending above p = 1 and below p = 0] The linear probability model may make the nonsense predictions that an event will occur with probability greater than 1 or less than 0. 1

BINARY CHOICE MODELS: LOGIT ANALYSIS The usual way of avoiding this problem is to hypothesize that the probability is a sigmoid (S-shaped) function of Z, F(Z), where Z is a function of the explanatory variables. 2

BINARY CHOICE MODELS: LOGIT ANALYSIS Several mathematical functions are sigmoid in character. One is the logistic function, F(Z) = 1 / (1 + e^(–Z)). As Z goes to infinity, e^(–Z) goes to 0 and p goes to 1 (but cannot exceed 1). As Z goes to minus infinity, e^(–Z) goes to infinity and p goes to 0 (but cannot be below 0). 3
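The boundedness of the logistic function is easy to verify numerically. A minimal sketch in Python (not part of the original slides):

```python
import math

def F(Z):
    """Logistic function: p = 1 / (1 + e^(-Z))."""
    return 1.0 / (1.0 + math.exp(-Z))

# As Z grows large, e^(-Z) -> 0 and p -> 1; as Z -> minus infinity, p -> 0.
print(F(10))   # close to 1, but never reaches it
print(F(-10))  # close to 0, but never goes below it
print(F(0))    # exactly 0.5 at the midpoint
```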

BINARY CHOICE MODELS: LOGIT ANALYSIS The model implies that, for values of Z less than –2, the probability of the event occurring is low and insensitive to variations in Z. Likewise, for values greater than 2, the probability is high and insensitive to variations in Z. 4

BINARY CHOICE MODELS: LOGIT ANALYSIS To obtain an expression for the sensitivity, we differentiate F(Z) with respect to Z, using the general rule for differentiating a quotient: d(u/v)/dZ = (v du/dZ – u dv/dZ) / v². 5

BINARY CHOICE MODELS: LOGIT ANALYSIS We apply the rule to the expression for F(Z). 6
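Applying the quotient rule to F(Z) = 1 / (1 + e^(–Z)) gives the marginal function (a reconstruction of the equation that appeared on the slide):

```latex
f(Z) = \frac{dF}{dZ}
     = \frac{(1+e^{-Z})\cdot 0 \;-\; 1\cdot\left(-e^{-Z}\right)}{(1+e^{-Z})^{2}}
     = \frac{e^{-Z}}{(1+e^{-Z})^{2}}
     = F(Z)\,\bigl(1 - F(Z)\bigr)
```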

BINARY CHOICE MODELS: LOGIT ANALYSIS The sensitivity, as measured by the slope, is greatest when Z is 0. The marginal function, f(Z), reaches a maximum at this point. 7
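The claim that the slope is greatest at Z = 0 can be checked directly. Using f(Z) = F(Z)(1 – F(Z)), the maximum value is 0.25 (a sketch, not from the slides):

```python
import math

def f(Z):
    """Marginal function of the logistic: f(Z) = F(Z) * (1 - F(Z))."""
    F = 1.0 / (1.0 + math.exp(-Z))
    return F * (1.0 - F)

# f is symmetric about Z = 0 and maximized there, where it equals 0.25.
print(f(0))           # 0.25
print(f(2), f(-2))    # equal to each other, and smaller than f(0)
```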

BINARY CHOICE MODELS: LOGIT ANALYSIS For a nonlinear model of this kind, maximum likelihood estimation is much superior to the use of the least squares principle for estimating the parameters. More details concerning its application are given at the end of this sequence. 8

BINARY CHOICE MODELS: LOGIT ANALYSIS We will apply this model to the graduating from high school example described in the linear probability model sequence. We will begin by assuming that ASVABC is the only relevant explanatory variable, so Z is a simple function of it. 9

BINARY CHOICE MODELS: LOGIT ANALYSIS

. logit GRAD ASVABC

Iteration 0:   log likelihood = -118.67769
Iteration 1:   log likelihood = -104.45292
Iteration 2:   log likelihood = -97.135677
Iteration 3:   log likelihood = -96.887294
Iteration 4:   log likelihood = -96.886017

The Stata command is logit, followed by the outcome variable and the explanatory variable(s). Maximum likelihood estimation is an iterative process, so the first part of the output will be like that shown. 10

BINARY CHOICE MODELS: LOGIT ANALYSIS

. logit GRAD ASVABC

Iteration 0:   log likelihood = -118.67769
Iteration 1:   log likelihood = -104.45292
Iteration 2:   log likelihood = -97.135677
Iteration 3:   log likelihood = -96.887294
Iteration 4:   log likelihood = -96.886017

Logit estimates                                   Number of obs   =        540
                                                  LR chi2(1)      =      43.58
                                                  Prob > chi2     =     0.0000
Log likelihood = -96.886017                       Pseudo R2       =     0.1836

------------------------------------------------------------------------------
        GRAD |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      ASVABC |   .1313626    .022428     5.86   0.000     .0874045    .1753206
       _cons |  -3.240218   .9444844    -3.43   0.001    -5.091373   -1.389063

In this case the coefficients of the Z function are as shown. 11

BINARY CHOICE MODELS: LOGIT ANALYSIS Since there is only one explanatory variable, we can draw the probability function and marginal effect function as functions of ASVABC. 12

BINARY CHOICE MODELS: LOGIT ANALYSIS We see that ASVABC has its greatest effect on graduating when it is below 40, that is, in the lower ability range. Any individual with a score above the average (50) is almost certain to graduate. 13

BINARY CHOICE MODELS: LOGIT ANALYSIS The t statistic indicates that the effect of variations in ASVABC on the probability of graduating from high school is highly significant. 14

BINARY CHOICE MODELS: LOGIT ANALYSIS Strictly speaking, the t statistic is valid only for large samples, so the normal distribution is the reference distribution. For this reason the statistic is denoted z in the Stata output. This z has nothing to do with our Z function. 15

BINARY CHOICE MODELS: LOGIT ANALYSIS The coefficients of the Z function do not have any direct intuitive interpretation. 16

BINARY CHOICE MODELS: LOGIT ANALYSIS However, we can use them to quantify the marginal effect of a change in ASVABC on the probability of graduating. We will do this theoretically for the general case where Z is a function of several explanatory variables. 17

BINARY CHOICE MODELS: LOGIT ANALYSIS Since p is a function of Z, and Z is a function of the X variables, the marginal effect of Xi on p can be written as the product of the marginal effect of Z on p and the marginal effect of Xi on Z. 18

BINARY CHOICE MODELS: LOGIT ANALYSIS We have already derived an expression for dp/dZ. The marginal effect of Xi on Z is given by its b coefficient. 19

BINARY CHOICE MODELS: LOGIT ANALYSIS Hence we obtain an expression for the marginal effect of Xi on p. 20

BINARY CHOICE MODELS: LOGIT ANALYSIS The marginal effect is not constant because it depends on the value of Z, which in turn depends on the values of the explanatory variables. A common procedure is to evaluate it for the sample means of the explanatory variables. 21

BINARY CHOICE MODELS: LOGIT ANALYSIS

. sum GRAD ASVABC

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
        GRAD |       540    .9425926    .2328351          0          1
      ASVABC |       540    51.36271    9.567646   25.45931   66.07963

The sample mean of ASVABC in this sample is 51.3627. 22

BINARY CHOICE MODELS: LOGIT ANALYSIS When evaluated at the sample mean of ASVABC, Z is equal to 3.5089. 23

BINARY CHOICE MODELS: LOGIT ANALYSIS e^(–Z) is 0.0299. Hence F(Z) is 0.971. There is a 97.1 percent probability that an individual with average ASVABC will graduate from high school. 24

BINARY CHOICE MODELS: LOGIT ANALYSIS f(Z) is 0.0282. 25

BINARY CHOICE MODELS: LOGIT ANALYSIS The marginal effect, evaluated at the mean, is therefore 0.004. This implies that a one-point increase in ASVABC would increase the probability of graduating from high school by 0.4 percentage points. 26
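The arithmetic on these slides can be reproduced from the estimated coefficients (a sketch; small differences from the slide's intermediate values come from the precision of the reported coefficients):

```python
import math

b1, b2 = -3.240218, 0.1313626   # logit coefficients from the Stata output
mean_asvabc = 51.36271          # sample mean of ASVABC

Z = b1 + b2 * mean_asvabc                      # roughly 3.51
F = 1.0 / (1.0 + math.exp(-Z))                 # probability of graduating at the mean
f = math.exp(-Z) / (1.0 + math.exp(-Z)) ** 2   # marginal function
marginal_effect = f * b2                       # dp/dASVABC at the mean

print(round(F, 3))               # 0.971
print(round(marginal_effect, 3)) # 0.004
```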

BINARY CHOICE MODELS: LOGIT ANALYSIS [Figure: probability and marginal effect plotted against ASVABC; at the mean score of 51.36, F(Z) = 0.971 and the marginal effect is 0.004] In this example, the marginal effect at the mean of ASVABC is very low. The reason is that anyone with an average score is almost certain to graduate anyway, so an increase in the score has little effect. 27

BINARY CHOICE MODELS: LOGIT ANALYSIS To show that the marginal effect varies, we will also calculate it for ASVABC equal to 30. For this value of ASVABC, the probability of graduating is only 66.9 percent. 28

BINARY CHOICE MODELS: LOGIT ANALYSIS For ASVABC equal to 30, a one-point increase in ASVABC increases the probability of graduating by 2.9 percentage points. 29

BINARY CHOICE MODELS: LOGIT ANALYSIS [Figure: at a score of 30, F(Z) = 0.669 and the marginal effect is 0.029] For an individual with a score of 30, with only a 67 percent probability of graduating, an increase in the score has a relatively large impact. 30
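Repeating the same calculation at ASVABC = 30 confirms the much larger marginal effect in the lower ability range (same coefficients as in the sketch above):

```python
import math

b1, b2 = -3.240218, 0.1313626   # logit coefficients from the Stata output

Z = b1 + b2 * 30                # evaluate at a low ASVABC score
F = 1.0 / (1.0 + math.exp(-Z))  # probability of graduating at this score
marginal_effect = (math.exp(-Z) / (1.0 + math.exp(-Z)) ** 2) * b2

print(round(F, 2))               # about 0.67
print(round(marginal_effect, 3)) # 0.029
```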

BINARY CHOICE MODELS: LOGIT ANALYSIS

. logit GRAD ASVABC SM SF MALE

Iteration 0:   log likelihood = -118.67769
Iteration 1:   log likelihood = -104.73493
Iteration 2:   log likelihood = -97.080528
Iteration 3:   log likelihood = -96.806623
Iteration 4:   log likelihood = -96.804845
Iteration 5:   log likelihood = -96.804844

Logit estimates                                   Number of obs   =        540
                                                  LR chi2(4)      =      43.75
                                                  Prob > chi2     =     0.0000
Log likelihood = -96.804844                       Pseudo R2       =     0.1843

------------------------------------------------------------------------------
        GRAD |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      ASVABC |   .1329127   .0245718     5.41   0.000     .0847528    .1810726
          SM |   -.023178   .0868122    -0.27   0.789    -.1933267    .1469708
          SF |   .0122663   .0718876     0.17   0.865    -.1286307    .1531634
        MALE |   .1279654   .3989345     0.32   0.748    -.6539318    .9098627
       _cons |  -3.252373   1.065524    -3.05   0.002    -5.340761   -1.163985

Here is the output for a model with a somewhat better specification. 31

BINARY CHOICE MODELS: LOGIT ANALYSIS

. sum GRAD ASVABC SM SF MALE

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
        GRAD |       540    .9425926    .2328351          0          1
      ASVABC |       540    51.36271    9.567646   25.45931   66.07963
          SM |       540    11.57963    2.816456          0         20
          SF |       540    11.83704     3.53715          0         20
        MALE |       540          .5    .5004636          0          1

We will estimate the marginal effects, putting all the explanatory variables equal to their sample means. 32

BINARY CHOICE MODELS: LOGIT ANALYSIS

Logit: Marginal Effects
              mean        b    product
ASVABC       51.36    0.133      6.826
SM           11.58   –0.023     –0.269
SF           11.84    0.012      0.146
MALE          0.50    0.128      0.064
constant      1.00   –3.252     –3.252
Total                            3.514

The first step is to calculate Z when the X variables are equal to their sample means. 33

BINARY CHOICE MODELS: LOGIT ANALYSIS We then calculate f(Z). 34

BINARY CHOICE MODELS: LOGIT ANALYSIS

Logit: Marginal Effects
              mean        b    product    f(Z)    f(Z)b
ASVABC       51.36    0.133      6.826   0.028    0.004
SM           11.58   –0.023     –0.269   0.028   –0.001
SF           11.84    0.012      0.146   0.028    0.000
MALE          0.50    0.128      0.064   0.028    0.004
constant      1.00   –3.252     –3.252
Total                            3.514

The estimated marginal effects are f(Z) multiplied by the respective coefficients. We see that the effect of ASVABC is about the same as before. Mother's schooling has a negligible effect and father's schooling has no discernible effect at all. 35

BINARY CHOICE MODELS: LOGIT ANALYSIS Males have a 0.4 percentage point higher probability of graduating than females. These effects would all have been larger if they had been evaluated at a lower ASVABC score. 36
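The whole marginal-effects table can be reproduced in a few lines (a sketch using the coefficients and sample means from the output above):

```python
import math

coef = {'ASVABC': 0.1329127, 'SM': -0.023178, 'SF': 0.0122663,
        'MALE': 0.1279654, '_cons': -3.252373}
mean = {'ASVABC': 51.36271, 'SM': 11.57963, 'SF': 11.83704,
        'MALE': 0.5, '_cons': 1.0}

# Z at the sample means, then f(Z), then the marginal effects f(Z) * b.
Z = sum(coef[k] * mean[k] for k in coef)        # about 3.515
f = math.exp(-Z) / (1.0 + math.exp(-Z)) ** 2    # about 0.028
effects = {k: f * coef[k] for k in coef if k != '_cons'}

for k, v in effects.items():
    print(k, round(v, 3))
```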

BINARY CHOICE MODELS: LOGIT ANALYSIS Individuals who graduated: outcome probability This sequence will conclude with an outline explanation of how the model is fitted using maximum likelihood estimation. 37

BINARY CHOICE MODELS: LOGIT ANALYSIS Individuals who graduated: outcome probability In the case of an individual who graduated, the probability of that outcome is F(Z). We will give subscripts 1, ..., s to the individuals who graduated. 38

BINARY CHOICE MODELS: LOGIT ANALYSIS Individuals who graduated: outcome probability Individuals who did not graduate: outcome probability In the case of an individual who did not graduate, the probability of that outcome is 1 – F(Z). We will give subscripts s+1, ..., n to these individuals. 39

BINARY CHOICE MODELS: LOGIT ANALYSIS We choose b1 and b2 so as to maximize the joint probability of the outcomes, that is, F(Z1) × ... × F(Zs) × [1 – F(Zs+1)] × ... × [1 – F(Zn)]. There are no closed-form formulae for b1 and b2: they have to be determined iteratively by a trial-and-error process. 40
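A bare-bones illustration of the iterative fitting is gradient ascent on the log of the joint probability over a tiny made-up dataset. The data, learning rate, and iteration count here are all assumptions for illustration; statistical packages use faster Newton-type iterations, but the idea of improving b1 and b2 step by step is the same:

```python
import math

def F(Z):
    return 1.0 / (1.0 + math.exp(-Z))

# Tiny illustrative dataset: x is the explanatory variable, y the binary outcome.
xs = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 0, 1, 1, 1, 1]

def log_likelihood(b1, b2):
    # log of F(Z1) x ... x F(Zs) x [1 - F(Zs+1)] x ... x [1 - F(Zn)]
    ll = 0.0
    for x, y in zip(xs, ys):
        p = F(b1 + b2 * x)
        ll += math.log(p if y == 1 else 1.0 - p)
    return ll

# Gradient ascent: no closed-form solution exists, so adjust b1, b2 iteratively
# in the direction that increases the log-likelihood.
b1 = b2 = 0.0
for _ in range(2000):
    g1 = sum(y - F(b1 + b2 * x) for x, y in zip(xs, ys))
    g2 = sum((y - F(b1 + b2 * x)) * x for x, y in zip(xs, ys))
    b1 += 0.1 * g1
    b2 += 0.1 * g2

print(round(b2, 2), round(log_likelihood(b1, b2), 2))
```

Because the log-likelihood of the logit model is concave, this simple procedure climbs to the unique maximum from any starting point.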

Copyright Christopher Dougherty 2012.

These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author.

The content of this slideshow comes from Section 10.2 of C. Dougherty, Introduction to Econometrics, fourth edition 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre http://www.oup.com/uk/orc/bin/9780199567089/.

Individuals who are studying econometrics on their own and who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx or the University of London International Programmes distance learning course EC2020 Elements of Econometrics www.londoninternational.ac.uk/lse.

2012.12.07