Introduction to Econometrics, 5th edition


Dougherty, Introduction to Econometrics, 5th edition. Chapter 10: Binary Choice and Limited Dependent Variable Models, and Maximum Likelihood Estimation. © Christopher Dougherty, 2016. All rights reserved.

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL Why do some people go to college while others do not? Why do some women enter the labor force while others do not? Why do some people buy houses while others rent? Why do some people migrate while others stay put? Economists are often interested in the factors behind the decision-making of individuals or enterprises, examples being shown above. 1

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL The models that have been developed for this purpose are known as qualitative response or binary choice models, with the outcome, which we will denote Y, being assigned a value of 1 if the event occurs and 0 otherwise. 2

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL Models with more than two possible outcomes have also been developed, but we will confine our attention to binary choice models. 3

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL The simplest binary choice model is the linear probability model where, as the name implies, the probability of the event occurring, p, is assumed to be a linear function of a set of explanatory variables. 4
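In symbols, with a single explanatory variable X (the case drawn in the diagram that follows), the model is

$$ p_i = \Pr(Y_i = 1) = \beta_1 + \beta_2 X_i. $$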

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL [Figure: the line p = β1 + β2X plotted against X; the vertical axis measures Y and p, with 1 and the intercept β1 marked, and β1 + β2Xi the height of the line at Xi.] Graphically, the relationship is as shown, if there is just one explanatory variable. 5

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL Of course p is unobservable. One has data on only the outcome, Y. In the linear probability model, Y itself, in effect a dummy variable, is used as the dependent variable. 6

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL Why do some people graduate from high school while others drop out? As an illustration, we will take the question shown above. We will define a variable GRAD which is equal to 1 if the individual graduated from high school, and 0 otherwise. 7

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL

. gen GRAD = 0
. replace GRAD = 1 if S > 11
(452 real changes made)

The Stata output above shows the construction of the variable GRAD. It is first set to 0 for all respondents, and then changed to 1 for those who had more than 11 years of schooling. 8

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL

. gen GRAD = 0
. replace GRAD = 1 if S > 11
(452 real changes made)

. reg GRAD ASVABC

      Source |       SS       df       MS              Number of obs =     500
-------------+------------------------------           F(  1,   498) =   58.01
       Model |  4.52721798     1  4.52721798           Prob > F      =  0.0000
    Residual |   38.864782   498  .078041731           R-squared     =  0.1043
-------------+------------------------------           Adj R-squared =  0.1025
       Total |      43.392   499  .086957916           Root MSE      =  .27936

------------------------------------------------------------------------------
        GRAD |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      ASVABC |   .1060002   .0139173     7.62   0.000     .0786564     .133344
       _cons |     .87522   .0130523    67.06   0.000     .8495757    .9008643
------------------------------------------------------------------------------

Here is the result of regressing GRAD on ASVABC. It suggests that an increase of one unit in the ASVABC score increases the probability of graduating by 0.106, that is, by 10.6 percentage points. ASVABC is scaled so that it has mean 0 and its units are standard deviations. 9
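Written out, with the coefficients rounded, the fitted equation from this output is

$$ \widehat{GRAD} = 0.875 + 0.106\,ASVABC. $$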

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL The intercept implies that an individual with the mean ASVABC score, 0, would have an 88 percent probability of graduating. 10

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL Unfortunately, the linear probability model has some serious shortcomings. First, there are problems with the disturbance term. 11

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL As usual, the value of the dependent variable Yi in observation i has a nonstochastic component and a random component. The nonstochastic component depends on Xi and the parameters. The random component is the disturbance term. 12
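In symbols, the decomposition is

$$ Y_i = E(Y_i) + u_i, $$

where E(Y_i) is the nonstochastic component and u_i is the disturbance term.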

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL The nonstochastic component in observation i is its expected value in that observation. This is simple to compute, because Yi can take only two values. It is 1 with probability pi and 0 with probability (1 − pi). The expected value in observation i is therefore pi, that is, β1 + β2Xi. 13
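Explicitly, the calculation is

$$ E(Y_i) = 1 \times p_i + 0 \times (1 - p_i) = p_i = \beta_1 + \beta_2 X_i. $$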

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL This means that we can rewrite the model as shown below. 14
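$$ Y_i = \beta_1 + \beta_2 X_i + u_i $$

with the disturbance term u_i now restricted to two possible values in each observation.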

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL [Figure: the same line β1 + β2X plotted against X, with Y, p on the vertical axis.] The probability function is thus also the nonstochastic component of the relationship between Y and X. 15

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL In observation i, for Yi to be 1, ui must be (1 − β1 − β2Xi). For Yi to be 0, ui must be (−β1 − β2Xi). 16

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL [Figure: for a given Xi, observation A lies at Y = 1, a distance 1 − β1 − β2Xi above the line, and observation B lies at Y = 0, a distance β1 + β2Xi below it.] The two possible values, which give rise to the observations A and B, are illustrated in the diagram. Since u does not have a normal distribution, the standard errors and test statistics are invalid. Its distribution is not even continuous. 17

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL [Figure: same diagram as above.] Further, it can be shown that the population variance of the disturbance term in observation i is given by (β1 + β2Xi)(1 − β1 − β2Xi). This changes with Xi, and so the distribution is heteroskedastic. 18
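The variance follows directly from the two-point distribution of u_i. Writing p_i = β1 + β2X_i, the disturbance is (1 − p_i) with probability p_i and (−p_i) with probability (1 − p_i), so E(u_i) = 0 and

$$ \sigma_{u_i}^2 = p_i(1 - p_i)^2 + (1 - p_i)p_i^2 = p_i(1 - p_i) = (\beta_1 + \beta_2 X_i)(1 - \beta_1 - \beta_2 X_i). $$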

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL [Figure: same diagram as above, with the fitted line rising above Y = 1 at high values of X.] Yet another shortcoming of the linear probability model is that it may predict probabilities of more than 1, as shown here. It may also predict probabilities less than 0. 19
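For example, using the estimates obtained above, a respondent whose ASVABC score is two standard deviations above the mean would have a fitted 'probability' of

$$ 0.875 + 0.106 \times 2 = 1.087, $$

which is greater than 1.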

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL

. predict PROB

The Stata command for saving the fitted values from a regression is predict, followed by the name that you wish to give to the fitted values. We are calling them PROB. 20

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL

. tab PROB if PROB > 1

     Fitted |
     values |      Freq.     Percent        Cum.
------------+-----------------------------------
   1.001353 |          1        1.30        1.30
   1.001776 |          1        1.30        2.60
   1.002197 |          1        1.30        3.90
   1.004591 |          1        1.30        5.19
  *********************************************
    1.13173 |          1        1.30       96.10
   1.139848 |          1        1.30       97.40
   1.144364 |          1        1.30       98.70
   1.155131 |          1        1.30      100.00
------------+-----------------------------------
      Total |         77      100.00

tab is the Stata command for tabulating the values of a variable, and for cross-tabulating two or more variables. We see that there are 77 observations where the fitted value is greater than 1. (The middle rows of the table have been omitted.) 21

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL

. tab PROB if PROB < 0
no observations

In this example there were no fitted values of less than 0. 22

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL The main advantage of the linear probability model over logit and probit analysis, the alternatives considered in the next two sequences, is that it is much easier to fit. For this reason it used to be recommended for initial, exploratory work. 23

BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL However, this consideration is no longer relevant. Logit and probit are now standard features of regression applications. 24
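As a minimal sketch (the variable names PROBL and PROBP are our own choices, and it assumes the GRAD and ASVABC data used above are still in memory), the logit and probit counterparts of the earlier regression can be fitted, and their fitted probabilities saved and compared with those of the linear probability model, with:

. logit GRAD ASVABC
. predict PROBL, pr
. probit GRAD ASVABC
. predict PROBP, pr
. summarize PROB PROBL PROBP

Unlike the fitted values from the linear probability model, PROBL and PROBP are guaranteed to lie between 0 and 1.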

Copyright Christopher Dougherty 2016. These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 10.1 of C. Dougherty, Introduction to Econometrics, fifth edition 2016, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre http://www.oxfordtextbooks.co.uk/orc/dougherty5e/. Individuals studying econometrics on their own who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx or the University of London International Programmes distance learning course 20 Elements of Econometrics www.londoninternational.ac.uk/lse. 2016.05.18