BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL

Introduction to Econometrics, 5th edition

Economists are often interested in the factors behind the decision-making of individuals or enterprises. Why do some people go to college while others do not? Why do some women enter the labor force while others do not? Why do some people buy houses while others rent? Why do some people migrate while others stay put?

The models that have been developed for this purpose are known as qualitative response or binary choice models. The outcome, which we will denote Y, is assigned a value of 1 if the event occurs and 0 otherwise.

Models with more than two possible outcomes have also been developed, but we will confine our attention to binary choice models.

The simplest binary choice model is the linear probability model where, as the name implies, the probability of the event occurring, p, is assumed to be a linear function of a set of explanatory variables. With a single explanatory variable, pi = β1 + β2Xi.
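As a concrete sketch (not from the text), a model of this kind can be fitted by ordinary least squares on a 0/1 outcome. The data-generating process, sample size, and variable names below are all assumptions made for illustration:

```python
# Minimal sketch: fitting a linear probability model p = b1 + b2*X by OLS
# on a binary outcome, using simulated data. Everything here (coefficients,
# score range, sample size) is hypothetical, chosen only for illustration.
import random

random.seed(1)

# Simulate: higher X raises the probability that Y = 1
X = [random.uniform(20, 70) for _ in range(500)]
Y = [1 if random.random() < min(1.0, 0.2 + 0.012 * x) else 0 for x in X]

# Closed-form OLS with one regressor: b2 = cov(X, Y) / var(X)
n = len(X)
xbar = sum(X) / n
ybar = sum(Y) / n
b2 = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y)) / \
     sum((x - xbar) ** 2 for x in X)
b1 = ybar - b2 * xbar

# Each fitted value is read as an estimated probability that Y = 1
fitted = [b1 + b2 * x for x in X]
print(round(b1, 3), round(b2, 3))
```

The slope estimate is interpreted exactly as in the text: the change in the probability of the event per unit change in X.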

[Figure: the line p = β1 + β2Xi plotted against X, with horizontal reference lines at 0 and 1.] Graphically, the relationship is as shown, if there is just one explanatory variable.

Of course p is unobservable. One has data only on the outcome, Y. In the linear probability model, Y, effectively a dummy variable, is used as the dependent variable.

As an illustration, we will take the question: Why do some people graduate from high school while others drop out? We will define a variable GRAD which is equal to 1 if the individual graduated from high school, and 0 otherwise.

. g GRAD = 0
. replace GRAD = 1 if S > 11
(509 real changes made)
. reg GRAD ASVABC
[regression output: Number of obs, F(1, 538), R-squared, and the coefficient table for ASVABC and _cons]

The Stata output above shows the construction of the variable GRAD. It is first set to 0 for all respondents, and then changed to 1 for those who had more than 11 years of schooling.

The result of regressing GRAD on ASVABC suggests that every additional point on the ASVABC score increases the probability of graduating by 0.007, that is, by 0.7 percentage points.

The intercept has no sensible interpretation. Taken literally, it suggests that a respondent with an ASVABC score of 0 would have a 58% probability of graduating. However, a score of 0 is not possible.

Unfortunately, the linear probability model has some serious shortcomings. First, there are problems with the disturbance term.

As usual, the value of the dependent variable Yi in observation i has a nonstochastic component and a random component. The nonstochastic component depends on Xi and the parameters. The random component is the disturbance term.

The nonstochastic component in observation i is its expected value in that observation. This is simple to compute, because Yi can take only two values: 1 with probability pi and 0 with probability (1 – pi). The expected value in observation i is therefore E(Yi) = 1 × pi + 0 × (1 – pi) = pi = β1 + β2Xi.

This means that we can rewrite the model as Yi = β1 + β2Xi + ui.

The probability function is thus also the nonstochastic component of the relationship between Y and X.

In observation i, for Yi to be 1, ui must be (1 – β1 – β2Xi). For Yi to be 0, ui must be (–β1 – β2Xi).
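A quick numerical check, with hypothetical values β1 = 0.2, β2 = 0.01 and x = 30 (all assumed purely for illustration): weighting the two possible disturbance values by their probabilities confirms that the disturbance has expectation zero.

```python
# Sketch: for a given x, the disturbance u = Y - (b1 + b2*x) takes only
# two values, and its probability-weighted average is zero.
# The coefficients and x below are hypothetical.
b1, b2, x = 0.2, 0.01, 30
p = b1 + b2 * x               # probability that Y = 1
u_if_one = 1 - b1 - b2 * x    # value of u when Y = 1
u_if_zero = -b1 - b2 * x      # value of u when Y = 0

# E(u) = p*(1 - p) + (1 - p)*(-p) = 0
expected_u = p * u_if_one + (1 - p) * u_if_zero
print(u_if_one, u_if_zero, expected_u)
```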

The two possible disturbance values correspond to observations with Y = 1 and Y = 0 respectively. Since u can take only these two values, it does not have a normal distribution; its distribution is not even continuous. As a consequence, the standard errors and test statistics are invalid.

Further, it can be shown that the population variance of the disturbance term in observation i is given by (β1 + β2Xi)(1 – β1 – β2Xi). This changes with Xi, and so the distribution of the disturbance term is heteroscedastic.
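The variance formula can be sketched numerically. The coefficients and x values below are hypothetical; the point is only that p(1 – p) changes with x, which is what heteroscedasticity means here.

```python
# Sketch of the variance formula above: the disturbance variance in the
# LPM is p*(1 - p), the Bernoulli variance at the fitted probability p.
# Coefficients and x values are assumed for illustration.
def lpm_disturbance_variance(b1, b2, x):
    p = b1 + b2 * x           # fitted probability at this x
    return p * (1 - p)        # Bernoulli variance

b1, b2 = 0.2, 0.01            # hypothetical coefficients
variances = [lpm_disturbance_variance(b1, b2, x) for x in (10, 30, 60)]
print(variances)              # differs across x: heteroscedastic
```

The variance is largest where the fitted probability is near 0.5 and shrinks as it approaches 0 or 1.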

Yet another shortcoming of the linear probability model is that it may predict probabilities of more than 1: for sufficiently large values of X, the fitted line rises above 1. It may also predict probabilities of less than 0.

The Stata command for saving the fitted values from a regression is predict, followed by the name that you wish to give to the fitted values. Here we call them PROB:

. predict PROB

. tab PROB if PROB > 1
[frequency table: Fitted values | Freq. Percent Cum.; middle rows omitted]

tab is the Stata command for tabulating the values of a variable, and for cross-tabulating two or more variables. We see that there are 126 observations where the fitted value is greater than 1.

. tab PROB if PROB < 0
no observations

In this example there were no fitted values of less than 0.
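The same check can be sketched outside Stata. The coefficients and score range below are hypothetical, not the estimates from the regression above; the sketch simply counts fitted "probabilities" that fall outside [0, 1].

```python
# Sketch mirroring the Stata checks above: count out-of-range fitted
# values from an LPM. Coefficients and score range are assumed.
b1, b2 = 0.58, 0.007          # hypothetical LPM estimates
scores = range(0, 131)        # hypothetical test-score range
fitted = [b1 + b2 * s for s in scores]

above_one = sum(1 for p in fitted if p > 1)
below_zero = sum(1 for p in fitted if p < 0)
print(above_one, below_zero)
```

With a positive slope and a positive intercept, only the upper bound is violated here, which parallels the finding in the text.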

The main advantage of the linear probability model over logit and probit analysis, the alternatives considered in the next two sequences, is that it is much easier to fit. For this reason it used to be recommended for initial, exploratory work.

However, this consideration is no longer relevant now that computers are so fast and powerful, and logit and probit estimation are standard features of regression packages.