BINARY CHOICE MODELS: LOGIT ANALYSIS

1 BINARY CHOICE MODELS: LOGIT ANALYSIS The linear probability model may make the nonsense predictions that an event will occur with probability greater than 1 or less than 0. [Figure: the fitted line p = β1 + β2Xi plotted against Xi, with predicted probabilities rising above 1 and falling below 0.]

2 The usual way of avoiding this problem is to hypothesize that the probability is a sigmoid (S-shaped) function of Z, F(Z), where Z is a function of the explanatory variables.

3 Several mathematical functions are sigmoid in character. One is the logistic function, F(Z) = 1 / (1 + e^(-Z)). As Z goes to infinity, e^(-Z) goes to 0 and p goes to 1 (but cannot exceed 1). As Z goes to minus infinity, e^(-Z) goes to infinity and p goes to 0 (but cannot fall below 0).
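These limiting properties are easy to verify numerically. The sketch below is a minimal Python illustration of the logistic function; it is not part of the original slides.

```python
import numpy as np

def logistic(z):
    """Logistic function: p = F(Z) = 1 / (1 + e^(-Z))."""
    return 1.0 / (1.0 + np.exp(-z))

# As Z grows large, e^(-Z) -> 0 and p -> 1; as Z -> minus infinity, p -> 0.
print(logistic(10.0))   # close to 1
print(logistic(-10.0))  # close to 0
print(logistic(0.0))    # exactly 0.5
```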

4 The model implies that, for values of Z less than -2, the probability of the event occurring is low and insensitive to variations in Z. Likewise, for values of Z greater than 2, the probability is high and insensitive to variations in Z.

5 To obtain an expression for the sensitivity, we differentiate F(Z) with respect to Z. Applying the general rule for differentiating a quotient to F(Z) = 1 / (1 + e^(-Z)) gives the marginal function f(Z) = e^(-Z) / (1 + e^(-Z))^2.

6 The sensitivity, as measured by the slope, is greatest when Z is 0. The marginal function, f(Z), reaches a maximum at this point.
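This maximum can be checked numerically. The sketch below evaluates the marginal function f(Z) = e^(-Z) / (1 + e^(-Z))^2 over a grid (a Python illustration, not from the slides):

```python
import numpy as np

def f(z):
    """Marginal function: f(Z) = dF/dZ = e^(-Z) / (1 + e^(-Z))^2,
    which also equals F(Z) * (1 - F(Z))."""
    return np.exp(-z) / (1.0 + np.exp(-z)) ** 2

z = np.linspace(-5.0, 5.0, 1001)
slopes = f(z)
print(z[np.argmax(slopes)])  # the slope peaks at Z = 0
print(f(0.0))                # maximum slope: 0.25
```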

7 For a nonlinear model of this kind, maximum likelihood estimation is much superior to the use of the least squares principle for estimating the parameters. More details concerning its application are given at the end of this sequence.

8 We will apply this model to the graduating from high school example described in the linear probability model sequence. We will begin by assuming that ASVABC is the only relevant explanatory variable, so Z is a simple function of it.

9 The Stata command is logit, followed by the outcome variable and the explanatory variable(s). Maximum likelihood estimation is an iterative process, so the first part of the output will be like that shown. [Stata output for . logit GRAD ASVABC: iteration log of the log likelihood, followed by the coefficient table for ASVABC and _cons (Number of obs = 570).]

10 In this case the coefficients of the Z function are as shown. [Stata output for . logit GRAD ASVABC: coefficient table for ASVABC and _cons (Number of obs = 570).]

11 Since there is only one explanatory variable, we can draw the probability function and the marginal effect function as functions of ASVABC.

12 We see that ASVABC has its greatest effect on graduating when it is in the range 20-40, that is, in the lower ability range. Any individual with a score above the average (50) is almost certain to graduate.

13 The t statistic indicates that the effect of variations in ASVABC on the probability of graduating from high school is highly significant. [Stata output for . logit GRAD ASVABC: iteration log and coefficient table (Number of obs = 570).]

14 Strictly speaking, the t statistic is valid only for large samples, so the normal distribution is the reference distribution. For this reason the statistic is denoted z in the Stata output. This z has nothing to do with our Z function. [Stata output for . logit GRAD ASVABC: coefficient table with z statistics (Number of obs = 570).]

15 The coefficients of the Z function do not have any direct intuitive interpretation.

16 However, we can use them to quantify the marginal effect of a change in ASVABC on the probability of graduating. We will do this theoretically for the general case where Z is a function of several explanatory variables.

17 Since p is a function of Z, and Z is a function of the X variables, the marginal effect of Xi on p can be written as the product of the marginal effect of Z on p and the marginal effect of Xi on Z: dp/dXi = (dp/dZ)(dZ/dXi).

18 We have already derived an expression for dp/dZ. The marginal effect of Xi on Z is given by its β coefficient, βi.

19 Hence we obtain an expression for the marginal effect of Xi on p: dp/dXi = f(Z) βi, where f(Z) = e^(-Z) / (1 + e^(-Z))^2.

20 The marginal effect is not constant because it depends on the value of Z, which in turn depends on the values of the explanatory variables. A common procedure is to evaluate it for the sample means of the explanatory variables.
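Given fitted coefficients, this evaluation is a short calculation. The sketch below uses hypothetical values for b1, b2, and the sample mean, purely for illustration; the actual estimates from the slides are not reproduced here.

```python
import numpy as np

# Hypothetical values for illustration only.
b1, b2 = -6.0, 0.15          # fitted coefficients of the Z function
mean_asvabc = 50.0           # sample mean of the explanatory variable

Z = b1 + b2 * mean_asvabc                       # Z evaluated at the mean
fZ = np.exp(-Z) / (1.0 + np.exp(-Z)) ** 2       # f(Z) = F(Z) * (1 - F(Z))
marginal_effect = fZ * b2                       # dp/dASVABC at the mean
print(marginal_effect)
```

Evaluating the same expression at a different value of ASVABC (say 30 instead of the mean) changes Z, and hence f(Z), which is why the marginal effect varies over the sample.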

21 The sample mean of ASVABC was 50.15. [Stata output for . sum GRAD ASVABC: summary statistics, followed by the logit coefficient table (Number of obs = 570).]

22 When evaluated at the mean, Z is equal to …. [Stata output for . sum GRAD ASVABC: summary statistics, followed by the logit coefficient table (Number of obs = 570).]

23 e^(-Z) is …; hence f(Z) is …. [Stata output for . sum GRAD ASVABC: summary statistics.]

24 The marginal effect, evaluated at the mean, is therefore …. This implies that a one-point increase in ASVABC would increase the probability of graduating from high school by 0.5 percent. [Stata output for . sum GRAD ASVABC: summary statistics.]

25 In this example, the marginal effect at the mean of ASVABC (50.15) is very low. The reason is that anyone with an average score is very likely to graduate anyway. So an increase in the score has little effect.

26 To show that the marginal effect varies, we will also calculate it for ASVABC equal to 30. A one-point increase in ASVABC then increases the probability by 4.2 percent.

27 An individual with a score of 30 has only a 50 percent probability of graduating, and an increase in the score has a relatively large impact.

28 Here is the output for a model with a somewhat better specification. [Stata output for . logit GRAD ASVABC SM SF MALE: iteration log and coefficient table for ASVABC, SM, SF, MALE, and _cons (Number of obs = 570, chi2(4)).]

29 We will estimate the marginal effects, putting all the explanatory variables equal to their sample means. [Stata output for . sum GRAD ASVABC SM SF MALE: summary statistics.]

30 The first step is to calculate Z when the X variables are equal to their sample means. [Table "Logit: Marginal Effects" with columns mean, b, product, f(Z), f(Z)b and rows ASVABC, SM, SF, MALE, Constant, Total.]

31 We then calculate f(Z). [Table "Logit: Marginal Effects" with columns mean, b, product, f(Z), f(Z)b.]

32 The estimated marginal effects are f(Z) multiplied by the respective coefficients. We see that the effect of ASVABC is about the same as before. Every extra year of schooling of the mother increases the probability of graduating by 0.2 percent. [Table "Logit: Marginal Effects" with columns mean, b, product, f(Z), f(Z)b.]

33 Father's schooling has no discernible effect. Males have a 0.9 percent lower probability of graduating than females. These effects would all have been larger if they had been evaluated at a lower ASVABC score. [Table "Logit: Marginal Effects" with columns mean, b, product, f(Z), f(Z)b.]

34 This sequence will conclude with an outline explanation of how the model is fitted using maximum likelihood estimation.

35 In the case of an individual who graduated, the probability of that outcome is F(Z). We will give subscripts 1, ..., s to the individuals who graduated. Individuals who graduated: outcome probability is F(Zi).

36 In the case of an individual who did not graduate, the probability of that outcome is 1 - F(Z). We will give subscripts s+1, ..., n to these individuals. Individuals who graduated: outcome probability is F(Zi). Individuals who did not graduate: outcome probability is 1 - F(Zi). Maximize F(Z1) × ... × F(Zs) × [1 - F(Zs+1)] × ... × [1 - F(Zn)].

37 We choose b1 and b2 so as to maximize the joint probability of the outcomes, that is, F(Z1) × ... × F(Zs) × [1 - F(Zs+1)] × ... × [1 - F(Zn)], where the first s factors correspond to those who did graduate and the rest to those who did not. There are no mathematical formulae for b1 and b2. They have to be determined iteratively by a trial-and-error process.
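This trial-and-error search is exactly what a numerical optimizer does. The sketch below maximizes the log of the joint probability above on simulated data, with scipy's general-purpose minimizer standing in for Stata's algorithm; the data and true coefficients are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def F(z):
    """Logistic probability F(Z)."""
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(b, x, y):
    """Minus the log of F(Z1) x ... x F(Zs) x [1-F(Zs+1)] x ... x [1-F(Zn)]."""
    z = np.clip(b[0] + b[1] * x, -30.0, 30.0)   # guard against overflow in the search
    p = F(z)
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Simulated data; b1 = -2 and b2 = 0.1 are invented true values.
rng = np.random.default_rng(1)
x = rng.normal(50.0, 10.0, 500)
y = (rng.random(500) < F(-2.0 + 0.1 * x)).astype(int)

# Iterative maximization of the joint probability of the observed outcomes.
res = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(x, y), method="BFGS")
b1_hat, b2_hat = res.x
print(b1_hat, b2_hat)
```

Because the logit log likelihood is globally concave, the iterations converge to the same estimates regardless of the starting values.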