Dummy Dependent variable Models

Structure of the class
1. The linear probability model
2. Maximum likelihood estimation
3. Binary logit models and some other models
4. Multinomial models

The Linear Probability Model

The linear probability model
When the dependent variable is binary (0/1; for example, Y = 1 if the firm innovates, 0 otherwise), OLS is called the linear probability model. How should one interpret βj? Provided that E(u|X) = 0 holds, βj measures the change in the probability of success for a one-unit change in Xj (ΔXj = 1).
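As a minimal numerical sketch of this interpretation (all data are made up for illustration), the LPM is just OLS on a 0/1 outcome: the slope is the estimated change in P(Y=1) per unit of X, and fitted values can escape [0, 1].

```python
# Linear probability model sketch: OLS on a binary outcome.
# The variable names and values are hypothetical.
rd_intensity = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # regressor X
innovates    = [0,   0,   0,   1,   1,   1]      # binary outcome Y

n = len(innovates)
xbar = sum(rd_intensity) / n
ybar = sum(innovates) / n

# OLS slope: cov(X, Y) / var(X). With a binary Y, this estimates the
# change in P(Y = 1) for a one-unit change in X.
beta = sum((x - xbar) * (y - ybar) for x, y in zip(rd_intensity, innovates)) \
       / sum((x - xbar) ** 2 for x in rd_intensity)
alpha = ybar - beta * xbar

print(beta)                 # change in success probability per unit of X
print(alpha + beta * 4.0)   # fitted "probability" above 1: an LPM limit
```

The second printed value exceeds 1, previewing the "fallacious predictions" limit discussed below.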

Limits of the linear probability model
1. Non-normality of errors
2. Heteroskedastic errors
3. Fallacious predictions

Overcoming the limits of the LPM
1. Non-normality of errors → increase the sample size
2. Heteroskedastic errors → use robust estimators
3. Fallacious predictions → perform non-linear or constrained regressions

Persistent use of the LPM
Although it has limits, the LPM is still used:
1. In the process of data exploration (early stages of the research)
2. As a good indicator of the marginal effect for the representative observation (at the mean)
3. When dealing with very large samples, since least squares avoids the complications of maximum likelihood techniques: time of computation, endogeneity and panel data problems

The LOGIT/PROBIT Model

Probability, odds and logit/probit
We need to explain the occurrence of an event: the LHS variable takes two values, y = {0;1}. In fact, we need to explain the probability of occurrence of the event, conditional on X: P(Y=y | X) ∈ [0;1]. OLS estimations are not adequate, because predictions can lie outside the interval [0;1]. We need a transformation linking a real number z ∈ ]-∞;+∞[ to P(Y=y | X) ∈ [0;1]. The logit/probit transformation does exactly this; it is also called the link function.
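A minimal sketch of the logistic link: any real z maps into a probability strictly between 0 and 1, with σ(0) = 0.5.

```python
import math

def logistic(z):
    """Logit link: maps any real number z into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

for z in (-5.0, 0.0, 5.0):
    print(z, logistic(z))   # always strictly between 0 and 1
```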

Binary Response Models: Logit - Probit Link function approach

Maximum likelihood estimation
OLS is of no help here. We will use Maximum Likelihood Estimation (MLE) instead. MLE consists of finding the parameter values that are most consistent with the data we have. The likelihood is defined as the joint probability of observing a given sample, given the parameters involved in the generating function. One way to contrast OLS and MLE is as follows: OLS adapts the model to the data you have (you only have one model, derived from your data), whereas MLE supposes there is an infinity of candidate models and chooses the one most likely to explain your data.

Likelihood functions
Let us assume that you have a sample of n random observations. Let f(yᵢ) be the probability that yᵢ = 1 or yᵢ = 0. The joint probability of observing the n values of yᵢ is given by the likelihood function:
L = Π_i f(yᵢ)

Knowing p (as given by the logit) and having defined f(.), we come up with the likelihood function:
L = Π_i pᵢ^yᵢ (1−pᵢ)^(1−yᵢ), with pᵢ = 1/(1+e^(−xᵢβ))

Log likelihood (LL) functions
The log transform of the likelihood function (the log likelihood) is much easier to manipulate, and is written:
LL = Σ_i [yᵢ ln(pᵢ) + (1−yᵢ) ln(1−pᵢ)]
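The log likelihood can be checked numerically. In this sketch (made-up outcomes and fitted probabilities), predictions closer to the observed outcomes give an LL closer to zero:

```python
import math

def log_likelihood(y, p):
    """Bernoulli log likelihood: sum of y*ln(p) + (1-y)*ln(1-p)."""
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p))

y = [1, 0, 1, 1]
p_good = [0.9, 0.1, 0.8, 0.7]   # predictions close to the outcomes
p_bad  = [0.5, 0.5, 0.5, 0.5]   # uninformative predictions

print(log_likelihood(y, p_good))   # closer to zero (better fit)
print(log_likelihood(y, p_bad))    # further from zero (worse fit)
```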

Maximum likelihood estimation
The LL function can yield an infinity of values for the parameters β. Given the functional form of f(.) and the n observations at hand, which values of the parameters β maximize the likelihood of my sample? In other words, what are the most likely values of my unknown parameters β, given the sample I have?

Maximum likelihood estimation
However, there is no analytical solution to this non-linear problem. Instead, we rely on an optimization algorithm (Newton-Raphson). The LL is globally concave and has a maximum. The gradient is used to compute the parameters of interest, and the Hessian is used to compute the variance-covariance matrix. You can imagine that the computer generates all possible values of β, computes a likelihood value for each vector of values, and then chooses the vector of β for which the likelihood is highest.
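A sketch of one such Newton-Raphson loop, for an intercept-only logit where the MLE has the known closed form log(p̄/(1−p̄)), so convergence can be verified (data made up):

```python
import math

# Newton-Raphson for an intercept-only logit: a minimal sketch of the kind
# of iteration statistical software runs under the hood.
y = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # 3 successes out of 10
n = len(y)
s = sum(y)

beta = 0.0                            # starting value
for _ in range(50):
    p = 1.0 / (1.0 + math.exp(-beta))
    gradient = s - n * p              # dLL/dbeta
    hessian = -n * p * (1.0 - p)      # d2LL/dbeta2: negative, LL is concave
    step = gradient / hessian
    beta -= step                      # beta_new = beta - H^{-1} * g
    if abs(step) < 1e-10:             # stop when the update is negligible
        break

print(beta)                           # converges to log(0.3/0.7)
```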

Binary dependent variable: research questions
We want to explore the factors affecting the probability of being a successful innovator (inno = 1): why?

Logistic regression with Stata
Stata instruction: logit
logit y x1 x2 x3 … xk [if] [weight] [, options]
Options:
- noconstant: estimates the model without the constant
- robust: estimates robust variances, also in case of heteroskedasticity
- if: allows selecting the observations to include in the analysis
- weight: allows weighting different observations

Interpretation of coefficients
A positive coefficient indicates that the probability of innovation success increases with the corresponding explanatory variable; a negative coefficient implies that the probability of innovating decreases with it. Warning! One of the problems encountered in interpreting probabilities is their non-linearity: the probabilities do not vary in the same way according to the level of the regressors. This is the reason why, in practice, one usually computes the probability of the event occurring at the average point of the sample.

Let's run the more complete model: logit inno lrdi lassets spe biotech

Interpretation of coefficients
Using the sample mean values of rdi, lassets, spe and biotech, we compute the conditional probability:

Marginal effects
It is often useful to know the marginal effect of a regressor on the probability that the event (innovation) occurs. As the probability is a non-linear function of the explanatory variables, the change in probability due to a change in one explanatory variable is not identical depending on whether the other variables are held at their mean, median, first quartile, etc.
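This non-constancy can be illustrated with a one-regressor logit and made-up coefficients: the marginal effect β·p·(1−p) peaks where p = 0.5 and shrinks in the tails.

```python
import math

# Marginal effect of x on P(y=1) in a logit: dP/dx = beta1 * p * (1 - p).
# Coefficients and evaluation points are made-up numbers for illustration.
beta0, beta1 = -1.0, 0.8

def p_of(x):
    """Fitted probability at x for the hypothetical logit above."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

for x in (0.0, 1.25, 4.0):            # low, middle and high values of x
    p = p_of(x)
    print(x, beta1 * p * (1 - p))     # the effect differs at each point
```

At x = 1.25 the fitted probability is 0.5 and the effect is at its maximum, β/4.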

Goodness of fit measures
In ML estimations, there is no such measure as the R². But the log likelihood can be used to assess the goodness of fit. Note the following:
- The higher the number of observations, the lower the joint probability, and the more the LL goes towards −∞.
- For a given number of observations, the better the fit, the higher the LL (since it is always negative, the closer to zero it is).
The philosophy is to compare two models by looking at their LL values: one is the constrained model, the other the unconstrained model.

Goodness of fit measures
A model is said to be constrained when the observer sets the parameters associated with some variables to zero. A model is said to be unconstrained when the observer relaxes this assumption and allows those parameters to differ from zero. For example, we can compare two models, one with no explanatory variables and one with all our explanatory variables. The one with no explanatory variables implicitly assumes that all parameters are equal to zero; hence it is the constrained model, because we (implicitly) constrain the parameters to be nil.

The likelihood ratio test (LR test)
The most used measure of goodness of fit in ML estimations is the likelihood ratio. The LR statistic is twice the difference between the log likelihoods of the unconstrained and constrained models, LR = 2(LL_U − LL_C), and is distributed χ². If the difference in the LL values is large, it is because the set of explanatory variables brings in significant information. The null hypothesis H0 is that the model brings no significant information. High LR values lead the observer to reject H0 and accept the alternative hypothesis Ha that the set of explanatory variables does significantly explain the outcome.
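A numeric sketch of the test with made-up log likelihood values; the χ² 5% critical value for 4 degrees of freedom (four extra regressors) is about 9.49.

```python
# Likelihood ratio test sketch: LR = 2 * (LL_unconstrained - LL_constrained).
# The log likelihood values below are made up for illustration.
ll_constrained = -250.4    # intercept-only model
ll_unconstrained = -231.7  # model with all explanatory variables

lr = 2 * (ll_unconstrained - ll_constrained)
print(lr)

# chi-squared critical value at 5% with 4 degrees of freedom is ~9.49;
# a larger LR leads us to reject H0 (no explanatory power).
print(lr > 9.49)
```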

The McFadden pseudo-R²
We also use the McFadden pseudo-R² (1973). Its interpretation is analogous to the OLS R², but it is biased downward and generally remains low. The pseudo-R² also compares the unconstrained model with the constrained model, pseudo-R² = 1 − LL_U/LL_C, and lies between 0 and 1.
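A one-line numeric sketch with made-up log likelihood values:

```python
# McFadden pseudo-R2: 1 - LL_unconstrained / LL_constrained (made-up values).
ll_constrained = -250.4    # intercept-only model
ll_unconstrained = -231.7  # full model

pseudo_r2 = 1 - ll_unconstrained / ll_constrained
print(pseudo_r2)           # between 0 and 1, and typically low
```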

Goodness of fit measures: Stata output for the constrained and unconstrained models (screenshots not reproduced).

Other binary choice models
The logit model is only one way of modeling binary choices. The probit model is another; it is actually more used than the logit model and assumes a normal distribution (not a logistic one) for the z values. The complementary log-log model is used when the occurrence of the event is very rare, the distribution of z being asymmetric.

Probit model: P(Y=1|X) = Φ(z), the standard normal CDF evaluated at z.
Complementary log-log model: P(Y=1|X) = 1 − exp(−exp(z)).
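The three link functions can be compared numerically (Φ written via math.erf). Note that logit and probit are symmetric around z = 0, while the cloglog is not:

```python
import math

def logit_p(z):
    """Logistic link."""
    return 1.0 / (1.0 + math.exp(-z))

def probit_p(z):
    """Standard normal CDF, written with the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cloglog_p(z):
    """Complementary log-log link: asymmetric around z = 0."""
    return 1.0 - math.exp(-math.exp(z))

for z in (-2.0, 0.0, 2.0):
    print(z, logit_p(z), probit_p(z), cloglog_p(z))
```

At z = 0 the logit and probit both give 0.5, whereas the cloglog gives 1 − e⁻¹ ≈ 0.632.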

Likelihood functions and Stata commands
Example:
logit inno rdi lassets spe pharma
probit inno rdi lassets spe pharma
cloglog inno rdi lassets spe pharma

Probability Density Functions

Cumulative Distribution Functions

Comparison of models: OLS | Logit | Probit | C log-log
Ln(R&D intensity): [3.90]*** | [3.57]*** | [3.46]*** | [3.13]***
ln(Assets): [8.58]*** | [7.29]*** | [7.53]*** | [7.19]***
Spe: [1.11] | [1.01] | [0.98] | [0.76]
Biotech dummy: [7.49]*** | [6.58]*** | [6.77]*** | [6.51]***
Constant: [3.91]** | [6.01]*** | [6.12]*** | [6.08]***
Observations: 431
Coefficient estimates omitted. Absolute t value in brackets (OLS), z value for other models. * 10%, ** 5%, *** 1%

Comparison of marginal effects: OLS | Logit | Probit | C log-log
Variables: Ln(R&D intensity), ln(Assets), Specialisation, Biotech dummy (estimates omitted).
For the logit, probit and cloglog models, marginal effects have been computed for a one-unit variation (around the mean) of the variable at stake, holding all other variables at their sample mean values.

Multinomial LOGIT Models

Multinomial models
Let us now focus on the case where the dependent variable has several outcomes (is multinomial). For example, innovative firms may need to collaborate with other organizations. One can code this type of interaction as follows:
- Collaborate with a university (modality 1)
- Collaborate with large incumbent firms (modality 2)
- Collaborate with SMEs (modality 3)
- Do it alone (modality 4)
Or, studying firm survival:
- Survival (modality 1)
- Liquidation (modality 2)
- Mergers & acquisitions (modality 3)

Multiple alternatives without obvious ordering
- Choice of a single alternative out of a number of distinct alternatives, e.g.: which means of transportation do you use to get to work? Bus, car, bicycle, etc.
- Example of an ordered structure: how do you feel today? Very well, fairly well, not too well, miserably.

Random utility model
The RUM underlies the economic interpretation of discrete choice models. It was developed by Daniel McFadden for econometric applications (see JoEL, January 2001, for the Nobel lecture; also Manski (2001), "Daniel McFadden and the Econometric Analysis of Discrete Choice", Scandinavian Journal of Economics, 103(2)). Preferences are functions of biological taste templates, experiences, and other personal characteristics; some of these are observed, others unobserved. This allows for taste heterogeneity. The discussion below is in terms of individual utility (e.g. migration, transport mode choice), but similar reasoning applies to firm choices.

Random utility model
Individual i's utility from choice j can be decomposed into two components: U_ij = V_ij + ε_ij. V_ij is deterministic: common to everyone with the same characteristics and constraints, it captures the representative tastes of the population (e.g. the effects of time and cost on travel mode choice). ε_ij is random: it reflects the idiosyncratic tastes of i and the unobserved attributes of choice j.

Random utility model
V_ij is a function of the attributes of alternative j (e.g. price and time) and of observed consumer and choice characteristics. We are interested in estimating the parameters of this function. Let's forget about z for now, for simplicity.

RUM and binary choices
Consider two choices, e.g. bus or car. We observe whether an individual uses one or the other. Define y = 1 if the individual travels by bus and y = 0 if by car. What is the probability that we observe an individual choosing to travel by bus? Assume utility maximisation: the individual chooses bus (y=1) rather than car (y=0) if the utility of commuting by bus exceeds the utility of commuting by car.

RUM and binary choices
So the individual chooses the bus if U_i,bus > U_i,car, i.e. if V_i,bus + ε_i,bus > V_i,car + ε_i,car. So the probability that we observe an individual choosing bus travel is P(y=1) = P(ε_i,car − ε_i,bus < V_i,bus − V_i,car).

The linear probability model
Assume the probability depends linearly on the observed characteristics (price and time). Then you can estimate it by linear regression, where y is the dummy variable for mode choice (1 if bus, 0 if car). Other consumer and choice characteristics can be included (the z's in the first slide of this section).

Probits and logits
Common assumptions:
- Cumulative normal distribution function: "probit"
- Logistic function: "logit"
Estimation by maximum likelihood.

A discrete choice underpinning
Choice between M alternatives: the decision is determined by the utility level U_ij that individual i derives from choosing alternative j. Let U_ij = V_ij + ε_ij, where i = 1,…,N individuals and j = 0,…,J alternatives (1). The alternative providing the highest level of utility will be chosen.

The probability that alternative j will be chosen is P(y_i = j) = P(U_ij > U_ik for all k ≠ j). In general, this requires solving multidimensional integrals: analytical solutions do not exist.

Exception: if the error terms ε_ij are assumed to be independently and identically standard extreme value distributed, then an analytical solution exists. In this case, similar to the binary logit, it can be shown that the choice probabilities are P(y_i = j) = exp(V_ij) / Σ_k exp(V_ik).
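A quick sketch of these choice probabilities with made-up deterministic utilities for a three-alternative choice:

```python
import math

# Multinomial logit choice probabilities: P(y=j) = exp(V_j) / sum_k exp(V_k).
# The utilities below are made-up numbers for illustration.
V = [1.0, 0.5, -0.2]

denom = sum(math.exp(v) for v in V)
probs = [math.exp(v) / denom for v in V]

print(probs)        # higher utility -> higher choice probability
print(sum(probs))   # probabilities sum to 1
```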

Likelihood functions
Let us assume that you have a sample of n random observations. Let f(yᵢ) be the probability that yᵢ = j. The joint probability of observing the n values of yᵢ is given by the likelihood function. We need to specify the function f(.). It comes from the empirical discrete distribution of an event that can have several outcomes: the multinomial distribution. Hence f(yᵢ) = Π_j p_ij^(d_ij), where d_ij = 1 if individual i chooses outcome j and 0 otherwise.

The maximum likelihood function
The maximum likelihood function reads: L = Π_i Π_j p_ij^(d_ij).

The maximum likelihood function
The log transform of the likelihood yields: LL = Σ_i Σ_j d_ij ln(p_ij).

Multinomial logit models
Stata instruction: mlogit
mlogit y x1 x2 x3 … xk [if] [weight] [, options]
Options:
- noconstant: omits the constant
- robust: controls for heteroskedasticity
- if: selects observations
- weight: weights observations

Multinomial logit models
use mlogit.dta, clear
mlogit type_exit log_time log_labour entry_age entry_spin cohort_*
The base outcome, chosen by Stata, is the one with the highest empirical frequency. The output reports the goodness of fit, the parameter estimates, standard errors and z values.

Interpretation of coefficients
The interpretation of coefficients always refers to the base category. Does the probability of being bought out decrease over time? No! Relative to survival, the probability of being bought out decreases over time.

Interpretation of coefficients
The interpretation of coefficients always refers to the base category. Is the probability of being bought out lower for spinoffs? No! Relative to survival, the probability of being bought out is lower for spinoffs.

Marginal effects
Elasticities: the relative change of p_ij if x increases by 1 per cent.

Independence of irrelevant alternatives (IIA)
The model assumes that each pair of outcomes is independent of all other alternatives; in other words, the other alternatives are irrelevant. From a statistical viewpoint, this is tantamount to assuming independence of the error terms across pairs of alternatives. A simple way to test the IIA property is to estimate the model leaving out one modality (the restricted model) and to compare the parameters with those of the complete model:
- If IIA holds, the parameters should not change significantly.
- If IIA does not hold, the parameters should change significantly.

Multinomial logit and "IIA"
The multinomial logit model is the workhorse of multiple choice modelling in all disciplines, with many applications in economics, geography and other research areas. It is easy to compute, but it has a drawback.

Independence of irrelevant alternatives
Consider market shares:
- Red bus 20%
- Blue bus 20%
- Train 60%
IIA assumes that if the red bus company shuts down, the market shares become:
- Blue bus 20% + 5% = 25%
- Train 60% + 15% = 75%
because the ratio of blue bus trips to train trips must stay at 1:3.

Independence of irrelevant alternatives
The model assumes that the 'unobserved' attributes of all alternatives are perceived as equally similar. But will people unable to travel by red bus really switch to travelling by train? The most likely outcome (assuming the supply of bus seats is elastic) is:
- Blue bus: 40%
- Train: 60%
This failure of multinomial/conditional logit models is known as the independence of irrelevant alternatives (IIA) assumption.
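The proportional reallocation implied by IIA can be verified on a toy logit whose made-up utilities reproduce the 20/20/60 shares: dropping an alternative leaves the ratio of the remaining shares unchanged.

```python
import math

# IIA sketch: removing one alternative from a logit reallocates its share
# proportionally, so the ratio of the remaining shares never changes.
# Utilities are chosen so shares are red bus 20%, blue bus 20%, train 60%.
V = {"red_bus": 0.0, "blue_bus": 0.0, "train": math.log(3.0)}

def shares(utilities):
    """Logit market shares: exp(V_j) / sum_k exp(V_k)."""
    denom = sum(math.exp(v) for v in utilities.values())
    return {k: math.exp(v) / denom for k, v in utilities.items()}

before = shares(V)
after = shares({k: v for k, v in V.items() if k != "red_bus"})

print(before)   # red bus 20%, blue bus 20%, train 60%
print(after)    # blue bus 25%, train 75%: the 1:3 ratio is preserved
```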

Independence of irrelevant alternatives (IIA)
H0: the IIA property is valid. H1: the IIA property is not valid. The H statistic (H stands for Hausman) follows a χ² distribution with M degrees of freedom (M being the number of parameters).

Stata application: the IIA test
H0: the IIA property is valid. H1: the IIA property is not valid.
mlogtest, hausman

Application of the IIA test
mlogtest, hausman
We compare the parameters of the model "liquidation relative to bought-out" estimated simultaneously with "survival relative to bought-out" with the parameters of the model "liquidation relative to bought-out" estimated without "survival relative to bought-out". H0: the IIA property is valid. H1: the IIA property is not valid.

Application of the IIA test
mlogtest, hausman
The conclusion is that the outcome "survival" significantly alters the choice between liquidation and bought-out. Indeed, for a company, being bought out can be seen as a way to remain active, at the cost of losing control over economic decisions, notably investment. H0: the IIA property is valid. H1: the IIA property is not valid.

Cramer-Ridder test
Often you want to know whether certain alternatives can be merged into one: e.g., do you have to distinguish between employment states such as "unemployment" and "nonemployment"? The Cramer-Ridder test tests the null hypothesis that the alternatives can be merged. It has the form of an LR test: 2(logL_U − logL_R) ~ χ².

Derive the log likelihood value of the restricted model where two alternatives (here, A and N) have been merged: logL_R = logL_pooled + n_A log[n_A/(n_A+n_N)] + n_N log[n_N/(n_A+n_N)], where logL_pooled is the log likelihood of the pooled model, and n_A and n_N are the number of times A and N have been chosen.

Exercise
use press.com/data/r8/sysdsn3
tabulate insure
mlogit insure age male nonwhite site2 site3