Applied Econometrics: Maximum Likelihood Estimation and Discrete Choice Modelling
Nguyen Ngoc Anh, Nguyen Ha Trang

Content
- Basic introduction to the principle of Maximum Likelihood Estimation
- Binary choice models
- The random utility model (RUM)
- Extending binary choice: multinomial and ordered models

Maximum Likelihood Estimation
Back to square one.
- Population model: Y = α + βX + ε. Assume the true slope is positive, so β > 0.
- Sample model: Y = a + bX + e.
- Least squares (LS) estimator of β: b_LS = (X′X)⁻¹X′Y = Cov(X, Y) / Var(X).
- Key assumptions: E(ε|X) = E(ε) = 0, which implies Cov(X, ε) = E(Xε) = 0.
- Adding the error normality assumption: ε is i.i.d. with a normal distribution, ε ~ N(0, σ²).
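As a numerical sketch of the LS estimator (Python with NumPy; the simulated data and variable names are illustrative, not from the slides), the slope can be computed directly from the sample covariance and variance:

```python
import numpy as np

# simulate the population model Y = alpha + beta*X + eps with alpha = 1, beta = 2
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)

# least squares slope: b_LS = Cov(X, Y) / Var(X)
b_ls = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
a_ls = y.mean() - b_ls * x.mean()
```

With a sample of 200, both estimates land close to the true values α = 1 and β = 2.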

Maximum Likelihood Estimation
ML is joint estimation of all the unknown parameters of a statistical model. It requires that the model in question be completely specified; complete specification includes the specific form of the probability distribution of the model's random variables. In the regression context, this means joint estimation of the coefficient vector β and the scalar error variance σ².

Maximum Likelihood Estimation
Step 1: Formulate the sample likelihood function.
Step 2: Maximize the sample likelihood function with respect to the unknown parameters β and σ².

Maximum Likelihood Estimation
Step 1: the normal distribution function. If ε ~ N(0, σ²), then its density function is f(ε) = (2πσ²)^(−1/2) exp(−ε²/(2σ²)). By our assumption, this is the distribution of the error term (written e or u).
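A minimal sketch of this density in Python (standard library only; the function name is illustrative), for the mean-zero case used for the error term:

```python
import math

def normal_pdf(u, sigma2):
    # f(u) = (2*pi*sigma^2)^(-1/2) * exp(-u^2 / (2*sigma2)) for u ~ N(0, sigma2)
    return math.exp(-u ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
```

For example, normal_pdf(0.0, 1.0) returns 1/√(2π) ≈ 0.3989.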

Maximum Likelihood Estimation
Substituting ε = Y − α − βX into the density gives the pdf of each observation, f(Y_i) = (2πσ²)^(−1/2) exp(−(Y_i − α − βX_i)²/(2σ²)). By random sampling we have N independent observations, each with this pdf, so the joint pdf of all N sample values of Y_i can be written as the product f(Y_1, …, Y_N) = ∏ f(Y_i).

Maximum Likelihood Estimation
The joint pdf f(y), viewed as a function of the parameters, is the sample likelihood function for the sample of N independent observations. The key difference between the joint pdf and the sample likelihood function is their interpretation, not their form.

Maximum Likelihood Estimation
The joint pdf is interpreted as a function of the observable random variables for given values of the parameters β and σ². The sample likelihood function is interpreted as a function of the parameters β and σ² for given values of the observable variables.

Maximum Likelihood Estimation
Step 2: maximization of the sample likelihood function. Maximizing the likelihood and the log-likelihood functions is equivalent: because the natural logarithm is a positive monotonic transformation, the values of β and σ² that maximize the likelihood function are the same as those that maximize the log-likelihood function. We therefore take the natural logarithm of the sample likelihood function and work with the sample log-likelihood function.
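Under the normality assumption, the sample log-likelihood takes the standard form (a reconstruction consistent with the density and joint pdf above; the slide's own equation images are not in the transcript):

```latex
\ln L(\alpha,\beta,\sigma^2)
  = \sum_{i=1}^{N}\ln f(Y_i)
  = -\frac{N}{2}\ln\!\left(2\pi\sigma^2\right)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(Y_i-\alpha-\beta X_i\right)^2 .
```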

Maximum Likelihood Estimation
Differentiating the log-likelihood with respect to the parameters and setting the derivatives to zero shows that the ML estimates of the regression coefficients are identical to the OLS estimates.
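A numerical check of this equivalence (Python with NumPy; simulation and names are illustrative): the OLS coefficients are the ML coefficients, and perturbing any parameter lowers the log-likelihood. Note that the ML variance estimate divides by N rather than N − 2.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

def loglik(a, b, s2):
    # sample log-likelihood of the normal linear regression model
    resid = y - a - b * x
    return -0.5 * n * np.log(2 * np.pi * s2) - resid @ resid / (2 * s2)

# OLS coefficients (= ML coefficients); ML error variance uses 1/N, not 1/(N-2)
X = np.column_stack([np.ones(n), x])
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
s2_hat = np.sum((y - a_hat - b_hat * x) ** 2) / n

# moving away from the estimates in any direction lowers the log-likelihood
assert loglik(a_hat, b_hat, s2_hat) > loglik(a_hat + 0.1, b_hat, s2_hat)
assert loglik(a_hat, b_hat, s2_hat) > loglik(a_hat, b_hat, s2_hat * 1.5)
```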

Maximum Likelihood Estimation
Statistical properties of the ML parameter estimators:
1. Consistency
2. Asymptotic efficiency
3. Asymptotic normality
Since the ML coefficient estimator coincides with OLS under normal errors, it also shares the small-sample properties of the OLS coefficient estimator.

Binary Response Models: Linear Probability Model, Logit, and Probit
Many economic phenomena of interest, however, concern variables that are not continuous, or perhaps not even quantitative:
- What characteristics (e.g. parental) affect the likelihood that an individual obtains a higher degree?
- What determines labour force participation (employed vs. not employed)?
- What factors drive the incidence of civil war?

Binary Response Models
Consider the linear regression model Y_i = β₀ + β₁X_i + u_i, where Y_i is a binary (0/1) variable. The quantity of interest is the response probability P(Y_i = 1 | X_i).

Binary Response Models
In this linear probability model, the coefficient on X_j is the change in the probability that Y_i = 1 associated with a one-unit increase in X_j, holding constant the values of all other explanatory variables.

Binary Response Models
Two major limitations of OLS estimation of binary dependent variable models:
1. Predictions can fall outside the unit interval [0, 1].
2. The error terms u_i are heteroskedastic, i.e. have nonconstant variances.
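The first limitation is easy to see in a simulation (Python/NumPy sketch; data and names are illustrative): fitting OLS to a binary outcome produces fitted "probabilities" outside [0, 1].

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
y = (x + rng.normal(size=n) > 0).astype(float)  # binary outcome

# linear probability model fitted by OLS
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ coef

# some fitted values leave the unit interval [0, 1]
out_of_range = (fitted < 0) | (fitted > 1)
```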

Binary Response Models: Logit and Probit
Link function approach: model the response probability as P(Y_i = 1 | X_i) = G(β₀ + β₁X_i), where G is a cumulative distribution function: the standard normal cdf for probit, the logistic cdf for logit.

Binary Response Models
Latent variable approach: posit a latent index y*_i = β₀ + β₁X_i + ε_i. The problem is that we do not observe y*_i. Instead, we observe the binary variable Y_i = 1 if y*_i > 0 and Y_i = 0 otherwise.
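The observation rule can be sketched as follows (Python/NumPy; the coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=10)
y_star = 0.5 + 1.0 * x + rng.normal(size=10)  # latent index y*, never observed
y = (y_star > 0).astype(int)                  # we observe only the sign of y*
```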

Binary Response Models
Random utility model: the individual chooses alternative 1 when the utility it yields exceeds that of alternative 0, so P(Y_i = 1) = P(U_i1 > U_i0). Distributional assumptions on the utility errors then deliver the probit or logit model.

Binary Response Models
Estimation is by maximum likelihood, with log-likelihood ln L = Σ_i [Y_i ln G(X_iβ) + (1 − Y_i) ln(1 − G(X_iβ))]. Goodness of fit is measured with, e.g., a pseudo-R² or the percentage of outcomes correctly predicted.
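As an illustration (not the slides' own code), probit coefficients can be obtained by maximizing this log-likelihood. The sketch below uses Fisher scoring with NumPy on simulated data; all names and values are illustrative:

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)  # probit DGP

Phi = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
phi = lambda z: np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)

beta = np.zeros(2)
for _ in range(25):  # Fisher scoring iterations on the binary log-likelihood
    z = X @ beta
    p = np.clip(Phi(z), 1e-10, 1 - 1e-10)
    f = phi(z)
    score = X.T @ ((y - p) * f / (p * (1 - p)))           # gradient of ln L
    info = (X * (f ** 2 / (p * (1 - p)))[:, None]).T @ X  # information matrix
    beta = beta + np.linalg.solve(info, score)
```

Because the probit log-likelihood is globally concave, these iterations converge, and the estimates land near the true coefficients up to sampling error.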

Binary Response Models
Interpreting the results: marginal effects. In a binary outcome model, a marginal effect is the ceteris paribus effect of changing one individual characteristic on an individual's probability of 'success'. Unlike in the linear model, it varies with the point at which it is evaluated.
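For probit, the marginal effect of a regressor at a given point is φ(Xβ)·β_j. A tiny sketch with hypothetical estimates (all numbers are illustrative, not from the slides):

```python
import math

# hypothetical probit estimates: latent index = b0 + b1 * x
b0, b1 = -0.2, 0.8
x_bar = 1.5  # evaluate at the sample mean of x (illustrative value)

phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
marginal_effect = phi(b0 + b1 * x_bar) * b1  # dP(y=1|x)/dx at x_bar
```

Here the index at x_bar is 1.0, φ(1.0) ≈ 0.242, so the marginal effect is about 0.194.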

STATA Example
Data: Logitprobit.dta (see the accompanying Logitprobit description).
STATA commands:
probit inlf nwifeinc ed exp expsq age kidslt6 kidsge6
logit inlf nwifeinc ed exp expsq age kidslt6 kidsge6
dprobit inlf nwifeinc ed exp expsq age kidslt6 kidsge6