Estimation. Econometrics (ADE).

Estimation
We assume we have a sample of size T of:
– The dependent variable (y)
– The explanatory variables (x_1, x_2, x_3, …, x_k), with x_1 being an intercept (a column vector of 1s)
We also have a vector of unknown parameters β = (β_1, β_2, …, β_k)′
We have a perturbation vector (u) of dimension T
We also have a set of hypotheses about the relationship between X, β and u.

Estimation
In these circumstances, we can link the dependent and the explanatory variables as follows:
y_t = β_1 + β_2 x_2t + … + β_k x_kt + u_t
Matrix form: y = Xβ + u
Compact form: y_t = x_t′β + u_t
Our key problem is to assign values to the components of the vector β.
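
As a concrete illustration, here is a minimal NumPy sketch of this setup; the coefficient values and regressor distributions are assumptions chosen for the example, not taken from the course.

```python
# A minimal sketch of the model y = X*beta + u.
# The true beta and the regressor distribution are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

T, k = 100, 3                        # sample size and number of regressors
X = np.column_stack([
    np.ones(T),                      # x_1: intercept (column of 1s)
    rng.normal(size=(T, k - 1)),     # x_2, ..., x_k: explanatory variables
])
beta = np.array([1.0, 0.5, -0.3])    # hypothetical true parameter vector
u = rng.normal(0.0, 1.0, size=T)     # perturbation vector, N(0, 1)

y = X @ beta + u                     # y_t = beta_1 + beta_2*x_2t + ... + u_t
```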

Estimation
To assign values to the parameters, we have some options:
– We can invent them.
– We can phone Rappel and ask him for the most appropriate values according to his “view”.
– We can use a random procedure.
– We can estimate them.

Estimation
Estimation theory is the branch of statistics that deals with assigning values to a set of parameters based on measured/empirical data that has a random component. An estimator attempts to approximate the unknown parameters using the measurements.

Estimation
Estimation methods:
– Maximum likelihood
– Ordinary least squares
– Non-linear methods
– GMM
Which ones will we use during the course?

Estimation
Do you know this John Ford film? There is a very famous scene where the cowboys escape from their pursuers by crossing the Rio Grande (28:05). Do they use a least squares principle?

Estimation
We have the following relationship: y = Xβ + u. Thus, we can express the observed vector in terms of fitted values and residuals:
y = Xβ̂ + û
where β̂ is the estimator vector and û is the residual vector.

Estimation
The OLS estimator makes the residuals orthogonal to the regressors, i.e. it satisfies the normal equations:
X′û = X′(y − Xβ̂) = 0

Estimation
Then:
The projection matrix is: P_X = X(X′X)⁻¹X′
The residual matrix is: M = I − P_X = I − X(X′X)⁻¹X′
P_X and M are orthogonal
P_X is symmetric (P_X = P_X′)
P_X is idempotent (P_X P_X = P_X)
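
A quick numerical check of these three properties; the matrix X here is an arbitrary full-column-rank example, assumed for illustration.

```python
# Sketch verifying the stated properties of P_X and M numerically.
import numpy as np

rng = np.random.default_rng(1)
T, k = 50, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])

P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection matrix P_X
M = np.eye(T) - P                      # residual matrix M

print(np.allclose(P @ M, 0))           # P_X and M are orthogonal
print(np.allclose(P, P.T))             # P_X is symmetric
print(np.allclose(P @ P, P))           # P_X is idempotent
```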

Estimation
Consequently, we have that:
ŷ = Xβ̂ = P_X y
û = y − ŷ = M y = M u

Estimation
By simply comparing these equations, we obtain the expression of the ordinary least squares (OLS) estimator:
β̂ = (X′X)⁻¹X′y

Estimation
We can alternatively obtain this same result by applying the least squares principle: we need the combination of values of the estimator vector that minimizes the sum of the squared residuals. Analytically, this implies solving:
min over β̂ of û′û = (y − Xβ̂)′(y − Xβ̂)

Estimation
Expanding the objective and setting its derivative with respect to β̂ equal to zero gives the first-order condition:
∂(û′û)/∂β̂ = −2X′y + 2X′Xβ̂ = 0  ⟹  β̂ = (X′X)⁻¹X′y
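
A minimal NumPy sketch of the estimator, under assumed data-generating values:

```python
# Sketch of the OLS estimator beta_hat = (X'X)^{-1} X'y.
# The true coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, k = 100, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta = np.array([1.0, 0.5, -0.3])
y = X @ beta + rng.normal(size=T)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # solves the normal equations
u_hat = y - X @ beta_hat                      # residual vector
s2 = (u_hat @ u_hat) / (T - k)                # unbiased sigma^2 estimator (see below)
print(beta_hat)                               # close to the assumed true beta
```

Using np.linalg.solve on the normal equations, rather than forming (X′X)⁻¹ explicitly, is the numerically safer choice.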

Properties of the LS Estimator
We have obtained the LS estimator. But is this estimator good enough?
Properties:
– Unbiased
– BLUE
– Consistent
– Efficient

Properties of the LS Estimator
It is advisable to express the LS estimator as follows:
β̂ = (X′X)⁻¹X′y = (X′X)⁻¹X′(Xβ + u) = β + (X′X)⁻¹X′u

Properties of the LS Estimator
Thus, we have that:
E[β̂] = β + (X′X)⁻¹X′E[u] = β
The OLS estimator vector is unbiased.

Properties of the LS Estimator
The variance of the estimator vector is:
Var(β̂) = E[(β̂ − β)(β̂ − β)′] = (X′X)⁻¹X′E[uu′]X(X′X)⁻¹ = σ²(X′X)⁻¹

Properties of the LS Estimator
The OLS estimator is BLUE: the Best Linear Unbiased Estimator. This implies that it has the lowest variance among linear unbiased estimators. The Gauss–Markov theorem proves it.

Properties of the LS Estimator
The LS estimator is consistent if we can prove that:
lim_{T→∞} Pr(|β̂ − β| > ε) = 0 for every ε > 0
Or, equivalently:
plim β̂ = β

Properties of the LS Estimator
The use of plims can help us to prove consistency, given that:
plim β̂ = β + plim(X′X/T)⁻¹ · plim(X′u/T) = β + Q⁻¹ · 0 = β
(assuming X′X/T converges to a finite, non-singular matrix Q and X′u/T converges to zero)

Properties of the LS Estimator
We can employ an alternative method to prove consistency. An estimator is consistent if:
– It is asymptotically unbiased
– Its variance goes to zero as the sample size grows to infinity

Properties of the LS Estimator
The first condition holds by simply considering that:
E[β̂] = β for every T, so lim_{T→∞} E[β̂] = β

Properties of the LS Estimator
The second condition also holds:
Var(β̂) = σ²(X′X)⁻¹ = (σ²/T)(X′X/T)⁻¹ → 0 as T → ∞

Properties of the LS Estimator
We can also follow a very intuitive approach. If an estimator is consistent, it asymptotically collapses towards a single value, which is the true value of the parameter. Thus, the difference between the estimator and the true value of the parameter vanishes. It is also true that the larger the sample size, the more information is available to estimate the model.

Properties of the LS Estimator
Let us consider that the observations of the dependent variable are generated by the following model, for sample sizes T = {10, 50, 100, 500, 1000}. The perturbation is generated by a N(0,1) distribution.

Properties of the LS Estimator
Then, we estimate the model by OLS. The results we have obtained are reported in the figures that follow.
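
A Monte Carlo sketch of this experiment; the true coefficients (1 and 0.5) and the single-regressor model are assumptions standing in for the slides' unspecified model.

```python
# Consistency in action: the OLS slope estimates collapse toward the
# true value as T grows. Model y_t = 1 + 0.5*x_t + u_t, u_t ~ N(0,1),
# with assumed (illustrative) coefficients.
import numpy as np

rng = np.random.default_rng(42)
for T in (10, 50, 100, 500, 1000):
    estimates = []
    for _ in range(1000):                       # 1000 replications per T
        x = rng.normal(size=T)
        y = 1.0 + 0.5 * x + rng.normal(size=T)  # N(0,1) perturbation
        X = np.column_stack([np.ones(T), x])
        b = np.linalg.solve(X.T @ X, X.T @ y)
        estimates.append(b[1])                  # keep the slope estimate
    print(T, np.mean(estimates), np.std(estimates))  # std shrinks as T grows
```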

Properties of the OLS estimator. Efficiency
An estimator is efficient if it has the lowest variance, compared to other unbiased estimators.

Properties of the OLS estimator. Efficiency
To prove it:
– We should impose that the perturbation follows a normal distribution
– We should then obtain the Cramér–Rao bound

Properties of the OLS estimator. Efficiency
We should compare the Cramér–Rao bound with the variance of the OLS estimator. For the OLS estimator vector, the Cramér–Rao bound is σ²(X′X)⁻¹. It coincides with the covariance matrix of the OLS estimator vector. Consequently, the OLS estimator is efficient.
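
For completeness, a sketch of where the bound comes from: under normality the log-likelihood is ln L = −(T/2)·ln(2πσ²) − (y − Xβ)′(y − Xβ)/(2σ²); the information matrix for β is therefore −E[∂²ln L/∂β∂β′] = X′X/σ², and its inverse gives the bound σ²(X′X)⁻¹.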

OLS estimator of the variance of the perturbation
If we assume that the perturbation vector follows a N(0, σ²I) distribution, we have that the vector y follows a N(Xβ, σ²I) distribution. We already know how the vector β can be estimated. We now need an estimator of the variance of the perturbation (σ²). We can use the OLS approach.

OLS estimator of the variance of the perturbation
The OLS estimator can be defined as follows:
σ̂² = û′û / (T − k)
with û′û being the sum of the squared residuals (SSR).

OLS estimator of the variance of the perturbation
This estimator shows the following properties:
– It is unbiased
– It is NOT linear
– It is consistent
– It is NOT efficient

Maximum Likelihood estimator
ML estimation selects the set of values of the model parameters that maximizes the likelihood function. Assuming all the previous hypotheses, the joint density function of (y_1, y_2, …, y_T) can be expressed as follows:
f(y; β, σ²) = (2πσ²)^(−T/2) · exp(−(y − Xβ)′(y − Xβ) / (2σ²))

Maximum Likelihood estimator
Then, the likelihood function can be stated as follows:
L(β, σ²; y) = (2πσ²)^(−T/2) · exp(−(y − Xβ)′(y − Xβ) / (2σ²))
What is the difference with respect to the joint density function? The expression is the same, but it is read as a function of the parameters given the observed sample, rather than as a function of the data.

Maximum Likelihood estimator
To obtain the ML estimators, we should first take the log-likelihood function:
ln L(β, σ²; y) = −(T/2)·ln(2π) − (T/2)·ln σ² − (y − Xβ)′(y − Xβ) / (2σ²)
Then, we should obtain the values of the parameters that maximize this function.

Maximum Likelihood estimator
The solutions to this optimization problem are:
β̂_ML = (X′X)⁻¹X′y
The ML estimator and the OLS estimator of β coincide. Both have the same properties.
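
A sketch that maximizes the Gaussian log-likelihood numerically (via scipy.optimize.minimize, on assumed simulated data) and checks that the ML estimate of β coincides with the OLS estimate:

```python
# Numerical ML estimation of the linear model, checked against OLS.
# The data-generating values are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=T)

def neg_loglik(theta, y, X):
    beta, log_s2 = theta[:-1], theta[-1]   # parameterize sigma^2 via its log
    s2 = np.exp(log_s2)
    e = y - X @ beta
    return 0.5 * T * np.log(2 * np.pi * s2) + (e @ e) / (2 * s2)

res = minimize(neg_loglik, x0=np.zeros(3), args=(y, X))
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
print(res.x[:2], beta_ols)   # the two estimates coincide (up to tolerance)
```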

Maximum Likelihood estimator
The ML estimator of the variance of the perturbation is:
σ̂²_ML = û′û / T
– It is consistent
– It is NOT unbiased, NOT linear and NOT efficient
– It has good asymptotic properties (it is an MLE)
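
A small simulation sketch of the bias: with true σ² = 1, the mean of û′û/(T − k) stays close to 1 while the mean of û′û/T falls below 1 in small samples (the sample size, number of replications, and model are illustrative assumptions).

```python
# Bias of the ML variance estimator u'u/T versus the unbiased u'u/(T-k).
import numpy as np

rng = np.random.default_rng(7)
T, k, reps = 20, 2, 20000
s2_ols, s2_ml = [], []
for _ in range(reps):
    X = np.column_stack([np.ones(T), rng.normal(size=T)])
    y = X @ np.array([1.0, 0.5]) + rng.normal(size=T)   # true sigma^2 = 1
    u_hat = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
    s2_ols.append(u_hat @ u_hat / (T - k))
    s2_ml.append(u_hat @ u_hat / T)
print(np.mean(s2_ols))   # close to 1: unbiased
print(np.mean(s2_ml))    # below 1: biased downward in small samples
```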