Copyright © 2007 Thomson Asia Pte. Ltd. All rights reserved. CH5 Multiple Regression Analysis: OLS Asymptotics

2 5.1 Consistency
y = β0 + β1x1 + β2x2 + … + βkxk + u (5.1)
Under the Gauss-Markov assumptions (MLR.1–MLR.5), OLS is BLUE, but in other cases it will not always be possible to find unbiased estimators.

3 5.1 Consistency
Under the CLM assumptions, the OLS estimators have normal sampling distributions, which led directly to the t and F distributions for the t and F statistics. If the error is not normally distributed, the distribution of a t or F statistic is not exactly t or F for any sample size.

4 5.1 Consistency
In addition to finite sample properties, it is important to know the asymptotic properties, or large sample properties, of estimators and test statistics. These properties are defined as the sample size grows without bound.

5 Theorem 5.1 Consistency of OLS
Even without the normality assumption (Assumption MLR.6), t and F statistics have approximately t and F distributions, at least for large sample sizes.

6 Consistency of OLS
Under the Gauss-Markov assumptions, the OLS estimator is consistent (and unbiased). Consistency is a minimal requirement for an estimator.

7

8 Theorem 5.1 Consistency of OLS
Under Assumptions MLR.1 through MLR.4, the OLS estimators β̂j are consistent estimators of βj, for j = 0, 1, …, k.

9 Proving Consistency for Simple Regression
β̂1 = β1 + [n⁻¹ Σᵢ (xᵢ₁ − x̄₁)uᵢ] / [n⁻¹ Σᵢ (xᵢ₁ − x̄₁)²] (5.2)
plim β̂1 = β1 + Cov(x₁, u)/Var(x₁) = β1, because Cov(x₁, u) = 0 (5.3)
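A minimal simulation sketch of this consistency result (the true values β0 = 1 and β1 = 2 are assumed for illustration, not taken from the text): when E(u) = 0 and Cov(x, u) = 0 hold by construction, the simple-regression OLS slope should settle toward β1 as n grows.

```python
import random

def ols_slope(x, y):
    """Simple-regression OLS slope: sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den

random.seed(1)
for n in (50, 500, 50000):
    x = [random.gauss(0, 1) for _ in range(n)]
    u = [random.gauss(0, 1) for _ in range(n)]   # E(u) = 0, Cov(x, u) = 0 by construction
    y = [1 + 2 * xi + ui for xi, ui in zip(x, u)]
    print(n, round(ols_slope(x, y), 3))          # estimates drift toward 2 as n grows
```

Each run differs by sampling error of order 1/√n, which is the sense in which plim β̂1 = β1.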

10 Assumption MLR.4′
Assumption MLR.4′ requires only that E(u) = 0 and Cov(xj, u) = 0, for j = 1, 2, …, k — that is, each xj is uncorrelated with u. For consistency, this weaker assumption of zero mean and zero correlation is enough. Under MLR.4′, however, β0 + β1x1 + … + βkxk need not represent the population regression function E(y|x1, x2, …, xk). If even this weaker assumption fails, OLS is biased in finite samples and inconsistent — the problem does not disappear with more data.

11 Deriving the Inconsistency in OLS
Just as we derived the omitted variable bias earlier, we can now derive the inconsistency, or asymptotic bias, in this case.

12 Deriving the Inconsistency in OLS
We can view the inconsistency as the large-sample analogue of the bias. An important point about inconsistency in OLS estimators: inconsistency is a large sample problem. It does not go away as we add data — if anything, the problem gets worse, because the estimator converges ever more tightly to the wrong value.
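A simulation sketch of this point (the coefficients and the 0.5 correlation are illustrative assumptions, not from the text): omit a regressor x2 that is correlated with the included x1. The short-regression slope then converges to β1 + β2·δ, where δ = Cov(x1, x2)/Var(x1), and adding data only tightens the estimate around that wrong value.

```python
import random

def ols_slope(x, y):
    """Simple-regression OLS slope."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return (sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
            / sum((a - xbar) ** 2 for a in x))

random.seed(2)
beta1, beta2 = 2.0, 1.0
for n in (100, 10000, 200000):
    x1 = [random.gauss(0, 1) for _ in range(n)]
    x2 = [0.5 * a + random.gauss(0, 1) for a in x1]   # Cov(x1, x2) = 0.5, Var(x1) = 1
    y  = [beta1 * a + beta2 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]
    # regress y on x1 only, wrongly omitting x2
    print(n, round(ols_slope(x1, y), 3))  # settles near 2 + 1 * 0.5 = 2.5, not 2
```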

Asymptotic Normality and Large Sample Inference
Recall that under the CLM assumptions (MLR.1–MLR.6), the sampling distributions are exactly normal, so we could derive the t and F distributions for testing. This exact normality was due to assuming that the population error distribution is normal. The assumption of normal errors (MLR.6) implies that the distribution of y, given the x's, is normal.

14 Large Sample Inference
It is easy to come up with examples for which this exact normality assumption fails. Wages, arrests, savings, etc. cannot be normally distributed, since a normal distribution is symmetric while these variables are non-negative and typically skewed. The normality assumption is not needed to conclude that OLS is BLUE, but exact inference based on t and F statistics requires MLR.6.

15 Central Limit Theorem
Based on the central limit theorem (CLT), we can show that the OLS estimators are asymptotically normal. Asymptotic normality implies that P(Z ≤ z) → Φ(z) as n → ∞. The CLT states that the standardized average of any population with mean μ and variance σ² is asymptotically N(0, 1):
Z = (Ȳ − μ)/(σ/√n) ~ᵃ N(0, 1)
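A quick sketch of the CLT at work (the exponential population is an assumed example, chosen because it is heavily skewed): draw many samples, standardize each sample mean, and check that roughly 95% of the Z values land in (−1.96, 1.96), as they would for a standard normal.

```python
import random

random.seed(3)
n, reps = 200, 2000
zs = []
for _ in range(reps):
    sample = [random.expovariate(1.0) for _ in range(n)]  # exponential: mu = 1, sigma = 1
    ybar = sum(sample) / n
    zs.append((ybar - 1.0) / (1.0 / n ** 0.5))            # Z = (Ybar - mu) / (sigma / sqrt(n))
coverage = sum(-1.96 < z < 1.96 for z in zs) / reps
print(round(coverage, 3))   # close to 0.95 even though the population is skewed
```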

16 Theorem 5.2 Asymptotic Normality of OLS
Under the Gauss-Markov assumptions MLR.1–MLR.5:
(i) √n(β̂j − βj) ~ᵃ Normal(0, σ²/aj²), where aj² = plim(n⁻¹ Σᵢ r̂ᵢj²) and the r̂ᵢj are the residuals from regressing xj on the other independent variables;
(ii) σ̂² is a consistent estimator of σ² = Var(u);
(iii) for each j, (β̂j − βj)/se(β̂j) ~ᵃ Normal(0, 1).

17 Asymptotic Normality (cont)
Because the t distribution approaches the normal distribution for large degrees of freedom, we can also say that (β̂j − βj)/se(β̂j) ~ᵃ t(n−k−1). Note that while we no longer need to assume normality (MLR.6) with a large sample, we do still need to assume zero conditional mean (MLR.4) and homoskedasticity of u (MLR.5).

18 Asymptotic Normality (cont)
When u is not normally distributed and the sample size is not very large, the t distribution can be a poor approximation to the distribution of the t statistic; the central limit theorem delivers a useful approximation only in large samples. The homoskedasticity assumption remains essential: if Var(y|X) is not constant, the usual t statistics and confidence intervals are invalid no matter how large the sample size is.

19 Asymptotic Standard Errors
If u is not normally distributed, we sometimes refer to the standard error as an asymptotic standard error, since se(β̂j) = σ̂/[SSTj(1 − Rj²)]^(1/2), which is approximately cj/√n for a constant cj. So we can expect standard errors to shrink at a rate proportional to the inverse of √n.
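A sketch of the √n shrinkage rate (the data-generating process is an illustrative assumption, not from the text): quadrupling the sample size should roughly halve the usual simple-regression standard error of the slope.

```python
import random

def slope_se(x, y):
    """Usual simple-regression se of the slope: sigma_hat / sqrt(SST_x)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sstx = sum((a - xbar) ** 2 for a in x)
    b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / sstx
    b0 = ybar - b1 * xbar
    ssr = sum((b - b0 - b1 * a) ** 2 for a, b in zip(x, y))
    sigma2_hat = ssr / (n - 2)
    return (sigma2_hat / sstx) ** 0.5

random.seed(4)
ses = {}
for n in (400, 1600, 6400):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [1 + 2 * a + random.gauss(0, 1) for a in x]
    ses[n] = slope_se(x, y)
    print(n, round(ses[n], 4))
print(round(ses[400] / ses[1600], 2))   # each 4x increase in n roughly halves se
```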

20 Lagrange Multiplier Statistic
Once we are using large samples and relying on asymptotic normality for inference, we can use more than t and F statistics. The Lagrange multiplier (LM) statistic is an alternative for testing multiple exclusion restrictions. Because the LM statistic uses an auxiliary regression, it is sometimes called an n-R-squared statistic (n·Ru²).

21 LM Statistic (step by step)
Suppose we have a standard model, y = β0 + β1x1 + β2x2 + … + βkxk + u (5.11) (unrestricted), and our null hypothesis is H0: βk−q+1 = 0, …, βk = 0 (5.12). Then:
(1) Run the restricted model (omitting the last q regressors) and save the residuals ũ.
(2) Regress ũ on all of the original independent variables and obtain the R-squared, Ru².
(3) Compute LM = n·Ru².
(4) Compare LM with a critical value from the χ²(q) distribution (or compute the p-value); if LM exceeds the critical value, reject H0.
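The steps above can be sketched end to end in code (simulated data with hypothetical coefficients, not the textbook example): the unrestricted model is y = β0 + β1x1 + β2x2 + u, the null is H0: β2 = 0 (so q = 1), and the data are generated so that H0 is true.

```python
import random

def ols(yv, cols):
    """OLS with an intercept via the normal equations; returns (betas, residuals, R^2)."""
    m = len(yv)
    X = [[1.0] + [c[i] for c in cols] for i in range(m)]
    k = len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(m)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * yv[i] for i in range(m)) for p in range(k)]
    for p in range(k):                      # Gaussian elimination with partial pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv], b[p], b[piv] = A[piv], A[p], b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            A[r] = [A[r][q] - f * A[p][q] for q in range(k)]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):          # back-substitution
        beta[p] = (b[p] - sum(A[p][q] * beta[q] for q in range(p + 1, k))) / A[p][p]
    resid = [yv[i] - sum(X[i][q] * beta[q] for q in range(k)) for i in range(m)]
    ybar = sum(yv) / m
    sst = sum((v - ybar) ** 2 for v in yv)
    r2 = 1.0 - sum(e * e for e in resid) / sst
    return beta, resid, r2

random.seed(5)
n = 500
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y  = [1.0 + 0.5 * a + random.gauss(0, 1) for a in x1]   # beta2 = 0, so H0 holds

_, u_tilde, _ = ols(y, [x1])          # (1) restricted model; save residuals
_, _, r2_u = ols(u_tilde, [x1, x2])   # (2) regress residuals on ALL regressors
LM = n * r2_u                         # (3) LM = n * R^2_u
print(round(LM, 3), LM > 3.841)       # (4) compare with the chi-square(1) 5% value
```

Under a true null, LM behaves like a χ²(1) draw, so the test should fail to reject far more often than not.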

22 LM Statistic (cont)
With a large sample, the results from an F test and from an LM test should be similar. But unlike the F test and t test for a single exclusion, the LM test and F test will not be numerically identical.

23 Example 5.3 We use the LM statistic to test the null hypothesis that avgsen and tottime have no effect on narr86 once the other factors have been controlled for.

24 Example 5.3

Asymptotic Efficiency of OLS
Under the Gauss-Markov assumptions, the OLS estimators have the smallest asymptotic variances. The OLS estimators are BLUE and are also asymptotically efficient.

Asymptotic Efficiency of OLS
Consider the simple regression case. In the model y = β0 + β1x + u, an alternative class of consistent estimators is
β̃1 = Σᵢ (zᵢ − z̄)yᵢ / Σᵢ (zᵢ − z̄)xᵢ,
where zᵢ = g(xᵢ) is any function of xᵢ.

Asymptotic Efficiency of OLS
We can apply the law of large numbers to the numerator and denominator, which converge in probability to Cov(z, u) and Cov(z, x), respectively. Provided Cov(z, x) ≠ 0, plim β̃1 = β1 + Cov(z, u)/Cov(z, x) = β1, because Cov(z, u) = 0 under MLR.4.
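A simulation sketch comparing sampling variability (the data-generating process and the choice g(x) = x³ are illustrative assumptions, not from the text): both estimators are consistent, but across many replications the OLS slope should show the smaller spread, which is what asymptotic efficiency predicts.

```python
import random
import statistics

def slope_with_z(z, x, y):
    """beta1tilde = sum((zi - zbar) * yi) / sum((zi - zbar) * xi)."""
    zbar = sum(z) / len(z)
    return (sum((zi - zbar) * yi for zi, yi in zip(z, y))
            / sum((zi - zbar) * xi for zi, xi in zip(z, x)))

random.seed(6)
n, reps = 200, 1000
ols_est, alt_est = [], []
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [1 + 2 * a + random.gauss(0, 1) for a in x]
    ols_est.append(slope_with_z(x, x, y))                  # z = x reproduces OLS
    alt_est.append(slope_with_z([a ** 3 for a in x], x, y))  # z = g(x) = x^3
print(round(statistics.stdev(ols_est), 3), round(statistics.stdev(alt_est), 3))
```

For x ~ N(0, 1) the asymptotic variance ratio is Var(x³)/Cov(x³, x)² against 1/Var(x), i.e. 15/9 versus 1, so the alternative's standard deviation should run roughly 30% larger.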

Asymptotic Efficiency of OLS
Under the Gauss-Markov assumptions, Avar √n(β̃1 − β1) ≥ Avar √n(β̂1 − β1): the OLS estimator has the smallest asymptotic variance in this class of consistent estimators, so OLS is asymptotically efficient.