The regression model in matrix form


The regression model in matrix form. In matrix form the model is y = Xβ + ε. The least squares normal equations take the form X'Xb = X'y, and we can calculate the OLS estimator as b = (X'X)^(-1)X'y, under the assumption that X'X is invertible.
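As a quick sketch (with made-up data for a model with a constant and one regressor), the estimator can be computed by solving the normal equations directly; the residuals then satisfy X'e = 0, which is exactly what the normal equations say:

```python
import numpy as np

# Illustrative data, made up for this sketch: N = 5 observations, one regressor plus a constant.
X = np.column_stack([np.ones(5), np.array([1.0, 2.0, 3.0, 4.0, 5.0])])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Solve the normal equations X'X b = X'y for the OLS estimator.
# Equivalent to (X'X)^{-1} X'y when X'X is invertible, but numerically more stable.
b = np.linalg.solve(X.T @ X, X.T @ y)

# The residuals e = y - Xb are orthogonal to the columns of X: X'e = 0.
e = y - X @ b
print(X.T @ e)   # numerically zero
```

In practice `np.linalg.lstsq` (or a dedicated regression library) is preferred over forming X'X explicitly, but solving the normal equations makes the link to the formula transparent.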

Distribution of the OLS estimator. This depends on our assumptions about the errors, and in particular on the variance-covariance matrix of the errors. This is a matrix with the variances of the errors on the diagonal and the covariances off the diagonal, e.g. if N = 3:

Var(ε) =
[ Var(ε1)      Cov(ε1,ε2)   Cov(ε1,ε3) ]
[ Cov(ε2,ε1)   Var(ε2)      Cov(ε2,ε3) ]
[ Cov(ε3,ε1)   Cov(ε3,ε2)   Var(ε3)    ]

Note that this matrix is symmetric by construction, since Cov(εi, εj) = Cov(εj, εi).
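The N = 3 case can be built explicitly; the variances and covariances below are hypothetical values chosen just to show the construction and the symmetry:

```python
import numpy as np

# Hypothetical error variances and covariances for the N = 3 example.
var = np.array([1.0, 2.0, 1.5])                   # Var(e1), Var(e2), Var(e3)
cov = {(0, 1): 0.3, (0, 2): -0.1, (1, 2): 0.2}    # Cov(ei, ej) for i < j

# Variances on the diagonal, covariances off the diagonal.
V = np.diag(var)
for (i, j), c in cov.items():
    V[i, j] = V[j, i] = c    # Cov(ei, ej) = Cov(ej, ei), so V is symmetric

print(np.allclose(V, V.T))   # True: symmetric by construction
```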

The Gauss-Markov assumptions in matrix form:
1. the errors have expectation zero, E(ε) = 0;
2. the errors have constant variance, Var(εi) = σ² for all i (homoskedasticity);
3. the errors are uncorrelated, Cov(εi, εj) = 0 for all i ≠ j (no autocorrelation);
4. the X matrix is non-stochastic (fixed in repeated samples);
and finally, 5. the errors follow a normal distribution. These assumptions are essentially the same as for the bivariate regression model.

Assumptions 2 and 3 mean that the variance-covariance matrix of the errors takes the form E(εε') = σ²I, i.e. σ² on the diagonal and zeros off the diagonal. If the GM assumptions hold then we can show that OLS is BLUE, i.e. the Best Linear Unbiased Estimator.

OLS is unbiased under the GM assumptions. Proof: GM4 states that the X matrix is non-stochastic and GM1 states that the error has expectation zero. Using these we have:

b = (X'X)^(-1)X'y = (X'X)^(-1)X'(Xβ + ε) = β + (X'X)^(-1)X'ε,

so E(b) = β + (X'X)^(-1)X'E(ε) = β.
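Unbiasedness can be illustrated by simulation: hold X fixed across repeated samples (GM4), redraw zero-mean errors each time (GM1), and average the resulting OLS estimates. The design and true coefficients below are chosen arbitrarily for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed (non-stochastic) X and true coefficients, chosen for this sketch.
N = 50
X = np.column_stack([np.ones(N), np.linspace(0, 10, N)])
beta = np.array([1.0, 2.0])
A = np.linalg.inv(X.T @ X) @ X.T   # the fixed linear map b = A y

# Repeated samples: redraw zero-mean errors, keep X fixed.
draws = np.array([A @ (X @ beta + rng.normal(0, 1, N))
                  for _ in range(20000)])

# The average of the estimates should be close to the true beta: E(b) = beta.
print(draws.mean(axis=0))
```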

The variance of the OLS estimator can be derived as follows. Since b − β = (X'X)^(-1)X'ε, we have:

Var(b) = E[(b − β)(b − β)'] = E[(X'X)^(-1)X'εε'X(X'X)^(-1)].

By GM4 the X matrix is non-stochastic, so it can be taken outside the expectation. Therefore we have Var(b) = (X'X)^(-1)X'E(εε')X(X'X)^(-1). By GM2 and GM3 we can write E(εε') = σ²I.

Therefore:

Var(b) = (X'X)^(-1)X'(σ²I)X(X'X)^(-1) = σ²(X'X)^(-1).

This is the variance-covariance matrix of the OLS estimator. It contains the variances of the coefficient estimates on the diagonal and the covariances off the diagonal. This matrix will be k x k, where k is the number of coefficients estimated (including any constant term).
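The standard result Var(b) = σ²(X'X)^(-1) can be checked by simulation: the covariance matrix of the OLS estimates across many repeated error draws should match the formula. The design, true coefficients, and σ² below are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed design and a known error variance sigma^2 (values made up for this sketch).
N, sigma2 = 40, 2.0
X = np.column_stack([np.ones(N), np.linspace(-5, 5, N)])
beta = np.array([0.5, 1.5])
A = np.linalg.inv(X.T @ X) @ X.T

# Theoretical k x k variance-covariance matrix of the OLS estimator.
V_theory = sigma2 * np.linalg.inv(X.T @ X)

# Simulated counterpart: covariance of b across repeated error draws.
draws = np.array([A @ (X @ beta + rng.normal(0, np.sqrt(sigma2), N))
                  for _ in range(50000)])
V_sim = np.cov(draws, rowvar=False)

print(np.allclose(V_sim, V_theory, atol=0.01))
```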

Example: the bivariate regression model. In the two-variable case, σ²(X'X)^(-1) reproduces the familiar formulas, e.g. Var(b2) = σ²/Σ(xi − x̄)². The Gauss-Markov Theorem shows that, under the GM assumptions, the variance-covariance matrix of any linear unbiased estimator differs from the OLS variance-covariance matrix by a positive semi-definite matrix. Hence OLS is BLUE.
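For the bivariate model, the diagonal of σ²(X'X)^(-1) agrees exactly with the textbook formulas Var(b2) = σ²/Σ(xi − x̄)² and Var(b1) = σ²Σxi²/(nΣ(xi − x̄)²); the regressor values and σ² below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(3.0, 2.0, 30)                # made-up regressor values
X = np.column_stack([np.ones_like(x), x])
sigma2 = 1.7                                # assumed error variance

# Matrix formula for the variance-covariance matrix of (b1, b2).
V = sigma2 * np.linalg.inv(X.T @ X)

# Textbook bivariate formulas.
n, Sxx = len(x), ((x - x.mean()) ** 2).sum()
var_slope = sigma2 / Sxx                             # Var(b2)
var_intercept = sigma2 * (x ** 2).sum() / (n * Sxx)  # Var(b1)

print(np.allclose(V[1, 1], var_slope), np.allclose(V[0, 0], var_intercept))
```

The agreement is an algebraic identity, not an approximation: inverting the 2 x 2 matrix X'X by hand gives exactly these expressions.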

The sample variance-covariance matrix is obtained by replacing the unknown error variance σ² with an unbiased estimator, s² = e'e/(N − k), where e is the vector of OLS residuals. This gives s²(X'X)^(-1), which is the expression we use to calculate the coefficient standard errors reported in the regression output: the standard error of each coefficient is the square root of the corresponding diagonal element.
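Putting the pieces together, the sketch below (on made-up data) estimates the coefficients, forms s² from the residuals, and reads the standard errors off the diagonal of s²(X'X)^(-1):

```python
import numpy as np

# Made-up data for this sketch: N = 20 observations, k = 2 coefficients.
rng = np.random.default_rng(3)
N = 20
X = np.column_stack([np.ones(N), rng.uniform(0, 10, N)])
y = 1.0 + 0.5 * X[:, 1] + rng.normal(0, 1, N)

# OLS estimates and residuals.
b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

# Unbiased estimator of the error variance: s^2 = e'e / (N - k).
k = X.shape[1]
s2 = (e @ e) / (N - k)

# Sample variance-covariance matrix; standard errors are the square roots
# of its diagonal elements, as reported in regression output.
V_hat = s2 * np.linalg.inv(X.T @ X)
std_errors = np.sqrt(np.diag(V_hat))
print(std_errors)
```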