3-variable Regression: Derive OLS estimators of the 3-variable regression

Presentation transcript:

3-variable Regression: Derive OLS estimators of the 3-variable regression; Properties of the 3-variable OLS estimators

Derive OLS estimators of multiple regression

The model is $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$, with residuals $\hat\varepsilon = Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2$.

OLS minimizes the residual sum of squares $\sum \hat\varepsilon^2$:

$\min \text{RSS} = \min \sum \hat\varepsilon^2 = \min \sum (Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2)^2$

First-order conditions:

$\partial\text{RSS}/\partial\hat\beta_0 = 2\sum (Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2)(-1) = 0$
$\partial\text{RSS}/\partial\hat\beta_1 = 2\sum (Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2)(-X_1) = 0$
$\partial\text{RSS}/\partial\hat\beta_2 = 2\sum (Y - \hat\beta_0 - \hat\beta_1 X_1 - \hat\beta_2 X_2)(-X_2) = 0$

Rearranging the three first-order conditions gives the normal equations:

$n\hat\beta_0 + \hat\beta_1\sum X_1 + \hat\beta_2\sum X_2 = \sum Y$
$\hat\beta_0\sum X_1 + \hat\beta_1\sum X_1^2 + \hat\beta_2\sum X_1X_2 = \sum X_1Y$
$\hat\beta_0\sum X_2 + \hat\beta_1\sum X_1X_2 + \hat\beta_2\sum X_2^2 = \sum X_2Y$

Rewritten in matrix form:

$\begin{pmatrix} n & \sum X_1 & \sum X_2 \\ \sum X_1 & \sum X_1^2 & \sum X_1X_2 \\ \sum X_2 & \sum X_1X_2 & \sum X_2^2 \end{pmatrix}\begin{pmatrix} \hat\beta_0 \\ \hat\beta_1 \\ \hat\beta_2 \end{pmatrix} = \begin{pmatrix} \sum Y \\ \sum X_1Y \\ \sum X_2Y \end{pmatrix}$

that is, in matrix notation $(X'X)\hat\beta = X'Y$; the same notation covers both the 2-variable case and the 3-variable case.
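A minimal numpy sketch of solving these normal equations; the arrays y, x1, x2 below are illustrative placeholders, not data from the slides:

    import numpy as np

    # Illustrative data (hypothetical numbers, not from the slides)
    y  = np.array([3.0, 5.0, 4.0, 7.0, 8.0, 6.0])
    x1 = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 3.0])
    x2 = np.array([2.0, 1.0, 3.0, 2.0, 4.0, 5.0])

    n = len(y)
    X = np.column_stack([np.ones(n), x1, x2])   # columns: constant, X1, X2

    # Normal equations: (X'X) beta_hat = X'Y
    XtX = X.T @ X
    XtY = X.T @ y
    beta_hat = np.linalg.solve(XtX, XtY)        # [beta0_hat, beta1_hat, beta2_hat]
    print(beta_hat)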

Cramer's rule:

$\hat\beta_1 = \dfrac{\begin{vmatrix} n & \sum Y & \sum X_2 \\ \sum X_1 & \sum X_1Y & \sum X_1X_2 \\ \sum X_2 & \sum X_2Y & \sum X_2^2 \end{vmatrix}}{\begin{vmatrix} n & \sum X_1 & \sum X_2 \\ \sum X_1 & \sum X_1^2 & \sum X_1X_2 \\ \sum X_2 & \sum X_1X_2 & \sum X_2^2 \end{vmatrix}} = \dfrac{(\sum yx_1)(\sum x_2^2) - (\sum yx_2)(\sum x_1x_2)}{(\sum x_1^2)(\sum x_2^2) - (\sum x_1x_2)^2}$

$\hat\beta_2 = \dfrac{\begin{vmatrix} n & \sum X_1 & \sum Y \\ \sum X_1 & \sum X_1^2 & \sum X_1Y \\ \sum X_2 & \sum X_1X_2 & \sum X_2Y \end{vmatrix}}{\begin{vmatrix} n & \sum X_1 & \sum X_2 \\ \sum X_1 & \sum X_1^2 & \sum X_1X_2 \\ \sum X_2 & \sum X_1X_2 & \sum X_2^2 \end{vmatrix}} = \dfrac{(\sum yx_2)(\sum x_1^2) - (\sum yx_1)(\sum x_1x_2)}{(\sum x_1^2)(\sum x_2^2) - (\sum x_1x_2)^2}$

$\hat\beta_0 = \bar Y - \hat\beta_1 \bar X_1 - \hat\beta_2 \bar X_2$

(Lower-case $y$, $x_1$, $x_2$ denote deviations from the sample means.)
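A small sketch, on the same illustrative arrays as above, checking the deviation-form formulas against the matrix solution:

    import numpy as np

    y  = np.array([3.0, 5.0, 4.0, 7.0, 8.0, 6.0])
    x1 = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 3.0])
    x2 = np.array([2.0, 1.0, 3.0, 2.0, 4.0, 5.0])

    # Deviations from the sample means (lower-case y, x1, x2 on the slide)
    yd, x1d, x2d = y - y.mean(), x1 - x1.mean(), x2 - x2.mean()

    den = (x1d @ x1d) * (x2d @ x2d) - (x1d @ x2d) ** 2
    b1 = ((yd @ x1d) * (x2d @ x2d) - (yd @ x2d) * (x1d @ x2d)) / den
    b2 = ((yd @ x2d) * (x1d @ x1d) - (yd @ x1d) * (x1d @ x2d)) / den
    b0 = y.mean() - b1 * x1.mean() - b2 * x2.mean()
    print(b0, b1, b2)   # agrees with solving (X'X) beta_hat = X'Y directly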

Or in matrix form: $(X'X)\hat\beta = X'Y$, with dimensions $(3\times 3)(3\times 1) = (3\times 1)$, so that

$\hat\beta = (X'X)^{-1}(X'Y)$

Variance-covariance matrix: $\text{Var-cov}(\hat\beta) = \sigma^2 (X'X)^{-1}$, estimated with $\hat\sigma^2 = \dfrac{\sum \hat\varepsilon^2}{n-3}$:

$\text{Var-cov}(\hat\beta) = \begin{pmatrix} \text{Var}(\hat\beta_0) & \text{Cov}(\hat\beta_0,\hat\beta_1) & \text{Cov}(\hat\beta_0,\hat\beta_2) \\ \text{Cov}(\hat\beta_1,\hat\beta_0) & \text{Var}(\hat\beta_1) & \text{Cov}(\hat\beta_1,\hat\beta_2) \\ \text{Cov}(\hat\beta_2,\hat\beta_0) & \text{Cov}(\hat\beta_2,\hat\beta_1) & \text{Var}(\hat\beta_2) \end{pmatrix} = \sigma^2 (X'X)^{-1}$
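A short sketch of this variance-covariance computation on the illustrative arrays; the names sigma2_hat and var_cov are chosen here for readability, not taken from the slides:

    import numpy as np

    y  = np.array([3.0, 5.0, 4.0, 7.0, 8.0, 6.0])
    x1 = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 3.0])
    x2 = np.array([2.0, 1.0, 3.0, 2.0, 4.0, 5.0])

    n = len(y)
    X = np.column_stack([np.ones(n), x1, x2])
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

    resid = y - X @ beta_hat
    sigma2_hat = (resid @ resid) / (n - 3)          # RSS / (n - 3)
    var_cov = sigma2_hat * np.linalg.inv(X.T @ X)   # 3x3 variance-covariance matrix
    std_err = np.sqrt(np.diag(var_cov))             # s.e. of beta0_hat, beta1_hat, beta2_hat
    print(var_cov)
    print(std_err)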

n X1 X2 X1 X12 X1X2 X2 X2X1 X22 = 2 ^ -1 2 = ^ u2 n-3 and = 2 n- k -1 k=2 # of independent variables ( not including the constant term)

Properties of multiple OLS estimators

1. The regression line (surface) passes through the means of $Y$, $X_1$ and $X_2$: $\bar Y = \hat\beta_0 + \hat\beta_1\bar X_1 + \hat\beta_2\bar X_2$, so $\hat\beta_0 = \bar Y - \hat\beta_1\bar X_1 - \hat\beta_2\bar X_2$. (Linear in parameters; regression through the mean.)
2. $\hat Y = \bar Y + \hat\beta_1 x_1 + \hat\beta_2 x_2$, or in deviation form $\hat y = \hat\beta_1 x_1 + \hat\beta_2 x_2$. The estimators are unbiased: $E(\hat\beta_i) = \beta_i$.
3. $\sum\hat\varepsilon = 0$: the residuals have zero mean; also $\sum\hat\varepsilon X_1 = \sum\hat\varepsilon X_2 = 0$.
4. More generally $\sum\hat\varepsilon X_k = 0$ and $\sum\hat\varepsilon\hat Y = 0$: the residuals are uncorrelated with each regressor and with the fitted values; the error variance is constant, $\text{Var}(\varepsilon) = \sigma^2$.
5. The data come from a random sample.

Properties of multiple OLS estimators (continued)

6. When $X_1$ and $X_2$ are closely related, $\text{var}(\hat\beta_1)$ and $\text{var}(\hat\beta_2)$ become very large (and under exact collinearity they are undefined), so the true values of $\beta_1$ and $\beta_2$ are difficult to pin down. All the normality assumptions of the two-variable regression also apply to the multiple regression, but one additional assumption is needed: no exact linear relationship among the independent variables (no perfect collinearity, i.e. $X_k \neq \lambda X_j$).
7. The greater the variation in the sample values of $X_1$ or $X_2$, the smaller the variances of $\hat\beta_1$ and $\hat\beta_2$, and the more precise the estimates.
8. The OLS estimators are BLUE (Gauss-Markov theorem).
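Property 6 can be illustrated with a quick simulation sketch (all parameter values below are made up for illustration, not from the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    def se_beta1(corr):
        # Draw X2 with the chosen correlation to X1 and return se(beta1_hat)
        x1 = rng.normal(size=n)
        x2 = corr * x1 + np.sqrt(1.0 - corr**2) * rng.normal(size=n)
        y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x1, x2])
        b = np.linalg.solve(X.T @ X, X.T @ y)
        s2 = ((y - X @ b) @ (y - X @ b)) / (n - 3)
        return np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

    for corr in (0.0, 0.9, 0.99):
        print(corr, se_beta1(corr))   # se(beta1_hat) rises as X1 and X2 become collinear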

Note: Don't misuse the adjusted R2; read Studenmund (2001), pp. 53-55.

The adjusted $R^2$ ($\bar R^2$) is one indicator of overall fit:

$R^2 = \dfrac{\text{ESS}}{\text{TSS}} = 1 - \dfrac{\text{RSS}}{\text{TSS}} = 1 - \dfrac{\sum\hat\varepsilon^2}{\sum y^2}$

$\bar R^2 = 1 - \dfrac{\sum\hat\varepsilon^2/(n-k-1)}{\sum y^2/(n-1)} = 1 - \dfrac{\hat\sigma^2}{S_Y^2} = 1 - (1-R^2)\dfrac{n-1}{n-k-1}$

where $k$ is the number of independent variables (not counting the constant term) and $n$ is the number of observations, so $n-k-1$ is the residual degrees of freedom.

$\bar R^2 \le R^2$; while $0 \le R^2 \le 1$, the adjusted $\bar R^2$ can be negative.
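A small sketch computing $R^2$ and $\bar R^2$ on the illustrative arrays, with k counted as the number of slope coefficients (the convention adopted above):

    import numpy as np

    y  = np.array([3.0, 5.0, 4.0, 7.0, 8.0, 6.0])
    x1 = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 3.0])
    x2 = np.array([2.0, 1.0, 3.0, 2.0, 4.0, 5.0])
    n, k = len(y), 2                      # k slope coefficients, plus a constant

    X = np.column_stack([np.ones(n), x1, x2])
    resid = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
    rss = resid @ resid
    tss = ((y - y.mean()) ** 2).sum()

    r2 = 1.0 - rss / tss
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    print(r2, adj_r2)                     # adj_r2 <= r2, and adj_r2 can even be negative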

The meaning of partial regression coefficients

$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$ (suppose this is the true model)

$\partial Y/\partial X_1 = \beta_1$: $\beta_1$ measures the change in the mean value of $Y$ per unit change in $X_1$, holding $X_2$ constant; that is, the 'direct' effect of a unit change in $X_1$ on the mean value of $Y$, net of $X_2$.

$\partial Y/\partial X_2 = \beta_2$: holding $X_1$ constant, the direct effect of a unit change in $X_2$ on the mean value of $Y$.

Holding constant: to assess the true contribution of $X_1$ to the change in $Y$, we control for the influence of $X_2$.
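One way to see "holding X2 constant" numerically is to partial X2 out of both Y and X1 and regress the residuals. The following Frisch-Waugh-style sketch (not from the slides, illustrative data) reproduces the multiple-regression slope on X1:

    import numpy as np

    y  = np.array([3.0, 5.0, 4.0, 7.0, 8.0, 6.0])
    x1 = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 3.0])
    x2 = np.array([2.0, 1.0, 3.0, 2.0, 4.0, 5.0])
    n = len(y)

    def resid_on(z, w):
        # Residuals from regressing z on a constant and w
        W = np.column_stack([np.ones(len(w)), w])
        return z - W @ np.linalg.solve(W.T @ W, W.T @ z)

    y_star  = resid_on(y, x2)    # Y with the influence of X2 removed
    x1_star = resid_on(x1, x2)   # X1 with the influence of X2 removed
    b1_partial = (x1_star @ y_star) / (x1_star @ x1_star)

    X = np.column_stack([np.ones(n), x1, x2])
    b1_multiple = np.linalg.solve(X.T @ X, X.T @ y)[1]
    print(b1_partial, b1_multiple)   # the two slopes coincide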

Y Y  ^ C X1 X2 Y = 0 + 1 X1 + 2 X2 +  TSS n-1

Suppose X3 is not an explanatory variable but is included in the regression.

Partial effect: holding other variables constant

Example: $Y$ = actual inflation rate (%), $X_1$ = unemployment rate (%), $X_2$ = expected inflation rate (%).

Direct effect from $X_1$: $\partial Y/\partial X_1 = \hat\beta_1 = -1.3925$ (holding $X_2$ constant), from $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$.
Direct effect from $X_2$: $\partial Y/\partial X_2 = \hat\beta_2 = 1.4700$ (holding $X_1$ constant).

Auxiliary regressions between the regressors:
$X_2 = b_{20} + b_{21} X_1 + \varepsilon_{12}$, with $\partial X_2/\partial X_1 = b_{21} = 1.1138$ (the channel for the indirect effect from $X_2$);
$X_1 = b_{10} + b_{12} X_2 + \varepsilon_{21}$.

Total effect from $X_1$: $\hat\beta_1 + \hat\beta_2 \cdot b_{21} = -1.392472 + (1.470032)(1.11385)$
('direct' + 'indirect') $= -1.392472 + 1.637395 = 0.244923$

If instead we run the simple regression $Y = \beta_0' + \beta_1' X_1 + \varepsilon$, its slope $\partial Y/\partial X_1 = \hat\beta_1' = 0.2449$ implicitly reflects that the hidden true model includes $X_2$.
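The decomposition "simple-regression slope = direct + indirect effect" holds exactly in any sample. A sketch on illustrative data (the inflation/unemployment figures above are from the slides, the arrays below are not):

    import numpy as np

    y  = np.array([3.0, 5.0, 4.0, 7.0, 8.0, 6.0])
    x1 = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 3.0])
    x2 = np.array([2.0, 1.0, 3.0, 2.0, 4.0, 5.0])
    n = len(y)

    def slope(z, w):
        # Slope coefficient from the simple regression of z on a constant and w
        W = np.column_stack([np.ones(len(w)), w])
        return np.linalg.solve(W.T @ W, W.T @ z)[1]

    X = np.column_stack([np.ones(n), x1, x2])
    b0, b1, b2 = np.linalg.solve(X.T @ X, X.T @ y)   # multiple regression

    b21 = slope(x2, x1)             # auxiliary regression of X2 on X1
    total_from_x1 = b1 + b2 * b21   # direct + indirect effect of X1
    simple_slope = slope(y, x1)     # slope of Y on X1 alone
    print(total_from_x1, simple_slope)   # identical: the simple slope is the total effect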

[Regression output: $Y = \beta_0' + \beta_1' X_1 + \varepsilon'$, regressors C (constant) and $X_1$; the error term $\varepsilon'$ includes $X_2$.]

[Regression output: auxiliary regression $X_2 = b_{20} + b_{21} X_1 + \varepsilon''$, regressors C (constant) and $X_1$.]

Total effect from $X_2$: $\hat\beta_2 + \hat\beta_1 \cdot b_{12} = 1.470032 + (-1.392472)(0.369953)$
('direct' + 'indirect') $= 1.470032 - 0.515149 = 0.954883$

If instead we run the simple regression $Y = \beta_0' + \beta_2' X_2 + \varepsilon$, its slope $\partial Y/\partial X_2 = \hat\beta_2' = 0.954883$ implicitly reflects that the hidden true model includes $X_1$.

[Regression output: auxiliary regression $X_1 = b_{10} + b_{12} X_2 + \varepsilon'''$, regressors C (constant) and $X_2$.]

[Regression output: $Y = \beta_0' + \beta_2' X_2 + \varepsilon''''$, regressors C (constant) and $X_2$; the error term $\varepsilon''''$ includes $X_1$.]