Presentation on theme: "Classical Linear Regression Model Notation and Assumptions Model Estimation –Method of Moments –Least Squares –Partitioned Regression Model Interpretation."— Presentation transcript:

1 Classical Linear Regression Model
Notation and Assumptions
Model Estimation
–Method of Moments
–Least Squares
–Partitioned Regression
Model Interpretation

2 Notations
y: Dependent Variable (Regressand)
–$y_i$, i = 1,2,…,n
X: Explanatory Variables (Regressors)
–$x_i'$ (the i-th row of X), i = 1,2,…,n
–$x_{ij}$, i = 1,2,…,n; j = 1,2,…,K

3 Assumptions
Assumption 1: Linearity
$y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \dots + \beta_K x_{iK} + \varepsilon_i$ (i = 1,2,…,n)
Linearity in Vector Notation
$y_i = x_i'\beta + \varepsilon_i$ (i = 1,2,…,n)

4 Assumptions
Linearity in Matrix Notation
$y = X\beta + \varepsilon$
where y is $n \times 1$, X is $n \times K$, $\beta$ is $K \times 1$, and $\varepsilon$ is $n \times 1$.

5 Assumptions
Assumption 2: Exogeneity
$E(\varepsilon_i \mid X) = E(\varepsilon_i \mid x_{j1}, x_{j2}, \dots, x_{jK};\ j = 1,2,\dots,n) = 0$ (i = 1,2,…,n)
Implications of Exogeneity:
–$E(\varepsilon_i) = E(E(\varepsilon_i \mid X)) = 0$
–$E(x_{jk}\varepsilon_i) = E(E(x_{jk}\varepsilon_i \mid X)) = E(x_{jk} E(\varepsilon_i \mid X)) = 0$
–$\mathrm{Cov}(\varepsilon_i, x_{jk}) = E(x_{jk}\varepsilon_i) - E(x_{jk})E(\varepsilon_i) = 0$

6 Assumptions
Assumption 3: No Multicollinearity
$\mathrm{rank}(X) = K$, with probability 1.
Assumption 4: Spherical Error Variance
$E(\varepsilon_i^2 \mid X) = \sigma^2 > 0$, i = 1,2,…,n
$E(\varepsilon_i \varepsilon_j \mid X) = 0$, i,j = 1,2,…,n; i ≠ j
In matrix notation: $\mathrm{Var}(\varepsilon \mid X) = E(\varepsilon\varepsilon' \mid X) = \sigma^2 I_n$

7 Assumptions
Implications of Spherical Error Variance (Homoscedasticity and No Autocorrelation):
–$\mathrm{Var}(\varepsilon) = E(\mathrm{Var}(\varepsilon \mid X)) + \mathrm{Var}(E(\varepsilon \mid X)) = E(\mathrm{Var}(\varepsilon \mid X)) = \sigma^2 I_n$
–$\mathrm{Var}(X'\varepsilon) = E(X'\varepsilon\varepsilon'X) = \sigma^2 E(X'X)$
Note: $E(X'\varepsilon \mid X) = X'E(\varepsilon \mid X) = 0$ follows from the Exogeneity Assumption.

8 Discussions
Exogeneity in Time Series Models: strict exogeneity requires the error to be mean-independent of past, current, and future regressors, which rules out, for example, lagged dependent variables among the regressors.
Random Samples: the sample (y, X) is a random sample if $\{y_i, x_i'\}$ is i.i.d. (independently and identically distributed).
Fixed Regressors: X is fixed or deterministic.

9 Discussions
Nonlinearity in Variables
$g(y_i) = \beta_1 f_1(x_{i1}) + \beta_2 f_2(x_{i2}) + \dots + \beta_K f_K(x_{iK}) + \varepsilon_i$ (i = 1,2,…,n)
–Linearity in Parameters and Model Errors: the model remains estimable by least squares after transforming the variables, as sketched below.
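As a quick illustration, here is a minimal numpy sketch of a model that is nonlinear in the variables but linear in the parameters. The functional forms ($f_1(x) = x$, $f_2(x) = x^2$), the sample size, and all parameter values are chosen for illustration only, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for a model nonlinear in the variables but linear in the
# parameters and the error: y = b1 + b2*x + b3*x^2 + e.
n = 150
x = rng.uniform(-2.0, 2.0, n)
y = 1.0 + 0.5 * x - 0.8 * x**2 + 0.3 * rng.standard_normal(n)

# Stack the transformed regressors f_k(x) in X; OLS then applies as usual.
X = np.column_stack([np.ones(n), x, x**2])
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)  # roughly [1.0, 0.5, -0.8]
```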

10 Method-of-Moments Estimator
From the implication of the strict exogeneity assumption, $E(x_{jk}\varepsilon_i) = 0$ for i = 1,2,…,n and k = 1,2,…,K. That is,
Moment Conditions: $E(X'\varepsilon) = 0$
Sample analogue: $X'(y - Xb) = 0$, or $X'y = X'Xb$
The method-of-moments estimator of $\beta$: $b = (X'X)^{-1}X'y$

11 Least Squares Estimator
OLS: $b = \arg\min_{\beta} \mathrm{SSR}(\beta)$
$\mathrm{SSR}(\beta) = \varepsilon'\varepsilon = (y - X\beta)'(y - X\beta) = \sum_{i=1}^{n} (y_i - x_i'\beta)^2$
$b = (X'X)^{-1}X'y$

12 The Algebra of Least Squares
Normal Equations: $X'Xb = X'y$
$b = (X'X)^{-1}X'y$

13 The Algebra of Least Squares
$b = (X'X)^{-1}X'y = \beta + (X'X)^{-1}X'\varepsilon$
$b = (X'X/n)^{-1}(X'y/n) = S_{xx}^{-1} s_{xy}$, where $S_{xx}$ is the sample average of $x_i x_i'$ and $s_{xy}$ is the sample average of $x_i y_i$.
OLS Residuals: $e = y - Xb$, with $X'e = 0$
$s^2 = \mathrm{SSR}/(n-K) = e'e/(n-K)$
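These formulas translate directly into a few lines of numpy. The following is a minimal sketch on simulated data; the sample size, true coefficients, and variable names are mine, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated sample: n observations, K = 3 regressors (including a constant).
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, K - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.standard_normal(n)

# b = (X'X)^{-1} X'y, computed via the normal equations X'X b = X'y.
b = np.linalg.solve(X.T @ X, X.T @ y)

# Equivalent sample-moment form: b = Sxx^{-1} sxy.
Sxx = X.T @ X / n
sxy = X.T @ y / n
assert np.allclose(b, np.linalg.solve(Sxx, sxy))

# Residuals are orthogonal to the regressors: X'e = 0.
e = y - X @ b
assert np.allclose(X.T @ e, 0, atol=1e-8)

# Unbiased error-variance estimator s^2 = e'e / (n - K).
s2 = e @ e / (n - K)
print(b, s2)
```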

14 The Algebra of Least Squares
Projection Matrix P and "Residual Maker" M:
–$P = X(X'X)^{-1}X'$, $M = I_n - P$
–$PX = X$, $MX = 0$
–$Py = Xb$, $My = y - Py = e$, $M\varepsilon = e$
$\mathrm{SSR} = e'e = \varepsilon'M\varepsilon$
h = the vector of diagonal elements of P; it can be used to check for influential observations: flag $h_i > K/n$, with $0 \le h_i \le 1$.
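A short numpy sketch verifying these properties numerically (again on simulated data of my choosing; forming the full n-by-n matrix P is fine for illustration but wasteful for large n):

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 50, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, K - 1))])
y = X @ np.array([1.0, 0.5, -1.0]) + rng.standard_normal(n)

# Projection matrix P and residual maker M.
P = X @ np.linalg.solve(X.T @ X, X.T)
M = np.eye(n) - P

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

assert np.allclose(P @ X, X)               # PX = X
assert np.allclose(M @ X, 0, atol=1e-10)   # MX = 0
assert np.allclose(P @ y, X @ b)           # Py = Xb (fitted values)
assert np.allclose(M @ y, e)               # My = e (residuals)

# Leverage: diagonal of P; the slide's cutoff h_i > K/n flags observations
# worth inspecting (other texts use 2K/n or 3K/n).
h = np.diag(P)
print(np.where(h > K / n)[0])
```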

15 The Algebra of Least Squares
Fitted Values: $\hat{y} = Xb = Py$
Uncentered $R^2$: $R_{uc}^2 = 1 - e'e/y'y = \hat{y}'\hat{y}/y'y$
Centered $R^2$: $R^2 = 1 - e'e / \sum_{i=1}^{n}(y_i - \bar{y})^2$ (for a regression that includes a constant)

16 Analysis of Variance
$\mathrm{TSS} = \sum_{i=1}^{n}(y_i - \bar{y})^2$, $\mathrm{ESS} = \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2$, $\mathrm{RSS} = e'e$ (or SSR)
TSS = ESS + RSS
Degrees of Freedom:
–TSS: n-1
–ESS: K-1
–RSS: n-K

17 Analysis of Variance
$R^2 = \mathrm{ESS}/\mathrm{TSS} = 1 - \mathrm{RSS}/\mathrm{TSS}$
Adjusted $R^2$: $\bar{R}^2 = 1 - \dfrac{\mathrm{RSS}/(n-K)}{\mathrm{TSS}/(n-1)} = 1 - \dfrac{n-1}{n-K}(1 - R^2)$
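A quick numpy check of the ANOVA decomposition and both $R^2$ measures, on the same kind of simulated data as above (all names and values illustrative; the decomposition requires a constant in X):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, K - 1))])
y = X @ np.array([2.0, 1.0, -0.5]) + rng.standard_normal(n)

b = np.linalg.solve(X.T @ X, X.T @ y)
yhat = X @ b
e = y - yhat

# Analysis of variance (valid because X contains a constant).
TSS = np.sum((y - y.mean()) ** 2)
ESS = np.sum((yhat - y.mean()) ** 2)
RSS = e @ e
assert np.isclose(TSS, ESS + RSS)

# Centered R^2 and the degrees-of-freedom-adjusted version.
R2 = 1 - RSS / TSS
R2_adj = 1 - (RSS / (n - K)) / (TSS / (n - 1))
print(R2, R2_adj)
```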

18 Discussions
Examples:
–Constant or Mean Regression: $y_i = \mu + \varepsilon_i$ (i = 1,2,…,n)
–Simple Regression: $y_i = \alpha + \beta x_i + \varepsilon_i$ (i = 1,2,…,n)

19 Partitioned Regression
$y = X\beta + \varepsilon = X_1\beta_1 + X_2\beta_2 + \varepsilon$
$X = [X_1\ X_2]$, $\beta = (\beta_1',\ \beta_2')'$
Normal Equations: $X'Xb = X'y$, or in partitioned form:
$X_1'X_1 b_1 + X_1'X_2 b_2 = X_1'y$
$X_2'X_1 b_1 + X_2'X_2 b_2 = X_2'y$

20 Partitioned Regression
$b_1 = (X_1'X_1)^{-1}X_1'y - (X_1'X_1)^{-1}(X_1'X_2)b_2 = (X_1'X_1)^{-1}X_1'(y - X_2 b_2)$
$b_2 = (X_2'M_1X_2)^{-1}X_2'M_1y = (X_2^{*\prime}X_2^{*})^{-1}X_2^{*\prime}y^{*}$
$M_1 = I - X_1(X_1'X_1)^{-1}X_1'$
$X_2^{*} = M_1X_2$ (the matrix of residuals obtained when each column of $X_2$ is regressed on $X_1$)
$y^{*} = M_1y$ (the residuals of y regressed on $X_1$)
$b_2$ is the LS estimator from the regression of $y^{*}$ on $X_2^{*}$.

21 Partitioned Regression
Frisch-Waugh Theorem: If $y = X_1\beta_1 + X_2\beta_2 + \varepsilon$, then $b_2$ is the LS estimator obtained when the residuals from a regression of y on $X_1$ alone are regressed on the set of residuals obtained when each column of $X_2$ is regressed on $X_1$.
If $X_1$ and $X_2$ are orthogonal ($X_1'X_2 = 0$), the two blocks can be estimated separately:
$b_1 = (X_1'X_1)^{-1}X_1'y$
$b_2 = (X_2'X_2)^{-1}X_2'y$
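The theorem is easy to verify numerically. Below is a minimal sketch on simulated data; the split of X into a two-column $X_1$ and a two-column $X_2$, and all coefficient values, are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
X1 = np.column_stack([np.ones(n), rng.standard_normal(n)])
X2 = rng.standard_normal((n, 2))
X = np.hstack([X1, X2])
y = X @ np.array([1.0, 0.5, 2.0, -1.0]) + rng.standard_normal(n)

# Full regression: b = (b1', b2')'.
b = np.linalg.solve(X.T @ X, X.T @ y)
b2_full = b[2:]

# Frisch-Waugh: residual maker for X1, then regress the residualized y
# on the residualized X2.
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
X2s = M1 @ X2   # X2* = M1 X2
ys = M1 @ y     # y*  = M1 y
b2_fwl = np.linalg.solve(X2s.T @ X2s, X2s.T @ ys)

assert np.allclose(b2_full, b2_fwl)  # the two routes agree
print(b2_fwl)
```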

22 Partitioned Regression
Applications of the Frisch-Waugh Theorem:
–Partial Regression Coefficients
–Trend Regression: $y_t = \alpha + \beta t + \varepsilon_t$

23 Model Interpretation
Marginal Effects (Ceteris Paribus Interpretation): in the fitted model $y = Xb + e$, the coefficient $b_k$ measures the change in the predicted y from a one-unit change in $x_k$, holding the other regressors fixed: $\partial \hat{y} / \partial x_k = b_k$.
Elasticity Interpretation: when y and $x_k$ enter in logarithms, $b_k = \partial \ln y / \partial \ln x_k$ is the elasticity of y with respect to $x_k$.
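To make the two interpretations concrete, here is a hedged sketch contrasting a levels fit (slope = marginal effect) with a log-log fit (slope = elasticity). The data-generating process and all values are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x = rng.uniform(1.0, 5.0, n)
y = np.exp(0.3 + 0.8 * np.log(x) + 0.05 * rng.standard_normal(n))

# Levels regression: the slope is the (constant) marginal effect dy/dx
# implied by that specification, ceteris paribus.
X_lin = np.column_stack([np.ones(n), x])
b_lin = np.linalg.solve(X_lin.T @ X_lin, X_lin.T @ y)

# Log-log regression: the slope is directly the elasticity
# d ln(y) / d ln(x) = (x/y) dy/dx.
X_log = np.column_stack([np.ones(n), np.log(x)])
b_log = np.linalg.solve(X_log.T @ X_log, X_log.T @ np.log(y))

print("marginal effect (levels fit):", b_lin[1])
print("elasticity (log-log fit):", b_log[1])  # roughly 0.8
```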

24 Example: U.S. Gasoline Market
–EXPG = Total U.S. gasoline expenditure
–PG = Price index for gasoline
–Y = Per capita disposable income
–Pnc = Price index for new cars
–Puc = Price index for used cars
–Ppt = Price index for public transportation
–Pd = Aggregate price index for consumer durables
–Pn = Aggregate price index for consumer nondurables
–Ps = Aggregate price index for consumer services
–Pop = U.S. total population in thousands

25 Example
$y = X\beta + \varepsilon$
Levels: y = G; X = [1 PG Y], where G = (EXPG/PG)/POP
Logs: y = ln(G); X = [1 ln(PG) ln(Y)]
Elasticity Interpretation: in the log specification, the coefficients on ln(PG) and ln(Y) are the price and income elasticities of per capita gasoline consumption.

26 Example (Continued)
$y = X_1\beta_1 + X_2\beta_2 + \varepsilon$
y = ln(G); $X_1$ = [1 YEAR]; $X_2$ = [ln(PG) ln(Y)], where G = (EXPG/PG)/POP
Frisch-Waugh Theorem: the coefficients on ln(PG) and ln(Y) equal those from regressing detrended ln(G) on detrended ln(PG) and ln(Y), where detrending means taking residuals from a regression on $X_1$ = [1 YEAR].

