2 9.1 Lecture #9: Studenmund (2006), Chapter 9
Objectives:
- The nature of autocorrelation
- The consequences of autocorrelation
- Testing for the existence of autocorrelation
- Correcting autocorrelation

3 9.2 Time Series Data
A time series process of economic variables: e.g., GDP, M1, the interest rate, the exchange rate, imports, exports, the inflation rate, etc.
A realization is an observed time series data set generated from a time series process.
Remark: Age is not a realization of a time series process, and a time trend is not a time series process either.

4 9.3 Decomposition of a time series
X_t = trend + cyclical/seasonal + random
[Figure: X_t plotted against time, decomposed into a trend component, a cyclical or seasonal component, and a random component.]

5 9.4 Static Models
C_t = β0 + β1·Yd_t + ε_t
The subscript "t" indicates time. The regression describes a contemporaneous relationship: how current consumption (C) is affected by current disposable income (Yd).
Example: the static Phillips curve model
inflat_t = β0 + β1·unemploy_t + ε_t
where inflat is the inflation rate and unemploy is the unemployment rate.
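As a concrete illustration, here is a minimal sketch of estimating such a static model by OLS in Python with statsmodels. The data are simulated, and the variable names and parameter values are illustrative assumptions, not the lecture's data set.

```python
# Minimal sketch: OLS on a static Phillips-curve-style model with
# simulated data (all numbers here are illustrative assumptions).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 100
unemploy = rng.uniform(3, 10, T)                      # unemployment rate (%)
inflat = 6.0 - 0.5 * unemploy + rng.normal(0, 1, T)   # contemporaneous relation

X = sm.add_constant(unemploy)     # adds the intercept column for beta_0
result = sm.OLS(inflat, X).fit()
print(result.params)              # [beta_0_hat, beta_1_hat], near 6.0 and -0.5
```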

6 9.5 Finite Distributed Lag Models
Forward distributed lag effect (of order q): an economic action at time t produces effects at times t, t+1, ..., t+q.
Effect at time t: C_t = α0 + β0·Yd_t + ε_t
Effect at time t+1: C_t+1 = α0 + β0·Yd_t+1 + β1·Yd_t + ε_t+1, i.e., C_t = α0 + β0·Yd_t + β1·Yd_t-1 + ε_t
...
Effect at time t+q: C_t+q = α0 + β0·Yd_t+q + ... + βq·Yd_t + ε_t+q, i.e., C_t = α0 + β0·Yd_t + ... + βq·Yd_t-q + ε_t
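A sketch of how the lagged regressors of a finite distributed lag model can be built with pandas and estimated by OLS; the lag order q = 2, the coefficients, and the simulated series are illustrative assumptions. The final line previews the long-run propensity defined two slides below.

```python
# Sketch: regressor matrix for C_t = a0 + b0*Yd_t + b1*Yd_{t-1} + b2*Yd_{t-2} + e_t.
# shift(i) creates the i-period lag; the first q rows (NaN) are dropped.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
T, q = 120, 2
yd = pd.Series(100 + rng.normal(0, 5, T).cumsum(), name="Yd")
c = 10 + 0.5 * yd + 0.3 * yd.shift(1) + 0.1 * yd.shift(2) + rng.normal(0, 2, T)

df = pd.concat([yd.shift(i).rename(f"Yd_lag{i}") for i in range(q + 1)], axis=1)
df["C"] = c
df = df.dropna()                  # we lose the first q observations

res = sm.OLS(df["C"], sm.add_constant(df.drop(columns="C"))).fit()
print(res.params)
print("LRP =", res.params[[f"Yd_lag{i}" for i in range(q + 1)]].sum())  # ~0.9
```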

7 9.6 Backward distributed lag effect: the effect at time t reflects economic actions at times t, t-1, t-2, ..., t-q.
Y_t = α0 + β0·Z_t + β1·Z_t-1 + β2·Z_t-2 + ... + βq·Z_t-q + ε_t
Initial state: z_t = z_t-1 = z_t-2 = c

8 9.7 Long-run propensity
C_t = α0 + β0·Yd_t + β1·Yd_t-1 + β2·Yd_t-2 + ε_t
Long-run propensity (LRP) = β0 + β1 + β2: the permanent unit change in C for a one-unit permanent (long-run) change in Yd.
Distributed lag model in general: C_t = α0 + β0·Yd_t + β1·Yd_t-1 + ... + βq·Yd_t-q + other factors + ε_t
LRP (or long-run multiplier) = β0 + β1 + ... + βq. For example, if β0 = 0.2, β1 = 0.15, and β2 = 0.05, then LRP = 0.4: a permanent one-unit increase in Yd eventually raises C by 0.4.

9 9.8 Time Trends
Linear time trend: Y_t = β0 + β1·t + ε_t (constant absolute change)
Exponential time trend: ln(Y_t) = β0 + β1·t + ε_t (constant growth rate)
Quadratic time trend: Y_t = β0 + β1·t + β2·t² + ε_t (accelerating change)
For more advanced time series analysis and modeling, see ECON 3670.
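A sketch of fitting the three trend specifications to a simulated series; the 3% growth rate and all other numbers are illustrative assumptions.

```python
# Sketch: fitting linear, exponential (log-linear), and quadratic trends
# to a simulated series y with roughly 3% growth per period.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 80
t = np.arange(1, T + 1, dtype=float)
y = 50 * np.exp(0.03 * t) * np.exp(rng.normal(0, 0.05, T))

linear = sm.OLS(y, sm.add_constant(t)).fit()                          # constant absolute change
loglin = sm.OLS(np.log(y), sm.add_constant(t)).fit()                  # constant growth rate
quad = sm.OLS(y, sm.add_constant(np.column_stack([t, t**2]))).fit()   # accelerating change

print("absolute change per period:", linear.params[1])
print("growth rate per period:    ", loglin.params[1])   # close to 0.03
print("quadratic terms:           ", quad.params[1:])
```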

10 9.9 Definition: first-order autocorrelation, AR(1)
In the model Y_t = β0 + β1·X_1t + ε_t, t = 1, ..., T, autocorrelation means Cov(ε_t, ε_s) = E(ε_t·ε_s) ≠ 0 for t ≠ s.
If ε_t = ρ·ε_t-1 + u_t, where -1 < ρ < 1 and u_t ~ iid(0, σ_u²) (white noise), this scheme is called first-order autocorrelation and is denoted AR(1).
Autoregressive: the regression of ε_t can be explained by itself lagged one period.
ρ (rho) is the first-order autocorrelation coefficient, or "coefficient of autocovariance".
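A small sketch of simulating the AR(1) error scheme; the value ρ = 0.8 is an illustrative assumption.

```python
# Sketch: simulating eps_t = rho*eps_{t-1} + u_t with u_t ~ iid N(0, sigma_u^2).
import numpy as np

def ar1_errors(T, rho, sigma_u=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, T)        # white-noise innovations
    eps = np.zeros(T)
    for t in range(1, T):
        eps[t] = rho * eps[t - 1] + u[t]   # the AR(1) scheme
    return eps

eps = ar1_errors(200, rho=0.8)
# the sample first-order autocorrelation should be near rho = 0.8
print(np.corrcoef(eps[1:], eps[:-1])[0, 1])
```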

11 9.10 Example of serial correlation
Consumption_t = β0 + β1·Income_t + error_t, for years 1990, ..., 2007. The error term represents other factors that affect consumption, with u_t ~ iid(0, σ_u²).
The current year's tax payment may be determined by the previous year's: TaxPay_2007 = ρ·TaxPay_2006 + u_2007, i.e., ε_t = ρ·ε_t-1 + u_t.

12 9.11 Higher-order autocorrelation
If ε_t = ρ1·ε_t-1 + u_t, it is AR(1), first-order autoregressive.
If ε_t = ρ1·ε_t-1 + ρ2·ε_t-2 + u_t, it is AR(2), second-order autoregressive.
If ε_t = ρ1·ε_t-1 + ρ2·ε_t-2 + ρ3·ε_t-3 + u_t, it is AR(3), third-order autoregressive.
...
If ε_t = ρ1·ε_t-1 + ρ2·ε_t-2 + ... + ρn·ε_t-n + u_t, it is AR(n), nth-order autoregressive.
For AR(1), -1 < ρ < 1: Cov(ε_t, ε_t-1) > 0 implies 0 < ρ < 1 (positive AR(1)); Cov(ε_t, ε_t-1) < 0 implies -1 < ρ < 0 (negative AR(1)).

13 9.12 [Figure: plots of the residuals ε̂ against time illustrating positive autocorrelation, including a cyclical pattern.] With positive autocorrelation, the current error term tends to have the same sign as the previous one.

14 9.13 [Figure: plots of the residuals ε̂ against time illustrating negative autocorrelation and no autocorrelation.] With negative autocorrelation, the current error term tends to have the opposite sign from the previous one. With no autocorrelation, the current error term appears random relative to the previous one.

15 9.14 The meaning of ρ
The error term ε_t at time t is a linear combination of the current and past disturbances.
0 < ρ < 1 or -1 < ρ < 0: the further a period is in the past, the smaller is the weight of its disturbance in determining ε_t.
ρ = 1: the past is of equal importance to the current.
ρ > 1: the past is more important than the current.

16 9.15 The consequences of serial correlation:
1. The estimated coefficients are still unbiased: E(β̂_k) = β_k.
2. The variance of β̂_k is no longer the smallest.
3. The standard error of the estimated coefficient, Se(β̂_k), becomes large.
Therefore, when AR(1) exists in the regression, the OLS estimator is no longer BLUE.

17 9.16 Example: the two-variable regression model Y_t = β0 + β1·X_1t + ε_t.
The OLS estimator of β1 is β̂1 = Σx_t·y_t / Σx_t².
If E(ε_t·ε_t-1) = 0, then Var(β̂1) = σ² / Σx_t².
If E(ε_t·ε_t-1) ≠ 0 and ε_t = ρ·ε_t-1 + u_t with -1 < ρ < 1, then
Var(β̂1)_AR1 = σ²/Σx_t² + (2σ²/Σx_t²)·[ρ·Σx_t·x_t+1/Σx_t² + ρ²·Σx_t·x_t+2/Σx_t² + ...]
If ρ = 0 (zero autocorrelation), then Var(β̂1)_AR1 = Var(β̂1); if ρ ≠ 0 (autocorrelation), then Var(β̂1)_AR1 > Var(β̂1). The AR(1) variance is not the smallest.
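A Monte Carlo sketch of this point: with the same persistent regressor, the sampling variance of the OLS slope is visibly larger under positive AR(1) errors than under white-noise errors. All parameter values (ρ = 0.8, T = 50, 5000 replications) are illustrative assumptions.

```python
# Sketch: compare the Monte Carlo variance of the OLS slope under
# iid errors vs AR(1) errors, holding the regressor x fixed.
import numpy as np

rng = np.random.default_rng(3)
T, reps, rho = 50, 5000, 0.8
x = rng.normal(0, 1, T).cumsum()     # a persistent regressor
xd = x - x.mean()

def slope(y):
    return (xd @ (y - y.mean())) / (xd @ xd)   # OLS slope estimator

b_iid, b_ar1 = [], []
for _ in range(reps):
    u = rng.normal(0, 1, T)
    eps = np.zeros(T)
    for t in range(1, T):
        eps[t] = rho * eps[t - 1] + u[t]
    b_iid.append(slope(1.0 + 2.0 * x + u))     # white-noise errors
    b_ar1.append(slope(1.0 + 2.0 * x + eps))   # AR(1) errors

print("Var(b1), iid errors :", np.var(b_iid))
print("Var(b1), AR(1) errors:", np.var(b_ar1))   # noticeably larger
```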

18 9.17 Autoregressive scheme:
ε_t = ρ·ε_t-1 + u_t; substituting ε_t-1 = ρ·ε_t-2 + u_t-1 gives ε_t = ρ²·ε_t-2 + ρ·u_t-1 + u_t, and substituting ε_t-2 = ρ·ε_t-3 + u_t-2 gives ε_t = ρ³·ε_t-3 + ρ²·u_t-2 + ρ·u_t-1 + u_t, and so on.
The autocovariances decline geometrically. Writing σ²* = E(ε_t·ε_t-1) = ρ·σ_u²/(1 - ρ²):
E(ε_t·ε_t-2) = ρ·σ²*
E(ε_t·ε_t-3) = ρ²·σ²*
...
E(ε_t·ε_t-k) = ρ^(k-1)·σ²*
Since ρ^(k-1) becomes smaller and smaller, the more periods in the past, the less the effect on the current period.

19 9.18 How do we detect autocorrelation? Compute the observed Durbin-Watson statistic, DW* (or d*).

20 9.19 At the 5% level of significance with k' = 1 and n = 24, the critical values are d_L = 1.27 and d_u = 1.45, where k' is the number of independent variables (excluding the intercept). If the observed DW* < d_L, reject H0.

21 9.20 Durbin-Watson autocorrelation test
H0: no autocorrelation (ρ = 0); H1: autocorrelation exists (ρ > 0, positive autocorrelation).
From the OLS regression result, obtain the observed d (DW*). Check the DW statistic table: at the 5% level of significance with k' = 1 and n = 24, d_L = 1.27 and d_u = 1.45. If DW* < d_L, it falls in the region where H0 is rejected.

22 9.21 Durbin-Watson test
Run OLS on Y = β0 + β1·X_2 + ... + βk·X_k + ε_t; obtain the residuals ε̂_t and the DW statistic (d). Assume an AR(1) process: ε_t = ρ·ε_t-1 + u_t, -1 < ρ < 1.
I. H0: ρ ≤ 0 (no positive autocorrelation); H1: ρ > 0 (positive autocorrelation).
Compare d* with the critical values d_L and d_u:
if d* < d_L ==> reject H0
if d* > d_u ==> do not reject H0
if d_L ≤ d* ≤ d_u ==> the test is inconclusive

23 9.22 Durbin-Watson test (cont.)
d = Σ_{t=2..T} (ε̂_t - ε̂_t-1)² / Σ_{t=1..T} ε̂_t² ≈ 2(1 - ρ̂)
Since -1 ≤ ρ̂ ≤ 1, this implies 0 ≤ d ≤ 4; and d ≈ 2(1 - ρ̂) ==> ρ̂ ≈ 1 - d/2.
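A sketch of computing d directly from residuals, checked against statsmodels' durbin_watson helper; the AR(1)-style residual series used here is simulated with an assumed ρ = 0.7.

```python
# Sketch: Durbin-Watson statistic d from residuals, and rho_hat ≈ 1 - d/2.
import numpy as np
from statsmodels.stats.stattools import durbin_watson

def dw(resid):
    resid = np.asarray(resid)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(4)
e = np.zeros(100)
for t in range(1, 100):
    e[t] = 0.7 * e[t - 1] + rng.normal()   # simulated AR(1) residuals

d = dw(e)
print("d =", d, " (statsmodels:", durbin_watson(e), ")")
print("rho_hat ≈ 1 - d/2 =", 1 - d / 2)    # near 0.7
```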

24 9.23 Durbin-Watson test (cont.)
II. H0: ρ ≥ 0 (no negative autocorrelation); H1: ρ < 0 (negative autocorrelation).
When d is greater than 2, we use (4 - d):
if (4 - d) < d_L, i.e., d > 4 - d_L ==> reject H0
if (4 - d) > d_u, i.e., 2 < d < 4 - d_u ==> do not reject H0
if d_L ≤ (4 - d) ≤ d_u, i.e., 4 - d_u ≤ d ≤ 4 - d_L ==> inconclusive

25 9.24 Durbin-Watson test (cont.)
III. H0: ρ = 0 (no autocorrelation); H1: ρ ≠ 0 (a two-tailed test for either positive or negative AR(1)).
If d < d_L or d > 4 - d_L ==> reject H0
If d_u < d < 4 - d_u ==> do not reject H0
If d_L ≤ d ≤ d_u or 4 - d_u ≤ d ≤ 4 - d_L ==> inconclusive

26 9.25 For example:
UM_t = β̂0 + β̂1·CAP_t + β̂2·CAP_t-1 + β̂3·T_t, with t-ratios (15.6), (2.0), (3.7), (10.3)
DW = 0.23, R² = 0.78, F = 78.9, SSR = 29.3, n = 68
(i) k' = 3, the number of independent variables (excluding the intercept); the observed DW = 0.23.
(ii) n = 68, at the 0.05 (or 0.01) significance level.
(iii) From the DW table, at the 5% level d_L = 1.525, and at the 1% level d_L = 1.372.
Since DW = 0.23 < d_L at either level, reject H0: positive autocorrelation exists.

27 9.26 Decision regions for the DW statistic (d), using the 1% and 5% critical values:
0 to d_L: reject H0 (ρ > 0, positive autocorrelation)
d_L to d_u: inconclusive
d_u to 4 - d_u: do not reject H0 (ρ = 0)
4 - d_u to 4 - d_L: inconclusive
4 - d_L to 4: reject H0 (ρ < 0, negative autocorrelation)
The observed DW = 0.23 falls in the rejection region for positive autocorrelation.

28 9.27 The assumptions underlying the d (DW) statistic:
1. An intercept term must be included.
2. The X's are nonstochastic.
3. It tests only AR(1): ε_t = ρ·ε_t-1 + u_t, where u_t ~ iid(0, σ_u²).
4. The regression does not include a lagged dependent variable, as in the autoregressive model Y_t = β0 + β1·X_t1 + β2·X_t2 + ... + βk·X_tk + α1·Y_t-1 + ε_t.
5. There are no missing observations (e.g., a Y, X data set with some years recorded as N.A. violates this assumption).

29 9.28 Lagrange Multiplier (LM) test, also called Durbin's m test or the Breusch-Godfrey (BG) test of higher-order autocorrelation.
Test procedures:
(1) Run OLS and obtain the residuals ε̂_t.
(2) Run ε̂_t against all the regressors in the model plus the additional regressors ε̂_t-1, ε̂_t-2, ..., ε̂_t-p:
ε̂_t = α0 + α1·X_t + ρ1·ε̂_t-1 + ρ2·ε̂_t-2 + ... + ρp·ε̂_t-p + u_t
Obtain the R² value from this regression.
(3) Compute the BG statistic: (n - p)·R².
(4) Compare the BG statistic to χ²_p (p is the order of autocorrelation being tested).
(5) If BG > χ²_p, reject H0: there is higher-order autocorrelation. If BG < χ²_p, do not reject H0: there is no higher-order autocorrelation.
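A sketch of the BG test using the acorr_breusch_godfrey helper in statsmodels, which takes a fitted OLS results object and the lag order p; the data-generating values below are illustrative assumptions.

```python
# Sketch: Breusch-Godfrey (LM) test for AR(p) autocorrelation.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(5)
T = 100
x = rng.normal(0, 1, T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.6 * e[t - 1] + rng.normal()   # AR(1) errors by construction
y = 1.0 + 2.0 * x + e

res = sm.OLS(y, sm.add_constant(x)).fit()
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=2)  # p = 2
print("BG LM statistic:", lm_stat, " p-value:", lm_pval)  # small p => reject H0
```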

30 9.29 Remedy:
1. First-difference transformation
Y_t = β0 + β1·X_t + ε_t
Y_t-1 = β0 + β1·X_t-1 + ε_t-1
Assume ρ = 1. Subtracting gives Y_t - Y_t-1 = β0 - β0 + β1·(X_t - X_t-1) + (ε_t - ε_t-1), i.e., ΔY_t = β1·ΔX_t + u_t (no intercept).
2. Add a trend (T)
Y_t = β0 + β1·X_t + β2·T + ε_t
Y_t-1 = β0 + β1·X_t-1 + β2·(T - 1) + ε_t-1
Subtracting: (Y_t - Y_t-1) = (β0 - β0) + β1·(X_t - X_t-1) + β2·[T - (T - 1)] + (ε_t - ε_t-1), i.e., ΔY_t = β2* + β1·ΔX_t + ε'_t.
If β̂2* > 0, there is an upward trend in Y.

31 9.30 3. Cochrane-Orcutt two-step procedure (CORC), a Generalized Least Squares (GLS) method
(1) Run OLS on Y_t = β0 + β1·X_t + ε_t and obtain the residuals ε̂_t.
(2) Run OLS on ε̂_t = ρ·ε̂_t-1 + u_t and obtain ρ̂, where u ~ (0, σ_u²).
(3) Use ρ̂ to transform the variables: subtracting ρ̂·Y_t-1 = ρ̂·β0 + ρ̂·β1·X_t-1 + ρ̂·ε_t-1 from the original equation gives
(Y_t - ρ̂·Y_t-1) = β0·(1 - ρ̂) + β1·(X_t - ρ̂·X_t-1) + (ε_t - ρ̂·ε_t-1),
so define Y_t* = Y_t - ρ̂·Y_t-1 and X_t* = X_t - ρ̂·X_t-1.
(4) Run OLS on Y_t* = β0* + β1*·X_t* + u_t.
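A plain-numpy sketch of the two-step procedure; the function name corc_two_step and the data layout are illustrative assumptions, not the textbook's notation.

```python
# Sketch: Cochrane-Orcutt two-step estimation for y_t = b0 + b1*x_t + e_t.
import numpy as np

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def corc_two_step(y, x):
    T = len(y)
    X = np.column_stack([np.ones(T), x])
    e = y - X @ ols(y, X)                        # (1) OLS residuals
    rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])   # (2) e_t on e_{t-1}, no intercept
    y_star = y[1:] - rho * y[:-1]                # (3) quasi-differenced data
    x_star = x[1:] - rho * x[:-1]
    Xs = np.column_stack([np.ones(T - 1), x_star])
    b = ols(y_star, Xs)                          # (4) OLS on the transformed model
    b0 = b[0] / (1.0 - rho)                      # recover beta_0 from beta_0*(1-rho)
    return b0, b[1], rho

# usage: b0, b1, rho = corc_two_step(y, x), with y, x equal-length 1-D arrays
```

Note that the quasi-differencing drops the first observation; the Prais-Winsten transformation below restores it.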

32 9.31 4. Cochrane-Orcutt iterative procedure
(5) If the DW test shows that autocorrelation still exists, iterate the procedure from step (4): obtain the residuals ε̂_t* from the transformed regression.
(6) Run OLS on ε̂_t* = ρ·ε̂_t-1* + u_t' and obtain ρ̂', the second-round estimate of ρ (equivalently, ρ̂' ≈ 1 - DW2/2).
(7) Use ρ̂' to transform the variables again: Y_t** = Y_t - ρ̂'·Y_t-1 and X_t** = X_t - ρ̂'·X_t-1, for Y_t = β0 + β1·X_t + ε_t.

33 9.32 Cochrane-Orcutt iterative procedure (cont.)
(8) Run OLS on Y_t** = β0** + β1**·X_t** + ε_t**, where
(Y_t - ρ̂'·Y_t-1) = β0·(1 - ρ̂') + β1·(X_t - ρ̂'·X_t-1) + (ε_t - ρ̂'·ε_t-1).
(9) Check the DW statistic (DW3); if autocorrelation still exists, go into a third round of the procedure, and so on, until the successive estimates of ρ differ very little (|ρ̂' - ρ̂| < 0.01).
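A sketch of the iterative version, stopping when successive ρ̂ estimates differ by less than 0.01 as in step (9); the function name and defaults are illustrative assumptions.

```python
# Sketch: iterated Cochrane-Orcutt, re-estimating rho until convergence.
import numpy as np

def corc_iterative(y, x, tol=0.01, max_iter=50):
    T = len(y)
    rho_old = 0.0                                  # first pass = plain OLS
    for _ in range(max_iter):
        ys = y[1:] - rho_old * y[:-1]              # quasi-differenced data
        xs = x[1:] - rho_old * x[:-1]
        X = np.column_stack([np.ones(T - 1), xs])
        b = np.linalg.lstsq(X, ys, rcond=None)[0]
        b0 = b[0] / (1.0 - rho_old)                # undo the (1 - rho) scaling
        e = y - b0 - b[1] * x                      # residuals of the original model
        rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1]) # re-estimate rho from residuals
        if abs(rho - rho_old) < tol:               # slide's rule: difference < 0.01
            return b0, b[1], rho
        rho_old = rho
    return b0, b[1], rho
```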

34 9.33 Example: Studenmund (2006), Exercise 14 and Table 9.1.
(1) The regression shows a low DW statistic. Obtain the residuals (after you run a regression, EViews immediately stores the residuals in the built-in resid series).

35 9.34 (2) Give the residual series a new name, run a regression of the current residual on the lagged residual, and obtain the estimated ρ ("rho").

36 9.35 (3) Transform the variables into Y* and X*. New series are created, but the first observation of each is lost.

37 9.36 (4) Run the transformed regression and obtain the improved estimation result.

38 9.37 The Cochrane-Orcutt iterative procedure in EViews: adding an AR(1) error term to the equation specification is the EViews command that runs the iterative procedure, steps (5) through (9).

39 9.38 The result of the iterative procedure: the DW statistic is improved, the estimated ρ is reported, and each variable has been transformed.

40 9.39 5. Prais-Winsten transformation, a Generalized Least Squares (GLS) method
Y_t = β0 + β1·X_t + ε_t, t = 1, ..., T (1)
Assume AR(1): ε_t = ρ·ε_t-1 + u_t, -1 < ρ < 1, so ρ·Y_t-1 = ρ·β0 + ρ·β1·X_t-1 + ρ·ε_t-1 (2)
(1) - (2) => (Y_t - ρ·Y_t-1) = β0·(1 - ρ) + β1·(X_t - ρ·X_t-1) + (ε_t - ρ·ε_t-1)
GLS => Y_t* = β0* + β1*·X_t* + u_t

41 9.40 To avoid the loss of the first observation, the first observations of Y* and X* should be transformed as:
Y_1* = √(1 - ρ̂²)·Y_1 and X_1* = √(1 - ρ̂²)·X_1 (restoring the first observation)
but Y_2* = Y_2 - ρ̂·Y_1; X_2* = X_2 - ρ̂·X_1
Y_3* = Y_3 - ρ̂·Y_2; X_3* = X_3 - ρ̂·X_2
...
Y_t* = Y_t - ρ̂·Y_t-1; X_t* = X_t - ρ̂·X_t-1
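A sketch of the Prais-Winsten transformation; note that with an intercept, the constant column must be transformed in the same way.

```python
# Sketch: Prais-Winsten transform. Unlike Cochrane-Orcutt, observation 1
# is kept by scaling it with sqrt(1 - rho^2).
import numpy as np

def prais_winsten_transform(z, rho):
    z = np.asarray(z, dtype=float)
    z_star = np.empty_like(z)
    z_star[0] = np.sqrt(1.0 - rho**2) * z[0]   # restored first observation
    z_star[1:] = z[1:] - rho * z[:-1]          # usual quasi-differencing
    return z_star

# the intercept column is transformed the same way:
# const_star = prais_winsten_transform(np.ones(T), rho)
```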

42 Durbin’s Two-step method : Since (Y t -  Y t-1 ) =  0 (1 -  ) +  1 (X t -  X t-1 ) + u t => Y t =  0 * +  1 X t -  1 X t-1 +  Y t-1 + u t Y t =  0 +  1 X t +  t III. Run OLS on model : Y t * =  0 +  1 X t * +  ’ t and  1 =  1 ^ ^ where  0 =  0 (1 -  ) ^ ^ I. Run OLS => this specification Y t =  0 * +  1 * X t -  2 * X t-1 +  3 * Y t-1 + u t Obtain  3 * as an estimated  (RHO) ^ ^ II. Transforming the variables : Y t * = Y t -  3 * Y t-1 as Y t * = Y t -  Y t-1 and X t * = X t -  3 * X t-1 as X t * = X t -  X t-1 ^ ^ ^ ^

43 9.42 In EViews: include the lagged term of Y in the regression and obtain the estimated ρ ("rho") from its coefficient.

44 9.43 Lagged dependent variable and autocorrelation
Limitation of the Durbin-Watson test: if the regression includes a lagged dependent variable,
Y_t = β0 + β1·X_1t + β2·X_2t + ... + βk·X_kt + α1·Y_t-1 + ε_t,
the DW statistic will often be close to 2, i.e., DW no longer converges to 2(1 - ρ̂), so DW is not reliable.
Durbin-h test: compute h* = ρ̂·√(n / (1 - n·Var(α̂1))).
Compare |h*| to Z_c, where Z_c ~ N(0, 1) (the standard normal distribution). If |h*| > Z_c, reject H0: ρ = 0 (no autocorrelation).

45 9.44 Durbin-h test: compute h* = ρ̂·√(n / (1 - n·Var(α̂1))). In the example, h* > Z_c; therefore reject H0: ρ = 0 (no autocorrelation).
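A sketch of computing Durbin's h and its normal p-value; the inputs (ρ̂ = 0.3, n = 60, Var(α̂1) = 0.004) are illustrative assumptions, and h is undefined when n·Var(α̂1) ≥ 1.

```python
# Sketch: Durbin's h statistic, compared against the standard normal.
import numpy as np
from scipy import stats

def durbin_h(rho_hat, n, var_lag):
    # var_lag is the estimated variance of the coefficient on Y_{t-1};
    # rho_hat can be taken from rho_hat ≈ 1 - d/2.
    inside = n / (1.0 - n * var_lag)
    if inside <= 0:
        raise ValueError("h is undefined: n*Var(alpha_1) >= 1")
    return rho_hat * np.sqrt(inside)

h = durbin_h(rho_hat=0.3, n=60, var_lag=0.004)   # illustrative numbers
p = 2 * (1 - stats.norm.cdf(abs(h)))             # two-sided p-value vs N(0,1)
print("h* =", h, " p-value =", p)                # |h*| > 1.96 => reject H0 at 5%
```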

