Lecture #9: Autocorrelation (Serial Correlation)


1 Lecture #9: Autocorrelation (Serial Correlation)
Studenmund (2006), Chapter 9. Objectives: the nature of autocorrelation; the consequences of autocorrelation; testing for the existence of autocorrelation; correcting autocorrelation.

2 Time Series Data
A time series process of economic variables, e.g., GDP, M1, the interest rate, the exchange rate, imports, exports, the inflation rate, etc. A realization is an observed time series data set generated from a time series process. Remark: age is not a realization of a time series process, and a deterministic time trend is not a time series process either.

3 Decomposition of a time series
Xt = trend + cyclical/seasonal + random
[Figure: a series Xt plotted against time, decomposed into a trend component, a cyclical or seasonal component, and a random component.]
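To make the decomposition concrete, here is a minimal Python sketch (not from the lecture) that simulates a series from made-up trend, seasonal, and random components; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)                            # e.g., 10 years of monthly data
trend = 0.5 * t                               # trend component
seasonal = 3.0 * np.sin(2 * np.pi * t / 12)   # cyclical/seasonal component, period 12
random = rng.normal(0.0, 1.0, size=t.size)    # random (irregular) component
x = trend + seasonal + random                 # observed series Xt
```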

4 Static Models
Ct = β0 + β1 Ydt + εt. The subscript "t" indicates time. The regression is a contemporaneous relationship, i.e., how is current consumption (C) affected by current disposable income (Yd)?
Example: the static Phillips curve model, inflatt = β0 + β1 unemployt + εt, where inflat is the inflation rate and unemploy is the unemployment rate.
A contemporaneous relation is (a) an equilibrium relation with (b) no dynamic effects, i.e., an immediate effect only.

5 Finite Distributed Lag Models
An economic action (a change in Yd) at time t has an effect at time t:
Ct = α0 + β0 Ydt + εt
an effect at time t+1:
Ct+1 = α0 + β0 Ydt+1 + β1 Ydt + εt+1, i.e., Ct = α0 + β0 Ydt + β1 Ydt-1 + εt
an effect at time t+2, ..., and an effect at time t+q — the forward distributed lag effect (of order q):
Ct+q = α0 + β0 Ydt+q + ... + βq Ydt + εt+q, i.e., Ct = α0 + β0 Ydt + ... + βq Ydt-q + εt

6 Backward Distributed Lag Effect
An economic action at time t reflects effects of Z at times t-1, t-2, t-3, ..., back to t-q:
Yt = α0 + β0 Zt + β1 Zt-1 + β2 Zt-2 + ... + βq Zt-q + εt
Initial state: zt = zt-1 = zt-2 = c

7 C = 0 + 0Ydt + 1Ydt-1 + 2Ydt-2 + t
Long-run propensity (LRP) = (0 + 1 + 2) Permanent unit change in C for 1 unit permanent (long-run) change in Yd. Distributed Lag model in general: Ct = 0 + 0Ydt + 1Ydt-1 +…+ qYdt-q + other factors + t LRP (or long run multiplier) = 0 +  q
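As a hedged illustration of how the LRP is computed in practice, the sketch below simulates a distributed-lag relationship with q = 2 (the true lag coefficients 0.4, 0.2, 0.1 are invented for the example), fits it by OLS, and sums the lag coefficients; it assumes the statsmodels package is available.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
yd = rng.normal(100, 10, 200)                 # hypothetical disposable income
c = 5 + 0.4 * yd                              # contemporaneous effect beta0 = 0.4
c[1:] += 0.2 * yd[:-1]                        # lag-1 effect beta1 = 0.2
c[2:] += 0.1 * yd[:-2]                        # lag-2 effect beta2 = 0.1
c += rng.normal(0, 1, 200)

# Regressor matrix [Ydt, Ydt-1, Ydt-2]; the first two observations are lost
X = np.column_stack([yd[2:], yd[1:-1], yd[:-2]])
res = sm.OLS(c[2:], sm.add_constant(X)).fit()
lrp = res.params[1:].sum()                    # LRP = beta0 + beta1 + beta2
print(f"estimated LRP ~ {lrp:.2f}")           # should be near 0.4 + 0.2 + 0.1 = 0.7
```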

8 Time Trends
Linear time trend: Yt = α0 + α1 t + εt (constant absolute change)
Exponential time trend: ln(Yt) = α0 + α1 t + εt (constant growth rate)
Quadratic time trend: Yt = α0 + α1 t + α2 t² + εt (accelerating change)
For more advanced time series analysis and modeling, you are welcome to take ECON 3670.

9 Definition: First-order Autocorrelation, AR(1)
If Cov(εt, εs) = E(εt εs) ≠ 0 for t ≠ s in the model Yt = β0 + β1 X1t + εt, t = 1, ..., T, and if
εt = ρ εt-1 + ut, where -1 < ρ < 1 (ρ: "rho") and ut ~ iid(0, σu²) (white noise),
this scheme is called first-order autocorrelation and is denoted AR(1).
Autoregressive: εt can be explained by a regression on itself lagged one period.
ρ (rho) is the first-order autocorrelation coefficient, or "coefficient of autocovariance".

10 Example of serial correlation
Consumptiont = β0 + β1 Incomet + errort
The error term represents other factors that affect consumption (u1990, ..., u2002, u2003, ..., u2007, one per year).
The current year's tax payment may be determined by the previous year's:
TaxPay2007 = ρ TaxPay2006 + u2007, i.e., εt = ρ εt-1 + ut, ut ~ iid(0, σu²)

11 If t = 1 t-1 + ut it is AR(1), first-order autoregressive If t = 1 t-1 + 2 t-2 + ut it is AR(2), second-order autoregressive High order autocorrelation If t = 1 t-1 + 2 t-2 + 3 t-3 + ut it is AR(2), third-order autoregressive If t = 1 t-1 + 2 t-2 + …… + n t-n + ut it is AR(n), nth-order autoregressive ………………………………………………. Autocorrelation AR(1) : Cov (t  t-1) > => 0 <  < positive AR(1) Cov (t t-1) < => -1 <  < negative AR(1) -1 <  < 1

12 time i ^ x Positive autocorrelation time i ^ x Positive autocorrelation time i ^ Cyclical: Positive autocorrelation x The current error term tends to have the same sign as the previous one.

13 Negative and no autocorrelation
[Figure: residuals ε̂t against time under negative autocorrelation.] The current error term tends to have the opposite sign from the previous one.
[Figure: residuals ε̂t against time under no autocorrelation.] The current error term tends to appear randomly, unrelated to the previous one.

14 The meaning of ρ
The error term εt at time t is a linear combination of the current and past disturbances.
0 < ρ < 1 or -1 < ρ < 0: the further a disturbance lies in the past, the smaller its weight in determining εt.
|ρ| = 1: the past is of equal importance to the present.
|ρ| > 1: the past is more important than the present.

15 The consequences of serial correlation
1. The estimated coefficients are still unbiased: E(β̂k) = βk.
2. The variances of the β̂k are no longer the smallest.
3. The standard errors of the estimated coefficients, se(β̂k), become biased, so hypothesis tests are unreliable.
Therefore, when AR(1) is present in the regression, the OLS estimator is no longer "BLUE".

16 The AR(1) variance is not the smallest
Example: the two-variable regression model Yt = β0 + β1 X1t + εt.
The OLS estimator of β1 (in deviation form) is β̂1 = Σ xt yt / Σ xt².
If E(εt εt-1) = 0, then Var(β̂1) = σ² / Σ xt².
If E(εt εt-1) ≠ 0 and εt = ρ εt-1 + ut with -1 < ρ < 1, then
Var(β̂1)AR1 = (σ² / Σ xt²) [1 + 2ρ (Σ xt xt+1 / Σ xt²) + 2ρ² (Σ xt xt+2 / Σ xt²) + ...]
If ρ = 0 (zero autocorrelation), then Var(β̂1)AR1 = Var(β̂1).
If ρ ≠ 0 (autocorrelation), then Var(β̂1)AR1 > Var(β̂1): the AR(1) variance is not the smallest.

17 Autoregressive scheme
εt = ρ εt-1 + ut
  => εt = ρ(ρ εt-2 + ut-1) + ut = ρ² εt-2 + ρ ut-1 + ut   (substituting εt-1 = ρ εt-2 + ut-1)
  => εt = ρ³ εt-3 + ρ² ut-2 + ρ ut-1 + ut                 (substituting εt-2 = ρ εt-3 + ut-2)
With Var(εt) = σu²/(1 - ρ²), the autocovariances are
E(εt εt-1) = ρ σu²/(1 - ρ²), E(εt εt-2) = ρ² σu²/(1 - ρ²), E(εt εt-3) = ρ³ σu²/(1 - ρ²), ..., E(εt εt-k) = ρᵏ σu²/(1 - ρ²).
This means the more periods in the past, the less the effect on the current period: ρᵏ becomes smaller and smaller.
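The decay E(εt εt-k) = ρᵏ σu²/(1 - ρ²) can be checked by simulation; the sketch below re-simulates the AR(1) scheme with a long sample and an illustrative ρ = 0.7.

```python
import numpy as np

rng = np.random.default_rng(3)
T, rho = 100_000, 0.7
u = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + u[t]          # AR(1) errors, sigma_u = 1

var_theory = 1.0 / (1 - rho**2)               # Var(eps_t) = sigma_u^2/(1 - rho^2)
for k in range(1, 5):
    emp = np.cov(eps[k:], eps[:-k])[0, 1]     # sample Cov(eps_t, eps_{t-k})
    print(k, round(emp, 3), round(rho**k * var_theory, 3))  # empirical vs theoretical
```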

18 How to detect autocorrelation?
Use the Durbin-Watson statistic, computed from the OLS residuals:
DW* (or d*) = Σt=2..T (ε̂t - ε̂t-1)² / Σt=1..T ε̂t²
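A sketch computing d from a vector of OLS residuals; for comparison it also calls durbin_watson from statsmodels, which implements the same formula. The residual values here are hypothetical.

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

def dw(e):
    """d = sum_{t=2}^{T} (e_t - e_{t-1})^2 / sum_{t=1}^{T} e_t^2"""
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

e = np.array([0.5, 0.7, 0.6, -0.2, -0.4, -0.1, 0.3])  # hypothetical residuals
print(dw(e), durbin_watson(e))                         # the two values agree
```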

19 Example: DW critical values
At the 5% level of significance, with k = 1 (k is the number of independent variables, excluding the intercept) and n = 24:
dL = 1.27, du = 1.45
DW* = 0.9107 < dL, so reject H0.

20 Durbin-Watson Autocorrelation test
H0: no autocorrelation, ρ = 0. H1: autocorrelation exists, ρ > 0 (positive autocorrelation).
From the OLS regression result: d (or DW*) = 0.9107.
Check the DW statistic table (at the 5% level of significance, k' = 1, n = 24): dL = 1.27, du = 1.45.
[Figure: the d scale from 0 to 4 with dL = 1.27, du = 1.45, and 2 marked; DW* = 0.9107 falls in the "reject H0" region to the left of dL.]

21 Durbin-Watson test
OLS: Yt = β0 + β1 X1t + ... + βk Xkt + εt; obtain ε̂t and the DW statistic, d*.
Assume an AR(1) process: εt = ρ εt-1 + ut, -1 < ρ < 1.
I. H0: ρ ≤ 0, no positive autocorrelation; H1: ρ > 0, positive autocorrelation.
Compare d* with the critical values dL and du (the decision rule is sketched in code below):
if d* < dL => reject H0
if d* > du => do not reject H0
if dL ≤ d* ≤ du => the test is inconclusive
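The one-sided decision rule can be written as a small helper (a sketch; dL and du must still be looked up in a DW table for the given k and n):

```python
def dw_positive_test(d, dL, du):
    """One-sided DW decision rule for H1: rho > 0."""
    if d < dL:
        return "reject H0: positive autocorrelation"
    if d > du:
        return "do not reject H0"
    return "inconclusive"

print(dw_positive_test(0.9107, dL=1.27, du=1.45))  # slide example: reject H0
```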

22 Durbin-Watson test (cont.)
d = Σ(ε̂t - ε̂t-1)² / Σ ε̂t² ≈ 2(1 - ρ̂), so d/2 ≈ 1 - ρ̂ and ρ̂ ≈ 1 - d/2.
Since -1 ≤ ρ̂ ≤ 1, this implies 0 ≤ d ≤ 4.
[Figure: the d scale from 0 to 4, marking dL = 1.27, du = 1.45, 2, 4 - du = 2.55, and 4 - dL = 2.73.]

23 Durbin-Watson test (cont.)
II. H0: ρ ≥ 0, no negative autocorrelation; H1: ρ < 0, negative autocorrelation.
We use (4 - d) (when d is greater than 2):
if (4 - d) < dL, i.e., d > 4 - dL => reject H0
if dL ≤ (4 - d) ≤ du, i.e., 4 - du ≤ d ≤ 4 - dL => inconclusive
if (4 - d) > du, i.e., 2 < d < 4 - du => do not reject H0
[Figure: the d scale marking dL = 1.27, du = 1.45, 2, 4 - du = 2.55, and 4 - dL = 2.73.]

24 Durbin-Watson test (cont.)
III. H0: ρ = 0, no autocorrelation; H1: ρ ≠ 0, a two-tailed test for either positive or negative AR(1).
If d < dL or d > 4 - dL => reject H0
If du < d < 4 - du => do not reject H0
If dL ≤ d ≤ du or 4 - du ≤ d ≤ 4 - dL => inconclusive

25 For example:
ÛMt = β̂0 + β̂1 CAPt + β̂2 CAPt-1 + β̂3 Tt
(15.6) (2.0) (3.7) (10.3)
R² = ..., F = ..., ρ̂ = ..., SSR = ..., DW = ..., n = 68
(i) k = 3 (the number of independent variables, excluding the intercept)
(ii) n = 68, significance level 0.05
(iii) From the DW table: dL = ..., du = ...
Since the observed DW < dL: reject H0; positive autocorrelation exists.

26 Summary of the DW decision regions
H0: ρ = 0 vs. H1: ρ > 0 (positive autocorrelation): reject H0 for d < dL; inconclusive for dL ≤ d ≤ du; do not reject for d > du.
H0: ρ = 0 vs. H1: ρ < 0 (negative autocorrelation): do not reject for d < 4 - du; inconclusive for 4 - du ≤ d ≤ 4 - dL; reject H0 for d > 4 - dL.
[Figure: the d scale from 0 to 4 with dL, du, 2, 4 - du, and 4 - dL marked, with 1% and 5% critical values indicated.]

27 The assumptions underlying the d (DW) statistic
1. An intercept term must be included.
2. The X's are nonstochastic.
3. It tests only AR(1): εt = ρ εt-1 + ut, where ut ~ iid(0, σu²).
4. The model must not include a lagged dependent variable, as in Yt = β0 + β1 Xt1 + β2 Xt2 + ... + βk Xtk + λ Yt-1 + εt (an autoregressive model).
5. There are no missing observations.
[Figure: a data table for Y and X with some observations marked N.A., illustrating missing observations.]

28 Lagrange Multiplier (LM) test, also called Durbin's m test
or the Breusch-Godfrey (BG) test of higher-order autocorrelation.
Test procedure:
(1) Run OLS and obtain the residuals ε̂t.
(2) Run ε̂t against all the regressors in the model plus the additional regressors ε̂t-1, ε̂t-2, ..., ε̂t-p:
ε̂t = α0 + α1 Xt + ρ1 ε̂t-1 + ρ2 ε̂t-2 + ... + ρp ε̂t-p + ut
Obtain the R² value from this regression.
(3) Compute the BG statistic: (n - p)R².
(4) Compare the BG statistic to χ²p (p, the order of autocorrelation tested, is the degrees of freedom).
(5) If BG > χ²p, reject H0: there is higher-order autocorrelation. If BG < χ²p, do not reject H0: there is no higher-order autocorrelation.
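A hedged sketch of the BG test using statsmodels' acorr_breusch_godfrey on simulated data with AR(1) errors baked in; the data-generating values (ρ = 0.6, coefficients 1 and 2) are invented for the example.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(4)
x = rng.normal(size=200)
u = rng.normal(size=200)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.6 * e[t - 1] + u[t]              # AR(1) errors baked into the DGP
y = 1 + 2 * x + e

res = sm.OLS(y, sm.add_constant(x)).fit()
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=2)  # p = 2
print(lm_stat, lm_pval)   # a small p-value => reject H0 of no higher-order autocorrelation
```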

29 Remedies
1. First-difference transformation. Given Yt = β0 + β1 Xt + εt and Yt-1 = β0 + β1 Xt-1 + εt-1, assume ρ = 1:
Yt - Yt-1 = β0 - β0 + β1(Xt - Xt-1) + (εt - εt-1)
=> ΔYt = β1 ΔXt + νt   (no intercept)
2. Add a trend (T). Given Yt = β0 + β1 Xt + β2 T + εt and Yt-1 = β0 + β1 Xt-1 + β2(T - 1) + εt-1:
(Yt - Yt-1) = (β0 - β0) + β1(Xt - Xt-1) + β2[T - (T - 1)] + (εt - εt-1)
=> ΔYt = β2 + β1 ΔXt + ε't
If β̂2 > 0, there is an upward trend in Y.

30 3. Cochrane-Orcutt two-step procedure (CORC)
A Generalized Least Squares (GLS) method.
(1) Run OLS on Yt = β0 + β1 Xt + εt and obtain the residuals ε̂t.
(2) Run OLS on ε̂t = ρ ε̂t-1 + ut, where ut ~ (0, σu²), and obtain ρ̂.
(3) Use ρ̂ to transform the variables. Subtracting ρYt-1 = ρβ0 + ρβ1 Xt-1 + ρεt-1 from Yt = β0 + β1 Xt + εt gives
(Yt - ρ̂Yt-1) = β0(1 - ρ̂) + β1(Xt - ρ̂Xt-1) + (εt - ρ̂εt-1)
so define Yt* = Yt - ρ̂Yt-1 and Xt* = Xt - ρ̂Xt-1.
(4) Run OLS on Yt* = β0* + β1* Xt* + ut.
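A plain-numpy sketch of the four CORC steps for the two-variable model; the simulated y and x (with AR(1) errors, ρ = 0.6) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=200)
u = rng.normal(size=200)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.6 * e[t - 1] + u[t]              # AR(1) errors, rho = 0.6
y = 1 + 2 * x + e                             # hypothetical data

def ols(y, x):
    """OLS with intercept; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(x)), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return b, y - X @ b

b, ehat = ols(y, x)                                      # (1) OLS, obtain residuals
rho = (ehat[1:] @ ehat[:-1]) / (ehat[:-1] @ ehat[:-1])   # (2) regress e_t on e_{t-1}
y_star = y[1:] - rho * y[:-1]                            # (3) quasi-difference Y and X
x_star = x[1:] - rho * x[:-1]
b_star, _ = ols(y_star, x_star)                          # (4) OLS on transformed model
beta0_hat = b_star[0] / (1 - rho)                        # recover beta0 from beta0*(1 - rho)
print(rho, beta0_hat, b_star[1])
```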

31 4. Cochrane-Orcutt iterative procedure
(5) If the DW test shows that autocorrelation still exists, iterate the procedure from step (4) and obtain the new residuals ε̂t*.
(6) Run OLS on ε̂t* = ρ ε̂t-1* + ut' and obtain ρ̂2, the second-round estimate of ρ (equivalently, ρ̂2 ≈ 1 - DW2/2).
(7) Use ρ̂2 to transform the variables again: Yt** = Yt - ρ̂2 Yt-1 and Xt** = Xt - ρ̂2 Xt-1.

32 Cochrane-Orcutt iterative procedure (cont.)
(8) Run OLS on Yt** = β0** + β1** Xt** + εt**, which is
(Yt - ρ̂2 Yt-1) = β0(1 - ρ̂2) + β1(Xt - ρ̂2 Xt-1) + (εt - ρ̂2 εt-1)
(9) Check the DW3 statistic; if autocorrelation still exists, go into a third round, and so on, until successive estimates of ρ differ only a little (|ρ̂new - ρ̂old| < 0.01).
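Continuing the two-step sketch above (same ols helper and simulated y, x), the loop below iterates until successive ρ̂ estimates differ by less than 0.01, the slide's stopping rule.

```python
import numpy as np

b0, b1 = ols(y, x)[0]                         # start from plain OLS estimates
rho, rho_old = 0.0, np.inf
while abs(rho - rho_old) >= 0.01:             # the slide's convergence rule
    ehat = y - b0 - b1 * x                    # residuals of the original equation
    rho_old = rho
    rho = (ehat[1:] @ ehat[:-1]) / (ehat[:-1] @ ehat[:-1])
    b_star, _ = ols(y[1:] - rho * y[:-1], x[1:] - rho * x[:-1])
    b0, b1 = b_star[0] / (1 - rho), b_star[1] # recover beta0 each round
print(rho, b0, b1)
```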

33 Example: Studenmund (2006), Exercise 14 and Table 9.1, pp. 342-344
(1) A low DW statistic is observed. Obtain the residuals. [Screenshot: usually, after you run a regression, the residuals are stored immediately in the residual series shown in the workfile.]

34 (2) Give a new name to the residual series, run a regression of the current residual on the lagged residual, and obtain the estimated ρ̂ ("rho").

35 (3) Transform to Y* and X*. New series are created, but the first observation of each is lost.

36 (4) Run the transformed regression and obtain the estimated result, which is improved.

37 (5)~(9) The Cochrane-Orcutt iterative procedure in EViews. [Screenshot: the EViews command used to run the iterative procedure.]

38 The result of the iterative procedure. [Screenshot: the estimated ρ; each variable is transformed; the DW statistic is improved.]

39 5. Prais-Winsten transformation
A Generalized Least Squares (GLS) method. Start from
Yt = β0 + β1 Xt + εt, t = 1, ..., T   (1)
and assume AR(1): εt = ρ εt-1 + ut, -1 < ρ < 1. Lagging (1) one period and multiplying by ρ:
ρYt-1 = ρβ0 + ρβ1 Xt-1 + ρεt-1   (2)
(1) - (2) => (Yt - ρYt-1) = β0(1 - ρ) + β1(Xt - ρXt-1) + (εt - ρεt-1)
GLS => Yt* = β0* + β1* Xt* + ut

40 To avoid the loss of the first observation, the first observations Y1* and X1* should be transformed (restored) as:
Y1* = √(1 - ρ̂²) · Y1 and X1* = √(1 - ρ̂²) · X1
but Y2* = Y2 - ρ̂Y1, X2* = X2 - ρ̂X1; Y3* = Y3 - ρ̂Y2, X3* = X3 - ρ̂X2; ...; Yt* = Yt - ρ̂Yt-1, Xt* = Xt - ρ̂Xt-1.
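A sketch of the full Prais-Winsten transformation, rescaling the first observation by √(1 - ρ̂²) instead of dropping it; ρ̂ = 0.6 and the series values are placeholders.

```python
import numpy as np

def prais_winsten_transform(z, rho):
    z_star = np.empty_like(z, dtype=float)
    z_star[0] = np.sqrt(1.0 - rho**2) * z[0]  # restore the first observation
    z_star[1:] = z[1:] - rho * z[:-1]         # zt* = zt - rho*zt-1 for t >= 2
    return z_star

rho_hat = 0.6                                 # illustrative estimate of rho
y = np.array([2.0, 2.5, 3.1, 3.0, 3.8])       # hypothetical series
y_star = prais_winsten_transform(y, rho_hat)  # same length as y: nothing lost
```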

41 6. Durbin’s Two-step method :
Since (Yt - Yt-1) = 0 (1 - ) + 1 (Xt - Xt-1) + ut => Yt = 0* + 1 Xt - 1 Xt-1 + Yt-1 + ut Yt = 0 + 1 Xt + t 6. Durbin’s Two-step method : I. Run OLS => this specification Yt = 0* + 1* Xt - 2* Xt-1 + 3* Yt-1 + ut Obtain 3* as an estimated  (RHO) ^ II. Transforming the variables : Yt* = Yt - 3* Yt as Yt* = Yt -  Yt-1 and Xt* = Xt - 3* Xt as Xt* = Xt -  Xt-1 ^ III. Run OLS on model : Yt* = 0 + 1 Xt* + ’t and 1 = 1 ^ where 0 = 0 (1 - )
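A numpy sketch of Durbin's two-step method: the coefficient on Yt-1 in the expanded regression serves as ρ̂, followed by OLS on the quasi-differenced data; y and x are hypothetical, simulated as in the earlier sketches.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=200)
u = rng.normal(size=200)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.6 * e[t - 1] + u[t]              # AR(1) errors, rho = 0.6
y = 1 + 2 * x + e

# Step I: regress Yt on [Xt, Xt-1, Yt-1] with intercept; rho_hat = coef on Yt-1
Z = np.column_stack([np.ones(len(y) - 1), x[1:], x[:-1], y[:-1]])
a = np.linalg.lstsq(Z, y[1:], rcond=None)[0]
rho_hat = a[3]

# Steps II-III: quasi-difference and re-run OLS
y_star = y[1:] - rho_hat * y[:-1]
x_star = x[1:] - rho_hat * x[:-1]
X2 = np.column_stack([np.ones(len(y_star)), x_star])
b = np.linalg.lstsq(X2, y_star, rcond=None)[0]
beta0_hat = b[0] / (1 - rho_hat)              # since beta0* = beta0*(1 - rho)
print(rho_hat, beta0_hat, b[1])
```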

42 [Screenshot: the Durbin two-step regression in EViews, including the lagged term of Y; the coefficient on Yt-1 gives the estimated ρ̂ ("rho").]

43 Limitation of the Durbin-Watson test: lagged dependent variables
With a lagged dependent variable, Yt = β0 + β1 X1t + β2 X2t + ... + βk Xkt + λ1 Yt-1 + εt,
the DW statistic will often be close to 2; i.e., DW does not converge to 2(1 - ρ̂), so DW is not reliable.
Durbin-h test: compute
h* = ρ̂ √( n / (1 - n·Var(λ̂1)) )
and compare h* to Zc, where h ~ N(0, 1), the standard normal distribution.
If |h*| > Zc => reject H0: ρ = 0 (no autocorrelation).
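A sketch of the Durbin-h computation; the values of ρ̂, n, and Var(λ̂1) below are illustrative, not from the lecture.

```python
import math

def durbin_h(rho_hat, n, var_lag_coef):
    """h = rho_hat * sqrt(n / (1 - n*Var(lambda_1_hat)))."""
    inside = 1 - n * var_lag_coef
    if inside <= 0:
        raise ValueError("h is undefined when n*Var(lambda_1_hat) >= 1")
    return rho_hat * math.sqrt(n / inside)

# e.g., rho_hat can be taken from the DW statistic via rho_hat ~ 1 - d/2
h = durbin_h(rho_hat=0.3, n=60, var_lag_coef=0.005)
print(h, abs(h) > 1.96)   # compare |h| to the 5% critical value of N(0, 1)
```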

44 Durbin-h test: compute h* = ρ̂ √( n / (1 - n·Var(λ̂1)) ).
Here h* = 4.458 > Zc (e.g., 1.96 at the 5% level); therefore reject H0: ρ = 0 (no autocorrelation).

