Each realization of the process is a proportion rho of the previous one plus an error drawn from a distribution with mean zero and variance sigma squared: y_t = rho*y_{t-1} + v_t. The model can be generalised to higher autocorrelation orders; we only show the AR(1) case.
We can show that the constant mean of this series is zero
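As a quick illustrative check of these two claims, here is a minimal simulation sketch (not from the text; the parameter values rho = 0.7 and sigma = 1 are arbitrary choices): the sample mean of a long stationary AR(1) path should be close to zero, and the sample variance close to the theoretical value sigma^2 / (1 - rho^2).

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(rho, sigma, T, y0=0.0):
    """Simulate y_t = rho * y_{t-1} + v_t with v_t ~ N(0, sigma^2)."""
    y = np.empty(T)
    prev = y0
    for t in range(T):
        prev = rho * prev + rng.normal(0.0, sigma)
        y[t] = prev
    return y

y = simulate_ar1(rho=0.7, sigma=1.0, T=10_000)
# Sample mean should be near the theoretical mean of zero, and the
# sample variance near sigma^2 / (1 - rho^2) = 1 / (1 - 0.49) ~ 1.96.
print(y.mean(), y.var())
```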
We can also allow for a non-zero mean by replacing y_t with (y_t - mu), which boils down to y_t = alpha + rho*y_{t-1} + v_t, using alpha = mu(1 - rho).
Or we can allow for an AR(1) fluctuating around a linear trend (mu + delta*t). The "de-trended" model, which is now stationary, behaves like an autoregressive model, with alpha = mu(1 - rho) + rho*delta and lambda = delta(1 - rho).
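The algebra behind the two reparameterisations above can be written out explicitly (a worked derivation, using the same symbols as in the slides):

```latex
% De-meaned AR(1):
(y_t - \mu) = \rho\,(y_{t-1} - \mu) + v_t
\;\Longrightarrow\;
y_t = \alpha + \rho\, y_{t-1} + v_t,
\qquad \alpha = \mu(1-\rho).

% De-trended AR(1):
(y_t - \mu - \delta t) = \rho\,\bigl(y_{t-1} - \mu - \delta(t-1)\bigr) + v_t
\;\Longrightarrow\;
y_t = \alpha + \rho\, y_{t-1} + \lambda t + v_t,
\qquad \alpha = \mu(1-\rho) + \rho\,\delta,
\quad \lambda = \delta(1-\rho).
```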
Both series are independent and artificially generated, yet they appear related: a spurious regression…
Figure 12.3 (a) Time Series of Two Random Walk Variables
Figure 12.3 (b) Scatter Plot of Two Random Walk Variables
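The spurious-regression phenomenon in Figure 12.3 is easy to reproduce by simulation. A minimal sketch (sample size and replication count are arbitrary choices): regressing one random walk on another, completely independent, random walk and applying the conventional t-test rejects the true null of no relationship far more often than the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps = 200, 500
reject = 0
for _ in range(reps):
    # Two completely independent random walks: y_t = y_{t-1} + v_t.
    x = np.cumsum(rng.normal(size=T))
    y = np.cumsum(rng.normal(size=T))
    # OLS of y on a constant and x, with the conventional t-ratio on the slope.
    X = np.column_stack([np.ones(T), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    if abs(beta[1] / se) > 1.96:
        reject += 1
print(f"nominal 5% t-test rejects in {reject/reps:.0%} of samples")
```

The rejection rate is dramatically above 5%, which is exactly why levels regressions between nonstationary series cannot be taken at face value.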
Dickey-Fuller Test 1 (no constant and no trend)
This gives an easier way to test the hypothesis about rho: subtracting y_{t-1} from both sides yields Delta y_t = gamma*y_{t-1} + v_t with gamma = rho - 1, so testing rho = 1 is the same as testing gamma = 0. Remember that the null is a unit root: nonstationarity!
Dickey-Fuller Test 2 (with constant but no trend)
Dickey-Fuller Test 3 (with constant and with trend)
First step: plot the time series of the original observations on the variable.
- If the series appears to be wandering or fluctuating around a sample average of zero, use Version 1.
- If the series appears to be wandering or fluctuating around a non-zero sample average, use Version 2.
- If the series appears to be wandering or fluctuating around a linear trend, use Version 3.
An important extension of the Dickey-Fuller test allows for the possibility that the error term is autocorrelated. The unit root tests based on (12.6) and its variants (intercept excluded or trend included) are referred to as augmented Dickey-Fuller tests.
F = US Federal funds interest rate B = 3-year bonds interest rate
In STATA:

use usa, clear
gen date = q(1985q1) + _n - 1
format %tq date
tsset date

TESTING UNIT ROOTS "BY HAND":

* Augmented Dickey-Fuller regressions
regress D.F L1.F L1.D.F
regress D.B L1.B L1.D.B
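The same "by hand" regression can be sketched outside Stata. An illustrative numpy version (not the book's usa.dta data; a simulated random walk stands in for the series F), mirroring `regress D.F L1.F L1.D.F`, which includes a constant by default:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
y = np.cumsum(rng.normal(size=T))     # simulated random walk stands in for F

dy = np.diff(y)                        # D.y
# Regress D.y_t on a constant, y_{t-1} and D.y_{t-1}.
Y = dy[1:]
X = np.column_stack([np.ones(len(Y)), y[1:-1], dy[:-1]])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta
s2 = resid @ resid / (len(Y) - X.shape[1])
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
tau = beta[1] / se[1]                  # tau statistic on the lagged level
print(f"gamma = {beta[1]:.4f}, tau = {tau:.2f}")
```

For a true random walk, tau should fall short of the Dickey-Fuller critical value (around -2.86 with a constant), so we fail to reject the unit root.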
In STATA: Augmented Dickey-Fuller regressions with built-in functions:

dfuller F, regress lags(1)
dfuller B, regress lags(1)

The lags() option lets us allow for more than an AR(1) order.
Alternative: pperron uses Newey-West standard errors to account for serial correlation, whereas the augmented Dickey-Fuller test implemented in dfuller uses additional lags of the first-differenced variable. Also consider using dfgls (Elliott, Rothenberg and Stock, 1996) to counteract the lack of power of the standard tests in small samples; in STATA it also offers a lag-selection procedure based on the sequential t test suggested by Ng and Perron (1995).
Alternatives: use tests with stationarity as the null, such as the KPSS test (Kwiatkowski, Phillips, Schmidt and Shin, 1992), which also has an automatic bandwidth selection tool, or the Leybourne-McCabe test.
The first difference of the random walk is stationary. The random walk is an example of an I(1) series ("integrated of order 1"); first-differencing it turns it into an I(0) (stationary) series. In general, the order of integration is the minimum number of times a series must be differenced to make it stationary.
So now we reject the unit root after differencing once: we have an I(1) series.
In STATA: ADF on differences

dfuller D.F, noconstant lags(0)
dfuller D.B, noconstant lags(0)
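An illustrative numpy sketch of the same test on simulated data (again a random walk standing in for the series, not the book's data), mirroring `dfuller D.F, noconstant lags(0)`: the Dickey-Fuller regression on the differenced series, with no constant and no extra lags.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
y = np.cumsum(rng.normal(size=T))    # I(1): a random walk
d = np.diff(y)                        # first difference: should be I(0)

# DF regression on the differenced series, no constant, no extra lags:
# Delta d_t = gamma * d_{t-1} + error
Y = np.diff(d)
X = d[:-1]
gamma = (X @ Y) / (X @ X)             # no-constant OLS slope
resid = Y - gamma * X
s2 = resid @ resid / (len(Y) - 1)
tau = gamma / np.sqrt(s2 / (X @ X))
print(f"gamma = {gamma:.3f}, tau = {tau:.1f}")
```

Because the differenced series is (close to) white noise, gamma is near -1 and tau is hugely negative, far below any Dickey-Fuller critical value: the unit root is rejected after one difference.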
To summarize: if the variables are stationary, or I(1) and cointegrated, we can estimate a regression relationship between the levels of those variables without fear of encountering a spurious regression. We can then use the lagged residuals from the cointegrating regression in an ECM (error-correction model). This is the best-case scenario, since if we had to first-difference the variables, we would be throwing away the long-run variation. Additionally, the cointegrating regression yields a "superconsistent" estimator in large samples.
To summarize: if the variables are I(1) and not cointegrated, we need to estimate a relationship in first differences, with or without a constant term. If they are trend-stationary, we can either de-trend the series first and then perform regression analysis with the stationary (de-trended) variables, or estimate a regression relationship that includes a trend variable. The latter alternative is typically applied.
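The cointegration-plus-ECM logic above can be sketched with the two-step (Engle-Granger style) procedure on simulated data. All series and parameter values here are illustrative inventions: two I(1) series share a common stochastic trend, the levels regression recovers the cointegrating relationship, and its lagged residual enters the ECM with a negative (error-correcting) coefficient.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500

# A cointegrated pair: both I(1), sharing the common stochastic trend w_t.
w = np.cumsum(rng.normal(size=T))
x = w + rng.normal(size=T)
y = 2.0 + 0.5 * w + rng.normal(size=T)

# Step 1: cointegrating regression in levels; residuals should be stationary.
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b                          # lagged versions feed the ECM

# Step 2: error-correction model on the stationary first differences,
# using the lagged residual as the error-correction term.
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(T - 1), dx, e[:-1]])
g, *_ = np.linalg.lstsq(Z, dy, rcond=None)
print(f"error-correction coefficient = {g[2]:.3f}")  # should be negative
```

A negative error-correction coefficient means deviations from the long-run relationship are pulled back toward it, which is the economic content of cointegration.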
Slide 12-47 Principles of Econometrics, 3rd Edition

Keywords:
- Augmented Dickey-Fuller test
- Autoregressive process
- Cointegration
- Dickey-Fuller tests
- Mean reversion
- Order of integration
- Random walk process
- Random walk with drift
- Spurious regressions
- Stationary and nonstationary
- Stochastic process
- Stochastic trend
- Tau statistic
- Trend and difference stationary
- Unit root tests
Further issues: Kit Baum has really good notes on these topics, which can also be used to learn about extra STATA commands to handle the analysis:
http://fmwww.bc.edu/ec-c/s2003/821/ec821.sect05.nn1.pdf
http://fmwww.bc.edu/ec-c/s2003/821/ec821.sect06.nn1.pdf
For example, some of you should look at seasonal unit root analysis (the HEGY command in STATA). Panel unit roots are covered here:
http://fmwww.bc.edu/ec-c/s2003/821/ec821.sect09.nn1.pdf
Further issues: You may at some point want to consider unit root tests that allow for structural breaks. You can also take a look at the literature review in this working paper:
http://ideas.repec.org/p/wpa/wuwpot/0410002.html
Further issues: When you deal with more than two variables, you should consider Johansen's method to examine the cointegration relationships.