Unit Roots & Forecasting
Methods of Economic Investigation, Lecture 20
Last Time: Descriptive Time Series Processes
Estimating with exogenous serial correlation
Estimating with endogenous processes
Today’s Class: Non-stationary Time Series and Returning to Causal Effects
Unit Roots and Spurious Regressions
Orders of Integration
Returning to Causal Effects
Impulse Response Functions
Forecasting
Random Walk Processes
Definition: E_t[x_{t+1}] = x_t, that is, today’s value of x is the best predictor of tomorrow’s value. This is just an AR(1) process with φ = 1. Autocovariances of a random walk are not well defined in a technical sense, but imagine an AR(1) process with φ close to 1: we have nearly perfect autocorrelation between any two time periods. Persistence dies out so slowly that most of the variance is due to very low-frequency “shocks.”
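A minimal simulation sketch (all parameters assumed for illustration) of a random walk, checking that the one-step-ahead forecast error x_{t+1} − x_t is just the next shock, mean-zero and unpredictable:

```python
# Sketch (assumed parameters): simulate a random walk x_t = x_{t-1} + e_t
# and check that today's value is the best predictor of tomorrow's,
# i.e. E_t[x_{t+1}] = x_t.
import numpy as np

rng = np.random.default_rng(0)
T = 500
e = rng.normal(0, 1, T)          # i.i.d. shocks
x = np.cumsum(e)                 # random walk: x_t = sum of shocks up to t

# The one-step-ahead forecast error x_{t+1} - x_t is just e_{t+1},
# so it is mean zero and unpredictable from the past.
print(np.mean(x[1:] - x[:-1]))   # close to 0
```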
Permanence of Shocks under a Unit Root
An innovation (a shock at time t) to a stationary AR process dies out eventually: the autocorrelation function declines to zero. A shock to a random walk is permanent, and the variance increases over time: Var(x_t) = Var(x_0) + t·σ² (simulated below).
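A short Monte Carlo sketch (sample sizes assumed) of the growing variance: simulate many independent random walks from x_0 = 0 and compare the cross-sectional variance at each date with t·σ²:

```python
# Sketch (assumed setup): simulate many independent random walks and
# verify Var(x_t) grows linearly in t, Var(x_t) = Var(x_0) + t*sigma^2.
import numpy as np

rng = np.random.default_rng(1)
n_paths, T, sigma = 10_000, 200, 1.0
shocks = rng.normal(0, sigma, (n_paths, T))
paths = np.cumsum(shocks, axis=1)    # each row is one random walk from x_0 = 0

# Cross-sectional variance at each date should be close to t*sigma^2.
var_t = paths.var(axis=0)
print(var_t[49], var_t[199])         # roughly 50 and 200
```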
Drifts and Trends
Deterministic trend: y_t = δ·t + x_t + ε_t, where x_t is some stationary process. Here y_t is “trend” stationary: it is stationary around the deterministic trend δ·t. It is just as easy to add a deterministic trend (drift) to a random walk, e.g. x_t = μ + x_{t-1} + ε_t; the two cases are contrasted in the sketch below.
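A minimal sketch (parameter values assumed) contrasting a trend-stationary series with a random walk with drift: both trend upward, but only the first stays close to its trend line:

```python
# Sketch (assumed parameters): a trend-stationary series y_t = delta*t + e_t
# versus a random walk with drift x_t = mu + x_{t-1} + e_t. Both trend
# upward, but only the first returns to its trend line after a shock.
import numpy as np

rng = np.random.default_rng(2)
T, delta, mu = 300, 0.5, 0.5
e = rng.normal(0, 1, T)

trend_stationary = delta * np.arange(T) + e    # stationary around delta*t
rw_with_drift = np.cumsum(mu + e)              # x_t = mu*t + accumulated shocks

# Deviations from the linear trend: bounded for the first series,
# wandering (variance growing with t) for the second.
print(np.std(trend_stationary - delta * np.arange(T)))
print(np.std(rw_with_drift - mu * np.arange(1, T + 1)))
```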
Orders of Integration A series is integrated of order p if p differences render it stationary. If differencing once renders the time series stationary, it is integrated of order 1, or I(1). If it is necessary to difference twice before the time series is stationary, it is I(2), and so forth.
Integrated Series If a time series has a unit root, it is said to be integrated; first differencing removes the unit root. E.g., for a random walk y_t = y_{t-1} + u_t with u_t ~ N(0, σ²), the first difference Δy_t = u_t is white noise, which is stationary (verified numerically below). For an AR(p), a unit root means the lag polynomial contains a factor (1 − L): 1 − β_1 L − β_2 L² − ... − β_p L^p = (1 − L)(1 − λ_1 L − λ_2 L² − ... − λ_{p−1} L^{p−1}), so first differencing removes the unit root here as well.
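A quick numerical check of the random-walk case: differencing recovers the white-noise shocks exactly:

```python
# Sketch: for a random walk y_t = y_{t-1} + u_t, the first difference
# recovers the white-noise shocks u_t exactly.
import numpy as np

rng = np.random.default_rng(3)
u = rng.normal(0, 1, 400)
y = np.cumsum(u)                 # random walk
dy = np.diff(y)                  # first difference

print(np.allclose(dy, u[1:]))    # True: delta y_t = u_t, which is stationary
```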
Non-stationarity Non-stationarity can have important consequences for regression models and inference:
Autoregressive coefficients are biased
t-statistics have non-normal distributions even in large samples
Spurious regression
Problem: Spurious Regression
Imagine two series generated by independent random walks. Suppose we regress y_t on x_t using OLS, estimating y_t = α + β·x_t + ν_t. Even though the series are independent, we tend to find a “significant” β, because the low-frequency movements in both series make them appear associated. The Monte Carlo sketch below illustrates the problem.
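A Monte Carlo sketch of the spurious-regression problem (sample size and number of replications assumed): with independent random walks, a nominal 5% test rejects far more than 5% of the time:

```python
# Sketch (assumed Monte Carlo design): regress one independent random
# walk on another and count how often the slope looks "significant".
# With i.i.d. data the rejection rate at the 5% level would be ~5%;
# with independent random walks it is far higher.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
T, n_sims, rejections = 100, 500, 0
for _ in range(n_sims):
    y = np.cumsum(rng.normal(size=T))
    x = np.cumsum(rng.normal(size=T))
    slope, intercept, r, p, se = stats.linregress(x, y)
    rejections += abs(slope / se) > 1.96    # naive t-test on beta

print(rejections / n_sims)   # typically well above 0.05
```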
Unit Root Tests The standard Dickey-Fuller test is appropriate for AR(1) processes, but many economic and financial time series have a more complicated dynamic structure than a simple AR(1) model captures. Said and Dickey (1984) augmented the basic autoregressive unit root test to accommodate general ARMA(p, q) models with unknown orders; the result is called the augmented Dickey-Fuller (ADF) test.
ADF Test – 1 The ADF test tests the null hypothesis that a time series y_t is I(1) against the alternative that it is I(0), assuming that the dynamics in the data have an ARMA structure. The ADF test is based on estimating the test regression
Δy_t = β′D_t + φ·y_{t−1} + ψ_1 Δy_{t−1} + ... + ψ_p Δy_{t−p} + ε_t
where D_t contains the deterministic variables (constant, possibly a trend), φ·y_{t−1} carries the potential unit root, and the lagged differences absorb the other serial correlation.
ADF Test - 2 To see why, start from the AR(p) model y_t = α_1 y_{t−1} + ... + α_p y_{t−p} + ε_t. Subtract y_{t−1} from both sides and define
Φ = (α_1 + α_2 + ... + α_p − 1)
and we get the test regression above. Test Φ = 0 against the alternative Φ < 0, using the special Dickey-Fuller critical values (lower and upper bounds): under the null of a unit root, the test statistic is not normally distributed, even in large samples. A sketch of the test in practice follows.
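In practice the test is usually run with a packaged implementation; a minimal sketch using statsmodels’ adfuller, with the lag length chosen by AIC:

```python
# Sketch using statsmodels' implementation of the ADF test (adfuller).
# Null: the series has a unit root (I(1)); small p-values reject in
# favour of stationarity. Lag length is chosen by AIC here.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
random_walk = np.cumsum(rng.normal(size=500))
white_noise = rng.normal(size=500)

for name, series in [("random walk", random_walk), ("white noise", white_noise)]:
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    print(name, round(stat, 2), round(pvalue, 3))
# Expect a large p-value for the random walk and p ~ 0 for white noise.
```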
Estimating in Time Series
Non-stationary time series can cause serious problems in econometric analysis. To work with time series, particularly in regression models, we should therefore transform our variables into stationary time series first. First differencing removes unit roots or trends, so difference a time series until it is I(0); a sketch of this recipe follows this paragraph. Differencing too often is less of a problem, since a differenced stationary series is still stationary. Regressions of one stationary variable on another are less problematic: although observations may not be independent, we can expect the regression to have properties similar to those in cross-sectional data.
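A sketch of the difference-until-I(0) recipe, using a hypothetical helper order_of_integration built around the ADF test (the helper name and the 5% cutoff are my assumptions, not part of the lecture):

```python
# Sketch (hypothetical helper): difference a series until the ADF test
# no longer finds a unit root, returning the estimated order of integration.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def order_of_integration(y, alpha=0.05, max_d=3):
    """Return the smallest d such that the d-th difference looks I(0)."""
    for d in range(max_d + 1):
        if adfuller(y, autolag="AIC")[1] < alpha:   # p-value below alpha: stationary
            return d
        y = np.diff(y)
    return max_d

rng = np.random.default_rng(6)
i2 = np.cumsum(np.cumsum(rng.normal(size=500)))     # integrated of order 2
print(order_of_integration(i2))                     # typically 2
```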
Impulse Response Function
One of the most interesting things to do with an ARMA model is to form predictions of the variable given its past: we want to know E_t(x_{t+j}), and we can do inference with Var_t(x_{t+j}); a sketch for the AR(1) case follows. The impulse response function is a simple way to do this: follow the path that x takes if it is kicked by a unit shock. It is a characterization of the behavior of our models, and it allows us to start thinking about “causes” and “effects”.
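A minimal sketch (φ and σ assumed) for an AR(1), where the j-step-ahead forecast is E_t(x_{t+j}) = φ^j·x_t and the forecast variance is Var_t(x_{t+j}) = σ²(1 + φ² + ... + φ^{2(j−1)}):

```python
# Sketch for an AR(1) x_t = phi*x_{t-1} + e_t (assumed parameters):
# E_t(x_{t+j}) = phi^j * x_t, and the forecast variance is
# Var_t(x_{t+j}) = sigma^2 * (1 + phi^2 + ... + phi^(2(j-1))).
import numpy as np

phi, sigma, x_t = 0.8, 1.0, 2.0
horizons = np.arange(1, 11)

forecasts = phi ** horizons * x_t
forecast_var = sigma**2 * np.array(
    [sum(phi ** (2 * i) for i in range(j)) for j in horizons]
)

print(forecasts[0], forecasts[-1])   # decays toward the unconditional mean of 0
print(forecast_var[-1])              # approaches sigma^2 / (1 - phi^2)
```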
Impulse Response and MA(∞)
The MA(∞) representation is the same thing as the impulse response function. The easiest way to calculate an MA(∞) representation is to simulate the impulse response function (see the sketch below). Equivalently, the impulse response function equals the revision in expectations, E_t(x_{t+j}) − E_{t−1}(x_{t+j}).
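A minimal sketch of this equivalence for an AR(1) with assumed φ = 0.8: feed in a unit shock at t = 0 with no further noise, and the simulated path reproduces the MA(∞) coefficients ψ_j = φ^j:

```python
# Sketch: compute the impulse response of an AR(1) by feeding in a unit
# shock at t = 0 with no further noise; the responses equal the MA(inf)
# coefficients psi_j = phi^j.
import numpy as np

phi, J = 0.8, 10
irf = np.empty(J)
x = 0.0
for j in range(J):
    shock = 1.0 if j == 0 else 0.0     # unit shock at t = 0 only
    x = phi * x + shock
    irf[j] = x

print(np.allclose(irf, phi ** np.arange(J)))   # True: psi_j = phi^j
```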
Causality and Impulse Response
Can either forecast or simulate the effect of a given shock:
Pick a shock time and level to simulate, and try to replicate the observed data; the issue is whether that shock is what really happened.
Or: know that a shock happened at time t, and see if the observed change matches (more on this next time).
Granger causality implies a correlation between the current value of one variable and the past values of others; it does not necessarily imply that changes in one variable “cause” changes in another. Use an F-test to jointly test the significance of the lags on the explanatory variables; this in effect tests for ‘Granger causality’ between these variables (a code sketch follows). The correlation can also be seen visually in impulse response functions.
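A minimal sketch of the Granger F-test using statsmodels’ grangercausalitytests (the data-generating process is assumed for illustration):

```python
# Sketch using statsmodels' grangercausalitytests: joint F-tests on the
# lags of the second column in explaining the first. Here y responds to
# lagged x by construction, so the test should reject non-causality.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * x[t - 1] + rng.normal()   # y depends on lagged x

data = np.column_stack([y, x])             # does column 2 Granger-cause column 1?
results = grangercausalitytests(data, maxlag=2)
print(results[1][0]["ssr_ftest"][1])       # p-value at lag 1: near zero
```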
[Figure omitted. Source: Cochrane, QJE (1994)]
Next Time: Estimating Causality in Time Series
Some additional forecasting topics
Testing for breaks
Regression discontinuity / event study