MODEL IDENTIFICATION.


1 MODEL IDENTIFICATION

2 Stationary Time Series

3 Wold’s Theorem Wold’s decomposition theorem states that a stationary time series process with no deterministic components has an infinite moving average (MA(∞)) representation. This MA(∞) representation can in turn be well approximated by a finite-order autoregressive moving average (ARMA) process. Therefore, by examining the first- and second-order moments, we can identify a stationary process.
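As a sketch of the statement (notation assumed here), the decomposition can be written as:

```latex
Y_t = \mu + \sum_{j=0}^{\infty} \psi_j a_{t-j},
\qquad \psi_0 = 1,\quad \sum_{j=0}^{\infty} \psi_j^2 < \infty,\quad a_t \sim WN(0,\sigma_a^2)
```

Approximating the infinite polynomial ψ(B) by a ratio of finite polynomials θ(B)/φ(B) yields the finite ARMA representation.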

4 Non-Stationary Time Series

5 NON-STATIONARY TIME SERIES MODELS
Non-constant in mean; non-constant in variance; or both.

6 NON-STATIONARITY IN MEAN
Deterministic trend → detrending. Stochastic trend → differencing.

7 DETERMINISTIC TREND A series has a deterministic trend when it trends because it is an explicit function of time. Using a simple linear trend model, the deterministic (global) trend can be estimated. This approach is very simple and assumes that the pattern represented by the linear trend remains fixed over the observed time span of the series. A simple linear trend model:
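A common way to write the simple linear trend model referred to above (the β notation is assumed here):

```latex
Y_t = \beta_0 + \beta_1 t + \varepsilon_t, \qquad t = 1, 2, \dots, n
```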

8 DETERMINISTIC TREND The parameter β1 measures the average change in Yt from one period to the next. The sequence {Yt} will exhibit only temporary departures from the trend line β0 + β1t. This type of model is called a trend stationary (TS) model.

9 EXAMPLE

10 TREND STATIONARY If a series has a deterministic time trend, then we simply regress Yt on an intercept and a time trend (t = 1, 2, …, n) and save the residuals. The residuals form the detrended series. If Yt is trend stationary, the residual series will be stationary. If Yt instead has a stochastic trend, the residuals will not necessarily be stationary.
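As an illustration of this detrending recipe, here is a minimal sketch in Python with NumPy (rather than the R used elsewhere in these notes); the trend coefficients and AR parameter are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n, dtype=float)

# Stationary AR(1) noise with phi = 0.6 (illustrative value)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal()

# Trend-stationary series: deterministic linear trend + stationary noise
y = 2.0 + 0.5 * t + noise

# Regress y on an intercept and a time trend, save the residuals
slope, intercept = np.polyfit(t, y, 1)
resid = y - (intercept + slope * t)

# The OLS residuals are the detrended series: zero mean, no remaining trend
print(abs(resid.mean()) < 1e-8, abs(np.polyfit(t, resid, 1)[0]) < 1e-8)
```

By construction the OLS residuals are orthogonal to the regressors, so both checks hold; with a stochastic trend instead, the residuals would generally remain non-stationary.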

11 DETERMINISTIC TREND Many economic series exhibit an “exponential trend/growth”: they grow over time like an exponential function rather than a linear one. For such series, we want to work with the log of the series:

12 STOCHASTIC TREND An ARIMA model with difference order d ≥ 1 (that is, a series with at least one unit root) is the typical time series with a stochastic trend. Such a series is also called difference stationary, in contrast to trend stationary.

13 STOCHASTIC TREND Recall the AR(1) model: Yt = c + φYt−1 + at.
As long as |φ| < 1, it is stationary and everything is fine (OLS is consistent, t-stats are asymptotically normal, ...). Now consider the extreme case where φ = 1, i.e. Yt = c + Yt−1 + at. Where is the trend? There is no t term.

14 STOCHASTIC TREND Let us substitute recursively for the lagged Yt on the right-hand side: a deterministic trend term ct emerges. This is what we call a “random walk with drift”. If c = 0, it is a “random walk”.
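Back-substituting repeatedly gives the decomposition the slide alludes to:

```latex
Y_t = c + Y_{t-1} + a_t
    = Y_0 \;+\; \underbrace{c\,t}_{\text{deterministic trend}} \;+\; \underbrace{\sum_{i=1}^{t} a_i}_{\text{stochastic trend}}
```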

15 STOCHASTIC TREND Each ai shock represents a shift in the intercept. Since all values of {ai} have a coefficient of unity, the effect of each shock on the intercept term is permanent. In the time series literature, such a sequence is said to have a stochastic trend, since each ai shock imparts a permanent and random change in the conditional mean of the series. To model this situation, we use Autoregressive Integrated Moving Average (ARIMA) models.

16 DETERMINISTIC VS STOCHASTIC TREND
They might appear similar, since both lead to growth over time, but they are quite different. To see why, suppose that through some policy you got a bigger Yt because the noise at is big. What will happen next period?
– With a deterministic trend, Yt+1 = c + β(t+1) + at+1. The noise at does not affect Yt+1: your policy had a one-period impact.
– With a stochastic trend, Yt+1 = c + Yt + at+1 = c + (c + Yt−1 + at) + at+1. The noise at does affect Yt+1: in fact, the policy has a permanent impact.

17 DETERMINISTIC VS STOCHASTIC TREND
Conclusions:
– When dealing with trending series, we are always interested in knowing whether the growth is a deterministic or a stochastic trend.
– There are also economic time series that do not grow over time (e.g., interest rates), but we still need to check whether they behave “similarly” to stochastic trends (φ = 1 instead of |φ| < 1, with c = 0).
– A deterministic trend refers to the long-term trend that is not affected by short-term fluctuations in the series. However, some random shocks may have a permanent effect on the trend, so in general the trend can contain both a deterministic and a stochastic component.

18 DETERMINISTIC TREND EXAMPLE
Simulate data from, say, an AR(1):
> x = arima.sim(list(order = c(1,0,0), ar = 0.6), n = 100)
Simulate data with a deterministic trend:
> y = 2 + time(x)*2 + x
> plot(y)

19 DETERMINISTIC TREND EXAMPLE
> reg = lm(y ~ time(y))
> summary(reg)
In the summary output, both the intercept (Pr(>|t|) on the order of e-14, ‘***’) and the time(y) slope (Pr(>|t|) < 2e-16, ‘***’) are highly significant, with residual standard error on 98 degrees of freedom and F-statistic 2.142e+05 on 1 and 98 DF, p-value < 2.2e-16.

20 DETERMINISTIC TREND EXAMPLE
> plot(y=rstudent(reg),x=as.vector(time(y)), ylab='Standardized Residuals',xlab='Time',type='o')

21 DETERMINISTIC TREND EXAMPLE
> z = rstudent(reg)
> par(mfrow=c(1,2))
> acf(z)
> pacf(z)
The ACF/PACF of the de-trended series show the pattern of the underlying AR(1).

22 STOCHASTIC TREND EXAMPLE
Simulate data from an ARIMA(0,1,1):
> x = arima.sim(list(order = c(0,1,1), ma = -0.7), n = 200)
> plot(x)
> acf(x)
> pacf(x)

23 AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA) PROCESSES
Consider an ARIMA(p,d,q) process
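In the usual backshift notation (assumed here), the ARIMA(p,d,q) process is:

```latex
\phi_p(B)\,(1-B)^d\,Y_t = \theta_0 + \theta_q(B)\,a_t,
\qquad \phi_p(B) = 1 - \phi_1 B - \cdots - \phi_p B^p,
\quad \theta_q(B) = 1 - \theta_1 B - \cdots - \theta_q B^q
```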

24 ARIMA MODELS When d = 0, θ0 is related to the mean of the process.
When d > 0, θ0 is a deterministic trend term. Non-stationary in mean: d = 1 difference. Non-stationary in level and slope: d = 2 differences.

25 RANDOM WALK PROCESS A random walk is defined as a process where the current value of a variable is composed of its past value plus an error term that is white noise (zero mean, constant variance, uncorrelated over time). It is the ARIMA(0,1,0) PROCESS.
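In symbols:

```latex
Y_t = Y_{t-1} + a_t, \qquad a_t \sim WN(0, \sigma_a^2)
```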

26 RANDOM WALK PROCESS Examples: the behavior of stock prices; Brownian motion;
the movement of a drunken man. It is the limiting case of an AR(1) process as φ → 1.

27 RANDOM WALK PROCESS The implication of a process of this type is that the best prediction of Y for the next period is the current value; in other words, the process does not allow us to predict the change (Yt − Yt−1). That is, the change in Y is absolutely random. It can be shown that the mean of a random walk process is constant but its variance is not; therefore a random walk process is nonstationary, and its variance increases with t. In practice, the presence of a random walk makes forecasting very simple, since the forecast of every future value Yt+s, s > 0, is simply Yt.
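A quick simulation of the claim that the variance grows with t (a Python/NumPy sketch; the number of paths and the shock variance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps = 20000, 100

# Random walk: Y_t = Y_{t-1} + a_t with Y_0 = 0, so Y_t = a_1 + ... + a_t
shocks = rng.normal(0.0, 1.0, size=(n_paths, n_steps))
paths = shocks.cumsum(axis=1)

# The across-path variance at time t should be close to t * sigma^2 = t
var_50 = paths[:, 49].var()    # theoretical value: 50
var_100 = paths[:, 99].var()   # theoretical value: 100
print(var_50, var_100)
```

The sample variance at t = 100 comes out roughly twice that at t = 50, matching Var(Yt) = tσ².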

28 RANDOM WALK PROCESS

29 RANDOM WALK PROCESS

30 RANDOM WALK WITH DRIFT The change in Yt is partially deterministic and partially stochastic. It can also be written as a pure model of a trend (with no stationary component).

31 RANDOM WALK WITH DRIFT After t periods, the cumulative change in Yt is tθ0. Each ai shock has a permanent effect on the mean of Yt.

32 RANDOM WALK WITH DRIFT

33 ARIMA(0,1,1) OR IMA(1,1) PROCESS
Consider the IMA(1,1) process; by repeated substitution, Yt can be written in terms of its past values and shocks.
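The process being considered is presumably the IMA(1,1) with MA parameter θ (matching the ma = −0.7 simulation earlier):

```latex
(1-B)\,Y_t = (1-\theta B)\,a_t
\quad\Longleftrightarrow\quad
Y_t = Y_{t-1} + a_t - \theta a_{t-1}
```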

34 ARIMA(0,1,1) OR IMA(1,1) PROCESS
Characterized by the sample ACF of the original series failing to die out, while the sample ACF of the first-differenced series shows the pattern of an MA(1). If |θ| < 1, the weights on past observations decrease exponentially, so Yt is a weighted moving average of its past values.

35 ARIMA(0,1,1) OR IMA(1,1) PROCESS
where α is the smoothing constant in the method of exponential smoothing.
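Writing the one-step-ahead forecast of the IMA(1,1) as an exponentially weighted moving average makes the link explicit (the identification α = 1 − θ is assumed here):

```latex
\hat{Y}_{t+1} = \alpha Y_t + (1-\alpha)\hat{Y}_t
= (1-\theta)\sum_{j=0}^{\infty} \theta^{j} Y_{t-j},
\qquad \alpha = 1-\theta
```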

36 REMOVING THE TREND A series containing a trend will not revert to a long-run mean. The usual methods for eliminating the trend are detrending and differencing.

37 DETRENDING Detrending is used to remove deterministic trend.
Regress Yt on time and save the residuals. Then, check whether residuals are stationary.

38 DIFFERENCING Differencing is used for removing the stochastic trend.
d-th difference of ARIMA(p,d,q) model is stationary. A series containing unit roots can be made stationary by differencing. ARIMA(p,d,q)  d unit roots Integrated of order d, I(d)
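A tiny numerical check (Python/NumPy sketch) that one difference removes the single unit root of a random walk:

```python
import numpy as np

rng = np.random.default_rng(1)
shocks = rng.normal(size=500)

# ARIMA(0,1,0): the cumulative sum of white noise has one unit root, i.e. I(1)
y = shocks.cumsum()

# First difference: (1 - B) Y_t = a_t, which is white noise, hence I(0)
dy = np.diff(y)
print(np.allclose(dy, shocks[1:]))  # differencing exactly recovers the shocks
```

Here differencing returns the original white-noise shocks, so the differenced series is stationary by construction.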

39 DIFFERENCING Random walk: Yt = Yt−1 + at is non-stationary, while its first difference ΔYt = at is stationary.

40 KPSS TEST To test whether we have a deterministic trend vs. a stochastic trend, we use the KPSS (Kwiatkowski, Phillips, Schmidt and Shin, 1992) test.

41 KPSS TEST STEP 1: Regress Yt on a constant and trend and construct the OLS residuals e = (e1, e2, …, en)′. STEP 2: Obtain the partial sums of the residuals. STEP 3: Obtain the test statistic, in which the normalizing quantity is an estimate of the long-run variance of the residuals.
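The partial sums and the statistic referred to in Steps 2-3 are usually written as:

```latex
S_t = \sum_{i=1}^{t} e_i, \qquad
\mathrm{KPSS} = \frac{1}{n^2\,\hat{\lambda}^2} \sum_{t=1}^{n} S_t^2
```

where λ̂² denotes the estimate of the long-run variance of the residuals.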

42 KPSS TEST STEP 4: Reject H0 when KPSS is large, because that is evidence that the series wanders from its mean. The asymptotic distribution of the test statistic is based on the standard Brownian bridge. It is a powerful test, but if there is a volatility shift it cannot catch that type of non-stationarity.

43 DETERMINISTIC TREND EXAMPLE
> kpss.test(x, null = c("Level"))
KPSS Test for Level Stationarity
data: x
KPSS Level = , Truncation lag parameter = 2, p-value = 0.01
Warning message:
In kpss.test(x, null = c("Level")) : p-value smaller than printed p-value
> kpss.test(x, null = c("Trend"))
KPSS Test for Trend Stationarity
KPSS Trend = , Truncation lag parameter = 2, p-value = 0.1
In kpss.test(x, null = c("Trend")) : p-value greater than printed p-value
Here we have a deterministic trend, i.e. a trend stationary process. Hence, we need de-trending to work with a stationary series.

44 STOCHASTIC TREND EXAMPLE
> kpss.test(x, null = "Level")
KPSS Test for Level Stationarity
data: x
KPSS Level = 3.993, Truncation lag parameter = 3, p-value = 0.01
Warning message:
In kpss.test(x, null = "Level") : p-value smaller than printed p-value
> kpss.test(x, null = "Trend")
KPSS Test for Trend Stationarity
KPSS Trend = , Truncation lag parameter = 3, p-value = 0.01
In kpss.test(x, null = "Trend") : p-value smaller than printed p-value
Here we have a stochastic trend, i.e. a difference stationary process. Hence, we need differencing to work with a stationary series.

45 PROBLEM When an inappropriate method is used to eliminate the trend, we may create other problems like non-invertibility. E.g.

46 PROBLEM If we misjudge the series as difference stationary, we take a difference when, actually, detrending should have been applied. The first difference then creates a non-invertible unit root process in the MA component.

47 NON-STATIONARITY IN VARIANCE
We will learn more about this in the future. For now, we will only learn the variance stabilizing transformation.

48 VARIANCE STABILIZING TRANSFORMATION
Generally, we use the power (Box-Cox) transformation:
λ = −1: 1/Yt
λ = −0.5: 1/(Yt)^0.5
λ = 0: ln Yt
λ = 0.5: (Yt)^0.5
λ = 1: Yt (no transformation)

49 VARIANCE STABILIZING TRANSFORMATION
The variance stabilizing transformation applies only to positive series. If your series has negative values, add a positive constant to every value so that the whole series becomes positive; then you can assess the need for a transformation. It should be performed before any other analysis, such as differencing. It not only stabilizes the variance but also improves the approximation of the distribution by a normal distribution.

50 Box-Cox TRANSFORMATION
install.packages("TSA")
library(TSA)
oil = ts(read.table('c:/oil.txt', header=T), start=1996, frequency=12)
BoxCox.ar(y=oil)

51 Unit Root Tests

52 UNIT ROOTS IN TIME SERIES MODELS
A shock is an unexpected change in a variable or in the value of the error term at a particular time period. In a stationary system, the effect of a shock dies out gradually. In a non-stationary system, the effect of a shock is permanent.

53 UNIT ROOTS IN TIME SERIES MODELS
Two types of non-stationarity: a unit root, i.e. |φi| = 1 (homogeneous non-stationarity), and |φi| > 1 (explosive non-stationarity), where shocks to the system become more influential as time goes on. The explosive case is essentially never seen in real life.

54 UNIT ROOTS IN TIME SERIES MODELS
e.g. AR(1)

55 UNIT ROOTS IN TIME SERIES MODELS
A root near 1 of the AR polynomial → differencing. A root near 1 of the MA polynomial → over-differencing.

56 UNIT ROOTS IN AUTOREGRESSION
DICKEY-FULLER (DF) TEST: The simplest approach to testing for a unit root begins with the AR(1) model. The basic DF test does not include the constant term φ0 in the model, but the models with and without φ0 give different results.

57 DF TEST Consider the hypothesis
Note that these hypotheses are the reverse of those of the KPSS test.

58 DF TEST To simplify the computation, subtract Yt−1 from both sides of the AR(1) model: ΔYt = (φ − 1)Yt−1 + at = δYt−1 + at. If δ = 0, the system has a unit root.

59 DF TEST DF (1979)

60 DF TEST Applying the OLS method and finding the estimator for φ, the test statistic is the t-ratio tφ=1. The test is a one-sided left-tail test. If {Yt} is stationary (i.e., |φ| < 1), it can be shown that the OLS estimator is asymptotically normal, so tφ=1 is asymptotically N(0,1). Under H0: φ = 1, however, this no longer holds: the limiting distribution of tφ=1 is the non-standard Dickey-Fuller distribution.

61 DF TEST With a constant term: the test regression is Yt = c + φYt−1 + at,
which includes a constant to capture the nonzero mean under the alternative. The hypotheses to be tested are H0: φ = 1 vs. H1: |φ| < 1. This formulation is appropriate for non-trending economic and financial series like interest rates, exchange rates and spreads.

62 DF TEST The test statistics tφ=1 and (n − 1)(φ̂ − 1) are computed from the above regression. Under H0: φ = 1, c = 0, the asymptotic distributions of these test statistics are influenced by the presence, but not the coefficient value, of the constant in the test regression:

63 DF TEST With constant and trend term: the test regression is Yt = c + δt + φYt−1 + at,
which includes a constant and a deterministic time trend to capture the deterministic trend under the alternative. The hypotheses to be tested are H0: φ = 1, δ = 0 vs. H1: |φ| < 1.

64 DF TEST This formulation is appropriate for trending time series like asset prices or the levels of macroeconomic aggregates like real GDP. The test statistics tφ=1 and (n − 1)(φ̂ − 1) are computed from the above regression. Under H0: φ = 1, δ = 0, the asymptotic distributions of these test statistics are influenced by the presence, but not the coefficient values, of the constant and time trend in the test regression.

65 DF TEST The inclusion of a constant and trend in the test regression further shifts the distributions of tφ=1 and (n − 1)(φ̂ − 1) to the left.

66 DF TEST What do we conclude if H0 is not rejected? The series contains a unit root, but is that it? No! What if Yt ∼ I(2)? We would still not have rejected. So we now need to test H0: Yt ∼ I(2) vs. H1: Yt ∼ I(1), and we would continue to test for a further unit root until we reject H0.

67 DF TEST This test is valid only if at is white noise. If there is serial correlation, the test should be augmented, so check for possible autocorrelation in at. Many economic and financial time series have a more complicated dynamic structure than is captured by a simple AR(1) model. Said and Dickey (1984) augmented the basic autoregressive unit root test to accommodate general ARMA(p, q) models with unknown orders; their test is referred to as the augmented Dickey-Fuller (ADF) test.

68 AUGMENTED DICKEY-FULLER (ADF) TEST
If serial correlation exists in the DF test equation (i.e., if the true model is not AR(1)), then use AR(p) to get rid of the serial correlation.

69 ADF TEST To test for a unit root, we assume that the series can be represented by an AR(p) process.

70 ADF TEST Hence, testing for a unit root is equivalent to testing φ = 1 in the following model, or equivalently δ = φ − 1 = 0 in its differenced form:
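Written out (in notation assumed here), the two equivalent forms of the ADF test regression are:

```latex
Y_t = \phi Y_{t-1} + \sum_{j=1}^{p-1} \beta_j\,\Delta Y_{t-j} + a_t
\quad\Longleftrightarrow\quad
\Delta Y_t = \delta Y_{t-1} + \sum_{j=1}^{p-1} \beta_j\,\Delta Y_{t-j} + a_t,
\qquad \delta = \phi - 1
```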

71 ADF TEST Hypotheses: H0: φ = 1 (equivalently δ = 0) vs. H1: |φ| < 1. Reject H0 if tφ=1 < CV (equivalently, if tδ=0 < CV).
We can also use the following test statistics:

72 ADF TEST The limiting distribution of the test statistic
is a non-standard distribution (a functional of Brownian motion, i.e. the Wiener process).

73 Choosing the Lag Length for the ADF Test
An important practical issue for the implementation of the ADF test is the specification of the lag length p. If p is too small, then the remaining serial correlation in the errors will bias the test. If p is too large, then the power of the test will suffer.

74 Choosing the Lag Length for the ADF Test
Ng and Perron (1995) suggest the following data dependent lag length selection procedure that results in stable size of the test and minimal power loss: First, set an upper bound pmax for p. Next, estimate the ADF test regression with p = pmax. If the absolute value of the t-statistic for testing the significance of the last lagged difference is greater than 1.6, then set p = pmax and perform the unit root test. Otherwise, reduce the lag length by one and repeat the process.

75 Choosing the Lag Length for the ADF Test
A useful rule of thumb for determining pmax, suggested by Schwert (1989), is pmax = [12·(n/100)^(1/4)], where [x] denotes the integer part of x. This choice allows pmax to grow with the sample size so that the ADF test regressions are valid if the errors follow an ARMA process with unknown order.
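Schwert's rule of thumb is a one-line computation; a Python sketch:

```python
def schwert_pmax(n: int) -> int:
    # Schwert (1989): pmax = integer part of 12 * (n/100)^(1/4)
    return int(12 * (n / 100.0) ** 0.25)

# pmax grows slowly with the sample size
print(schwert_pmax(54), schwert_pmax(100), schwert_pmax(400))  # prints: 10 12 16
```

For the n = 54 example that follows, the rule suggests an upper bound of 10 lags.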

76 ADF TEST EXAMPLE: n = 54. Examine the original model and the differenced one to determine the order of the AR parameters. For this example, p = 3. Fit the model for t = 4, 5, …, 54.

77 ADF TEST EXAMPLE (contd.) Under H0,
With n = 50 and CV = −13.3, H0 cannot be rejected: there is a unit root, and the series should be differenced.

78 ADF TEST If the test statistic is positive, you can automatically decide not to reject the null hypothesis of a unit root. The augmented model can be extended to allow MA terms in at. It is generally believed that MA terms are present in many macroeconomic time series after differencing. Said and Dickey (1984) developed an approach in which the orders of the AR and MA components in the error terms are unknown, but can be approximated by an AR(k) process, where k is large enough to give a good approximation to the unknown ARMA(p,q) process.

79 ADF TEST Ensuring that at is approximately WN

80 PHILLIPS-PERRON (PP) UNIT ROOT TEST
Phillips and Perron (1988) have developed a more comprehensive theory of unit root nonstationarity. The tests are similar to ADF tests. The Phillips-Perron (PP) unit root tests differ from the ADF tests mainly in how they deal with serial correlation and heteroskedasticity in the errors. In particular, where the ADF tests use a parametric autoregression to approximate the ARMA structure of the errors in the test regression, the PP tests ignore any serial correlation in the test regression. The tests usually give the same conclusions as the ADF tests, and the calculation of the test statistics is complex.

81 PP TEST Consider the model Yt = φYt−1 + at. DF assumes at ~ iid; PP allows at to be serially correlated. PP adds a correction factor to the DF test statistic (ADF instead adds lagged ΔYt terms to ‘whiten’ the serially correlated residuals).

82 PP TEST The hypothesis to be tested:

83 PP TEST The PP tests correct for any serial correlation and heteroskedasticity in the errors at of the test regression by directly modifying the test statistics tδ=0 and n(φ̂ − 1). The modified statistics, denoted Zt and Zα, involve consistent estimates of the short-run and long-run variance parameters of the errors.

84 PP TEST Under the null hypothesis that δ = 0, the PP Zt and Zα statistics have the same asymptotic distributions as the ADF t-statistic and normalized bias statistic. One advantage of the PP tests over the ADF tests is that they are robust to general forms of heteroskedasticity in the error term at. Another advantage is that the user does not have to specify a lag length for the test regression.

85 PROBLEMS OF THE PP TEST On the other hand, the PP tests tend to be more powerful, but are also subject to more severe size distortions. Size problem: the actual size is larger than the nominal one when the autocorrelations of at are negative. The tests are also more sensitive to model misspecification (the order of the autoregressive and moving average components). Plotting ACFs helps us detect the potential size problem. Economic time series sometimes have negative autocorrelations, especially at lag one; we could use a Monte Carlo analysis to simulate appropriate critical values, but that may not be attractive to do.

86 Criticism of Dickey-Fuller and Phillips-Perron Type Tests
Main criticism is that the power of the tests is low if the process is stationary but with a root close to the non-stationary boundary: e.g., the tests are poor at deciding whether φ = 1 or φ = 0.95, especially with small sample sizes. If the true data generating process (DGP) is Yt = 0.95Yt−1 + at, then the null hypothesis of a unit root should be rejected. One way to get around this is to use a stationarity test (like the KPSS test) as well as the unit root tests we have looked at (like ADF or PP).

87 Criticism of Dickey-Fuller and Phillips-Perron Type Tests
The ADF and PP unit root tests are known (from MC simulations) to suffer potentially severe finite sample power and size problems. 1. Power – The ADF and PP tests are known to have low power against the alternative hypothesis that the series is stationary (or TS) with a large autoregressive root. (See, e.g., DeJong, et al, J. of Econometrics, 1992.) 2. Size – The ADF and PP tests are known to have severe size distortion (in the direction of over-rejecting the null) when the series has a large negative moving average root. (See, e.g., Schwert. JBES, 1989: MA = -0.8, size = 100%!)

88 Criticism of Dickey-Fuller and Phillips-Perron Type Tests
A variety of alternative procedures have been proposed that try to resolve these problems, particularly, the power problem, but the ADF and PP tests continue to be the most widely used unit root tests. That may be changing!

89 STRUCTURAL BREAKS A stationary time series may look non-stationary when there are structural breaks in the intercept or trend. The unit root tests lead to false non-rejection of the null when we don’t consider the structural breaks → low power. A single breakpoint is introduced into the regression model in Perron (1989); Perron (1997) extended it to the case of an unknown breakpoint. Perron, P. (1989), “The Great Crash, the Oil Price Shock and the Unit Root Hypothesis,” Econometrica, 57, 1361–1401.

90 STRUCTURAL BREAKS Consider the null and alternative hypotheses
H0: Yt = a0 + Yt−1 + µ1DP + at
H1: Yt = a0 + a2t + µ2DL + at
Pulse break: DP = 1 if t = TB + 1 and zero otherwise. Level break: DL = 0 for t = 1, …, TB and one otherwise.
Null: Yt contains a unit root with a one-time jump in the level of the series at time t = TB + 1.
Alternative: Yt is trend stationary with a one-time jump in the intercept at time t = TB + 1.

91 Simulated unit root and trend stationary processes with structural break.
at ~ i.i.d. N(0,1), y0 = 0.
Under H0: a0 = 0.5, DP = 1 for n = 51 and zero otherwise, µ1 = 10.
Under H1: a2 = 0.5, DL = 1 for n > 50, µ2 = 10.

92 Power of ADF tests: Rejection frequencies of ADF–tests
ADF tests are biased toward non-rejection of the null. The rejection frequency is inversely related to the magnitude of the shift. Perron found that the estimated value of the autoregressive parameter in the Dickey–Fuller regression was biased toward unity, and that this bias increased as the magnitude of the break increased.

93 Testing for unit roots when there are structural changes
Perron suggests running the following OLS regression, with H0: a1 = 1 tested via the t-ratio as a DF-type unit root test. Perron shows that the asymptotic distribution of the t-statistic depends on the location of the structural break, λ = TB/n; critical values are supplied in Perron (1989) for different assumptions about λ (see his Table IV.B).

94 EXAMPLE Consider the following time series plots.

95 KPSS RESULTS > kpss.test(y,c("Level")) KPSS Test for Level Stationarity data: y KPSS Level = , Truncation lag parameter = 2, p-value = 0.01 Warning message: In kpss.test(y, c("Level")) : p-value smaller than printed p-value > kpss.test(y,c("Trend")) KPSS Test for Trend Stationarity KPSS Trend = , Truncation lag parameter = 2, p-value = 0.01 In kpss.test(y, c("Trend")) : p-value smaller than printed p-value There is a stochastic trend. We need differencing to observe stationary series.

96 PP TEST RESULTS > pp.test(y) Phillips-Perron Unit Root Test data: y Dickey-Fuller Z(alpha) = , Truncation lag parameter = 4, p-value = alternative hypothesis: stationary

97 ADF TEST RESULT To decide the lag order

98 ADF TEST RESULTS Load the ‘fUnitRoots’ package.
Unit root test with no drift:
> adfTest(y, lags = 5, type = c("nc"))
Title: Augmented Dickey-Fuller Test
Test Results: PARAMETER: Lag Order: 5 STATISTIC: Dickey-Fuller: P VALUE: 0.4964
Unit root test with drift:
> adfTest(y, lags = 5, type = c("c"))
Dickey-Fuller: 0.99

99 ADF TEST RESULTS Unit root test with drift and trend:
> adfTest(y, lags = 5, type = c("ct"))
Title: Augmented Dickey-Fuller Test
Test Results: PARAMETER: Lag Order: 5 STATISTIC: Dickey-Fuller: P VALUE:
Combining the results of the ADF and KPSS tests, we can say that there is a stochastic trend. Differencing is needed.

100 REPEAT TESTS ON DIFFERENCED SERIES
> kpss.test(ydif, c("Level"))
KPSS Test for Level Stationarity
data: ydif
KPSS Level = , Truncation lag parameter = 2, p-value = 0.1
Warning message:
In kpss.test(ydif, c("Level")) : p-value greater than printed p-value
> kpss.test(ydif, c("Trend"))
KPSS Test for Trend Stationarity
KPSS Trend = , Truncation lag parameter = 2, p-value = 0.1
In kpss.test(ydif, c("Trend")) : p-value greater than printed p-value
The differenced series is stationary; no further differencing is needed after this result.

101 ADF TEST ON DIFFERENCED SERIES
> adfTest(ydif, lags = 5, type = c("nc"))
Title: Augmented Dickey-Fuller Test
Test Results: PARAMETER: Lag Order: 5 STATISTIC: Dickey-Fuller: P VALUE:
> adfTest(ydif, lags = 5, type = c("c"))
Dickey-Fuller:
Use the no-drift result: when you apply the ADF test to a differenced series, use the unit root test with no drift term, since differencing makes the constant part zero.

102 ADF TEST ON DIFFERENCED SERIES
> adfTest(ydif, lags = 5, type = c("ct"))
Title: Augmented Dickey-Fuller Test
Test Results: PARAMETER: Lag Order: 5 STATISTIC: Dickey-Fuller: P VALUE: 0.01

103 PP TEST ON DIFFERENCED SERIES
> pp.test(ydif)
Phillips-Perron Unit Root Test
data: ydif
Dickey-Fuller Z(alpha) = , Truncation lag parameter = 3, p-value = 0.01
alternative hypothesis: stationary
Warning message:
In pp.test(ydif) : p-value smaller than printed p-value
After the first-order difference, the series became stationary, so we don’t need the second difference. Model identification and estimation can be done on the first-order differenced series. You don’t need to use the ADF and PP tests at the same time; both are unit root tests. If you don’t want to determine the lag order during testing, just use the PP test.

104 Reference For a further comparison of the Unit Root Tests:
Our thanks go to Professor CEYLAN YOZGATLIGIL as this lecture is largely based on her notes for applied time series analysis!

