
1 Regression with Time Series Data:
Chapter 9 Regression with Time Series Data: Stationary Variables Walter R. Paczkowski Rutgers University

2 Chapter Contents
Chapter Contents
9.1 Introduction
9.2 Finite Distributed Lags
9.3 Serial Correlation
9.4 Other Tests for Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors
9.6 Autoregressive Distributed Lag Models
9.7 Forecasting
9.8 Multiplier Analysis

3 9.1 Introduction

4 Two features of time-series data to consider:
9.1 Introduction
When modeling relationships between variables, the nature of the data that have been collected has an important bearing on the appropriate choice of an econometric model.
Two features of time-series data to consider:
Time-series observations on a given economic unit, observed over a number of time periods, are likely to be correlated.
Time-series data have a natural ordering according to time.

5 9.1 Introduction
There is also the possible existence of dynamic relationships between variables.
A dynamic relationship is one in which the change in a variable now has an impact on that same variable, or other variables, in one or more future time periods.
These effects do not occur instantaneously but are spread, or distributed, over future time periods.

6 FIGURE 9.1 The distributed lag effect
Introduction FIGURE 9.1 The distributed lag effect

7 Dynamic Nature of Relationships
9.1 Introduction 9.1.1 Dynamic Nature of Relationships
Ways to model the dynamic relationship:
Specify that a dependent variable y is a function of current and past values of an explanatory variable x:
yt = f(xt, xt-1, xt-2, ...)  (Eq. 9.1)
Because of the existence of these lagged effects, Eq. 9.1 is called a distributed lag model

8 Dynamic Nature of Relationships
9.1 Introduction 9.1.1 Dynamic Nature of Relationships
Ways to model the dynamic relationship (continued):
Capture the dynamic characteristics of a time series by specifying a model with a lagged dependent variable as one of the explanatory variables:
yt = f(yt-1, xt)  (Eq. 9.2)
Or have:
yt = f(yt-1, xt, xt-1, xt-2, ...)  (Eq. 9.3)
Such models are called autoregressive distributed lag (ARDL) models, with ‘‘autoregressive’’ meaning a regression of yt on its own lag or lags

9 Dynamic Nature of Relationships
9.1 Introduction 9.1.1 Dynamic Nature of Relationships
Ways to model the dynamic relationship (continued):
Model the continuing impact of change over several periods via the error term:
yt = f(xt) + et  (Eq. 9.4)
In this case et is correlated with et-1.
We say the errors are serially correlated or autocorrelated

10 Least Squares Assumptions
9.1 Introduction 9.1.2 Least Squares Assumptions
The primary assumption is Assumption MR4: cov(ei, ej) = 0 for i ≠ j
For time series, this is written as: cov(et, es) = cov(yt, ys) = 0 for t ≠ s
The dynamic models in Eqs. 9.2, 9.3 and 9.4 imply correlation between yt and yt-1, or between et and et-1, or both, so they clearly violate assumption MR4

11 9.1 Introduction 9.1.2a Stationarity A stationary variable is one that is neither explosive, nor trending, nor wandering aimlessly without returning to its mean

12 FIGURE 9.2 (a) Time series of a stationary variable
9.1 Introduction FIGURE 9.2 (a) Time series of a stationary variable 9.1.2a Stationarity

13 FIGURE 9.2 (b) time series of a nonstationary variable that is ‘‘slow-turning’’ or ‘‘wandering’’
9.1 Introduction 9.1.2a Stationarity

14 FIGURE 9.2 (c) time series of a nonstationary variable that ‘‘trends”
9.1 Introduction FIGURE 9.2 (c) time series of a nonstationary variable that ‘‘trends” 9.1.2a Stationarity

15 Alternative Paths Through the Chapter
FIGURE 9.3 (a) Alternative paths through the chapter starting with finite distributed lags 9.1 Introduction 9.1.3 Alternative Paths Through the Chapter

16 Alternative Paths Through the Chapter
FIGURE 9.3 (b) Alternative paths through the chapter starting with serial correlation 9.1 Introduction 9.1.3 Alternative Paths Through the Chapter

17 Finite Distributed Lags
9.2 Finite Distributed Lags

18 Finite Distributed Lags
9.2 Finite Distributed Lags
Consider a linear model in which, after q time periods, changes in x no longer have an impact on y:
yt = α + β0xt + β1xt-1 + β2xt-2 + ... + βqxt-q + et  (Eq. 9.5)
Note the notation change: βs is used to denote the coefficient of xt-s, and α is introduced to denote the intercept

19 Finite Distributed Lags
9.2 Finite Distributed Lags
Model 9.5 has two uses:
Forecasting:
yT+1 = α + β0xT+1 + β1xT + β2xT-1 + ... + βqxT-q+1 + eT+1  (Eq. 9.6)
Policy analysis: what is the effect of a change in x on y?
∂E(yt)/∂xt-s = βs  (Eq. 9.7)

20 Finite Distributed Lags
9.2 Finite Distributed Lags
Assume xt is increased by one unit and then maintained at its new level in subsequent periods.
The immediate impact will be β0; the total effect in period t + 1 will be β0 + β1, in period t + 2 it will be β0 + β1 + β2, and so on.
These quantities are called interim multipliers.
The total multiplier is the final effect on y of the sustained increase after q or more periods have elapsed
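The multiplier arithmetic above can be sketched in a few lines; the lag weights below are made-up values for an assumed lag order q = 2, not estimates from the text.

```python
# Impact, interim, and total multipliers from the lag weights of a
# finite distributed lag model (hypothetical beta values).
from itertools import accumulate

betas = [0.5, 0.3, 0.1]            # b0, b1, b2 for an assumed order q = 2

impact = betas[0]                  # effect in the period of the change
interim = list(accumulate(betas))  # cumulative effect after 0, 1, 2 periods
total = sum(betas)                 # final effect of a sustained increase
```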

21 Finite Distributed Lags
9.2 Finite Distributed Lags The effect of a one-unit change in xt is distributed over the current and next q periods, from which we get the term ‘‘distributed lag model’’ It is called a finite distributed lag model of order q It is assumed that after a finite number of periods q, changes in x no longer have an impact on y The coefficient βs is called a distributed-lag weight or an s-period delay multiplier The coefficient β0 (s = 0) is called the impact multiplier

22 Finite Distributed Lags
9.2 Finite Distributed Lags ASSUMPTIONS OF THE DISTRIBUTED LAG MODEL 9.2.1 Assumptions
TSMR1. yt = α + β0xt + β1xt-1 + ... + βqxt-q + et, t = q + 1, ..., T
TSMR2. y and x are stationary random variables, and et is independent of current, past and future values of x.
TSMR3. E(et) = 0
TSMR4. var(et) = σ2
TSMR5. cov(et, es) = 0 for t ≠ s
TSMR6. et ~ N(0, σ2)

23 Finite Distributed Lags
9.2 Finite Distributed Lags 9.2.2 An Example: Okun’s Law
Consider Okun’s Law. In this model the change in the unemployment rate from one period to the next depends on the rate of growth of output in the economy:
Ut - Ut-1 = -γ(Gt - GN)  (Eq. 9.8)
We can rewrite this as:
DUt = α + β0Gt + et  (Eq. 9.9)
where DU = ΔU = Ut - Ut-1, β0 = -γ, and α = γGN

24 Finite Distributed Lags
9.2 Finite Distributed Lags 9.2.2 An Example: Okun’s Law
We can expand this to include lags:
DUt = α + β0Gt + β1Gt-1 + β2Gt-2 + ... + βqGt-q + et  (Eq. 9.10)
We can calculate the growth in output, G, as:
Gt = 100 × (GDPt - GDPt-1)/GDPt-1  (Eq. 9.11)
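Eqs. 9.9-9.11 translate directly into code; a minimal sketch with invented U and GDP series:

```python
# DU (quarterly change in the unemployment rate) and G (GDP growth, %)
# built from level series as in Eqs. 9.9-9.11; the numbers are invented.
U   = [5.0, 5.2, 5.1, 5.5]             # unemployment rate (%)
GDP = [100.0, 102.0, 102.0, 104.04]    # real GDP level

DU = [U[t] - U[t - 1] for t in range(1, len(U))]
G = [100 * (GDP[t] - GDP[t - 1]) / GDP[t - 1] for t in range(1, len(GDP))]
```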

25 Finite Distributed Lags
9.2 Finite Distributed Lags FIGURE 9.4 (a) Time series for the change in the U.S. unemployment rate: 1985Q3 to 2009Q3 9.2.2 An Example: Okun’s Law

26 Finite Distributed Lags
9.2 Finite Distributed Lags FIGURE 9.4 (b) Time series for U.S. GDP growth: 1985Q2 to 2009Q3 9.2.2 An Example: Okun’s Law

27 Finite Distributed Lags
9.2 Finite Distributed Lags Table 9.1 Spreadsheet of Observations for Distributed Lag Model 9.2.2 An Example: Okun’s Law

28 Finite Distributed Lags
9.2 Finite Distributed Lags Table 9.2 Estimates for Okun’s Law Finite Distributed Lag Model 9.2.2 An Example: Okun’s Law

29 9.3 Serial Correlation

30 These terms are used interchangeably
9.3 Serial Correlation
When is assumption TSMR5, cov(et, es) = 0 for t ≠ s, likely to be violated, and how do we assess its validity?
When a variable exhibits correlation over time, we say it is autocorrelated or serially correlated; these terms are used interchangeably

31 Serial Correlation in Output Growth
9.3 Serial Correlation FIGURE 9.5 Scatter diagram for Gt and Gt-1 9.3.1 Serial Correlation in Output Growth

32 Computing Autocorrelation
9.3 Serial Correlation 9.3.1a Computing Autocorrelation
Recall that the population correlation between two variables x and y is given by:
ρxy = cov(x, y) / (√var(x) √var(y))

33 Computing Autocorrelation
9.3 Serial Correlation 9.3.1a Computing Autocorrelation
For the Okun’s Law problem, we have:
ρ1 = cov(Gt, Gt-1) / (√var(Gt) √var(Gt-1)) = cov(Gt, Gt-1) / var(Gt)  (Eq. 9.12)
The notation ρ1 is used to denote the population correlation between observations that are one period apart in time, known also as the population autocorrelation of order one.
The second equality in Eq. 9.12 holds because var(Gt) = var(Gt-1), a property of time series that are stationary

34 Computing Autocorrelation
9.3 Serial Correlation 9.3.1a Computing Autocorrelation
The first-order sample autocorrelation for G is obtained from Eq. 9.12 by replacing cov(Gt, Gt-1) and var(Gt) with their sample counterparts, giving:
r1 = Σt=2..T (Gt - Ḡ)(Gt-1 - Ḡ) / Σt=1..T (Gt - Ḡ)²

35 Computing Autocorrelation
9.3 Serial Correlation 9.3.1a Computing Autocorrelation Making the substitutions, we get: Eq. 9.13

36 Computing Autocorrelation
9.3 Serial Correlation 9.3.1a Computing Autocorrelation
More generally, the k-th order sample autocorrelation for a series y that gives the correlation between observations that are k periods apart is:
rk = Σt=k+1..T (yt - ȳ)(yt-k - ȳ) / Σt=1..T (yt - ȳ)²  (Eq. 9.14)

37 Computing Autocorrelation
9.3 Serial Correlation 9.3.1a Computing Autocorrelation
Because (T - k) observations are used to compute the numerator and T observations are used to compute the denominator, an alternative that leads to larger estimates in finite samples is:
rk = [Σt=k+1..T (yt - ȳ)(yt-k - ȳ) / (T - k)] / [Σt=1..T (yt - ȳ)² / T]  (Eq. 9.15)
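Eqs. 9.14 and 9.15 can be sketched directly in code; the alternating toy series below makes the first-order autocorrelation easy to verify by hand.

```python
# Sample autocorrelations: Eq. 9.14 (common T-observation denominator)
# and the Eq. 9.15 alternative that scales numerator and denominator by
# their own observation counts.
def autocorr(y, k):
    T = len(y)
    ybar = sum(y) / T
    num = sum((y[t] - ybar) * (y[t - k] - ybar) for t in range(k, T))
    den = sum((yt - ybar) ** 2 for yt in y)
    return num / den

def autocorr_alt(y, k):
    T = len(y)
    ybar = sum(y) / T
    num = sum((y[t] - ybar) * (y[t - k] - ybar) for t in range(k, T)) / (T - k)
    den = sum((yt - ybar) ** 2 for yt in y) / T
    return num / den

y = [1.0, -1.0] * 10            # perfectly alternating toy series
r1 = autocorr(y, 1)             # -19/20 = -0.95
bound = 1.96 / len(y) ** 0.5    # significance bound implied by Eq. 9.17
```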

38 Computing Autocorrelation
9.3 Serial Correlation 9.3.1a Computing Autocorrelation Applying this to our problem, we get for the first four autocorrelations: Eq. 9.16

39 Computing Autocorrelation
9.3 Serial Correlation 9.3.1a Computing Autocorrelation
How do we test whether an autocorrelation is significantly different from zero?
The null hypothesis is H0: ρk = 0
A suitable test statistic is:
Z = √T rk  (Eq. 9.17)
which has an approximate standard normal distribution in large samples when H0 is true

40 Computing Autocorrelation
9.3 Serial Correlation 9.3.1a Computing Autocorrelation
For our problem, we have:
We reject the hypotheses H0: ρ1 = 0 and H0: ρ2 = 0.
We have insufficient evidence to reject H0: ρ3 = 0.
ρ4 is on the borderline of being significant.
We conclude that G, the quarterly growth rate in U.S. GDP, exhibits significant serial correlation at lags one and two

41 9.3 Serial Correlation 9.3.1b The Correlogram The correlogram, also called the sample autocorrelation function, is the sequence of autocorrelations r1, r2, r3, … It shows the correlation between observations that are one period apart, two periods apart, three periods apart, and so on

42 FIGURE 9.6 Correlogram for G
9.3 Serial Correlation FIGURE 9.6 Correlogram for G 9.3.1b The Correlogram

43 Serially Correlated Errors
9.3 Serial Correlation 9.3.2 Serially Correlated Errors The correlogram can also be used to check whether the multiple regression assumption cov(et, es) = 0 for t ≠ s is violated

44 Consider a model for a Phillips Curve:
9.3 Serial Correlation 9.3.2a A Phillips Curve
Consider a model for a Phillips curve:
INFt = INFEt - γ(Ut - Ut-1)  (Eq. 9.18)
If we initially assume that inflationary expectations are constant over time (β1 = INFE), set β2 = -γ, and add an error term:
INFt = β1 + β2DUt + et  (Eq. 9.19)

45 FIGURE 9.7 (a) Time series for Australian price inflation
9.3 Serial Correlation FIGURE 9.7 (a) Time series for Australian price inflation 9.3.2a A Phillips Curve

46 9.3 Serial Correlation FIGURE 9.7 (b) Time series for the quarterly change in the Australian unemployment rate 9.3.2a A Phillips Curve

47 9.3 Serial Correlation 9.3.2a A Phillips Curve To determine if the errors are serially correlated, we compute the least squares residuals:
êt = INFt - b1 - b2DUt  (Eq. 9.20)

48 9.3 Serial Correlation FIGURE 9.8 Correlogram for residuals from least-squares estimated Phillips curve 9.3.2a A Phillips Curve

49 The k-th order autocorrelation for the residuals can be written as:
9.3 Serial Correlation 9.3.2a A Phillips Curve
The k-th order autocorrelation for the residuals can be written as:
rk = Σt=k+1..T êt êt-k / Σt=1..T êt²  (Eq. 9.21)
The least squares equation is: Eq. 9.22

50 The values at the first five lags are:
9.3 Serial Correlation 9.3.2a A Phillips Curve The values at the first five lags are:

51 Other Tests for Serially Correlated Errors
9.4 Other Tests for Serially Correlated Errors

52 9.4 Other Tests for Serially Correlated Errors 9.4.1 A Lagrange Multiplier Test An advantage of this test is that it readily generalizes to a joint test of correlations at more than one lag

53 We can substitute this into a simple regression equation:
9.4 Other Tests for Serially Correlated Errors 9.4.1 A Lagrange Multiplier Test
If et and et-1 are correlated, then one way to model the relationship between them is to write:
et = ρet-1 + vt  (Eq. 9.23)
We can substitute this into a simple regression equation:
yt = β1 + β2xt + ρet-1 + vt  (Eq. 9.24)

54 We have one complication: ê0 is unknown. Two ways to handle this are:
9.4 Other Tests for Serially Correlated Errors 9.4.1 A Lagrange Multiplier Test We have one complication: the presample residual ê0 is unknown. Two ways to handle this are:
Delete the first observation and use the remaining T - 1 observations
Set ê0 = 0 and use all T observations

55 For the Phillips Curve:
9.4 Other Tests for Serially Correlated Errors 9.4.1 A Lagrange Multiplier Test For the Phillips Curve: The results are almost identical The null hypothesis H0: ρ = 0 is rejected at all conventional significance levels We conclude that the errors are serially correlated

56 But since we know that yt = b1 + b2xt + êt, we get:
9.4 Other Tests for Serially Correlated Errors 9.4.1 A Lagrange Multiplier Test To derive the relevant auxiliary regression for the autocorrelation LM test, we write the test equation as:
yt = γ1 + γ2xt + ρêt-1 + vt
But since we know that yt = b1 + b2xt + êt, we get:
b1 + b2xt + êt = γ1 + γ2xt + ρêt-1 + vt  (Eq. 9.25)

57 9.4 Other Tests for Serially Correlated Errors 9.4.1 A Lagrange Multiplier Test Rearranging, we get:
êt = (γ1 - b1) + (γ2 - b2)xt + ρêt-1 + vt  (Eq. 9.26)
If H0: ρ = 0 is true, then LM = T × R2 has an approximate χ2(1) distribution, where T and R2 are the sample size and goodness-of-fit statistic, respectively, from least squares estimation of Eq. 9.26

58 Considering the two alternative ways to handle ê0:
9.4 Other Tests for Serially Correlated Errors 9.4.1 A Lagrange Multiplier Test Considering the two alternative ways to handle ê0:
These values are much larger than 3.84, which is the 5% critical value from a χ2(1)-distribution, so we reject the null hypothesis of no autocorrelation.
Alternatively, we can reject H0 by examining the p-value for LM = 27.61, which is 0.000
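The mechanics of the LM test (regress the residuals on xt and êt-1, then compare T × R² with 3.84) can be sketched on simulated data; the data-generating values below (ρ = 0.7 and so on) are assumptions of the illustration, not the text's Phillips curve estimates.

```python
# LM (Breusch-Godfrey) test of Eq. 9.26 in pure Python, with e0 set to 0.
import random

def ols(X, y):
    """Least squares coefficients via the normal equations."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)]
         + [sum(X[t][i] * y[t] for t in range(n))] for i in range(k)]
    for i in range(k):                        # Gaussian elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    b = [0.0] * k
    for i in reversed(range(k)):              # back substitution
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

random.seed(1)
T = 200
x = [random.gauss(0, 1) for _ in range(T)]
e = [0.0] * T
for t in range(1, T):                         # AR(1) errors, rho = 0.7
    e[t] = 0.7 * e[t - 1] + random.gauss(0, 1)
y = [1.0 + 0.5 * x[t] + e[t] for t in range(T)]

b1, b2 = ols([[1.0, xt] for xt in x], y)
ehat = [y[t] - b1 - b2 * x[t] for t in range(T)]

elag = [0.0] + ehat[:-1]                      # e0 set to zero
g = ols([[1.0, x[t], elag[t]] for t in range(T)], ehat)
fit = [g[0] + g[1] * x[t] + g[2] * elag[t] for t in range(T)]
ebar = sum(ehat) / T
ss_res = sum((ehat[t] - fit[t]) ** 2 for t in range(T))
ss_tot = sum((et - ebar) ** 2 for et in ehat)
R2 = 1 - ss_res / ss_tot
LM = T * R2      # reject H0: rho = 0 when LM > 3.84 (chi2(1), 5% level)
```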

59 For a four-period lag, we obtain:
9.4 Other Tests for Serially Correlated Errors 9.4.1a Testing Correlation at Longer Lags For a four-period lag, we obtain: Because the 5% critical value from a χ2(4)-distribution is 9.49, these LM values lead us to conclude that the errors are serially correlated

60 9.4 Other Tests for Serially Correlated Errors 9.4.2 The Durbin-Watson Test This is used less frequently today because its critical values are not available in all software packages, and one has to examine upper and lower critical bounds instead Also, unlike the LM and correlogram tests, its distribution no longer holds when the equation contains a lagged dependent variable

61 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors

62 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors
Three estimation procedures are considered:
Least squares estimation
An estimation procedure that is relevant when the errors are assumed to follow what is known as a first-order autoregressive model
A general estimation strategy for estimating models with serially correlated errors

63 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors
We will encounter models with a lagged dependent variable, such as:
yt = δ + θ1yt-1 + δ0xt + δ1xt-1 + vt

64 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors ASSUMPTION FOR MODELS WITH A LAGGED DEPENDENT VARIABLE
TSMR2A. In the multiple regression model yt = β1 + β2xt2 + ... + βKxtK + vt, where some of the xtk may be lagged values of y, vt is uncorrelated with all xtk and their past values.

65 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors 9.5.1 Least Squares Estimation
Suppose we proceed with least squares estimation without recognizing the existence of serially correlated errors. What are the consequences?
The least squares estimator is still a linear unbiased estimator, but it is no longer best.
The formulas for the standard errors usually computed for the least squares estimator are no longer correct.
Confidence intervals and hypothesis tests that use these standard errors may be misleading

66 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors 9.5.1 Least Squares Estimation It is possible to compute correct standard errors for the least squares estimator: HAC (heteroskedasticity and autocorrelation consistent) standard errors, or Newey-West standard errors These are analogous to the heteroskedasticity consistent standard errors

67 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors 9.5.1 Least Squares Estimation Consider the model yt = β1 + β2xt + et The variance of b2 is: where Eq. 9.27

68 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors 9.5.1 Least Squares Estimation When the errors are not correlated, cov(et, es) = 0, and the term in square brackets is equal to one. The resulting expression is the one used to find heteroskedasticity-consistent (HC) standard errors When the errors are correlated, the term in square brackets is estimated to obtain HAC standard errors

69 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors 9.5.1 Least Squares Estimation
If we call the quantity in square brackets g and its estimate ĝ, then the relationship between the two estimated variances is:
varHAC(b2) = varHC(b2) × ĝ  (Eq. 9.28)
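Eq. 9.28's idea can be illustrated with a minimal Newey-West sketch for the slope of a simple regression (Bartlett weights, pure Python); this is a textbook-style approximation, not the exact estimator any particular software package implements.

```python
# HAC (Newey-West) variance of b2 in yt = b1 + b2*xt + et.
# With maxlags = 0 it collapses to the HC variance, so the ratio
# hac_slope_var(x, y, m) / hac_slope_var(x, y, 0) plays the role of
# g-hat in Eq. 9.28.
def hac_slope_var(x, y, maxlags):
    T = len(y)
    xbar, ybar = sum(x) / T, sum(y) / T
    sxx = sum((xt - xbar) ** 2 for xt in x)
    b2 = sum((x[t] - xbar) * (y[t] - ybar) for t in range(T)) / sxx
    b1 = ybar - b2 * xbar
    e = [y[t] - b1 - b2 * x[t] for t in range(T)]
    s = [(x[t] - xbar) * e[t] for t in range(T)]   # score terms
    S = sum(st ** 2 for st in s)                   # lag-0 (HC) part
    for j in range(1, maxlags + 1):
        w = 1 - j / (maxlags + 1)                  # Bartlett weight
        S += 2 * w * sum(s[t] * s[t - j] for t in range(j, T))
    return S / sxx ** 2
```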

70 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors 9.5.1 Least Squares Estimation Let’s reconsider the Phillips Curve model: Eq. 9.29

71 Estimation with Serially Correlated Errors
9.5 Estimation with Serially Correlated Errors 9.5.1 Least Squares Estimation The t and p-values for testing H0: β2 = 0 are:

72 9.5 Estimation with Serially Correlated Errors 9.5.2 Estimating an AR(1) Error Model Return to the Lagrange multiplier test for serially correlated errors where we used the equation:
et = ρet-1 + vt  (Eq. 9.30)
Assume the vt are uncorrelated random errors with zero mean and constant variance:
E(vt) = 0, var(vt) = σv², cov(vt, vs) = 0 for t ≠ s  (Eq. 9.31)

73 9.5 Estimation with Serially Correlated Errors 9.5.2 Estimating an AR(1) Error Model Eq. 9.30 describes a first-order autoregressive model or a first-order autoregressive process for et.
The term AR(1) model is used as an abbreviation for first-order autoregressive model.
It is called an autoregressive model because it can be viewed as a regression model.
It is called first-order because the right-hand-side variable is et lagged one period

74 The mean and variance of et are:
9.5 Estimation with Serially Correlated Errors 9.5.2a Properties of an AR(1) Error
We assume that:
-1 < ρ < 1  (Eq. 9.32)
The mean and variance of et are:
E(et) = 0, var(et) = σe² = σv²/(1 - ρ²)  (Eq. 9.33)
The covariance term is:
cov(et, et-k) = σv² ρ^k/(1 - ρ²), k ≥ 0  (Eq. 9.34)

75 The correlation implied by the covariance is:
9.5 Estimation with Serially Correlated Errors 9.5.2a Properties of an AR(1) Error
The correlation implied by the covariance is:
ρk = corr(et, et-k) = cov(et, et-k)/var(et) = ρ^k  (Eq. 9.35)

76 r1 is an estimate for ρ when we assume a series is AR(1)
9.5 Estimation with Serially Correlated Errors 9.5.2a Properties of an AR(1) Error
Setting k = 1:
ρ1 = corr(et, et-1) = ρ  (Eq. 9.36)
ρ represents the correlation between two errors that are one period apart.
It is the first-order autocorrelation for e, sometimes simply called the autocorrelation coefficient.
It is the population autocorrelation at lag one for a time series that can be described by an AR(1) model.
r1 is an estimate for ρ when we assume a series is AR(1)

77 Each et depends on all past values of the errors vt:
9.5 Estimation with Serially Correlated Errors 9.5.2a Properties of an AR(1) Error
Each et depends on all past values of the errors vt:
et = vt + ρvt-1 + ρ²vt-2 + ρ³vt-3 + ...  (Eq. 9.37)
For the Phillips Curve, we find for the first five lags:
For an AR(1) model, we have:

78 For longer lags, we have:
9.5 Estimation with Serially Correlated Errors 9.5.2a Properties of an AR(1) Error For longer lags, we have:

79 Our model with an AR(1) error is:
9.5 Estimation with Serially Correlated Errors 9.5.2b Nonlinear Least Squares Estimation
Our model with an AR(1) error is:
yt = β1 + β2xt + et, et = ρet-1 + vt  (Eq. 9.38)
with -1 < ρ < 1
For the vt, we have:
vt = et - ρet-1  (Eq. 9.39)

80 With the appropriate substitutions, we get:
9.5 Estimation with Serially Correlated Errors 9.5.2b Nonlinear Least Squares Estimation
With the appropriate substitutions, we get:
vt = yt - β1 - β2xt - ρet-1  (Eq. 9.40)
For the previous period, the error is:
et-1 = yt-1 - β1 - β2xt-1  (Eq. 9.41)
Multiplying by ρ:
ρet-1 = ρyt-1 - ρβ1 - ρβ2xt-1  (Eq. 9.42)

81 Substituting, we get: Eq. 9.43
9.5 Estimation with Serially Correlated Errors 9.5.2b Nonlinear Least Squares Estimation Substituting, we get:
yt = β1(1 - ρ) + β2xt + ρyt-1 - ρβ2xt-1 + vt  (Eq. 9.43)

82 The coefficient of xt-1 equals -ρβ2
9.5 Estimation with Serially Correlated Errors 9.5.2b Nonlinear Least Squares Estimation
The coefficient of xt-1 equals -ρβ2.
Although Eq. 9.43 is a linear function of the variables xt, yt-1 and xt-1, it is not a linear function of the parameters (β1, β2, ρ).
The usual linear least squares formulas cannot be obtained by using calculus to find the values of (β1, β2, ρ) that minimize Sv = Σvt².
These are nonlinear least squares estimates
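One way to sketch the nonlinear least squares problem for Eq. 9.43 is to profile it: for a fixed ρ the transformed model is linear, so a grid search over ρ with ordinary least squares at each grid point minimizes Sv. The simulation below is an illustration; the true values β1 = 1, β2 = 0.5, ρ = 0.6 are assumptions of the example, not the text's estimates.

```python
# Nonlinear least squares for Eq. 9.43 by profiling over rho:
# (yt - rho*yt-1) = beta1*(1 - rho) + beta2*(xt - rho*xt-1) + vt
# is linear for fixed rho, so OLS gives beta1 and beta2 at each grid point.
import random

random.seed(3)
T = 300
x = [random.gauss(0, 1) for _ in range(T)]
e = [0.0] * T
for t in range(1, T):                 # true rho = 0.6 (simulation only)
    e[t] = 0.6 * e[t - 1] + random.gauss(0, 0.5)
y = [1.0 + 0.5 * x[t] + e[t] for t in range(T)]   # beta1 = 1, beta2 = 0.5

def sse_at(r):
    ys = [y[t] - r * y[t - 1] for t in range(1, T)]
    xs = [x[t] - r * x[t - 1] for t in range(1, T)]
    n = len(ys)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b2 = (sum((xs[i] - xbar) * (ys[i] - ybar) for i in range(n))
          / sum((xi - xbar) ** 2 for xi in xs))
    a = ybar - b2 * xbar              # a = beta1 * (1 - r)
    sv = sum((ys[i] - a - b2 * xs[i]) ** 2 for i in range(n))
    return sv, a / (1 - r), b2

grid = [r / 100 for r in range(-95, 96)]
rho_hat = min(grid, key=lambda r: sse_at(r)[0])
_, b1_hat, b2_hat = sse_at(rho_hat)
```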

83 Our Phillips Curve model assuming AR(1) errors is:
9.5 Estimation with Serially Correlated Errors 9.5.2b Nonlinear Least Squares Estimation
Our Phillips Curve model assuming AR(1) errors is:
INFt = β1(1 - ρ) + β2DUt + ρINFt-1 - ρβ2DUt-1 + vt  (Eq. 9.44)
Applying nonlinear least squares and presenting the estimates in terms of the original untransformed model, we have: Eq. 9.45

84 9.5 Estimation with Serially Correlated Errors 9.5.2c Generalized Least Squares Estimation Nonlinear least squares estimation of Eq. 9.43 is equivalent to using an iterative generalized least squares estimator called the Cochrane-Orcutt procedure

85 Suppose now that we consider the model:
9.5 Estimation with Serially Correlated Errors 9.5.3 Estimating a More General Model
We have the model:
yt = β1(1 - ρ) + β2xt + ρyt-1 - ρβ2xt-1 + vt  (Eq. 9.46)
Suppose now that we consider the model:
yt = δ + δ0xt + δ1xt-1 + θ1yt-1 + vt  (Eq. 9.47)
This new notation will be convenient when we discuss a general class of autoregressive distributed lag (ARDL) models; Eq. 9.47 is a member of this class

86 Note that Eq. 9.46 is the same as Eq. 9.47 since:
9.5 Estimation with Serially Correlated Errors 9.5.3 Estimating a More General Model Note that Eq. 9.46 is the same as Eq. 9.47 since:
δ = β1(1 - ρ), δ0 = β2, δ1 = -ρβ2, θ1 = ρ  (Eq. 9.48)
Eq. 9.46 is a restricted version of Eq. 9.47 with the restriction δ1 = -θ1δ0 imposed

87 9.5 Estimation with Serially Correlated Errors 9.5.3 Estimating a More General Model Applying the least squares estimator to Eq. 9.47 using the data for the Phillips curve example yields: Eq. 9.49

88 The equivalent AR(1) estimates are:
9.5 Estimation with Serially Correlated Errors 9.5.3 Estimating a More General Model The equivalent AR(1) estimates are: These are similar to our other estimates

89 The original economic model for the Phillips Curve was:
9.5 Estimation with Serially Correlated Errors 9.5.3 Estimating a More General Model The original economic model for the Phillips Curve was: Re-estimation of the model after omitting DUt-1 yields: Eq. 9.50 Eq. 9.51

90 In this model inflationary expectations are given by:
9.5 Estimation with Serially Correlated Errors 9.5.3 Estimating a More General Model In this model inflationary expectations are given by: A 1% rise in the unemployment rate leads to an approximate 0.5% fall in the inflation rate

91 Estimate the model using least squares with HAC standard errors
9.5 Estimation with Serially Correlated Errors 9.5.4 Summary of Section 9.5 and Looking Ahead We have described three ways of overcoming the effect of serially correlated errors: Estimate the model using least squares with HAC standard errors Use nonlinear least squares to estimate the model with a lagged x, a lagged y, and the restriction implied by an AR(1) error specification Use least squares to estimate the model with a lagged x and a lagged y, but without the restriction implied by an AR(1) error specification

92 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models

93 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models
An autoregressive distributed lag (ARDL) model is one that contains both lagged xt’s and lagged yt’s; the general ARDL(p, q) form is:
yt = δ + δ0xt + δ1xt-1 + ... + δqxt-q + θ1yt-1 + ... + θpyt-p + vt  (Eq. 9.52)
Two examples are the Phillips curve model of Eq. 9.47 and the Okun’s Law models considered below

94 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models
An ARDL model can be transformed into one with only lagged x’s which go back into the infinite past:
yt = α + β0xt + β1xt-1 + β2xt-2 + β3xt-3 + ... + et  (Eq. 9.53)
This model is called an infinite distributed lag model

95 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models Four possible criteria for choosing p and q: Has serial correlation in the errors been eliminated? Are the signs and magnitudes of the estimates consistent with our expectations from economic theory? Are the estimates significantly different from zero, particularly those at the longest lags? What values for p and q minimize information criteria such as the AIC and SC?

96 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models
The Akaike information criterion (AIC) is:
AIC = ln(SSE/T) + 2K/T  (Eq. 9.54)
where K = p + q + 2
The Schwarz criterion (SC), also known as the Bayes information criterion (BIC), is:
SC = ln(SSE/T) + K ln(T)/T  (Eq. 9.55)
Because K ln(T)/T > 2K/T for T ≥ 8, the SC penalizes additional lags more heavily than does the AIC
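The two criteria are a two-line computation; the SSE value fed in below is arbitrary, since only comparisons across lag choices matter.

```python
# AIC (Eq. 9.54) and SC (Eq. 9.55) for an ARDL(p, q) model with
# K = p + q + 2 coefficients.
from math import log

def aic(sse, T, p, q):
    K = p + q + 2
    return log(sse / T) + 2 * K / T

def sc(sse, T, p, q):
    K = p + q + 2
    return log(sse / T) + K * log(T) / T
```

For T = 100, adding one lag raises the AIC penalty by 2/100 = 0.02 but the SC penalty by ln(100)/100 ≈ 0.046, which is the heavier-penalty point made above.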

97 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models 9.6.1 The Phillips Curve
Consider the previously estimated ARDL(1,0) model:
INFt = δ + θ1INFt-1 + δ0DUt + vt  (Eq. 9.56)

98 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models FIGURE 9.9 Correlogram for residuals from Phillips curve ARDL(1,0) model 9.6.1 The Phillips Curve

99 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models Table 9.3 p-values for LM Test for Autocorrelation 9.6.1 The Phillips Curve

100 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models 9.6.1 The Phillips Curve
For an ARDL(4,0) version of the model:
INFt = δ + θ1INFt-1 + θ2INFt-2 + θ3INFt-3 + θ4INFt-4 + δ0DUt + vt  (Eq. 9.57)

101 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models 9.6.1 The Phillips Curve Inflationary expectations are given by:

102 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models Table 9.4 AIC and SC Values for Phillips Curve ARDL Models 9.6.1 The Phillips Curve

103 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models 9.6.2 Okun’s Law
Recall the model for Okun’s Law, here an ARDL(0,2) specification:
DUt = α + β0Gt + β1Gt-1 + β2Gt-2 + et  (Eq. 9.58)

104 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models FIGURE 9.10 Correlogram for residuals from Okun’s law ARDL(0,2) model 9.6.2 Okun’s Law

105 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models Table 9.5 AIC and SC Values for Okun’s Law ARDL Models 9.6.2 Okun’s Law

106 Autoregressive Distributed Lag Models
9.6 Autoregressive Distributed Lag Models 9.6.2 Okun’s Law
Now consider a version with a lagged dependent variable, an ARDL(1,1):
DUt = δ + θ1DUt-1 + δ0Gt + δ1Gt-1 + vt  (Eq. 9.59)

107 An autoregressive model of order p, denoted AR(p), is given by:
9.6 Autoregressive Distributed Lag Models 9.6.3 Autoregressive Models
An autoregressive model of order p, denoted AR(p), is given by:
yt = δ + θ1yt-1 + θ2yt-2 + ... + θpyt-p + vt  (Eq. 9.60)

108 Consider a model for growth in real GDP:
9.6 Autoregressive Distributed Lag Models 9.6.3 Autoregressive Models
Consider an AR(2) model for growth in real GDP:
Gt = δ + θ1Gt-1 + θ2Gt-2 + vt  (Eq. 9.61)

109 FIGURE 9.11 Correlogram for residuals from AR(2) model for GDP growth
9.6 Autoregressive Distributed Lag Models FIGURE 9.11 Correlogram for residuals from AR(2) model for GDP growth 9.6.3 Autoregressive Models

110 Table 9.6 AIC and SC Values for AR Model of Growth in U.S. GDP
Autoregressive Distributed Lag Models Table 9.6 AIC and SC Values for AR Model of Growth in U.S. GDP 9.6.3 Autoregressive Models

111 9.7 Forecasting

112 We consider forecasting using three different models: AR model
9.7 Forecasting We consider forecasting using three different models: AR model ARDL model Exponential smoothing model

113 Forecasting with an AR Model
9.7 Forecasting 9.7.1 Forecasting with an AR Model
Consider an AR(2) model for real GDP growth:
Gt = δ + θ1Gt-1 + θ2Gt-2 + vt
The model to forecast GT+1 is:
GT+1 = δ + θ1GT + θ2GT-1 + vT+1  (Eq. 9.62)

114 Forecasting with an AR Model
9.7 Forecasting 9.7.1 Forecasting with an AR Model
The growth values for the two most recent quarters are: GT = G2009Q3 = 0.8 and GT-1 = G2009Q2 = -0.2
The forecast for G2009Q4 is:
ĜT+1 = δ + θ1(0.8) + θ2(-0.2), with the parameters replaced by their least squares estimates  (Eq. 9.63)

115 Forecasting with an AR Model
9.7 Forecasting 9.7.1 Forecasting with an AR Model
For two quarters ahead, the forecast for G2010Q1 is:
ĜT+2 = δ + θ1ĜT+1 + θ2GT  (Eq. 9.64)
For three periods out, it is:
ĜT+3 = δ + θ1ĜT+2 + θ2ĜT+1  (Eq. 9.65)
Note that the forecasts ĜT+1 and ĜT+2 replace the unobserved actual values

116 Forecasting with an AR Model
9.7 Forecasting 9.7.1 Forecasting with an AR Model Summarizing our forecasts: Real GDP growth rates for 2009Q4, 2010Q1, and 2010Q2 are approximately 0.72%, 0.93%, and 0.99%, respectively
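The recursive forecasts of Eqs. 9.62-9.65 can be reproduced in a few lines; the coefficient values below are assumed for illustration, chosen to be consistent with the forecasts of roughly 0.72, 0.93, and 0.99 reported above.

```python
# Recursive AR(2) forecasts: each step feeds earlier forecasts back in
# place of unobserved actual values. Coefficients are assumed values.
delta, theta1, theta2 = 0.4657, 0.3770, 0.2462
history = [-0.2, 0.8]                    # G(2009Q2), G(2009Q3)

forecasts = []
for _ in range(3):                       # 2009Q4, 2010Q1, 2010Q2
    g_next = delta + theta1 * history[-1] + theta2 * history[-2]
    forecasts.append(g_next)
    history.append(g_next)
```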

117 Forecasting with an AR Model
9.7 Forecasting 9.7.1 Forecasting with an AR Model
A 95% interval forecast for j periods into the future is given by:
ĜT+j ± t(0.975, df) σ̂j
where σ̂j is the standard error of the forecast error and df is the number of degrees of freedom in the estimation of the AR model

118 Forecasting with an AR Model
9.7 Forecasting 9.7.1 Forecasting with an AR Model
The first forecast error, occurring at time T+1, is:
u1 = GT+1 - ĜT+1, which reflects both the random error vT+1 and the error from estimating the coefficients.
Ignoring the error from estimating the coefficients, we get:
u1 = vT+1  (Eq. 9.66)

119 Forecasting with an AR Model
9.7 Forecasting 9.7.1 Forecasting with an AR Model
The forecast error for two periods ahead is:
u2 = θ1u1 + vT+2 = θ1vT+1 + vT+2  (Eq. 9.67)
The forecast error for three periods ahead is:
u3 = θ1u2 + θ2u1 + vT+3 = (θ1² + θ2)vT+1 + θ1vT+2 + vT+3  (Eq. 9.68)

120 Forecasting with an AR Model
9.7 Forecasting 9.7.1 Forecasting with an AR Model
Because the vt’s are uncorrelated with constant variance σv², we can show that:
σ1² = σv², σ2² = σv²(1 + θ1²), σ3² = σv²[(θ1² + θ2)² + θ1² + 1]
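Because uj is a moving average of the future v's with weights ψ0 = 1, ψ1 = θ1, ψ2 = θ1ψ1 + θ2ψ0, the variances σj² = σv² Σψi² can be checked numerically; the θ and σv² values below are illustrative.

```python
# Forecast-error variances sigma_j^2 = sigma_v^2 * (psi_0^2 + ... +
# psi_{j-1}^2) for an AR(2); theta and sigma_v^2 values are made up.
theta1, theta2, sigma_v2 = 0.377, 0.246, 0.25

psi = [1.0, theta1, theta1 * theta1 + theta2]   # psi2 = theta1^2 + theta2
var = [sigma_v2 * sum(p * p for p in psi[:j]) for j in (1, 2, 3)]
```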

121 Forecasting with an AR Model
9.7 Forecasting Table 9.7 Forecasts and Forecast Intervals for GDP Growth 9.7.1 Forecasting with an AR Model

122 Forecasting with an ARDL Model
9.7 Forecasting 9.7.2 Forecasting with an ARDL Model
Consider forecasting future unemployment using the Okun’s Law ARDL(1,1):
DUt = δ + θ1DUt-1 + δ0Gt + δ1Gt-1 + vt  (Eq. 9.69)
The value of DU in the first post-sample quarter is:
DUT+1 = δ + θ1DUT + δ0GT+1 + δ1GT + vT+1  (Eq. 9.70)
But we need a value for GT+1

123 Forecasting with an ARDL Model
9.7 Forecasting 9.7.2 Forecasting with an ARDL Model
Now consider the change in unemployment. Since DUt = Ut - Ut-1, rewrite Eq. 9.69 as:
Ut - Ut-1 = δ + θ1(Ut-1 - Ut-2) + δ0Gt + δ1Gt-1 + vt
Rearranging:
Ut = δ + (1 + θ1)Ut-1 - θ1Ut-2 + δ0Gt + δ1Gt-1 + vt  (Eq. 9.71)

124 Forecasting with an ARDL Model
9.7 Forecasting 9.7.2 Forecasting with an ARDL Model For the purpose of computing point and interval forecasts, the ARDL(1,1) model for a change in unemployment can be written as an ARDL(2,1) model for the level of unemployment This result holds not only for ARDL models where a dependent variable is measured in terms of a change or difference, but also for pure AR models involving such variables

125 Exponential Smoothing
9.7 Forecasting 9.7.3 Exponential Smoothing Another popular model used for predicting the future value of a variable on the basis of its history is the exponential smoothing method Like forecasting with an AR model, forecasting using exponential smoothing does not utilize information from any other variable

126 Exponential Smoothing
9.7 Forecasting 9.7.3 Exponential Smoothing
One possible forecasting method is to use the average of past information, such as:
ŷT+1 = (yT + yT-1 + yT-2)/3
This forecasting rule is an example of a simple (equally-weighted) moving average model with k = 3

127 Exponential Smoothing
9.7 Forecasting 9.7.3 Exponential Smoothing
Now consider a form in which the weights decline exponentially as the observations get older:
ŷT+1 = αyT + α(1 - α)yT-1 + α(1 - α)²yT-2 + ...  (Eq. 9.72)
We assume that 0 < α < 1
Also, it can be shown that the weights sum to one: α + α(1 - α) + α(1 - α)² + ... = 1

128 Exponential Smoothing
9.7 Forecasting 9.7.3 Exponential Smoothing
For forecasting, recognize that:
ŷT+1 = αyT + (1 - α)[αyT-1 + α(1 - α)yT-2 + ...]  (Eq. 9.73)
We can simplify to:
ŷT+1 = αyT + (1 - α)ŷT  (Eq. 9.74)

129 Exponential Smoothing
9.7 Forecasting 9.7.3 Exponential Smoothing
The value of α can reflect one’s judgment about the relative weight of current information.
It can be estimated from historical information by obtaining within-sample forecasts:
ŷt = αyt-1 + (1 - α)ŷt-1, t = 2, 3, ..., T  (Eq. 9.75)
Choosing α that minimizes the sum of squares of the one-step forecast errors:
SSE = Σt=2..T (yt - ŷt)²  (Eq. 9.76)
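The smoothing recursion of Eq. 9.75 and a grid choice of α minimizing Eq. 9.76 can be sketched as follows; the series is invented for illustration.

```python
# Exponential smoothing: within-sample one-step forecasts (Eq. 9.75),
# alpha chosen to minimize the sum of squared forecast errors (Eq. 9.76),
# then the out-of-sample forecast of Eq. 9.74. Toy data.
y = [2.0, 1.5, 2.5, 3.0, 2.0, 2.5, 3.5, 3.0, 2.0, 2.5]

def smooth_forecasts(series, alpha):
    yhat = [series[0]]                  # start the recursion at y1
    for t in range(1, len(series)):
        yhat.append(alpha * series[t - 1] + (1 - alpha) * yhat[t - 1])
    return yhat

def sse(alpha):
    yhat = smooth_forecasts(y, alpha)
    return sum((y[t] - yhat[t]) ** 2 for t in range(1, len(y)))

alpha_hat = min((a / 100 for a in range(1, 100)), key=sse)
yhat_T = smooth_forecasts(y, alpha_hat)[-1]
forecast = alpha_hat * y[-1] + (1 - alpha_hat) * yhat_T   # Eq. 9.74
```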

130 Exponential Smoothing
FIGURE 9.12 (a) Exponentially smoothed forecasts for GDP growth with α = 0.38 9.7 Forecasting 9.7.3 Exponential Smoothing

131 Exponential Smoothing
FIGURE 9.12 (b) Exponentially smoothed forecasts for GDP growth with α = 0.8 9.7 Forecasting 9.7.3 Exponential Smoothing

132 Exponential Smoothing
9.7 Forecasting 9.7.3 Exponential Smoothing The forecasts for 2009Q4 from each value of α are:

133 9.8 Multiplier Analysis

134 9.8 Multiplier Analysis Multiplier analysis refers to the effect, and the timing of the effect, of a change in one variable on the outcome of another variable

135 Let’s find multipliers for an ARDL model of the form:
9.8 Multiplier Analysis
Let’s find multipliers for an ARDL model of the form:
yt = δ + δ0xt + δ1xt-1 + ... + δqxt-q + θ1yt-1 + ... + θpyt-p + vt  (Eq. 9.77)
We can transform this into an infinite distributed lag model:
yt = α + β0xt + β1xt-1 + β2xt-2 + β3xt-3 + ... + et  (Eq. 9.78)

136 The multipliers are defined as:
9.8 Multiplier Analysis The multipliers are defined as: βs = ∂yt+s/∂xt, the s-period delay multiplier; β0 is the impact multiplier; Σj=0..s βj is the s-period interim multiplier; and Σj=0..∞ βj is the total multiplier

137 The lag operator is defined as: Lagging twice, we have: Or:
9.8 Multiplier Analysis The lag operator is defined as: Lyt = yt-1 Lagging twice, we have: L(Lyt) = Lyt-1 = yt-2 Or: L²yt = yt-2 More generally, we have: L^s yt = yt-s

138 Now rewrite our model as:
9.8 Multiplier Analysis Now rewrite our model as: yt = δ + (θ1L + θ2L² + ⋯ + θpL^p)yt + (δ0 + δ1L + ⋯ + δqL^q)xt + vt (Eq. 9.79)

9.8 Multiplier Analysis Rearranging terms: (1 – θ1L – θ2L² – ⋯ – θpL^p)yt = δ + (δ0 + δ1L + ⋯ + δqL^q)xt + vt (Eq. 9.80)

140 Let’s apply this to our Okun’s Law model The model:
9.8 Multiplier Analysis Let’s apply this to our Okun’s Law model The model: DUt = δ + θ1DUt-1 + δ0Gt + δ1Gt-1 + vt (Eq. 9.81) can be rewritten as: (1 – θ1L)DUt = δ + (δ0 + δ1L)Gt + vt (Eq. 9.82)

141 Define the inverse of (1 – θ1L) as (1 – θ1L)-1 such that:
9.8 Multiplier Analysis Define the inverse of (1 – θ1L) as (1 – θ1L)-1 such that: (1 – θ1L)-1(1 – θ1L) = 1 It can be shown that: (1 – θ1L)-1 = 1 + θ1L + θ1²L² + θ1³L³ + ⋯
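The geometric expansion of (1 – θ1L)-1 can be verified numerically: multiplying the truncated series by (1 – θ1L) should leave the identity polynomial (all lag-coefficients zero except the constant). A small sketch, with helper names of our own choosing:

```python
def truncated_inverse(theta1, n):
    """First n coefficients of (1 - theta1*L)^-1 = 1 + theta1*L + theta1^2*L^2 + ..."""
    return [theta1 ** j for j in range(n)]

def poly_mul(a, b, n):
    """Multiply two lag polynomials (coefficient lists), keeping powers of L below n."""
    out = [0.0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out
```

Multiplying (1 – 0.5L) by its truncated inverse recovers 1, confirming the expansion.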

142 Multiply both sides of Eq. 9.82 by (1 – θ1L)-1:
9.8 Multiplier Analysis Multiply both sides of Eq. 9.82 by (1 – θ1L)-1: DUt = (1 – θ1L)-1δ + (1 – θ1L)-1(δ0 + δ1L)Gt + (1 – θ1L)-1vt (Eq. 9.83) Equating this with the infinite distributed lag representation: DUt = α + β0Gt + β1Gt-1 + β2Gt-2 + ⋯ + et (Eq. 9.84)

143 For Eqs. 9.83 and 9.84 to be identical, it must be true that:
9.8 Multiplier Analysis For Eqs. 9.83 and 9.84 to be identical, it must be true that: α = (1 – θ1L)-1δ (Eq. 9.85) β0 + β1L + β2L² + ⋯ = (1 – θ1L)-1(δ0 + δ1L) (Eq. 9.86) et = (1 – θ1L)-1vt (Eq. 9.87)

144 Multiply both sides of Eq. 9.85 by (1 – θ1L) to obtain (1 – θ1L)α = δ.
9.8 Multiplier Analysis Multiply both sides of Eq. 9.85 by (1 – θ1L) to obtain (1 – θ1L)α = δ. Note that the lag of a constant is the constant itself, so Lα = α Now we have: α(1 – θ1) = δ, or α = δ/(1 – θ1)

145 Multiply both sides of Eq. 9.86 by (1 – θ1L):
9.8 Multiplier Analysis Multiply both sides of Eq. 9.86 by (1 – θ1L): (1 – θ1L)(β0 + β1L + β2L² + ⋯) = δ0 + δ1L (Eq. 9.88)

146 Equating coefficients of like powers in L yields:
9.8 Multiplier Analysis Rewrite Eq. 9.88 as: β0 + (β1 – θ1β0)L + (β2 – θ1β1)L² + (β3 – θ1β2)L³ + ⋯ = δ0 + δ1L (Eq. 9.89) Equating coefficients of like powers in L yields: β0 = δ0, β1 – θ1β0 = δ1, β2 – θ1β1 = 0, β3 – θ1β2 = 0, and so on

147 We can now find the β’s using the recursive equations:
9.8 Multiplier Analysis We can now find the β’s using the recursive equations: β0 = δ0, β1 = δ1 + θ1β0, and βj = θ1βj-1 for j ≥ 2 (Eq. 9.90)
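The recursion in Eq. 9.90 translates directly into code. A minimal sketch for an ARDL(1,1) model (function name and sample coefficient values are hypothetical):

```python
def delay_multipliers(delta0, delta1, theta1, n):
    """First n delay multipliers for an ARDL(1,1) model via Eq. 9.90:
    beta_0 = delta_0; beta_1 = delta_1 + theta_1*beta_0;
    beta_j = theta_1*beta_{j-1} for j >= 2.
    """
    betas = [delta0, delta1 + theta1 * delta0]
    for _ in range(2, n):
        betas.append(theta1 * betas[-1])
    return betas[:n]
```

For |θ1| < 1 the multipliers decline geometrically toward zero, which is the pattern shown in Figure 9.13.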

148 9.8 Multiplier Analysis You can start from the equivalent of Eq. 9.88 which, in its general form, is: (1 – θ1L – θ2L² – ⋯ – θpL^p)(β0 + β1L + β2L² + ⋯) = δ0 + δ1L + ⋯ + δqL^q (Eq. 9.91) Given the values p and q for your ARDL model, you need to multiply out the above expression, and then equate coefficients of like powers in the lag operator

149 For the Okun’s Law model:
9.8 Multiplier Analysis For the Okun’s Law model: The impact and delay multipliers for the first four quarters are:

150 FIGURE 9.13 Delay multipliers from Okun’s law ARDL(1,1) model
9.8 Multiplier Analysis FIGURE 9.13 Delay multipliers from Okun’s law ARDL(1,1) model

151 We can estimate the total multiplier given by:
9.8 Multiplier Analysis We can estimate the total multiplier given by: Σj=0..∞ βj and the normal growth rate that is needed to maintain a constant rate of unemployment: GN = –α/Σj=0..∞ βj

152 An estimate for α is given by:
9.8 Multiplier Analysis We can show that: Σj=0..∞ βj = (δ0 + δ1)/(1 – θ1) An estimate for α is given by: α̂ = δ̂/(1 – θ̂1) Therefore, the normal growth rate is: ĜN = –α̂/Σj β̂j
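These closed-form expressions can be sketched in a few lines. This is an illustrative computation only; the coefficient values in the test are hypothetical (signs chosen so that growth reduces changes in unemployment, as in an Okun's law setting), not the chapter's estimates:

```python
def total_multiplier(delta0, delta1, theta1):
    """Total multiplier for an ARDL(1,1) model: (delta0 + delta1)/(1 - theta1)."""
    return (delta0 + delta1) / (1 - theta1)

def normal_growth_rate(delta, delta0, delta1, theta1):
    """Growth rate that keeps unemployment constant: G_N = -alpha / total multiplier,
    where alpha = delta/(1 - theta1)."""
    alpha = delta / (1 - theta1)
    return -alpha / total_multiplier(delta0, delta1, theta1)
```

Setting DUt = 0 with growth held at a constant G in the infinite distributed lag representation gives 0 = α + G·Σβj, which is where the formula GN = –α/Σβj comes from.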

153 Key Words

154 Autoregressive distributed lags autoregressive error
Keywords: AIC criterion, AR(1) error, AR(p) model, ARDL(p,q) model, autocorrelation, autoregressive distributed lag models, autoregressive error, autoregressive model, BIC criterion, correlogram, delay multiplier, distributed lag weight, dynamic models, exponential smoothing, finite distributed lag, forecast error, forecast intervals, forecasting, HAC standard errors, impact multiplier, infinite distributed lag, interim multiplier, lag length, lag operator, lagged dependent variable, LM test, multiplier analysis, nonlinear least squares, out-of-sample forecasts, sample autocorrelations, serial correlation, standard error of forecast error, SC criterion, total multiplier, T × R2 form of LM test, within-sample forecasts

155 Appendices

156 The Durbin-Watson Test
For the Durbin-Watson test, the hypotheses are: H0: ρ = 0 versus H1: ρ > 0 The test statistic is: d = Σt=2..T (êt – êt-1)² / Σt=1..T êt² (Eq. 9A.1)

157 The Durbin-Watson Test
We can expand the test statistic as: d = (Σt=2..T êt² + Σt=2..T êt-1² – 2Σt=2..T êtêt-1) / Σt=1..T êt² ≈ 1 + 1 – 2r1 (Eq. 9A.2)

158 The Durbin-Watson Test
We can now write: d ≈ 2(1 – r1) (Eq. 9A.3) If the estimated value of ρ is r1 = 0, then the Durbin-Watson statistic d ≈ 2 This is taken as an indication that the model errors are not autocorrelated If the estimate of ρ happened to be r1 = 1, then d ≈ 0 A low value for the Durbin-Watson statistic implies that the model errors are correlated, and ρ > 0
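The statistic in Eq. 9A.1 takes only a few lines of code. A minimal sketch (function name is our own):

```python
def durbin_watson(resid):
    """Durbin-Watson d (Eq. 9A.1): the sum of squared successive differences
    of the residuals divided by their sum of squares; d is approximately 2(1 - r1).
    """
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e ** 2 for e in resid)
    return num / den
```

Perfectly positively correlated residuals give d = 0, while residuals that alternate in sign (negative correlation) push d above 2, matching the interpretation on this slide.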

159 The Durbin-Watson Test
FIGURE 9A.1 Testing for positive autocorrelation

160 9A The Durbin-Watson Test FIGURE 9A.2 Upper and lower critical value bounds for the Durbin-Watson test 9A.1 The Durbin-Watson Bounds Test

161 Decision rules, known collectively as the Durbin-Watson bounds test:
The Durbin-Watson Test 9A.1 The Durbin-Watson Bounds Test Decision rules, known collectively as the Durbin-Watson bounds test: If d < dLc: reject H0: ρ = 0 and accept H1: ρ > 0 If d > dUc do not reject H0: ρ = 0 If dLc < d < dUc, the test is inconclusive

162 Properties of the AR(1) Error
9B Properties of the AR(1) Error Note that: et = ρet-1 + vt (Eq. 9B.1) Further substitution shows that: et = ρ(ρet-2 + vt-1) + vt = ρ²et-2 + ρvt-1 + vt (Eq. 9B.2)

163 Properties of the AR(1) Error
9B Properties of the AR(1) Error Repeating the substitution k times and rearranging: et = vt + ρvt-1 + ρ²vt-2 + ⋯ + ρ^(k-1)vt-k+1 + ρ^k et-k (Eq. 9B.3) If we let k → ∞, then we have: et = vt + ρvt-1 + ρ²vt-2 + ρ³vt-3 + ⋯ (Eq. 9B.4)

164 Properties of the AR(1) Error
9B Properties of the AR(1) Error We can now find the properties of et: E(et) = 0 and var(et) = σv²(1 + ρ² + ρ⁴ + ⋯) = σv²/(1 – ρ²)

165 Properties of the AR(1) Error
9B Properties of the AR(1) Error The covariance for one period apart is: cov(et, et-1) = ρσv²/(1 – ρ²)

166 Properties of the AR(1) Error
9B Properties of the AR(1) Error Similarly, the covariance for k periods apart is: cov(et, et-k) = ρ^k σv²/(1 – ρ²)
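The variance formula σv²/(1 – ρ²) can be checked by simulation. A small sketch using only the standard library (the sample size and seed are arbitrary choices of ours):

```python
import random

def simulate_ar1(rho, sigma_v, n, seed=0):
    """Simulate e_t = rho*e_{t-1} + v_t with v_t ~ N(0, sigma_v^2)."""
    rng = random.Random(seed)
    e, series = 0.0, []
    for _ in range(n):
        e = rho * e + rng.gauss(0.0, sigma_v)
        series.append(e)
    return series

rho, sigma_v = 0.7, 1.0
e = simulate_ar1(rho, sigma_v, 200_000)
mean = sum(e) / len(e)
sample_var = sum((x - mean) ** 2 for x in e) / len(e)
theory_var = sigma_v ** 2 / (1 - rho ** 2)  # variance implied by Eq. 9B.4
```

With a long simulated series the sample variance should land close to the theoretical value of roughly 1.96 for ρ = 0.7.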

167 Generalized Least Squares Estimation
9C Generalized Least Squares Estimation We are considering the simple regression model with AR(1) errors: yt = β1 + β2xt + et, where et = ρet-1 + vt (Eq. 9C.1) To specify the transformed model we begin with: yt = β1 + β2xt + ρet-1 + vt Rearranging terms: yt – ρyt-1 = β1(1 – ρ) + β2(xt – ρxt-1) + vt (Eq. 9C.2)

168 Generalized Least Squares Estimation
9C Generalized Least Squares Estimation Defining the following transformed variables: yt* = yt – ρyt-1, x1t* = 1 – ρ, x2t* = xt – ρxt-1 Substituting the transformed variables, we get: yt* = β1x1t* + β2x2t* + vt (Eq. 9C.3)
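The quasi-differencing step in Eq. 9C.3 can be sketched directly. A minimal illustration (function name is our own; the first observation is deliberately left out here, since the slides handle it separately via Eq. 9C.5):

```python
def gls_transform(y, x, rho):
    """Quasi-difference the data as in Eq. 9C.3:
    y*_t = y_t - rho*y_{t-1}, x*_1t = 1 - rho, x*_2t = x_t - rho*x_{t-1}.
    Returns T-1 transformed observations (t = 2, ..., T).
    """
    y_star = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    x1_star = [1 - rho] * (len(y) - 1)
    x2_star = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
    return y_star, x1_star, x2_star
```

Running least squares on the transformed observations (plus the rescaled first observation) yields the generalized least squares estimates.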

169 Generalized Least Squares Estimation
9C Generalized Least Squares Estimation There are two problems: Because lagged values of yt and xt had to be formed, only (T - 1) new observations were created by the transformation The value of the autoregressive parameter ρ is not known

170 Generalized Least Squares Estimation
9C Generalized Least Squares Estimation For the second problem, we can write Eq. 9C.1 as: yt = β1(1 – ρ) + ρyt-1 + β2(xt – ρxt-1) + vt and estimate β1, β2, and ρ jointly by nonlinear least squares For the first problem, note that: y1 = β1 + β2x1 + e1 and that var(e1) = σv²/(1 – ρ²) (Eq. 9C.4)

171 Generalized Least Squares Estimation
9C Generalized Least Squares Estimation Or: √(1 – ρ²)y1 = β1√(1 – ρ²) + β2√(1 – ρ²)x1 + e1* (Eq. 9C.5) where e1* = √(1 – ρ²)e1 (Eq. 9C.6)

172 Generalized Least Squares Estimation
9C Generalized Least Squares Estimation To confirm that the variance of e1* is the same as that of the errors (v2, v3,…, vT), note that: var(e1*) = (1 – ρ²)var(e1) = (1 – ρ²)·σv²/(1 – ρ²) = σv²

