
1 ECON 240C Lecture 8

2 Outline: 2nd Order AR
Roots of the quadratic
Example: change in housing starts
Polar form
Inverse of B(z)
Autocovariance function
Yule-Walker equations
Partial autocorrelation function

3 Outline (cont.)
Parameter uncertainty
Moving average processes
Significance of autocorrelations

4 Roots of the quadratic
x(t) = b1 x(t-1) + b2 x(t-2) + wn(t)
y² - b1 y - b2 = 0, from substituting y^(2-u) for x(t-u)
y = [b1 ± (b1² + 4 b2)^(1/2)]/2
Complex if (b1² + 4 b2) < 0

5 b1 = -0.353, b2 = -0.142
Roots: y = {-0.353 ± [(-0.353)² + 4(-0.142)]^(1/2)}/2
y = {-0.353 ± (0.125 - 0.568)^(1/2)}/2
y = -0.177 ± (-0.443)^(1/2)/2
y = -0.177 + 0.333 i, -0.177 - 0.333 i
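The root calculation on this slide can be checked numerically; a minimal sketch in Python, using the coefficient values from the slide:

```python
import cmath

# AR(2) coefficients from the change-in-housing-starts example
b1, b2 = -0.353, -0.142

# Roots of the characteristic quadratic y^2 - b1*y - b2 = 0
disc = b1**2 + 4 * b2                     # discriminant; negative => complex roots
root_plus = (b1 + cmath.sqrt(disc)) / 2
root_minus = (b1 - cmath.sqrt(disc)) / 2

modulus = abs(root_plus)                  # same for both conjugate roots
print(root_plus, root_minus, modulus)
```

A modulus below one means the oscillations damp out, i.e. the process is stationary.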

6 Roots in polar form
y = Re + Im i = a + b i
sin θ = b/(a² + b²)^(1/2)
cos θ = a/(a² + b²)^(1/2)
y = (a² + b²)^(1/2) cos θ + i (a² + b²)^(1/2) sin θ
(figure: the point (a, b) in the complex plane, at angle θ from the real axis)

7 Roots in polar form
Re + i Im = a + i b = (a² + b²)^(1/2) [cos θ + i sin θ]
Example: modulus (a² + b²)^(1/2) = [(-0.177)² + (0.333)²]^(1/2) = [0.031 + 0.111]^(1/2) = 0.377
tan θ = sin θ/cos θ = b/a = -0.333/-0.177 = 1.88
θ = tan⁻¹ 1.88 ≈ 62 degrees = 0.172 of a circle = 0.172·2π radians = 1.08 radians
Period = 2π/θ = 2π/(0.172·2π) = 1/0.172 = 5.8 months, the time it takes to go around the circle once

8 (figure)

9 Difference Equation Solutions
x(t) - b1 x(t-1) - b2 x(t-2) = 0
Suppose b2 = 0; then b1 is the root, with x(t) = b1 x(t-1). Suppose x(0) = 100 and b1 = 1.2; then x(1) = 1.2·100, and x(2) = 1.2·x(1) = (1.2)²·100, and the solution is x(t) = x(0)·b1^t.
In general, for roots r1 and r2, the solution is x(t) = A·r1^t + B·r2^t, where A and B are constants.
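Both claims on this slide are easy to verify by iteration; a small sketch (the roots and constants in the second-order case are illustrative, only x(0) = 100 and b1 = 1.2 come from the slide):

```python
# First-order case: iterating x(t) = b1*x(t-1) matches the closed form x(0)*b1**t
b1, x0 = 1.2, 100.0
x = [x0]
for t in range(1, 6):
    x.append(b1 * x[-1])
closed_form = [x0 * b1**t for t in range(6)]

# Second-order case: x(t) = A*r1**t + B*r2**t solves x(t) = c1*x(t-1) + c2*x(t-2)
# when r1, r2 are the roots of y^2 - c1*y - c2 = 0, i.e. c1 = r1+r2, c2 = -r1*r2.
r1, r2 = 0.8, 0.5            # illustrative real roots
c1, c2 = r1 + r2, -r1 * r2
A, B = 3.0, -1.0             # constants pinned down by two initial conditions
y = [A * r1**t + B * r2**t for t in range(10)]
residuals = [y[t] - c1 * y[t - 1] - c2 * y[t - 2] for t in range(2, 10)]
```

The residuals of the recursion are zero, confirming the general solution form.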

10 III. Autoregressive of the Second Order
ARTWO(t) = b1·ARTWO(t-1) + b2·ARTWO(t-2) + WN(t)
ARTWO(t) - b1·ARTWO(t-1) - b2·ARTWO(t-2) = WN(t)
ARTWO(t) - b1·Z·ARTWO(t) - b2·Z²·ARTWO(t) = WN(t)
[1 - b1·Z - b2·Z²] ARTWO(t) = WN(t)

11 Inverse of [1 - b1 z - b2 z²]
ARTWO(t) = wn(t)/B(z) = wn(t)/[1 - b1 z - b2 z²]
ARTWO(t) = A(z) wn(t) = {1/[1 - b1 z - b2 z²]} wn(t)
So A(z) = [1 + a1 z + a2 z² + …] = 1/[1 - b1 z - b2 z²]
[1 - b1 z - b2 z²][1 + a1 z + a2 z² + …] = 1
1 + a1 z + a2 z² + … - b1 z - a1 b1 z² - b2 z² - … = 1
1 + (a1 - b1) z + (a2 - a1 b1 - b2) z² + … = 1
So (a1 - b1) = 0, (a2 - a1 b1 - b2) = 0, …

12 Inverse of [1 - b1 z - b2 z²]
A(z) = [1 + a1 z + a2 z² + …] = [1 + b1 z + (b1² + b2) z² + …]
So ARTWO(t) = wn(t) + b1 wn(t-1) + (b1² + b2) wn(t-2) + …
And ARTWO(t-1) = wn(t-1) + b1 wn(t-2) + (b1² + b2) wn(t-3) + …
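The matching-coefficients conditions on slide 11 imply the recursion a(k) = b1·a(k-1) + b2·a(k-2) for k ≥ 2; a sketch that generates the moving-average weights this way (the function name is my own):

```python
def ma_weights(b1, b2, nterms):
    """Coefficients a_k of A(z) = 1/(1 - b1*z - b2*z^2)."""
    a = [1.0, b1]                          # a0 = 1, a1 = b1
    while len(a) < nterms:
        a.append(b1 * a[-1] + b2 * a[-2])  # a_k = b1*a_{k-1} + b2*a_{k-2}
    return a

b1, b2 = -0.353, -0.142                    # the housing-starts example values
a = ma_weights(b1, b2, 8)
```

Multiplying [1 - b1 z - b2 z²] back against these weights reproduces 1, which is exactly what the matching conditions require.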

13 Autocovariance Function
ARTWO(t) = b1·ARTWO(t-1) + b2·ARTWO(t-2) + WN(t)
Using x(t) for ARTWO:
x(t) = b1·x(t-1) + b2·x(t-2) + WN(t)
By lagging and substitution, one can show that x(t-1) depends only on earlier shocks, so multiply by x(t-1) and take expectations.

14 Autocovariance Function
x(t) = b1·x(t-1) + b2·x(t-2) + WN(t)
x(t)·x(t-1) = b1·[x(t-1)]² + b2·x(t-1)·x(t-2) + x(t-1)·WN(t)
E x(t)·x(t-1) = b1·E[x(t-1)]² + b2·E x(t-1)·x(t-2) + E x(t-1)·WN(t)
γ(1) = b1·γ(0) + b2·γ(1), where γ(u) = E x(t)·x(t-u), so E x(t)·x(t-1), E[x(t-1)]², and E x(t-1)·x(t-2) follow by definition, and E x(t-1)·WN(t) = 0 since x(t-1) depends on earlier shocks and is independent of WN(t)

15 Autocovariance Function
γ(1) = b1·γ(0) + b2·γ(1)
Dividing through by γ(0):
ρ(1) = b1·ρ(0) + b2·ρ(1) = b1 + b2·ρ(1), so
ρ(1) - b2·ρ(1) = b1, and
ρ(1)·[1 - b2] = b1, or
ρ(1) = b1/[1 - b2]
Note: if the parameters b1 and b2 are known, then one can calculate the value of ρ(1).

16 Autocovariance Function
x(t) = b1·x(t-1) + b2·x(t-2) + WN(t)
x(t)·x(t-2) = b1·x(t-1)·x(t-2) + b2·[x(t-2)]² + x(t-2)·WN(t)
E x(t)·x(t-2) = b1·E x(t-1)·x(t-2) + b2·E[x(t-2)]² + E x(t-2)·WN(t)
γ(2) = b1·γ(1) + b2·γ(0), where E x(t)·x(t-2), E[x(t-2)]², and E x(t-1)·x(t-2) follow by definition, and E x(t-2)·WN(t) = 0 since x(t-2) depends on earlier shocks and is independent of WN(t)

17 Autocovariance Function
γ(2) = b1·γ(1) + b2·γ(0)
Dividing through by γ(0):
ρ(2) = b1·ρ(1) + b2
Note: if the parameters b1 and b2 are known, then one can calculate ρ(1) as we did above from ρ(1) = b1/[1 - b2], and then calculate ρ(2).

18 Autocorrelation Function
ρ(2) = b1·ρ(1) + b2·ρ(0)
Note also the recursive nature of this formula, so ρ(u) = b1·ρ(u-1) + b2·ρ(u-2) for u ≥ 2.
Thus we can map from the parameter space to the autocorrelation function.
How about the other way around?
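The recursion generates the whole autocorrelation function from the two parameters; a sketch using the example values from slide 5 (the function name is my own):

```python
def ar2_acf(b1, b2, nlags):
    """Autocorrelations rho(0..nlags) of a stationary AR(2), by recursion."""
    rho = [1.0, b1 / (1.0 - b2)]           # rho(0) = 1, rho(1) = b1/(1 - b2)
    for _ in range(2, nlags + 1):
        rho.append(b1 * rho[-1] + b2 * rho[-2])
    return rho

b1, b2 = -0.353, -0.142
rho = ar2_acf(b1, b2, 12)
```

With complex roots, the autocorrelations oscillate in sign while damping toward zero.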

19 Yule-Walker Equations
From slide 15 above,
ρ(1) = b1 + b2·ρ(1), and so
b1 = ρ(1) - b2·ρ(1)
From slide 17 above,
ρ(2) = b1·ρ(1) + b2, or
b2 = ρ(2) - b1·ρ(1), and substituting for b1 from line 3 above,
b2 = ρ(2) - [ρ(1) - b2·ρ(1)]·ρ(1)

20 Yule-Walker Equations
b2 = ρ(2) - {[ρ(1)]² - b2·[ρ(1)]²}
so b2 = ρ(2) - [ρ(1)]² + b2·[ρ(1)]²
and b2 - b2·[ρ(1)]² = ρ(2) - [ρ(1)]²
so b2·{1 - [ρ(1)]²} = ρ(2) - [ρ(1)]²
and b2 = {ρ(2) - [ρ(1)]²}/{1 - [ρ(1)]²}
This is the formula for the partial autocorrelation at lag two.
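A round-trip check of this formula: start from known AR(2) parameters, generate ρ(1) and ρ(2) as on slides 15 and 17, and recover the parameters (function and variable names are my own):

```python
def pacf_lag2(rho1, rho2):
    """Partial autocorrelation at lag 2: the Yule-Walker solution for b2."""
    return (rho2 - rho1**2) / (1.0 - rho1**2)

b1, b2 = -0.353, -0.142
rho1 = b1 / (1.0 - b2)            # slide 15: rho(1) = b1/(1 - b2)
rho2 = b1 * rho1 + b2             # slide 17: rho(2) = b1*rho(1) + b2

b2_hat = pacf_lag2(rho1, rho2)
b1_hat = rho1 * (1.0 - b2_hat)    # slide 19: b1 = rho(1) - b2*rho(1)
```

Recovering b1 and b2 exactly confirms that the mapping from parameters to autocorrelations can indeed be inverted.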

21 Partial Autocorrelation Function
b2 = {ρ(2) - [ρ(1)]²}/{1 - [ρ(1)]²}
Note: if the process is really autoregressive of the first order, then ρ(2) = b² and ρ(1) = b, so the numerator is zero, i.e. the partial autocorrelation function goes to zero one lag after the order of the autoregressive process.
Thus the partial autocorrelation function can be used to identify the order of the autoregressive process.

22 Partial Autocorrelation Function
If the process is first-order autoregressive, then the formula for b1 = b is b1 = b = ACF(1), so this is used to calculate the PACF at lag one, i.e. PACF(1) = ACF(1) = b1 = b.
For a third-order autoregressive process, x(t) = b1·x(t-1) + b2·x(t-2) + b3·x(t-3) + WN(t), we would have to derive three Yule-Walker equations by multiplying in turn by x(t-1), x(t-2), and x(t-3), and taking expectations.

23 Partial Autocorrelation Function
These three equations could then be solved for b3 in terms of ρ(1), ρ(2), and ρ(3) to determine the expression for the partial autocorrelation function at lag three. EVIEWS does this and calculates the PACF at higher lags as well.
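A sketch of that lag-3 calculation, solving the three Yule-Walker equations numerically (pure Python; the helper names are my own, and the 3×3 solver is a generic elimination routine, not EVIEWS's method):

```python
def solve3(A, y):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]
    y = y[:]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        y[col], y[piv] = y[piv], y[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
            y[r] -= f * y[col]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (y[r] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def pacf_lag3(r1, r2, r3):
    """PACF(3): the coefficient b3 from the AR(3) Yule-Walker system."""
    R = [[1.0, r1, r2],     # rho(1) = b1 + b2*rho(1) + b3*rho(2)
         [r1, 1.0, r1],     # rho(2) = b1*rho(1) + b2 + b3*rho(1)
         [r2, r1, 1.0]]     # rho(3) = b1*rho(2) + b2*rho(1) + b3
    return solve3(R, [r1, r2, r3])[2]
```

For a true AR(1), with ρ(u) = b^u, the result is zero, mirroring the lag-2 cutoff argument on slide 21.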

24 IV. Economic Forecast Project
Santa Barbara County Seminar, April 26, 2006
URL: http://www.ucsb-efp.com

25 V. Forecasting Trends

26 Lab Two: LNSP500

27 Note: Autocorrelated Residual

28 Autocorrelation Confirmed from the Correlogram of the Residual

29 Visual Representation of the Forecast

30 Numerical Representation of the Forecast

31 One Period Ahead Forecast
Note: the standard error of the regression is 0.2237
Note: the standard error of the forecast is 0.2248
Diebold refers to the forecast error
either without parameter uncertainty, which will just be the standard error of the regression,
or with parameter uncertainty, which accounts for the fact that the estimated intercept and slope are uncertain as well

32 Parameter Uncertainty
Trend model: y(t) = a + b·t + e(t)
Fitted model: ŷ(t) = â + b̂·t

33 Parameter Uncertainty
Estimated error: ê(t) = y(t) - â - b̂·t

34 Forecast Formula

35 Expected Value of the Forecast

36 Forecast Minus its Expected Value
Forecast = a + b·(t+1) + 0

37 Variance in the Forecast

38 (figure)

39 Variance of the Forecast Error
0.000501 + 2·(-0.00000189)·398 + 9.52×10⁻⁹·(398)² + (0.223686)²
= 0.000501 - 0.00150 + 0.001508 + 0.0500354
= 0.0505444
SEF = (0.0505444)^(1/2) = 0.22482
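The arithmetic on this slide is easy to reproduce; a sketch, assuming the four terms are the intercept variance, twice the intercept-slope covariance times (t+1), the slope variance times (t+1)², and the squared standard error of the regression (the slide lists only the numbers, so that mapping is my reading):

```python
import math

t_plus_1 = 398                 # forecast date
var_a  = 0.000501              # assumed: variance of the intercept estimate
cov_ab = -0.00000189           # assumed: covariance of intercept and slope
var_b  = 9.52e-9               # assumed: variance of the slope estimate
s      = 0.223686              # standard error of the regression

var_fe = var_a + 2 * cov_ab * t_plus_1 + var_b * t_plus_1**2 + s**2
sef = math.sqrt(var_fe)        # standard error of the forecast
```

The parameter-uncertainty terms nearly cancel here, which is why the SEF (0.2248) sits so close to the standard error of the regression (0.2237).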

40 Numerical Representation of the Forecast

41 Evolutionary vs. Stationary
Evolutionary: trend model for lnsp500(t)
Stationary: model for dlnsp500(t)

42 Pre-whitened Time Series

43 Note: 0.008625 is the monthly growth rate; times 12 = 0.1035 annualized

44 Is the Mean Fractional Rate of Growth Different from Zero?
Econ 240A, Ch. 12.2, where the null hypothesis is that μ = 0.
t = (0.008625 - 0)/(0.045661/397^(1/2))
= 0.008625/0.002292 = 3.76
A t-statistic of 3.76, so 0.008625 is significantly different from zero.
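A quick check of the t-statistic; the sample mean, standard deviation, and number of observations are taken from the slide:

```python
import math

mean_g = 0.008625          # mean monthly fractional growth
sd     = 0.045661          # sample standard deviation
T      = 397               # number of observations

se_mean = sd / math.sqrt(T)        # standard error of the mean
t_stat = (mean_g - 0.0) / se_mean  # H0: mu = 0
```

At about 3.76 the statistic is well beyond the conventional 1.96 cutoff, so the mean growth rate is significantly different from zero.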

45 Model for lnsp500(t)
lnsp500(t) = a + b·t + resid(t), where resid(t) is close to a random walk, so the model is
lnsp500(t) = a + b·t + RW(t), and taking exponentials,
sp500(t) = e^(a + b·t + RW(t)) = e^(a + b·t) e^RW(t)

46 Note: The Fitted Trend Line Forecasts Above the Observations

47 (figure)

48 VI. Autoregressive Representation of a Moving Average Process
MAONE(t) = WN(t) + a·WN(t-1)
MAONE(t) = WN(t) + a·Z·WN(t)
MAONE(t) = [1 + a·Z] WN(t)
MAONE(t)/[1 - (-a·Z)] = WN(t)
[1 + (-a·Z) + (-a·Z)² + …] MAONE(t) = WN(t)
MAONE(t) - a·MAONE(t-1) + a²·MAONE(t-2) - … = WN(t)

49 MAONE(t) = a·MAONE(t-1) - a²·MAONE(t-2) + … + WN(t)
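The inversion can be checked by simulation: apply the truncated AR weights (-a)^k to a simulated MA(1) series and compare the result with the white noise that generated it. A sketch (the parameter a = 0.5, the sample size, and the truncation lag are all illustrative choices):

```python
import random

random.seed(0)
a, T, K = 0.5, 200, 30            # MA parameter, sample size, truncation lag

wn = [random.gauss(0, 1) for _ in range(T)]
maone = [wn[0]] + [wn[t] + a * wn[t - 1] for t in range(1, T)]

# WN(t) = sum over k of (-a)^k * MAONE(t-k): the AR(inf) representation, truncated at K
recovered = [sum((-a) ** k * maone[t - k] for k in range(K + 1))
             for t in range(K, T)]

max_err = max(abs(r - w) for r, w in zip(recovered, wn[K:]))
```

The truncation error is of order a^(K+1), so it vanishes quickly when |a| < 1; that is exactly the invertibility condition for the MA(1).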

50 Lab 4: Alternating Pattern in PACF of MATHREE

51 Part VII. Significance of Autocorrelations
ρ(u) ~ N(0, 1/T), where T is the number of observations
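Under this null, roughly 95% of the sample autocorrelations should fall inside ±2/√T; a one-line sketch of that band (the function name is my own):

```python
import math

def acf_band(T, z=2.0):
    """Approximate two-standard-error band for sample autocorrelations
    under the null that the true autocorrelations are zero."""
    return z / math.sqrt(T)

band = acf_band(397)    # e.g. the 397-observation LNSP500 sample: about +/- 0.10
```

Sample autocorrelations outside this band are significantly different from zero at roughly the 5% level.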

52 Correlogram of the Residual from the Trend Model for LNSP500(t)

53 Box-Pierce Statistic
The sample autocorrelation, scaled by √T, is normalized, i.e. is N(0,1). The square of an N(0,1) variable is distributed chi-square.

54 Box-Pierce Statistic
The sum of the squares of independent N(0,1) variables is chi-square, and if the autocorrelations are close to zero they will be independent, so under the null hypothesis that the autocorrelations are zero we have a chi-square statistic, Q = T·Σ ρ̂²(u), summed over u = 1 to K, that has K - p - q degrees of freedom, where K is the number of lags in the sum and p + q is the number of parameters estimated.
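A minimal sketch of the statistic (the autocorrelation values here are made up for illustration, not taken from any slide):

```python
def box_pierce(acf, T):
    """Box-Pierce Q = T * sum of squared sample autocorrelations at lags 1..K.
    Under H0 it is approximately chi-square with K - p - q degrees of freedom."""
    return T * sum(r * r for r in acf)

# illustrative sample autocorrelations at lags 1..5, with T = 586 observations
q = box_pierce([0.02, -0.03, 0.01, 0.025, -0.015], 586)
```

With autocorrelations this small, Q stays far below the chi-square critical values, so the white-noise null would not be rejected.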

55 Application to the Fractional Change in the Federal Funds Rate
dlnffr = lnffr - lnffr(-1)
Does taking the logarithm and then differencing help model this rate?

56-61 (figures)

62 Correlogram of dlnffr(t)

63 How would you model dlnffr(t)?
Notation: (p,d,q) for ARIMA models, where d stands for the number of times first differenced, p is the order of the autoregressive part, and q is the order of the moving average part.

64 Estimated MAThree Model for dlnffr

65 Correlogram of Residual from (0,0,3) Model for dlnffr

66 Calculating the Box-Pierce Stat

67 EVIEWS Uses the Ljung-Box Statistic

68 Q-Stat at Lag 5
(T+2)/(T-5) × Box-Pierce ≈ Ljung-Box
(586/581) × 1.12536 = 1.135, compared to 1.132 (EVIEWS)
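A sketch of this back-of-the-envelope conversion. Note that the exact Ljung-Box statistic weights each lag separately, T(T+2)·Σ ρ̂²(k)/(T-k); the single factor here is the slide's approximation, and the Box-Pierce value of 1.12536 is inferred from the reported result rather than stated independently:

```python
# Approximate scaling from Box-Pierce to Ljung-Box at lag 5
bp_q5 = 1.12536                      # assumed Box-Pierce statistic at lag 5
lb_q5 = (586 / 581) * bp_q5          # one common correction factor for all lags
```

The correction factor exceeds one, so the Ljung-Box statistic is always slightly larger than Box-Pierce in finite samples.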

69 GENR: chi=rchisq(3); dens=dchisq(chi, 3)

70 Correlogram of Residual from (0,0,3) Model for dlnffr

71 (figure)

72 Serial Correlation Test

73 Estimated MAThree Model for dlnffr

74 Breusch-Godfrey Serial Correlation Test

