Christopher Dougherty EC220 - Introduction to econometrics (chapter 13). Slideshow: stationary processes. Original citation: Dougherty, C. (2012) EC220 - Introduction to econometrics (chapter 13). [Teaching Resource] © 2012 The Author. This version available at: http://learningresources.lse.ac.uk/139/ Available in LSE Learning Resources Online: May 2012. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. This license allows the user to remix, tweak, and build upon the work even for commercial purposes, as long as the user credits the author and licenses their new creations under the identical terms. http://creativecommons.org/licenses/by-sa/3.0/

1 STATIONARY PROCESSES In this slideshow we will define what is meant by a stationary time series process. We will begin with a very simple example, the AR(1) process X_t = β₂X_{t–1} + ε_t, where |β₂| < 1 and ε_t is iid (independently and identically distributed) with zero mean and finite variance.

2 STATIONARY PROCESSES As noted in Chapter 11, we make a distinction between the potential values {X_1, ..., X_T}, before the sample is generated, and a realization of actual values {x_1, ..., x_T}. Statisticians write the potential values in upper case, and the actual values of a particular realization in lower case, as we have done here, to emphasize the distinction.

3 STATIONARY PROCESSES The figure shows an example of a realization starting with X_0 = 0, with β₂ = 0.8 and the innovation ε_t being drawn randomly for each time period from a normal distribution with zero mean and unit variance.
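A realization like the one in the figure can be reproduced with a short simulation. A minimal sketch (the function name and defaults are illustrative, not from the slides), assuming β₂ = 0.8, X_0 = 0, and standard normal innovations:

```python
import random

def simulate_ar1(beta2=0.8, x0=0.0, periods=100, seed=1):
    """Simulate one realization of X_t = beta2 * X_{t-1} + eps_t,
    with eps_t drawn from N(0, 1), starting from X_0 = x0."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(periods):
        x = beta2 * x + rng.gauss(0.0, 1.0)  # one innovation per period
        path.append(x)
    return path

path = simulate_ar1()  # X_0 through X_100, i.e. 101 values
```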

4 STATIONARY PROCESSES Because history cannot repeat itself, we will only ever see one realization of a time series process. Nevertheless, it is meaningful to ask whether we can determine the potential distribution of X at time t, given information at some earlier period, for example, time 0.

5 STATIONARY PROCESSES As usual, there are two approaches to answering this question: mathematical analysis and simulation. We shall do both for the time series process represented by the figure, starting with a simulation.

6 STATIONARY PROCESSES The figure shows 50 realizations of the process.

7 STATIONARY PROCESSES For the first few periods, the distribution of the realizations at time t is affected by the fact that they have a common starting point of 0. However, the initial effect soon becomes unimportant and the distribution becomes stable from one period to the next.
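The fading of the common starting point can be seen numerically by generating an ensemble of realizations and comparing the cross-section spread at an early and a later time point. A sketch under the same assumptions as before (β₂ = 0.8, X_0 = 0, unit-variance innovations; the helper name is illustrative); the spread starts near the innovation standard deviation of 1 and settles near the limit 1/√(1 − 0.8²) ≈ 1.67:

```python
import random
import statistics

def ensemble(beta2=0.8, n_paths=50, periods=40, seed=7):
    """Generate n_paths independent realizations of X_t = beta2*X_{t-1} + eps_t,
    all starting from the common value X_0 = 0."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        x, path = 0.0, [0.0]
        for _ in range(periods):
            x = beta2 * x + rng.gauss(0.0, 1.0)
            path.append(x)
        paths.append(path)
    return paths

paths = ensemble()
# Cross-section (ensemble) spread at an early and a later time point
sd_early = statistics.pstdev([p[1] for p in paths])   # close to 1
sd_late = statistics.pstdev([p[20] for p in paths])   # close to 1.67
```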

8 STATIONARY PROCESSES The figure presents a histogram of the values of X_20. Apart from the first few time points, histograms for other time points would look similar. If the number of realizations were increased, each histogram would converge to the normal distribution shown in Figure 13.3. (Figure caption: histogram of the ensemble distribution of X_20.)

9 STATIONARY PROCESSES The AR(1) process X_t = β₂X_{t–1} + ε_t, with |β₂| < 1, is said to be stationary, the adjective referring, not to X_t itself, but to the potential distribution of its realizations, ignoring transitory initial effects.

10 STATIONARY PROCESSES X t itself changes from period to period, but the potential distribution of its realizations at any given time point does not.

11 STATIONARY PROCESSES The potential distribution at time t is described as the ensemble distribution at time t, to emphasize the fact that we are talking about the distribution of a cross-section of realizations, not the ordinary distribution of a random variable.

12 STATIONARY PROCESSES In general, a time series process is said to be stationary if its ensemble distribution satisfies three conditions:

13 STATIONARY PROCESSES
1. The population mean of the distribution is independent of time.
2. The population variance of the distribution is independent of time.
3. The population covariance between its values at any two time points depends only on the distance between those points, and not on time.
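In symbols, writing γ_s for the autocovariance at lag s, the three conditions can be stated as:

```latex
\begin{aligned}
&\text{(1)}\quad E(X_t) = \mu \quad \text{for all } t,\\
&\text{(2)}\quad \operatorname{var}(X_t) = \sigma_X^2 \quad \text{for all } t,\\
&\text{(3)}\quad \operatorname{cov}(X_t, X_{t+s}) = \gamma_s \quad \text{for all } t \text{ and each lag } s.
\end{aligned}
```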

14 STATIONARY PROCESSES This definition of stationarity is known as weak stationarity or covariance stationarity. For the definition of strong stationarity, (1) and (2) are replaced by the condition that the whole potential distribution is independent of time.

15 STATIONARY PROCESSES Our analysis will be unaffected by using the weak definition, and in any case the distinction disappears when, as in the present example, the limiting distribution is normal.

16 STATIONARY PROCESSES We will check that the process represented by X_t = β₂X_{t–1} + ε_t, with |β₂| < 1, satisfies the three conditions for stationarity. First, if the process is valid for time period t, it is also valid for time period t – 1.

17 STATIONARY PROCESSES Substituting into the original model, one has X_t in terms of X_{t–2}, ε_t, and ε_{t–1}: X_t = β₂(β₂X_{t–2} + ε_{t–1}) + ε_t = β₂²X_{t–2} + β₂ε_{t–1} + ε_t.

18 STATIONARY PROCESSES Lagging and substituting t – 1 times, one has X_t in terms of X_0 and all the innovations ε_1, ..., ε_t from period 1 to period t.
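Written out, the repeated substitution gives:

```latex
\begin{aligned}
X_t &= \beta_2 X_{t-1} + \varepsilon_t\\
    &= \beta_2^2 X_{t-2} + \beta_2 \varepsilon_{t-1} + \varepsilon_t\\
    &\;\;\vdots\\
    &= \beta_2^t X_0 + \sum_{j=0}^{t-1} \beta_2^{\,j}\, \varepsilon_{t-j}.
\end{aligned}
```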

19 STATIONARY PROCESSES Hence E(X_t) = β₂^t X_0, since the expectation of every innovation is zero. In the special case X_0 = 0, we then have E(X_t) = 0. Since the expectation is not a function of time, the first condition is satisfied.

20 STATIONARY PROCESSES If X_0 is nonzero, β₂^t tends to zero as t becomes large, since |β₂| < 1. Hence β₂^t X_0 will tend to zero and the first condition will still be satisfied, apart from initial effects.

21 STATIONARY PROCESSES Next, we have to show that the variance is also not a function of time. The first term in the variance expression, β₂^t X_0, can be dropped because it is a constant, using variance rule 4 from the Review chapter.

22 STATIONARY PROCESSES The variance expression can be decomposed as the sum of the variances, using variance rule 1 from the Review chapter and the fact that the covariances are all zero. (The innovations are assumed to be generated independently.)

23 STATIONARY PROCESSES In the third line, the constants are squared when taken out of the variance expressions, using variance rule 2.

24 STATIONARY PROCESSES The final line involves the standard summation of a geometric progression: var(X_t) = σ_ε²(1 + β₂² + β₂⁴ + ... + β₂^{2(t–1)}) = σ_ε²(1 – β₂^{2t}) / (1 – β₂²).

25 STATIONARY PROCESSES Given that |β₂| < 1, β₂^{2t} tends to zero as t becomes large. Thus, ignoring transitory initial effects, var(X_t) = σ_ε² / (1 – β₂²). The variance tends to a limit that is independent of time.
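The limiting variance σ_ε² / (1 – β₂²) can be checked by simulation. A sketch, assuming β₂ = 0.8 and σ_ε = 1 as in the figures, so the limit is 1/0.36 ≈ 2.78 (the sample value will differ slightly from the theoretical one):

```python
import random
import statistics

beta2, sigma_eps, t_star, n = 0.8, 1.0, 50, 20000
rng = random.Random(0)

# Ensemble of values X_{t_star}, each taken from an independent
# realization started at X_0 = 0; by t = 50 the initial effect
# beta2^(2t) is negligible.
xs = []
for _ in range(n):
    x = 0.0
    for _ in range(t_star):
        x = beta2 * x + rng.gauss(0.0, sigma_eps)
    xs.append(x)

limit = sigma_eps**2 / (1.0 - beta2**2)   # theoretical limit, about 2.78
sample_var = statistics.pvariance(xs)     # should be close to the limit
```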

26 STATIONARY PROCESSES This is the variance of the ensemble distribution shown in the figures.

27 STATIONARY PROCESSES It remains for us to demonstrate that the covariance between X_t and X_{t+s} is independent of time. If the relationship is valid for time period t, it is also valid for time period t + s.

28 STATIONARY PROCESSES Lagging and substituting, we can express X_{t+s} in terms of X_{t+s–2} and the innovations ε_{t+s–1} and ε_{t+s}.

29 STATIONARY PROCESSES Lagging and substituting s times, we can express X_{t+s} in terms of X_t and the innovations ε_{t+1}, ..., ε_{t+s}.

30 STATIONARY PROCESSES Then the covariance between X_t and X_{t+s} is cov(X_t, β₂^s X_t) + cov(X_t, β₂^{s–1}ε_{t+1} + ... + ε_{t+s}). The second term on the right side is zero because X_t is independent of the innovations after time t.

31 STATIONARY PROCESSES The first term can be written β₂^s var(X_t). As we have just seen, var(X_t) is independent of t, apart from a transitory initial effect. Hence the third condition for stationarity is also satisfied.
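Ignoring the transitory initial effect, the autocovariance at lag s is therefore

```latex
\operatorname{cov}(X_t, X_{t+s})
  = \beta_2^{\,s}\operatorname{var}(X_t)
  = \frac{\beta_2^{\,s}\,\sigma_\varepsilon^2}{1-\beta_2^2},
```

which depends on the lag s but not on t.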

32 STATIONARY PROCESSES Suppose next that the process includes an intercept β₁: X_t = β₁ + β₂X_{t–1} + ε_t. How does this affect its properties? Is it still stationary?

33 STATIONARY PROCESSES Lagging and substituting, we can express X_t in terms of X_{t–2} and the innovations ε_t and ε_{t–1}.

34 STATIONARY PROCESSES Lagging and substituting t times, we can express X_t in terms of X_0 and the innovations ε_1, ..., ε_t.

35 STATIONARY PROCESSES Taking expectations, E(X_t) tends to β₁ / (1 – β₂), since the term β₂^t tends to zero. Thus the expectation is now non-zero, but it remains independent of time.

36 STATIONARY PROCESSES The variance is unaffected by the addition of a constant in the expression for X_t (variance rule 4). Thus it remains independent of time, apart from initial effects.

37 STATIONARY PROCESSES Finally, we need to consider the covariance of X_t and X_{t+s}. If the relationship is valid for time period t, it is also valid for time period t + s.

38 STATIONARY PROCESSES Lagging and substituting, we can express X_{t+s} in terms of X_{t+s–2}, the innovations ε_{t+s–1} and ε_{t+s}, and a term involving β₁.

39 STATIONARY PROCESSES Lagging and substituting s times, we can express X_{t+s} in terms of X_t, the innovations ε_{t+1}, ..., ε_{t+s}, and a term involving β₁.

40 STATIONARY PROCESSES The covariance of X_t and X_{t+s} is not affected by the inclusion of this term, because it is a constant. Hence the covariance is the same as before and remains independent of t.

41 STATIONARY PROCESSES We have seen that the process X_t = β₁ + β₂X_{t–1} + ε_t has a limiting ensemble distribution with mean β₁ / (1 – β₂) and variance σ_ε² / (1 – β₂²). However, the process exhibits transient time-dependent initial effects associated with the starting point X_0.

42 STATIONARY PROCESSES We can get rid of the transient effects by determining X_0 as a random draw from the ensemble distribution: X_0 = β₁ / (1 – β₂) + ε_0 / √(1 – β₂²), where ε_0 is a random draw from the distribution of ε at time zero. (Checking that X_0 has the ensemble mean and variance is left as an exercise.)

43 STATIONARY PROCESSES If we determine X_0 in this way, the expectation and variance of the process both become strictly independent of time.

44 STATIONARY PROCESSES Substituting for X_0, X_t is equal to β₁ / (1 – β₂) plus a linear combination of the innovations ε_0, ..., ε_t.

45 STATIONARY PROCESSES Hence E(X_t) is a constant and strictly independent of t for all t.

46 STATIONARY PROCESSES The right side of the equation can be decomposed as the sum of the variances because all the covariances are zero, the innovations being generated independently. As always (variance rule 2), the multiplicative constants are squared in the decomposition.

47 STATIONARY PROCESSES The sum of the variances attributable to the innovations ε_1, ..., ε_t has already been derived above. Taking account of the variance of ε_0, the total is now strictly independent of time.

48 STATIONARY PROCESSES The figure shows 50 realizations with X_0 treated in this way. This is the counterpart of the ensemble distribution shown near the beginning of this sequence, with β₂ = 0.8 as in that figure. As can be seen, the initial effects have disappeared.

49 STATIONARY PROCESSES The other difference in the figures results from the inclusion of a nonzero intercept. In the earlier figure, β₁ = 0. In this figure, β₁ = 1.0 and the mean of the ensemble distribution is β₁ / (1 – β₂) = 1 / (1 – 0.8) = 5.

50 STATIONARY PROCESSES Which is the more appropriate assumption: X 0 fixed or X 0 a random draw from the ensemble distribution? If the process really has started at time 0, then X 0 = 0 is likely to be the obvious choice.

51 STATIONARY PROCESSES However, if the sample of observations is a time slice from a series that had been established well before the time of the first observation, then it will usually make sense to treat X 0 as a random draw from the ensemble distribution.

52 STATIONARY PROCESSES As will be seen in another sequence, evaluation of the power of tests for nonstationarity can be sensitive to the assumption regarding X 0, and typically the most appropriate way of characterizing a stationary process is to avoid transient initial effects by treating X 0 as a random draw from the ensemble distribution.

Copyright Christopher Dougherty 2011. These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 13.1 of C. Dougherty, Introduction to Econometrics, fourth edition 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre http://www.oup.com/uk/orc/bin/9780199567089/. Individuals studying econometrics on their own and who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx or the University of London International Programmes distance learning course 20 Elements of Econometrics www.londoninternational.ac.uk/lse.
