
1 ARMA models Gloria González-Rivera University of California, Riverside
and Jesús Gonzalo U. Carlos III de Madrid

2 White Noise A sequence of uncorrelated random variables {a_t}, with E[a_t] = 0, Var(a_t) = σ² and Cov(a_t, a_{t-k}) = 0 for all k ≠ 0, is called a white noise process. A simulation check is sketched below.
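A minimal numpy sketch (the seed and sample size are arbitrary illustration choices, not from the slides) that simulates Gaussian white noise and confirms the sample autocorrelations are near zero at every lag k ≠ 0:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=10_000)  # Gaussian white noise, sigma^2 = 1

def sample_acf(x, max_lag):
    """Sample autocorrelations rho_hat(1..max_lag) of a series x."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

print(sample_acf(a, 5))  # all values close to 0: uncorrelated at every nonzero lag
```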

3 The Wold Decomposition
If {Z_t} is a nondeterministic stationary time series, then it can be written as
Z_t = Σ_{j=0}^∞ ψ_j a_{t-j} + V_t, with ψ_0 = 1 and Σ_{j=0}^∞ ψ_j² < ∞,
where {a_t} is white noise and {V_t} is a deterministic process uncorrelated with {a_t}.

4 Some Remarks on the Wold Decomposition

5 What the Wold theorem does not say
The a_t need not be normally distributed, and hence need not be iid.
Though the linear projection P[a_t|Z_{t-j}] = 0, it need not be true that E[a_t|Z_{t-j}] = 0 (think about the possible consequences).
The shocks a_t need not be the "true" shocks to the system. When will this happen?
The uniqueness result only states that the Wold representation is the unique linear representation where the shocks are linear forecast errors. Non-linear representations, or representations in terms of non-forecast-error shocks, are perfectly possible.

6 Birth of the ARMA models
Under general conditions the infinite lag polynomial of the Wold Decomposition can be approximated by the ratio of two finite lag polynomials:
ψ(L) ≈ θ_q(L) / φ_p(L).
Therefore φ_p(L)(Z_t − μ) = θ_q(L) a_t, with the AR(p) polynomial φ_p(L) = 1 − φ_1 L − … − φ_p L^p and the MA(q) polynomial θ_q(L) = 1 + θ_1 L + … + θ_q L^q.

7 MA(1) processes Let {a_t} be a zero-mean white noise process and define Z_t = μ + a_t + θ a_{t-1}.
Expectation: E[Z_t] = μ.
Variance: γ_0 = Var(Z_t) = (1 + θ²) σ².
Autocovariance: γ_1 = Cov(Z_t, Z_{t-1}) = θ σ².

8 MA(1) processes (cont) Autocovariance of higher order: γ_k = 0 for k > 1.
Autocorrelation: ρ_1 = θ / (1 + θ²), and ρ_k = 0 for k > 1.
The MA(1) process is covariance-stationary because its mean and autocovariances do not depend on t, and ergodic because Σ_k |γ_k| < ∞. If {a_t} were Gaussian, then {Z_t} would be ergodic for all moments. A numerical check follows below.
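A quick numerical check of these moments, as a sketch; θ = 0.5 and σ² = 1 are illustrative values, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma2, n = 0.5, 1.0, 50_000
a = rng.normal(0.0, np.sqrt(sigma2), n + 1)
z = a[1:] + theta * a[:-1]            # Z_t = a_t + theta * a_{t-1}

rho1_theory = theta / (1 + theta**2)  # = 0.4
zc = z - z.mean()
rho1_sample = np.dot(zc[:-1], zc[1:]) / np.dot(zc, zc)
print(rho1_theory, rho1_sample)       # the sample value is close to 0.4
# sample autocorrelations at lags k >= 2 are near zero, as the theory says
```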

9 Both processes share the same autocorrelation function
Plot the function ρ_1(θ) = θ / (1 + θ²): it takes values in [−0.5, 0.5], reaching ±0.5 at θ = ±1, and ρ_1(θ) = ρ_1(1/θ). The processes with parameters θ and 1/θ therefore share the same autocorrelation function, so the MA(1) is not uniquely identifiable, except for θ = ±1.

10 Invertibility Definition: An MA(q) process defined by the equation Z_t = θ(L) a_t
is said to be invertible if there exists a sequence of constants {π_j} with Σ_{j=0}^∞ |π_j| < ∞ and a_t = Σ_{j=0}^∞ π_j Z_{t-j}. Theorem: Let {Z_t} be an MA(q). Then {Z_t} is invertible if and only if θ(z) ≠ 0 for all |z| ≤ 1. The coefficients {π_j} are determined by the relation π(z) = Σ_{j=0}^∞ π_j z^j = 1/θ(z), |z| ≤ 1.

11 Identification of the MA(1)
If we identify the MA(1) through the autocorrelation structure, we need to decide which value of θ to choose: the one greater than one in absolute value, or the one less than one. Requiring the condition of invertibility (think about why) we will choose the value |θ| < 1. Another reason to choose the value less than one can be found by paying attention to the error variance of the two "equivalent" representations.

12 MA(q) Moments
Z_t = μ + a_t + θ_1 a_{t-1} + … + θ_q a_{t-q}, so E[Z_t] = μ, γ_0 = σ²(1 + θ_1² + … + θ_q²), γ_k = σ² Σ_{j=0}^{q−k} θ_j θ_{j+k} for k = 1, …, q (with θ_0 = 1), and γ_k = 0 for k > q.
The MA(q) is covariance-stationary and ergodic for the same reasons as an MA(1).

13 MA(∞) Is it covariance-stationary?
Z_t = μ + Σ_{j=0}^∞ ψ_j a_{t-j} (note the change of notation from θ to ψ).
The process is covariance-stationary provided that Σ_{j=0}^∞ ψ_j² < ∞ (square-summable sequence).

14 Some interesting results
Proposition 1. If Σ_{j=0}^∞ |ψ_j| < ∞ (absolutely summable), then Σ_{j=0}^∞ ψ_j² < ∞ (square summable). Proposition 2. If Σ_{j=0}^∞ |ψ_j| < ∞, then Σ_{k=0}^∞ |γ_k| < ∞, and hence the process is ergodic for the mean.

15 Proof 1. Since Σ_j |ψ_j| < ∞, we have |ψ_j| → 0, so there exists N such that |ψ_j| < 1 for all j > N. Then
(1) Σ_{j=0}^{N} ψ_j² is finite because N is finite, and
(2) Σ_{j>N} ψ_j² ≤ Σ_{j>N} |ψ_j| is finite because {ψ_j} is absolutely summable;
then Σ_{j=0}^∞ ψ_j² < ∞.

16 Proof 2. Since γ_k = σ² Σ_{j=0}^∞ ψ_j ψ_{j+k}, we have |γ_k| ≤ σ² Σ_j |ψ_j| |ψ_{j+k}|, and summing over k gives Σ_k |γ_k| ≤ σ² (Σ_j |ψ_j|)² < ∞, which is the absolute summability of the autocovariances needed for ergodicity for the mean.

17 AR(1) Z_t = c + φ Z_{t-1} + a_t. Using backward substitution:
Z_t = c(1 + φ + φ² + …) + a_t + φ a_{t-1} + φ² a_{t-2} + … (a geometric progression).
Remember: |φ| < 1 is the condition for stationarity and ergodicity.

18 AR(1) (cont) Hence, this AR(1) process has a stationary solution if |φ| < 1. Alternatively, consider the solution of the characteristic equation 1 − φz = 0: its root is z = 1/φ, and |φ| < 1 means |z| > 1, i.e. the roots of the characteristic equation lie outside of the unit circle.
Mean of a stationary AR(1): E[Z_t] = μ = c / (1 − φ).
Variance of a stationary AR(1): γ_0 = σ² / (1 − φ²).

19 Autocovariance of a stationary AR(1)
Rewrite the process as Z_t − μ = φ(Z_{t-1} − μ) + a_t; then γ_k = φ γ_{k-1} = φ^k γ_0.
Autocorrelation of a stationary AR(1): ACF ρ_k = φ^k, decaying geometrically.
PACF: from the Yule-Walker equations, φ_{11} = φ and φ_{kk} = 0 for k > 1; the PACF cuts off after lag 1. Make a graph of the autocorrelations of an AR(1), as in the sketch below.
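A sketch comparing the theoretical ACF ρ_k = φ^k with the sample ACF of a simulated path; φ = 0.8 and the seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.8, 100_000
a = rng.normal(size=n)
z = np.empty(n)
z[0] = a[0]
for t in range(1, n):
    z[t] = phi * z[t - 1] + a[t]      # Z_t = phi * Z_{t-1} + a_t

zc = z - z.mean()
denom = np.dot(zc, zc)
for k in range(1, 5):
    rho_k = np.dot(zc[:-k], zc[k:]) / denom
    print(k, phi**k, rho_k)           # the sample ACF decays like phi**k
```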

20 Causality and Stationarity
Definition: An AR(p) process defined by the equation φ(L) Z_t = a_t is said to be causal, or a causal function of {a_t}, if there exists a sequence of constants {ψ_j} with Σ_{j=0}^∞ |ψ_j| < ∞ and Z_t = Σ_{j=0}^∞ ψ_j a_{t-j}. Causality is equivalent to the condition φ(z) ≠ 0 for all |z| ≤ 1. Definition: A stationary solution {Z_t} of the equation φ(L) Z_t = a_t exists (and is also the unique stationary solution) if and only if φ(z) ≠ 0 for all |z| = 1. From now on we will be dealing only with causal AR models.

21 AR(2) Z_t = c + φ_1 Z_{t-1} + φ_2 Z_{t-2} + a_t.
Stationarity: study the roots of the characteristic equation 1 − φ_1 z − φ_2 z² = 0. To solve it, (a) multiply by −1 and (b) divide by φ_2, which gives the monic equation z² + (φ_1/φ_2) z − 1/φ_2 = 0.

22 For a stationary causal solution it is required that both roots lie outside of the unit circle, |z_1| > 1 and |z_2| > 1.
Necessary conditions for a stationary causal solution: φ_1 + φ_2 < 1, φ_2 − φ_1 < 1, and |φ_2| < 1.
Roots can be real or complex:
(1) Real roots if φ_1² + 4φ_2 ≥ 0.
(2) Complex roots if φ_1² + 4φ_2 < 0. A quick check is sketched below.
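A small helper, as a sketch, that tests the triangle conditions and, equivalently, the root condition via numpy.roots; the three parameter pairs are illustrative:

```python
import numpy as np

def ar2_is_stationary(phi1, phi2):
    """Check the stationarity triangle and, equivalently, the roots of 1 - phi1*z - phi2*z^2."""
    triangle = (phi1 + phi2 < 1) and (phi2 - phi1 < 1) and (abs(phi2) < 1)
    # np.roots takes coefficients from highest power down: -phi2*z^2 - phi1*z + 1
    roots = np.roots([-phi2, -phi1, 1.0])
    outside = bool(np.all(np.abs(roots) > 1))
    return triangle, outside

print(ar2_is_stationary(0.5, 0.3))    # (True, True): real roots, stationary
print(ar2_is_stationary(1.0, -0.5))   # (True, True): complex roots, stationary
print(ar2_is_stationary(0.9, 0.3))    # (False, False): a root inside the unit circle
```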

23 [Figure: the stationarity triangle in the (φ_1, φ_2) plane, with φ_1 ranging over (−2, 2) and φ_2 over (−1, 1); the parabola φ_1² + 4φ_2 = 0 separates the region of real roots (above) from complex roots (below).]

24 Mean of AR(2): E[Z_t] = μ = c / (1 − φ_1 − φ_2).
Variance and Autocorrelations of AR(2): from the Yule-Walker equations, ρ_1 = φ_1 / (1 − φ_2), ρ_2 = φ_2 + φ_1² / (1 − φ_2), and γ_0 = (1 − φ_2) σ² / [(1 + φ_2)((1 − φ_2)² − φ_1²)].

25 Difference equation: for k ≥ 2 the autocorrelations satisfy ρ_k = φ_1 ρ_{k-1} + φ_2 ρ_{k-2}, giving different shapes according to the roots, real or complex: a mixture of decaying exponentials for real roots, damped sine waves for complex roots (compare the correlograms of an AR(2) for both cases, as in the sketch below).
Partial autocorrelations: from the Yule-Walker equations; the PACF of an AR(2) cuts off after lag 2 (exercise: prove it).
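A sketch of the Yule-Walker difference equation in code, showing the two shapes; the parameter pairs (0.5, 0.3) for real roots and (1.0, -0.5) for complex roots are illustrative:

```python
import numpy as np

def ar2_acf(phi1, phi2, nlags):
    """ACF of a stationary AR(2) from rho_k = phi1*rho_{k-1} + phi2*rho_{k-2}."""
    rho = np.empty(nlags + 1)
    rho[0] = 1.0
    rho[1] = phi1 / (1 - phi2)        # Yule-Walker starting value
    for k in range(2, nlags + 1):
        rho[k] = phi1 * rho[k - 1] + phi2 * rho[k - 2]
    return rho

print(ar2_acf(0.5, 0.3, 8))   # real roots: smooth mixture-of-exponentials decay
print(ar2_acf(1.0, -0.5, 8))  # complex roots: damped sine-wave oscillation
```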

26 AR(p) Stationarity: all p roots of the characteristic equation φ(z) = 0 lie outside of the unit circle.
ACF: ρ_k = φ_1 ρ_{k-1} + … + φ_p ρ_{k-p}. Writing this for k = 1, …, p gives a system to solve for the first p autocorrelations: p unknowns and p equations. The ACF decays as a mixture of exponentials and/or damped sine waves, depending on real/complex roots.
PACF: cuts off after lag p.

27 Relationship between AR(p) and MA(q)
A stationary AR(p) can be written as an MA(∞): Z_t = φ(L)^{-1} a_t = ψ(L) a_t.
Example: for an AR(1), ψ_j = φ^j, as in the sketch below.
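A sketch using the arma2ma helper from statsmodels (assuming statsmodels is installed; φ = 0.8 is an illustrative value) to recover ψ_j = φ^j for an AR(1):

```python
import numpy as np
from statsmodels.tsa.arima_process import arma2ma

phi = 0.8
ar = np.array([1.0, -phi])   # coefficients of phi(L) = 1 - 0.8 L
ma = np.array([1.0])         # no MA part

psi = arma2ma(ar, ma, lags=6)
print(psi)                   # [1, 0.8, 0.64, 0.512, ...] = phi**j, the MA(inf) weights
```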

28 Invertible MA(q)
Conversely, an invertible MA(q) can be written as an AR(∞): π(L) Z_t = a_t with π(L) = θ(L)^{-1}.
Exercise: write an example, i.e. an MA(2), and proceed as in the previous example, calculating the π_j.

29 ARMA(p,q) φ(L) Z_t = c + θ(L) a_t, i.e. Z_t = c + φ_1 Z_{t-1} + … + φ_p Z_{t-p} + a_t + θ_1 a_{t-1} + … + θ_q a_{t-q}: stationary (causal) if all roots of φ(z) = 0 lie outside the unit circle, invertible if all roots of θ(z) = 0 lie outside the unit circle.

30 Autocorrelations of ARMA(p,q)
Multiply by Z_{t-k} and take expectations: for k > q the MA terms drop out and γ_k = φ_1 γ_{k-1} + … + φ_p γ_{k-p}, so after lag q the ACF decays like that of an AR(p) (picture the autocorrelograms of an ARMA).
PACF: behaves like that of an MA, decaying gradually rather than cutting off.

31 ARMA(1,1) Z_t = c + φ Z_{t-1} + a_t + θ a_{t-1}, with |φ| < 1 for stationarity and |θ| < 1 for invertibility.

32 ACF of ARMA(1,1) Multiplying by Z_{t-k} and taking expectations:
γ_0 = φ γ_1 + σ²(1 + θ(φ + θ)),
γ_1 = φ γ_0 + θ σ²,
γ_k = φ γ_{k-1} for k ≥ 2.

33 ACF: ρ_1 = (1 + φθ)(φ + θ) / (1 + θ² + 2φθ), and ρ_k = φ ρ_{k-1} for k ≥ 2, so the ACF decays geometrically from ρ_1. PACF: decays gradually, like that of an MA. A numerical check follows below.
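A check of the closed form against the theoretical ACF computed by statsmodels' ArmaProcess, as a sketch; φ = 0.7 and θ = 0.4 are illustrative values:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

phi, theta = 0.7, 0.4
rho1 = (1 + phi * theta) * (phi + theta) / (1 + theta**2 + 2 * phi * theta)

proc = ArmaProcess(ar=np.array([1.0, -phi]), ma=np.array([1.0, theta]))
acf = proc.acf(lags=5)            # theoretical ACF, acf[0] = 1
print(rho1, acf[1])               # the two values agree
print(acf[2] / acf[1], phi)       # for k >= 2, rho_k = phi * rho_{k-1}
```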

34 ACF and PACF of an ARMA(1,1)

35 ACF and PACF of an MA(2)

36 ACF and PACF of an AR(2)
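The three figures can be reproduced approximately with statsmodels' plotting helpers; this is a sketch with illustrative parameter values, not the coefficients behind the original slides:

```python
import matplotlib.pyplot as plt
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

models = {
    "ARMA(1,1)": ArmaProcess([1.0, -0.7], [1.0, 0.4]),
    "MA(2)":     ArmaProcess([1.0], [1.0, 0.6, 0.3]),
    "AR(2)":     ArmaProcess([1.0, -0.5, -0.3], [1.0]),
}
fig, axes = plt.subplots(len(models), 2, figsize=(10, 9))
for row, (name, proc) in zip(axes, models.items()):
    z = proc.generate_sample(nsample=2000)          # simulate one path
    plot_acf(z, lags=20, ax=row[0], title=f"ACF of {name}")
    plot_pacf(z, lags=20, ax=row[1], title=f"PACF of {name}")
plt.tight_layout()
plt.show()
```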

37 Problems P1: Determine which of the following ARMA processes are causal and which of them are invertible (in each case {a_t} denotes white noise). P2: Show that the two MA(1) processes have the same autocovariance functions.

38 Problems (cont) P3: Let {Z_t} denote the unique stationary solution of the autoregressive equations Z_t = φ Z_{t-1} + a_t, where |φ| > 1. Then Z_t is given by the expression Z_t = −Σ_{j=1}^∞ φ^{-j} a_{t+j}. Define the new sequence W_t = Z_t − (1/φ) Z_{t-1} and show that {W_t} is white noise. These calculations show that {Z_t} is the (unique stationary) solution of the causal AR equations Z_t = (1/φ) Z_{t-1} + W_t.

39 Problems (cont) P4: Let Y_t be the AR(1) plus noise time series defined by Y_t = Z_t + W_t, where {W_t} ~ WN(0, σ_w²), {Z_t} is the AR(1) process Z_t − φ Z_{t-1} = a_t with {a_t} ~ WN(0, σ_a²), and E[W_s a_t] = 0 for all s and t. Show that {Y_t} is stationary and find its autocovariance function. Show that the time series U_t = Y_t − φ Y_{t-1} is an MA(1). Conclude from the previous point that {Y_t} is an ARMA(1,1) and express the three parameters of this model in terms of φ, σ_a² and σ_w².

40 Appendix: Lag Operator L
Definition: L Z_t = Z_{t-1}, and more generally L^k Z_t = Z_{t-k}.
Properties: the operator is linear, L(a Z_t + b Y_t) = a Z_{t-1} + b Y_{t-1}; powers multiply, L^j L^k = L^{j+k}; applied to a constant, L c = c.
Examples: (1 − L) Z_t = Z_t − Z_{t-1} = ΔZ_t; (1 − φL) Z_t = Z_t − φ Z_{t-1}.

41 Appendix: Inverse Operator
Definition: (1 − φL)^{-1} = lim_{n→∞} (1 + φL + φ²L² + … + φ^n L^n), provided |φ| < 1.
Note that if |φ| ≥ 1 this definition does not hold because the limit does not exist.
Example: (1 − φL)^{-1} applied to a_t gives Σ_{j=0}^∞ φ^j a_{t-j}.

42 Appendix: Inverse Operator (cont)
Suppose you have the ARMA model φ(L) Z_t = θ(L) a_t and want to find the MA representation Z_t = ψ(L) a_t. You could try to crank out φ(L)^{-1} θ(L) directly, but that's not much fun. Instead you could posit ψ(L) = ψ_0 + ψ_1 L + ψ_2 L² + … and match terms in L^j in φ(L) ψ(L) = θ(L) to make sure this works.
Example: Suppose (1 − φL) Z_t = (1 + θL) a_t. Multiplying both polynomials and matching powers of L gives ψ_0 = 1, ψ_1 = φ + θ, and ψ_j = φ ψ_{j-1} for j ≥ 2, which you can easily solve recursively for the ψ_j. TRY IT!!! (A sketch follows below.)
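A sketch of that recursion for a general ARMA(p,q), matching powers of L in φ(L)ψ(L) = θ(L); the function name and the (0.7, 0.4) example are illustrative:

```python
import numpy as np

def psi_weights(phi, theta, nweights):
    """Solve phi(L) * psi(L) = theta(L) by matching powers of L.

    phi:   [1, -phi_1, ..., -phi_p]   coefficients of phi(L)
    theta: [1, theta_1, ..., theta_q] coefficients of theta(L)
    """
    psi = np.zeros(nweights)
    for j in range(nweights):
        psi[j] = theta[j] if j < len(theta) else 0.0
        # move the already-known lower-order psi terms to the right-hand side
        for i in range(1, min(j, len(phi) - 1) + 1):
            psi[j] -= phi[i] * psi[j - i]
    return psi

# ARMA(1,1) with phi = 0.7, theta = 0.4: psi_0 = 1, psi_j = (phi + theta) * phi**(j-1)
print(psi_weights([1.0, -0.7], [1.0, 0.4], 6))
```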

43 Appendix: Factoring Lag Polynomials
Suppose we need to invert the polynomial 1 − φ_1 L − φ_2 L². We can do that by factoring it:
1 − φ_1 L − φ_2 L² = (1 − λ_1 L)(1 − λ_2 L), with λ_1 + λ_2 = φ_1 and λ_1 λ_2 = −φ_2.
Now we need to invert each factor and multiply:
(1 − φ_1 L − φ_2 L²)^{-1} = (1 − λ_1 L)^{-1} (1 − λ_2 L)^{-1} = (Σ_j λ_1^j L^j)(Σ_k λ_2^k L^k).
Check the last expression!!!!

44 Appendix: Partial Fraction Tricks
There is a prettier way to express the last inversion by using the partial fraction tricks. Find the constants a and b such that
1 / ((1 − λ_1 L)(1 − λ_2 L)) = a / (1 − λ_1 L) + b / (1 − λ_2 L).
The numerator on the right hand side, a(1 − λ_2 L) + b(1 − λ_1 L), must be 1, so a + b = 1 and a λ_2 + b λ_1 = 0, giving a = λ_1 / (λ_1 − λ_2) and b = −λ_2 / (λ_1 − λ_2). This is checked numerically below.
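A numerical verification of the partial-fraction identity, as a sketch with illustrative roots λ_1 = 0.8 and λ_2 = 0.5:

```python
l1, l2 = 0.8, 0.5                      # assumed factor roots lambda_1, lambda_2
a = l1 / (l1 - l2)                     # from a + b = 1 and a*l2 + b*l1 = 0
b = -l2 / (l1 - l2)

# check 1/((1-l1*x)(1-l2*x)) = a/(1-l1*x) + b/(1-l2*x) at a test point
x = 0.3
lhs = 1.0 / ((1 - l1 * x) * (1 - l2 * x))
rhs = a / (1 - l1 * x) + b / (1 - l2 * x)
print(lhs, rhs)                        # identical up to rounding
```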

45 Appendix: More on Invertibility
Consider an MA(1): Z_t = (1 + θL) a_t.
Definition: An MA process is said to be invertible if it can be written as an AR(∞).
For an MA(1) to be invertible we require |θ| < 1, so that (1 + θL)^{-1} Z_t = a_t converges. For an MA(q) to be invertible, all roots of the characteristic equation should lie outside of the unit circle.
MA processes have an invertible and a non-invertible representation. Invertible representation: the optimal forecast depends on past information. Non-invertible representation: the forecast depends on the future!!!

