Dates for term tests: Friday, February 07 and Friday, March 07

The Moving Average time series of order q, MA(q). Let {x_t | t ∈ T} be defined by the equation
x_t = α_0 u_t + α_1 u_{t−1} + α_2 u_{t−2} + ... + α_q u_{t−q} + μ
where {u_t | t ∈ T} denotes a white noise time series with variance σ². Then {x_t | t ∈ T} is called a Moving Average time series of order q, denoted MA(q).

The mean value for an MA(q) time series: E(x_t) = μ.
The autocovariance function for an MA(q) time series:
σ(h) = σ²(α_0α_h + α_1α_{h+1} + ... + α_{q−h}α_q) for h = 0, 1, ..., q, and σ(h) = 0 for h > q.
The autocorrelation function for an MA(q) time series: ρ(h) = σ(h)/σ(0).

Comment: the autocorrelation function for an MA(q) time series “cuts off” to zero after lag q.
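The cut-off is easy to see numerically. Below is a minimal sketch (mine, not from the slides) that simulates an MA(2) series with hypothetical coefficients and prints the sample ACF; the estimates beyond lag 2 should be close to zero.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation r(h) = c(h)/c(0) for h = 0..max_lag."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    c0 = np.mean(xc ** 2)
    return np.array([np.mean(xc[h:] * xc[:len(xc) - h]) / c0
                     for h in range(max_lag + 1)])

rng = np.random.default_rng(0)
alphas = [1.0, 0.6, 0.3]                  # hypothetical a0, a1, a2 for an MA(2)
mu, sigma, n, q = 10.0, 1.0, 50_000, 2
u = rng.normal(0.0, sigma, n + q)
# x_t = a0*u_t + a1*u_{t-1} + a2*u_{t-2} + mu
x = mu + sum(a * u[q - i : len(u) - i] for i, a in enumerate(alphas))

print(np.round(sample_acf(x, 6), 3))      # lags 3..6 should be ~ 0
```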

The Autoregressive time series of order p, AR(p). Let {x_t | t ∈ T} be defined by the equation
x_t = β_1 x_{t−1} + β_2 x_{t−2} + ... + β_p x_{t−p} + δ + u_t
where {u_t | t ∈ T} is a white noise time series with variance σ². Then {x_t | t ∈ T} is called an Autoregressive time series of order p, denoted AR(p).

The mean value of a stationary AR(p) series: μ = E(x_t) = δ/(1 − β_1 − β_2 − ... − β_p).
The Autocovariance function σ(h) of a stationary AR(p) series satisfies the equations:
σ(0) = β_1σ(1) + β_2σ(2) + ... + β_pσ(p) + σ²
σ(h) = β_1σ(h−1) + β_2σ(h−2) + ... + β_pσ(h−p) for h > 0.

The Autocorrelation function ρ(h) = σ(h)/σ(0) of a stationary AR(p) series satisfies the equations:
ρ(h) = β_1ρ(h−1) + β_2ρ(h−2) + ... + β_pρ(h−p) for h > 0 (and in particular for h > p), with ρ(0) = 1 and ρ(−h) = ρ(h).

or: ρ(h) = c_1(1/r_1)^h + c_2(1/r_2)^h + ... + c_p(1/r_p)^h, where r_1, r_2, ..., r_p are the roots of the polynomial β(x) = 1 − β_1x − β_2x² − ... − β_px^p and c_1, c_2, ..., c_p are determined by using the starting values of the sequence ρ(h).

Conditions for stationarity of an Autoregressive time series of order p, AR(p)

For an AR(p) time series, consider the polynomial β(x) = 1 − β_1x − β_2x² − ... − β_px^p with roots r_1, r_2, ..., r_p. Then {x_t | t ∈ T} is stationary if |r_i| > 1 for all i. If |r_i| < 1 for at least one i then {x_t | t ∈ T} exhibits deterministic behaviour. If |r_i| ≥ 1 for all i and |r_i| = 1 for at least one i then {x_t | t ∈ T} exhibits non-stationary random behaviour.
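The condition is easy to check in practice. A minimal sketch (mine), using the convention above that the roots are those of β(x) = 1 − β_1x − ... − β_px^p:

```python
import numpy as np

def ar_is_stationary(betas):
    """True if all roots of beta(x) = 1 - b1*x - ... - bp*x^p
    lie outside the unit circle (|r_i| > 1)."""
    # np.roots expects coefficients ordered from the highest power down.
    coeffs = [-b for b in reversed(betas)] + [1.0]
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0)), roots

# The AR(2) from Example 2 later in these notes: beta1 = 0.4, beta2 = 0.1.
ok, roots = ar_is_stationary([0.4, 0.1])
print(ok, np.abs(roots))   # True, moduli ~ 5.74 and 1.74
```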

Since ρ(h) = c_1(1/r_1)^h + ... + c_p(1/r_p)^h and |r_1| > 1, |r_2| > 1, ..., |r_p| > 1 for a stationary AR(p) series, ρ(h) → 0 as h → ∞; i.e. the autocorrelation function ρ(h) of a stationary AR(p) series “tails off” to zero.

Special Cases: the AR(1) time series. Let {x_t | t ∈ T} be defined by the equation
x_t = β_1 x_{t−1} + δ + u_t.

Consider the polynomial β(x) = 1 − β_1x with root r_1 = 1/β_1. {x_t | t ∈ T} is stationary if |r_1| > 1, i.e. |β_1| < 1. If |r_1| < 1, i.e. |β_1| > 1, then {x_t | t ∈ T} exhibits deterministic behaviour. If |r_1| = 1, i.e. |β_1| = 1, then {x_t | t ∈ T} exhibits non-stationary random behaviour.

Special Cases: the AR(2) time series. Let {x_t | t ∈ T} be defined by the equation
x_t = β_1 x_{t−1} + β_2 x_{t−2} + δ + u_t.

Consider the polynomial β(x) = 1 − β_1x − β_2x², where r_1 and r_2 are the roots of β(x). {x_t | t ∈ T} is stationary if |r_1| > 1 and |r_2| > 1. This is true if β_1 + β_2 < 1, β_2 − β_1 < 1 and β_2 > −1. These inequalities define a triangular region for β_1 and β_2. If |r_i| < 1 for at least one i then {x_t | t ∈ T} exhibits deterministic behaviour. If |r_i| ≥ 1 for i = 1, 2 and |r_i| = 1 for at least one i then {x_t | t ∈ T} exhibits non-stationary random behaviour.

Patterns of the ACF and PACF of AR(2) Time Series. (Figure: the stationarity triangle in the (β_1, β_2) plane; in the shaded region the roots of the AR operator are complex.)

The Mixed Autoregressive Moving Average time series of order (p, q): the ARMA(p,q) series

The Mixed Autoregressive Moving Average time series of order (p, q), ARMA(p,q). Let β_1, β_2, ..., β_p, α_1, α_2, ..., α_q, δ denote p + q + 1 numbers (parameters). Let {u_t | t ∈ T} denote a white noise time series with variance σ² (independent, mean 0, variance σ²). Let {x_t | t ∈ T} be defined by the equation
x_t = β_1 x_{t−1} + β_2 x_{t−2} + ... + β_p x_{t−p} + δ + u_t + α_1 u_{t−1} + α_2 u_{t−2} + ... + α_q u_{t−q}.
Then {x_t | t ∈ T} is called a Mixed Autoregressive Moving Average time series, an ARMA(p,q) series.

Mean value, variance, autocovariance function, autocorrelation function of an ARMA(p,q) series

Similar to an AR(p) time series, for certain values of the parameters β_1, ..., β_p an ARMA(p,q) time series may not be stationary. An ARMA(p,q) time series is stationary if the roots (r_1, r_2, ..., r_p) of the polynomial β(x) = 1 − β_1x − β_2x² − ... − β_px^p satisfy |r_i| > 1 for all i.

Assume that the ARMA(p,q) time series {x_t | t ∈ T} is stationary. Let μ = E(x_t). Then μ = β_1μ + β_2μ + ... + β_pμ + δ, or
μ = δ/(1 − β_1 − β_2 − ... − β_p).

The Autocovariance function, σ(h), of a stationary mixed autoregressive moving average time series {x_t | t ∈ T} can be determined from the equation
x_t − μ = β_1(x_{t−1} − μ) + ... + β_p(x_{t−p} − μ) + u_t + α_1u_{t−1} + ... + α_qu_{t−q}.
Thus, multiplying both sides by (x_{t−h} − μ) and taking expectations:

Hence σ(h) = β_1σ(h−1) + ... + β_pσ(h−p) + E[(u_t + α_1u_{t−1} + ... + α_qu_{t−q})(x_{t−h} − μ)].

We need to calculate the cross-covariance function σ_ux(h) = E[u_t(x_{t+h} − μ)]. Since x_{t+h} does not depend on future noise, σ_ux(h) = 0 for h < 0 (h = −1, −2, −3, ...). For h ≥ 0, multiplying the defining equation by u_t and taking expectations gives the recursion
σ_ux(h) = β_1σ_ux(h−1) + ... + β_pσ_ux(h−p) + α_hσ² (with α_0 = 1 and α_j = 0 for j > q).

The autocovariance function σ(h) satisfies: for h = 0, 1, ..., q:
σ(h) = β_1σ(h−1) + ... + β_pσ(h−p) + α_hσ_ux(0) + α_{h+1}σ_ux(1) + ... + α_qσ_ux(q−h) (with α_0 = 1);
for h > q:
σ(h) = β_1σ(h−1) + ... + β_pσ(h−p).

We then use the first (p + 1) equations to determine σ(0), σ(1), σ(2), ..., σ(p). We use the subsequent equations to determine σ(h) for h > p.

Example: the autocovariance function, σ(h), for an ARMA(1,1) time series. For h = 0, 1:
σ(0) = β_1σ(1) + σ²(1 + α_1β_1 + α_1²)
σ(1) = β_1σ(0) + α_1σ²
or, for h > 1: σ(h) = β_1σ(h−1).

Substituting σ(1) into the first equation we get σ(0) = β_1(β_1σ(0) + α_1σ²) + σ²(1 + α_1β_1 + α_1²), or
σ(0) = σ²(1 + 2α_1β_1 + α_1²)/(1 − β_1²).
Substituting σ(0) into the second equation we get
σ(1) = σ²(α_1 + β_1)(1 + α_1β_1)/(1 − β_1²).

For h > 1: σ(h) = β_1σ(h−1) = β_1^{h−1}σ(1).
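These closed forms can be sanity-checked by simulation. A minimal sketch (the parameter values are hypothetical, chosen only for illustration):

```python
import numpy as np

beta1, alpha1, sigma = 0.4, 0.3, 1.0
rng = np.random.default_rng(1)
n = 200_000
u = rng.normal(0.0, sigma, n)
x = np.zeros(n)
for t in range(1, n):   # ARMA(1,1): x_t = b1*x_{t-1} + u_t + a1*u_{t-1}
    x[t] = beta1 * x[t - 1] + u[t] + alpha1 * u[t - 1]

def sample_autocov(x, h):
    xc = x - x.mean()
    return np.mean(xc[h:] * xc[:len(xc) - h])

s0 = sigma**2 * (1 + 2 * alpha1 * beta1 + alpha1**2) / (1 - beta1**2)
s1 = sigma**2 * (alpha1 + beta1) * (1 + alpha1 * beta1) / (1 - beta1**2)
print(sample_autocov(x, 0), "vs", s0)   # both ~ 1.583
print(sample_autocov(x, 1), "vs", s1)   # both ~ 0.933
```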

The Backshift Operator B

Consider the time series {x_t : t ∈ T} and let M denote the linear space spanned by the set of random variables {x_t : t ∈ T} (i.e. all linear combinations of elements of {x_t : t ∈ T} and their limits in mean square). M is a vector space. Let B be an operator on M defined by Bx_t = x_{t−1}. B is called the backshift operator.

Note: We can also define the operator B^k with B^k x_t = B(B(...Bx_t)) = x_{t−k}. The polynomial operator p(B) = c_0I + c_1B + c_2B² + ... + c_kB^k can also be defined by the equation
p(B)x_t = (c_0I + c_1B + c_2B² + ... + c_kB^k)x_t
= c_0Ix_t + c_1Bx_t + c_2B²x_t + ... + c_kB^k x_t
= c_0x_t + c_1x_{t−1} + c_2x_{t−2} + ... + c_kx_{t−k}.

The power series operator p(B) = c_0I + c_1B + c_2B² + ... can also be defined by the equation
p(B)x_t = (c_0I + c_1B + c_2B² + ...)x_t = c_0x_t + c_1x_{t−1} + c_2x_{t−2} + ...
If p(B) = c_0I + c_1B + c_2B² + ... and q(B) = b_0I + b_1B + b_2B² + ... are such that p(B)q(B) = I, i.e. p(B)q(B)x_t = Ix_t = x_t, then q(B) is denoted by [p(B)]^{−1}.

Other operators closely related to B: F = B^{−1}, the forward shift operator, defined by Fx_t = B^{−1}x_t = x_{t+1}, and D = I − B, the first difference operator, defined by Dx_t = (I − B)x_t = x_t − x_{t−1}.

The equation for an MA(q) time series, x_t = α_0u_t + α_1u_{t−1} + α_2u_{t−2} + ... + α_qu_{t−q} + μ, can be written x_t = α(B)u_t + μ, where α(B) = α_0I + α_1B + α_2B² + ... + α_qB^q.

The equation for an AR(p) time series, x_t = β_1x_{t−1} + β_2x_{t−2} + ... + β_px_{t−p} + δ + u_t, can be written β(B)x_t = δ + u_t, where β(B) = I − β_1B − β_2B² − ... − β_pB^p.

The equation for an ARMA(p,q) time series, x_t = β_1x_{t−1} + β_2x_{t−2} + ... + β_px_{t−p} + δ + u_t + α_1u_{t−1} + α_2u_{t−2} + ... + α_qu_{t−q}, can be written β(B)x_t = α(B)u_t + δ, where α(B) = α_0I + α_1B + α_2B² + ... + α_qB^q and β(B) = I − β_1B − β_2B² − ... − β_pB^p.
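As a small illustration (my sketch, not from the slides), applying a polynomial in B to a series is just forming a weighted sum of lagged values:

```python
import numpy as np

def apply_poly_B(coeffs, x):
    """Apply p(B) = c0*I + c1*B + ... + ck*B^k to a series x.
    Returns p(B)x_t for t = k, ..., len(x)-1 (earlier values would
    need observations before the start of the series)."""
    k = len(coeffs) - 1
    x = np.asarray(x, dtype=float)
    return sum(c * x[k - j : len(x) - j] for j, c in enumerate(coeffs))

x = np.arange(10.0)
# The first difference operator D = I - B:
print(apply_poly_B([1.0, -1.0], x))   # x_t - x_{t-1} = 1 everywhere
```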

Some comments about the backshift operator B: it is a useful notational device, allowing us to write the equations for MA(q), AR(p) and ARMA(p,q) time series in a very compact form; it is also useful for making certain computations related to the time series described above.

The partial autocorrelation function: a useful tool in time series analysis

The partial autocorrelation function. Recall that the autocorrelation function of an AR(p) process satisfies the equation:
ρ_x(h) = β_1ρ_x(h−1) + β_2ρ_x(h−2) + ... + β_pρ_x(h−p).
For 1 ≤ h ≤ p these equations (the Yule-Walker equations) become:
ρ_x(1) = β_1 + β_2ρ_x(1) + ... + β_pρ_x(p−1)
ρ_x(2) = β_1ρ_x(1) + β_2 + ... + β_pρ_x(p−2)
...
ρ_x(p) = β_1ρ_x(p−1) + β_2ρ_x(p−2) + ... + β_p.

In matrix notation: R_p β = r_p, where R_p is the p × p matrix with (i, j) entry ρ_x(|i − j|) (so 1 on the diagonal), β = (β_1, ..., β_p)′ and r_p = (ρ_x(1), ..., ρ_x(p))′. These equations can be used to find β_1, β_2, ..., β_p if the time series is known to be AR(p) and the autocorrelation function ρ_x(h) is known.

If the time series is not autoregressive, the equations can still be used to solve for β_1, β_2, ..., β_p for any value of p ≥ 1. In this case β_1^{(p)}, ..., β_p^{(p)} are the values that minimize the mean square error
E{[(x_t − μ) − β_1(x_{t−1} − μ) − ... − β_p(x_{t−p} − μ)]²}.
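In matrix form this is a small linear solve. A sketch (the function name is mine) using numpy:

```python
import numpy as np

def yule_walker_betas(rho):
    """Solve R_p b = r_p for b1..bp, given rho = [rho(1), ..., rho(p)]
    (rho(0) = 1 is implied)."""
    rho = np.asarray(rho, dtype=float)
    p = len(rho)
    full = np.concatenate(([1.0], rho))   # rho(0), rho(1), ..., rho(p)
    # R_p[i, j] = rho(|i - j|):
    R = full[np.abs(np.subtract.outer(np.arange(p), np.arange(p)))]
    return np.linalg.solve(R, rho)

# ACF values of the AR(2) in Example 2 below: rho(1) = 4/9, rho(2) = 5/18.
print(yule_walker_betas([4/9, 5/18]))     # recovers [0.4, 0.1]
```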

Definition: the partial autocorrelation function at lag k is defined to be Φ_kk = β_k^{(k)}, the last coefficient obtained by solving the Yule-Walker equations of order k. Using Cramer’s Rule, Φ_kk = det(R_k*)/det(R_k), where R_k is the k × k autocorrelation matrix above and R_k* is R_k with its last column replaced by (ρ(1), ..., ρ(k))′.

Comment: the partial autocorrelation function Φ_kk is determined from the autocorrelation function ρ(h). The partial autocorrelation function at lag k, Φ_kk, is the last autoregressive parameter, β_k^{(k)}, if the series were assumed to be an AR(k) series. If the series is an AR(p) series then Φ_pp = β_p. An AR(p) series is also an AR(k) series with k > p, with the autoregressive parameters equal to zero after p.

Some more comments: the partial autocorrelation function at lag k, Φ_kk, can be interpreted as a corrected autocorrelation between x_t and x_{t−k}, conditioning on the intervening variables x_{t−1}, x_{t−2}, ..., x_{t−k+1}. If the time series is an AR(p) time series then Φ_kk = 0 for k > p. If the time series is an MA(q) time series then ρ_x(h) = 0 for h > q.

A General Recursive Formula for Autoregressive Parameters and the Partial Autocorrelation function (PACF)

Let β_1^{(k)}, β_2^{(k)}, ..., β_k^{(k)} denote the autoregressive parameters of order k satisfying the Yule-Walker equations of order k.

Then it can be shown that:
Φ_{k+1,k+1} = β_{k+1}^{(k+1)} = [ρ(k+1) − β_1^{(k)}ρ(k) − β_2^{(k)}ρ(k−1) − ... − β_k^{(k)}ρ(1)] / [1 − β_1^{(k)}ρ(1) − β_2^{(k)}ρ(2) − ... − β_k^{(k)}ρ(k)]
and
β_i^{(k+1)} = β_i^{(k)} − β_{k+1}^{(k+1)}β_{k+1−i}^{(k)} for i = 1, 2, ..., k.

Proof: Write the Yule-Walker equations of order k + 1 in partitioned matrix form:
R_k b + β_{k+1}^{(k+1)} Aρ_k = ρ_k
(Aρ_k)′ b + β_{k+1}^{(k+1)} = ρ(k+1)
where R_k is the k × k autocorrelation matrix, ρ_k = (ρ(1), ..., ρ(k))′, b = (β_1^{(k+1)}, ..., β_k^{(k+1)})′, and A is the matrix that reverses the order of the components of a vector (so A² = I, AR_kA = R_k and hence R_k^{−1}A = AR_k^{−1}).

The equations for the order-k parameters β^{(k)} = (β_1^{(k)}, ..., β_k^{(k)})′ are R_kβ^{(k)} = ρ_k.

Multiplying the first block of equations by R_k^{−1}:
b = R_k^{−1}ρ_k − β_{k+1}^{(k+1)} R_k^{−1}Aρ_k = β^{(k)} − β_{k+1}^{(k+1)} Aβ^{(k)},
or, componentwise, β_i^{(k+1)} = β_i^{(k)} − β_{k+1}^{(k+1)} β_{k+1−i}^{(k)}.

Substituting this into the second equation:
(Aρ_k)′β^{(k)} − β_{k+1}^{(k+1)}(Aρ_k)′Aβ^{(k)} + β_{k+1}^{(k+1)} = ρ(k+1).

Hence, since (Aρ_k)′β^{(k)} = Σ_{i=1}^{k} β_i^{(k)}ρ(k+1−i) and (Aρ_k)′Aβ^{(k)} = ρ_k′β^{(k)} = Σ_{i=1}^{k} β_i^{(k)}ρ(i),
β_{k+1}^{(k+1)} = [ρ(k+1) − Σ_{i=1}^{k} β_i^{(k)}ρ(k+1−i)] / [1 − Σ_{i=1}^{k} β_i^{(k)}ρ(i)].
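The recursion translates directly into code. A sketch (the function name is mine) returning Φ_kk for k = 1, ..., K given ρ(1), ..., ρ(K):

```python
import numpy as np

def durbin_levinson_pacf(rho):
    """PACF phi_kk for k = 1..len(rho), from rho = [rho(1), rho(2), ...]."""
    rho = np.asarray(rho, dtype=float)
    beta = np.array([rho[0]])            # order-1 coefficients
    pacf = [rho[0]]
    for k in range(1, len(rho)):
        # rho[k-1::-1] is (rho(k), ..., rho(1)); rho[:k] is (rho(1), ..., rho(k)).
        phi = (rho[k] - beta @ rho[k - 1::-1]) / (1.0 - beta @ rho[:k])
        beta = np.append(beta - phi * beta[::-1], phi)
        pacf.append(phi)
    return np.array(pacf)

# Check: an AR(1) with rho(h) = 0.5**h has a PACF that cuts off after lag 1.
print(durbin_levinson_pacf([0.5, 0.25, 0.125]))   # [0.5, 0.0, 0.0]
```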

Some Examples

Example 1: an MA(1) time series. Suppose that {x_t | t ∈ T} satisfies the following equation:
x_t = 12.0 + u_t + 0.5u_{t−1}
where {u_t | t ∈ T} is white noise with σ = 1.1. Find: the mean of the series, the variance of the series, the autocorrelation function, and the partial autocorrelation function.

Solution: {x_t | t ∈ T} satisfies x_t = 12.0 + u_t + 0.5u_{t−1}. Thus the mean of the series is μ = 12.0. The autocovariance function for an MA(1) is
σ(0) = σ²(1 + α_1²), σ(1) = σ²α_1, σ(h) = 0 for h > 1.

Thus the variance of the series is σ(0) = (1.1)²(1 + 0.5²) = 1.5125, and the autocorrelation function is
ρ(0) = 1, ρ(1) = α_1/(1 + α_1²) = 0.5/1.25 = 0.4, ρ(h) = 0 for h > 1.

The partial autocorrelation function at lag k works out to
Φ_kk = −(−α_1)^k(1 − α_1²)/(1 − α_1^{2(k+1)}).
Thus, with α_1 = 0.5: Φ_11 = 0.4, Φ_22 ≈ −0.190, Φ_33 ≈ 0.094, Φ_44 ≈ −0.047, ...; i.e. the PACF of an MA(1) series tails off, alternating in sign.

(Graph: the partial autocorrelation function Φ_kk.)
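Numerically (a sketch; the closed form used here is the standard MA(1) PACF result quoted above):

```python
import numpy as np

a = 0.5                                   # a1 from Example 1
k = np.arange(1, 7)
pacf = -(-a) ** k * (1 - a**2) / (1 - a ** (2 * (k + 1)))
print(np.round(pacf, 4))
# [ 0.4    -0.1905  0.0941 -0.0469  0.0234 -0.0117]
```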

Exercise: use the recursive method to calculate Φ_kk and the autoregressive parameters β_1^{(k)}, ..., β_k^{(k)}.

Example 2: an AR(2) time series. Suppose that {x_t | t ∈ T} satisfies the following equation:
x_t = 0.4x_{t−1} + 0.1x_{t−2} + 1.2 + u_t
where {u_t | t ∈ T} is white noise with σ = 2.1. Is the time series stationary? Find: the mean of the series, the variance of the series, the autocorrelation function, and the partial autocorrelation function.

Stationarity: the roots of β(x) = 1 − 0.4x − 0.1x² are r_1 ≈ 1.74 and r_2 ≈ −5.74; both exceed 1 in absolute value, so the series is stationary.
The mean of the series: μ = δ/(1 − β_1 − β_2) = 1.2/(1 − 0.4 − 0.1) = 2.4.
The autocorrelation function satisfies the Yule-Walker equations:
ρ(1) = β_1 + β_2ρ(1)
ρ(2) = β_1ρ(1) + β_2
ρ(h) = β_1ρ(h−1) + β_2ρ(h−2) for h ≥ 2,

hence ρ(1) = β_1/(1 − β_2) = 0.4/0.9 ≈ 0.444, ρ(2) = 0.4(0.444) + 0.1 ≈ 0.278, and the remaining values follow from the recursion ρ(h) = 0.4ρ(h−1) + 0.1ρ(h−2).

The variance of the series: σ(0) = σ_u²/(1 − β_1ρ(1) − β_2ρ(2)) = (2.1)²/(1 − 0.4(0.444) − 0.1(0.278)) ≈ 5.55.
The partial autocorrelation function: Φ_11 = ρ(1) ≈ 0.444, Φ_22 = [ρ(2) − ρ(1)²]/[1 − ρ(1)²] = 0.1 = β_2, and Φ_kk = 0 for k > 2.

The partial autocorrelation function of an AR(p) time series “cuts off” to zero after lag p.
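The numbers in Example 2 can be reproduced in a few lines (a sketch, using the formulas derived above):

```python
b1, b2, sigma = 0.4, 0.1, 2.1             # Example 2 parameters

mu   = 1.2 / (1 - b1 - b2)                # mean: 2.4
rho1 = b1 / (1 - b2)                      # rho(1) ~ 0.4444
rho2 = b1 * rho1 + b2                     # rho(2) ~ 0.2778
var  = sigma**2 / (1 - b1 * rho1 - b2 * rho2)   # sigma(0) ~ 5.55

phi11 = rho1
phi22 = (rho2 - rho1**2) / (1 - rho1**2)  # equals b2 = 0.1; phi_kk = 0 for k > 2
print(mu, rho1, rho2, var, phi11, phi22)
```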

Example 3: an ARMA(1,2) time series. Suppose that {x_t | t ∈ T} satisfies the following equation:
x_t = 0.4x_{t−1} + 3.2 + u_t + 0.3u_{t−1} + 0.2u_{t−2}
where {u_t | t ∈ T} is white noise with σ = 1.6. Is the time series stationary? Find: the mean of the series, the variance of the series, the autocorrelation function, and the partial autocorrelation function.

x_t = 0.4x_{t−1} + 3.2 + u_t + 0.3u_{t−1} + 0.2u_{t−2}, white noise std. dev. σ = 1.6. Is the time series stationary? β(x) = 1 − β_1x = 1 − 0.4x has root r_1 = 1/0.4 = 2.5. Since |r_1| > 1, the time series is stationary. The mean of the series: μ = δ/(1 − β_1) = 3.2/0.6 ≈ 5.33.

The autocovariance function σ(h) satisfies, for h = 0, 1, 2:
σ(0) = β_1σ(1) + σ_ux(0) + α_1σ_ux(1) + α_2σ_ux(2)
σ(1) = β_1σ(0) + α_1σ_ux(0) + α_2σ_ux(1)
σ(2) = β_1σ(1) + α_2σ_ux(0)
and for h > q = 2: σ(h) = β_1σ(h−1),

etc., where σ_ux(0) = σ² = 2.56, σ_ux(1) = (β_1 + α_1)σ² = 1.792 and σ_ux(2) = [β_1(β_1 + α_1) + α_2]σ² ≈ 1.229.

We use the first two equations to find σ(0) and σ(1); then we use the third equation to find σ(2).

(Graph: the resulting autocovariance and autocorrelation functions.)
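Putting the numbers in (a sketch following the equations above, with σ² = 1.6² = 2.56):

```python
import numpy as np

b1, a1, a2, sig2 = 0.4, 0.3, 0.2, 1.6 ** 2

# Cross-covariances s_ux(h) = E[u_t (x_{t+h} - mu)]:
sux0 = sig2                               # 2.56
sux1 = (b1 + a1) * sig2                   # 1.792
sux2 = (b1 * (b1 + a1) + a2) * sig2       # ~ 1.229

# The h = 0 and h = 1 equations, rearranged as a 2x2 linear system in s0, s1:
#   s0 - b1*s1 = sux0 + a1*sux1 + a2*sux2
#  -b1*s0 + s1 = a1*sux0 + a2*sux1
A = np.array([[1.0, -b1], [-b1, 1.0]])
rhs = np.array([sux0 + a1 * sux1 + a2 * sux2, a1 * sux0 + a2 * sux1])
s0, s1 = np.linalg.solve(A, rhs)
s2 = b1 * s1 + a2 * sux0                  # the h = 2 equation
print(s0, s1, s2)                         # ~ 4.517, 2.933, 1.685
# For h > 2: s(h) = b1 * s(h-1); and rho(h) = s(h) / s0.
```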

Spectral Theory for a stationary time series