1 STAT 497 LECTURE NOTES 8 ESTIMATION

2 ESTIMATION After specifying the order of a stationary ARMA process, we need to estimate the parameters. We will assume (for now) that: 1. The model order (p and q) is known, and 2. The data has zero mean. If (2) is not a reasonable assumption, we can subtract the sample mean Ȳ, fit a zero-mean ARMA model to Xt = Yt − Ȳ, and then use Xt + Ȳ as the model for Yt.
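A minimal sketch of this demeaning step in R (the simulated series and the AR(1) order are illustrative assumptions, not part of the notes):

# Fit a zero-mean ARMA to the demeaned series; add the sample mean back
# when the model is used for Yt.
set.seed(1)
y <- arima.sim(list(ar = 0.6), n = 200) + 10   # hypothetical series with mean 10
x <- y - mean(y)                               # subtract the sample mean
fit <- arima(x, order = c(1, 0, 0), include.mean = FALSE)
predict(fit, n.ahead = 4)$pred + mean(y)       # forecasts on the original scale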

3 ESTIMATION
Method of Moments Estimation (MME)
Ordinary Least Squares (OLS) Estimation
Maximum Likelihood Estimation (MLE)
Least Squares Estimation: Conditional and Unconditional

4 THE METHOD OF MOMENT ESTIMATION
It is also known as Yule-Walker estimation. It is an easy but not efficient estimation method, and it works well only for AR models when n is large. BASIC IDEA: Equate sample moments to the corresponding population moments, and solve these equations to obtain estimators of the unknown parameters.

5 THE METHOD OF MOMENT ESTIMATION
Let Γn be the variance/covariance matrix of X = (X1, …, Xn)′ with the given parameter values. Yule-Walker for AR(p): regress Xt onto Xt−1, …, Xt−p; equivalently, apply the Durbin-Levinson algorithm with the true autocovariances γ(·) replaced by the sample autocovariances γ̂(·). Yule-Walker for ARMA(p,q): a method-of-moments approach; not efficient.

6 THE YULE-WALKER ESTIMATION
For a stationary (causal) AR(p) process, Xt = φ1Xt−1 + … + φpXt−p + at, multiply both sides by Xt−k and take expectations to obtain the Yule-Walker equations: γ(k) = φ1γ(k−1) + … + φpγ(k−p) for k = 1, …, p, and σa² = γ(0) − φ1γ(1) − … − φpγ(p).

7 THE YULE-WALKER ESTIMATION
To find the Yule-Walker estimators, we use the sample versions of these equations: φ̂ = R̂p⁻¹ρ̂p and σ̂a² = γ̂(0)(1 − ρ̂p′R̂p⁻¹ρ̂p), where R̂p = [ρ̂(i−j)] and ρ̂p = (ρ̂1, …, ρ̂p)′. These are the forecasting equations, so we can use the Durbin-Levinson algorithm, as in the sketch below.
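A minimal implementation of the Durbin-Levinson recursion applied to the sample autocorrelations (the function name durbin_levinson is ours; x is the simulated series from the sketch above):

# Durbin-Levinson: solve the sample Yule-Walker equations recursively.
# r: sample autocorrelations r[1], ..., r[p]; returns the order-p phi-hat.
durbin_levinson <- function(r, p) {
  phi <- r[1]                                # order-1 solution
  if (p == 1) return(phi)
  for (k in 2:p) {
    phikk <- (r[k] - sum(phi * r[(k - 1):1])) /
             (1 - sum(phi * r[1:(k - 1)]))   # partial autocorrelation at lag k
    phi <- c(phi - phikk * rev(phi), phikk)  # update phi_k1, ..., phi_kk
  }
  phi
}
r <- acf(x, lag.max = 2, plot = FALSE)$acf[-1]  # drop lag 0
durbin_levinson(r, 2)                           # compare with ar() above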

8 THE YULE-WALKER ESTIMATION
If {Xt} is an AR(p) process, then for lags k > p the sample PACF is approximately N(0, 1/n) for large n. Hence, we can use the sample PACF to test for the AR order, and we can calculate approximate confidence intervals for the parameters.
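A short sketch of the order test (x is again the simulated AR(2) series): under an AR(p) model, sample PACF values at lags k > p should fall within ±1.96/√n about 95% of the time.

n  <- length(x)
pa <- pacf(x, lag.max = 10, plot = FALSE)$acf   # sample PACF, lags 1..10
which(abs(pa) > 1.96 / sqrt(n))                 # lags with significant PACF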

9 THE YULE-WALKER ESTIMATION
If Xt is an AR(p) process and n is large, φ̂ is approximately N(φ, σa²Γp⁻¹/n), so a 100(1 − α)% approximate confidence interval for φj is φ̂j ± z1−α/2 (σ̂a²[Γ̂p⁻¹]jj/n)^1/2.
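A sketch of these intervals in R: for method = "yule-walker", ar() returns the estimated asymptotic covariance matrix of φ̂ in the component asy.var.coef (x is the simulated AR(2) series from above).

# Approximate 95% confidence intervals for the AR coefficients.
fit <- ar(x, method = "yule-walker", order.max = 2, aic = FALSE)
se  <- sqrt(diag(fit$asy.var.coef))
cbind(lower = fit$ar - 1.96 * se, upper = fit$ar + 1.96 * se)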

10 THE YULE-WALKER ESTIMATION
AR(1): Yt = φYt−1 + at. Find the MME of φ. It is known that ρ1 = φ.

11 THE YULE-WALKER ESTIMATION
So, the MME of φ is φ̂ = r1. Also, σa² is unknown. Therefore, using the variance of the process, γ0 = σa²/(1 − φ²), we can obtain the MME of σa²: σ̂a² = γ̂0(1 − r1²).

12 THE YULE-WALKER ESTIMATION
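The worked formulas on this slide were an image; as a stand-in, a minimal R sketch of the AR(1) moment estimators on simulated data (the series y1 and φ = 0.7 are illustrative assumptions):

# MME for AR(1): phi-hat = r1 and sigma-hat_a^2 = (1 - r1^2) * gamma-hat_0.
y1 <- arima.sim(list(ar = 0.7), n = 300)         # hypothetical AR(1) data
r1 <- acf(y1, lag.max = 1, plot = FALSE)$acf[2]  # sample rho_1
g0 <- var(y1)                                    # sample variance gamma-hat_0
c(phi = r1, sigma2_a = (1 - r1^2) * g0)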

13 THE YULE-WALKER ESTIMATION
AR(2): Yt = φ1Yt−1 + φ2Yt−2 + at. Find the MME of all unknown parameters using the Yule-Walker equations: ρ1 = φ1 + φ2ρ1 and ρ2 = φ1ρ1 + φ2.

14 THE YULE-WALKER ESTIMATION
So, equate the population autocorrelations to the sample autocorrelations and solve for φ1 and φ2: φ̂1 = r1(1 − r2)/(1 − r1²) and φ̂2 = (r2 − r1²)/(1 − r1²).

15 THE YULE-WALKER ESTIMATION
Using these, we can obtain the MMEs of φ1 and φ2, as in the sketch below. To obtain the MME of σa², use the process variance formula: σ̂a² = γ̂0(1 − φ̂1r1 − φ̂2r2).
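A sketch of the AR(2) computation (the simulated series y2 is an illustrative assumption):

# MME for AR(2): solve the two Yule-Walker equations in r1 and r2.
y2  <- arima.sim(list(ar = c(0.5, 0.3)), n = 500)  # hypothetical AR(2) data
r   <- acf(y2, lag.max = 2, plot = FALSE)$acf[-1]  # (r1, r2)
R   <- matrix(c(1, r[1], r[1], 1), 2, 2)           # sample autocorrelation matrix
phi <- solve(R, r)                                 # (phi1-hat, phi2-hat)
c(phi, sigma2_a = var(y2) * (1 - sum(phi * r)))    # process variance formula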

16 THE YULE-WALKER ESTIMATION
Summary of the moment estimators. AR(1): φ̂ = r1, σ̂a² = γ̂0(1 − r1²). AR(2): φ̂1 = r1(1 − r2)/(1 − r1²), φ̂2 = (r2 − r1²)/(1 − r1²), σ̂a² = γ̂0(1 − φ̂1r1 − φ̂2r2).

17 THE YULE-WALKER ESTIMATION
MA(1): Yt = at − θat−1, for which ρ1 = −θ/(1 + θ²). Again using the autocorrelation of the series at lag 1, set r1 = −θ̂/(1 + θ̂²) and solve the resulting quadratic r1θ̂² + θ̂ + r1 = 0, i.e., θ̂ = (−1 ± (1 − 4r1²)^1/2)/(2r1). Choose the root satisfying the invertibility condition |θ̂| < 1.

18 THE YULE-WALKER ESTIMATION
For real roots, we need 1 − 4r1² ≥ 0. If |r1| = 0.5, there is a unique real root, but it is non-invertible. If |r1| > 0.5, no real root exists and the MME fails. If |r1| < 0.5, there are two real roots and exactly one of them is invertible, as in the sketch below.
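A sketch of the MA(1) moment estimator under the convention Yt = at − θat−1 used here (the helper ma1_mme and the simulated series y3 are ours; note that arima.sim() uses the opposite sign convention for MA coefficients):

# MME for MA(1): solve r1*theta^2 + theta + r1 = 0, keep the invertible root.
ma1_mme <- function(y) {
  r1 <- acf(y, lag.max = 1, plot = FALSE)$acf[2]
  if (abs(r1) >= 0.5) stop("|r1| >= 0.5: no invertible real root, MME fails")
  roots <- (-1 + c(1, -1) * sqrt(1 - 4 * r1^2)) / (2 * r1)
  roots[abs(roots) < 1]                    # the root with |theta| < 1
}
y3 <- arima.sim(list(ma = -0.6), n = 500)  # theta = 0.6 in our convention
ma1_mme(y3)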

19 THE YULE-WALKER ESTIMATION
This example shows that the MMEs for MA and ARMA models are complicated. More generally, whether the model is AR, MA, or ARMA, the MMEs are sensitive to rounding errors. They are usually used only to provide initial estimates for a more efficient nonlinear estimation method. The moment estimators are not recommended as final estimation results and should not be used if the process is close to being nonstationary or noninvertible.

20 THE MAXIMUM LIKELIHOOD ESTIMATION
Assume that at ~ i.i.d. N(0, σa²). By this assumption we can use the joint pdf of (a1, …, an) instead of the joint pdf of (Y1, …, Yn), which cannot be written as a product of marginal pdfs because of the dependence between time series observations.

21 MLE METHOD For the general stationary ARMA(p,q) model, Yt = φ1Yt−1 + … + φpYt−p + at − θ1at−1 − … − θqat−q, or at = Yt − φ1Yt−1 − … − φpYt−p + θ1at−1 + … + θqat−q (if the mean μ is nonzero, replace Yt by Yt − μ throughout).

22 MLE The joint pdf of (a1, a2, …, an) is given by f(a1, …, an) = (2πσa²)^(−n/2) exp(−(1/(2σa²)) Σ at²), where the sum runs over t = 1, …, n.
Let Y = (Y1, …, Yn)′ and assume that the initial conditions Y* = (Y1−p, …, Y0)′ and a* = (a1−q, …, a0)′ are known.

23 MLE Writing each at in terms of the parameters, the data, and the initial conditions, the conditional log-likelihood function is given by ln L*(φ, μ, θ, σa²) = −(n/2) ln(2πσa²) − S*(φ, μ, θ)/(2σa²), where S*(φ, μ, θ) = Σ at²(φ, μ, θ | Y*, a*, Y) is the conditional sum of squares.
Initial conditions: in practice, the unknown past shocks are set to their expected value, a* = 0, and we condition on the first few observed values of Yt.

24 MLE Then, we can find the estimators φ̂ = (φ̂1, …, φ̂p), θ̂ = (θ̂1, …, θ̂q) and μ̂ such that the conditional likelihood function is maximized. Usually, numerical nonlinear optimization techniques are required. After obtaining all the estimators, σ̂a² = S*(φ̂, μ̂, θ̂)/d.f., where d.f. = (number of terms used in SS) − (number of parameters) = (n − p) − (p + q + 1) = n − (2p + q + 1).
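In R, arima() with method = "CSS" estimates the parameters by minimizing this conditional sum of squares; a short sketch on the simulated AR(2) series y2 from the earlier Yule-Walker sketch:

# Conditional LSE/MLE: method = "CSS" minimizes S* instead of maximizing
# the exact likelihood.
fit_css <- arima(y2, order = c(2, 0, 0), include.mean = FALSE, method = "CSS")
fit_css$coef    # phi1-hat, phi2-hat
fit_css$sigma2  # residual variance based on S*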

25 MLE AR(1): Yt = φYt−1 + at with at ~ i.i.d. N(0, σa²) and |φ| < 1, so that Y1 ~ N(0, σa²/(1 − φ²)).

26 MLE Transform (Y1, a2, …, an) into (Y1, Y2, …, Yn) using at = Yt − φYt−1 for t = 2, …, n. This transformation is lower triangular with ones on the diagonal, so the Jacobian will be 1.

27 MLE Then, the likelihood function can be written as L(φ, σa²) = (2πσa²)^(−n/2) (1 − φ²)^(1/2) exp{−S(φ)/(2σa²)}, where S(φ) = (1 − φ²)Y1² + Σt=2,…,n (Yt − φYt−1)².

28 MLE Hence, the log-likelihood function is ln L(φ, σa²) = −(n/2) ln(2πσa²) + (1/2) ln(1 − φ²) − S(φ)/(2σa²).

29 MLE Here, S*(φ) = Σt=2,…,n (Yt − φYt−1)² is the conditional sum of squares and S(φ) = (1 − φ²)Y1² + S*(φ) is the unconditional sum of squares. To find the value of σa² where the likelihood function is maximized, set ∂ ln L/∂σa² = 0. Then σ̂a² = S(φ)/n, and substituting this back leaves a function of φ alone to be maximized numerically.

30 MLE If we neglect ln(1 − φ²), the MLE reduces to the (unconditional) LSE, which minimizes S(φ).
If we neglect both ln(1 − φ²) and the term (1 − φ²)Y1², it reduces to the conditional LSE, with the closed form φ̂ = Σt=2,…,n YtYt−1 / Σt=2,…,n Yt−1².
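A sketch comparing the two for AR(1), using the simulated series y1 from the moment-estimation sketch; for large n the exact MLE (method = "ML") and the conditional LSE (method = "CSS") agree closely because the neglected terms are of lower order:

fit_ml  <- arima(y1, order = c(1, 0, 0), include.mean = FALSE, method = "ML")
fit_css <- arima(y1, order = c(1, 0, 0), include.mean = FALSE, method = "CSS")
c(ML = fit_ml$coef["ar1"], CSS = fit_css$coef["ar1"])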

31 MLE The MLE is asymptotically unbiased, efficient, consistent, and sufficient for large sample sizes, but the joint pdf is hard to deal with.

32 CONDITIONAL LEAST SQUARES ESTIMATION

33 CONDITIONAL LSE If the process mean is different from zero, replace Yt with Yt − μ in the sum of squares and estimate μ together with the other parameters.

34 CONDITIONAL LSE MA(1): Yt = at − θat−1. This is a LS problem that is non-linear in the parameters: setting a0 = 0, the residuals at(θ) = Yt + θat−1(θ) are built recursively, so
S*(θ) = Σ at²(θ) cannot be minimized analytically. Numerical nonlinear optimization methods such as Newton-Raphson or Gauss-Newton are used; a sketch follows. A similar problem arises in the ARMA case.
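A sketch of this conditional LS problem for MA(1), minimizing S*(θ) by a one-dimensional numerical search (the helper css_ma1 is ours; y3 is the simulated MA(1) series from the moment-estimation sketch):

# Conditional LSE for MA(1): with a0 = 0, build at(theta) = Yt + theta*a(t-1)
# recursively and minimize S*(theta) = sum(at^2) numerically.
css_ma1 <- function(theta, y) {
  a <- numeric(length(y))
  a[1] <- y[1]                            # uses the assumption a0 = 0
  for (t in 2:length(y)) a[t] <- y[t] + theta * a[t - 1]
  sum(a^2)
}
optimize(css_ma1, interval = c(-0.99, 0.99), y = y3)$minimum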

35 UNCONDITIONAL LSE Minimize the unconditional sum of squares S(φ, μ, θ). This is nonlinear in the parameters, so
we need nonlinear optimization techniques.

36 BACKCASTING METHOD Obtain the backward form of the ARMA(p,q) model.
Instead of forecasting, backcast the past values of Yt and at for t ≤ 0. Obtain the unconditional log-likelihood function, then obtain the estimators.

37 EXAMPLE If there are only 2 observations in the time series (not realistic), find the MLE of φ and σa².

38 EXAMPLE US Quarterly Beer Production from 1975 to 1997
> par(mfrow=c(1,3))
> plot(beer)
> acf(as.vector(beer),lag.max=36)
> pacf(as.vector(beer),lag.max=36)

39 EXAMPLE (contd.)
> library(uroot)
> HEGY.test(wts =beer, itsd = c(1, 1, c(1:3)), regvar = 0, selectlags = list(mode = "bic", Pmax = 12))
Null hypothesis: Unit root. Alternative hypothesis: Stationarity.
HEGY statistics: tpi_1, tpi_2, Fpi_3 (statistic values and p-values not preserved in the transcript)
> CH.test(beer)
Canova & Hansen test
Null hypothesis: Stationarity. Alternative hypothesis: Unit root.
L-statistic and critical values (not preserved in the transcript)

40 EXAMPLE (contd.)
> plot(diff(beer),ylab='First Difference of Beer Production',xlab='Time')
> acf(as.vector(diff(beer)),lag.max=36)
> pacf(as.vector(diff(beer)),lag.max=36)

41 EXAMPLE (contd.)
> HEGY.test(wts =diff(beer), itsd = c(1, 1, c(1:3)), regvar = 0, selectlags = list(mode = "bic", Pmax = 12))
HEGY test
Null hypothesis: Unit root. Alternative hypothesis: Stationarity.
HEGY statistics: tpi_1, tpi_2, Fpi_3 (values not preserved in the transcript); Fpi_2: NA, Fpi_1: NA

42 EXAMPLE (contd.)
> fit1=arima(beer,order=c(3,1,0),seasonal=list(order=c(2,0,0), period=4))
> fit1
Call: arima(x = beer, order = c(3, 1, 0), seasonal = list(order = c(2, 0, 0), period = 4))
Coefficients: ar1, ar2, ar3, sar1, sar2 with s.e. (values not preserved in the transcript)
sigma^2 estimated as 1.79; log likelihood and aic not preserved in the transcript
> fit2=arima(beer,order=c(3,1,0),seasonal=list(order=c(3,0,0), period=4))
> fit2
Call: arima(x = beer, order = c(3, 1, 0), seasonal = list(order = c(3, 0, 0), period = 4))
Coefficients: ar1, ar2, ar3, sar1, sar2, sar3 with s.e. (values not preserved in the transcript)
sigma^2 estimated as 1.646; log likelihood and aic not preserved in the transcript

