
1
Chapter 2. Unobserved Component Models
Esther Ruiz
2006-2007 PhD Program in Business Administration and Quantitative Analysis
Financial Econometrics

2
2.1 Description and properties
Unobserved component models assume that the variables of interest are made up of components that have a direct interpretation but cannot be directly observed. Applications in finance: the "fads" model of Poterba and Summers (1988). There are two types of traders: informed (μ_t) and uninformed (ε_t). The observed price is the sum of the two components,

y_t = μ_t + ε_t,

where μ_t, the fundamental price driven by informed traders, follows a random walk and ε_t is a stationary "fad" component due to uninformed traders.

3
Models for ex ante interest differentials proposed by Cavaglia (1992): we observe the ex post interest differential, which is equal to the ex ante interest differential plus the cross-country differential in inflation.

4
Factor models simplify the computation of the covariance matrix in mean-variance portfolio allocation and are central to two asset pricing theories: the CAPM and the APT.

5
Term structure of interest rates model proposed by Rossi (2004): the observed yields are given by the theoretical rates implied by a no-arbitrage condition plus a stochastic disturbance.

6
Modelling volatility
There are two main types of models to represent the dynamic evolution of volatilities:
i) GARCH models, which assume that the volatility is a non-linear function of past returns; for example, in the GARCH(1,1) model

σ²_t = ω + α y²_{t-1} + β σ²_{t-1}.

7
σ²_t is the one-step-ahead (conditional) variance and, therefore, can be observed given observations up to time t-1. As a result, classical inference procedures can be implemented.

8
Example: consider a GARCH(1,1) model for the IBEX35 returns. The returns corresponding to the first two days in the sample are 0.21 and -0.38.
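The variance recursion for this example can be sketched in code. The slide's estimated IBEX35 parameters are not reproduced here, so the values of ω, α and β below are illustrative assumptions only:

```python
# GARCH(1,1) one-step-ahead variance recursion:
#   sigma2_t = omega + alpha * y_{t-1}^2 + beta * sigma2_{t-1}
# Parameter values are illustrative assumptions, NOT the IBEX35 estimates.
omega, alpha, beta = 0.02, 0.10, 0.85

def garch_variances(returns, sigma2_init):
    """Return the sequence of conditional variances, starting from sigma2_init."""
    sigma2 = [sigma2_init]
    for y in returns:
        sigma2.append(omega + alpha * y ** 2 + beta * sigma2[-1])
    return sigma2

# First two returns quoted on the slide, starting the recursion at the
# unconditional variance omega / (1 - alpha - beta) = 0.4:
print(garch_variances([0.21, -0.38], sigma2_init=omega / (1 - alpha - beta)))
```

With these assumed parameters the recursion gives conditional variances 0.4, 0.36441 and 0.3441885 for the first three days, illustrating that each σ²_t is computable from past data alone.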

9
In this case there are no unobserved components, but consider the model for fundamental prices with GARCH errors. In this case, the variances of the noises cannot be observed with the information available at time t-1.

10
ii) Stochastic volatility models, which assume that the volatility represents the arrival of new information into the market and, consequently, is unobserved.
Both types of models are able to represent:
a) Excess kurtosis
b) Autocorrelations of squares that are small and persistent

11
Although the properties of SV models are more attractive and closer to the empirical properties observed in real financial returns, their estimation is more complicated because the volatility, σ_t, cannot be observed one-step-ahead.

12
2.2 State space models
The Kalman filter allows the estimation of the underlying unobserved components. To implement the Kalman filter, we write the unobserved component model of interest in a general form known as the "state space model":

y_t = Z_t α_t + ε_t,      ε_t ~ N(0, H_t)   (measurement equation)
α_t = T_t α_{t-1} + η_t,  η_t ~ N(0, Q_t)   (transition equation)

where the matrices Z_t, H_t, T_t and Q_t can evolve over time as long as they are known at time t-1.

13
Consider, for example, the random walk plus noise model proposed to represent fundamental prices in the market. In this case, the measurement equation is given by

y_t = μ_t + ε_t,      ε_t ~ N(0, σ²_ε).

Therefore, Z_t = 1, the state α_t is the underlying level μ_t, and H_t = σ²_ε. The transition equation is given by

μ_t = μ_{t-1} + η_t,  η_t ~ N(0, σ²_η),

and T_t = 1 and Q_t = σ²_η.

14
Unobserved component models depend on several disturbances. Provided the model is linear, the components can be combined to give a model with a single disturbance: the reduced form. The reduced form is an ARIMA model with restrictions on the parameters.

15
Consider the random walk plus noise model. In this case, taking first differences,

Δy_t = η_t + ε_t - ε_{t-1}.

The mean and variance of Δy_t are given by

E(Δy_t) = 0,    Var(Δy_t) = σ²_η + 2σ²_ε.

16
The autocorrelation function is given by

ρ(1) = -σ²_ε / (σ²_η + 2σ²_ε) = -1 / (q + 2),    ρ(τ) = 0 for τ > 1,

where q = σ²_η / σ²_ε is the signal-to-noise ratio. The reduced form is an IMA(1,1) model with negative parameter,

Δy_t = ξ_t + θ ξ_{t-1},

where θ = [(q² + 4q)^{1/2} - 2 - q] / 2.

17
When q = 0, Δy_t reduces to a non-invertible MA(1) model with θ = -1, i.e. y_t is a white noise process around a constant level. On the other hand, as q increases, the first-order autocorrelation and, consequently, θ decrease in absolute value. In the limit, as q → ∞, θ → 0, Δy_t is a white noise and y_t is a random walk.
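The mapping from the signal-to-noise ratio q to the MA parameter θ of the reduced form, and its two limiting cases, can be checked numerically. The function below simply evaluates the formula stated above (the function name is ours):

```python
import math

def theta_from_q(q):
    """MA(1) parameter of the reduced-form IMA(1,1) of the random walk plus
    noise model: theta = (sqrt(q^2 + 4q) - 2 - q) / 2, the invertible root
    of theta^2 + (q + 2) * theta + 1 = 0."""
    return (math.sqrt(q * q + 4.0 * q) - 2.0 - q) / 2.0

print(theta_from_q(0.0))   # q = 0: theta = -1, non-invertible MA(1)
print(theta_from_q(0.5))   # theta = -0.5 exactly, since sqrt(2.25) = 1.5
print(theta_from_q(1e9))   # q -> infinity: theta -> 0, y_t a random walk
```

Evaluating on a grid of q values confirms that θ increases monotonically from -1 towards 0 as q grows.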

18
Although we are focusing on univariate series, the results are valid for multivariate systems. The Kalman filter is made up of two sets of equations:
i) Prediction equations: one-step-ahead predictions of the states and their corresponding variances,

a_{t|t-1} = T_t a_{t-1},    P_{t|t-1} = T_t P_{t-1} T_t' + Q_t.

For example, in the random walk plus noise model:

m_{t|t-1} = m_{t-1},    P_{t|t-1} = P_{t-1} + σ²_η.

19
ii) Updating equations: each new observation changes (updates) our estimates of the states obtained using past information,

a_t = a_{t|t-1} + P_{t|t-1} Z_t' F_t^{-1} ν_t,    P_t = P_{t|t-1} - P_{t|t-1} Z_t' F_t^{-1} Z_t P_{t|t-1},

where

ν_t = y_t - Z_t a_{t|t-1},    F_t = Z_t P_{t|t-1} Z_t' + H_t.

20
The updating equations can be derived using the properties of the multivariate normal distribution. Consider the distribution of α_t and y_t conditional on past information up to and including time t-1. The conditional means and variances are:

E(α_t | Y_{t-1}) = a_{t|t-1},      Var(α_t | Y_{t-1}) = P_{t|t-1},
E(y_t | Y_{t-1}) = Z_t a_{t|t-1},  Var(y_t | Y_{t-1}) = F_t = Z_t P_{t|t-1} Z_t' + H_t.

21
The conditional covariance can be easily derived by writing y_t = Z_t α_t + ε_t, so that

Cov(α_t, y_t | Y_{t-1}) = P_{t|t-1} Z_t'.

22
Consider, once more, the random walk plus noise model. In this case,

ν_t = y_t - m_{t-1},    F_t = P_{t|t-1} + σ²_ε,
m_t = m_{t-1} + (P_{t|t-1} / F_t) ν_t,    P_t = P_{t|t-1} (1 - P_{t|t-1} / F_t),

with P_{t|t-1} = P_{t-1} + σ²_η.

23
The Kalman filter needs some initial conditions for the state and its covariance matrix at time t = 0. There are several alternatives. One of the simplest consists of assuming that the state at time zero is equal to its marginal mean and P_0 is the marginal variance. However, when the state is not stationary, this solution is not feasible. In this case, one can initialize the filter by assuming what is known as a diffuse prior (we do not have any information about what happens at time zero): m_0 = 0 and P_0 = ∞.

24
Now we are in a position to run the Kalman filter. Consider, for example, the random walk plus noise model, and suppose that we want to obtain estimates of the underlying level of the series. In this case, the equations are given by

m_{t|t-1} = m_{t-1},    P_{t|t-1} = P_{t-1} + σ²_η,
ν_t = y_t - m_{t-1},    F_t = P_{t|t-1} + σ²_ε,
m_t = m_{t-1} + (P_{t|t-1} / F_t) ν_t,    P_t = P_{t|t-1} (1 - P_{t|t-1} / F_t).

26
Consider, for example, that we have observations of a time series generated by a random walk plus noise model; the first observations are 1.14, 0.59, 1.58, ….
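A minimal sketch of this run in code. The variance values used in the slide's example are not reproduced here, so σ²_ε = σ²_η = 1 below is an assumption; the diffuse prior of the previous slide amounts, in this model, to setting m_1 = y_1 and P_1 = σ²_ε and starting the recursions at t = 2:

```python
def kalman_rw_plus_noise(y, var_eps, var_eta):
    """Kalman filter for the random walk plus noise model
         y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t.
    Diffuse initialization: m_1 = y_1, P_1 = var_eps.
    Returns the updated estimates (m_t, P_t) of the underlying level."""
    m, P = y[0], var_eps
    estimates = [(m, P)]
    for obs in y[1:]:
        # prediction equations (m_pred = m for the random walk level)
        P_pred = P + var_eta
        # innovation and its variance
        v = obs - m
        F = P_pred + var_eps
        # updating equations
        K = P_pred / F
        m = m + K * v
        P = P_pred * (1.0 - K)
        estimates.append((m, P))
    return estimates

# Observations quoted on the slide; unit variances are an assumption.
for m, P in kalman_rw_plus_noise([1.14, 0.59, 1.58], var_eps=1.0, var_eta=1.0):
    print(round(m, 4), round(P, 4))
```

With these assumed variances, the updated level estimates are 1.14, 0.7733 and 1.2775, with mean squared errors 1, 0.6667 and 0.625: each new observation pulls the level towards itself while its uncertainty shrinks.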

29
The Kalman filter gives:
i) One-step-ahead and updated estimates of the unobserved states and their associated mean squared errors: a_{t|t-1}, P_{t|t-1}, a_t and P_t
ii) One-step-ahead estimates of y_t
iii) One-step-ahead errors (innovations) and their variances, ν_t and F_t

30
Smoothing algorithms
There are also other algorithms, known as smoothing algorithms, that generate estimates of the unobserved states based on the whole sample. The smoothers are very useful because they generate estimates of the disturbances associated with each of the components of the model: the auxiliary residuals.

31
For example, in the random walk plus noise model, the fixed-interval smoother runs backwards from t = T, starting at m_{T|T} = m_T and P_{T|T} = P_T:

m_{t|T} = m_t + (P_t / P_{t+1|t}) (m_{t+1|T} - m_t),
P_{t|T} = P_t + (P_t / P_{t+1|t})² (P_{t+1|T} - P_{t+1|t}).

34
The auxiliary residuals are useful to:
i) identify outliers in the different components; Harvey and Koopman (1992)
ii) test whether the components are heteroscedastic; Broto and Ruiz (2005a,b). This test is based on looking at the differences between the autocorrelations of the squares and the squared autocorrelations of each of the auxiliary residuals; Maravall (1983).

35
Prediction
One of the main objectives of time series analysis is the prediction of future values of the series of interest. This can easily be done in the context of state space models by running the prediction equations without the updating equations. In the context of the random walk plus noise model, the k-step-ahead prediction and its mean squared error are

ŷ_{T+k|T} = m_T,    MSE(ŷ_{T+k|T}) = P_T + k σ²_η + σ²_ε.
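For the random walk plus noise model, the prediction step collapses to two lines of code (the function name is ours):

```python
def forecast_rw_plus_noise(m_T, P_T, var_eps, var_eta, k):
    """k-step-ahead forecast of y_{T+k} and its MSE in the random walk plus
    noise model: the point forecast stays at the last updated level m_T,
    while the uncertainty grows linearly with the horizon k."""
    y_hat = m_T
    mse = P_T + k * var_eta + var_eps
    return y_hat, mse

# Using the last updated level and MSE from a previous filter run:
print(forecast_rw_plus_noise(1.2775, 0.625, 1.0, 1.0, k=3))  # (1.2775, 4.625)
```

The flat point forecast with linearly growing MSE is exactly the behaviour expected of the underlying random walk level.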

36
Estimation of the parameters
Up to now we have assumed that the parameters of the state space model are known. However, in practice, we need to estimate them. In Gaussian state space models, the estimation can be done by Maximum Likelihood. In this case, we can write the likelihood in prediction error decomposition form, and the expression of the log-likelihood is then given by

log L = -(T/2) log 2π - (1/2) Σ_t log F_t - (1/2) Σ_t ν_t² / F_t.
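The prediction error decomposition can be evaluated with the same filter recursions. A sketch for the random walk plus noise model; starting the filter diffusely at the first observation, which is then dropped from the likelihood, is an assumption of this sketch:

```python
import math

def loglik_rw_plus_noise(y, var_eps, var_eta):
    """Gaussian log-likelihood of the random walk plus noise model via the
    prediction error decomposition:
      logL = -1/2 * sum_t [ log(2*pi) + log(F_t) + v_t**2 / F_t ].
    Diffuse start: m_1 = y_1, P_1 = var_eps, first observation dropped."""
    m, P = y[0], var_eps
    loglik = 0.0
    for obs in y[1:]:
        P_pred = P + var_eta
        v = obs - m                  # innovation
        F = P_pred + var_eps         # innovation variance
        loglik -= 0.5 * (math.log(2.0 * math.pi) + math.log(F) + v * v / F)
        K = P_pred / F
        m = m + K * v                # updating
        P = P_pred * (1.0 - K)
    return loglik

# Maximizing this function over (var_eps, var_eta), e.g. by grid search or a
# numerical optimizer, gives the ML estimates of the two variances.
print(loglik_rw_plus_noise([1.14, 0.59, 1.58], 1.0, 1.0))
```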

37
The asymptotic properties of the ML estimator are the usual ones as long as the parameters lie in the interior of the parameter space. However, in many models of interest the parameters are variances, and it is of interest to know whether they are zero (i.e. whether we have deterministic components). In some cases the asymptotic distribution is still related to the Normal, but it is modified to take account of the boundary; see Harvey (1989).

38
If the model is not conditionally Gaussian, then by maximizing the Gaussian log-likelihood we obtain what is known as the Quasi-Maximum Likelihood (QML) estimator. In this case, the estimator loses its efficiency. Furthermore, dropping the Normality assumption tends to affect the asymptotic distribution of all the parameters. In this case, the asymptotic distribution takes the sandwich form

√T (θ̂ - θ) → N(0, A^{-1} B A^{-1}),

where A is the expectation of the Hessian of the log-likelihood and B is the expectation of the outer product of its gradient.

39
Unobserved component models for financial time series
We consider two particular applications of unobserved component models to financial data:
i) dealing with stochastic volatility
ii) heteroscedastic components

40
Stochastic volatility models
Understanding and modelling stock volatility is necessary for traders for hedging against risk, in option pricing theory, and as a simple risk measure in many asset pricing models. The simplest stochastic volatility model is given by

y_t = σ_* ε_t exp(h_t / 2),    h_t = φ h_{t-1} + η_t,

where ε_t is a Gaussian white noise, σ_* is a scale parameter and h_t is the log-volatility.

41
Taking logarithms of the squared returns, we obtain a linear although non-Gaussian state space model,

log y_t² = log σ_*² + h_t + log ε_t².

Because log y_t² is not truly Gaussian, the Kalman filter yields minimum mean square linear estimators (MMSLE) of h_t and of future observations, rather than minimum mean square estimators (MMSE).

42
If y_1, y_2, …, y_T is the returns series, we transform the data by

x_t = log y_t².

The state space model for x_t has measurement equation x_t = μ + h_t + ξ_t, where μ = log σ_*² + E(log ε_t²) and ξ_t = log ε_t² - E(log ε_t²) has mean zero and variance π²/2 when ε_t is Gaussian, and transition equation h_t = φ h_{t-1} + η_t. The Kalman filter is then run on these equations.
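A sketch of the transformation step. The mean correction uses E(log ε_t²) = -(γ + log 2) ≈ -1.2704 for Gaussian ε_t, with γ Euler's constant; the small offset guarding against exactly-zero returns is an ad hoc choice of this sketch:

```python
import math

EULER_GAMMA = 0.5772156649015329
E_LOG_CHI2 = -(EULER_GAMMA + math.log(2.0))   # E[log eps_t^2] ~= -1.2704

def transform_returns(y, offset=1e-10):
    """Transform returns into mean-corrected log squared returns:
       x_t = log(y_t^2) - E[log eps_t^2].
    The offset avoids log(0) for exactly-zero returns (an ad hoc choice)."""
    return [math.log(r * r + offset) - E_LOG_CHI2 for r in y]

print(transform_returns([0.21, -0.38]))
```

The transformed series x_t is then linear in the log-volatility h_t, so the Kalman filter of the previous slides applies directly, with H_t = π²/2.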

43
SP500

46
Canadian/Dollar

49
Unobserved heteroscedastic components
Unobserved component models with heteroscedastic disturbances have been extensively used in the analysis of financial time series: for example, multivariate models with common factors, where both the idiosyncratic noises and the common factors may be heteroscedastic, as in Harvey, Ruiz and Sentana (1992), or univariate models where the components may be heteroscedastic, as in Broto and Ruiz (2005b).

50
To simplify the exposition, we focus on the random walk plus noise model with ARCH(1) disturbances, given by

y_t = μ_t + ε_t,    μ_t = μ_{t-1} + η_t,

where the noise ε_t = ε_t* σ_t, with ε_t* a Gaussian white noise and

σ_t² = α_0 + α_1 ε_{t-1}².
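A small simulation sketch of this model, under the assumption (ours) that it is the measurement noise ε_t that follows the ARCH(1) process; all parameter values are illustrative:

```python
import math
import random

def simulate_rw_plus_arch_noise(T, var_eta, alpha0, alpha1, seed=1234):
    """Simulate y_t = mu_t + eps_t with a random-walk level
         mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, var_eta),
    and ARCH(1) measurement noise
         eps_t = eps*_t * sigma_t,  sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2."""
    rng = random.Random(seed)
    mu, eps, y = 0.0, 0.0, []
    for _ in range(T):
        mu += rng.gauss(0.0, math.sqrt(var_eta))        # level innovation
        sigma2 = alpha0 + alpha1 * eps ** 2             # ARCH(1) variance
        eps = rng.gauss(0.0, math.sqrt(sigma2))         # heteroscedastic noise
        y.append(mu + eps)
    return y

y = simulate_rw_plus_arch_noise(T=500, var_eta=0.05, alpha0=0.2, alpha1=0.3)
print(len(y))  # 500
```

Series simulated this way exhibit both a slowly evolving level and volatility clustering in the noise, the two features the model is designed to capture.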

51
The model can be written in state space form as before, with state α_t = μ_t, Z_t = T_t = 1 and Q_t = σ²_η, but now with a time-varying measurement-error variance H_t = σ_t² = α_0 + α_1 ε_{t-1}².

52
The model is not conditionally Gaussian, since knowledge of past observations does not, in general, imply knowledge of past disturbances. Nevertheless, we can proceed on the basis that the model can be treated as if it were conditionally Gaussian, and we refer to the resulting Kalman filter as quasi-optimal.

53
Nikkei

54
QML estimates of the Random walk plus noise parameters: Diagnostics of innovations

56
Diagnostics of auxiliary residuals

58
Hewlett Packard

59
QML estimates of the Random walk plus noise parameters: Diagnostics of innovations

61
Diagnostics of auxiliary residuals
