# State Space Models

Let $\{x_t : t \in T\}$ and $\{y_t : t \in T\}$ denote two vector-valued time series that satisfy the system of equations:

$$y_t = A_t x_t + v_t \quad \text{(the observation equation)}$$

$$x_t = B_t x_{t-1} + u_t \quad \text{(the state equation)}$$

The time series $\{y_t : t \in T\}$ is said to have a state-space representation.

Note: $\{u_t : t \in T\}$ and $\{v_t : t \in T\}$ denote two vector-valued time series satisfying:

1. $E(u_t) = E(v_t) = 0$.
2. $E(u_t u_s') = E(v_t v_s') = 0$ if $t \neq s$.
3. $E(u_t u_t') = \Sigma_u$ and $E(v_t v_t') = \Sigma_v$.
4. $E(u_t v_s') = E(v_t u_s') = 0$ for all $t$ and $s$.
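These two equations, together with the noise assumptions, are straightforward to simulate. A minimal sketch (the matrices $B$, $A$, $\Sigma_u$, $\Sigma_v$ below are hypothetical choices, held constant over time, not values from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system matrices (taken constant over time in this sketch).
B = np.array([[0.8, 0.1],
              [0.0, 0.5]])        # state transition B_t
A = np.array([[1.0, 0.0]])        # observation matrix A_t
Sigma_u = 0.1 * np.eye(2)         # E(u_t u_t') = Sigma_u
Sigma_v = 0.2 * np.eye(1)         # E(v_t v_t') = Sigma_v

T = 200
x = np.zeros((T, 2))
y = np.zeros((T, 1))
x_prev = np.zeros(2)
for t in range(T):
    u_t = rng.multivariate_normal(np.zeros(2), Sigma_u)
    v_t = rng.multivariate_normal(np.zeros(1), Sigma_v)
    x[t] = B @ x_prev + u_t       # state equation
    y[t] = A @ x[t] + v_t         # observation equation
    x_prev = x[t]
```

Only the one-dimensional $y_t$ is observed; the two-dimensional state $x_t$ is hidden, which is exactly the estimation problem taken up below.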

Example: One might be tracking an object with several radar stations. The process $\{x_t : t \in T\}$ gives the position of the object at time $t$. The process $\{y_t : t \in T\}$ denotes the observations at time $t$ made by the several radar stations. As in the Hidden Markov Model, we will be interested in determining the position of the object, $\{x_t : t \in T\}$, from the observations, $\{y_t : t \in T\}$, made by the several radar stations.

Example: Many of the models we have considered to date can be thought of as state-space models.

Autoregressive model of order $p$:

$$x_t = \beta_1 x_{t-1} + \beta_2 x_{t-2} + \cdots + \beta_p x_{t-p} + u_t$$

Define

$$\mathbf{x}_t = \begin{bmatrix} x_t \\ x_{t-1} \\ \vdots \\ x_{t-p+1} \end{bmatrix}.$$

Then

$$\mathbf{x}_t = \begin{bmatrix} \beta_1 & \beta_2 & \cdots & \beta_{p-1} & \beta_p \\ 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \mathbf{x}_{t-1} + \begin{bmatrix} u_t \\ 0 \\ \vdots \\ 0 \end{bmatrix} \quad \text{(state equation)}$$

and

$$x_t = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} \mathbf{x}_t \quad \text{(observation equation, with } v_t = 0\text{)}.$$
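The companion-matrix construction above can be sketched in code (the coefficient values are illustrative only):

```python
import numpy as np

def companion(betas):
    """Companion (state-transition) matrix B for an AR(p) model
    x_t = beta_1 x_{t-1} + ... + beta_p x_{t-p} + u_t."""
    p = len(betas)
    B = np.zeros((p, p))
    B[0, :] = betas              # first row carries the AR coefficients
    B[1:, :-1] = np.eye(p - 1)   # sub-diagonal shifts the state down
    return B

B = companion([0.5, -0.3])       # hypothetical AR(2) coefficients
A = np.array([1.0, 0.0])         # observation picks off the first component
```

Multiplying $B$ by the lagged state $(x_{t-1}, x_{t-2})'$ returns $(\beta_1 x_{t-1} + \beta_2 x_{t-2},\ x_{t-1})'$, so adding $(u_t, 0)'$ reproduces the AR(2) recursion in the first coordinate and shifts the lag in the second.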

Hidden Markov Model: Assume that there are $m$ states, and that the observations $y_t$ are discrete and take on $n$ possible values. Suppose that the $m$ states are denoted by the unit vectors:

$$e_1 = (1, 0, \ldots, 0)', \; e_2 = (0, 1, \ldots, 0)', \; \ldots, \; e_m = (0, 0, \ldots, 1)'.$$

Suppose that the $n$ possible observations taken at each state are denoted by the unit vectors $f_1, f_2, \ldots, f_n$ in $\mathbb{R}^n$.

Let $x_t$ denote the state at time $t$, coded as one of the unit vectors $e_1, \ldots, e_m$, and let $P = (p_{ij})$ denote the transition matrix, $p_{ij} = P(\text{state } j \text{ at } t \mid \text{state } i \text{ at } t-1)$. Note that

$$E(x_t \mid x_{t-1}) = P' x_{t-1}.$$

Let $u_t = x_t - P' x_{t-1}$, so that

$$x_t = P' x_{t-1} + u_t \quad \text{(the state equation)}$$

with $E(u_t) = 0$.

Also, conditional on $x_{t-1} = e_i$,

$$E(u_t u_t' \mid x_{t-1} = e_i) = \mathrm{diag}(P' e_i) - (P' e_i)(P' e_i)',$$

where $\mathrm{diag}(v)$ = the diagonal matrix with the components of the vector $v$ along the diagonal.

Since $u_t$ has conditional mean zero given the past, $E(u_t u_s') = 0$ for $t \neq s$; thus $\{u_t\}$ satisfies the noise assumptions.

Similarly, let $y_t$ denote the observation at time $t$, coded as one of the unit vectors $f_1, \ldots, f_n$, and let $Q = (q_{jk})$ denote the matrix of observation probabilities, $q_{jk} = P(\text{observation } k \mid \text{state } j)$. We have

$$E(y_t \mid x_t) = Q' x_t.$$

Hence, letting $v_t = y_t - Q' x_t$,

$$y_t = Q' x_t + v_t \quad \text{(the observation equation)}$$

with $E(v_t) = 0$.

Hence with these definitions the state sequence of a Hidden Markov Model satisfies the state equation $x_t = P' x_{t-1} + u_t$ with $E(u_t) = 0$, and the observation sequence satisfies the observation equation $y_t = Q' x_t + v_t$ with $E(v_t) = 0$.
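The correspondence can be checked numerically; a sketch with a hypothetical 2-state, 3-symbol HMM (the transition matrix $P$ and observation matrix $Q$ are made up for illustration):

```python
import numpy as np

# Hypothetical 2-state, 3-symbol HMM.
P = np.array([[0.9, 0.1],        # P[i, j] = P(state j at t | state i at t-1)
              [0.3, 0.7]])
Q = np.array([[0.6, 0.3, 0.1],   # Q[j, k] = P(observation k | state j)
              [0.1, 0.2, 0.7]])

# With states coded as unit vectors e_i, the state-space matrices are:
B = P.T   # state equation:        x_t = P' x_{t-1} + u_t
A = Q.T   # observation equation:  y_t = Q' x_t + v_t

e1 = np.array([1.0, 0.0])
# E(x_t | x_{t-1} = e_1) = P' e_1 = (first row of P)'.
cond_mean = B @ e1
```

Because the state is an indicator vector, $P' e_i$ simply reads off row $i$ of $P$, i.e. the conditional distribution of the next state; the same holds for $Q' e_i$ and the observation distribution.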

Kalman Filtering

We are now interested in determining the state vector $x_t$ in terms of some or all of the observation vectors $y_1, y_2, y_3, \ldots, y_T$. We will consider finding the best linear predictor. (We can include a constant term if, in addition, one of the observations, $y_0$ say, is the vector of 1s.) We will consider estimation of $x_t$ in terms of:

1. $y_1, y_2, y_3, \ldots, y_{t-1}$ (the prediction problem)
2. $y_1, y_2, y_3, \ldots, y_t$ (the filtering problem)
3. $y_1, y_2, y_3, \ldots, y_T$, with $t < T$ (the smoothing problem)

For any vector $x$ define

$$\hat{x}_{|s} = \left( \hat{x}^{(1)}_{|s}, \ldots, \hat{x}^{(k)}_{|s} \right)',$$

where $\hat{x}^{(i)}_{|s}$ is the best linear predictor of $x^{(i)}$, the $i$th component of $x$, based on $y_0, y_1, y_2, \ldots, y_s$. The best linear predictor of $x^{(i)}$ is the linear function of $y_0, y_1, \ldots, y_s$ that minimizes

$$E\left[ \left( x^{(i)} - \hat{x}^{(i)}_{|s} \right)^2 \right].$$

Remark: The best predictor $\hat{x}_{|s}$ is the unique vector of the form

$$\hat{x}_{|s} = C_0 y_0 + C_1 y_1 + C_2 y_2 + \cdots + C_s y_s,$$

where the matrices $C_0, C_1, C_2, \ldots, C_s$ are selected so that the prediction error is orthogonal to the data:

$$E\left[ (x - \hat{x}_{|s})\, y_i' \right] = 0, \quad i = 0, 1, \ldots, s.$$

Remark: If $x, y_1, y_2, \ldots, y_s$ are normally distributed, then

$$\hat{x}_{|s} = E(x \mid y_1, y_2, \ldots, y_s).$$

Remark: Let $u$ and $v$ be two random vectors. Then $\hat{u}$ is the optimal linear predictor of $u$ based on $v$ if

$$E(u - \hat{u}) = 0 \quad \text{and} \quad E\left[ (u - \hat{u})\, v' \right] = 0.$$
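This orthogonality characterization can be verified numerically; a sketch that forms the linear predictor from sample moments and checks that the residual is uncorrelated with $v$ (all data simulated, dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate jointly distributed vectors: u is 2-dimensional, v is 3-dimensional.
n = 10_000
M = rng.standard_normal((5, 5))
w = rng.standard_normal((n, 5)) @ M.T
u, v = w[:, :2], w[:, 2:]

mu_u, mu_v = u.mean(axis=0), v.mean(axis=0)
C = np.cov(u.T, v.T)                    # 5x5 joint sample covariance
C_uv, C_vv = C[:2, 2:], C[2:, 2:]       # Cov(u, v) and Var(v)

Bhat = C_uv @ np.linalg.inv(C_vv)
u_hat = mu_u + (v - mu_v) @ Bhat.T      # best linear predictor of u from v

# Orthogonality: the prediction error is uncorrelated with v.
resid = u - u_hat
cross = resid.T @ (v - mu_v) / n
```

Because `Bhat` is built from the same sample moments used in the check, `cross` vanishes up to floating-point error; this is the empirical counterpart of the orthogonality conditions above.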


Kalman Filtering: Let $\{x_t : t \in T\}$ and $\{y_t : t \in T\}$ denote two vector-valued time series that satisfy the system of equations:

$$y_t = A_t x_t + v_t, \qquad x_t = B x_{t-1} + u_t.$$

Let $x_{t|s}$ denote the best linear predictor of $x_t$ based on $y_0, y_1, \ldots, y_s$, and let

$$\Sigma_{t|s} = E\left[ (x_t - x_{t|s})(x_t - x_{t|s})' \right].$$

Then

$$x_{t|t} = x_{t|t-1} + K_t\left( y_t - A_t x_{t|t-1} \right), \quad \text{where} \quad K_t = \Sigma_{t|t-1} A_t' \left[ A_t \Sigma_{t|t-1} A_t' + \Sigma_v \right]^{-1}.$$

One also assumes that the initial vector $x_0$ has mean $\mu$ and covariance matrix $\Sigma_0$, and that $x_0$ is uncorrelated with $u_t$ and $v_t$ for all $t$.

The covariance matrices are updated with

$$\Sigma_{t|t-1} = B\,\Sigma_{t-1|t-1} B' + \Sigma_u, \qquad \Sigma_{t|t} = \left( I - K_t A_t \right) \Sigma_{t|t-1}.$$

Summary: The Kalman equations

1. $x_{t|t-1} = B\, x_{t-1|t-1}$
2. $\Sigma_{t|t-1} = B\,\Sigma_{t-1|t-1} B' + \Sigma_u$
3. $x_{t|t} = x_{t|t-1} + K_t\left( y_t - A_t x_{t|t-1} \right)$
4. $K_t = \Sigma_{t|t-1} A_t' \left[ A_t \Sigma_{t|t-1} A_t' + \Sigma_v \right]^{-1}$
5. $\Sigma_{t|t} = \left( I - K_t A_t \right) \Sigma_{t|t-1}$

with $x_{0|0} = \mu$ and $\Sigma_{0|0} = \Sigma_0$.
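The five equations translate directly into a forward recursion; a sketch (constant matrices, with a tiny demo run on hypothetical values and zero data):

```python
import numpy as np

def kalman_filter(y, A, B, Sigma_u, Sigma_v, x0, P0):
    """Forward pass of the Kalman recursions.

    y: (T, k) observations; A: (k, m) observation matrix; B: (m, m)
    state transition; x0, P0: mean and covariance of the initial state.
    Returns the filtered means x_{t|t} and covariances Sigma_{t|t}."""
    T, m = len(y), len(x0)
    xs, Ps = np.zeros((T, m)), np.zeros((T, m, m))
    x, P = x0, P0
    for t in range(T):
        x_pred = B @ x                               # 1. x_{t|t-1}
        P_pred = B @ P @ B.T + Sigma_u               # 2. Sigma_{t|t-1}
        S = A @ P_pred @ A.T + Sigma_v               #    Var(innovation)
        K = P_pred @ A.T @ np.linalg.inv(S)          # 4. gain K_t
        x = x_pred + K @ (y[t] - A @ x_pred)         # 3. x_{t|t}
        P = (np.eye(m) - K @ A) @ P_pred             # 5. Sigma_{t|t}
        xs[t], Ps[t] = x, P
    return xs, Ps

# Tiny demo with hypothetical constant matrices and zero data.
A = np.array([[1.0, 0.0]])
B = 0.5 * np.eye(2)
xs, Ps = kalman_filter(np.zeros((5, 1)), A, B,
                       0.1 * np.eye(2), 0.2 * np.eye(1),
                       np.zeros(2), np.eye(2))
```

With zero observations and a zero initial mean, the filtered means stay at zero, while the covariances still converge through the recursion; the covariance updates never involve the data.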

Proof: Note that the innovation

$$e_t = y_t - A_t x_{t|t-1} = A_t\left( x_t - x_{t|t-1} \right) + v_t$$

is uncorrelated with $y_0, y_1, \ldots, y_{t-1}$.

Let $d_t = x_t - x_{t|t-1}$. Given $y_0, y_1, \ldots, y_{t-1}$, the best linear predictor of $d_t$ using $e_t$ is $K_t e_t$, where

$$K_t = \mathrm{Cov}(d_t, e_t)\left[ \mathrm{Var}(e_t) \right]^{-1} = \Sigma_{t|t-1} A_t' \left[ A_t \Sigma_{t|t-1} A_t' + \Sigma_v \right]^{-1},$$

proving (4). Hence

$$x_{t|t} = x_{t|t-1} + K_t\left( y_t - A_t x_{t|t-1} \right),$$

which is (3), and

$$\Sigma_{t|t} = E\left[ (x_t - x_{t|t})(x_t - x_{t|t})' \right] = \Sigma_{t|t-1} - K_t A_t \Sigma_{t|t-1} = \left( I - K_t A_t \right) \Sigma_{t|t-1},$$

which is (5).

Also, since $x_t = B x_{t-1} + u_t$ with $u_t$ uncorrelated with $y_0, y_1, \ldots, y_{t-1}$,

$$x_{t|t-1} = B\, x_{t-1|t-1},$$

which is (1), and

$$\Sigma_{t|t-1} = E\left[ (x_t - x_{t|t-1})(x_t - x_{t|t-1})' \right] = B\,\Sigma_{t-1|t-1} B' + \Sigma_u,$$

which is (2). The proof that $\mathrm{Var}(e_t) = A_t \Sigma_{t|t-1} A_t' + \Sigma_v$ will be left as an exercise.

Example: Suppose we have an AR(2) time series

$$x_t = \beta_1 x_{t-1} + \beta_2 x_{t-2} + u_t.$$

What is observed is the time series

$$y_t = x_t + v_t,$$

where $\{u_t : t \in T\}$ and $\{v_t : t \in T\}$ are white noise time series with standard deviations $\sigma_u$ and $\sigma_v$.

This model can be expressed as a state-space model by defining

$$\mathbf{x}_t = \begin{bmatrix} x_t \\ x_{t-1} \end{bmatrix}, \quad B = \begin{bmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{bmatrix}, \quad \mathbf{u}_t = \begin{bmatrix} u_t \\ 0 \end{bmatrix}, \quad A = \begin{bmatrix} 1 & 0 \end{bmatrix}.$$

The equation $x_t = \beta_1 x_{t-1} + \beta_2 x_{t-2} + u_t$ can then be written

$$\mathbf{x}_t = B\,\mathbf{x}_{t-1} + \mathbf{u}_t,$$

and the observation equation is $y_t = A\,\mathbf{x}_t + v_t$. Note:

$$\Sigma_u = \begin{bmatrix} \sigma_u^2 & 0 \\ 0 & 0 \end{bmatrix}, \qquad \Sigma_v = \sigma_v^2.$$

Let

$$\mathbf{x}_{t|s} = \begin{bmatrix} x_{t|s} \\ x_{t-1|s} \end{bmatrix} \quad \text{and} \quad \Sigma_{t|s} = E\left[ (\mathbf{x}_t - \mathbf{x}_{t|s})(\mathbf{x}_t - \mathbf{x}_{t|s})' \right].$$

For this model the Kalman equations become:

1. $\mathbf{x}_{t|t-1} = \begin{bmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{bmatrix} \mathbf{x}_{t-1|t-1}$

2. $\Sigma_{t|t-1} = \begin{bmatrix} \beta_1 & \beta_2 \\ 1 & 0 \end{bmatrix} \Sigma_{t-1|t-1} \begin{bmatrix} \beta_1 & 1 \\ \beta_2 & 0 \end{bmatrix} + \begin{bmatrix} \sigma_u^2 & 0 \\ 0 & 0 \end{bmatrix}$

3. $\mathbf{x}_{t|t} = \mathbf{x}_{t|t-1} + K_t \left( y_t - \begin{bmatrix} 1 & 0 \end{bmatrix} \mathbf{x}_{t|t-1} \right)$

4. $K_t = \Sigma_{t|t-1} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \left( \begin{bmatrix} 1 & 0 \end{bmatrix} \Sigma_{t|t-1} \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \sigma_v^2 \right)^{-1}$

5. $\Sigma_{t|t} = \left( I - K_t \begin{bmatrix} 1 & 0 \end{bmatrix} \right) \Sigma_{t|t-1}$
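A sketch of this example: simulate the AR(2) series, observe it with noise, and run the forward recursion (the parameter values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
beta1, beta2 = 0.6, 0.2          # hypothetical AR(2) coefficients (stationary)
sigma_u, sigma_v = 1.0, 1.0      # hypothetical noise standard deviations

# Simulate the AR(2) state and the noisy observations y_t = x_t + v_t.
T = 300
x = np.zeros(T)
for t in range(2, T):
    x[t] = beta1 * x[t-1] + beta2 * x[t-2] + sigma_u * rng.standard_normal()
y = x + sigma_v * rng.standard_normal(T)

# State-space form and the forward (filtering) recursion, equations 1-5.
B = np.array([[beta1, beta2], [1.0, 0.0]])
A = np.array([[1.0, 0.0]])
Su = np.array([[sigma_u**2, 0.0], [0.0, 0.0]])
xf, Pf = np.zeros(2), np.eye(2)             # x_{0|0}, Sigma_{0|0}
filtered = np.zeros(T)
for t in range(T):
    xp = B @ xf                             # 1. x_{t|t-1}
    Pp = B @ Pf @ B.T + Su                  # 2. Sigma_{t|t-1}
    S = (A @ Pp @ A.T)[0, 0] + sigma_v**2   #    innovation variance
    K = (Pp @ A.T / S).ravel()              # 4. gain K_t
    xf = xp + K * (y[t] - xp[0])            # 3. x_{t|t}
    Pf = (np.eye(2) - np.outer(K, A)) @ Pp  # 5. Sigma_{t|t}
    filtered[t] = xf[0]
```

The point of the exercise: the filtered estimate $x_{t|t}$ tracks the hidden series more closely than the raw observation $y_t$ does, since it pools the noisy measurement with the model-based prediction.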

Kalman Filtering (smoothing): Now consider finding $x_{t|T}$, the best linear predictor of $x_t$ based on all of the observations $y_0, y_1, \ldots, y_T$. These can be found by successive backward recursions for $t = T, T-1, \ldots, 2, 1$:

$$x_{t-1|T} = x_{t-1|t-1} + J_{t-1}\left( x_{t|T} - x_{t|t-1} \right), \quad \text{where} \quad J_{t-1} = \Sigma_{t-1|t-1}\, B'\, \Sigma_{t|t-1}^{-1}.$$

The covariance matrices satisfy the recursions

$$\Sigma_{t-1|T} = \Sigma_{t-1|t-1} + J_{t-1}\left( \Sigma_{t|T} - \Sigma_{t|t-1} \right) J_{t-1}'.$$

The backward recursions:

1. $J_{t-1} = \Sigma_{t-1|t-1}\, B'\, \Sigma_{t|t-1}^{-1}$
2. $x_{t-1|T} = x_{t-1|t-1} + J_{t-1}\left( x_{t|T} - x_{t|t-1} \right)$
3. $\Sigma_{t-1|T} = \Sigma_{t-1|t-1} + J_{t-1}\left( \Sigma_{t|T} - \Sigma_{t|t-1} \right) J_{t-1}'$

In the example, $x_{t|t}$, $x_{t|t-1}$, $\Sigma_{t|t}$, and $\Sigma_{t|t-1}$ have already been calculated in the forward recursion.
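The backward recursions can be sketched as a function that consumes the forward-pass quantities (the argument names and the trivial demo inputs are my own):

```python
import numpy as np

def rts_smoother(x_filt, P_filt, x_pred, P_pred, B):
    """Backward (smoothing) recursions 1-3.

    x_filt[t] = x_{t|t},   P_filt[t] = Sigma_{t|t},
    x_pred[t] = x_{t|t-1}, P_pred[t] = Sigma_{t|t-1}  (from the forward pass).
    Returns x_{t|T} and Sigma_{t|T} for t = 0, ..., T-1."""
    T = len(x_filt)
    xs, Ps = x_filt.copy(), P_filt.copy()
    for t in range(T - 2, -1, -1):
        J = P_filt[t] @ B.T @ np.linalg.inv(P_pred[t + 1])         # 1. J_t
        xs[t] = x_filt[t] + J @ (xs[t + 1] - x_pred[t + 1])        # 2. x_{t|T}
        Ps[t] = P_filt[t] + J @ (Ps[t + 1] - P_pred[t + 1]) @ J.T  # 3. Sigma_{t|T}
    return xs, Ps

# Trivial consistency check: with identity dynamics and prediction moments
# equal to the filtered moments, smoothing changes nothing.
x_sm, P_sm = rts_smoother(np.zeros((3, 2)), np.stack([np.eye(2)] * 3),
                          np.zeros((3, 2)), np.stack([np.eye(2)] * 3),
                          np.eye(2))
```

In practice one first runs the forward (filtering) recursion, stores all four sequences of moments, and then runs this single backward sweep from $t = T$ down to $t = 1$.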
