1 State Space Models

2 Let {x_t : t ∈ T} and {y_t : t ∈ T} denote two vector-valued time series that satisfy the system of equations:
y_t = A_t x_t + v_t (the observation equation)
x_t = B_t x_{t-1} + u_t (the state equation)
The time series {y_t : t ∈ T} is then said to have a state-space representation.

3 Note: {u_t : t ∈ T} and {v_t : t ∈ T} denote two vector-valued time series satisfying:
1. E(u_t) = E(v_t) = 0.
2. E(u_t u_s') = E(v_t v_s') = 0 if t ≠ s.
3. E(u_t u_t') = Σ_u and E(v_t v_t') = Σ_v.
4. E(u_t v_s') = E(v_t u_s') = 0 for all t and s.
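A minimal numerical sketch of these two equations (assuming Python with NumPy; the matrices below are hypothetical choices, not taken from the slides):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 2-dimensional state x_t, 1-dimensional observation y_t.
B = np.array([[0.8, 0.1],
              [0.0, 0.7]])          # state transition matrix B (constant here)
A = np.array([[1.0, 0.0]])          # observation matrix A_t (also taken constant)
Sigma_u = 0.1 * np.eye(2)           # Cov(u_t) = Sigma_u
Sigma_v = np.array([[0.5]])         # Cov(v_t) = Sigma_v

n_steps = 100
x = np.zeros(2)
states, observations = [], []
for t in range(n_steps):
    u = rng.multivariate_normal(np.zeros(2), Sigma_u)
    v = rng.multivariate_normal(np.zeros(1), Sigma_v)
    x = B @ x + u                   # state equation:       x_t = B x_{t-1} + u_t
    y = A @ x + v                   # observation equation: y_t = A x_t + v_t
    states.append(x)
    observations.append(y)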

4 Example: One might be tracking an object with several radar stations. The process {x_t : t ∈ T} gives the position of the object at time t. The process {y_t : t ∈ T} denotes the observations made at time t by the several radar stations. As in the Hidden Markov Model, we are interested in determining the position of the object, {x_t : t ∈ T}, from the observations, {y_t : t ∈ T}, made by the radar stations.

5 Example: Many of the models we have considered to date can be thought of as state-space models.
Autoregressive model of order p:
X_t = β_1 X_{t-1} + β_2 X_{t-2} + … + β_p X_{t-p} + u_t

6 Define
x_t = (X_t, X_{t-1}, … , X_{t-p+1})'
Then
y_t = X_t = [1 0 … 0] x_t (the Observation equation, with A_t = [1 0 … 0] and v_t = 0)
and
x_t = B x_{t-1} + (u_t, 0, … , 0)' (the State equation)
with
B =
[ β_1 β_2 … β_{p-1} β_p ]
[ 1   0   …  0      0   ]
[ 0   1   …  0      0   ]
[ …                 …   ]
[ 0   0   …  1      0   ]
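A short sketch of this companion-form construction (assuming NumPy; the function name ar_to_state_space and the AR(3) coefficients are illustrative, not from the slides):

import numpy as np

def ar_to_state_space(beta):
    # Companion-form matrices for X_t = beta[0] X_{t-1} + ... + beta[p-1] X_{t-p} + u_t
    p = len(beta)
    B = np.zeros((p, p))
    B[0, :] = beta                  # first row carries the AR coefficients
    B[1:, :-1] = np.eye(p - 1)      # sub-diagonal of 1's shifts the state vector down
    A = np.zeros((1, p))
    A[0, 0] = 1.0                   # y_t = X_t picks off the first component of x_t
    return A, B

A, B = ar_to_state_space([0.5, -0.2, 0.1])   # hypothetical AR(3) coefficients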

7 Hidden Markov Model: Assume that there are m states. Assume also that the observations Y_t are discrete and take on n possible values. Suppose that the m states are denoted by the unit vectors:
e_1 = (1, 0, … , 0)', e_2 = (0, 1, … , 0)', … , e_m = (0, 0, … , 1)'

8 Suppose that the n possible observations taken at each state are denoted by the unit vectors:
f_1 = (1, 0, … , 0)', f_2 = (0, 1, … , 0)', … , f_n = (0, 0, … , 1)'

9 Let P = (p_ij) denote the m × m matrix of transition probabilities, p_ij = P[x_t = e_j | x_{t-1} = e_i], and let B = P'. Note that
E[x_t | x_{t-1}] = P' x_{t-1} = B x_{t-1}.

10 Let u_t = x_t − B x_{t-1}. So that
x_t = B x_{t-1} + u_t (the State Equation)
with E[u_t | x_{t-1}] = 0.

11 Also, since x_t is always one of the unit vectors e_1, … , e_m, x_t x_t' = diag(x_t). Hence
E[x_t x_t' | x_{t-1}] = diag(E[x_t | x_{t-1}]) = diag(B x_{t-1})
and therefore
Cov[u_t | x_{t-1}] = diag(B x_{t-1}) − (B x_{t-1})(B x_{t-1})',
where diag(v) = the diagonal matrix with the components of the vector v along the diagonal.

12 Since E[u_t | x_{t-1}] = 0, then E(u_t) = 0 and E(u_t u_s') = 0 for t ≠ s. Thus {u_t} satisfies the assumptions made of the state-equation errors, with E(u_t u_t') playing the role of Σ_u.

13 We have defined the observation probabilities q_ij = P[y_t = f_j | x_t = e_i]; let Q = (q_ij) denote the m × n matrix of these probabilities. Hence
E[y_t | x_t] = Q' x_t.
Let A = Q' and v_t = y_t − A x_t.

14 Then
y_t = A x_t + v_t (the Observation Equation)
with A = Q' and E[v_t | x_t] = 0.

15 Hence with these definitions the state sequence of a Hidden Markov Model satisfies:
x_t = B x_{t-1} + u_t (the State Equation)
with B = P' and E[u_t | x_{t-1}] = 0.
The observation sequence satisfies:
y_t = A x_t + v_t (the Observation Equation)
with A = Q' and E[v_t | x_t] = 0.
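A small sketch of this correspondence (assuming NumPy; the transition and observation probabilities below are hypothetical, with m = 2 states and n = 3 observation values):

import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # p_ij = P[x_t = e_j | x_{t-1} = e_i]
Q = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])     # q_ij = P[y_t = f_j | x_t = e_i]

B = P.T                             # state equation matrix:       E[x_t | x_{t-1}] = B x_{t-1}
A = Q.T                             # observation equation matrix: E[y_t | x_t]     = A x_t

x_prev = np.array([1.0, 0.0])       # previous state e_1
print(B @ x_prev)                   # predicted state distribution: the first row of P
print(A @ x_prev)                   # predicted observation distribution: the first row of Q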

16 Kalman Filtering

17 We are now interested in determining the state vector x_t in terms of some or all of the observation vectors y_1, y_2, y_3, … , y_T. We will consider finding the “best” linear predictor. We can include a constant term if, in addition, one of the observations (y_0 say) is the vector of 1’s. We will consider estimation of x_t in terms of:
y_1, y_2, y_3, … , y_{t-1} (the prediction problem)
y_1, y_2, y_3, … , y_t (the filtering problem)
y_1, y_2, y_3, … , y_T (t < T, the smoothing problem)

18 For any vector x define:
x̂_s = (x̂_s^(1), x̂_s^(2), … , x̂_s^(k))'
where x̂_s^(i) is the best linear predictor of x^(i), the ith component of x, based on y_0, y_1, y_2, … , y_s. The best linear predictor of x^(i) is the linear function of y_0, y_1, y_2, … , y_s that minimizes
E[(x^(i) − x̂_s^(i))^2].

19 Remark: The best predictor x̂_s is the unique vector of the form:
x̂_s = C_0 y_0 + C_1 y_1 + C_2 y_2 + … + C_s y_s
where C_0, C_1, C_2, … , C_s are selected so that:
E[(x − x̂_s) y_j'] = 0 for j = 0, 1, 2, … , s.

20 Remark: If x, y_1, y_2, … , y_s are normally distributed then:
x̂_s = E[x | y_1, y_2, … , y_s].

21 Remark: Let u and v be two random vectors. Then û = Cv is the optimal linear predictor of u based on v if
E[(u − Cv) v'] = 0, that is, if C = E(uv') [E(vv')]^{-1}.
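A quick numerical sketch of this condition (assuming NumPy; the simulated joint distribution is a hypothetical example): compute C from sample moments and check the orthogonality E[(u − Cv) v'] = 0.

import numpy as np

rng = np.random.default_rng(1)

# simulate correlated vectors u (2-dim) and v (3-dim) with zero means
n = 100_000
v = rng.standard_normal((n, 3))
u = v[:, :2] @ np.array([[1.0, 0.3], [0.2, -0.5]]) + 0.1 * rng.standard_normal((n, 2))

E_uv = u.T @ v / n                  # sample E(u v')
E_vv = v.T @ v / n                  # sample E(v v')
C = E_uv @ np.linalg.inv(E_vv)      # optimal linear predictor coefficient

resid = u - v @ C.T                 # u - Cv for each draw
print(resid.T @ v / n)              # approximately 0: orthogonality E[(u - Cv) v'] = 0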

22 Kalman Filtering: Let {x_t : t ∈ T} and {y_t : t ∈ T} denote two vector-valued time series that satisfy the system of equations:
y_t = A_t x_t + v_t
x_t = B x_{t-1} + u_t
Again E(u_t) = E(v_t) = 0, E(u_t u_t') = Σ_u, E(v_t v_t') = Σ_v, and the series {u_t} and {v_t} are uncorrelated with each other and over time.

23 Then write
x_{t|s} = the best linear predictor of x_t based on y_0, y_1, … , y_s,
where
P_{t|s} = E[(x_t − x_{t|s})(x_t − x_{t|s})'].
One also assumes that the initial vector x_0 has mean μ and covariance matrix Σ, and that x_0 is uncorrelated with the error series {u_t} and {v_t}.

24 The covariance matrices are updated by
P_{t|t-1} = B P_{t-1|t-1} B' + Σ_u
P_{t|t} = (I − K_t A_t) P_{t|t-1}
with
P_{0|0} = Σ,
where K_t is the Kalman gain defined on the next slide.

25 Summary: The Kalman equations
1. x_{t|t-1} = B x_{t-1|t-1}
2. P_{t|t-1} = B P_{t-1|t-1} B' + Σ_u
3. K_t = P_{t|t-1} A_t' [A_t P_{t|t-1} A_t' + Σ_v]^{-1}
4. x_{t|t} = x_{t|t-1} + K_t (y_t − A_t x_{t|t-1})
5. P_{t|t} = (I − K_t A_t) P_{t|t-1}
with x_{0|0} = μ and P_{0|0} = Σ.
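A compact sketch of these five equations as one forward step (assuming NumPy; the function name kalman_step and the argument order are illustrative):

import numpy as np

def kalman_step(x_filt, P_filt, y, A, B, Sigma_u, Sigma_v):
    # One forward step of the Kalman recursions (equations 1-5 above).
    x_pred = B @ x_filt                                            # 1. predicted state x_{t|t-1}
    P_pred = B @ P_filt @ B.T + Sigma_u                            # 2. predicted error covariance
    K = P_pred @ A.T @ np.linalg.inv(A @ P_pred @ A.T + Sigma_v)   # 3. Kalman gain K_t
    x_filt_new = x_pred + K @ (y - A @ x_pred)                     # 4. update with the innovation
    P_filt_new = (np.eye(len(x_filt)) - K @ A) @ P_pred            # 5. updated error covariance
    return x_filt_new, P_filt_new, x_pred, P_pred

Starting from x_{0|0} = μ and P_{0|0} = Σ, the step is applied once per observation y_t.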

26 Proof: Now x_t = B x_{t-1} + u_t and u_t is uncorrelated with y_0, … , y_{t-1}, hence x_{t|t-1} = B x_{t-1|t-1} (equation 1). Note also that x_t − x_{t|t-1} = B(x_{t-1} − x_{t-1|t-1}) + u_t, so that P_{t|t-1} = B P_{t-1|t-1} B' + Σ_u (equation 2).

27 Let d_t = x_t − x_{t|t-1}. Let e_t = y_t − A_t x_{t|t-1} = A_t d_t + v_t (the innovation). Given y_0, y_1, y_2, … , y_{t-1}, the best linear predictor of d_t using e_t is:
K_t e_t with K_t = E(d_t e_t') [E(e_t e_t')]^{-1}.

28 Hence x_{t|t} = x_{t|t-1} + K_t e_t = x_{t|t-1} + K_t (y_t − A_t x_{t|t-1}) (equation 4), where K_t = E(d_t e_t') [E(e_t e_t')]^{-1}, and it remains to evaluate the two expectations. Now e_t = A_t d_t + v_t with d_t and v_t uncorrelated.

29 Also E(d_t d_t') = P_{t|t-1}, hence E(d_t e_t') = E(d_t d_t') A_t' = P_{t|t-1} A_t'.

30 Thus K_t = P_{t|t-1} A_t' [A_t P_{t|t-1} A_t' + Σ_v]^{-1} (equation 3), where E(e_t e_t') = A_t P_{t|t-1} A_t' + Σ_v. Also x_t − x_{t|t} = d_t − K_t e_t.

31 Hence equations 1–4 hold. The proof that P_{t|t} = (I − K_t A_t) P_{t|t-1} (equation 5) will be left as an exercise.
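As a purely numerical sanity check on the identity left as an exercise, one can verify (a sketch assuming NumPy, with arbitrary hypothetical covariance matrices) that with the optimal gain K_t the error covariance of x_t − x_{t|t}, written out directly, reduces to (I − K_t A_t) P_{t|t-1}:

import numpy as np

rng = np.random.default_rng(3)

n_x, n_y = 3, 2
A = rng.standard_normal((n_y, n_x))
M = rng.standard_normal((n_x, n_x))
P_pred = M @ M.T + np.eye(n_x)                                   # a positive-definite P_{t|t-1}
N = rng.standard_normal((n_y, n_y))
Sigma_v = N @ N.T + np.eye(n_y)                                  # a positive-definite Sigma_v

K = P_pred @ A.T @ np.linalg.inv(A @ P_pred @ A.T + Sigma_v)     # optimal gain K_t

# covariance of x_t - x_{t|t} = (I - KA) d_t - K v_t, written out directly
direct = (np.eye(n_x) - K @ A) @ P_pred @ (np.eye(n_x) - K @ A).T + K @ Sigma_v @ K.T
simple = (np.eye(n_x) - K @ A) @ P_pred                          # equation 5

print(np.allclose(direct, simple))                               # True for the optimal K_t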

32 Example: Suppose we have an AR(2) time series
X_t = β_1 X_{t-1} + β_2 X_{t-2} + u_t.
What is observed is the time series
Y_t = X_t + v_t.
{u_t | t ∈ T} and {v_t | t ∈ T} are white noise time series with standard deviations σ_u and σ_v.

33 This model can be expressed as a state-space model by defining:
x_t = (X_t, X_{t-1})', A = [1 0],
B =
[ β_1 β_2 ]
[ 1   0   ]
then
y_t = Y_t = A x_t + v_t
x_t = B x_{t-1} + (u_t, 0)'.

34 The equation:
X_t = β_1 X_{t-1} + β_2 X_{t-2} + u_t
can be written
x_t = B x_{t-1} + u_t* with u_t* = (u_t, 0)'.
Note:
Σ_u =
[ σ_u²  0 ]
[ 0     0 ]
and Σ_v = σ_v².

35 The Kalman equations for this example (with A = [1 0] constant):
1. x_{t|t-1} = B x_{t-1|t-1}
2. P_{t|t-1} = B P_{t-1|t-1} B' + Σ_u
3. K_t = P_{t|t-1} A' [A P_{t|t-1} A' + Σ_v]^{-1}
4. x_{t|t} = x_{t|t-1} + K_t (y_t − A x_{t|t-1})
5. P_{t|t} = (I − K_t A) P_{t|t-1}
Let x_{t|s} = (X_{t|s}, X_{t-1|s})' and let p11(t|s), p12(t|s), p22(t|s) denote the entries of the 2 × 2 matrix P_{t|s}.

36 The Kalman equations written out for the example:
1. x_{t|t-1} = B x_{t-1|t-1}, so that
X_{t|t-1} = β_1 X_{t-1|t-1} + β_2 X_{t-2|t-1}
and the second component of x_{t|t-1} is X_{t-1|t-1}.

37 2. P_{t|t-1} = B P_{t-1|t-1} B' + Σ_u, which gives
p11(t|t-1) = β_1² p11(t-1|t-1) + 2 β_1 β_2 p12(t-1|t-1) + β_2² p22(t-1|t-1) + σ_u²
p12(t|t-1) = β_1 p11(t-1|t-1) + β_2 p12(t-1|t-1)
p22(t|t-1) = p11(t-1|t-1)

38 3. K_t = P_{t|t-1} A' [A P_{t|t-1} A' + Σ_v]^{-1} with A = [1 0], so
K_t = ( p11(t|t-1), p12(t|t-1) )' / ( p11(t|t-1) + σ_v² ).

39 4. x_{t|t} = x_{t|t-1} + K_t (y_t − X_{t|t-1}), since A x_{t|t-1} = X_{t|t-1}.

40 5. P_{t|t} = (I − K_t A) P_{t|t-1}, which gives
p11(t|t) = (1 − K_t(1)) p11(t|t-1)
p12(t|t) = (1 − K_t(1)) p12(t|t-1)
p22(t|t) = p22(t|t-1) − K_t(2) p12(t|t-1)
where K_t(1) and K_t(2) denote the two components of K_t.
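A self-contained sketch of this example (assuming NumPy; the values β_1 = 0.6, β_2 = 0.3, σ_u = 1, σ_v = 2 are hypothetical) that builds the matrices, simulates the series, and runs the five Kalman equations:

import numpy as np

rng = np.random.default_rng(2)

beta1, beta2 = 0.6, 0.3            # hypothetical AR(2) coefficients
sigma_u, sigma_v = 1.0, 2.0        # hypothetical noise standard deviations

B = np.array([[beta1, beta2],
              [1.0,   0.0]])
A = np.array([[1.0, 0.0]])
Sigma_u = np.array([[sigma_u**2, 0.0],
                    [0.0,        0.0]])
Sigma_v = np.array([[sigma_v**2]])

# simulate X_t = beta1 X_{t-1} + beta2 X_{t-2} + u_t and Y_t = X_t + v_t
n_steps = 200
X = np.zeros(n_steps)
Y = np.zeros(n_steps)
for t in range(2, n_steps):
    X[t] = beta1 * X[t-1] + beta2 * X[t-2] + sigma_u * rng.standard_normal()
    Y[t] = X[t] + sigma_v * rng.standard_normal()

# forward (filtering) recursions, equations 1-5
x_filt = np.zeros(2)               # x_{0|0} = mu (taken as 0 here)
P_filt = 10.0 * np.eye(2)          # P_{0|0} = Sigma (a diffuse guess)
X_filtered = np.zeros(n_steps)
for t in range(n_steps):
    x_pred = B @ x_filt                                                # 1
    P_pred = B @ P_filt @ B.T + Sigma_u                                # 2
    K = P_pred @ A.T @ np.linalg.inv(A @ P_pred @ A.T + Sigma_v)       # 3
    x_filt = x_pred + (K @ (Y[t] - A @ x_pred)).ravel()                # 4
    P_filt = (np.eye(2) - K @ A) @ P_pred                              # 5
    X_filtered[t] = x_filt[0]

print(np.mean((X_filtered - X)**2), np.mean((Y - X)**2))

The final line compares the mean squared error of the filtered estimate of X_t with that of the raw observations Y_t.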


42 Kalman Filtering (smoothing):
Now consider finding x_{t|T}, the best linear predictor of x_t based on all of the observations y_0, y_1, … , y_T. These can be found by successive backward recursions for t = T, T − 1, … , 2, 1:
x_{t-1|T} = x_{t-1|t-1} + J_{t-1} (x_{t|T} − x_{t|t-1})
where
J_{t-1} = P_{t-1|t-1} B' [P_{t|t-1}]^{-1}.

43 The covariance matrices satisfy the recursions
P_{t-1|T} = P_{t-1|t-1} + J_{t-1} (P_{t|T} − P_{t|t-1}) J_{t-1}'.

44 The backward recursions
1. J_{t-1} = P_{t-1|t-1} B' [P_{t|t-1}]^{-1}
2. x_{t-1|T} = x_{t-1|t-1} + J_{t-1} (x_{t|T} − x_{t|t-1})
3. P_{t-1|T} = P_{t-1|t-1} + J_{t-1} (P_{t|T} − P_{t|t-1}) J_{t-1}'
In the example: x_{t|t-1}, x_{t|t}, P_{t|t-1} and P_{t|t} have already been calculated in the forward recursion.
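A sketch of these backward recursions (assuming NumPy; the function expects lists of the forward quantities x_{t|t}, P_{t|t}, x_{t|t-1}, P_{t|t-1} saved while filtering, and its name is illustrative):

import numpy as np

def kalman_smooth(x_filt, P_filt, x_pred, P_pred, B):
    # Backward (fixed-interval) smoothing recursions 1-3.
    # x_filt[t], P_filt[t] hold x_{t|t}, P_{t|t}; x_pred[t], P_pred[t] hold x_{t|t-1}, P_{t|t-1}.
    n = len(x_filt)
    x_smooth = [None] * n
    P_smooth = [None] * n
    x_smooth[-1] = x_filt[-1]                 # at t = T smoothed and filtered values agree
    P_smooth[-1] = P_filt[-1]
    for t in range(n - 1, 0, -1):
        J = P_filt[t-1] @ B.T @ np.linalg.inv(P_pred[t])                   # 1. smoother gain
        x_smooth[t-1] = x_filt[t-1] + J @ (x_smooth[t] - x_pred[t])        # 2.
        P_smooth[t-1] = P_filt[t-1] + J @ (P_smooth[t] - P_pred[t]) @ J.T  # 3.
    return x_smooth, P_smooth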

