Chapter 27: Linear Filtering - Part I: Kalman Filter. Standard Kalman filtering – linear dynamics.

1 Chapter 27: Linear Filtering - Part I: Kalman Filter. Standard Kalman filtering – linear dynamics

2 Kalman Filter

Model dynamics (discrete time):
  x_{k+1} = M(x_k) + w_{k+1}, or x_{k+1} = M x_k + w_{k+1} in the linear case
  x_k: true state, x_k ∈ R^n
  w_k: model error
  M: R^n → R^n in general; in the linear case M ∈ R^{n×n}

Observation:
  z_k = h(x_k) + v_k, or z_k = H x_k + v_k in the linear case
  h: R^n → R^m, H ∈ R^{m×n}, v_k ∈ R^m
  E(v_k) = 0, COV(v_k) = R_k
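As a concrete illustration of the model and observation equations above, here is a minimal simulation sketch in Python/NumPy. All numbers (n = 2, m = 1, and the particular M, H, Q, R) are hypothetical choices for illustration, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 2, 1                        # state and observation dimensions (hypothetical)
M = np.array([[1.0, 0.1],
              [0.0, 1.0]])         # linear model operator, M in R^{n x n}
H = np.array([[1.0, 0.0]])         # observation operator, H in R^{m x n}
Q = 0.01 * np.eye(n)               # model-error covariance: w_k ~ N(0, Q)
R = 0.25 * np.eye(m)               # observation-error covariance: v_k ~ N(0, R)

x = np.array([1.0, 0.0])           # true initial state x_0
states, obs = [], []
for k in range(50):
    x = M @ x + rng.multivariate_normal(np.zeros(n), Q)   # x_{k+1} = M x_k + w_{k+1}
    z = H @ x + rng.multivariate_normal(np.zeros(m), R)   # z_k = H x_k + v_k
    states.append(x.copy())
    obs.append(z.copy())

states, obs = np.array(states), np.array(obs)
print(states.shape, obs.shape)     # (50, 2) (50, 1)
```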

3 Filtering, Smoothing, Prediction

Given the observations F_N = {z_i | 1 ≤ i ≤ N} [Wiener, 1942] [Kolmogorov, 1942], estimating x_k is called:
  k < N: smoothing
  k = N: filtering
  k > N: prediction
(See page 464 for this classification.)

4 Statement of Problem – Linear case

  x_0 ~ N(m_0, P_0)
  x_{k+1} = M_k x_k + w_{k+1}, w_k ~ N(0, Q_k)
  z_k = H_k x_k + v_k, v_k ~ N(0, R_k)

Given F_k = {z_j | 1 ≤ j ≤ k}, find the best estimate \hat{x}_k of x_k that minimizes the mean squared error
  E[(x_k - \hat{x}_k)^T (x_k - \hat{x}_k)] = tr[E((x_k - \hat{x}_k)(x_k - \hat{x}_k)^T)] = tr[\hat{P}_k].
If \hat{x}_k is also unbiased, it is the minimum-variance estimate.

5 Model Forecast Step

At time k = 0 the initial information F_0 is given: \hat{x}_0 = m_0, \hat{P}_0 = P_0.
Given \hat{x}_0, the predictable part of x_1 is the forecast
  x_1^f = M_0 \hat{x}_0.
Error in the prediction:
  e_1^f = x_1 - x_1^f = M_0 (x_0 - \hat{x}_0) + w_1 = M_0 e_0 + w_1.

6 Forecast covariance

Covariance of the forecast error:
  P_1^f = E[e_1^f (e_1^f)^T] = M_0 P_0 M_0^T + Q_1.
Predicted observation:
  E[z_1 | x_1 = x_1^f] = E[H_1 x_1 + v_1 | x_1 = x_1^f] = H_1 x_1^f
  COV(z_1 | x_1^f) = E{[z_1 - E(z_1 | x_1^f)][z_1 - E(z_1 | x_1^f)]^T} = E(v_1 v_1^T) = R_1.

7 Basic idea

At time k = 1 we have the forecast x_1^f with covariance P_1^f and the observation z_1 with covariance R_1. Fast-forwarding from time k-1 to k, the same picture holds at a general step: a forecast x_k^f with covariance P_k^f and an observation z_k with covariance R_k.

8 Forecast from k-1 to k

  x_k^f = M_{k-1} \hat{x}_{k-1}
  e_k^f = x_k - x_k^f = M_{k-1}(x_{k-1} - \hat{x}_{k-1}) + w_k = M_{k-1} \hat{e}_{k-1} + w_k
  ∴ P_k^f = E[e_k^f (e_k^f)^T] = M_{k-1} \hat{P}_{k-1} M_{k-1}^T + Q_k
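The forecast step on this slide can be written as a two-line routine. This is a minimal sketch assuming a previous analysis pair (xa, Pa) and hypothetical NumPy arrays M and Q standing in for M_{k-1} and Q_k; the names are chosen here for readability and do not come from the text.

```python
import numpy as np

def forecast(xa, Pa, M, Q):
    """Forecast step: x_k^f = M_{k-1} x_{k-1}^a and P_k^f = M_{k-1} P_{k-1}^a M_{k-1}^T + Q_k."""
    xf = M @ xa
    Pf = M @ Pa @ M.T + Q
    return xf, Pf
```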

9 Observations at time k

Actual observation: z_k = H_k x_k + v_k.
Model-predicted observation: \hat{z}_k = H_k x_k^f.

10 Data Assimilation Step

  \hat{x}_k = x_k^f + K_k [z_k - H_k x_k^f],
where x_k^f is the prior (forecast), K_k is the Kalman gain, and (z_k - H_k x_k^f) is the innovation.

11 Posterior estimate – also known as the analysis

  \hat{x}_k = x_k^f + K_k [z_k - H_k x_k^f] = (I - K_k H_k) x_k^f + K_k z_k.
Substituting z_k = H_k x_k + v_k and simplifying gives the analysis error
  \hat{e}_k = x_k - \hat{x}_k = (I - K_k H_k) e_k^f - K_k v_k.

12 Covariance of the analysis

  \hat{P}_k = E[\hat{e}_k \hat{e}_k^T] = (I - K_k H_k) P_k^f (I - K_k H_k)^T + K_k R_k K_k^T
            = P_k^f - K_k H_k P_k^f - P_k^f H_k^T K_k^T + K_k D_k K_k^T,
where D_k = H_k P_k^f H_k^T + R_k.

13 Conditions on the Kalman gain – minimization of the total variance

Minimizing tr(\hat{P}_k) with respect to K_k gives
  K_k = P_k^f H_k^T D_k^{-1} = P_k^f H_k^T [H_k P_k^f H_k^T + R_k]^{-1}.
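Putting this gain together with the analysis equations of the previous slides, a single update step might look as follows. This is a sketch, not the chapter's code; it uses the Joseph form for the covariance, which reduces to (I - K_k H_k) P_k^f at the optimal gain but is numerically more robust.

```python
import numpy as np

def analysis(xf, Pf, z, H, R):
    """Analysis step with the optimal gain K_k = P_k^f H_k^T [H_k P_k^f H_k^T + R_k]^{-1}."""
    D = H @ Pf @ H.T + R                      # innovation covariance D_k
    K = Pf @ H.T @ np.linalg.inv(D)           # Kalman gain
    xa = xf + K @ (z - H @ xf)                # posterior (analysis) estimate
    I = np.eye(Pf.shape[0])
    Pa = (I - K @ H) @ Pf @ (I - K @ H).T + K @ R @ K.T   # Joseph-form covariance
    return xa, Pa, K
```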

14 Comments on the Kalman gain

3. An interpretation of K_k (n = m, H_k = I):
  Let P_k^f = Diag[P_11^f, P_22^f, ..., P_nn^f] and R_k = Diag[R_11, R_22, ..., R_nn].
  Then K_k = P_k^f H_k^T [H_k P_k^f H_k^T + R_k]^{-1} = P_k^f [P_k^f + R_k]^{-1}
           = Diag[P_11^f/(P_11^f + R_11), P_22^f/(P_22^f + R_22), ..., P_nn^f/(P_nn^f + R_nn)].
  \hat{x}_k = x_k^f + K_k [z_k - H_k x_k^f] = x_k^f + K_k [z_k - x_k^f] = (I - K_k) x_k^f + K_k z_k.
  ∴ If P_ii^f is large, the observation component z_{i,k} receives a larger weight (see the sketch below).
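A quick numerical check of this diagonal interpretation, with hypothetical variances (none taken from the text):

```python
import numpy as np

Pf_diag = np.array([4.0, 0.25, 1.0])    # hypothetical forecast variances P_ii^f
R_diag  = np.array([1.0, 1.0,  1.0])    # hypothetical observation variances R_ii

K_diag = Pf_diag / (Pf_diag + R_diag)   # K_k = Diag[P_ii^f / (P_ii^f + R_ii)]
print(K_diag)                           # [0.8 0.2 0.5]: larger P_ii^f gives z_i more weight
```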

15 Comments – special cases

4. P_k^f and \hat{P}_k (and hence K_k) are independent of the observations and can be computed off-line.
5. No observations:
  \hat{x}_k = x_k^f and \hat{P}_k = P_k^f for all k ≥ 0
  x_k^f = M_{k-1} x_{k-1}^f = M_{k-1} M_{k-2} M_{k-3} ... M_1 M_0 x_0^f
  P_k^f = M_{k-1} P_{k-1}^f M_{k-1}^T + Q_k = M(k-1:0) P_0 M^T(k-1:0) + Σ_j M(k-1:j+1) Q_{j+1} M^T(k-1:j+1),
  where M(i:j) = M_i M_{i-1} M_{i-2} ... M_j.
  If Q_j ≡ 0, then P_k^f = M(k-1:0) P_0 M^T(k-1:0).

16 Special cases – continued

6. No dynamics (the static case):
  M_k = I, w_k ≡ 0, Q_k ≡ 0, so x_{k+1} = x_k = x and z_k = H_k x + v_k.
  x_k^f = \hat{x}_{k-1}, with \hat{x}_0 = E(x_0)
  P_k^f = \hat{P}_{k-1}, with \hat{P}_0 = P_0
  => the filter reduces to the static estimator of (17.2.11) – (17.2.12) (see the sketch below).
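A minimal sketch of this static case for a scalar state, assuming hypothetical values for the true state, the observation variance r, and the prior (m_0, P_0); the filter simply keeps averaging in each new observation.

```python
import numpy as np

rng = np.random.default_rng(1)

x_true = 3.0                    # hypothetical constant (static) scalar state
r = 0.5                         # observation-error variance
xa, Pa = 0.0, 10.0              # prior mean m_0 and variance P_0

for k in range(200):
    z = x_true + rng.normal(0.0, np.sqrt(r))
    xf, Pf = xa, Pa             # no dynamics: forecast equals the previous analysis
    K = Pf / (Pf + r)
    xa = xf + K * (z - xf)
    Pa = (1.0 - K) * Pf

print(xa, Pa)                   # xa approaches x_true; Pa shrinks toward 0
```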

17 Special cases

7. When the observations are perfect, R_k ≡ 0:
  K_k = P_k^f H_k^T [H_k P_k^f H_k^T + R_k]^{-1} = P_k^f H_k^T [H_k P_k^f H_k^T]^{-1}
  H_k: m × n, P_k^f: n × n, H_k^T: n × m => [H_k P_k^f H_k^T]^{-1}: m × m.
  Recall from (27.2.19): \hat{P}_k = (I - K_k H_k) P_k^f. With this gain, K_k H_k K_k H_k = K_k H_k, which leads to the rank argument on the next slide.

18 Special cases

  ∴ (I - K_k H_k) = (I - K_k H_k)^2, i.e. (I - K_k H_k) is idempotent.
  Fact: an idempotent matrix other than the identity is singular.
  => Rank(I - K_k H_k) ≤ n - 1.
  ∴ Since \hat{P}_k = (I - K_k H_k) P_k^f, Rank(\hat{P}_k) ≤ min{Rank(I - K_k H_k), Rank(P_k^f)} ≤ n - 1.
  ∴ The analysis covariance \hat{P}_k is singular; when R_k is small, \hat{P}_k is nearly singular, which causes computational instability.
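The idempotency and the resulting rank deficiency are easy to confirm numerically. A sketch with hypothetical P_k^f and H_k (random, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 2
A = rng.normal(size=(n, n))
Pf = A @ A.T + np.eye(n)                      # hypothetical SPD forecast covariance
H = rng.normal(size=(m, n))                   # hypothetical observation operator

K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T)    # gain with R_k = 0
IKH = np.eye(n) - K @ H
print(np.allclose(IKH, IKH @ IKH))            # True: I - K_k H_k is idempotent
Pa = IKH @ Pf                                 # analysis covariance
print(np.linalg.matrix_rank(Pa), "<=", n - 1) # rank-deficient analysis covariance
```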

19 Special cases

8. Residual checking:
  r_k = z_k - H_k x_k^f is the innovation (residual), and \hat{x}_k = x_k^f + K_k r_k.
  r_k = z_k - H_k x_k^f = H_k x_k + v_k - H_k x_k^f = H_k (x_k - x_k^f) + v_k = H_k e_k^f + v_k
  ∴ COV(r_k) = H_k P_k^f H_k^T + R_k.
  ∴ By computing r_k and comparing its sample covariance with H_k P_k^f H_k^T + R_k, we can check whether the filter is performing as expected (see the sketch below).
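A sketch of such a residual check, assuming the filter has reached a near steady state so that P_k^f is roughly constant; residuals is a list of innovation vectors r_k collected while running the filter (all names here are illustrative).

```python
import numpy as np

def innovation_check(residuals, H, Pf, R):
    """Compare the sample covariance of the innovations r_k = z_k - H_k x_k^f
    with the theoretical value H_k P_k^f H_k^T + R_k."""
    sample_cov = np.cov(np.asarray(residuals).T)
    theory_cov = H @ Pf @ H.T + R
    return sample_cov, theory_cov
```

If the two covariances differ substantially, one of the assumed quantities (M, H, Q, or R) is likely mis-specified.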

20 10. Computational Cost


22 Example 27.2.1 Scalar Dynamics with No Observation

  x_k = a x_{k-1} + w_k, a > 0, w_k ~ N(0, q), x_0 ~ N(m_0, P_0)
  E(x_k) = a^k E(x_0) = a^k m_0
  P_k = Var(x_k) = Var(a x_{k-1} + w_k) = a^2 P_{k-1} + q
  ∴ P_k = a^{2k} P_0 + q (a^{2k} - 1)/(a^2 - 1)   (for a ≠ 1; see the sketch below)
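The closed form can be checked against the recursion directly. A sketch with hypothetical a, q, m_0, P_0 (a ≠ 1):

```python
import numpy as np

a, q, m0, P0 = 0.9, 0.2, 1.0, 2.0        # hypothetical values, a != 1
k = 20

P_rec = P0
for _ in range(k):
    P_rec = a**2 * P_rec + q             # P_k = a^2 P_{k-1} + q

P_closed = a**(2*k) * P0 + q * (a**(2*k) - 1) / (a**2 - 1)
print(np.isclose(P_rec, P_closed))       # True: closed form matches the recursion
print(a**k * m0)                         # E(x_k) = a^k m_0
```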

23 Scalar dynamics

Note: for given m_0, P_0, q, the behavior of the moments depends on a:
  1. 0 < a < 1: lim E(x_k) = 0, lim P_k = q/(1 - a^2)
  2. 1 < a < ∞: |E(x_k)| → ∞ (for m_0 ≠ 0) and lim P_k = ∞
  3. a = 1: x_k = x_0 + Σ_j w_j, so E(x_k) = m_0 and P_k = P_0 + kq

24 Example 27.2.2 Kalman Filtering

  x_{k+1} = a x_k + w_{k+1}, w_{k+1} ~ N(0, q)
  z_k = h x_k + v_k, v_k ~ N(0, r)
Forecast:
  x_{k+1}^f = a \hat{x}_k
  P_{k+1}^f = a^2 \hat{P}_k + q
Analysis:
  \hat{x}_k = x_k^f + K_k [z_k - h x_k^f]
  K_k = P_k^f h [h^2 P_k^f + r]^{-1} = \hat{P}_k h r^{-1}
  \hat{P}_k = P_k^f - (P_k^f)^2 h^2 [h^2 P_k^f + r]^{-1} = P_k^f r [h^2 P_k^f + r]^{-1}
(See the sketch below.)
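A minimal scalar Kalman filter built from these recursions, with hypothetical parameters a, h, q, r (not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
a, h, q, r = 0.95, 1.0, 0.1, 0.5          # hypothetical scalar parameters

x = 1.0                                    # true state
xf, Pf = 0.0, 2.0                          # initial forecast x_1^f and P_1^f
for k in range(100):
    z = h * x + rng.normal(0.0, np.sqrt(r))     # observation z_k = h x_k + v_k
    K = Pf * h / (h**2 * Pf + r)                # K_k = P_k^f h [h^2 P_k^f + r]^{-1}
    xa = xf + K * (z - h * xf)                  # analysis estimate
    Pa = Pf * r / (h**2 * Pf + r)               # analysis variance
    xf, Pf = a * xa, a**2 * Pa + q              # forecast to step k+1
    x = a * x + rng.normal(0.0, np.sqrt(q))     # true state evolves

print(xf, Pf)                              # Pf settles at the Riccati fixed point
```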

25 Recurrences: Analysis of Stability (Homework 1)

  x_{k+1}^f = a(1 - K_k h) x_k^f + a K_k z_k
  e_{k+1}^f = a(1 - K_k h) e_k^f - a K_k v_k + w_{k+1}
  P_{k+1}^f = a^2 \hat{P}_k + q = a^2 P_k^f r (h^2 P_k^f + r)^{-1} + q

26 Example continued

  P_{k+1}^f = a^2 P_k^f r/(h^2 P_k^f + r) + q
  P_{k+1}^f / r = a^2 P_k^f/(h^2 P_k^f + r) + q/r = a^2 (P_k^f/r)/[h^2 (P_k^f/r) + 1] + q/r
Setting p_k = P_k^f / r and α = q/r (noise-variance ratio):
  p_{k+1} = a^2 p_k/(h^2 p_k + 1) + α,
a Riccati equation (first-order, scalar, nonlinear).

27 Asymptotic Properties

Let h = 1:
  p_{k+1} = a^2 p_k/(p_k + 1) + α.
Let δ_k = p_{k+1} - p_k = a^2 p_k/(p_k + 1) - p_k + α = [-p_k^2 + p_k(a^2 - 1 + α) + α]/(p_k + 1).
  ∴ δ_k = g(p_k)/(p_k + 1), where g(p) = -p^2 + p(a^2 + α - 1) + α.

28 Example continued

When p_{k+1} = p_k, δ_k = 0: an equilibrium. δ_k = 0 if g(p_k) = 0:
  -p_k^2 + p_k(a^2 + α - 1) + α = 0, i.e. -p_k^2 + β p_k + α = 0 with β = a^2 + α - 1.
The two roots are
  p*_+ = [β + (β^2 + 4α)^{1/2}]/2 > 0 and p*_- = [β - (β^2 + 4α)^{1/2}]/2 < 0.
Evaluate the derivative of g(·) at p*_+ and p*_-:
  g'(p) = -2p + β.

29 Example continued

  ∴ p*_+ is an attractor (stable); p*_- is a repellor (unstable).
Since p_k ≥ 0, the iterates converge to the positive root p* = p*_+, which satisfies
  p* = a^2 p*/(p* + 1) + α.
(See the sketch below.)
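The fixed points and their stability can be verified by iterating the scaled Riccati recurrence. A sketch with hypothetical a and α = q/r (h = 1):

```python
import numpy as np

a, alpha = 0.9, 0.4                       # hypothetical a and alpha = q/r, with h = 1
beta = a**2 + alpha - 1.0

# Roots of g(p) = -p^2 + beta*p + alpha = 0
p_plus  = (beta + np.sqrt(beta**2 + 4*alpha)) / 2.0   # positive root: attractor
p_minus = (beta - np.sqrt(beta**2 + 4*alpha)) / 2.0   # negative root: repellor

p = 5.0                                   # arbitrary nonnegative starting value
for k in range(100):
    p = a**2 * p / (p + 1.0) + alpha      # p_{k+1} = a^2 p_k/(p_k + 1) + alpha

print(p_plus, p)                          # the iteration settles at the positive root
print(np.isclose(p, a**2 * p / (p + 1.0) + alpha))    # fixed-point equation holds
```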

30 Rate of Convergence

Let y_k = p_k - p*, with p_{k+1} = a^2 p_k/(p_k + 1) + α. Then
  y_{k+1} = p_{k+1} - p* = [a^2 p_k/(p_k + 1) + α] - [a^2 p*/(p* + 1) + α]
          = a^2 p_k/(p_k + 1) - a^2 p*/(p* + 1)
          = a^2 (p_k - p*)/[(1 + p_k)(1 + p*)] = a^2 y_k/[(1 + p_k)(1 + p*)].
  ∴ 1/y_{k+1} = (1 + p_k)(1 + p*)/(a^2 y_k) = (1 + y_k + p*)(1 + p*)/(a^2 y_k)
             = [(1 + p*)/a]^2 / y_k + (1 + p*)/a^2.

31 Rate of convergence – continued

Let z_k = 1/y_k. Then z_{k+1} = c z_k + b, where c = [(1 + p*)/a]^2 and b = (1 + p*)/a^2.
Iterating: z_k = c^k z_0 + b(c^k - 1)/(c - 1), so z_k → ∞ whenever c = [(1 + p*)/a]^2 > 1.
When this holds, y_k = 1/z_k → 0 at an exponential (geometric) rate. (See the sketch below.)
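The geometric rate 1/c can be observed directly from the iterates. A sketch reusing the hypothetical a and α from the previous block:

```python
import numpy as np

a, alpha = 0.9, 0.4                                   # hypothetical values, h = 1
beta = a**2 + alpha - 1.0
p_star = (beta + np.sqrt(beta**2 + 4*alpha)) / 2.0    # attracting fixed point p*
c = ((1.0 + p_star) / a)**2                           # c from z_{k+1} = c z_k + b

p, ratios = 5.0, []
for k in range(15):
    p_next = a**2 * p / (p + 1.0) + alpha
    ratios.append((p_next - p_star) / (p - p_star))   # y_{k+1} / y_k
    p = p_next

print(c > 1.0)                                        # True here
print(ratios[-1], 1.0 / c)                            # ratio approaches 1/c < 1
```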

32 Rate of convergence – continued

From (27.2.38), with h = 1: \hat{P}_k = P_k^f r/(P_k^f + r). Since p_k = P_k^f/r converges to p*, the analysis covariance also converges: \hat{P}_k → r p*/(1 + p*).

33 Stability of the Filter

With h = 1, the forecast-error recurrence is
  e_{k+1}^f = a(1 - K_k h) e_k^f - a K_k v_k + w_{k+1},
whose homogeneous part is e_{k+1}^f = a(1 - K_k h) e_k^f.
  K_k = P_k^f/(P_k^f + r) = p_k/(p_k + 1), so 1 - K_k = 1/(p_k + 1).
  ∴ The homogeneous part evolves with coefficient a(1 - K_k) = a/(1 + p_k) → a/(1 + p*), and a/(1 + p*) < 1 precisely when c = [(1 + p*)/a]^2 > 1 (for a > 0); under the convergence condition of the previous slides, the filter error dynamics are therefore asymptotically stable.

