
1 Kalman Filtering. Pieter Abbeel, UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics.

2 Overview
Kalman Filter = special case of a Bayes' filter with the dynamics model and sensor model being linear Gaussian:
x_{t+1} = A_t x_t + B_t u_t + w_t, with w_t ~ N(0, Q_t)
z_t = C_t x_t + v_t, with v_t ~ N(0, R_t)
The above can also be written as:
P(x_{t+1} | x_t, u_t) = N(A_t x_t + B_t u_t, Q_t)
P(z_t | x_t) = N(C_t x_t, R_t)
Note: I switched the time indexing on u to be in line with typical control-community conventions (which differ from the Probabilistic Robotics book).
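As an illustration of this model (a minimal sketch, not from the slides; the matrices and dimensions below are made-up examples), here is how one step of the assumed dynamics and measurement equations can be simulated in NumPy:

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up example system: 2D state, 1D control, 1D measurement.
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])   # dynamics matrix in x_{t+1} = A x_t + B u_t + w_t
    B = np.array([[0.5],
                  [1.0]])
    C = np.array([[1.0, 0.0]])   # measurement matrix in z_t = C x_t + v_t
    Q = 0.01 * np.eye(2)         # process-noise covariance (w_t ~ N(0, Q))
    R = np.array([[0.25]])       # measurement-noise covariance (v_t ~ N(0, R))

    def simulate_step(x, u):
        """Sample x_{t+1} and z_{t+1} from the linear Gaussian model above."""
        w = rng.multivariate_normal(np.zeros(2), Q)   # process noise
        x_next = A @ x + B @ u + w
        v = rng.multivariate_normal(np.zeros(1), R)   # measurement noise
        z_next = C @ x_next + v
        return x_next, z_next

    x_next, z_next = simulate_step(np.array([0.0, 1.0]), np.array([0.1]))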

3 Time Update
Assume we have the current belief for X_t: a Gaussian with mean μ_t and covariance Σ_t.
Then, after one time step passes, the joint over (X_t, X_{t+1}) is the product of this belief and the dynamics model P(x_{t+1} | x_t, u_t).
(Graphical model: X_t → X_{t+1}.)

4 Time Update: Finding the Joint
Now we can choose to continue by either of:
(i) molding it into a standard multivariate Gaussian format so we can read off the joint distribution's mean and covariance; or
(ii) observing that the exponent is a quadratic form in x_t and x_{t+1}, and that the exponent is the only place they appear; hence we know this is a multivariate Gaussian, and we can directly compute its mean and covariance. [Usually simpler!]

5 Time Update: Finding the Joint
We follow (ii) and find the means and covariance matrices of the joint. [Exercise: try to prove each of these without referring to this slide!]
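As a sketch of the standard result being referenced here (a reconstruction under the linear Gaussian dynamics above, not a verbatim copy of the slide's equations), the joint of (x_t, x_{t+1}) is:

    \begin{pmatrix} x_t \\ x_{t+1} \end{pmatrix}
    \sim \mathcal{N}\!\left(
    \begin{pmatrix} \mu_t \\ A_t \mu_t + B_t u_t \end{pmatrix},\;
    \begin{pmatrix} \Sigma_t & \Sigma_t A_t^\top \\ A_t \Sigma_t & A_t \Sigma_t A_t^\top + Q_t \end{pmatrix}
    \right)

Reading off the bottom entry of the mean and the bottom-right block of the covariance gives the time-update result recapped on the next slide.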

6 Time Update Recap
Assume we have x_t ~ N(μ_t, Σ_t) and x_{t+1} = A_t x_t + B_t u_t + w_t with w_t ~ N(0, Q_t).
Then we have the joint over (x_t, x_{t+1}) as above. Marginalizing the joint, we immediately get
x_{t+1} ~ N(A_t μ_t + B_t u_t, A_t Σ_t A_t^T + Q_t).
(Graphical model: X_t → X_{t+1}.)
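A minimal NumPy sketch of this time update, using the A_t, B_t, Q_t notation of these slides (the function name and interface are my own, not from the slides):

    import numpy as np

    def time_update(mu, Sigma, u, A, B, Q):
        """Kalman filter time update (prediction):
        propagate the Gaussian belief N(mu, Sigma) through
        x_{t+1} = A x_t + B u_t + w_t, w_t ~ N(0, Q)."""
        mu_pred = A @ mu + B @ u              # predicted mean
        Sigma_pred = A @ Sigma @ A.T + Q      # predicted covariance
        return mu_pred, Sigma_pred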

7 Generality!
The same derivation goes through in general: assume we have a Gaussian belief and a linear Gaussian model (with noise terms W and V); then we have a joint Gaussian, and marginalizing the joint, we immediately get the marginal of interest.

8 Observation Update
Assume we have the predicted belief over X_{t+1} (from the time update) and the measurement model Z_{t+1} = C_{t+1} X_{t+1} + v_{t+1}, v_{t+1} ~ N(0, R_{t+1}).
Then X_{t+1} and Z_{t+1} are jointly Gaussian, and by conditioning on Z_{t+1} (see the lecture slides on Gaussians) we readily get the updated belief over X_{t+1}.
(Graphical model: X_{t+1} → Z_{t+1}.)
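A matching NumPy sketch of the observation update via Gaussian conditioning, in the C, R notation of these slides (again, names and interface are my own):

    import numpy as np

    def measurement_update(mu_pred, Sigma_pred, z, C, R):
        """Condition the predicted Gaussian belief N(mu_pred, Sigma_pred)
        on the observation z = C x + v, v ~ N(0, R)."""
        S = C @ Sigma_pred @ C.T + R                # innovation covariance
        K = Sigma_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
        innovation = z - C @ mu_pred                # measurement residual
        mu = mu_pred + K @ innovation
        Sigma = (np.eye(len(mu_pred)) - K @ C) @ Sigma_pred
        return mu, Sigma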

9 Complete Kalman Filtering Algorithm
At time 0: start from the initial belief x_0 ~ N(μ_0, Σ_0).
For t = 1, 2, …:
Dynamics update: propagate the belief through the dynamics model.
Measurement update: condition the predicted belief on the new measurement; often written in terms of the Kalman gain and the "innovation" (the difference between the actual and the predicted measurement).
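As a hedged reconstruction of the recursion in the A_t, B_t, C_t, Q_t, R_t notation used above (a sketch, not a verbatim copy of the slide):

    \bar{\mu}_{t+1} = A_t \mu_t + B_t u_t,
    \qquad \bar{\Sigma}_{t+1} = A_t \Sigma_t A_t^\top + Q_t

    K_{t+1} = \bar{\Sigma}_{t+1} C_{t+1}^\top
              \left( C_{t+1} \bar{\Sigma}_{t+1} C_{t+1}^\top + R_{t+1} \right)^{-1}

    \mu_{t+1} = \bar{\mu}_{t+1} + K_{t+1} \left( z_{t+1} - C_{t+1} \bar{\mu}_{t+1} \right),
    \qquad \Sigma_{t+1} = \left( I - K_{t+1} C_{t+1} \right) \bar{\Sigma}_{t+1}

Here K_{t+1} is the Kalman gain and z_{t+1} - C_{t+1} \bar{\mu}_{t+1} is the innovation referred to on the slide.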

10 Kalman Filter Summary
Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2).
Optimal for linear Gaussian systems!

11 Forthcoming Extensions
Nonlinear systems: Extended Kalman Filter, Unscented Kalman Filter
Very large systems with sparsity structure: Sparse Information Filter
Very large systems with low-rank structure: Ensemble Kalman Filter
Kalman filtering over SE(3)
How to estimate A_t, B_t, C_t, Q_t, R_t from data (z_{0:T}, u_{0:T}): EM algorithm
How to compute P(x_t | z_{0:T}) (note the capital "T"): Smoothing

12 Things to Be Aware of That We Won't Cover
Square-root Kalman filter: keeps track of the square root of the covariance matrices; equally fast, numerically more stable (a bit more complicated conceptually).
If A_t = A, Q_t = Q, C_t = C, R_t = R: if the system is "observable", then the covariances and Kalman gain will converge to steady-state values as t → ∞. Can take advantage of this: pre-compute them and only track the mean, which is done by multiplying the Kalman gain with the "innovation".
A system is observable if and only if the following holds true: if there were zero noise, you could determine the initial state after a finite number of time steps.
Observable if and only if: rank([C; CA; CA^2; CA^3; …; CA^{n-1}]) = n (see the sketch after this slide).
Typically, if a system is not observable you will want to add a sensor to make it observable.
The Kalman filter can also be derived as the (recursively computed) least-squares solution to a (growing) set of linear equations.
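A minimal NumPy sketch of the observability rank test above (the function and the example matrices are my own, not from the slides):

    import numpy as np

    def is_observable(A, C):
        """Check rank([C; CA; CA^2; ...; CA^(n-1)]) == n for time-invariant A, C."""
        n = A.shape[0]
        blocks = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
        O = np.vstack(blocks)                 # observability matrix
        return np.linalg.matrix_rank(O) == n

    # Example: constant-velocity model observed through position only.
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    C = np.array([[1.0, 0.0]])
    print(is_observable(A, C))                # True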

13 Discrete Kalman Filter
Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation
x_t = A_t x_{t-1} + B_t u_t + ε_t
with a measurement
z_t = C_t x_t + δ_t.

14 Components of a Kalman Filter
A_t: matrix (n×n) that describes how the state evolves from t-1 to t without controls or noise.
B_t: matrix (n×l) that describes how the control u_t changes the state from t-1 to t.
C_t: matrix (k×n) that describes how to map the state x_t to an observation z_t.
ε_t, δ_t: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariance R_t and Q_t respectively. (Note: this follows the Probabilistic Robotics convention, in which R_t is the process-noise covariance and Q_t the measurement-noise covariance, the reverse of the Q/R roles on the earlier slides.)

15 Kalman Filter Updates in 1D

16 Kalman Filter Updates in 1D

17 Kalman Filter Updates in 1D
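As an illustration of a 1D update (my own sketch with made-up numbers, not the figures from these slides), a scalar Kalman filter step looks like this:

    # Scalar (1D) Kalman filter step: all quantities are plain numbers.
    def kf_1d_step(mu, sigma2, u, z, a=1.0, b=1.0, c=1.0, q=0.1, r=0.5):
        """One prediction + correction in 1D.
        Dynamics:    x' = a*x + b*u + w,  w ~ N(0, q)
        Measurement: z  = c*x' + v,       v ~ N(0, r)."""
        # Prediction (time update)
        mu_bar = a * mu + b * u
        sigma2_bar = a * sigma2 * a + q
        # Correction (measurement update)
        k = sigma2_bar * c / (c * sigma2_bar * c + r)   # Kalman gain
        mu_new = mu_bar + k * (z - c * mu_bar)          # mean shifts toward the measurement
        sigma2_new = (1 - k * c) * sigma2_bar           # variance shrinks after measuring
        return mu_new, sigma2_new

    print(kf_1d_step(mu=0.0, sigma2=1.0, u=1.0, z=1.3))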

18 Kalman Filter Updates

19 Linear Gaussian Systems: Initialization
Initial belief is normally distributed: bel(x_0) = N(x_0; μ_0, Σ_0).

20 Linear Gaussian Systems: Dynamics
Dynamics are a linear function of state and control plus additive noise:
x_t = A_t x_{t-1} + B_t u_t + ε_t, so that p(x_t | u_t, x_{t-1}) = N(x_t; A_t x_{t-1} + B_t u_t, R_t).

21 Linear Gaussian Systems: Dynamics

22 Linear Gaussian Systems: Observations
Observations are a linear function of the state plus additive noise:
z_t = C_t x_t + δ_t, so that p(z_t | x_t) = N(z_t; C_t x_t, Q_t).

23 Linear Gaussian Systems: Observations

24 Kalman Filter Algorithm
Algorithm Kalman_filter(μ_{t-1}, Σ_{t-1}, u_t, z_t):
Prediction:
μ̄_t = A_t μ_{t-1} + B_t u_t
Σ̄_t = A_t Σ_{t-1} A_t^T + R_t
Correction:
K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + Q_t)^{-1}
μ_t = μ̄_t + K_t (z_t − C_t μ̄_t)
Σ_t = (I − K_t C_t) Σ̄_t
Return μ_t, Σ_t
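A minimal NumPy implementation sketch of this algorithm, in the book's notation where R_t is the process-noise covariance and Q_t the measurement-noise covariance (my own code, not from the slides):

    import numpy as np

    def kalman_filter(mu_prev, Sigma_prev, u, z, A, B, C, R, Q):
        """One iteration of Kalman_filter(mu_{t-1}, Sigma_{t-1}, u_t, z_t)
        in the Probabilistic Robotics notation (R = process noise, Q = measurement noise)."""
        # Prediction
        mu_bar = A @ mu_prev + B @ u
        Sigma_bar = A @ Sigma_prev @ A.T + R
        # Correction
        K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)   # Kalman gain
        mu = mu_bar + K @ (z - C @ mu_bar)                             # innovation-weighted update
        Sigma = (np.eye(len(mu_bar)) - K @ C) @ Sigma_bar
        return mu, Sigma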

25 The Prediction-Correction-Cycle: Prediction

26 The Prediction-Correction-Cycle: Correction

27 The Prediction-Correction-Cycle: Prediction and Correction

