
1 EKF, UKF. Pieter Abbeel, UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics.

2 Kalman Filter. A Kalman filter is the special case of a Bayes filter in which the dynamics model and the sensor model are linear Gaussian: x_t = A_t x_{t-1} + B_t u_t + ε_t with ε_t ~ N(0, R_t), and z_t = C_t x_t + δ_t with δ_t ~ N(0, Q_t), together with a Gaussian prior on the initial state.

3 Kalman Filtering Algorithm. At time 0: initialize the belief with mean μ_0 and covariance Σ_0. For t = 1, 2, …: Dynamics update: propagate the belief through the linear dynamics model. Measurement update: condition the predicted belief on the measurement z_t.
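
A minimal sketch of the two updates in Python/NumPy, assuming the linear-Gaussian model from the previous slide (A, B, C as the system matrices, R the motion noise covariance, Q the measurement noise covariance); the function names and argument layout are illustrative, not a transcription of the slides.

    import numpy as np

    def kf_dynamics_update(mu, Sigma, u, A, B, R):
        """Propagate the Gaussian belief through the linear dynamics model."""
        mu_bar = A @ mu + B @ u
        Sigma_bar = A @ Sigma @ A.T + R
        return mu_bar, Sigma_bar

    def kf_measurement_update(mu_bar, Sigma_bar, z, C, Q):
        """Condition the predicted belief on the measurement z."""
        S = C @ Sigma_bar @ C.T + Q              # innovation covariance
        K = Sigma_bar @ C.T @ np.linalg.inv(S)   # Kalman gain
        mu = mu_bar + K @ (z - C @ mu_bar)
        Sigma = (np.eye(len(mu_bar)) - K @ C) @ Sigma_bar
        return mu, Sigma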

4 Nonlinear Dynamical Systems. Most realistic robotic problems involve nonlinear dynamics and measurement functions, versus the linear-Gaussian setting above.

5 Linearity Assumption Revisited. [Figure: a Gaussian p(x) pushed through a linear function remains Gaussian, so p(y) is again a Gaussian.]

6 Non-linear Function. [Figure: a Gaussian p(x) pushed through a non-linear function yields a non-Gaussian p(y); the overlaid "Gaussian of p(y)" has the mean and variance of y under p(y).]

7 EKF Linearization (1)

8 EKF Linearization (2). p(x) has high variance relative to the region in which the linearization is accurate.

9 EKF Linearization (3). p(x) has small variance relative to the region in which the linearization is accurate.

10 EKF Linearization: First Order Taylor Series Expansion. Dynamics model: for x_t "close to" μ_t we have f_t(x_t, u_t) ≈ f_t(μ_t, u_t) + F_t (x_t − μ_t), where F_t is the Jacobian of f_t with respect to x_t evaluated at (μ_t, u_t). Measurement model: for x_t "close to" μ_t we have h_t(x_t) ≈ h_t(μ_t) + H_t (x_t − μ_t), where H_t is the Jacobian of h_t with respect to x_t evaluated at μ_t.

11 EKF Linearization: Numerical. Numerically compute F_t column by column: the i-th column is approximated by (f_t(μ_t + ε e_i, u_t) − f_t(μ_t, u_t)) / ε. Here e_i is the basis vector with all entries equal to zero except for the i-th entry, which equals 1. To approximate F_t as closely as possible, ε is chosen to be a small number, but not too small, to avoid numerical issues.
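
A sketch of this column-by-column finite-difference computation; f is any dynamics function taking (x, u), and mu, u, and the step size eps are placeholders.

    import numpy as np

    def numerical_jacobian(f, mu, u, eps=1e-6):
        """Approximate F_t = df/dx at x = mu, one column per basis vector e_i."""
        y0 = f(mu, u)
        F = np.zeros((len(y0), len(mu)))
        for i in range(len(mu)):
            e_i = np.zeros(len(mu))
            e_i[i] = 1.0
            F[:, i] = (f(mu + eps * e_i, u) - y0) / eps
        return F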

12 Ordinary Least Squares. Given: samples {(x^(1), y^(1)), (x^(2), y^(2)), …, (x^(m), y^(m))}. Problem: find a function of the form f(x) = a_0 + a_1 x that fits the samples as well as possible in the following sense: minimize over (a_0, a_1) the sum of squared errors Σ_i (y^(i) − (a_0 + a_1 x^(i)))^2.

13 Ordinary Least Squares. Recall our objective: minimize Σ_i (y^(i) − (a_0 + a_1 x^(i)))^2. Let's write this in vector notation: stacking the rows (1, x^(i)) into a matrix X̃, the targets y^(i) into a vector y, and letting a = (a_0, a_1), the objective becomes ||y − X̃ a||^2. Set the gradient equal to zero to find the extremum: X̃^T (X̃ a − y) = 0, giving a = (X̃^T X̃)^{-1} X̃^T y. (See the Matrix Cookbook for matrix identities, including derivatives.)
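
A small numeric sketch of that solution. The sample data below is made up for illustration (the slides' own example data is not reproduced here); the design matrix stacks rows (1, x^(i)) so that a = (X^T X)^{-1} X^T y.

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])         # illustrative samples, not from the slides
    y = np.array([4.9, 6.6, 8.8, 10.8])

    X = np.column_stack([np.ones_like(x), x])  # rows are (1, x^(i))
    a = np.linalg.solve(X.T @ X, X.T @ y)      # normal equations; np.linalg.lstsq is more robust
    print(a)                                   # a[0] is the intercept a_0, a[1] the slope a_1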

14 Ordinary Least Squares. For our example problem we obtain a = [4.75; 2.00]. [Figure: the fitted line a_0 + a_1 x plotted through the sample points.]

15 Ordinary Least Squares. More generally, for x ∈ R^n, fit f(x) = a_0 + a_1 x_1 + … + a_n x_n. In vector notation, stacking the rows (1, x^(i)T) into X̃, the objective is again ||y − X̃ a||^2. Set the gradient equal to zero to find the extremum (exact same derivation as two slides back): a = (X̃^T X̃)^{-1} X̃^T y.

16 Vector Valued Ordinary Least Squares Problems. So far we have considered approximating a scalar-valued function from samples {(x^(1), y^(1)), (x^(2), y^(2)), …, (x^(m), y^(m))} with f(x) = a_0 + a^T x. A vector-valued function is just many scalar-valued functions, and we can approximate it the same way by solving an OLS problem multiple times. Concretely, let f(x) = a_0 + A x, where a_0 is now a vector and A a matrix; then, in our vector notation, each output dimension satisfies y_j ≈ a_{0j} + (row j of A) x. This can be solved by solving a separate ordinary least squares problem to find each row of [a_0 A].

17 Vector Valued Ordinary Least Squares Problems. Solving the OLS problem for each row gives us the corresponding row of [a_0 A]. Each OLS problem has the same structure: the design matrix X̃ is shared and only the targets differ, so the factor (X̃^T X̃)^{-1} X̃^T can be computed once and reused for every output dimension.
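
A sketch of this in NumPy. Because every output dimension shares the same design matrix, a single lstsq call recovers all rows of [a_0 A] at once; the function name fit_affine is illustrative.

    import numpy as np

    def fit_affine(X_samples, Y_samples):
        """Fit y ~= a0 + A x by least squares; returns (a0, A).
        X_samples: (m, n) inputs, Y_samples: (m, k) outputs."""
        m = X_samples.shape[0]
        X_tilde = np.column_stack([np.ones(m), X_samples])    # shared design matrix (1, x^(i))
        coeffs, *_ = np.linalg.lstsq(X_tilde, Y_samples, rcond=None)
        a0, A = coeffs[0], coeffs[1:].T                        # row j of [a0 A] fits output j
        return a0, A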

18 Vector Valued Ordinary Least Squares and EKF Linearization. Approximate x_{t+1} = f_t(x_t, u_t) with an affine function a_0 + F_t x_t by running least squares on samples from the function: {(x_t^(1), y^(1) = f_t(x_t^(1), u_t)), (x_t^(2), y^(2) = f_t(x_t^(2), u_t)), …, (x_t^(m), y^(m) = f_t(x_t^(m), u_t))}. Similarly for the measurement function h_t(x_t).
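
A hedged sketch of that recipe: draw sample points around the current mean, evaluate the nonlinear dynamics at each, and fit an affine model by least squares. The unicycle-style f_t, the sampling spread, and all names below are illustrative assumptions, not taken from the slides.

    import numpy as np

    def f_t(x, u):
        """Placeholder nonlinear dynamics: a unicycle-like model, purely illustrative."""
        px, py, theta = x
        v, omega = u
        return np.array([px + v * np.cos(theta),
                         py + v * np.sin(theta),
                         theta + omega])

    mu, u = np.array([0.0, 0.0, 0.1]), np.array([1.0, 0.05])
    X = mu + 0.1 * np.random.randn(20, 3)                  # sample points around the mean
    Y = np.array([f_t(x, u) for x in X])                   # function values at the samples
    X_tilde = np.column_stack([np.ones(len(X)), X])        # shared design matrix (1, x_t)
    coeffs, *_ = np.linalg.lstsq(X_tilde, Y, rcond=None)
    a0, F_t = coeffs[0], coeffs[1:].T                      # affine fit: f_t(x, u) ~= a0 + F_t x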

19 OLS and EKF Linearization: Sample Point Selection. OLS vs. traditional (tangent) linearization. [Figure: the traditional (tangent) linearization matches f exactly at the mean, while the OLS fit spreads the error over all of the sample points.]

20 OLS Linearization: Choosing Sample Points. Perhaps the most natural choice: pick sample points as a reasonable way of trying to cover the region of reasonably high probability mass under the current state distribution.

21 Analytical vs. Numerical Linearization. Numerical linearization (based on least squares or finite differences) can give a more accurate "regional" approximation; the size of the region is determined by the evaluation points. Computational efficiency: analytical derivatives can be cheaper or more expensive than function evaluations. Development hint: numerical derivatives tend to be easier to implement; if deciding to use analytical derivatives, implementing finite-difference derivatives and comparing them with the analytical results can help debug the analytical derivatives.

22 EKF Algorithm. At time 0: initialize the belief. For t = 1, 2, …: Dynamics update: linearize the dynamics model around the current mean and apply the Kalman filter dynamics update. Measurement update: linearize the measurement model around the predicted mean and apply the Kalman filter measurement update.

23 EKF Algorithm. Extended_Kalman_filter(μ_{t-1}, Σ_{t-1}, u_t, z_t): Prediction: μ̄_t = g(u_t, μ_{t-1}); Σ̄_t = G_t Σ_{t-1} G_t^T + R_t. Correction: K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + Q_t)^{-1}; μ_t = μ̄_t + K_t (z_t − h(μ̄_t)); Σ_t = (I − K_t H_t) Σ̄_t. Return μ_t, Σ_t.
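
A compact sketch of that prediction/correction structure, assuming generic nonlinear models g(u, x) and h(x) with their Jacobians G and H supplied as callables (analytical or finite-difference); R and Q are the motion and measurement noise covariances, and the function name is illustrative.

    import numpy as np

    def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
        """One EKF iteration: prediction with linearized dynamics, then correction."""
        # Prediction
        mu_bar = g(u, mu)
        G_t = G(u, mu)
        Sigma_bar = G_t @ Sigma @ G_t.T + R
        # Correction
        H_t = H(mu_bar)
        S = H_t @ Sigma_bar @ H_t.T + Q
        K = Sigma_bar @ H_t.T @ np.linalg.inv(S)        # Kalman gain
        mu_new = mu_bar + K @ (z - h(mu_bar))
        Sigma_new = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
        return mu_new, Sigma_new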

24 Localization. Given: a map of the environment and a sequence of sensor measurements. Wanted: an estimate of the robot's position. Problem classes: position tracking; global localization; the kidnapped robot problem (recovery). "Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities." [Cox '91]

25 Landmark-based Localization

26 EKF_localization(μ_{t-1}, Σ_{t-1}, u_t, z_t, m): Prediction: Jacobian of g w.r.t. location; Jacobian of g w.r.t. control; motion noise; predicted mean; predicted covariance. [The slide shows the corresponding equations.]

27 EKF_localization(μ_{t-1}, Σ_{t-1}, u_t, z_t, m): Correction: Jacobian of h w.r.t. location; predicted measurement mean; predicted measurement covariance; Kalman gain; updated mean; updated covariance. [The slide shows the corresponding equations.]
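
The equations on this and the previous slide appear only as images in the original deck. As a hedged illustration, here is a standard range-bearing landmark measurement model of the kind used in landmark-based localization, with its Jacobian w.r.t. the robot pose (x, y, θ) and a known landmark m = (m_x, m_y); this is not a transcription of the slide.

    import numpy as np

    def h_range_bearing(pose, m):
        """Predicted range and bearing to landmark m from robot pose (x, y, theta)."""
        x, y, theta = pose
        dx, dy = m[0] - x, m[1] - y
        q = dx**2 + dy**2
        return np.array([np.sqrt(q), np.arctan2(dy, dx) - theta])

    def H_range_bearing(pose, m):
        """Jacobian of the range-bearing model w.r.t. the robot pose."""
        x, y, theta = pose
        dx, dy = m[0] - x, m[1] - y
        q = dx**2 + dy**2
        return np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                         [ dy / q,          -dx / q,          -1.0]])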

28 EKF Prediction Step

29 EKF Observation Prediction Step

30 EKF Correction Step

31 Estimation Sequence (1)

32 Estimation Sequence (2)

33 Comparison to Ground Truth

34 EKF Summary. Highly efficient: polynomial in measurement dimensionality k and state dimensionality n, O(k^2.376 + n^2). Not optimal! Can diverge if nonlinearities are large! Works surprisingly well even when all assumptions are violated!

35 Linearization via Unscented Transform. [Figure comparing EKF and UKF.]

36 UKF Sigma-Point Estimate (2). [Figure comparing EKF and UKF.]

37 UKF Sigma-Point Estimate (3). [Figure comparing EKF and UKF.]

38 UKF Sigma-Point Estimate (4)

39 UKF intuition why it can perform better [Julier and Uhlmann, 1997]. Assume we know the distribution over X and that it has mean \bar{x}, and let Y = f(X). The EKF approximates f by its first-order expansion and ignores higher-order terms. The UKF uses f exactly, but approximates p(x).

40 Self-quiz. When would the UKF significantly outperform the EKF? [Figure: an x-y plot of a nonlinear function.] Analytical derivatives, finite-difference derivatives, and least squares will all end up with a horizontal linearization, so they would predict zero variance in Y = f(X).

41 Beyond the scope of the course, included just for completeness: a crude preliminary investigation of whether we can get the EKF to match the UKF by a particular choice of points used in the least squares fitting.

42 Original unscented transform [Julier and Uhlmann, 1997]. Picks a minimal set of sample points that match the 1st, 2nd and 3rd moments of a Gaussian: \bar{x} = mean, P_{xx} = covariance; the i-th sigma point is built from the i-th column of L; x ∈ R^n; κ is an extra degree of freedom to fine-tune the higher order moments of the approximation; when x is Gaussian, n + κ = 3 is a suggested heuristic. L = \sqrt{P_{xx}} can be chosen to be any matrix satisfying L L^T = P_{xx}.

43 Unscented Transform. Sigma points and weights; pass the sigma points through the nonlinear function; recover the mean and covariance of the transformed distribution. [The slide shows the corresponding equations.]
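
A sketch of the transform under the original Julier and Uhlmann parameterization (κ as on the previous slide); f is any nonlinear function returning a vector, and the defaults are illustrative assumptions.

    import numpy as np

    def unscented_transform(mean, cov, f, kappa=None):
        """Propagate (mean, cov) through f using 2n+1 sigma points."""
        n = len(mean)
        kappa = 3.0 - n if kappa is None else kappa    # heuristic n + kappa = 3 (negative kappa possible for n > 3)
        L = np.linalg.cholesky((n + kappa) * cov)      # any L with L @ L.T = (n + kappa) * cov works
        # Sigma points: the mean, plus/minus each column of L
        pts = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
        weights = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
        # Pass sigma points through the nonlinear function
        Y = np.array([f(x) for x in pts])
        # Recover mean and covariance of the transformed distribution
        y_mean = weights @ Y
        y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(weights, Y))
        return y_mean, y_cov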

44 Unscented Kalman filter. Dynamics update: simply use the unscented transform and estimate the mean and covariance at the next time step from the transformed sigma points. Observation update: use sigma points from the unscented transform to compute the cross-covariance between x_t and z_t; then do the standard Kalman update.
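
A sketch of how those two updates can be assembled; the sigma-point construction repeats the pattern from the previous sketch, f and h are generic dynamics/measurement functions, R and Q are the noise covariances, and all names are illustrative assumptions.

    import numpy as np

    def sigma_points(mean, cov, kappa):
        n = len(mean)
        L = np.linalg.cholesky((n + kappa) * cov)
        pts = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
        w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
        return np.array(pts), w

    def ukf_step(mu, Sigma, u, z, f, h, R, Q, kappa=1.0):
        """One UKF iteration: unscented dynamics update, then measurement update."""
        # Dynamics update: push sigma points through f, re-estimate mean and covariance
        X, w = sigma_points(mu, Sigma, kappa)
        X_pred = np.array([f(x, u) for x in X])
        mu_bar = w @ X_pred
        Sigma_bar = sum(wi * np.outer(x - mu_bar, x - mu_bar) for wi, x in zip(w, X_pred)) + R
        # Observation update: sigma points of the predicted belief, mapped through h
        X2, w2 = sigma_points(mu_bar, Sigma_bar, kappa)
        Z = np.array([h(x) for x in X2])
        z_hat = w2 @ Z
        S = sum(wi * np.outer(zi - z_hat, zi - z_hat) for wi, zi in zip(w2, Z)) + Q
        Sigma_xz = sum(wi * np.outer(xi - mu_bar, zi - z_hat) for wi, xi, zi in zip(w2, X2, Z))
        K = Sigma_xz @ np.linalg.inv(S)                 # Kalman gain from the cross-covariance
        return mu_bar + K @ (z - z_hat), Sigma_bar - K @ S @ K.T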

45 [Table 3.4 in Probabilistic Robotics]

46 UKF Summary. Highly efficient: same complexity as the EKF, typically a constant factor slower in practical applications. Better linearization than the EKF: accurate in the first two terms of the Taylor expansion (EKF only the first term), and captures more aspects of the higher order terms. Derivative-free: no Jacobians needed. Still not optimal!

47 Unscented Transform. Sigma points and weights; pass the sigma points through the nonlinear function; recover the mean and covariance of the transformed distribution. [The slide shows the corresponding equations.]

48 UKF_localization(μ_{t-1}, Σ_{t-1}, u_t, z_t, m): Prediction: motion noise; measurement noise; augmented state mean; augmented covariance; sigma points; prediction of sigma points; predicted mean; predicted covariance. [The slide shows the corresponding equations.]

49 UKF_localization(μ_{t-1}, Σ_{t-1}, u_t, z_t, m): Correction: measurement sigma points; predicted measurement mean; predicted measurement covariance; cross-covariance; Kalman gain; updated mean; updated covariance. [The slide shows the corresponding equations.]

50 EKF_localization(μ_{t-1}, Σ_{t-1}, u_t, z_t, m): Correction: Jacobian of h w.r.t. location; predicted measurement mean; predicted measurement covariance; Kalman gain; updated mean; updated covariance. [The slide shows the corresponding equations.]

51 UKF Prediction Step

52 UKF Observation Prediction Step

53 UKF Correction Step

54 EKF Correction Step

55 Estimation Sequence. [Figure comparing EKF, UKF, and PF.]

56 Estimation Sequence. [Figure comparing EKF and UKF.]

57 Prediction Quality. [Figure comparing EKF and UKF.]

58 Kalman Filter-based System [Arras et al. 98]: laser range-finder and vision; high precision (<1 cm accuracy). [Courtesy of Kai Arras]

59 Multi-hypothesis Tracking

60 Localization With MHT. The belief is represented by multiple hypotheses; each hypothesis is tracked by a Kalman filter. Additional problems: data association (which observation corresponds to which hypothesis?) and hypothesis management (when to add / delete hypotheses?). There is a huge body of literature on target tracking, motion correspondence, etc.

61 MHT: Implemented System (1) [Jensfelt et al. '00]. Hypotheses are extracted from LRF scans. Each hypothesis has a probability of being the correct one; the hypothesis probability is computed using Bayes' rule. Hypotheses with low probability are deleted, and new candidates are extracted from LRF scans.

62 MHT: Implemented System (2). Courtesy of P. Jensfelt and S. Kristensen.

63 MHT: Implemented System (3). Example run: map and trajectory; number of hypotheses vs. time; P(H_best). Courtesy of P. Jensfelt and S. Kristensen.

