Marginalization and sliding window estimation


Leo Koppel, 13/06/2017

Introduction: Consider a multivariate Gaussian distribution over two blocks of variables, X and Y. From the joint we can form the marginal distribution of X (Y is marginalized out) and the marginal distribution of Y (X is marginalized out). "SLAM is tracking a normal distribution through a large state space" —Sibley, "A Sliding Window Filter for SLAM"
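For reference, the standard identity behind this slide (the slide's equations were images; this is the textbook form, in LaTeX):

\begin{pmatrix} X \\ Y \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix}, \begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{pmatrix} \right) \;\Longrightarrow\; X \sim \mathcal{N}(\mu_X, \Sigma_{XX}), \quad Y \sim \mathcal{N}(\mu_Y, \Sigma_{YY})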

Overview:
- Basics of Gaussian distributions
- Marginalization
- The SLAM problem
- Applications: sliding window filters
- Homework problems

Part 1: Basics of Gaussian distributions
- What does covariance mean?
- What does inverse covariance mean?

Gaussian basics: Consider a Gaussian distribution with zero mean, P(y) ∝ exp(−½ yᵀAy), where A is the inverse of the covariance K. (Since the mean is zero, no offset term appears.) Adapted from: Mackay, David. "The humble Gaussian distribution," 2006.

Gaussian basics: An example: y₂ is the outside temperature, y₁ is the temperature inside building 1, and y₃ is the temperature inside building 3. (Adapted from Mackay 2006.)

Gaussian basics: What is the covariance matrix? Eventually, the slide builds up the full covariance K for this example. (Adapted from Mackay 2006.)

Gaussian basics: What is the inverse covariance matrix? Recall P(y) ∝ exp(−½ yᵀAy). So write out P(y) for the example and solve for A. (Adapted from Mackay 2006.)

Gaussian basics: What is the inverse covariance matrix? (The resulting A for the temperature example is shown on the slide.) (Adapted from Mackay 2006.)

Gaussian basics: What does the inverse covariance tell us? The meaning of a 0 entry: "conditional on all the other variables, these two variables i and j are independent." Note that (K⁻¹)₁₃ = 0 does not mean y₁ and y₃ are uncorrelated. (Adapted from Mackay 2006.)

Gaussian basics: Quiz: which of the four matrices shown on the slide could be a covariance matrix, and which could be an inverse covariance? (From Mackay 2006.)

Gaussian basics: Answer: A and B can be covariance matrices; C and D can be inverse covariance matrices. Look at element (1,3): a zero there in a covariance matrix would mean y₁ and y₃ are uncorrelated (false in our example), while a zero in an inverse covariance means y₁ and y₃ are independent conditional on y₂ (true in our example). (From Mackay 2006.)
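A minimal numerical check of this point (a sketch; the matrix values are made up for illustration, not Mackay's):

import numpy as np

# Precision (inverse covariance) for (y1, y2, y3): the two buildings are
# coupled only through the outside temperature y2, so A[0, 2] = A[2, 0] = 0.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])

K = np.linalg.inv(A)   # covariance matrix
print(A[0, 2])         # 0: y1 and y3 are independent GIVEN y2
print(K[0, 2])         # nonzero (1/8): y1 and y3 are still correlated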

Part 2: Marginalization
- How do we remove a variable in covariance form?
- How do we remove a variable in information form?
- What is the Schur complement?

Marginalization, covariance form: Taking away a variable: drop its row and column from the covariance matrix (and its entry from the mean). Yes, this is it! Marginalization! (Marginarine…) (Adapted from Mackay 2006.)
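In code, that is just block extraction (a tiny sketch with an arbitrary example covariance):

import numpy as np

K = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 1.5],
              [1.0, 1.5, 2.0]])   # joint covariance of (y1, y2, y3)

K_marg = K[:2, :2]                # marginalize out y3: drop its row and column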

Marginalization, information form: What about the inverse covariance matrix? It is not so easy: simply dropping a row and column of A does not give the marginal, because the dropped variable's off-diagonal terms still couple the remaining variables. (Adapted from Mackay 2006.)

Schur complement: Consider a block matrix M = [A B; C D] where A and D are square and invertible. With elementary block operations we can turn it into a left- or right-triangular matrix. The Schur complement of D in M is Δ = A − BD⁻¹C. Combining the two triangular factors, we get a block diagonal matrix. (Source: Sibley, "A Sliding Window Filter for SLAM," 2006.)
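The factorization the slide's (lost) images presumably showed is the textbook identity, in LaTeX:

\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} I & BD^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} A - BD^{-1}C & 0 \\ 0 & D \end{pmatrix} \begin{pmatrix} I & 0 \\ D^{-1}C & I \end{pmatrix}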

Schur complement: Given that decomposition of the matrix, the inverse follows by inverting each factor. (Source: Sibley 2006.)
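Written out (again the standard identity, with Δ = A − BD⁻¹C):

\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} \Delta^{-1} & -\Delta^{-1} B D^{-1} \\ -D^{-1} C \Delta^{-1} & D^{-1} + D^{-1} C \Delta^{-1} B D^{-1} \end{pmatrix}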

Schur complement: Schur derivation #1: show that marginalizing a covariance is extracting a block. Split the state into two sections a and b, and write the joint covariance as Σ = [A C; Cᵀ B], where A = cov(a), B = cov(b), and C holds the cross terms. Then integrate out b, substituting the block expression for the inverse covariance. (Sources: Sibley 2006; Huang, "Conditional and marginal distributions of a multivariate Gaussian," 2010.)

Schur complement: (Derivation continued on slide.) See: Huang, Gary B. "Conditional and marginal distributions of a multivariate Gaussian." 2010.

Schur complement: This shows that marginalizing a multivariate Gaussian distribution is as simple as taking a block of the covariance. In the information form, it requires the Schur complement. For a formal proof see: Schön and Lindsten, "Manipulating the multivariate Gaussian density," 2011.

Schur complement: Schur derivation #2: what is marginalization in information form? We know the original covariance, the original information matrix, and the marginalized covariance (a block of the original); the marginalized information matrix is the "?".

Schur complement: From before, the information matrix is the block inverse of the covariance. Multiply it out: the marginalized information matrix is the Schur complement of the discarded block. That Schur was fun!

Marginalization, information form: To recap: the matrix Λ = [Λaa Λab; Λba Λbb] represents the probability distribution P(a, b) in information form. The marginal distribution P(a) is given in information form by the Schur complement Λaa − Λab Λbb⁻¹ Λba. For another derivation see: Thrun and Montemerlo, "The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures," The International Journal of Robotics Research 25.5–6 (2006): 403–429.
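A quick numerical sanity check of the recap (a sketch; the random covariance is arbitrary): marginalizing in information form via the Schur complement should match extracting a block in covariance form.

import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5))
cov = S @ S.T + 5 * np.eye(5)   # random SPD joint covariance of (a, b)
info = np.linalg.inv(cov)       # information (inverse covariance) matrix

na = 2                          # dimension of the kept block a
L_aa, L_ab = info[:na, :na], info[:na, na:]
L_ba, L_bb = info[na:, :na], info[na:, na:]

# Information form: Schur complement of the discarded block.
info_marg = L_aa - L_ab @ np.linalg.solve(L_bb, L_ba)

# Covariance form: just extract the (a, a) block.
cov_marg = cov[:na, :na]

assert np.allclose(np.linalg.inv(info_marg), cov_marg)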

Marginalization, information form: Let's apply that to the temperature example. (Adapted from Mackay 2006.)

Covariance and information duality: Marginalization is easy in the covariance form, but hard in the information form. Conditioning is easy in the information form, but hard in the covariance form. (Source: Walter, Eustice, and Leonard, "A provably consistent method for imposing sparsity in feature-based SLAM information filters," 2007.)

Part 3: The SLAM problem
- How is SLAM a least squares problem?
- Where does marginalization apply?

Back to batch: Consider a sequence of robot poses, with a motion model xₖ = f(xₖ₋₁, uₖ) + wₖ and a measurement model zₖ = h(xₖ) + vₖ, where wₖ and vₖ are random noise.

Back to batch: Estimate over the whole m poses. The PDF for the whole path is the product of the motion-model terms, ∏ₖ p(xₖ | xₖ₋₁, uₖ). (Source: Sibley, Matthies, and Sukhatme, "A sliding window filter for incremental SLAM," 2008.)

Back to batch: Adding the measurement model, the PDF for the measurements is ∏ₖ p(zₖ | xₖ). (Sibley et al. 2008.)

Back to batch: We also have a prior for the first pose, p(x₁). Putting measurements, motion, and prior together, the posterior probability of the system is p(x | z, u) ∝ p(x₁) ∏ₖ p(xₖ | xₖ₋₁, uₖ) ∏ₖ p(zₖ | xₖ). (Sibley et al. 2008.)

Back to batch: Take the negative log of the posterior probability: the Gaussian terms become a sum of squared, covariance-weighted residuals. This is a non-linear least squares problem. (Sibley et al. 2008.)
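In the standard form (reconstructed from the factorization above; P₁, Qₖ, Rₖ denote the prior, process, and measurement noise covariances):

-\log p(x \mid z, u) = \tfrac{1}{2}\lVert x_1 - \check{x}_1 \rVert_{P_1}^2 + \tfrac{1}{2}\sum_k \lVert x_k - f(x_{k-1}, u_k) \rVert_{Q_k}^2 + \tfrac{1}{2}\sum_k \lVert z_k - h(x_k) \rVert_{R_k}^2 + \text{const}

where \lVert e \rVert_\Sigma^2 := e^\top \Sigma^{-1} e.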

Demo: MAP vs EKF.

Extending to SLAM: Extend to an actual SLAM problem, with landmarks. A few changes: the state now contains landmark positions as well as poses, and measurements relate poses to landmarks. (Sibley et al. 2008.)

SLAM as a NLS problem: The Gauss-Newton solution is an iterative method: linearize the residual g(x) about the current estimate and solve the normal equations (GᵀC⁻¹G) δx = −GᵀC⁻¹ g(x), where G is the Jacobian of g(x) and C is the noise covariance; then update x ← x + δx. (Sibley et al. 2008.)
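A generic Gauss-Newton sketch for a stacked residual g(x) (the 1-D toy problem and its data are hypothetical, just to show the update):

import numpy as np

def gauss_newton(g, G, x0, C_inv, iters=10):
    # Minimize 0.5 * g(x)^T C_inv g(x).
    # g: residual function, G: its Jacobian, C_inv: inverse noise covariance.
    x = x0.copy()
    for _ in range(iters):
        r = g(x)
        J = G(x)
        # Normal equations: (J^T C_inv J) dx = -J^T C_inv r
        dx = np.linalg.solve(J.T @ C_inv @ J, -J.T @ C_inv @ r)
        x = x + dx
    return x

# Toy usage: fit x so that x^2 matches noisy observations y_i.
y = np.array([4.1, 3.9, 4.0])
g = lambda x: x[0]**2 - y
G = lambda x: np.full((3, 1), 2 * x[0])
x_hat = gauss_newton(g, G, np.array([1.0]), np.eye(3))
print(x_hat)  # converges to about 2.0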

SLAM as a NLS problem: (figure) (Sibley et al. 2008.)

SLAM marginalization: The same as our simple recap example above, except marginalizing out the start of the state (the oldest pose) instead of the end. (Source: Sibley, Matthies, and Sukhatme, "Sliding window filter with application to planetary landing," Journal of Field Robotics 27.5 (2010): 587–608.)

Part 4: Applications
- Sliding Window Filter
- Multi-State Constraint Kalman Filter
- Sliding Window Factor Graphs

Sliding Window Filter: The Sliding Window Filter algorithm keeps a state vector of k poses. On each step (a code sketch of the marginalization step follows this list):
1. Add new pose parameters: apply the process model and use Gauss-Newton to update the information matrix.
2. Remove old parameters: if there are more than k poses, marginalize out the oldest pose using the Schur complement; also marginalize out any landmarks no longer visible.
3. Add new landmark parameters.
4. Update parameters: solve the least squares problem with all measurements (Gauss-Newton plus some outlier rejection method).
(Sibley et al. 2010.)
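A schematic sketch of step 2 in information form (my illustration under stated assumptions, not Sibley's code; it assumes the oldest pose occupies the first n_old entries of the state):

import numpy as np

def marginalize_oldest(info, eta, n_old):
    # info: information matrix Lambda; eta: information vector; the first
    # n_old entries correspond to the pose being marginalized out.
    L_oo = info[:n_old, :n_old]
    L_or = info[:n_old, n_old:]
    L_ro = info[n_old:, :n_old]
    L_rr = info[n_old:, n_old:]
    # Schur complement for the matrix, with the matching vector update.
    sol = np.linalg.solve(L_oo, np.hstack([L_or, eta[:n_old, None]]))
    info_new = L_rr - L_ro @ sol[:, :-1]
    eta_new = eta[n_old:] - L_ro @ sol[:, -1]
    return info_new, eta_new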

Graphical example: (figure) (Sibley et al. 2010.)

Graphical example with extra graphs: Initial system → marginalize out first pose → marginalize out first landmark. (Sibley et al. 2010.)

Effect of window size: (figure) (Sibley et al. 2010.)

SWF results: The SWF's purpose is O(1) complexity. [Plots: state vector size and time per frame (s) versus frame number, comparing full SLAM (no marginalization) against the SWF (with marginalization).] (Sibley et al. 2010.)

SWF results: Sibley argues that optimization is "greatly superior to filtering." (Sibley et al. 2010.)

MSCKF: Multi-State Constraint Kalman Filter (MSCKF):
- Filtering method (EKF)
- Goal is pose estimation only (not mapping)
- Keeps a window of camera poses
- Covariance form
(Mourikis and Roumeliotis, "A Multi-State Constraint Kalman Filter for Vision-Aided Inertial Navigation," ICRA 2007.)

MSCKF: Comparison of the sliding window filter / SLAM approach with the MSCKF (table on slide).

MSCKF: (figure)

SWFG: Sliding-Window Factor Graphs (SWFG):
- Extension of iSAM2
- Keeps a map of 3D landmarks
- A short-term smoother handles constraints from landmark measurements
- A long-term smoother handles loop closure
- Old states are marginalized out of the graph
(Chiu et al., "Robust Vision-Aided Navigation Using Sliding-Window Factor Graphs," 2013.)

SWFG: Interesting three-stage handling of landmarks:
1. Feature initialized as a binary factor with the 3D information marginalized out
2. Landmark stored as a variable; re-projection factors added to all poses from which it is observed
3. Landmark marginalized out into binary factors
(Chiu et al. 2013.)

Part 5: Homework problems

Homework problem #1 (easy): Answer questions 3–5 from "The humble Gaussian distribution" (Mackay 2006).

Homework problem #2 (hard): Consider Example 2 from "The humble Gaussian distribution" (Mackay 2006): a) find the covariance and inverse covariance matrices; b) marginalize out y₁ in each form.

References
- Sibley, Gabe, Larry Matthies, and Gaurav Sukhatme. "A sliding window filter for incremental SLAM." Unifying Perspectives in Computational and Robot Vision. Springer US, 2008. 103–112.
- Sibley, Gabe, Larry Matthies, and Gaurav Sukhatme. "Sliding window filter with application to planetary landing." Journal of Field Robotics 27.5 (2010): 587–608.
- Schön, Thomas B., and Fredrik Lindsten. "Manipulating the multivariate Gaussian density." Division of Automatic Control, Linköping University, Sweden, Tech. Rep. (2011).
- Mackay, David. "The humble Gaussian distribution." 2006.
- Huang, Gary B. "Conditional and marginal distributions of a multivariate Gaussian." 2010.
- Walter, Matthew, Ryan Eustice, and John Leonard. "A provably consistent method for imposing sparsity in feature-based SLAM information filters." Robotics Research. Springer Berlin Heidelberg, 2007. 214–234.
- Thrun, Sebastian, and Michael Montemerlo. "The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures." The International Journal of Robotics Research 25.5–6 (2006): 403–429.
- Mourikis, Anastasios I., and Stergios I. Roumeliotis. "A Multi-State Constraint Kalman Filter for Vision-Aided Inertial Navigation." Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), April 2007.
- Chiu, Han-Pang, et al. "Robust Vision-Aided Navigation Using Sliding-Window Factor Graphs." 2013.