Particle Filters++
Pieter Abbeel, UC Berkeley EECS
Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics

Sequential Importance Resampling (SIR) Particle Filter
Algorithm particle_filter(S_{t-1}, u_t, z_t):
  S_t = empty set, eta = 0
  For i = 1 … n:  (generate new samples)
    Sample index j(i) from the discrete distribution given by w_{t-1}
    Sample x_t^{(i)} from p(x_t | x_{t-1}^{j(i)}, u_t)
    Compute importance weight w_t^{(i)} = p(z_t | x_t^{(i)})
    Update normalization factor eta = eta + w_t^{(i)}
    Insert (x_t^{(i)}, w_t^{(i)}) into S_t
  For i = 1 … n:  (normalize weights)
    w_t^{(i)} = w_t^{(i)} / eta
  Return S_t
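As a concrete reference, here is a minimal NumPy sketch of one SIR update following the pseudocode above; `sample_motion_model` and `measurement_likelihood` are hypothetical callables standing in for p(x_t | x_{t-1}, u_t) and p(z_t | x_t), not part of the original slides.

```python
import numpy as np

def sir_update(S_prev, w_prev, u_t, z_t, sample_motion_model, measurement_likelihood):
    """One SIR step. S_prev: (N, d) array of particles, w_prev: (N,) normalized weights."""
    N = len(S_prev)
    # Sample indices j(i) from the discrete distribution given by w_{t-1} (resampling)
    idx = np.random.choice(N, size=N, p=w_prev)
    # Propagate: x_t^(i) ~ p(x_t | x_{t-1}^{j(i)}, u_t)
    S_t = np.array([sample_motion_model(S_prev[j], u_t) for j in idx])
    # Importance weights: w_t^(i) = p(z_t | x_t^(i))
    w_t = np.array([measurement_likelihood(z_t, x) for x in S_t])
    # Normalize (the sum plays the role of eta)
    w_t = w_t / np.sum(w_t)
    return S_t, w_t
```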

Outline
- Improved Sampling
  - Issue with the vanilla particle filter when noise is dominated by the motion model
  - Importance Sampling
  - Optimal Proposal
  - Examples
- Resampling
- Particle Deprivation
- Noise-free Sensors
- Adapting Number of Particles: KLD Sampling

Noise Dominated by Motion Model [Grisetti, Stachniss, Burgard, T-RO 2006]
When the measurement likelihood is much more peaked than the motion model, most particles get (near) zero weights and are lost.

Importance Sampling
Theoretical justification: for any function f we have
  E_p[f(x)] = \int f(x) p(x) dx = \int f(x) (p(x)/\pi(x)) \pi(x) dx = E_\pi[f(x) p(x)/\pi(x)]
f could be: whether a grid cell is occupied or not, whether the position of a robot is within 5 cm of some (x, y), etc.

Importance Sampling
- Task: sample from density p(.)
- Solution: sample from a "proposal density" \pi(.)
- Weight each sample x^{(i)} by p(x^{(i)}) / \pi(x^{(i)})
- Requirement: if \pi(x) = 0 then p(x) = 0, i.e., the proposal must cover the support of p.
[Figure: example target density p and proposal density \pi]
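A small sketch of this weighting rule: estimate an expectation under a target p by sampling from a proposal \pi and weighting by p/\pi. The choice of a standard normal target and a wider normal proposal is purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def importance_estimate(f, n=100_000):
    x = np.random.normal(0.0, 2.0, size=n)             # samples from proposal pi = N(0, 2^2)
    w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 0.0, 2.0)  # weights p(x)/pi(x), target p = N(0, 1)
    return np.sum(w * f(x)) / np.sum(w)                # self-normalized importance estimate

# Example: estimate P_p(x > 1) under the target (true value is about 0.159)
print(importance_estimate(lambda x: (x > 1).astype(float)))
```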

Particle Filters Revisited (with a general proposal)
Algorithm particle_filter(S_{t-1}, u_t, z_t):
  S_t = empty set, eta = 0
  For i = 1 … n:  (generate new samples)
    Sample index j(i) from the discrete distribution given by w_{t-1}
    Sample x_t^{(i)} from the proposal \pi(x_t | x_{t-1}^{j(i)}, u_t, z_t)
    Compute importance weight w_t^{(i)} = p(z_t | x_t^{(i)}) p(x_t^{(i)} | x_{t-1}^{j(i)}, u_t) / \pi(x_t^{(i)} | x_{t-1}^{j(i)}, u_t, z_t)
    Update normalization factor eta = eta + w_t^{(i)}
    Insert (x_t^{(i)}, w_t^{(i)}) into S_t
  For i = 1 … n:  (normalize weights)
    w_t^{(i)} = w_t^{(i)} / eta
  Return S_t

Optimal Sequential Proposal \pi(.)
Optimal choice: \pi(x_t | x_{t-1}, u_t, z_t) = p(x_t | x_{t-1}, u_t, z_t), giving the importance weight
  w_t = p(z_t | x_t) p(x_t | x_{t-1}, u_t) / p(x_t | x_{t-1}, u_t, z_t).
Applying Bayes' rule to the denominator gives:
  p(x_t | x_{t-1}, u_t, z_t) = p(z_t | x_t) p(x_t | x_{t-1}, u_t) / p(z_t | x_{t-1}, u_t)
Substitution and simplification gives:
  w_t = p(z_t | x_{t-1}, u_t) = \int p(z_t | x_t) p(x_t | x_{t-1}, u_t) dx_t

Optimal Proposal \pi(.)
Optimal: \pi(x_t | x_{t-1}, u_t, z_t) = p(x_t | x_{t-1}, u_t, z_t)
Challenges:
- Typically difficult to sample from p(x_t | x_{t-1}, u_t, z_t)
- Importance weight: typically expensive to compute the integral \int p(z_t | x_t) p(x_t | x_{t-1}, u_t) dx_t

Example 1: \pi(.) = Optimal Proposal, Nonlinear Gaussian State Space Model
Model:
  x_t = f(x_{t-1}) + v_t,  v_t ~ N(0, Q)
  z_t = C x_t + w_t,  w_t ~ N(0, R)
Then the optimal proposal is Gaussian:
  p(x_t | x_{t-1}, z_t) = N(x_t; m_t, Sigma)
with
  Sigma = (Q^{-1} + C^T R^{-1} C)^{-1},  m_t = Sigma (Q^{-1} f(x_{t-1}) + C^T R^{-1} z_t)
And the importance weight is
  w_t = p(z_t | x_{t-1}) = N(z_t; C f(x_{t-1}), R + C Q C^T)
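A sketch of this closed-form case in NumPy, assuming the same model structure (nonlinear motion f with Gaussian noise Q, linear observation matrix C with Gaussian noise R); the concrete f, Q, C, R are left to the user.

```python
import numpy as np

def optimal_proposal_moments(x_prev, z_t, f, Q, C, R):
    """Mean and covariance of p(x_t | x_{t-1}, z_t) for
    x_t = f(x_{t-1}) + v, v ~ N(0, Q) and z_t = C x_t + w, w ~ N(0, R)."""
    Q_inv, R_inv = np.linalg.inv(Q), np.linalg.inv(R)
    Sigma = np.linalg.inv(Q_inv + C.T @ R_inv @ C)
    m = Sigma @ (Q_inv @ f(x_prev) + C.T @ R_inv @ z_t)
    return m, Sigma

def importance_weight(x_prev, z_t, f, Q, C, R):
    """w_t = p(z_t | x_{t-1}) = N(z_t; C f(x_{t-1}), R + C Q C^T)."""
    mean, cov = C @ f(x_prev), R + C @ Q @ C.T
    d = z_t - mean
    k = len(z_t)
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt((2 * np.pi) ** k * np.linalg.det(cov))
```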

Example 2: \pi(.) = Motion Model, i.e., \pi(x_t | x_{t-1}, u_t, z_t) = p(x_t | x_{t-1}, u_t). Substituting into the importance weight gives w_t = p(z_t | x_t): the "standard" particle filter.

Example 3: Approximating the Optimal \pi for Localization [Grisetti, Stachniss, Burgard, T-RO 2006]
One (not so desirable) solution: use a smoothed likelihood so that more particles retain a meaningful weight, BUT information is lost.
Better: integrate the latest observation z into the proposal \pi.

Example 3: Approximating the Optimal \pi for Localization: Generating One Weighted Sample
1. Initial guess: the previous pose composed with the odometry reading, x_t^* .
2. Execute scan matching starting from the initial guess x_t^*, resulting in pose estimate \hat{x}_t.
3. Sample K points {x_1, …, x_K} in a region around \hat{x}_t.
4. Proposal distribution is Gaussian with mean and covariance
   mu = (1/eta) \sum_k x_k p(z_t | x_k, m) p(x_k | x_{t-1}, u_t)
   Sigma = (1/eta) \sum_k (x_k - mu)(x_k - mu)^T p(z_t | x_k, m) p(x_k | x_{t-1}, u_t)
   where eta = \sum_k p(z_t | x_k, m) p(x_k | x_{t-1}, u_t).
5. Sample from this (approximately optimal) proposal distribution.
6. Weight = eta.
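A hedged sketch of steps 3 to 6 above: fit a Gaussian to K poses sampled around the scan-matched estimate, weighted by the observation and motion likelihoods. `obs_lik` and `motion_lik` are hypothetical placeholders for p(z_t | x, m) and p(x | x_{t-1}, u_t), and the sampling radius is illustrative.

```python
import numpy as np

def approx_optimal_sample(x_hat, obs_lik, motion_lik, K=50, radius=0.1):
    """Sample one pose and its weight from a Gaussian fitted around the
    scan-matched pose estimate x_hat (a length-d array)."""
    xs = x_hat + np.random.uniform(-radius, radius, size=(K, len(x_hat)))
    p = np.array([obs_lik(x) * motion_lik(x) for x in xs])    # p(z|x,m) p(x|x_prev,u)
    eta = p.sum()
    mu = (p[:, None] * xs).sum(axis=0) / eta
    d = xs - mu
    Sigma = (p[:, None, None] * np.einsum('ki,kj->kij', d, d)).sum(axis=0) / eta
    sample = np.random.multivariate_normal(mu, Sigma)
    return sample, eta                                        # weight = eta
```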

Scan Matching
Compute the pose that maximizes the likelihood of the current observation given the map, \hat{x}_t = argmax_x p(z_t | x, m), starting from the initial guess, e.g., using gradient descent.

Example 3: Example Particle Distributions [Grisetti, Stachniss, Burgard, T-RO 2006]
[Figure, panels (a)-(c): particles generated from the approximately optimal proposal distribution in three cases.] If the standard motion model had been used as the proposal, in all three cases the particle set would have been similar to (c).

Resampling
- Consider running a particle filter for a system with deterministic dynamics and no sensors.
- Problem: although no information is obtained that favors one particle over another, resampling makes some particles disappear, and after running sufficiently long, with very high probability all particles will have become identical. On the surface it might look like the particle filter has uniquely determined the state.
- Resampling induces loss of diversity: the variance of the particles decreases, while the variance of the particle set as an estimator of the true belief increases.

Resampling Solution I
Effective sample size (with normalized weights w^{(i)}):
  n_eff = 1 / \sum_i (w^{(i)})^2
Examples:
- All weights = 1/N ⇒ effective sample size = N
- All weights = 0, except for one weight = 1 ⇒ effective sample size = 1
Idea: resample only when the effective sample size is low.
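A tiny sketch of this criterion; the `0.5 * N` resampling threshold is an illustrative common choice, not something the slide specifies.

```python
import numpy as np

def effective_sample_size(w):
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                        # normalize weights
    return 1.0 / np.sum(w ** 2)

print(effective_sample_size(np.ones(100)))          # 100.0 (all weights equal)
print(effective_sample_size([1.0] + [0.0] * 99))    # 1.0 (one dominant weight)

def should_resample(w, ratio=0.5):
    # Resample only when n_eff drops below a fraction of N (illustrative threshold)
    return effective_sample_size(w) < ratio * len(w)
```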

Resampling Solution I (ctd)

Resampling Solution II: Low Variance Sampling
- Draw a single random number r in [0, 1/M], where M = number of particles.
- Select particles deterministically at the M points r, r + 1/M, …, r + (M-1)/M along the cumulative distribution of the weights.
Advantages:
- More systematic coverage of the space of samples
- If all samples have the same importance weight, no samples are lost
- Lower computational complexity
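A minimal sketch of low-variance (systematic) resampling as described above, drawing one random number r and stepping through the cumulative weights.

```python
import numpy as np

def low_variance_resample(particles, weights):
    """particles: (M, d) array, weights: (M,) normalized. Returns resampled particles."""
    M = len(particles)
    r = np.random.uniform(0.0, 1.0 / M)   # single random offset r in [0, 1/M]
    c = np.cumsum(weights)                # cumulative weight distribution
    out, i = [], 0
    for m in range(M):
        U = r + m / M                     # equally spaced pointers
        while U > c[i]:
            i += 1
        out.append(particles[i])
    return np.array(out)
```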

Resampling Solution III
- Loss of diversity is caused by resampling from a discrete distribution.
- Solution: "regularization"
  - Consider the particles to represent a continuous density
  - Sample from that continuous density
  - E.g., given (1-D) particles x^{(1)}, …, x^{(M)}, sample from the kernel density estimate
    p(x) ≈ \sum_m w^{(m)} N(x; x^{(m)}, sigma^2)
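A minimal 1-D sketch of this "regularized" resampling: draw from a Gaussian kernel density centered at the particles instead of from the discrete set; the kernel bandwidth is an illustrative parameter.

```python
import numpy as np

def regularized_resample_1d(particles, weights, bandwidth=0.05):
    """Resample from p(x) ~ sum_m w_m N(x; x_m, bandwidth^2) to preserve diversity."""
    M = len(particles)
    idx = np.random.choice(M, size=M, p=weights)     # pick kernel centers by weight
    return np.asarray(particles)[idx] + np.random.normal(0.0, bandwidth, size=M)
```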

Particle Deprivation
- Particle deprivation = there are no particles in the vicinity of the correct state.
- It occurs as a result of the variance in random sampling: an unlucky series of random numbers can wipe out all particles near the true state. This has non-zero probability at each time step, so it will happen eventually.
- Popular solution: add a small number of randomly generated particles when resampling.
  - Advantages: reduces particle deprivation; simplicity.
  - Con: incorrect posterior estimate, even in the limit of infinitely many particles.
  - Other benefit: the initialization at time 0 might not have placed anything near the true state, nor even near a state that could have evolved over time to be close to the true state now; the particles displaced by the random samples were not very consistent with past evidence anyway, so injecting random samples gives a new chance at getting close to the true state.

Particle Deprivation: How Many Particles to Add?
- Simplest: a fixed number.
- Better: monitor the probability of the sensor measurements, p(z_t | z_{1:t-1}, u_{1:t}), which can be approximated by the average of the (unnormalized) importance weights:
    p(z_t | z_{1:t-1}, u_{1:t}) ≈ (1/M) \sum_i w_t^{(i)}
- Average this estimate over multiple time steps and compare to typical values obtained when the state estimates are reasonable. If it is low, inject random particles.
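A hedged sketch of this heuristic: track the average unnormalized weight (an approximation of the measurement likelihood), smooth it over a few steps, and inject random particles when it drops below a typical value. `sample_random_state`, `threshold`, and the injected fraction are placeholders the user supplies.

```python
import numpy as np

def inject_if_needed(particles, raw_weights, history, sample_random_state,
                     threshold, window=10, frac=0.05):
    """particles: (M, d) array; raw_weights: unnormalized importance weights."""
    history.append(np.mean(raw_weights))          # approx. p(z_t | z_{1:t-1}, u_{1:t})
    if np.mean(history[-window:]) < threshold:    # low average likelihood over recent steps
        n = int(frac * len(particles))
        idx = np.random.choice(len(particles), size=n, replace=False)
        for i in idx:
            particles[i] = sample_random_state()  # replace with random particles
    return particles
```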

Noise-free Sensors
- Consider a measurement obtained with a noise-free sensor, e.g., a noise-free laser range finder. What is the issue?
- All particles would end up with weight zero, since it is very unlikely to have a particle that matches the measurement exactly.
- Solutions:
  - Artificially inflate the amount of noise in the sensor model
  - Use a better proposal distribution (see the first section of this set of slides)

Adapting Number of Particles: KLD-Sampling
- E.g., typically more particles are needed at the beginning of a localization run.
- Idea:
  - Partition the state space into bins.
  - While sampling, keep track of the number of bins that are occupied.
  - Stop sampling when the number of particles reaches a threshold that depends on the number of occupied bins.
  - If all samples fall into a small number of bins ⇒ lower threshold.

With k occupied bins, sample until the number of particles reaches
  n = (k-1)/(2\epsilon) * [ 1 - 2/(9(k-1)) + sqrt(2/(9(k-1))) * z_{1-\delta} ]^3,
which guarantees that, with probability 1-\delta, the KL divergence between the particle-based estimate and the true (binned) posterior is at most \epsilon.
z_{1-\delta}: the upper 1-\delta quantile of the standard normal distribution.
\delta = 0.01 and \epsilon = 0.05 work well in practice.
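A small sketch of this bound, assuming the formula as reconstructed above; `scipy.stats.norm.ppf` supplies the z_{1-\delta} quantile.

```python
import numpy as np
from scipy.stats import norm

def kld_bound(k, epsilon=0.05, delta=0.01):
    """Particles required for k occupied bins so that, with probability 1 - delta,
    the KL divergence of the binned estimate from the truth is at most epsilon."""
    if k <= 1:
        return 1
    z = norm.ppf(1.0 - delta)              # upper 1 - delta quantile of N(0, 1)
    a = 2.0 / (9.0 * (k - 1))
    return int(np.ceil((k - 1) / (2.0 * epsilon) * (1.0 - a + np.sqrt(a) * z) ** 3))

print(kld_bound(k=50))                     # bound grows with the number of occupied bins
```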

KLD-sampling

KLD-sampling