Particle Filter/Monte Carlo Localization


Particle Filter/Monte Carlo Localization. Advanced Mobile Robotics: Probabilistic Robotics. Dr. Jizhong Xiao, Department of Electrical Engineering, CUNY City College, jxiao@ccny.cuny.edu

Probabilistic Robotics Bayes Filter Implementations Particle filters Monte Carlo Localization

Sample-based Localization (sonar)

Particle Filter Definition: A particle filter is a Bayes-based filter that samples the robot's work space, with each sample weighted by a function derived from the belief distribution of the previous stage. Basic principle: a set of state hypotheses ("particles") maintained by survival of the fittest.

Why Particle Filters Represent the belief by random samples, allowing estimation of non-Gaussian, nonlinear processes. Particle filtering is a non-parametric inference algorithm: it is suited to tracking nonlinear dynamics and can efficiently represent non-Gaussian distributions.

Function Approximation Particle sets can be used to approximate functions: the more particles fall into an interval, the higher the probability of that interval.
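
As a small sketch of this idea, the snippet below approximates interval probabilities of a standard Gaussian by the fraction of samples falling inside each interval; the interval bounds and sample count are illustrative choices, not from the slides.

```python
import random

random.seed(0)

# Draw samples from a standard Gaussian and approximate interval
# probabilities by the fraction of samples that fall inside.
samples = [random.gauss(0.0, 1.0) for _ in range(20000)]

def interval_prob(samples, lo, hi):
    """Particle-based estimate of the probability mass of [lo, hi)."""
    return sum(lo <= s < hi for s in samples) / len(samples)

p_center = interval_prob(samples, -1.0, 1.0)  # true mass is about 0.683
p_tail = interval_prob(samples, 1.0, 3.0)     # true mass is about 0.157
```

More particles fall near the mode, so the central interval receives a proportionally higher estimated probability.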

Particle Filter Projection Samples drawn from a Gaussian random variable are passed through a nonlinear function; the resulting samples are distributed according to the transformed random variable.
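
A small illustration of this projection, assuming the nonlinear function g(x) = x^2 (an illustrative choice, not from the slides):

```python
import random

random.seed(1)

# Samples from a Gaussian random variable ...
xs = [random.gauss(0.0, 1.0) for _ in range(50000)]
# ... passed through a nonlinear function g(x) = x^2.
ys = [x * x for x in xs]

# The outputs are samples of the transformed random variable: for a
# standard Gaussian, x^2 is chi-squared with one degree of freedom,
# so the sample mean should be close to E[x^2] = 1.
mean_y = sum(ys) / len(ys)
```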

Rejection Sampling Assume f(x) < 1 for all x. Sample x from a uniform distribution and sample c from [0,1]. If f(x) > c, keep the sample; otherwise, reject it.
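
The three steps above can be sketched directly; the particular target f below (which satisfies f(x) <= 1 on [0, 1]) is an illustrative choice:

```python
import random

random.seed(2)

def f(x):
    """Target density (unnormalized) on [0, 1], bounded by 1."""
    return 4.0 * x * (1.0 - x)

# Rejection sampling as on the slide: draw x uniformly, draw a
# threshold c from [0, 1], and keep x only if f(x) > c.
accepted = []
while len(accepted) < 20000:
    x = random.uniform(0.0, 1.0)
    c = random.uniform(0.0, 1.0)
    if f(x) > c:
        accepted.append(x)

# The accepted samples follow the normalized version of f, here the
# Beta(2, 2) density, whose mean is 0.5.
mean_accepted = sum(accepted) / len(accepted)
```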

Importance Sampling Principle We can even use a different distribution g to generate samples from f. By introducing an importance weight w, we can account for the difference between g and f: w = f / g. f is often called the target, g the proposal.

Importance Sampling Samples often cannot be drawn conveniently from the target distribution f. Instead, the importance sampler draws samples from a proposal distribution g, which has a simpler form. A sample of f is obtained by attaching the importance factor f/g as a weight to each sample x.
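
A minimal sketch of the importance sampler, with illustrative Gaussian choices for the target f and proposal g:

```python
import math
import random

random.seed(3)

def f(x):
    """Target density: Gaussian N(1, 0.5^2), treated here as hard to sample."""
    return math.exp(-((x - 1.0) ** 2) / (2 * 0.25)) / math.sqrt(2 * math.pi * 0.25)

def g(x):
    """Proposal density: Gaussian N(0, 2^2), easy to sample from."""
    return math.exp(-(x ** 2) / (2 * 4.0)) / math.sqrt(2 * math.pi * 4.0)

xs = [random.gauss(0.0, 2.0) for _ in range(50000)]  # samples from g
ws = [f(x) / g(x) for x in xs]                       # importance weights w = f/g

# Self-normalized importance-sampling estimate of E_f[x]; true value is 1.0.
est_mean = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
```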

Particle Filter Basics Sample: randomly select M particles based on their weights (the same particle may be picked multiple times). Predict: move the particles according to the dynamics model. Measure: obtain a likelihood for each new sample by making a prediction about the expected measurement (e.g., the image's local appearance) and comparing it with the actual one; then update the particle's weight accordingly.

Particle Filter Basics Particle filters represent a distribution by a set of samples drawn from the posterior distribution. The denser a sub-region of the state space is populated by samples, the more likely it is that the true state falls into this region. Such a representation is approximate, but it is nonparametric and can therefore represent a much broader space of distributions than Gaussians. The weights of the particles are given through the measurement model. Re-sampling allows the filter to redistribute particles approximately according to the posterior. The re-sampling step is a probabilistic implementation of the Darwinian idea of survival of the fittest: it refocuses the particle set on regions in state space with high posterior probability, thereby concentrating the computational resources of the filter where they matter most.

Importance Sampling with Resampling: Landmark Detection Example

Distributions

Distributions Wanted: samples distributed according to p(x| z1, z2, z3)

This is Easy! We can draw samples from p(x|zl) by adding noise to the detection parameters.

Importance Sampling with Resampling

Importance Sampling with Resampling Weighted samples After resampling

Particle Filter Algorithm Prediction (action): draw x_{t-1}^i from Bel(x_{t-1}), then draw x_t^i from p(x_t | x_{t-1}^i, u_{t-1}). Correction (measurement): the importance factor for x_t^i incorporates the measurement, with w_t^i proportional to p(z_t | x_t^i).

Particle Filter Algorithm Each particle is a hypothesis as to what the true world state may be at time t. The steps: sampling from the state transition distribution; computing the importance factor, which incorporates the measurement into the particle set; and re-sampling by importance sampling.

Particle Filter Algorithm Line 4: the hypothetical state, sampled from the state transition distribution; the set of particles obtained after M iterations is the filter's representation of the posterior. Line 5: the importance factor, which incorporates the measurement into the particle set; the set of weighted particles represents the Bayes filter posterior. Lines 8 to 11: importance sampling; the particle distribution changes by incorporating the importance weights into the re-sampling process. Survival of the fittest: after the resampling step, the particle set is refocused on the regions of state space with higher posterior probability.

Particle Filter Algorithm (from Fox's slides) Algorithm particle_filter( S_{t-1}, u_{t-1}, z_t ): for each of the new samples, sample an index j(i) from the discrete distribution given by the weights w_{t-1}; sample x_t^i from the motion model using x_{t-1}^{j(i)} and u_{t-1}; compute the importance weight w_t^i from the measurement z_t; update the normalization factor; and insert the weighted sample into S_t. Finally, normalize the weights.
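
The algorithm above can be sketched in Python. The 1-D motion and measurement models below, and all function names and numbers, are illustrative stand-ins, not part of the slides; the cycle here predicts all particles first and resamples at the end, which is an equivalent ordering of the same steps.

```python
import math
import random

random.seed(4)

def particle_filter(particles, u, z, sample_motion, meas_likelihood):
    """One predict-weight-resample cycle of the basic particle filter."""
    # Prediction: propagate every particle through p(x_t | x_{t-1}, u)
    predicted = [sample_motion(x, u) for x in particles]
    # Correction: importance weight w_i = p(z | x_i), then normalize
    weights = [meas_likelihood(z, x) for x in predicted]
    eta = sum(weights)
    weights = [w / eta for w in weights]
    # Resampling: draw with replacement, probability proportional to weight
    return random.choices(predicted, weights=weights, k=len(predicted))

# Toy 1-D localization: the pose is a scalar position, the control is a
# commanded displacement with additive Gaussian noise.
sample_motion = lambda x, u: x + u + random.gauss(0.0, 0.1)
meas_likelihood = lambda z, x: math.exp(-((z - x) ** 2) / (2 * 0.2 ** 2))

particles = [random.uniform(0.0, 10.0) for _ in range(1000)]
true_pos = 2.0
for _ in range(10):
    true_pos += 0.5
    z = true_pos + random.gauss(0.0, 0.2)  # simulated position reading
    particles = particle_filter(particles, 0.5, z, sample_motion, meas_likelihood)

estimate = sum(particles) / len(particles)  # tracks true_pos, about 7.0
```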

Resampling Given: a set S of weighted samples. Wanted: a random sample, where the probability of drawing xi is given by wi. Typically done n times with replacement to generate the new sample set S'.

Resampling Algorithm Algorithm systematic_resampling( S, n ): generate the cdf of the sample weights and initialize the threshold with a single random draw. For each of the n samples to draw: skip along the cdf until the next threshold is reached, insert the corresponding particle into S', and increment the threshold by 1/n. Return S'. Also called stochastic universal sampling.
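
A sketch of systematic (low-variance) resampling as described above; the guard on the cdf index is a floating-point safety detail, not part of the slide.

```python
import random

def systematic_resampling(samples, weights):
    """Stochastic universal sampling: build the cdf of the normalized
    weights, draw one random number r in [0, 1/n), then step through
    the thresholds r + j/n for j = 0 .. n-1."""
    n = len(samples)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:                 # generate cdf
        acc += w / total
        cdf.append(acc)
    r = random.uniform(0.0, 1.0 / n)  # single random number
    out, i = [], 0
    for j in range(n):
        u = r + j / n                 # next threshold
        while i < n - 1 and cdf[i] < u:  # skip until threshold reached
            i += 1
        out.append(samples[i])        # insert the selected particle
    return out

random.seed(5)
resampled = systematic_resampling(["a", "b", "c", "d"], [0.7, 0.1, 0.1, 0.1])
# "a" holds 70% of the weight, so it receives 2 or 3 of the 4 slots
```

Because a single random number spaces all thresholds evenly, a particle with weight w is guaranteed between floor(n*w) and ceil(n*w) copies, which is what makes the variance low.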

Particle Filters Pose particles drawn at random and uniformly over the entire pose space

Sensor Information: Importance Sampling After the robot senses the door, MCL assigns an importance factor to each particle.

Robot Motion Incorporating the robot motion and resampling leads to a new particle set with uniform importance weights but an increased number of particles near the three likely places.

Sensor Information: Importance Sampling A new measurement assigns non-uniform importance weights to the particle set; most of the cumulative probability mass is now centered on the second door.

Robot Motion Further motion leads to another re-sampling step, and a step in which a new particle set is generated according to the motion model

Practical Considerations Particles represent a discrete approximation of a continuous belief. 1. Density extraction (estimation): recovering a continuous density from the samples. Density estimation methods: Gaussian approximation, for unimodal distributions; histogram, for multi-modal distributions, where the probability of each bin is obtained by summing the weights of the particles that fall in its range; kernel density estimation, for multi-modal, smooth densities, where each particle is used as a kernel (e.g. a Gaussian kernel placed at each particle) and the overall density is a mixture of the kernel densities. The choice depends on the specific application and the available computational resources.

Different Ways of Extracting Densities from Particles Gaussian Approximation Histogram Approximation Kernel Density Estimate
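
Two of these extraction methods can be sketched for 1-D weighted particles; the function names and bandwidth are illustrative assumptions.

```python
import math

def gaussian_from_particles(particles, weights):
    """Gaussian approximation: fit a weighted mean and variance to the
    particle set (adequate only for unimodal beliefs)."""
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, particles)) / total
    var = sum(w * (x - mean) ** 2 for w, x in zip(weights, particles)) / total
    return mean, var

def kernel_density(particles, weights, x, bandwidth=0.5):
    """Kernel density estimate: a Gaussian kernel is placed at each
    particle, and the overall density is the weighted mixture."""
    total = sum(weights)
    norm = math.sqrt(2.0 * math.pi) * bandwidth
    return sum((w / total) * math.exp(-((x - p) ** 2) / (2 * bandwidth ** 2)) / norm
               for p, w in zip(particles, weights))

mean, var = gaussian_from_particles([0.0, 1.0, 2.0], [1.0, 2.0, 1.0])  # (1.0, 0.5)
```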

Properties of Particle Filters Sampling variance: the variation due to random sampling. The sampling variance decreases with the number of samples; higher numbers of samples give more accurate approximations with less variability. If enough samples are chosen, the sample-based belief is "close enough" to the true belief.

Drawbacks In order to explore a significant part of the state space, the number of particles must be very large, which induces complexity problems and is ill-suited to real-time implementation. PF methods are very sensitive to inconsistent measurements or high measurement errors. More particles give a better approximation (at higher cost), but there is no formula for the "right amount".

Importance Sampling with Resampling: Reducing Sampling Error 1. Variance reduction: reduce the frequency of resampling. Resampling too often increases the risk of losing diversity; resampling too infrequently wastes many samples in regions of low probability. When resampling is postponed, the importance weights are maintained in memory and updated multiplicatively. 2. Low-variance sampling: instead of choosing M random numbers and selecting particles accordingly, compute a single random number and select samples according to it, still with probability proportional to the sample weight; each resulting number then points to exactly one particle.

Low variance resampling Draw a random number r in the interval [0, 1/M). Select particles by repeatedly adding the fixed amount 1/M to r and choosing the particle that corresponds to the resulting number.

Low Variance Resampling Procedure Principle: selection is a sequential stochastic process. Samples are selected with respect to a single random number r, but with probability proportional to the sample weight.

Advantages of Particle Filters: they can deal with nonlinearities, can deal with non-Gaussian noise, are mostly parallelizable, are easy to implement, and focus adaptively on the probable regions of state space.

Summary Particle filters are an implementation of recursive Bayesian filtering They represent the posterior by a set of weighted samples. In the context of localization, the particles are propagated according to the motion model. They are then weighted according to the likelihood of the observations. In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation

Monte Carlo Localization One of the most popular particle filter methods for robot localization Reference: Dieter Fox, Wolfram Burgard, Frank Dellaert, Sebastian Thrun, “Monte Carlo Localization: Efficient Position Estimation for Mobile Robots”, Proc. 16th National Conference on Artificial Intelligence, AAAI’99, July 1999

MCL in action “Monte Carlo” Localization -- refers to the resampling of the distribution each time a new observation is integrated

Monte Carlo Localization The probability density function is represented by samples randomly drawn from it. It can represent multi-modal distributions and thus localize the robot globally. It considerably reduces the amount of memory required and can integrate measurements at a higher rate. The state is not discretized, so the method is more accurate than grid-based methods. It is easy to implement.

Robot modeling Map m and location x; p( z | x, m ) is the sensor model, with e.g. p( | x, m ) = .75 and p( | x, m ) = .05 for potential observations z.

Robot modeling p( z | x, m ): sensor model for map m and location x, with e.g. p( | x, m ) = .75 and p( | x, m ) = .05 for potential observations z. p( xnew | xold, u, m ): action model, the "probabilistic kinematics" of encoder uncertainty; red lines indicate the commanded action and the cloud indicates the likelihood of various final states.

Robot modeling: how-to p( z | x, m ): sensor model; p( xnew | xold, u, m ): action model. (0) Theoretical modeling: model the physics of the sensors/actuators, with error estimates. (1) Empirical modeling: measure lots of sensing/action results and create a model from them; take N measurements, find the mean m and standard deviation s, and then use a Gaussian model or some other easily-manipulated model, e.g. p( x ) = 1 if |x-m| <= s and 0 otherwise, or p( x ) = 1 - |x-m|/s if |x-m| <= s and 0 otherwise. (2) Make something up...
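
The two "easily-manipulated" empirical models on this slide can be written directly (1-D; function names are illustrative):

```python
def box_model(x, m, s):
    """Uniform box model: constant within one standard deviation s of
    the measured mean m, zero outside."""
    return 1.0 if abs(x - m) <= s else 0.0

def triangle_model(x, m, s):
    """Triangular model: peaks at the mean m and falls linearly to
    zero at m - s and m + s."""
    d = abs(x - m)
    return 1.0 - d / s if d <= s else 0.0
```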

Motion Model Reminder Start

Proximity Sensor Model Reminder Sonar sensor Laser sensor

Monte Carlo Localization Start by assuming p( x0 ) is the uniform distribution. take K samples of x0 and weight each with an importance factor of 1/K

Monte Carlo Localization Start by assuming p( x0 ) is the uniform distribution. take K samples of x0 and weight each with an importance factor of 1/K Get the current sensor observation, z1 For each sample point x0 multiply the importance factor by p(z1 | x0, m)

Monte Carlo Localization Start by assuming p( x0 ) is the uniform distribution. take K samples of x0 and weight each with an importance factor of 1/K Get the current sensor observation, z1 For each sample point x0 multiply the importance factor by p(z1 | x0, m) Normalize (make sure the importance factors add to 1) You now have an approximation of p(x1 | z1, …, m) and the distribution is no longer uniform

Monte Carlo Localization Start by assuming p( x0 ) is the uniform distribution. take K samples of x0 and weight each with an importance factor of 1/K Get the current sensor observation, z1 For each sample point x0 multiply the importance factor by p(z1 | x0, m) Normalize (make sure the importance factors add to 1) You now have an approximation of p(x1 | z1, …, m) and the distribution is no longer uniform Create x1 samples by dividing up large clumps each point spawns new ones in proportion to its importance factor

Monte Carlo Localization Start by assuming p( x0 ) is the uniform distribution. take K samples of x0 and weight each with an importance factor of 1/K Get the current sensor observation, z1 For each sample point x0 multiply the importance factor by p(z1 | x0, m) Normalize (make sure the importance factors add to 1) You now have an approximation of p(x1 | z1, …, m) and the distribution is no longer uniform Create x1 samples by dividing up large clumps each point spawns new ones in proportion to its importance factor The robot moves, u1 For each sample x1, move it according to the model p(x2 | u1, x1, m)
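
The sense-weight-resample portion of the steps above can be sketched in a toy 1-D corridor; the door map, likelihood values, and sample count are all illustrative assumptions, not from the slides.

```python
import random

random.seed(6)

# Map m: door positions along a 1-D corridor; the sensor reports
# door / no-door.
DOORS = [2.0, 5.0, 8.0]

def p_z_given_x(z_door, x):
    """Likelihood of a door detection at pose x given the map."""
    near = min(abs(x - d) for d in DOORS) < 0.5
    if z_door:
        return 0.8 if near else 0.1
    return 0.2 if near else 0.9

K = 2000
samples = [random.uniform(0.0, 10.0) for _ in range(K)]  # uniform p(x0)
weights = [1.0 / K] * K                                  # importance 1/K each

def near_door_fraction(samples):
    return sum(min(abs(x - d) for d in DOORS) < 0.5 for x in samples) / len(samples)

before = near_door_fraction(samples)  # roughly 0.3 under the uniform prior

# The robot senses a door: multiply weights by p(z1 | x0, m),
# normalize, then resample in proportion to the importance factors.
weights = [w * p_z_given_x(True, x) for w, x in zip(weights, samples)]
eta = sum(weights)
weights = [w / eta for w in weights]
samples = random.choices(samples, weights=weights, k=K)

after = near_door_fraction(samples)   # mass concentrates near the doors
```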

MCL in action “Monte Carlo” Localization -- refers to the resampling of the distribution each time a new observation is integrated

Initial Distribution

After Incorporating Ten Ultrasound Scans

After Incorporating 65 Ultrasound Scans

Estimated Path

Using Ceiling Maps for Localization

Vision-based Localization h(x) z P(z|x)

Under a Light Measurement z: P(z|x):

Next to a Light Measurement z: P(z|x):

Elsewhere Measurement z: P(z|x):

Global Localization Using Vision

Robots in Action: Albert

Application: Rhino and Albert Synchronized in Munich and Bonn [Robotics And Automation Magazine, to appear]

Localization for AIBO robots

Limitations The approach described so far is able to track the pose of a mobile robot and to globally localize the robot. How can we deal with localization errors (i.e., the kidnapped robot problem)?

Approaches Randomly insert samples (the robot can be teleported at any point in time). Insert random samples proportional to the average likelihood of the particles (the robot has been teleported with higher probability when the likelihood of its observations drops).
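
The second approach can be sketched as follows; the function name, the baseline, and the 1 - (average likelihood / baseline) injection rule are illustrative assumptions in the spirit of these slides (and of augmented MCL), not a definitive implementation.

```python
import random

random.seed(7)

def augment_with_random(samples, avg_likelihood, baseline, sample_pose):
    """Kidnapped-robot recovery sketch: when the average measurement
    likelihood drops below a long-term baseline, replace a proportional
    share of the particles with poses drawn at random from the pose space."""
    frac = max(0.0, 1.0 - avg_likelihood / baseline)
    n_random = int(frac * len(samples))
    kept = random.sample(samples, len(samples) - n_random)
    return kept + [sample_pose() for _ in range(n_random)]

sample_pose = lambda: random.uniform(0.0, 10.0)
samples = [5.0] * 1000  # filter confidently converged (possibly wrongly)
# The observation likelihood has collapsed to half its baseline, so
# about half of the particles are replaced by random poses.
recovered = augment_with_random(samples, avg_likelihood=0.05, baseline=0.10,
                                sample_pose=sample_pose)
```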

Random Samples: Vision-Based Localization 936 images, 4 MB, 0.6 secs/image. Trajectory of the robot:

Odometry Information

Image Sequence

Resulting Trajectories Position tracking:

Resulting Trajectories Global localization:

Global Localization

Kidnapping the Robot

Summary Particle filters are an implementation of recursive Bayesian filtering They represent the posterior by a set of weighted samples. In the context of localization, the particles are propagated according to the motion model. They are then weighted according to the likelihood of the observations. In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.

References
Dieter Fox, Wolfram Burgard, Frank Dellaert, Sebastian Thrun, "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots", Proc. 16th National Conference on Artificial Intelligence, AAAI'99, July 1999.
Dieter Fox, Wolfram Burgard, Sebastian Thrun, "Markov Localization for Mobile Robots in Dynamic Environments", Journal of Artificial Intelligence Research 11 (1999) 391-427.
Sebastian Thrun, "Probabilistic Algorithms in Robotics", Technical Report CMU-CS-00-126, School of Computer Science, Carnegie Mellon University, Pittsburgh, USA, 2000.

Thank You