Simultaneous Localization and Mapping Presented by Lihan He Apr. 21, 2006

Outline
- Introduction
- SLAM using Kalman filter
- SLAM using particle filter
  - Particle filter
  - SLAM by particle filter
- My work: searching problem

Introduction: SLAM
SLAM: Simultaneous Localization and Mapping. A robot is exploring an unknown, static environment.
Given:
- The robot's controls
- Observations of nearby features
Estimate:
- Location of the robot -- localization: where am I?
- Detailed map of the environment -- mapping: what does the world look like?
The controls and observations are both noisy.

Introduction: SLAM
Markov assumption: a dynamic Bayesian network over the robot states x_0, x_1, ..., x_t, actions a_1, ..., a_t, observations o_1, ..., o_t, and the map m.
State transition: p(x_t | x_{t-1}, a_t)
Observation function: p(o_t | x_t, m)

Introduction: SLAM
Method: sequentially estimate the probability distribution p(x_t) and update the map.
Prior: p(x_0)
p(x_1), p(m_1) or m_1
...
p(x_{t-1}), p(m_{t-1}) or m_{t-1}
p(x_t), p(m_t) or m_t
where the prior distribution on x_t is the one obtained after taking action a_t.

Introduction: SLAM
Map representations:
1. Landmark-based map representation: track the positions of a fixed number of predetermined sparse landmarks. Observation: estimated distance from each landmark.
2. Grid-based map representation: the map is represented by a fine spatial grid, where each grid square is either occupied or empty. Observation: estimated distance from an obstacle using a laser range finder.

Introduction: SLAM
Methods:
1. Parametric method -- Kalman filter: the robot's trajectory estimate is a tracking problem. Represent the distribution of the robot location x_t (and map m_t) by a normal distribution, and sequentially update μ_t and Σ_t.
2. Sample-based method -- particle filter: represent the distribution of the robot location x_t (and map m_t) by a large number of simulated samples, and resample x_t (and m_t) at each time step.

Introduction: SLAM
Why is SLAM a hard problem?
- Robot location and map are both unknown: location error and map error interact.
- Small errors accumulate quickly over time steps. The errors come from inaccurate measurement of the actual robot motion (noisy actions) and of the distance from obstacles/landmarks (noisy observations).
- When the robot closes a physical loop in the environment, serious misalignment errors can occur.

SLAM: Kalman filter
Update equation (Bayes filter):
p(x_t | o_{1:t}, a_{1:t}) ∝ p(o_t | x_t) ∫ p(x_t | x_{t-1}, a_t) p(x_{t-1} | o_{1:t-1}, a_{1:t-1}) dx_{t-1}
Assume the prior p(x_0) is a normal distribution and the observation function p(o|x) is a normal distribution.
Then the posteriors p(x_1), ..., p(x_t) are all normally distributed, and the mean μ_t and covariance matrix Σ_t can be derived analytically.
Sequentially update μ_t and Σ_t for each time step t.

SLAM: Kalman filter
Assume a linear-Gaussian model:
State transition: x_t = A x_{t-1} + B a_t + ε, with ε ~ N(0, Q)
Observation function: o_t = C x_t + δ, with δ ~ N(0, R)
Kalman filter:
Propagation (motion model): μ'_t = A μ_{t-1} + B a_t, Σ'_t = A Σ_{t-1} A^T + Q
Update (sensor model): K_t = Σ'_t C^T (C Σ'_t C^T + R)^{-1}, μ_t = μ'_t + K_t (o_t - C μ'_t), Σ_t = (I - K_t C) Σ'_t
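As a rough illustration, here is a minimal sketch of the propagation and update steps in Python/NumPy, assuming the linear-Gaussian model above with matrices A, B, C and noise covariances Q, R (function and variable names are illustrative, not from the slides):

```python
import numpy as np

def kf_propagate(mu, Sigma, a, A, B, Q):
    """Motion model: predict mean and covariance after taking action a."""
    mu_bar = A @ mu + B @ a
    Sigma_bar = A @ Sigma @ A.T + Q
    return mu_bar, Sigma_bar

def kf_update(mu_bar, Sigma_bar, o, C, R):
    """Sensor model: correct the prediction with observation o."""
    S = C @ Sigma_bar @ C.T + R                 # innovation covariance
    K = Sigma_bar @ C.T @ np.linalg.inv(S)      # Kalman gain
    mu = mu_bar + K @ (o - C @ mu_bar)
    Sigma = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu, Sigma
```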

SLAM: Kalman filter
The hidden state for landmark-based SLAM combines localization and mapping:
x_t = (x, y, θ, l_{1,x}, l_{1,y}, ..., l_{N,x}, l_{N,y})
i.e. the robot pose plus a map with N landmarks: a (3+2N)-dimensional Gaussian. The state vector x_t can grow as new landmarks are discovered.
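For illustration, a hedged sketch of how the Gaussian state could be grown when a new landmark is discovered; the initialization variance and function name are assumptions, not from the slides:

```python
import numpy as np

def add_landmark(mu, Sigma, landmark_xy, init_var=1e6):
    """Append a newly discovered landmark (x, y) to the (3+2N)-dimensional state."""
    mu_new = np.concatenate([mu, landmark_xy])
    n = len(mu_new)
    Sigma_new = np.zeros((n, n))
    Sigma_new[:n - 2, :n - 2] = Sigma
    Sigma_new[n - 2:, n - 2:] = init_var * np.eye(2)   # large initial uncertainty
    return mu_new, Sigma_new
```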

SLAM: particle filter
Update equation (Bayes filter):
p(x_t | o_{1:t}, a_{1:t}) ∝ p(o_t | x_t) ∫ p(x_t | x_{t-1}, a_t) p(x_{t-1} | o_{1:t-1}, a_{1:t-1}) dx_{t-1}
Idea:
- The normal distribution assumption in the Kalman filter is not necessary.
- A set of samples approximates the posterior distribution and is used at the next iteration.
- Each sample maintains its own map, or all samples maintain a single map.
- The map(s) is updated upon observation, assuming that the robot location is given correctly.

SLAM: particle filter
Importance sampling: assume it is difficult to sample directly from the posterior p(x_t | o_{1:t}, a_{1:t}), but it is easy to get samples from another distribution q(x_t). We sample x^i_t from q(x_t), with normalized weight for each x^i_t given by
w^i_t ∝ p(x^i_t | o_{1:t}, a_{1:t}) / q(x^i_t)
The weighted sample set {(x^i_t, w^i_t)} (the particles) is an approximation of the posterior.
Particle filter: resample x_t from the weighted particle set, with replacement, to get a sample set with uniform weights.
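A minimal sketch of the weighting-and-resampling idea, assuming the particles and their unnormalized importance weights are already available (names are illustrative):

```python
import numpy as np

def importance_resample(particles, weights, rng=np.random.default_rng()):
    """Resample particles with replacement according to normalized weights."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                        # normalize the weights
    idx = rng.choice(len(particles), size=len(particles), replace=True, p=w)
    return [particles[i] for i in idx]                  # resampled set, uniform weights
```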

SLAM: particle filter Particle filter (cont’d):

SLAM: particle filter
Choose an appropriate proposal distribution q: take q(x_t) = p(x_t | x^i_{t-1}, a_t), the transition probability. Then the importance weight reduces to the observation function:
w^i_t ∝ p(o_t | x^i_t)

SLAM: particle filter
Algorithm (let state x_t represent the robot's location):
1. Propagate each state through the state transition probability p(x_t | x_{t-1}, a_t). This samples a new state given the previous state.
2. Weight each new state according to the observation function p(o_t | x_t).
3. Normalize the weights to get w^i_t.
4. Resampling: sample N_s new states from the weighted set; these are the updated robot locations at time t.
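A minimal sketch of one iteration of this algorithm, assuming user-supplied functions `sample_motion` (draws from the state transition probability) and `obs_likelihood` (evaluates the observation function); both names are placeholders:

```python
import numpy as np

def particle_filter_step(particles, a, o, sample_motion, obs_likelihood,
                         rng=np.random.default_rng()):
    """One iteration: propagate, weight, normalize, resample."""
    # 1. Propagate each particle through the state transition probability.
    propagated = [sample_motion(x, a) for x in particles]
    # 2. Weight each new state according to the observation function.
    weights = np.array([obs_likelihood(o, x) for x in propagated], dtype=float)
    # 3. Normalize the weights.
    weights /= weights.sum()
    # 4. Resample N_s new states with replacement.
    idx = rng.choice(len(propagated), size=len(propagated), replace=True, p=weights)
    return [propagated[i] for i in idx]
```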

SLAM: particle filter
State transition probability: the new pose is drawn from a Gaussian centered on the expected robot motion, where the expected moving distance and angle are those produced by taking action a_t.
Observation probability: for each sensor k, the measured distance o^k_t (the observation) is modeled as a Gaussian around the map distance from location x_t to the obstacle.
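A hedged sketch of such models for a planar robot, assuming Gaussian noise on the commanded motion and on each range reading; the noise levels and the `map_dist` helper (distance from a pose to the nearest obstacle along beam k) are illustrative assumptions:

```python
import numpy as np

def sample_motion(pose, a, sigma_d=0.1, sigma_th=0.05, rng=np.random.default_rng()):
    """Sample a new pose from a commanded move a = (distance, turn) plus Gaussian noise."""
    x, y, theta = pose
    d, dtheta = a
    theta_new = theta + dtheta + rng.normal(0.0, sigma_th)
    d_noisy = d + rng.normal(0.0, sigma_d)
    return (x + d_noisy * np.cos(theta_new), y + d_noisy * np.sin(theta_new), theta_new)

def obs_likelihood(o, pose, map_dist, sigma_o=0.2):
    """p(o | x): product of Gaussians around the map distance for each sensor beam k."""
    like = 1.0
    for k, measured in enumerate(o):
        expected = map_dist(pose, k)        # map distance from pose to obstacle, beam k
        like *= np.exp(-0.5 * ((measured - expected) / sigma_o) ** 2)
    return like
```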

SLAM: particle filter
Much of the work on SLAM using particle filters focuses on:
- Reducing the cumulative error
- FastSLAM (online)
- How to organize the data structure (saving the robot path and map): maintaining a single map causes cumulative error; maintaining multiple maps costs memory and computation time.
In Parr's paper:
- Use an ancestry tree to record particle history.
- Each particle has its own map (multiple maps).
- Use an observation tree for each grid square (cell) to record the map corresponding to each particle.
- Update the ancestry tree and observation tree at each iteration.
- Cell occupancy is represented by a probabilistic approach.

SLAM: particle filter

Searching problem
Assumptions:
- The agent does not have a map, does not know the underlying model, and does not know where the target is.
- The agent has 2 sensors:
  - Camera: tells the agent which cells are "occupied" or "empty" in 4 orientations; a noisy sensor.
  - Acoustic sensor: finds the orientation of the target; effective only within a certain distance.
- Noisy observations, noisy actions.

Searching problem
Differences from SLAM:
- A rough map is enough; an accurate map is not necessary.
- The objective is to find the target. The robot needs to actively select actions to find the target as soon as possible.
Similarity to SLAM:
- To find the target, the agent needs to build a map and estimate its own location.

Searching problem
Model:
- The environment is represented by a rough grid; each grid square (state) is either occupied or empty. The agent moves between the empty grid squares.
- Actions: walk in any one of the 4 directions, or "stay". Walking can fail with a certain probability.
- Observations: observe the 4 orientations of the neighboring grid squares as "occupied" or "empty". The observation can be wrong with a certain probability.
- State, action and observation are all discrete.

Searching problem
In each step, the agent updates its location and map:
- Belief state: which state the agent believes it is currently in; a distribution over all the states in the current map.
- The map: what the agent thinks the environment is. For each state (grid square), a 2-dimensional Dirichlet distribution represents the probability of "empty" and "occupied". The hyperparameters of the Dirichlet distribution are updated based on the current observation and belief state.

Searching problem
Belief state update:
b_t(s) ∝ p(o_t | s, map) Σ_{s_j ∈ N(s) ∪ {s}} p(s | s_j, a_t) b_{t-1}(s_j)
where:
- p(s | s_j, a_t) is the probability of successfully moving from s_j to s when taking action a_t;
- p(o_t | s, map) comes from the map representation, with the observation in orientation j determined by the neighbor of s in orientation j;
- N(s) is the set of neighbor states of s.
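A minimal sketch of this discrete Bayes update, assuming `trans_prob` and `obs_prob` implement the transition and observation probabilities described above (both names are placeholders):

```python
def update_belief(belief, a, o, trans_prob, obs_prob):
    """Discrete Bayes filter over grid states.

    belief                -- dict mapping state -> probability
    trans_prob(s, sp, a)  -- probability of arriving at s from sp under action a
    obs_prob(o, s)        -- probability of observation o at state s (from the map)
    """
    states = list(belief)
    new_belief = {}
    for s in states:
        predicted = sum(trans_prob(s, sp, a) * belief[sp] for sp in states)
        new_belief[s] = obs_prob(o, s) * predicted
    total = sum(new_belief.values())
    return {s: v / total for s, v in new_belief.items()}
```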

Searching problem
Map update (Dirichlet distribution):
Assume at step t-1 the hyperparameters for state s are (α^occ_{t-1}(s), α^emp_{t-1}(s)). At step t, the hyperparameters for state s are updated as
α^occ_t(s) = α^occ_{t-1}(s) + p(s is occupied | o), α^emp_t(s) = α^emp_{t-1}(s) + p(s is empty | o)
where p(s is occupied | o) and p(s is empty | o) are the posteriors after observing o, given that the agent is in the neighbor of state s. If the probability of a wrong observation in any orientation is p_0, then the observation likelihood for an empty cell is p_0 if o is "occupied" and 1-p_0 if o is "empty"; the likelihood for an occupied cell can be computed similarly.
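A hedged sketch of one possible cell update consistent with the description above; weighting by the agent's belief of being adjacent to the cell is an assumption, as are the function and parameter names:

```python
def update_cell(alpha_occ, alpha_emp, o, p0, weight=1.0):
    """Update the Dirichlet hyperparameters of one neighboring cell.

    o      -- observed label for the cell, "occupied" or "empty"
    p0     -- probability of a wrong observation in any orientation
    weight -- belief that the agent is actually adjacent to this cell (assumption)
    """
    # Likelihood of the observation given the true cell label
    p_o_given_occ = (1.0 - p0) if o == "occupied" else p0
    p_o_given_emp = p0 if o == "occupied" else (1.0 - p0)
    # Prior occupancy from the current hyperparameters
    prior_occ = alpha_occ / (alpha_occ + alpha_emp)
    # Posterior probability that the cell is occupied, given o
    post_occ = (p_o_given_occ * prior_occ /
                (p_o_given_occ * prior_occ + p_o_given_emp * (1.0 - prior_occ)))
    return alpha_occ + weight * post_occ, alpha_emp + weight * (1.0 - post_occ)
```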

Searching problem
Example (figures): belief state update and map representation update as the agent takes the actions a = "right" and a = "up".

Searching problem
Choose actions: each state is assigned a reward R(s) according to the following rules:
- Less explored grid squares have a higher reward.
- Try to walk to an "empty" grid square.
- Consider the neighbors of s with priority.
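For illustration only, a hedged sketch of reward-based action selection as a greedy one-step lookahead that picks the action with the highest expected reward under the current belief; the slides do not specify the exact rule, so this formulation and all names are assumptions:

```python
def choose_action(belief, actions, states, trans_prob, reward):
    """Greedy one-step action selection under the current belief state."""
    def expected_reward(a):
        # Expected reward of the state reached by taking action a from the believed state
        return sum(b_s * trans_prob(s_next, s, a) * reward(s_next)
                   for s, b_s in belief.items() for s_next in states)
    return max(actions, key=expected_reward)
```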