City College of New York 1 Dr. Jizhong Xiao Department of Electrical Engineering City College of New York Advanced Mobile Robotics Probabilistic Robotics (II)

City College of New York 2 Outline Localization Methods –Markov Localization (review) –Kalman Filter Localization –Particle Filter (Monte Carlo Localization) Current Research Topics –SLAM –Multi-robot Localization

City College of New York 3 Robot Navigation Fundamental problems to provide a mobile robot with autonomous capabilities: Where am I going? (Mission Planning) What’s the best way there? (Path Planning) Where have I been? → how to create an environmental map with imperfect sensors? (Mapping) Where am I? → how can a robot tell where it is on a map? (Localization) What if you’re lost and don’t have a map? (Robot SLAM)

City College of New York 4 Representation of the Environment Environment Representation –Continuous Metric → x, y, θ –Discrete Metric → metric grid –Discrete Topological → topological grid

City College of New York 5 Localization Methods Markov Localization: –Central idea: represent the robot’s belief by a probability distribution over possible positions, and use Bayes’ rule and convolution to update the belief whenever the robot senses or moves –Markov Assumption: past and future data are independent if one knows the current state Kalman Filtering: –Central idea: pose the localization problem as a sensor fusion problem –Assumption: Gaussian distribution function Particle Filtering: –Monte-Carlo method SLAM (simultaneous localization and mapping) Multi-robot localization

City College of New York 6 Probability Theory Bayes Formula: P(x, y) = P(x | y) P(y) = P(y | x) P(x)  ⇒  P(x | y) = P(y | x) P(x) / P(y) Normalization: P(x | y) = η P(y | x) P(x), where η = 1 / P(y) = 1 / Σ_x' P(y | x') P(x')
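A small worked example of the normalization step (the numbers are illustrative, not taken from the slides): suppose a door sensor reports z = "door detected" with P(z | open) = 0.6 and P(z | not open) = 0.3, and the prior is P(open) = 0.5. Then P(open | z) = P(z | open) P(open) / [ P(z | open) P(open) + P(z | not open) P(not open) ] = 0.30 / (0.30 + 0.15) = 2/3, i.e. the normalizer is η = 1/0.45.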

City College of New York 7 Probability Theory Bayes Rule with background knowledge: P(x | y, z) = P(y | x, z) P(x | z) / P(y | z) Law of Total Probability: P(x) = Σ_y P(x | y) P(y) (discrete case), P(x) = ∫ P(x | y) P(y) dy (continuous case)

City College of New York 8 Bayes Filters: Framework Given: –Stream of observations z and action data u: d_t = { u_1, z_1, …, u_t, z_t } –Sensor model P(z | x) –Action model P(x | u, x’) –Prior probability of the system state P(x) Wanted: –Estimate of the state X of a dynamical system –The posterior of the state is also called Belief: Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

City College of New York 9 Markov Assumption Underlying Assumptions: Static world, Independent noise, Perfect model, no approximation errors State transition probability: P(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = P(x_t | x_{t-1}, u_t) Measurement probability: P(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = P(z_t | x_t) Markov Assumption: –past and future data are independent if one knows the current state

City College of New York 10 Bayes Filters z = observation, u = action, x = state Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t) (Bayes) = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t) (Markov) = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t) (Total prob.) = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1} (Markov) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

City College of New York 11 Bayes Filter Algorithm
1. Algorithm Bayes_filter( Bel(x), d ):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel’(x) = P(z | x) Bel(x)
6.       η = η + Bel’(x)
7.     For all x do
8.       Bel’(x) = η⁻¹ Bel’(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel’(x) = ∫ P(x | u, x’) Bel(x’) dx’
12.  Return Bel’(x)
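To make the control flow concrete, here is a minimal Python sketch of the same filter for a finite state space. The two-cell "door/wall" world, the sensor model P(z|x), and the motion model P(x|u,x') below are invented placeholders for illustration, not models from the slides.

```python
# Minimal discrete Bayes filter sketch (state space and models are placeholders).

def measurement_update(bel, z, p_z_given_x):
    """Perceptual update: Bel'(x) = eta * P(z|x) * Bel(x)."""
    new_bel = {x: p_z_given_x(z, x) * b for x, b in bel.items()}
    eta = sum(new_bel.values())
    return {x: b / eta for x, b in new_bel.items()}

def action_update(bel, u, p_x_given_u_xprev):
    """Prediction update: Bel'(x) = sum over x' of P(x|u,x') * Bel(x')."""
    states = list(bel.keys())
    return {x: sum(p_x_given_u_xprev(x, u, xp) * bel[xp] for xp in states)
            for x in states}

# Tiny usage example: the robot is either at a 'door' or at a 'wall'.
bel = {"door": 0.5, "wall": 0.5}
sensor = lambda z, x: 0.6 if z == x else 0.4          # illustrative P(z|x)
motion = lambda x, u, xp: 0.8 if x != xp else 0.2     # a 'move' usually changes the cell
bel = measurement_update(bel, "door", sensor)
bel = action_update(bel, "move", motion)
print(bel)
```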

City College of New York 12 Bayes Filters are Familiar! Kalman filters Particle filters Hidden Markov models Dynamic Bayesian networks Partially Observable Markov Decision Processes (POMDPs)

City College of New York 13 Localization Localization: determining the pose of a robot relative to a given map of the environment Given: –Map of the environment –Sequence of sensor measurements Wanted: –Estimate of the robot’s position Problem classes: –Position tracking –Global localization –Kidnapped robot problem (recovery)

City College of New York 14 Markov Localization
1. Algorithm Markov_Localization( Bel(x), a, z, m ):
2. For all x do
3.     Bel’(x) = P(z | x, m) ∫ P(x | x’, a, m) Bel(x’) dx’
4.     Endfor
5. Normalize Bel’(x)
6. Return Bel’(x)
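As a concrete instance, a grid-based Markov localization sketch for a simple 1D corridor follows; the corridor length, the door positions, and the noise probabilities are assumptions chosen only for the example.

```python
import numpy as np

# 1D corridor with 10 cells; cells 1, 4, 8 contain doors (illustrative map m).
doors = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0], dtype=float)
bel = np.full(10, 0.1)                     # uniform prior over the 10 cells

def sense(bel, z_door, p_hit=0.6, p_miss=0.2):
    """Perceptual update: weight each cell by P(z | x, m), then normalize."""
    likelihood = np.where(doors == z_door, p_hit, p_miss)
    bel = bel * likelihood
    return bel / bel.sum()

def move(bel, u=1, p_exact=0.8, p_under=0.1, p_over=0.1):
    """Action update: convolve the belief with a noisy motion model (circular corridor)."""
    return (p_exact * np.roll(bel, u)
            + p_under * np.roll(bel, u - 1)
            + p_over * np.roll(bel, u + 1))

bel = sense(bel, z_door=1)   # the robot senses a door
bel = move(bel, u=1)         # the robot moves one cell to the right
bel = sense(bel, z_door=1)   # it senses a door again
print(np.round(bel, 3))
```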

City College of New York 15

City College of New York 16 Localization Methods Markov Localization: –Central idea: represent the robot’s belief by a probability distribution over possible positions, and use Bayes’ rule and convolution to update the belief whenever the robot senses or moves –Markov Assumption: past and future data are independent if one knows the current state Kalman Filtering: –Central idea: pose the localization problem as a sensor fusion problem –Assumption: Gaussian distribution function Particle Filtering: –Monte-Carlo method SLAM (simultaneous localization and mapping) Multi-robot localization

City College of New York 17 Kalman Filter Localization

City College of New York 18 Introduction to Kalman Filter (1) Two measurements q_1 and q_2 with variances σ_1² and σ_2² Weighted least squares: minimize the error S = Σ_i w_i ( q̂ - q_i )² Finding the minimum error: set dS/dq̂ = 0 After some calculation and rearrangements, taking the weights as w_i = 1/σ_i² (Gaussian probability density function), the best estimate for the robot position is q̂ = q_1 + σ_1² / ( σ_1² + σ_2² ) ( q_2 - q_1 )
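A small Python sketch of this inverse-variance fusion rule; the function name and the numeric readings are illustrative assumptions, not values from the slides.

```python
def fuse(q1, var1, q2, var2):
    """Inverse-variance (weighted least-squares) fusion of two measurements.

    q_hat = q1 + var1/(var1+var2) * (q2 - q1); the fused variance is
    var1*var2/(var1+var2), which is never larger than either input variance.
    """
    q_hat = q1 + var1 / (var1 + var2) * (q2 - q1)
    var_hat = var1 * var2 / (var1 + var2)
    return q_hat, var_hat

# Example: two range readings of the same wall, the second one more precise.
print(fuse(10.0, 0.04, 10.3, 0.01))   # the estimate lies closer to the second reading
```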

City College of New York 19 Introduction to Kalman Filter (2) In Kalman Filter notation: x̂_{k+1} = x̂_k + K_{k+1} ( z_{k+1} - x̂_k ) The best estimate x̂_{k+1} of the state at time k+1 is equal to the best prediction x̂_k of its value before the new measurement z_{k+1} is taken, plus a weighting K_{k+1} times the difference between z_{k+1} and the best prediction at time k

City College of New York 20 Introduction to Kalman Filter (3) Dynamic Prediction (robot moving): u = velocity, w = noise (motion model) Combining fusion and dynamic prediction
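Putting the prediction and the fusion step together, a one-dimensional Kalman filter sketch; the variable names and all numbers are illustrative assumptions rather than the slide's notation.

```python
def kf_predict(x, p, u, q):
    """Dynamic prediction: propagate the estimate with velocity u and process noise q."""
    return x + u, p + q

def kf_update(x, p, z, r):
    """Measurement fusion: gain K = p/(p+r); blend the prediction with measurement z."""
    k = p / (p + r)
    return x + k * (z - x), (1 - k) * p

# One predict/update cycle (all numbers illustrative).
x, p = 0.0, 1.0
x, p = kf_predict(x, p, u=1.0, q=0.1)
x, p = kf_update(x, p, z=1.2, r=0.2)
print(x, p)
```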

City College of New York 21 Kalman Filter for Mobile Robot Localization Five steps: 1) Position Prediction, 2) Observation, 3) Measurement prediction, 4) Matching, 5) Estimation

City College of New York 22 Kalman Filter for Mobile Robot Localization Robot Position Prediction –In the first step, the robot’s position at time step k+1 is predicted based on its old location (time step k) and its movement due to the control input u(k): x̂(k+1|k) = f( x̂(k|k), u(k) ) f: Odometry function

City College of New York 23 Robot Position Prediction: Example Odometry
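The odometry equations on this slide are images in the original deck, so the following is only a hedged sketch of the common differential-drive prediction step, assuming the usual (Δs, Δθ) parametrization with first-order covariance propagation; the slide's exact model may differ in detail.

```python
import numpy as np

def predict_pose(pose, d_s, d_theta, P, Q):
    """Odometry-based position prediction with first-order covariance propagation.

    pose = (x, y, theta); d_s = distance travelled, d_theta = heading change.
    P is the 3x3 pose covariance, Q the 2x2 covariance of (d_s, d_theta).
    """
    x, y, theta = pose
    a = theta + d_theta / 2.0
    new_pose = np.array([x + d_s * np.cos(a),
                         y + d_s * np.sin(a),
                         theta + d_theta])
    # Jacobians of the odometry function f with respect to the pose and the control input.
    F_x = np.array([[1.0, 0.0, -d_s * np.sin(a)],
                    [0.0, 1.0,  d_s * np.cos(a)],
                    [0.0, 0.0,  1.0]])
    F_u = np.array([[np.cos(a), -0.5 * d_s * np.sin(a)],
                    [np.sin(a),  0.5 * d_s * np.cos(a)],
                    [0.0,        1.0]])
    new_P = F_x @ P @ F_x.T + F_u @ Q @ F_u.T
    return new_pose, new_P
```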

City College of New York 24 Kalman Filter for Mobile Robot Localization Observation –The second step is to obtain the observation Z(k+1) (measurements) from the robot’s sensors at the new location at time k+1 –The observation usually consists of a set of n_0 single observations z_j(k+1) extracted from the different sensors’ signals. It can represent raw data scans as well as features like lines, doors or any kind of landmarks. –The parameters of the targets are usually observed in the sensor frame {S}. Therefore the observations have to be transformed to the world frame {W} or the measurement prediction has to be transformed to the sensor frame {S}. This transformation is specified in the function h_i (seen later).

City College of New York 25 Observation: Example Raw Data of Laser Scanner Extracted Lines in Model Space Sensor (robot) frame

City College of New York 26 Kalman Filter for Mobile Robot Localization Measurement Prediction –In the next step we use the predicted robot position and the map M(k) to generate multiple predicted observations z_t –They have to be transformed into the sensor frame –We can now define the measurement prediction as the set containing all n_i predicted observations –The function h_i is mainly the coordinate transformation between the world frame and the sensor frame

City College of New York 27 Measurement Prediction: Example For prediction, only the walls that are in the field of view of the robot are selected. This is done by linking the individual lines to the nodes of the path

City College of New York 28 Measurement Prediction: Example The generated measurement predictions have to be transformed to the robot frame {R} According to the figure in the previous slide, the transformation is given by and its Jacobian by

City College of New York 29 Kalman Filter for Mobile Robot Localization Matching –Assignment from observations z_j(k+1) (gained by the sensors) to the targets z_t (stored in the map) –For each measurement prediction for which a corresponding observation is found we calculate the innovation: v_ij(k+1) = z_j(k+1) - ẑ_i(k+1) and its innovation covariance found by applying the error propagation law: Σ_IN(k+1) = ∇h_i P(k+1|k) ∇h_i^T + R_j(k+1) –The validity of the correspondence between measurement and prediction can e.g. be evaluated through the Mahalanobis distance: v_ij^T(k+1) Σ_IN⁻¹(k+1) v_ij(k+1) ≤ g²

City College of New York 30 Matching: Example

City College of New York 31 Matching: Example To find correspondence (pairs) of predicted and observed features we use the Mahalanobis distance v_ij^T(k+1) Σ_IN⁻¹(k+1) v_ij(k+1) ≤ g², with the innovation v_ij and innovation covariance Σ_IN as defined on the previous slides
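A short Python sketch of this Mahalanobis-distance gating; the gate value g² = 9.21 (a 99% chi-square bound for 2-DOF line features) and the numeric innovation and covariance are assumptions for illustration only.

```python
import numpy as np

def mahalanobis_gate(innovation, S, gate=9.21):
    """Accept a (prediction, observation) pair if v^T S^-1 v <= gate.

    innovation v = z_observed - z_predicted; S is the innovation covariance.
    gate = 9.21 is a 99% chi-square threshold for 2 degrees of freedom
    (e.g. line parameters alpha, r), chosen here purely for illustration.
    """
    d2 = float(innovation @ np.linalg.inv(S) @ innovation)
    return d2 <= gate, d2

v = np.array([0.02, 0.05])        # e.g. (d_alpha, d_r) for one wall line
S = np.diag([0.001, 0.004])       # illustrative innovation covariance
print(mahalanobis_gate(v, S))
```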

City College of New York 32 Estimation: Applying the Kalman Filter Kalman filter gain: K(k+1) = P(k+1|k) ∇h^T Σ_IN⁻¹(k+1) Update of robot’s position estimate: x̂(k+1|k+1) = x̂(k+1|k) + K(k+1) v(k+1) The associated variance: P(k+1|k+1) = P(k+1|k) - K(k+1) Σ_IN(k+1) K^T(k+1)
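A compact sketch of this estimation step; the function signature and the matrix name H (the stacked measurement Jacobian ∇h) are my own, chosen only to mirror the formulas above.

```python
import numpy as np

def ekf_update(x_pred, P_pred, v, S, H):
    """Kalman estimation step: gain, position update, and covariance update.

    x_pred, P_pred : predicted pose and covariance
    v, S           : stacked innovation and its covariance (from matching)
    H              : stacked measurement Jacobian
    """
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ v                   # updated position estimate
    P_new = P_pred - K @ S @ K.T             # associated (reduced) covariance
    return x_new, P_new
```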

City College of New York 33 Estimation: 1D Case For the one-dimensional case we can show that the estimation corresponds to the Kalman filter for one dimension presented earlier.

City College of New York 34 Estimation: Example Kalman filter estimation of the new robot position : –By fusing the prediction of robot position (magenta) with the innovation gained by the measurements (green) we get the updated estimate of the robot position (red)

City College of New York 35

City College of New York 36 Monte Carlo Localization One of the most popular particle filter methods for robot localization Current Research Topics Reference: –Dieter Fox, Wolfram Burgard, Frank Dellaert, Sebastian Thrun, “Monte Carlo Localization: Efficient Position Estimation for Mobile Robots”, Proc. 16th National Conference on Artificial Intelligence, AAAI’99, July 1999

City College of New York 37 MCL in action “Monte Carlo” Localization -- refers to the resampling of the distribution each time a new observation is integrated

City College of New York 38 Monte Carlo Localization –The probability density function is represented by samples randomly drawn from it –It is also able to represent multi-modal distributions, and thus localize the robot globally –Considerably reduces the amount of memory required and can integrate measurements at a higher rate –State is not discretized and the method is more accurate than the grid-based methods –Easy to implement

City College of New York 39 “Probabilistic Robotics” Bayes’ rule: p( A | B ) = p( B | A ) p( A ) / p( B ) Definition of marginal probability: p( A ) = Σ_{all B} p( A | B ) p( B ), i.e. p( A ) = Σ_{all B} p( A ∧ B ) Definition of conditional probability: p( A | B ) = p( A ∧ B ) / p( B ) - Sebastian Thrun

City College of New York 40 Setting up the problem The robot does (or can be modeled to) alternate between sensing -- getting range observations o_1, o_2, o_3, …, o_{t-1}, o_t acting -- driving around (or ferrying?) a_1, a_2, a_3, …, a_{t-1} “local maps” whence?

City College of New York 41 Setting up the problem The robot does (or can be modeled to) alternate between sensing -- getting range observations o_1, o_2, o_3, …, o_{t-1}, o_t acting -- driving around (or ferrying?) a_1, a_2, a_3, …, a_{t-1} We want to know r_t -- the position of the robot at time t but we’ll settle for p( r_t ) -- the probability distribution for r_t ! What kind of thing is p( r_t ) ?

City College of New York 42 Setting up the problem The robot does (or can be modeled to) alternate between sensing -- getting range observations o_1, o_2, o_3, …, o_{t-1}, o_t acting -- driving around (or ferrying?) a_1, a_2, a_3, …, a_{t-1} We want to know r_t -- the position of the robot at time t We do know m -- the map of the environment but we’ll settle for p( r_t ) -- the probability distribution for r_t ! What kind of thing is p( r_t ) ? p( o | r, m ) -- the sensor model p( r_new | r_old, a, m ) -- the accuracy of desired action a

City College of New York 43 Robot modeling p( o | r, m ) sensor model map m and location r p( · | r, m ) = 0.75 p( · | r, m ) = 0.05 potential observations o

City College of New York 44 Robot modeling p( o | r, m ) sensor model p( r_new | r_old, a, m ) action model “probabilistic kinematics” -- encoder uncertainty red lines indicate commanded action the cloud indicates the likelihood of various final states map m and location r p( · | r, m ) = 0.75 p( · | r, m ) = 0.05 potential observations o

City College of New York 45 Robot modeling: how-to p( o | r, m ) sensor model p( r_new | r_old, a, m ) action model (0) theoretical modeling: model the physics of the sensor/actuators (with error estimates) (1) empirical modeling: measure lots of sensing/action results and create a model from them, e.g. take N measurements, find mean (m) and st. dev. (σ) and then use a Gaussian model or, some other easily-manipulated model... (2) Make something up..., e.g. p( x ) = 1 if |x-m| ≤ σ, 0 otherwise or p( x ) = 1 - |x-m|/σ if |x-m| ≤ σ, 0 otherwise
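A sketch of the "empirical modeling" route in Python: fit a Gaussian error model from N measurements and use it as p( o | r, m ); the readings, the helper names, and the true range below are invented for the illustration.

```python
import numpy as np

def fit_gaussian_sensor_model(readings, true_range):
    """Empirical modeling: estimate mean/std of the sensor error from N measurements."""
    errors = np.asarray(readings) - true_range
    return errors.mean(), errors.std()

def p_obs(o, expected, mu, sigma):
    """Gaussian likelihood p(o | r, m) of a range reading o given the expected range."""
    return np.exp(-0.5 * ((o - expected - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Five calibration readings of a wall known to be 2.0 m away (illustrative data).
mu, sigma = fit_gaussian_sensor_model([2.02, 1.97, 2.05, 1.99, 2.03], true_range=2.0)
print(p_obs(2.1, expected=2.0, mu=mu, sigma=sigma))
```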

City College of New York 46 Monte Carlo Localization Start by assuming p( r_0 ) is the uniform distribution. take K samples of r_0 and weight each with an importance factor of 1/K

City College of New York 47 Monte Carlo Localization Start by assuming p( r_0 ) is the uniform distribution. Get the current sensor observation, o_1 For each sample point r_0 multiply the importance factor by p( o_1 | r_0, m ) take K samples of r_0 and weight each with an importance factor of 1/K

City College of New York 48 Monte Carlo Localization Start by assuming p( r_0 ) is the uniform distribution. Get the current sensor observation, o_1 For each sample point r_0 multiply the importance factor by p( o_1 | r_0, m ) take K samples of r_0 and weight each with an importance factor of 1/K Normalize (make sure the importance factors add to 1) You now have an approximation of p( r_1 | o_1, …, m ) and the distribution is no longer uniform

City College of New York 49 Monte Carlo Localization Start by assuming p( r_0 ) is the uniform distribution. Get the current sensor observation, o_1 For each sample point r_0 multiply the importance factor by p( o_1 | r_0, m ) take K samples of r_0 and weight each with an importance factor of 1/K Normalize (make sure the importance factors add to 1) You now have an approximation of p( r_1 | o_1, …, m ) Create r_1 samples by dividing up large clumps and the distribution is no longer uniform each point spawns new ones in proportion to its importance factor

City College of New York 50 Monte Carlo Localization Start by assuming p( r_0 ) is the uniform distribution. Get the current sensor observation, o_1 For each sample point r_0 multiply the importance factor by p( o_1 | r_0, m ) take K samples of r_0 and weight each with an importance factor of 1/K Normalize (make sure the importance factors add to 1) You now have an approximation of p( r_1 | o_1, …, m ) Create r_1 samples by dividing up large clumps and the distribution is no longer uniform The robot moves, a_1 each point spawns new ones in proportion to its importance factor For each sample r_1, move it according to the model p( r_2 | a_1, r_1, m )

City College of New York 51 Monte Carlo Localization Start by assuming p( r_0 ) is the uniform distribution. Get the current sensor observation, o_1 For each sample point r_0 multiply the importance factor by p( o_1 | r_0, m ) take K samples of r_0 and weight each with an importance factor of 1/K Normalize (make sure the importance factors add to 1) You now have an approximation of p( r_1 | o_1, …, m ) Create r_1 samples by dividing up large clumps and the distribution is no longer uniform The robot moves, a_1 each point spawns new ones in proportion to its importance factor For each sample r_1, move it according to the model p( r_2 | a_1, r_1, m )
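Pulling the steps above together, a Monte Carlo Localization sketch on a 1D corridor; the map (door positions), the sensor and motion noise values, and the number of samples K are assumptions made only for this example.

```python
import numpy as np

# Monte Carlo Localization sketch on a 1D corridor (all models illustrative).
rng = np.random.default_rng(0)
doors = np.array([1.0, 4.0, 8.0])            # door positions along the corridor (map m)
K = 1000

particles = rng.uniform(0.0, 10.0, size=K)   # p(r_0): uniform over the corridor
weights = np.full(K, 1.0 / K)                # importance factors of 1/K

def sense(particles, weights, z_dist, sigma=0.5):
    """Weight each sample by p(o | r, m): likelihood of the measured distance to the nearest door."""
    expected = np.min(np.abs(particles[:, None] - doors[None, :]), axis=1)
    w = weights * np.exp(-0.5 * ((z_dist - expected) / sigma) ** 2)
    return w / w.sum()                       # normalize the importance factors

def resample(particles, weights):
    """Spawn new samples in proportion to their importance factors."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

def move(particles, a, sigma=0.1):
    """Propagate each sample with the motion model p(r_new | r_old, a, m)."""
    return np.clip(particles + a + rng.normal(0.0, sigma, size=len(particles)), 0.0, 10.0)

weights = sense(particles, weights, z_dist=0.2)    # o_1: robot is about 0.2 m from a door
particles, weights = resample(particles, weights)
particles = move(particles, a=1.0)                 # a_1: drive 1 m forward
print(particles.mean(), particles.std())
```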

City College of New York 52 MCL in action “Monte Carlo” Localization -- refers to the resampling of the distribution each time a new observation is integrated

City College of New York 53 Discussion: MCL

City College of New York 54 References 1. Dieter Fox, Wolfram Burgard, Frank Dellaert, Sebastian Thrun, “Monte Carlo Localization: Efficient Position Estimation for Mobile Robots”, Proc. 16th National Conference on Artificial Intelligence, AAAI’99, July 1999 2. Dieter Fox, Wolfram Burgard, Sebastian Thrun, “Markov Localization for Mobile Robots in Dynamic Environments”, J. of Artificial Intelligence Research 11 (1999) 3. Sebastian Thrun, “Probabilistic Algorithms in Robotics”, Technical Report CMU-CS, School of Computer Science, Carnegie Mellon University, Pittsburgh, USA, 2000