
SLAM: Simultaneous Localization And Mapping

SLAM is an Approach, Not an Algorithm
There is a range of hardware that can be used, and there are many steps involved in SLAM:
– Each can be implemented using a number of different algorithms.
I will present a complete solution for SLAM using the EKF (Extended Kalman Filter)
– Complete does not mean perfect
– I will cover all the basic steps required to get an implementation up and running
SLAM as such has not been completely solved, and there is still considerable research going on in the field.

A High-Level Description of SLAM
1. Update the current state estimate using the odometry data
– Apply the controls of the robot to the old state estimate
2. Update the estimated state by re-observing landmarks
– Using the estimate of the current position, it is possible to estimate where each landmark should be
– There is usually some difference; this is called the innovation
– The uncertainty of each observed landmark is also updated
3. Add new landmarks to the current state (the map of the world)
– This is done using the current position estimate, adding information about the relation between the new landmark and the old landmarks

Modeling a Unicycle Robot
Most robots can be modeled as a unicycle
The robot is treated as a point (zero radius)
Increase the size of all objects by the radius of the robot (the inflated map is the robot's configuration space)
Environment measurement device
– Laser scanner: error < ±50 mm
Odometry sensors
– Wheel encoders: error < 2 cm per meter moved, error < 2° per 45° rotated
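The unicycle model above can be sketched in a few lines. This is a minimal Euler-step version; the function name and arguments are my own, not from the slides:

```python
import math

def unicycle_step(x, y, theta, v, w, dt):
    """Advance a unicycle-model robot one time step.

    (x, y, theta): current pose; v: forward speed; w: turn rate; dt: time step.
    Simple Euler integration, as commonly used for the EKF prediction step.
    """
    x_new = x + v * dt * math.cos(theta)
    y_new = y + v * dt * math.sin(theta)
    theta_new = theta + w * dt
    return x_new, y_new, theta_new
```

Driving straight for one second at 1 m/s from the origin moves the robot one meter along its heading.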

Use the Environment to Update the Position of the Robot
The odometry of the robot is often erroneous, but can be used as an estimate
Laser scans of the environment can be used to correct the position of the robot
Extract features from the environment and re-observe them as the robot moves
Apply an EKF (Extended Kalman Filter) to update where the robot thinks it is
– The EKF keeps track of an estimate of the uncertainty in the robot's position, and also of the uncertainty in the landmarks it has seen in the environment

At any point the EKF will have an estimate of the robot's current position
At periodic small time slices, the odometry changes:
– The robot moves
– There is noise in the measurements
– The uncertainty pertaining to the robot's new position is updated in the EKF
The laser scanner extracts landmarks from the environment
Associate these landmarks with observations of landmarks the robot has previously seen
Re-observed landmarks are then used to update the robot's position in the EKF
Landmarks which have not previously been seen are added to the EKF as new observations so they can be re-observed later

Coordinate Systems
Measurements are taken in the Local Coordinate System (LCS) and need to be converted to the World Coordinate System (WCS)
[Figure: WCS axes X, Y; LCS axes X', Y'; robot heading θ and measurement bearing Φ]
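The LCS-to-WCS conversion is a rotation by the robot's heading plus a translation by its position. A minimal sketch (names are mine) for a single range/bearing measurement:

```python
import math

def lcs_to_wcs(r, phi, rx, ry, rtheta):
    """Convert a (range r, bearing phi) measurement in the robot's local
    frame (LCS) to a landmark position in the world frame (WCS).
    (rx, ry, rtheta) is the robot pose in the WCS."""
    lx = rx + r * math.cos(rtheta + phi)
    ly = ry + r * math.sin(rtheta + phi)
    return lx, ly
```

For example, a robot at (1, 0) facing along the world Y axis that sees a landmark 2 m straight ahead places it at (1, 2) in the WCS.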

In More Detail
The robot initially measures using its sensors
[Figure: robot, landmark and sensor measurement drawn in the WCS]

The Robot Moves
The robot thinks it is now here, based on its odometry
[Figure: robot and landmark in the WCS]

The Robot Once Again Measures the Location of the Landmarks
The sensors don't match the odometry
Both have noise: which is most correct?
[Figure: odometry estimate vs. sensor measurement in the WCS]

In Reality the Robot is Here
The difference between the sensor-estimated robot position and the actual robot position is, basically, the innovation
[Figure: odometry estimate, sensor estimate and the robot's real position in the WCS]

The Extended Kalman Filter Quickly Calculates a Good Approximation
The EKF quickly zeros in on a good guess for the actual position
It takes into account the variances
[Figure: EKF estimate converging on the robot's real position in the WCS]

The World is a Noisy Place
Inherent sensor inaccuracies
– Echoes & reflections
– Precision & slip (especially wheel encoders)
– Temperature & humidity (especially sonic sensors)
– Repeatability
System inaccuracies
– Propagation delay
– Synchronization (all sensors don't measure at the same instant)
– Interpolation/extrapolation can help
– Duty cycle
Are there others?

LANDMARKS & LANDMARK EXTRACTION

Landmarks: Easily Re-observed and Distinguishable From the Environment
The type of landmarks used will often depend on the environment in which the robot is operating
Inside:
– Walls
– Corners
– Doors
– Furniture???
– Lights???
Outside:
– Walls
– Corners
– Doors
– Trees
– Sidewalks
You must choose sensors appropriate for detecting the type of landmarks available in the environment

Key Traits to Look For in a Landmark
– Easily re-observable
– Distinguishable from each other
– Plentiful in the environment (but not too plentiful)
– Stationary

We Will Look at Two Algorithms
They differ based on the types of landmarks they find; both are useful with laser sensors
Spike
– Uses extrema to find landmarks
– Relies on the landscape changing a lot
– Not good if people are in the area
RANSAC (Random Sample Consensus)
– Extracts lines from laser scanner point clouds
– In indoor environments straight lines are often observed by laser scans, as these are characteristic of straight walls

Spikes
Find where two consecutive ranges differ by more than a certain amount; OR take three consecutive values A, B and C:
– Subtract B from A and B from C, and add the two numbers
– This is better for finding spikes, as it will find actual spikes and not just permanent changes in range
[Figure: the red dots are table legs extracted as landmarks]
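The A, B, C test above can be sketched as follows. A spike of depth d scores about 2d, while a mere step change of height d scores only d, which is why this variant separates the two; the `threshold` parameter is an assumed tuning value, not from the slides:

```python
def find_spikes(ranges, threshold=1.0):
    """Flag indices where a range reading spikes relative to its neighbours.

    For consecutive readings A, B, C the test is (A - B) + (C - B):
    a true spike at B contributes from both sides, a permanent step
    change contributes from only one, so a suitable threshold keeps
    steps out while catching spikes.
    """
    spikes = []
    for i in range(1, len(ranges) - 1):
        a, b, c = ranges[i - 1], ranges[i], ranges[i + 1]
        if abs((a - b) + (c - b)) > threshold:
            spikes.append(i)
    return spikes
```

With a threshold of 4, a table leg 3 m closer than the wall behind it is flagged, while a 3 m step in the wall itself is not.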

RANSAC is More Computationally Intensive
– Randomly take two of the laser readings
– Use a least squares approximation to find the best fit line that runs through these points
– Count how many laser readings lie close to this best fit line

Pseudocode for the RANSAC Algorithm
While there are still unassociated laser readings, and the number of readings is larger than the consensus, and we have done fewer than N trials {
    Select a random laser data reading
    Randomly sample S data readings within D degrees of this laser data reading (for example, choose 5 sample readings that lie within 10 degrees of the randomly selected laser data reading)
    Using these S samples and the original reading, calculate a least squares best fit line
    Determine how many laser data readings lie within X centimeters of this best fit line
    If the number of laser data readings on the line is above some consensus C {
        Calculate a new least squares best fit line based on all the laser readings determined to lie on the old best fit line
        Add this best fit line to the lines we have extracted
        Remove the readings lying on the line from the total set of unassociated readings
    }
}
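The pseudocode above can be sketched as runnable code. This is a simplified version: it samples uniformly rather than within D degrees of a seed reading, fits lines as y = m·x + b (so vertical lines would need special handling), and all parameter defaults are illustrative, not the slides':

```python
import random

def fit_line(points):
    """Least-squares fit y = m*x + b. Assumes the line is not vertical."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points); sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def ransac_lines(points, trials=50, sample_size=3, tol=0.05, consensus=8, seed=0):
    """Extract line landmarks from 2-D scan points, RANSAC-style."""
    rng = random.Random(seed)
    points = list(points)
    lines = []
    for _ in range(trials):
        if len(points) < consensus:
            break
        m, b = fit_line(rng.sample(points, sample_size))
        # Readings within tol of the candidate line are its consensus set
        inliers = [p for p in points if abs(p[1] - (m * p[0] + b)) < tol]
        if len(inliers) >= consensus:
            lines.append(fit_line(inliers))      # refit on all inliers
            points = [p for p in points if p not in inliers]
    return lines
```

Run on ten points along y = 2x + 1 plus two stray readings, it recovers the single wall line and leaves the strays unassociated.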

An Example Result
[Figure: the green lines are the best fit lines representing the landmarks; the red dots represent the landmarks approximated to points. Just above the robot is a person.]
By changing the RANSAC parameters you could also extract the small wall segments
RANSAC is robust against people in the laser scan

The EKF Assumes Landmarks Come as Range and Bearing in the LCS
One can easily translate a line into a fixed point:
– Take another fixed point in the WCS (the origin?)
– Calculate the point on the line closest to this fixed point
– Using the robot's position and the position of this fixed point on the line, it is trivial to calculate a range and bearing
[Figure: extracted landmark line, the WCS origin (0, 0), and the point representing the landmark]
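Using the WCS origin as the fixed point, the closest point on the line y = m·x + b to (0, 0) has a closed form, and the range/bearing to it follows directly. A minimal sketch (function names are mine):

```python
import math

def line_to_landmark(m, b):
    """Closest point on the line y = m*x + b to the WCS origin (0, 0).

    Derived by minimising x^2 + (m*x + b)^2 over x.
    """
    x = -m * b / (m * m + 1.0)
    y = b / (m * m + 1.0)
    return x, y

def range_bearing(rx, ry, rtheta, lx, ly):
    """Range and bearing from robot pose (rx, ry, rtheta) to point (lx, ly)."""
    dx, dy = lx - rx, ly - ry
    r = math.hypot(dx, dy)
    phi = math.atan2(dy, dx) - rtheta
    return r, phi
```

For the horizontal wall y = 2, the landmark point is (0, 2); a robot at the origin facing along X sees it at range 2, bearing +90°.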

DATA ASSOCIATION
The problem of data association is that of matching observed landmarks from different (laser) scans with each other.

The Following Problems Can Arise in Data Association
– You might not re-observe landmarks every time step
– You might observe something as being a landmark but fail to ever see it again
– You might wrongly associate a landmark with a previously seen landmark

A Policy That Deals With These Issues
Assumptions:
– There is a “database” to persist landmarks
– We won't actually consider a landmark worthwhile unless we have seen it N times

The Nearest Neighbor Approach
1. When you get a new laser scan, use landmark extraction to extract all visible landmarks
2. Associate each extracted landmark to the closest landmark we have seen more than N times in the database
3. Pass each of these pairs of associations (extracted landmark, landmark in database) through a validation gate
   a. If the pair passes the validation gate, it must be the same landmark we have re-observed, so increment the number of times we have seen it in the database
   b. If the pair fails the validation gate, add this landmark as a new landmark in the database and set the number of times we have seen it to 1
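The three steps above can be sketched as follows. For simplicity this stand-in gates on plain Euclidean distance; the full EKF version gates on Mahalanobis distance (next slide). The database layout, `gate` and `min_seen` values are all illustrative assumptions:

```python
import math

def associate(observed, database, gate=0.5, min_seen=3):
    """Nearest-neighbour association of observed landmark points (x, y)
    against a database of {'pos': (x, y), 'seen': n} entries."""
    for obs in observed:
        # Only landmarks seen at least min_seen times are candidates
        confirmed = [lm for lm in database if lm['seen'] >= min_seen]
        best = min(confirmed, key=lambda lm: math.dist(obs, lm['pos']),
                   default=None)
        if best is not None and math.dist(obs, best['pos']) < gate:
            best['seen'] += 1                           # re-observed
        else:
            database.append({'pos': obs, 'seen': 1})    # new landmark
    return database
```

An observation near a confirmed landmark bumps its count; a far-away observation enters the database as a fresh candidate.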

The “Database” Stores the Landmarks
We can determine if an observed landmark is in the database by checking if the landmark lies within the area of uncertainty
– This area can actually be drawn graphically, and is known as an error ellipse
– By setting a constant λ, an observed landmark is associated to a landmark if the following formula holds:
  v_i^T S_i^{-1} v_i ≤ λ
– where v_i is the innovation and S_i is the innovation covariance
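The gate formula is a one-liner with numpy. The default λ = 5.99 is an assumption on my part (the chi-square 95% value for 2 degrees of freedom, a common choice), not a value from the slides:

```python
import numpy as np

def passes_gate(v, S, lam=5.99):
    """Validation gate: accept the association if v^T S^-1 v <= lambda.

    v: innovation vector (e.g. range and bearing difference);
    S: innovation covariance matrix of matching size.
    """
    v = np.asarray(v, dtype=float)
    d2 = v @ np.linalg.solve(S, v)   # Mahalanobis distance squared
    return bool(d2 <= lam)
```

With S = I, an innovation of (1, 1) gives distance² = 2 and passes; (3, 3) gives 18 and fails.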

THE EXTENDED KALMAN FILTER (EKF)
The Extended Kalman Filter is used to estimate the state (position) of the robot from odometry data and landmark observations.

The Kalman Filter Gives a Bound on the Uncertainty of an Observation
Also known as linear quadratic estimation (LQE)
An algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone, by using Bayesian inference and estimating a joint probability distribution over the variables for each timeframe

What is “Extended”?
The Kalman filter is linear
– Assumes the noise distribution is zero-mean Gaussian
– Assumes the equations of motion are (mathematically) linear
The Extended Kalman Filter handles non-linear models
– Uses the Jacobian and a Taylor series expansion to linearize
– Then it is just a base Kalman filter

The Motion Model and Observation Model are Linear
x_t = A_t x_{t-1} + B_t u_t + ε_t (the motion model)
z_t = C_t x_t + δ_t (the observation model)
x_t is the state of the robot at time t
– Given the previous state and the control inputs, what is the new state?
z_t is the state of the observations at time t
– Given the state of the world, what should the robot observe?
– ε_t and δ_t are the noise terms; their covariances describe the errors in the motion and in the measurement
YOUTUBE: Special Topics - The Kalman Filter
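These two models plug straight into the standard Kalman predict/update cycle. A minimal sketch (R and Q here name the motion and measurement noise covariances; conventions vary between texts):

```python
import numpy as np

def kf_step(x, P, u, z, A, B, C, R, Q):
    """One predict/update cycle of the linear Kalman filter.

    Motion model:      x_t = A x_{t-1} + B u_t + eps_t,   eps ~ N(0, R)
    Observation model: z_t = C x_t + delta_t,             delta ~ N(0, Q)
    """
    # Predict from the motion model
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + R
    # Update from the observation
    S = C @ P_pred @ C.T + Q                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)    # correct with the innovation
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

In a scalar example, a robot commanded to move 1 unit that then measures 1.2 ends up estimating a position between the two, with reduced uncertainty.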

X is the System State
– The position of the robot: x, y and θ
– The x and y position of each landmark
It is 1 column wide and 3+2n rows high, where n is the number of landmarks

Covariance: 1/(n-1) Σ (x_i - x̄)(y_i - ȳ)
Covariance provides a measure of how strongly correlated two variables are
– The higher the covariance between two variables, the more closely their values follow the same trends over a range of data points
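The sample covariance formula above, written out directly:

```python
def covariance(xs, ys):
    """Sample covariance: 1/(n-1) * sum((x_i - x_mean) * (y_i - y_mean))."""
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    return sum((x - x_mean) * (y - y_mean)
               for x, y in zip(xs, ys)) / (n - 1)
```

For xs = [1, 2, 3] and ys = [2, 4, 6] (ys rising exactly twice as fast) the covariance is 2.0; the covariance of a variable with itself is its variance.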

The Covariance Matrix
Must be initialized using some default values for the diagonal
– This reflects uncertainty in the initial position
– It is a good idea to include some initial error, even if there is reason to believe that the initial robot position is exact

Covariance of the Robot Position
– A 3 by 3 sub-matrix (x, y and θ)
– When no landmarks are found, this is the entire matrix

Covariance of the Landmarks
– A 2 by 2 sub-matrix (x, y) for each landmark

Covariance Between the Robot State and Each Landmark
– 3 by 2 or 2 by 3 each
– The two blocks are transposes of each other across the diagonal

Covariance Between Each Pair of Landmarks
– 2 by 2 each
– The two blocks are transposes of each other across the diagonal
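Putting the last four slides together, the full covariance matrix can be laid out as below. The default variances are purely illustrative; in practice the landmark blocks are filled in from the measurement model when each landmark is added:

```python
import numpy as np

def init_covariance(n_landmarks, pose_var=0.1, landmark_var=1e6):
    """Initial (3 + 2n) x (3 + 2n) covariance matrix P.

    Top-left 3x3 block: robot pose (x, y, theta).
    One 2x2 diagonal block per landmark.
    Off-diagonal blocks (robot-landmark and landmark-landmark
    cross-covariances) start at zero and fill in as the filter runs.
    """
    dim = 3 + 2 * n_landmarks
    P = np.zeros((dim, dim))
    P[:3, :3] = np.eye(3) * pose_var
    for i in range(n_landmarks):
        j = 3 + 2 * i
        P[j:j + 2, j:j + 2] = np.eye(2) * landmark_var
    return P
```

With two landmarks this yields a symmetric 7×7 matrix, matching the 3+2n rule.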

The Kalman Gain Tells You How Much of Each Estimate to Trust
Kalman gain = E_est / (E_est + E_meas)
– The first column describes how much should be gained from the innovation in terms of range
– The second column describes how much should be gained from the innovation in terms of bearing
– The size of the matrix is 2 columns and 3+2n rows, where n is the number of landmarks

Step 1: Update the Current State Using the Odometry Data (the Prediction Step)
– Update the position
– Update the covariance matrix

Step 2: Update the State From Re-observed Landmarks
This step is run for each re-observed landmark
Predict where the landmark is, using the current estimated robot position (x, y) and the saved landmark position (λ_x, λ_y), with the following formula:
A bunch of complicated math …

Step 2 Continued
Now we can compute the Kalman gain:
K = P H^T (H P H^T + V R V^T)^{-1}
– The term (H P H^T + V R V^T) is called the innovation covariance
Which, after the complicated math, is the same as:
K = E_est / (E_est + E_meas)
The Kalman gain now contains a set of numbers indicating how much each of the landmark positions and the robot position should be updated according to the re-observed landmark

Step 2 Complete
Compute a new state vector using the Kalman gain:
X = X + K (z - h)
This operation will update the robot position along with all the landmark positions, provided the term (z - h) does not come out as (0, 0)
Note that (z - h) yields two numbers: the displacement in range and in bearing

Step 3: Add New Landmarks to the Current State
Update the state vector X and the covariance matrix P with the new landmarks
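Growing X and P for a new landmark looks like the following. This is a minimal sketch: it gives the new 2×2 block a large assumed initial variance and zero cross-covariance, whereas a full EKF-SLAM implementation would derive both from the measurement Jacobians:

```python
import numpy as np

def add_landmark(X, P, lx, ly, init_var=1e6):
    """Append a new landmark (lx, ly) to the state vector X and
    covariance matrix P, growing both by two dimensions."""
    X_new = np.append(X, [lx, ly])
    dim = len(X_new)
    P_new = np.zeros((dim, dim))
    P_new[:dim - 2, :dim - 2] = P                    # keep the old blocks
    P_new[dim - 2:, dim - 2:] = np.eye(2) * init_var  # new landmark block
    return X_new, P_new
```

Starting from a pose-only state (3 elements), one call grows the system to 5 elements and a 5×5 covariance, in line with the 3+2n layout.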

Done! The robot is now ready to move again: observe landmarks, associate landmarks, update the system state using odometry, update the system state using re-observed landmarks, and finally add new landmarks.