City College of New York 1 Dr. John (Jizhong) Xiao Department of Electrical Engineering City College of New York A Taste of Localization.


City College of New York 1 Introduction to ROBOTICS: A Taste of the Localization Problem. Dr. John (Jizhong) Xiao, Department of Electrical Engineering, City College of New York

City College of New York 2 Topics Brief Review (Robot Mapping) A Taste of Localization Problem

City College of New York 3 Mapping/Localization Answering robotics’ big questions –How to get a map of an environment with imperfect sensors (mapping) –How a robot can tell where it is on a map (localization) This is ongoing research, and among the most difficult tasks for a robot –Even humans get lost in buildings!

City College of New York 4 Review: Use Sonar to Create Map What should we conclude if this sonar reads 10 feet? There is something somewhere around the 10-foot mark, and there isn’t something closer. [Local map figure: cells marked unoccupied, occupied, or no information]

City College of New York 5 What is it a map of? What information should this map contain, given that it is created with sonar? Several answers to this question have been tried: It’s a map of occupied cells (pre ’83): o_xy denotes that cell (x,y) is occupied, ¬o_xy that cell (x,y) is unoccupied. Each cell is either occupied or unoccupied -- this was the approach taken by the Stanford Cart.

City College of New York 6 What is it a map of? Several answers to this question have been tried: It’s a map of occupied cells (pre ’83). It’s a map of probabilities (’83 - ’88): p(o | S_1..i), the certainty that a cell is occupied given the sensor readings S_1, S_2, …, S_i, and p(¬o | S_1..i), the certainty that a cell is unoccupied given those readings. It’s a map of odds. The odds of an event are expressed relative to the complement of that event: odds(o | S_1..i) = p(o | S_1..i) / p(¬o | S_1..i), the odds that a cell is occupied given the sensor readings S_1, S_2, …, S_i. Evidence is the log of the odds: evidence = log2(odds).

City College of New York 7 Combining Evidence So, how do we combine evidence to create a map? What we want -- odds(o | S_2 ∧ S_1), the new value of a cell in the map after the sonar reading S_2. What we know -- odds(o | S_1), the old value of a cell in the map (before sonar reading S_2), and p(S_i | o) & p(S_i | ¬o), the probabilities that a certain obstacle causes the sonar reading S_i. The key to making accurate maps is combining lots of data.

City College of New York 8 Combining Evidence
odds(o | S_2 ∧ S_1)
= p(o | S_2 ∧ S_1) / p(¬o | S_2 ∧ S_1)  [def’n of odds]
= p(S_2 ∧ S_1 | o) p(o) / [p(S_2 ∧ S_1 | ¬o) p(¬o)]  [Bayes’ rule]
= p(S_2 | o) p(S_1 | o) p(o) / [p(S_2 | ¬o) p(S_1 | ¬o) p(¬o)]  [conditional independence of S_1 and S_2]
= [p(S_2 | o) / p(S_2 | ¬o)] · [p(o | S_1) / p(¬o | S_1)]  [Bayes’ rule again]
The update step multiplies the previous odds by a precomputed weight from the sensor model. The key to making accurate maps is combining lots of data.
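The multiplicative update above is easy to implement in log-odds form, where each reading just adds the log of the sensor-model weight to a cell’s evidence. A minimal sketch (the sensor-model probabilities here are made-up illustration values, not from the slides):

```python
import math

def update_log_odds(cell_evidence, p_s_given_occ, p_s_given_free):
    """Add log2 of the weight p(S|o)/p(S|not o) to a cell's evidence."""
    return cell_evidence + math.log2(p_s_given_occ / p_s_given_free)

# Start with even odds (evidence = 0), then fuse two "occupied-looking" readings.
ev = 0.0
ev = update_log_odds(ev, 0.8, 0.2)   # reading strongly suggests an obstacle
ev = update_log_odds(ev, 0.8, 0.2)   # a second consistent reading
odds = 2 ** ev                       # convert evidence back to odds -> 16.0
```

Working in log space turns the repeated multiplications into additions, which avoids numerical underflow when combining many readings.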

City College of New York 9 Mapping Using Evidence Grids Evidence grids represent space as a collection of cells, each with the odds (or probability) that it contains an obstacle: evidence = log2(odds). [Lab environment figure: lighter areas indicate lower evidence of obstacles (likely free space); darker areas indicate higher evidence (likely obstacle); mid-gray means not sure.]

City College of New York 10 Mobot System Overview (abstraction levels, low-level to high-level) –Motor Modeling: what voltage should I set now? –Control (PID): what voltage should I set over time? –Kinematics: if I move this motor somehow, what happens in other coordinate systems? –Motion Planning: given a known world and a cooperative mechanism, how do I get there from here? –Bug Algorithms: given an unknowable world but a known goal and local sensing, how can I get there from here? –Mapping: given sensors, how do I create a useful map? –Localization: given sensors and a map, where am I? –Vision: if my sensors are eyes, what do I do?

City College of New York 11 Content Brief Review (Robot Mapping) A Taste of Localization Problem

City College of New York 12 What’s the problem? WHERE AM I? But what does this mean, really? Reference frame is important –Local/Relative: Where am I vs. where I was? –Global/Absolute: Where am I relative to the world frame? Location can be specified in two ways –Geometric: Distances and angles –Topological: Connections among landmarks

City College of New York 13 Localization: Absolute –Proximity-To-Reference: Landmarks/Beacons –Angle-To-Reference: Visual (manual triangulation from physical points) –Distance-From-Reference: Time of Flight (RF: GPS; Acoustic:) and Signal Fading (EM: Bird/3Space Tracker; RF:; Acoustic:)

City College of New York 14 Triangulation [Figure: landmarks on land, unique target at sea, lines of sight] Works great -- as long as there are reference points!

City College of New York 15 Compass Triangulation [Figure: landmarks on land, unique target at sea, lines of sight, north reference] Cutting-edge 12th-century technology.
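Triangulation from two known landmarks can be sketched as intersecting the two lines of sight: the robot lies on the line through each landmark running opposite to the measured compass bearing toward that landmark. The landmark positions and the test pose below are made-up illustration values:

```python
import math

def triangulate(lm_a, lm_b, bearing_a, bearing_b):
    """Return the robot position from two landmark positions and the
    compass bearings (robot -> landmark) measured to each of them."""
    (ax, ay), (bx, by) = lm_a, lm_b
    uax, uay = math.cos(bearing_a), math.sin(bearing_a)
    ubx, uby = math.cos(bearing_b), math.sin(bearing_b)
    # Robot r satisfies r = A - t*uA = B - s*uB; solve for t by Cramer's rule.
    det = -uax * uby + ubx * uay
    t = (-(ax - bx) * uby + ubx * (ay - by)) / det
    return ax - t * uax, ay - t * uay

# Robot actually at (3, 4); bearings are the directions from the robot
# to landmark A = (0, 0) and landmark B = (10, 0).
x, y = triangulate((0.0, 0.0), (10.0, 0.0),
                   math.atan2(-4.0, -3.0),   # bearing from (3,4) to A
                   math.atan2(-4.0, 7.0))    # bearing from (3,4) to B
```

The solution degenerates (det → 0) when the robot and both landmarks are collinear, which is why real systems prefer widely separated reference points.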

City College of New York 16 Localization: Relative If you know your speed and direction, you can calculate where you are relative to where you were (by integrating). Speed and direction might themselves be absolute (compass, speedometer) or integrated (gyroscope, accelerometer). Relative measurements are usually more accurate in the short term -- but suffer from accumulated error in the long term. Most robotics work seems to focus on this.

City College of New York 17 Localization Methods Markov Localization: –Represent the robot’s belief by a probability distribution over possible positions, and use Bayes’ rule and convolution to update the belief whenever the robot senses or moves Monte-Carlo methods Kalman Filtering SLAM (simultaneous localization and mapping) ….

City College of New York 18 Environment Representation –Continuous metric → (x, y, θ) –Discrete metric → metric grid –Discrete topological → topological grid [Figure: real environment shown as a continuous metric map, a metric grid, and a discrete topological map]

City College of New York 19 Environment Representation Continuous metric: (x, y, θ). Topological: landmark-based; the state space is organized according to the topological structure of the environment. Grid-based: the world is divided into cells of fixed size; the resolution and precision of the state estimate are fixed beforehand. The latter suffers from computational overhead.

City College of New York 20 Probability Review Discrete Random Variables X denotes a random variable. X can take on a countable number of values in {x_1, x_2, …, x_n}. P(X = x_i), or P(x_i), is the probability that the random variable X takes on value x_i. P(·) is called the probability mass function, and the probabilities sum to one: Σ_i P(x_i) = 1.

City College of New York 21 Probability Review Continuous Random Variables X takes on values in the continuum. p(X = x), or p(x), is a probability density function, with ∫ p(x) dx = 1. [Figure: example density p(x) plotted over x]

City College of New York 22 Probability Review Joint and Conditional Probability P(X=x and Y=y) = P(x,y) If X and Y are independent then P(x,y) = P(x) P(y) P(x | y) is the probability of x given y P(x | y) = P(x,y) / P(y) P(x,y) = P(x | y) P(y) If X and Y are independent then P(x | y) = P(x)

City College of New York 23 Law of Total Probability, Marginals Discrete case: P(x) = Σ_y P(x, y) = Σ_y P(x | y) P(y). Continuous case: p(x) = ∫ p(x, y) dy = ∫ p(x | y) p(y) dy.

City College of New York 24 Probability Review Law of total probability: P(x) = Σ_y P(x | y) P(y), with normalization Σ_x P(x) = 1.

City College of New York 25 Conditional Independence P(x, y | z) = P(x | z) P(y | z) is equivalent to P(x | y, z) = P(x | z) and P(y | x, z) = P(y | z).

City College of New York 26 Bayes Formula P(x | y) = P(y | x) P(x) / P(y). Here P(x | y) is the posterior probability distribution and P(x) is the prior probability distribution. If y is a new sensor reading, P(y | x) is the generative model describing the characteristics of the sensor. The denominator P(y) does not depend on x, so it can be absorbed into a normalizer η = 1/P(y): P(x | y) = η P(y | x) P(x).
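A small numeric sketch of the formula, with an assumed prior and sensor model for a door (the 0.5/0.6/0.3 values are toy numbers, not from the slides). The normalizer η is just whatever makes the posterior sum to one:

```python
# Bayes rule: posterior proportional to likelihood * prior, then normalize.
prior = {"open": 0.5, "closed": 0.5}          # P(x), assumed
likelihood = {"open": 0.6, "closed": 0.3}     # P(z | x) for one reading z, assumed

unnorm = {x: likelihood[x] * prior[x] for x in prior}
eta = 1.0 / sum(unnorm.values())              # eta = 1 / P(z)
posterior = {x: eta * p for x, p in unnorm.items()}
# posterior: {"open": 2/3, "closed": 1/3}
```

Because η is common to every state, only the relative sizes of likelihood · prior matter, which is exactly why P(y) “does not depend on x” can be ignored until the final normalization.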

City College of New York 27 Bayes Rule with Background Knowledge P(x | y, z) = P(y | x, z) P(x | z) / P(y | z).

City College of New York 28 Markov Localization What is Markov Localization? –A special case of probabilistic state estimation, applied to mobile robot localization –Initial hypothesis: static environment –Markov assumption: the robot’s location is the only state in the environment which systematically affects sensor readings –Further hypothesis: dynamic environment

City College of New York 29 Markov Localization Applying probability theory to robot localization: Markov localization uses an explicit, discrete representation for the probability of all positions in the state space. This is usually done by representing the environment by a grid or a topological graph with a finite number of possible states (positions). During each update, the probability for each state (element) of the entire space is updated.

City College of New York 30 Markov Localization –Instead of maintaining a single hypothesis as to where the robot is, Markov localization maintains a probability distribution over the space of all such hypotheses –Uses a fine-grained, metric discretization of the state space

City College of New York 31 Example Assume the robot position is one-dimensional. The robot is placed somewhere in the environment, but it is not told its location. The robot queries its sensors and finds out it is next to a door.

City College of New York 32 Example The robot moves one meter forward. To account for inherent noise in robot motion, the new belief is smoother (flatter). The robot queries its sensors, and again it finds itself next to a door.
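The 1-D door example can be sketched on a small discrete grid. The grid size, door positions, and motion-noise probabilities below are all made up for illustration; the structure (sense, move, sense) follows the two slides above:

```python
n = 10
doors = {3, 4, 7}                  # assumed cells that are next to a door
bel = [1.0 / n] * n                # uniform: robot not told its location

def sense_door(bel):
    """Measurement update: cells next to a door become more likely."""
    unnorm = [b * (3.0 if i in doors else 1.0) for i, b in enumerate(bel)]
    eta = 1.0 / sum(unnorm)
    return [eta * u for u in unnorm]

def move_right(bel):
    """Prediction: shift one cell right, smearing belief to model motion noise."""
    new = [0.0] * n
    for i, b in enumerate(bel):
        new[(i + 1) % n] += 0.8 * b    # intended motion
        new[i] += 0.1 * b              # undershoot
        new[(i + 2) % n] += 0.1 * b    # overshoot
    return new

bel = sense_door(bel)   # "I see a door" -> three peaks
bel = move_right(bel)   # belief shifts and smooths
bel = sense_door(bel)   # second door sighting sharpens the estimate
```

After the second sensing, the belief concentrates on the one cell consistent with both door sightings and the one-cell motion (cell 4 in this toy layout).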

City College of New York 33 Basic Notation Bel(L_t = l) is the probability (density) that the robot assigns to the possibility that its location at time t is l. The belief is updated in response to two different types of events: sensor readings and odometry data.

City College of New York 34 Notation Goal: Bel(L_t = l) = P(L_t = l | z_1, u_1, …, z_t, u_t), the posterior over the robot’s location given all sensor readings z and odometry data u up to time t.

City College of New York 35 Markov assumption (or static world assumption): the current state summarizes all past data, so P(z_t | x_t, z_1:t-1, u_1:t) = P(z_t | x_t) and P(x_t | x_1:t-1, z_1:t-1, u_1:t) = P(x_t | x_t-1, u_t).

City College of New York 36 Markov Localization Measurement update: Bel(x) ← η P(z | x) Bel(x). Action update: Bel(x) ← ∫ P(x | u, x′) Bel(x′) dx′.

City College of New York 37 Measurement: Update Phase [Figure: panels a, b, c illustrating the measurement update]

City College of New York 38 Measurement: Update Phase

City College of New York 39 Recursive Bayesian Updating P(x | z_1, …, z_n) = η P(z_n | x) P(x | z_1, …, z_n-1). Markov assumption: z_n is independent of z_1, …, z_n-1 if we know x.
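A quick numeric check of the recursion: under the Markov assumption, folding in z_1 and then z_2 one at a time gives the same posterior as conditioning on both at once. The two-state space and the likelihoods are toy values:

```python
states = ("a", "b")
prior = {"a": 0.5, "b": 0.5}
lik1 = {"a": 0.9, "b": 0.2}   # P(z1 | x), assumed
lik2 = {"a": 0.6, "b": 0.3}   # P(z2 | x), assumed

def update(bel, lik):
    """One recursive step: multiply by the likelihood and renormalize."""
    unnorm = {x: lik[x] * bel[x] for x in bel}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

# Sequential: apply z1, then z2.
sequential = update(update(prior, lik1), lik2)
# Batch: apply the joint likelihood P(z1, z2 | x) = P(z1 | x) P(z2 | x) once.
batch = update(prior, {x: lik1[x] * lik2[x] for x in states})
```

The two agree exactly, which is what makes the filter efficient: it never needs to store old measurements, only the current belief.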

City College of New York 40 Action: Prediction Phase The robot turns its wheels to move The robot uses its manipulator to grasp an object Plants grow over time… Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty. How can we incorporate such actions?

City College of New York 41 Modeling Actions To incorporate the outcome of an action u into the current “belief”, we use the conditional pdf P(x | u, x′). This term specifies the probability density of arriving in state x when executing u from state x′.

City College of New York 42 Integrating the Outcome of Actions Continuous case: P(x | u) = ∫ P(x | u, x′) P(x′) dx′. Discrete case: P(x | u) = Σ_x′ P(x | u, x′) P(x′).

City College of New York 43 Example: Closing the door

City College of New York 44 State Transitions P(x | u, x′) for u = “close door”: if the door is open, the action “close door” succeeds in 90% of all cases. Prior belief: P(open) = 5/8, P(closed) = 3/8.

City College of New York 45 Example: The Resulting Belief
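Plugging the slide-44 numbers into the discrete prediction sum gives the resulting belief. One assumption is made explicit in the code: a door that is already closed stays closed under “close door” (the slides only state the 90% success rate from the open state):

```python
# Prior belief over the door state (from slide 44).
prior = {"open": 5 / 8, "closed": 3 / 8}

# Transition model P(x | u = "close door", x'); the closed->closed = 1.0
# entry is an assumption, not stated in the slide text.
p_trans = {
    ("closed", "open"): 0.9, ("open", "open"): 0.1,
    ("closed", "closed"): 1.0, ("open", "closed"): 0.0,
}

# Discrete prediction: Bel'(x) = sum over x' of P(x | u, x') * Bel(x').
bel = {x: sum(p_trans[(x, xp)] * prior[xp] for xp in prior) for x in prior}
# bel["closed"] = 0.9 * 5/8 + 1.0 * 3/8 = 15/16; bel["open"] = 1/16
```

Note the action moved probability mass toward “closed” without any sensing: actions reshape the belief through the transition model alone.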

City College of New York 46 Summary Measurement update: Bel(x) ← η P(z | x) Bel(x). Action update: Bel(x) ← ∫ P(x | u, x′) Bel(x′) dx′.

City College of New York 47 Bayes Filters: Framework Given: –Stream of observations z and action data u: d_t = {u_1, z_1, …, u_t, z_t} –Sensor model P(z | x) –Action model P(x | u, x′) –Prior probability of the system state P(x) Wanted: –Estimate of the state X of a dynamical system –The posterior of the state, also called the belief: Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

City College of New York 48 Markov Assumption Underlying assumptions: static world, independent noise, perfect model with no approximation errors. State transition probability: P(x_t | x_0:t-1, z_1:t-1, u_1:t) = P(x_t | x_t-1, u_t). Measurement probability: P(z_t | x_0:t, z_1:t-1, u_1:t) = P(z_t | x_t). Markov assumption: past and future data are independent if one knows the current state.

City College of New York 49 Bayes Filters (z = observation, u = action, x = state)
Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
= η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)  [Bayes]
= η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)  [Markov]
= η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_t-1) P(x_t-1 | u_1, z_1, …, u_t) dx_t-1  [Total prob.]
= η P(z_t | x_t) ∫ P(x_t | u_t, x_t-1) Bel(x_t-1) dx_t-1  [Markov]
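The derivation above yields a two-phase algorithm: predict with the action model, then correct with the sensor model. A minimal discrete sketch, reusing the door states; the transition and sensor numbers are toy values, not from the slides:

```python
def bayes_filter(bel, u, z, p_trans, p_meas, states):
    """One step of the discrete Bayes filter: prediction, then correction."""
    # Prediction phase (total probability over the previous state x').
    pred = {x: sum(p_trans(x, u, xp) * bel[xp] for xp in states) for x in states}
    # Correction phase (Bayes rule), then normalize with eta.
    unnorm = {x: p_meas(z, x) * pred[x] for x in states}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

STATES = ("open", "closed")

def p_trans(x, u, xp):
    """Toy action model: "close" succeeds 90% from open, closed stays closed."""
    table = {("closed", "open"): 0.9, ("open", "open"): 0.1,
             ("closed", "closed"): 1.0, ("open", "closed"): 0.0}
    return table[(x, xp)] if u == "close" else float(x == xp)

def p_meas(z, x):
    """Toy sensor model: the sensor reports the true state with prob. 0.8."""
    return 0.8 if z == "sense_" + x else 0.2

bel = bayes_filter({"open": 0.5, "closed": 0.5}, "close", "sense_closed",
                   p_trans, p_meas, STATES)
```

Iterating this one-step update over a stream of (u, z) pairs is the whole filter; Kalman filters, grid-based Markov localization, and particle filters differ only in how they represent bel.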

City College of New York 50 Bayes Filters Are a Family Bayes rule allows us to compute probabilities that are hard to assess otherwise. Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence. Bayes filters are a probabilistic tool for estimating the state of dynamic systems.

City College of New York 51 Thank you! Next Monday: Final Exam Time: 6:30pm-9:00pm Coverage: Mobile Robot Closed-book with 1 page cheat sheet, but Do Not Cheat