Slide 1: Introduction to ROBOTICS - A Taste of the Localization Problem
Dr. John (Jizhong) Xiao, Department of Electrical Engineering, City College of New York (jxiao@ccny.cuny.edu)

Slide 2: Topics
- Brief Review (Robot Mapping)
- A Taste of the Localization Problem

Slide 3: Mapping/Localization
Answering two of robotics' big questions:
- How to get a map of an environment with imperfect sensors (mapping)
- How a robot can tell where it is on a map (localization)
This is ongoing research and one of the most difficult tasks for a robot: even humans get lost in buildings!

Slide 4: Review - Use Sonar to Create a Map
What should we conclude if this sonar reads 10 feet? Roughly: there is something somewhere around the 10-foot arc, and there isn't anything closer along the beam.
[Figure: local map with cells marked unoccupied, occupied, or no information]

Slide 5: What is it a map of?
What information should this map contain, given that it is created with sonar?
Several answers to this question have been tried:
- It's a map of occupied cells (pre-1983; the approach taken by the Stanford Cart): each cell (x,y) is either occupied (o_xy) or unoccupied (¬o_xy).

Slide 6: What is it a map of? (continued)
Several answers to this question have been tried:
- It's a map of occupied cells (pre-1983): each cell (x,y) is either occupied (o_xy) or unoccupied (¬o_xy).
- It's a map of probabilities (1983-1988):
  p(o | S_1..i): the certainty that a cell is occupied, given the sensor readings S_1, S_2, ..., S_i
  p(¬o | S_1..i): the certainty that a cell is unoccupied, given the same readings
- It's a map of odds. The odds of an event are expressed relative to the complement of that event:
  odds(o | S_1..i) = p(o | S_1..i) / p(¬o | S_1..i), the odds that a cell is occupied given the sensor readings S_1, ..., S_i
  Evidence is the log of the odds: evidence = log2(odds)

Slide 7: Combining Evidence
So, how do we combine evidence to create a map? The key to making accurate maps is combining lots of data.
What we want: odds(o | S_2 ∧ S_1), the new value of a cell in the map after the sonar reading S_2.
What we know:
- odds(o | S_1), the old value of a cell in the map (before sonar reading S_2)
- p(S_i | o) and p(S_i | ¬o), the probabilities that a certain obstacle causes the sonar reading S_i

Slide 8: Combining Evidence (derivation)
odds(o | S_2 ∧ S_1)
  = p(o | S_2 ∧ S_1) / p(¬o | S_2 ∧ S_1)                                  (definition of odds)
  = [p(S_2 ∧ S_1 | o) p(o)] / [p(S_2 ∧ S_1 | ¬o) p(¬o)]                   (Bayes' rule on numerator and denominator)
  = [p(S_2 | o) p(S_1 | o) p(o)] / [p(S_2 | ¬o) p(S_1 | ¬o) p(¬o)]        (conditional independence of S_1 and S_2)
  = [p(S_2 | o) p(o | S_1)] / [p(S_2 | ¬o) p(¬o | S_1)]                   (Bayes' rule again)
  = [p(S_2 | o) / p(S_2 | ¬o)] · odds(o | S_1)
So the update step multiplies the previous odds by a precomputed weight, the sensor-model ratio p(S_2 | o) / p(S_2 | ¬o). The key to making accurate maps is combining lots of data.
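
A minimal sketch of this single-cell odds update in Python; the function name and the likelihood values in the example call are illustrative, not from the slides.

```python
def update_cell_odds(prior_odds, p_s_given_occ, p_s_given_free):
    """Multiply the previous odds of occupancy by the sensor-model weight.

    prior_odds      -- odds(o | S_1..i-1) for this cell
    p_s_given_occ   -- p(S_i | o): probability of this reading if the cell is occupied
    p_s_given_free  -- p(S_i | not o): probability of this reading if the cell is free
    """
    return prior_odds * (p_s_given_occ / p_s_given_free)

# Example with made-up numbers: a reading that is twice as likely when the
# cell is occupied doubles the odds.
odds = 1.0                       # even odds, i.e. no information yet
odds = update_cell_odds(odds, p_s_given_occ=0.6, p_s_given_free=0.3)
print(odds)                      # 2.0
```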

Slide 9: Mapping Using Evidence Grids
Evidence grids represent space as a collection of cells, each holding the odds (or probability) that it contains an obstacle, with evidence = log2(odds).
[Figure: lab environment map. Darker areas carry higher evidence of obstacles, lighter areas lower evidence; intermediate shades are uncertain (not sure / likely obstacle / likely free space).]
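
A sketch of such an evidence grid in log-odds form, assuming each sonar reading has already been attributed to a single cell (a real sonar model spreads one reading over an arc, as on slide 4); the grid size and sensor weights below are made up.

```python
import math

class EvidenceGrid:
    """Occupancy evidence grid: each cell stores evidence = log2(odds of occupancy)."""

    def __init__(self, width, height):
        # 0.0 evidence means odds of 1:1, i.e. no information.
        self.evidence = [[0.0 for _ in range(width)] for _ in range(height)]

    def update(self, x, y, p_s_given_occ, p_s_given_free):
        """Fold one sonar reading into cell (x, y).

        Multiplying odds by p(S|o)/p(S|not o) becomes an addition in log space.
        """
        self.evidence[y][x] += math.log2(p_s_given_occ / p_s_given_free)

    def p_occupied(self, x, y):
        """Convert the cell's evidence back to a probability of occupancy."""
        odds = 2.0 ** self.evidence[y][x]
        return odds / (1.0 + odds)

grid = EvidenceGrid(width=10, height=10)
grid.update(3, 4, p_s_given_occ=0.6, p_s_given_free=0.3)   # reading suggests an obstacle
grid.update(3, 4, p_s_given_occ=0.6, p_s_given_free=0.3)   # a second, consistent reading
print(grid.p_occupied(3, 4))                               # 0.8
```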

Slide 10: Mobot System Overview (abstraction levels, low to high)
- Motor modeling: what voltage should I set now?
- Control (PID): what voltage should I set over time?
- Kinematics: if I move this motor somehow, what happens in other coordinate systems?
- Motion planning: given a known world and a cooperative mechanism, how do I get there from here?
- Bug algorithms: given an unknowable world but a known goal and local sensing, how can I get there from here?
- Mapping: given sensors, how do I create a useful map?
- Localization: given sensors and a map, where am I?
- Vision: if my sensors are eyes, what do I do?

Slide 11: Content
- Brief Review (Robot Mapping)
- A Taste of the Localization Problem

Slide 12: What's the problem? WHERE AM I?
But what does this mean, really?
The reference frame is important:
- Local/relative: where am I with respect to where I was?
- Global/absolute: where am I relative to the world frame?
Location can be specified in two ways:
- Geometric: distances and angles
- Topological: connections among landmarks

Slide 13: Localization - Absolute
- Proximity to reference: landmarks/beacons
- Angle to reference: visual (manual triangulation from physical points)
- Distance from reference:
  - Time of flight: RF (e.g., GPS), acoustic
  - Signal fading: EM (e.g., Bird/3Space tracker), RF, acoustic

Slide 14: Triangulation
Used on land and at sea: lines of sight from known landmarks fix a unique target position.
Works great, as long as there are reference points!

Slide 15: Compass Triangulation
The same idea, with a compass providing an absolute north reference for each line of sight: cutting-edge 12th-century technology.
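
A minimal sketch of two-bearing triangulation in Python, assuming the bearings are expressed as standard math angles (counterclockwise from the +x axis) toward two landmarks at known map positions; converting compass bearings (clockwise from north) is a separate step, and the landmark coordinates below are made up.

```python
import math

def triangulate(landmark_a, bearing_a, landmark_b, bearing_b):
    """Locate the observer from absolute bearings to two known landmarks.

    landmark_a, landmark_b -- (x, y) map positions of the landmarks
    bearing_a, bearing_b   -- absolute bearings (radians) from the observer to each landmark
    Returns the observer's (x, y), or None if the sight lines are parallel.
    """
    ax, ay = landmark_a
    bx, by = landmark_b
    # Observer P satisfies  P + t_a*(cos a, sin a) = A  and  P + t_b*(cos b, sin b) = B.
    ca, sa = math.cos(bearing_a), math.sin(bearing_a)
    cb, sb = math.cos(bearing_b), math.sin(bearing_b)
    # Subtracting gives  t_a*(ca, sa) - t_b*(cb, sb) = A - B; solve for t_a by Cramer's rule.
    det = -ca * sb + cb * sa
    if abs(det) < 1e-12:
        return None
    dx, dy = ax - bx, ay - by
    t_a = (-dx * sb + dy * cb) / det
    return (ax - t_a * ca, ay - t_a * sa)

# Example: an observer at the origin sees one landmark at (10, 0) along bearing 0
# and another at (0, 5) along bearing pi/2.
print(triangulate((10, 0), 0.0, (0, 5), math.pi / 2))   # approximately (0.0, 0.0)
```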

Slide 16: Localization - Relative
If you know your speed and direction, you can calculate where you are relative to where you were (by integrating).
Speed and direction may themselves be measured absolutely (compass, speedometer) or obtained by integration (gyroscope, accelerometer).
Relative measurements are usually more accurate in the short term, but suffer from accumulated error in the long term.
Most robotics work seems to focus on this.
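
A sketch of this integration (dead reckoning) for a planar robot, assuming speed and heading samples arrive at a fixed time step and using simple Euler integration; the sample data are made up.

```python
import math

def dead_reckon(x, y, samples, dt):
    """Integrate (speed, heading) samples to track pose relative to the start.

    samples -- iterable of (speed, heading) pairs; heading in radians, speed in m/s
    dt      -- time step between samples in seconds
    Sensor error accumulates without bound, which is why purely relative
    localization drifts over the long term.
    """
    for speed, heading in samples:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# One second of driving at 1 m/s: half straight ahead, half after turning 90 degrees.
samples = [(1.0, 0.0)] * 5 + [(1.0, math.pi / 2)] * 5
print(dead_reckon(0.0, 0.0, samples, dt=0.1))   # approximately (0.5, 0.5)
```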

Slide 17: Localization Methods
- Markov localization: represent the robot's belief by a probability distribution over possible positions, and use Bayes' rule and convolution to update the belief whenever the robot senses or moves
- Monte Carlo methods
- Kalman filtering
- SLAM (simultaneous localization and mapping)
- ...

Slide 18: Environment Representation
- Continuous metric: x, y, θ
- Discrete metric: metric grid
- Discrete topological: topological grid
[Figure: a real environment shown as a continuous metric map, a metric grid, and a topological graph]

Slide 19: Environment Representation
- Continuous metric: (x, y, θ)
- Topological: landmark-based; the state space is organized according to the topological structure of the environment
- Grid-based: the world is divided into cells of fixed size; the resolution and precision of the state estimate are fixed beforehand, so this representation suffers from computational overhead

Slide 20: Probability Review - Discrete Random Variables
- X denotes a random variable.
- X can take on a countable number of values in {x_1, x_2, ..., x_n}.
- P(X = x_i), or P(x_i), is the probability that the random variable X takes on value x_i.
- P(·) is called the probability mass function.

Slide 21: Probability Review - Continuous Random Variables
- X takes on values in the continuum.
- p(X = x), or p(x), is a probability density function.
[Figure: an example density p(x) plotted over x]

Slide 22: Probability Review - Joint and Conditional Probability
- P(X = x and Y = y) = P(x, y)
- If X and Y are independent, then P(x, y) = P(x) P(y)
- P(x | y) is the probability of x given y: P(x | y) = P(x, y) / P(y), equivalently P(x, y) = P(x | y) P(y)
- If X and Y are independent, then P(x | y) = P(x)
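
A quick numeric check of these identities in Python, using a made-up joint distribution over two binary variables.

```python
# Made-up joint distribution P(x, y) over binary X and Y.
P_xy = {(0, 0): 0.25, (0, 1): 0.125, (1, 0): 0.25, (1, 1): 0.375}

# Marginals by summing out the other variable.
P_x = {x: sum(p for (xi, _), p in P_xy.items() if xi == x) for x in (0, 1)}
P_y = {y: sum(p for (_, yi), p in P_xy.items() if yi == y) for y in (0, 1)}
print(P_x, P_y)                              # {0: 0.375, 1: 0.625} {0: 0.5, 1: 0.5}

# Conditional probability: P(x | y) = P(x, y) / P(y).
P_x_given_y1 = {x: P_xy[(x, 1)] / P_y[1] for x in (0, 1)}
print(P_x_given_y1)                          # {0: 0.25, 1: 0.75}

# Product rule: P(x, y) = P(x | y) P(y).
print(P_x_given_y1[1] * P_y[1], P_xy[(1, 1)])   # both 0.375
```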

Slide 23: Law of Total Probability, Marginals
Discrete case: P(x) = Σ_y P(x, y) = Σ_y P(x | y) P(y)
Continuous case: p(x) = ∫ p(x, y) dy = ∫ p(x | y) p(y) dy

Slide 24: Probability Review - Law of Total Probability
P(x) = Σ_y P(x | y) P(y)

Slide 25: Conditional Independence
P(x, y | z) = P(x | z) P(y | z)
is equivalent to
P(x | z) = P(x | y, z)   and   P(y | z) = P(y | x, z)

Slide 26: Bayes Formula
P(x | y) = P(y | x) P(x) / P(y)
- P(x | y): posterior probability distribution
- P(x): prior probability distribution
- P(y | x): if y is a new sensor reading, this is the generative model, describing the characteristics of the sensor
- P(y): does not depend on x (a normalizing constant)
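
A small numeric illustration of this update in Python, with made-up numbers: a prior belief that a door is open, and a sensor whose likelihoods P(z | open) and P(z | ¬open) are assumed for the example.

```python
# Made-up prior and sensor model for a door-state example.
p_open = 0.5                      # prior P(open)
p_z_given_open = 0.6              # P(z = "sense open" | door open)
p_z_given_not_open = 0.3          # P(z = "sense open" | door closed)

# Bayes formula: P(open | z) = P(z | open) P(open) / P(z),
# where P(z) = P(z | open) P(open) + P(z | not open) P(not open).
p_z = p_z_given_open * p_open + p_z_given_not_open * (1.0 - p_open)
p_open_given_z = p_z_given_open * p_open / p_z
print(p_open_given_z)             # approximately 0.67: the reading raises our belief
```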

Slide 27: Bayes Rule with Background Knowledge
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

Slide 28: Markov Localization
What is Markov localization?
- A special case of probabilistic state estimation, applied to mobile robot localization
- Initial hypothesis: a static environment
  - Markov assumption: the robot's location is the only state in the environment which systematically affects sensor readings
- Further hypothesis: a dynamic environment

Slide 29: Markov Localization
Applying probability theory to robot localization: Markov localization uses an explicit, discrete representation for the probability of all positions in the state space.
This is usually done by representing the environment with a grid or a topological graph having a finite number of possible states (positions).
During each update, the probability of every state (element) of the entire space is updated.

Slide 30: Markov Localization
- Instead of maintaining a single hypothesis as to where the robot is, Markov localization maintains a probability distribution over the space of all such hypotheses
- It uses a fine-grained, metric discretization of the state space

Slide 31: Example
- Assume the robot's position is one-dimensional.
- The robot is placed somewhere in the environment, but it is not told its location.
- The robot queries its sensors and finds out it is next to a door.

Slide 32: Example (continued)
- The robot moves one meter forward. To account for the inherent noise in robot motion, the new belief is smoother (more spread out).
- The robot queries its sensors again and once more finds itself next to a door.
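
A compact sketch of this 1D example as grid-based Markov localization in Python. The corridor layout (which cells are next to doors), the sensor accuracy, and the motion-noise spread are all assumptions made for illustration.

```python
# Corridor of 10 one-meter cells; True marks cells that are next to a door (assumed layout).
doors = [False, True, False, False, True, False, False, False, True, False]

# Uniform initial belief: the robot is not told where it is.
belief = [1.0 / len(doors)] * len(doors)

def sense(belief, saw_door, p_hit=0.8, p_miss=0.2):
    """Measurement update: weight each cell by how well it explains the reading, then normalize."""
    new = [b * (p_hit if doors[i] == saw_door else p_miss) for i, b in enumerate(belief)]
    total = sum(new)
    return [b / total for b in new]

def move(belief, p_exact=0.8, p_under=0.1, p_over=0.1):
    """Prediction update for 'move one cell forward' with noisy motion (corridor wraps around)."""
    n = len(belief)
    return [p_exact * belief[(i - 1) % n] +     # moved exactly one cell
            p_under * belief[i] +               # undershot: stayed put
            p_over * belief[(i - 2) % n]        # overshot: moved two cells
            for i in range(n)]

belief = sense(belief, saw_door=True)    # belief concentrates on the three door cells
belief = move(belief)                    # belief shifts forward and gets smoother
belief = sense(belief, saw_door=True)    # second door reading sharpens the belief again
print([round(b, 3) for b in belief])
```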

Slide 33: Basic Notation
Bel(L_t = l) is the probability (density) that the robot assigns to the possibility that its location at time t is l.
The belief is updated in response to two different types of events: sensor readings and odometry data.

Slide 34: Notation
Goal: compute the belief conditioned on all of the data gathered so far,
Bel(L_t = l) = P(L_t = l | sensor readings and odometry data up to time t).

Slide 35: Markov Assumption (or Static World Assumption)
Given the robot's current location, future sensor readings are independent of past data: the current state carries all the information needed about the past.

Slide 36: Markov Localization
The belief is updated in two steps:
- Measurement (sensing z): Bel(l) ← η P(z | l) Bel(l), where η is a normalizing constant
- Action (moving with odometry u): Bel(l) ← Σ_l' P(l | u, l') Bel(l')

Slide 37: Measurement - Update Phase
[Figure: three panels labeled (a), (b), and (c) illustrating the measurement update]

Slide 38: Measurement - Update Phase (continued)

Slide 39: Recursive Bayesian Updating
P(x | z_1, ..., z_n) = P(z_n | x, z_1, ..., z_n-1) P(x | z_1, ..., z_n-1) / P(z_n | z_1, ..., z_n-1)
Markov assumption: z_n is independent of z_1, ..., z_n-1 if we know x, so
P(x | z_1, ..., z_n) = η P(z_n | x) P(x | z_1, ..., z_n-1)
where η is a normalizing constant.
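
A sketch of this recursive update over a discrete set of hypotheses in Python: each new reading multiplies the running posterior by its likelihood and renormalizes. The three-state example and its likelihood table are made up.

```python
def bayes_update(posterior, likelihood):
    """One recursive step: P(x | z_1..n) is proportional to P(z_n | x) * P(x | z_1..n-1)."""
    unnormalized = {x: likelihood[x] * p for x, p in posterior.items()}
    eta = 1.0 / sum(unnormalized.values())
    return {x: eta * p for x, p in unnormalized.items()}

# Three candidate states with a uniform prior (made-up example).
posterior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}

# Likelihood P(z | x) of the observed reading under each state, for two successive readings.
for likelihood in ({"A": 0.7, "B": 0.2, "C": 0.1},
                   {"A": 0.6, "B": 0.3, "C": 0.1}):
    posterior = bayes_update(posterior, likelihood)

print(posterior)   # state "A" dominates after two consistent readings
```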

Slide 40: Action - Prediction Phase
Examples of actions:
- The robot turns its wheels to move
- The robot uses its manipulator to grasp an object
- Plants grow over time...
Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase uncertainty. How can we incorporate such actions?

Slide 41: Modeling Actions
To incorporate the outcome of an action u into the current belief, we use the conditional pdf P(x | u, x').
This term specifies the probability that executing u changes the state from x' to x.

Slide 42: Integrating the Outcome of Actions
Continuous case: P(x | u) = ∫ P(x | u, x') P(x') dx'
Discrete case: P(x | u) = Σ_x' P(x | u, x') P(x')

Slide 43: Example - Closing the Door
[Figure: the door-closing scenario]

Slide 44: State Transitions
P(x | u, x') for u = "close door":
- If the door is open, the action "close door" succeeds in 90% of all cases.
Belief before the action: P(open) = 5/8, P(closed) = 3/8.

Slide 45: Example - The Resulting Belief
Applying the discrete formula P(x | u) = Σ_x' P(x | u, x') P(x') to the "close door" action yields the updated belief (a worked version follows below).
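
A worked version of this prediction step in Python, using the numbers from the state-transition slide and assuming (as in the standard version of this example) that a door which is already closed stays closed under the "close door" action.

```python
# Belief before the action (from the state-transition slide).
bel = {"open": 5 / 8, "closed": 3 / 8}

# Transition model P(x | u = "close door", x'); the closed-stays-closed row is an assumption.
p_transition = {
    ("closed", "open"): 0.9,    # closing an open door succeeds 90% of the time
    ("open", "open"): 0.1,      # ...and fails 10% of the time
    ("closed", "closed"): 1.0,  # an already-closed door stays closed (assumed)
    ("open", "closed"): 0.0,
}

# Discrete prediction step: P(x | u) = sum over x' of P(x | u, x') * P(x').
predicted = {x: sum(p_transition[(x, x_prev)] * bel[x_prev] for x_prev in bel)
             for x in bel}
print(predicted)   # {'open': 0.0625, 'closed': 0.9375}, i.e. 1/16 and 15/16
```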

Slide 46: Summary
- Measurement update: Bel(x) ← η P(z | x) Bel(x)
- Action (prediction) update: Bel(x) ← Σ_x' P(x | u, x') Bel(x')

Slide 47: Bayes Filters - Framework
Given:
- A stream of observations z and action data u: d_t = {u_1, z_1, ..., u_t, z_t}
- Sensor model P(z | x)
- Action model P(x | u, x')
- Prior probability of the system state P(x)
Wanted:
- An estimate of the state X of the dynamical system
- The posterior of the state, also called the belief: Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)

Slide 48: Markov Assumption
Underlying assumptions: a static world, independent noise, and a perfect model (no approximation errors).
- State transition probability: p(x_t | x_0:t-1, z_1:t-1, u_1:t) = p(x_t | x_t-1, u_t)
- Measurement probability: p(z_t | x_0:t, z_1:t-1, u_1:t) = p(z_t | x_t)
Markov assumption: past and future data are independent if one knows the current state.

Slide 49: Bayes Filters (z = observation, u = action, x = state)
Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)
  = η P(z_t | x_t, u_1, z_1, ..., u_t) P(x_t | u_1, z_1, ..., u_t)                               (Bayes)
  = η P(z_t | x_t) P(x_t | u_1, z_1, ..., u_t)                                                   (Markov)
  = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, ..., u_t, x_t-1) P(x_t-1 | u_1, z_1, ..., u_t) dx_t-1     (total probability)
  = η P(z_t | x_t) ∫ P(x_t | u_t, x_t-1) Bel(x_t-1) dx_t-1                                       (Markov)
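
A sketch of the resulting algorithm as a discrete Bayes filter in Python, where the integral becomes a sum over a finite state set. The state space, motion model, and sensor model passed in are whatever the application supplies; the two-state usage example reuses the made-up door numbers and is purely illustrative.

```python
def bayes_filter_step(belief, u, z, states, p_trans, p_sense):
    """One Bayes filter iteration: predict with action u, then correct with observation z.

    belief   -- dict mapping state -> Bel(x_{t-1})
    p_trans  -- p_trans(x, u, x_prev) = P(x_t = x | u_t = u, x_{t-1} = x_prev)
    p_sense  -- p_sense(z, x) = P(z_t = z | x_t = x)
    """
    # Prediction: integrate (here: sum) the motion model over the previous belief.
    predicted = {x: sum(p_trans(x, u, xp) * belief[xp] for xp in states) for x in states}
    # Correction: weight by the measurement likelihood and normalize.
    unnormalized = {x: p_sense(z, x) * predicted[x] for x in states}
    eta = 1.0 / sum(unnormalized.values())
    return {x: eta * p for x, p in unnormalized.items()}

# Tiny usage example with a made-up two-state system.
states = ("open", "closed")
belief = {"open": 0.5, "closed": 0.5}
p_trans = lambda x, u, xp: {("closed", "open"): 0.9, ("open", "open"): 0.1,
                            ("closed", "closed"): 1.0, ("open", "closed"): 0.0}[(x, xp)]
p_sense = lambda z, x: 0.6 if z == x else 0.4
belief = bayes_filter_step(belief, u="close door", z="closed", states=states,
                           p_trans=p_trans, p_sense=p_sense)
print(belief)   # belief shifts strongly toward "closed"
```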

Slide 50: Bayes Filters Are a Family
- Bayes rule allows us to compute probabilities that are hard to assess otherwise.
- Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
- Bayes filters are a probabilistic tool for estimating the state of dynamic systems.

Slide 51: Thank you!
Next Monday: Final Exam
Time: 6:30pm-9:00pm
Coverage: mobile robots
Closed-book, with a one-page cheat sheet allowed, but do not cheat.

