A Taste of Robot Localization Course Summary

Presentation transcript:

Introduction to ROBOTICS: A Taste of Robot Localization and Course Summary. Dr. John (Jizhong) Xiao, Department of Electrical Engineering, City College of New York. jxiao@ccny.cuny.edu

Topics Brief Review (Robot Mapping) A Taste of Localization Problem Course Summary

Mapping/Localization: answering robotics' big questions. How do we get a map of an environment with imperfect sensors (mapping)? How can a robot tell where it is on a map (localization)? This is ongoing research and among the most difficult tasks for a robot. Even humans get lost in a building!

Review: Use Sonar to Create a Map. What should we conclude if this sonar reads 10 feet? Along the beam closer than 10 feet, there isn't something here (unoccupied); at around 10 feet, there is something somewhere around here (occupied); beyond the reading we have no information. These labels make up the local map.

What is it a map of? What information should this map contain, given that it is created with sonar? Several answers to this question have been tried. Pre '83: it's a map of occupied cells, with each cell (x,y) either occupied (o_xy) or unoccupied (¬o_xy). This was the approach taken by the Stanford Cart.

What is it a map of? Several answers to this question have been tried.
Pre '83: a map of occupied cells, o_xy (cell (x,y) is occupied) vs. ¬o_xy (cell (x,y) is unoccupied).
'83 - '88: a map of probabilities, p( o | S1..i ), the certainty that a cell is occupied given the sensor readings S1, S2, ..., Si, and p( ¬o | S1..i ), the certainty that it is unoccupied given those readings.
It's a map of odds: the odds of an event are expressed relative to the complement of that event, so the odds that a cell is occupied given the sensor readings S1, S2, ..., Si are odds( o | S1..i ) = p( o | S1..i ) / p( ¬o | S1..i ), and evidence = log2(odds).

Combining Evidence. The key to making accurate maps is combining lots of data. So, how do we combine evidence to create a map? What we want: odds( o | S2 ∧ S1 ), the new value of a cell in the map after the sonar reading S2. What we know: odds( o | S1 ), the old value of a cell in the map (before sonar reading S2), and p( Si | o ) & p( Si | ¬o ), the probabilities that a certain obstacle causes the sonar reading Si.

Combining Evidence. The key to making accurate maps is combining lots of data.
odds( o | S2 ∧ S1 ) = p( o | S2 ∧ S1 ) / p( ¬o | S2 ∧ S1 )                                        (definition of odds)
                    = [ p( S2 ∧ S1 | o ) p( o ) ] / [ p( S2 ∧ S1 | ¬o ) p( ¬o ) ]                  (Bayes' rule, in numerator and denominator)
                    = [ p( S2 | o ) p( S1 | o ) p( o ) ] / [ p( S2 | ¬o ) p( S1 | ¬o ) p( ¬o ) ]   (conditional independence of S1 and S2)
                    = [ p( S2 | o ) / p( S2 | ¬o ) ] · odds( o | S1 )                              (Bayes' rule again, folding the S1 terms back into odds)
Here p( S2 | o ) / p( S2 | ¬o ) comes from precomputed values (the sensor model), and odds( o | S1 ) is the previous odds. Update step = multiplying the previous odds by a precomputed weight.

Mapping Using Evidence Grids. Represent space as a collection of cells, each with the odds (or probability) that it contains an obstacle; evidence = log2(odds). In the lab-environment map, lighter areas carry lower evidence of obstacles being present (likely free space), darker areas carry higher evidence (likely obstacle), and intermediate areas are not sure.
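Because evidence = log2(odds) and each update multiplies the previous odds by a precomputed weight, fusing a new reading into a cell is just an addition in log-odds space. Below is a minimal sketch of that per-cell update; the sensor-model numbers (P_S_GIVEN_OCC, P_S_GIVEN_EMP) are illustrative placeholders, not values from the lecture.

```python
import math

# Hypothetical sensor model: likelihood of a reading given the cell is occupied / empty.
P_S_GIVEN_OCC = 0.6
P_S_GIVEN_EMP = 0.2

def update_cell(evidence, hit):
    """One evidence-grid update for a single cell.

    evidence : current log2-odds that the cell is occupied
    hit      : True if the sonar reading suggests an obstacle in this cell,
               False if it suggests free space
    Because odds multiply, evidence simply adds.
    """
    if hit:
        weight = P_S_GIVEN_OCC / P_S_GIVEN_EMP            # p(S | o) / p(S | not o)
    else:
        weight = (1 - P_S_GIVEN_OCC) / (1 - P_S_GIVEN_EMP)
    return evidence + math.log2(weight)

def occupancy_probability(evidence):
    """Convert log2-odds back to p(o | S_1..i)."""
    odds = 2.0 ** evidence
    return odds / (1.0 + odds)

# Example: start with no information (odds = 1, evidence = 0),
# then fuse two "obstacle" readings and one "free" reading.
e = 0.0
for hit in (True, True, False):
    e = update_cell(e, hit)
print(round(occupancy_probability(e), 3))
```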

Mobot System Overview, from low-level to high-level abstraction:
Motor modeling: what voltage should I set now?
Control (PID): what voltage should I set over time?
Kinematics: if I move this motor somehow, what happens in other coordinate systems?
Motion planning: given a known world and a cooperative mechanism, how do I get there from here?
Bug algorithms: given an unknowable world but a known goal and local sensing, how can I get there from here?
Mapping: given sensors, how do I create a useful map?
Localization: given sensors and a map, where am I?
Vision: if my sensors are eyes, what do I do?

Content Brief Review (Robot Mapping) A Taste of Localization Problem Course Summary

What’s the problem? WHERE AM I? But what does this mean, really? Frame of reference is important Local/Relative: Where am I vs. where I was? Global/Absolute: Where am I relative to the world frame? Location can be specified in two ways Geometric: Distances and angles Topological: Connections among landmarks

Localization: Absolute. Proximity-to-reference: landmarks/beacons. Angle-to-reference: visual (manual triangulation from physical points). Distance-from-reference: time of flight (RF, e.g. GPS; acoustic) or signal fading (EM, e.g. Bird/3Space Tracker; RF).

Triangulation. From known landmarks on land, lines of sight intersect at a unique target at sea. Works great -- as long as there are reference points!

Compass Triangulation: cutting-edge 12th-century technology. With a compass providing a north reference, lines of sight to known landmarks again intersect at the unique target at sea.
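As a rough illustration of how two lines of sight pin down a unique position, here is a small sketch that intersects compass bearings (angles from north) toward two known landmarks. The landmark coordinates and the helper name locate are made up for this example.

```python
import math
import numpy as np

def locate(l1, b1, l2, b2):
    """Intersect the two lines of sight to recover the unique position.

    l1, l2 : known landmark positions
    b1, b2 : bearings from the robot toward each landmark, measured from north (+y)
    """
    d1 = np.array([math.sin(b1), math.cos(b1)])   # unit direction robot -> landmark 1
    d2 = np.array([math.sin(b2), math.cos(b2)])   # unit direction robot -> landmark 2
    # The robot position p satisfies p = l1 - t1*d1 = l2 - t2*d2 for some t1, t2 > 0.
    A = np.column_stack((-d1, d2))
    t1, _ = np.linalg.solve(A, np.asarray(l2, float) - np.asarray(l1, float))
    return np.asarray(l1, float) - t1 * d1

# Sanity check: generate bearings from a known position, then recover it.
true_pos = np.array([3.0, 4.0])
landmarks = [np.array([0.0, 10.0]), np.array([10.0, 4.0])]
bearings = [math.atan2(l[0] - true_pos[0], l[1] - true_pos[1]) for l in landmarks]
print(np.round(locate(landmarks[0], bearings[0], landmarks[1], bearings[1]), 2))
```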

Localization: Relative. If you know your speed and direction, you can calculate where you are relative to where you were (by integrating). Speed and direction might themselves be measured absolutely (compass, speedometer) or integrated (gyroscope, accelerometer). Relative measurements are usually more accurate in the short term but suffer from accumulated error in the long term. Most robotics work seems to focus on this.

Localization Methods. Markov Localization: represent the robot's belief by a probability distribution over possible positions and use Bayes' rule and convolution to update the belief whenever the robot senses or moves. Other methods: Monte-Carlo methods, Kalman filtering, SLAM (simultaneous localization and mapping), and more.

Markov Localization. What is Markov localization? A special case of probabilistic state estimation applied to mobile robot localization. Initial hypothesis: a static environment. Markov assumption: the robot's location is the only state in the environment which systematically affects sensor readings. Further hypothesis: a dynamic environment.

Markov Localization. Instead of maintaining a single hypothesis as to where the robot is, Markov localization maintains a probability distribution over the space of all such hypotheses. It uses a fine-grained and metric discretization of the state space.

Example. Assume the robot's position is one-dimensional. The robot is placed somewhere in the environment but is not told its location. The robot queries its sensors and finds out it is next to a door.

Example. The robot moves one meter forward. To account for inherent noise in robot motion, the new belief is smoother. The robot queries its sensors and again finds itself next to a door.

Basic Notation. Bel(L_t = l) is the probability (density) that the robot assigns to the possibility that its location at time t is l. The belief is updated in response to two different types of events: sensor readings and odometry data.

Notation Goal:

Markov assumption (or static world assumption)

Markov Localization

Update Phase

Update Phase

Prediction Phase

Summary

Markov Localization implementations: topological (landmark-based; the state space is organized according to the topological structure of the environment) and grid-based (the world is divided into cells of fixed size; resolution and precision of state estimation are fixed beforehand). The latter suffers from computational overhead.
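A minimal sketch of a grid-based Markov localization loop for the one-dimensional door example above: the update (sense) phase weights each location hypothesis by the sensor likelihood, and the prediction (move) phase shifts and blurs the belief with a convolution. The map, door positions, sensor model, and motion noise below are assumptions made for illustration.

```python
import numpy as np

N_CELLS = 100
doors = {10, 35, 40}              # hypothetical cells that are next to a door

# p(sensor says "door" | location): high near a door, low elsewhere.
p_door_given_loc = np.where(np.isin(np.arange(N_CELLS), list(doors)), 0.9, 0.1)

def sense(belief, saw_door):
    """Update phase: weight each hypothesis by the sensor likelihood, then normalize."""
    likelihood = p_door_given_loc if saw_door else (1.0 - p_door_given_loc)
    belief = belief * likelihood
    return belief / belief.sum()

def move(belief, cells, noise=0.1):
    """Prediction phase: shift the belief by the commanded motion and blur it
    (convolution) to account for odometry noise, which smooths the belief."""
    shifted = np.roll(belief, cells)
    kernel = np.array([noise, 1.0 - 2 * noise, noise])
    blurred = np.convolve(shifted, kernel, mode="same")
    return blurred / blurred.sum()

# Start with a uniform belief (the robot is not told its location),
# then: sense a door, move forward, sense a door again.
bel = np.full(N_CELLS, 1.0 / N_CELLS)
bel = sense(bel, saw_door=True)
bel = move(bel, cells=5)
bel = sense(bel, saw_door=True)
print("most likely cell:", int(np.argmax(bel)))
```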

Content Brief Review (Robot Mapping) A Taste of Localization Problem Course Summary

Mobile Robot

Mobile Robot Locomotion. Locomotion: the process of causing a robot to move. Drive types: differential drive, tricycle, synchronous drive, Ackerman steering, omni-directional (Swedish wheel).

Differential Drive. Property: at each time instant, the left and right wheels must follow a trajectory that moves around the ICC at the same angular rate; this yields the kinematic equation (Eq1) and the nonholonomic constraint (Eq2).

Differential Drive: Basic Motion Control. R: radius of rotation. Straight motion: R = infinity, VR = VL. Rotational motion (turning in place): R = 0, VR = -VL.
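For concreteness, here is a sketch of one step of the standard differential-drive model built on the ICC property stated above (equal wheel speeds give R = infinity and straight motion; opposite speeds give R = 0 and rotation in place). The function name and parameters are assumptions for illustration; the slide's Eq1/Eq2 are not reproduced.

```python
import math

def diff_drive_step(x, y, theta, v_l, v_r, axle_l, dt):
    """Advance a differential-drive pose by one time step using the ICC-based model.

    x, y, theta : current pose in the world frame
    v_l, v_r    : left/right wheel linear velocities
    axle_l      : distance between the two wheels
    dt          : time step
    """
    if math.isclose(v_l, v_r):                  # R = infinity: straight motion
        return x + v_l * math.cos(theta) * dt, y + v_l * math.sin(theta) * dt, theta

    omega = (v_r - v_l) / axle_l                       # angular rate about the ICC
    R = (axle_l / 2.0) * (v_r + v_l) / (v_r - v_l)     # signed radius of rotation
    icc_x = x - R * math.sin(theta)                    # instantaneous center of curvature
    icc_y = y + R * math.cos(theta)
    dtheta = omega * dt
    # Rotate the pose about the ICC by dtheta.
    nx = math.cos(dtheta) * (x - icc_x) - math.sin(dtheta) * (y - icc_y) + icc_x
    ny = math.sin(dtheta) * (x - icc_x) + math.cos(dtheta) * (y - icc_y) + icc_y
    return nx, ny, theta + dtheta

# Example: equal and opposite wheel speeds give rotation in place (R = 0).
print(diff_drive_step(0.0, 0.0, 0.0, v_l=-0.2, v_r=0.2, axle_l=0.5, dt=1.0))
```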

Tricycle. Steering and power are provided through the front wheel. Control variables: the angular velocity of the steering wheel ws(t) and the steering direction α(t). d: distance from the front wheel to the rear axle.

Tricycle: kinematics model in the world frame (posture kinematics model).

Synchronous Drive. All the wheels turn in unison: they all point in the same direction and turn at the same rate. Two independent motors: one rolls all wheels forward, the other rotates them for turning. Independent control variables: v(t), ω(t).

Ackerman Steering (Car Drive): the Ackerman steering equation, which relates the inner and outer front-wheel steering angles so that both share a common turning center at radius R.

Car-like Robot. Driving type: rear-wheel drive, front-wheel steering. The rear-wheel-drive car model uses the forward velocity of the rear wheels and the angular velocity of the steering wheel as inputs, subject to a non-holonomic constraint; l is the length between the front and rear wheels.
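The slide's symbols did not survive transcription, so as an illustration here is the common textbook form of the rear-wheel-drive, front-wheel-steered model with wheelbase l: the rear axle moves only along its heading (the non-holonomic, no-sideways-slip constraint), and the heading rate grows with the steering angle. The symbol names below are assumptions.

```python
import math

def car_step(x, y, theta, phi, v, omega_s, length, dt):
    """One Euler step of a rear-wheel-drive, front-wheel-steered car model.

    x, y, theta : pose of the rear axle in the world frame
    phi         : current steering angle of the front wheel
    v           : forward velocity of the rear wheels
    omega_s     : angular velocity of the steering wheel
    length      : distance l between front and rear axles
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / length) * math.tan(phi) * dt   # heading changes faster for sharper steering
    phi += omega_s * dt
    # The non-holonomic constraint is implicit: the rear axle can only move
    # along its heading, never perpendicular to it.
    return x, y, theta, phi

pose = (0.0, 0.0, 0.0, 0.3)          # start with a 0.3 rad steering angle
for _ in range(100):
    pose = car_step(*pose, v=1.0, omega_s=0.0, length=1.0, dt=0.05)
print(tuple(round(p, 2) for p in pose))
```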

Robot Sensing: collect information about the world. A sensor is an electrical/mechanical/chemical device that maps an environmental attribute to a quantitative measurement. Each sensor is based on a transduction principle: conversion of energy from one form to another. Sensors extend the ranges and modalities of human sensing.

Examples: resistive light sensor (CdS cell), gas sensor, accelerometer, gyro, metal detector, pendulum resistive tilt sensors, piezo bend sensor, Geiger-Muller radiation sensor, pyroelectric detector, UV detector, resistive bend sensors, digital infrared ranging, pressure switch, miniature Polaroid sensor, limit switch, touch switch, mechanical tilt sensors, IR pin diode, IR sensor with lens, thyristor, magnetic sensor, Polaroid sensor board, Hall effect magnetic field sensors, IR reflection sensor, magnetic reed switch, IR amplifier sensor, IrDA transceiver, IR modulator receiver, Lite-On IR remote receiver, Radio Shack remote receiver, solar cell, compass, piezo ultrasonic transducers.

Sensors Used in Robots. Resistive sensors: bend sensors, potentiometers, resistive photocells, ... Tactile sensors: contact switches, bumpers, ... Infrared sensors: reflective, proximity, distance sensors, ... Ultrasonic distance sensors. Motor encoders. Inertial sensors (measure the second derivatives of position): accelerometers, gyroscopes. Orientation sensors: compass, inclinometer. Laser range sensors. Vision, GPS, ...

Motion Planning. Path planning: find a path connecting an initial configuration to a goal configuration without collision with obstacles, working in the configuration space. Motion planning methods: roadmap approaches, cell decomposition, potential fields, bug algorithms.

Motion Planning Methodologies: roadmap, cell decomposition, potential field.
Roadmap (global method): from Cfree a graph, the roadmap, is defined; ways to obtain the roadmap include the visibility graph and the Voronoi diagram.
Cell decomposition (global method): the robot free space (Cfree) is decomposed into simple regions (cells); the path between two poses within a cell can be easily generated.
Potential field (local method): the robot is treated as a particle acting under the influence of a potential field U, where the attraction to the goal is modeled by an additive field and obstacles are avoided by acting with a repulsive force that yields a negative field.

Full-knowledge motion planning. Roadmaps: visibility graph, Voronoi diagram. Cell decompositions: exact (free space represented via convex polygons) and approximate (free space represented via a quadtree).

Potential Field Method. Usually assumes some knowledge at the global level: the goal is known; the obstacles are sensed. Each contributes forces, and the robot follows the resulting gradient.
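A minimal sketch of that idea: an attractive force pulls toward the known goal, each sensed obstacle adds a repulsive force within a finite range, and the robot follows the resulting gradient. The quadratic attractive well, bounded-range repulsive term, gains, and obstacle positions are all assumptions made for illustration.

```python
import numpy as np

GOAL = np.array([5.0, 5.0])
OBSTACLES = [np.array([2.5, 2.0]), np.array([3.0, 4.0])]
K_ATT, K_REP, RANGE = 1.0, 0.5, 1.5

def force(p):
    """Combined force (negative potential gradient) at point p."""
    f = K_ATT * (GOAL - p)                     # attraction: pull toward the goal
    for obs in OBSTACLES:
        diff = p - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < RANGE:                   # repulsion only within a finite range
            f += K_REP * (1.0 / d - 1.0 / RANGE) / d**2 * (diff / d)
    return f

# Follow the resulting gradient with small steps.
p = np.array([0.0, 0.0])
for _ in range(300):
    p = p + 0.05 * force(p)
    if np.linalg.norm(GOAL - p) < 0.05:
        break
print("final position:", np.round(p, 2))
```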

Thank you! Next week: Final Exam. Time: Dec. 13, 6:30pm-9:00pm. Place: T512. Coverage: mobile robots. Closed-book with a 1-page cheat sheet.