Slide 1
An appearance-based visual compass for mobile robots
Jürgen Sturm, University of Amsterdam, Informatics Institute
Slide 2
Overview
- Introduction to Mobile Robotics
- Background (RoboCup, Dutch Aibo Team)
- Approach
- Results
- Conclusions
Slide 3
Mobile robots
- SICO at Kosair Children's Hospital (Dometic, Louisville, Kentucky)
- Sony Aibos playing soccer (Cinekids, De Balie, Amsterdam)
- Robot cranes and trucks unloading ships (Port of Rotterdam)
- RC3000, the robocleaner (Kärcher)
Slide 4
Dutch Aibo Team, since 2004
- Universiteit van Amsterdam
- Universiteit Utrecht
- Technische Universiteit Delft
- Rijksuniversiteit Groningen
- Technische Universiteit Eindhoven
- Universiteit Twente, Saxion Universities at Enschede
- DECIS Lab
Slide 5
Challenge Application: Robot Soccer
Slide 6
Robot localization
Robot localization is the problem of estimating the robot's pose relative to a map of the environment. Probabilistic approaches deal with:
- Noise
- Ambiguity
- Uncertainty
Slide 7
Design
- Sensors: wheel sensors, GPS, laser scanner, camera, ...
- Feature space
- Map and belief representation: grid-based maps, topological graphs; single/multi-hypothesis trackers
- Filters: Kalman filter, Monte Carlo methods
Slide 8
Design of Classical Approaches
- Artificial environments: (electro-magnetic) guiding lines, (visual) landmarks
- Special sensors: GPS, laser range scanners, omni-directional cameras
- Computationally heavy: offline computation
Slide 9
Design of New Approach
- Natural environments: human environments, unstructured and unknown to the robot
- Normal sensors: camera
- Reasonable requirements: real-time, on-board
Slide 10
Platform: Sony Aibo
- Internal camera: 30 fps, 208x160 pixels
- Computer: 64-bit RISC processor, 567 MHz, 64 MB RAM, 16 MB Memory Stick, WLAN
- Actuators: legs (4 x 3 joints), head (3 joints)
Slide 11
Approach
Slide 12
Demo Video: Visual Compass
Slide 13
Approach - Synopsis
Slide 14
Localization Filter
[Block diagram: image data → raw image → color class image → sector-based feature extraction; the sensor model correlates the extracted features against the previously learned map to produce likelihoods. Motion data → motion model → estimated motion. The belief cycles prior → odometry-corrected → posterior.]
Slide 15
Sector-based feature extraction (1)
- Camera field of view: 50°
- Head field of view: 230°
Slide 16
Sector-based feature extraction (2)
For each sector:
- Count color class transitions in the vertical direction
- Compute relative transition frequencies
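The per-sector feature described above (count vertical color-class transitions, then normalize to relative frequencies) can be sketched as follows. This is a minimal illustration: the function name, the 2-D label-array input format, and the sector/class counts are assumptions, not details of the original implementation.

```python
import numpy as np

def sector_features(color_classes, n_sectors=8, n_classes=4):
    """Count vertical color-class transitions per image sector and
    return relative transition frequencies.

    color_classes: 2-D integer array (height x width) of per-pixel color
    class labels, as produced by an upstream color segmentation step
    (hypothetical input format).
    """
    h, w = color_classes.shape
    feats = np.zeros((n_sectors, n_classes, n_classes))
    sector_w = w // n_sectors
    for s in range(n_sectors):
        cols = color_classes[:, s * sector_w:(s + 1) * sector_w]
        upper, lower = cols[:-1, :], cols[1:, :]  # vertically adjacent pixel pairs
        mask = upper != lower                     # keep only actual transitions
        for i, j in zip(upper[mask], lower[mask]):
            feats[s, i, j] += 1
        total = feats[s].sum()
        if total > 0:
            feats[s] /= total                     # normalize to relative frequencies
    return feats
```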
Slide 17
Sensor model (1)
- Relative frequency of transitions from color class i to color class j in direction φ
- Frequency measurements originate from a probabilistic source (distribution)
- How to approximate these distributions?
Slide 18
Sensor model (2)
Approximate the source by a histogram distribution (its parameters constitute the map).
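Learning such a histogram map might look like the sketch below. The data layout (samples grouped per direction, for one transition pair) and the fixed [0, 1] bin range are illustrative assumptions; only the idea of storing normalized histograms as map parameters comes from the slide.

```python
import numpy as np

def learn_histogram_map(freq_samples, n_bins=10):
    """Build histogram distributions whose parameters constitute the map.

    freq_samples: dict mapping direction phi -> list of measured relative
        transition frequencies for one transition pair, gathered while
        the robot turns on the training spot (assumed data layout).
    Returns dict phi -> normalized histogram over [0, 1].
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    hist_map = {}
    for phi, samples in freq_samples.items():
        counts, _ = np.histogram(samples, bins=edges)
        hist_map[phi] = counts / counts.sum()  # normalize to a probability distribution
    return hist_map
```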
Slide 19
Sensor model (2)
- Likelihood that a single frequency measurement originated from direction φ
- Likelihood that a full feature vector (one sector) originated from direction φ
- Likelihood that a camera image (set of features) originated from direction φ
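The likelihood cascade above (single measurement → sector feature vector → whole image) can be illustrated for one sector. The `hist_map` array layout and the conditional-independence product over transition pairs are assumptions made for this sketch, not confirmed details of the original sensor model.

```python
import numpy as np

def sector_likelihoods(feature, hist_map, bin_edges):
    """Likelihood of one sector's feature vector for every map direction phi.

    hist_map: array (n_phi, n_trans, n_bins) of learned histogram
        probabilities per direction and transition pair (assumed layout).
    feature: array (n_trans,) of measured relative transition frequencies.
    Assumes measurements are conditionally independent given phi, so the
    joint likelihood is the product of per-transition bin probabilities.
    """
    # Which histogram bin each measured frequency falls into
    bins = np.clip(np.digitize(feature, bin_edges) - 1, 0, hist_map.shape[2] - 1)
    # p(z_k | phi) for each transition pair k and each direction phi
    per_trans = hist_map[:, np.arange(len(feature)), bins]  # shape (n_phi, n_trans)
    # Product in the log domain for numerical stability
    return np.exp(np.log(per_trans + 1e-9).sum(axis=1))
```

Combining several sectors into a whole-image likelihood would, under the same independence assumption, simply multiply the per-sector results.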
Slide 21
Localization filter: orientational component
- Use a Bayesian filter to update the robot's belief (circular grid buffer)
- From this buffer, extract per time step: a heading estimate and a variance estimate
[Filter cycle: prior → odometry-corrected → posterior]
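One cycle of a discrete Bayes filter over a circular heading buffer, as described above, might look like this. The simple diffusion kernel, the whole-bin odometry shift, and the function names are illustrative choices for the sketch, not details from the original system.

```python
import numpy as np

def bayes_update(belief, odom_bins, likelihoods, diffusion=0.05):
    """One predict/correct cycle over n circular heading bins.

    belief: (n,) prior over heading bins
    odom_bins: rotation since the last step, in whole bins (assumption:
        odometry is pre-quantized to the grid)
    likelihoods: (n,) sensor-model likelihood per bin
    """
    # Prediction: shift by odometry, blur slightly to model motion noise
    pred = np.roll(belief, odom_bins)
    pred = (1 - 2 * diffusion) * pred \
         + diffusion * np.roll(pred, 1) + diffusion * np.roll(pred, -1)
    # Correction: multiply by sensor likelihoods, renormalize
    post = pred * likelihoods
    post /= post.sum()
    # Heading estimate via the circular mean of the bin centers
    phi = 2 * np.pi * np.arange(len(post)) / len(post)
    heading = np.arctan2((post * np.sin(phi)).sum(),
                         (post * np.cos(phi)).sum()) % (2 * np.pi)
    return post, heading
```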
Slide 22
Results
Slide 23
Results: brightly illuminated living room
- Applicable in a natural indoor environment
- Good accuracy (error < 5°)
Slide 24
Results: daylight office environment
- Applicable in a natural office environment
- Very robust against displacement (error < 20° over 15 m)
Slide 25
Results: outdoor soccer field
- Applicable in a natural outdoor environment
Slide 26
Results: RoboLab, 4-Legged soccer field
- Applicable in the RoboCup soccer environment
Slide 27
Results: RoboLab, 4-Legged soccer field
- True average error < 10° on a grid of 3 x 3 m
Slide 28
Results: Variable and Parameter Studies
- Distance to training spot
- Changes in illumination
- Angular resolution
- Scanning grid coverage
- Number of color classes
Slide 29
Localization filter: translational component
- Use multiple training spots
- Each (projectively distorted) patch yields slightly different likelihoods
- Interpolate the translation from these likelihoods (visual homing)
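One simple way to realize the interpolation step is a likelihood-weighted average of the training-spot positions. This particular rule is an assumption for illustration; the slide only states that the translation is interpolated from the per-spot likelihoods.

```python
import numpy as np

def interpolate_position(spot_positions, spot_likelihoods):
    """Rough visual-homing sketch: estimate the robot's translation as a
    likelihood-weighted average of the training-spot positions.
    (Illustrative interpolation rule, not necessarily the scheme used in
    the original system.)

    spot_positions: (k, 2) array of (x, y) training-spot locations
    spot_likelihoods: (k,) peak compass likelihood achieved by each
        training spot's map for the current image
    """
    w = np.asarray(spot_likelihoods, dtype=float)
    w /= w.sum()  # normalize the weights
    return (w[:, None] * np.asarray(spot_positions, dtype=float)).sum(axis=0)
```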
Slide 30
Demo Video: Visual Homing
Slide 31
Results: Visual Homing
- Proof of concept
[Scatter plot of homing positions; axes x [cm] and y [cm], each ranging from -100 to 100]
Slide 32
Conclusions
Novel approach to localization:
- Works in unstructured environments
- Accurate, robust, efficient, and scalable
- An interesting approach for mobile robots
Slide 33
Future Research
- Use Monte Carlo localization
- Extend to dynamic environments
- Triangulation from two training spots
Announced follow-up projects:
- Port to RoboCup Rescue Simulation (MSc project)
- RoboCup 2007 Open Challenge (DOAS project)
Slide 34
Future Research (2)
Direct translational triangulation from two perspectives
Slide 35
3rd Prize, Technical Challenges of the 4-Legged League, RoboCup 2006 in Bremen
Slide 36
Questions?
Slide 37
PhD project
Motivation: sensor/motion models
- depend too much on a-priori information
- can change over time
- are possibly unknown to the designer
Slide 38
PhD project (2)
- Self-perception
- Self-calibration
- Body scheme acquisition
- Robot bootstrapping
- Imitation
Slide 39
PhD project (3)
Zora: body (3 DOF), manipulator (4 DOF)
Idea: learn the body scheme
Demo applications:
- Reach a point
- Touch an object
- Grasp a book
- Open a door
- Pour coffee
Slide 40
PhD project (4)
Dynamic Bayesian Network: action, state, observation, objective
- Use Gaussian processes (GPs) to approximate dynamic behavior
- Use a mixture of Gaussians to represent beliefs
- Action inference to reach the objective goal
Slide 41
Comments are welcome