Biologically-inspired Visual Landmark Navigation for Mobile Robots


Biologically-inspired Visual Landmark Navigation for Mobile Robots
Recent research: collaborative work with Bianco

Mobile Robot Navigation
Robot navigation relies on answering three questions:
- Where am I?
- Where are other places relative to me?
- How do I get to other places from here?
Possible answers: classical robotic techniques, or a new trend, biologically-inspired methods.

Biological Inspiration
Animals and insects are proficient in visual navigation (Papi 1992). The use of natural visual landmarks by insects for navigation has been well documented (Wehner 1992), and strategies for the selection of natural landmarks by insects have been reported (Lehrer 1993, Zeil 1993). Many models have been introduced without formal methods (Trullier et al. 1997).

Related Issues
Navigation can be considered as a four-level hierarchy:
- guidance
- place recognition-triggered response
- topological navigation
- metric navigation
We perform guidance: the agent is guided by a spatial distribution of landmarks; we do not use maps, and we do not know our position in reference co-ordinates.

Acquiring Visual Landmarks
"A landmark must be reliable, and landmarks which appear to be appropriate for human beings are not necessarily appropriate for robots because of the different sensor and matching apparatus." (Mataric 1990, Thrun 1996)
If we can establish what is meant by reliability for given sensors and matching schemes, then the problem of landmark selection is automatically solved. Reliability depends on the sensor and the matching scheme; here we use a Sony NTSC camera and a Fujitsu Tracking Vision card (TRV). Landmarks are based on the image correlation concept.

Definition of a Landmark
A landmark is a region within a whole image. The TRV performs 16 x 16 SAD correlation between a template and the frame, generating a correlation matrix. [Figure: a template of 16*mx by 16*my pixels centred at (ox, oy), with the search window spanning (ox - 8*mx, oy - 8*my) to (ox + 7*mx, oy + 7*my) in the frame.]
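The correlation step above can be sketched as follows. This is a minimal software version of what the TRV card does in hardware; the function name and the handling of window borders are illustrative assumptions, but the 16 x 16 grid of offsets and the magnification steps (mx, my) follow the slide.

```python
import numpy as np

def sad_correlation(template, frame, ox, oy, mx=1, my=1):
    """Slide a template over a search window centred at (ox, oy) and
    return the 16 x 16 SAD (sum of absolute differences) correlation
    matrix.  Offsets run from -8 to +7 in steps of (mx, my), matching
    the (ox - 8*mx, oy - 8*my) .. (ox + 7*mx, oy + 7*my) window."""
    th, tw = template.shape
    corr = np.full((16, 16), np.inf)  # inf marks positions off the frame
    for j, dy in enumerate(range(-8, 8)):
        for i, dx in enumerate(range(-8, 8)):
            x = ox + dx * mx
            y = oy + dy * my
            if x < 0 or y < 0:
                continue
            patch = frame[y:y + th, x:x + tw]
            if patch.shape == template.shape:
                corr[j, i] = np.abs(patch.astype(int)
                                    - template.astype(int)).sum()
    return corr
```

With this convention the zero-offset match sits at matrix index (8, 8), so a template cut directly from the frame yields a SAD of 0 there.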

Uniqueness of Landmarks Different landmarks have different correlation matrices

Reliability of Landmarks
We define the reliability r of a landmark in terms of g, the global minimum of the correlation matrix, and g', a local minimum found in the neighbourhood of g.

Selecting Landmarks
By maximising r, coupled with different template sizes, we can select unique landmarks. [Figure: selected landmarks for magnification sizes 3 & 4 and 5 & 6.] December 1999

Turn Back and Look
How "dynamically" reliable are our landmarks? Through a phase directly inspired by wasps and bees, the robustness of statically chosen landmarks is tested: the robot moves with stereotyped movements while the camera continuously points toward the goal.

Turn Back and Look
[Figure: two frames from a typical TBL phase; the numbers show the reliability factors of the landmarks.]

Turn Back and Look
The reliability r is constantly monitored for each landmark during TBL; only those landmarks whose r is above a threshold e are considered. TBL produces small perturbations (light, position, etc.), and this provides a framework for testing reliability under real navigation conditions. "Only strong individuals can survive through a selection phase" (Murray Gell-Mann, The Quark and the Jaguar, 1994).

Perturbations TBL produces small perturbations: light, perspective, size...

Landmark Navigation
The underlying principle is based on the model proposed by Cartwright and Collett to explain bee behaviour. It mimics the behaviour of a bee quite well, but a 2D extension is required. The key point for the extension: a landmark is attracted toward its original position and size.

Landmark Navigation
Displacement from the original position and size is suitable for extracting navigation information from landmarks. [Figure: a landmark's original position and size versus its new position and size.]

Landmark Navigation
For each landmark, the difference between its original and present positions, together with a weight for its size difference, defines the landmark attraction vector.

Landmark Navigation
By fusing all the landmark attraction vectors through weighted averaging, we obtain the final navigation vector. [Figure: a typical image input frame.]
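The fusion step can be sketched as below. The slides give neither the fusion weights nor how the size difference enters the vector, so this sketch makes two labelled assumptions: each landmark's reliability r is used as its weight, and the size-difference term pushes along the direction of the positional displacement.

```python
import numpy as np

def navigation_vector(landmarks):
    """Fuse per-landmark attraction vectors by weighted averaging.
    Each landmark is a dict with:
      d : 2-D displacement (original position minus current position)
      s : size difference (original size minus current size)
      r : reliability, used here as the fusion weight (assumption).
    Returns the final navigation vector."""
    num = np.zeros(2)
    den = 0.0
    for lm in landmarks:
        v = np.asarray(lm["d"], dtype=float)
        n = np.linalg.norm(v)
        if n > 0:
            # assumption: the size term acts along the displacement direction
            v = v + lm["s"] * v / n
        num += lm["r"] * v
        den += lm["r"]
    return num / den if den > 0 else num
```

A single landmark displaced by (2, 0) with no size change simply yields the vector (2, 0); unreliable landmarks (small r) contribute little to the average.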

Images from a Navigation Experiment

Landmark Navigation
A navigation vector field can be computed for each (x, y). [Figure: the computed navigation vector field over the 1020 cm x 720 cm environment; goal position marked.]

Visual Potential Field
There is evidence of a potential field when biologically-based navigation is considered (Voss 1995, Gaussier 1998). In this case, a potential function U(x,y) such that v = -grad U can drive the movements of the robot. A necessary and sufficient condition for U to exist is that the vector field is conservative, that is, dvx/dy = dvy/dx (equivalently, the curl of the field vanishes).

Computation of Partial Derivatives
The partial derivatives dvx/dy and dvy/dx are computed numerically.
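The numerical conservativeness test can be sketched as follows, using central differences (via `np.gradient`) to compare the two cross partial derivatives over a sampled field; the tolerance parameter is an illustrative assumption.

```python
import numpy as np

def is_conservative(vx, vy, h=1.0, tol=1e-6):
    """Test whether a sampled 2-D vector field (vx, vy) is conservative
    by comparing the cross partial derivatives dvx/dy and dvy/dx,
    computed with central differences on a grid of spacing h.
    Arrays are indexed [y, x], so axis 0 is y and axis 1 is x."""
    dvx_dy = np.gradient(vx, h, axis=0)
    dvy_dx = np.gradient(vy, h, axis=1)
    return np.max(np.abs(dvx_dy - dvy_dx)) < tol
```

A field derived from a potential, e.g. v = -grad(x^2 + y^2) = (-2x, -2y), passes this test; a rotational field such as (-y, x) fails it, and no potential U can be recovered from it.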

TBL affects Conservativeness
TBL affects the conservativeness according to different thresholds. [Figure: plots for threshold values e = 0 and e = 0.1.]

TBL affects Conservativeness
[Figure: plots for threshold values e = 0.2 and e = 0.25.] As e → 1 the vector field becomes conservative and computation of the potential field becomes possible.

Computation of the Potential Field
Different potential fields U can be generated for different values of e. [Figure: potential fields for e = 0 and e = 0.1.]

Computation of the Potential Field
[Figure: potential fields for e = 0.1 and e = 0.25.] As e → 1 the potential field becomes suitable to drive navigation.

Visual Potential Field
When a smaller template size is considered, the potential field basin has a different shape: deeper at the goal position, with a reduced basin of attraction. [Figure: example with size 4, e = 0.2.]

Experimentation
Equipment: Nomad 200 robot, Sony EVI-D30 camera, Fujitsu Colour Tracking Vision (TRV) card.
Basic navigation rule: if … then continue using the last navigation vector, else give the robot the currently computed navigation vector.
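The condition of the if-branch is elided in the transcript (it appeared as an image), so the sketch below assumes one plausible reading consistent with the TBL selection: when no landmark exceeds the reliability threshold e, the newly computed vector cannot be trusted and the robot keeps the last one. The function name and the choice of condition are assumptions.

```python
def next_command(v_new, v_last, r_best, e=0.2):
    """Basic navigation rule (sketch): keep the last navigation vector
    when the best landmark reliability r_best falls below the
    threshold e; otherwise use the currently computed vector."""
    if r_best < e:          # assumed condition; elided in the slides
        return v_last
    return v_new
```

So with e = 0.2, a frame whose best landmark scores r = 0.1 leaves the previous command in force, while r = 0.5 switches to the fresh navigation vector.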

The Environment
[Figure: the 1020 cm x 720 cm test environment, containing a cupboard (h = 210 cm), a table (h = 70 cm), a column, and the goal position.]

Experiment A
Template size = 6, threshold e = 0. [Figure: robot trajectories for runs 1-11 in the test environment, with labels G1, G4, G5, G8, G9, G10, G11.]

Experiment B
Template size = 6, threshold e = 0.2. [Figure: robot trajectories for runs 1-11 in the test environment, with labels G1, G2, G4, G5, G6, G7, G8, G9, G10, G11.]

Experiment C
Template size = 5, threshold e = 0.2. [Figure: robot trajectories for runs 1-11 in the test environment, with labels G1, G2, G4, G5, G6, G7, G8, G9, G10.]

Conclusions
Major results:
- self-selection of natural landmarks
- a landmark definition based on reliability
- a theory of visual potential: landmark navigation can be formalised as driven by a potential field
- invariants or transformations are not needed
Most importantly:
- TBL affects the conservativeness of the vector field: strong landmarks = conservativeness = potential field
- biologically-inspired navigation methods are effective