Vision for Mobile Robot Navigation
Jannes Eindhoven, 2-3-2010

Contents
  Introduction [2]
  Indoor navigation
    Map-based approaches [5]
    Map building [1]
    Mapless navigation [2]
  Outdoor navigation
    In structured environments [3]
    In unstructured environments [1]
  Summary [1]

Introduction
Based on the survey by Guilherme DeSouza and Avinash Kak

Introduction [2]
Summarises the developments of the last two decades
Published in February 2002, so the latest developments are not included
Not all-encompassing; gives examples of achievements

Indoor navigation – map based
Acquire sensory information
Detect landmarks
Establish matches between observations and expectations
Calculate the position
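These four steps form a repeating perceive-match-update loop. Below is a minimal sketch of that loop; the map, the sensor stub, and the bearing-based correction are hypothetical placeholders for illustration, not the survey's specific method.

    import numpy as np

    # Hypothetical map of known landmark positions (x, y) in metres.
    LANDMARK_MAP = {"door_1": np.array([2.0, 0.5]),
                    "corner_A": np.array([4.0, 3.0])}

    def acquire_image():
        """Stand-in for the camera: here just a random grayscale frame."""
        return np.random.rand(120, 160)

    def detect_landmarks(image):
        """Stand-in detector: pretend both landmarks were recognised, each with
        a measured bearing (radians) relative to the robot heading."""
        return {"door_1": 0.10, "corner_A": -0.35}

    def match_and_localize(observed, pose):
        """Match observed landmarks against the map and nudge the pose estimate.
        A real system solves for the pose that best explains the bearings;
        here only a crude averaged heading correction is applied."""
        x, y, theta = pose
        corrections = []
        for name, bearing in observed.items():
            if name not in LANDMARK_MAP:
                continue
            lx, ly = LANDMARK_MAP[name]
            expected = np.arctan2(ly - y, lx - x) - theta
            corrections.append(expected - bearing)
        if corrections:
            theta += 0.5 * np.mean(corrections)   # damped heading correction
        return (x, y, theta)

    pose = (0.0, 0.0, 0.0)                        # initial (x, y, heading)
    for _ in range(3):                            # perceive-match-update loop
        image = acquire_image()
        observed = detect_landmarks(image)
        pose = match_and_localize(observed, pose)
        print(pose)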

Map based – absolute localization
Initial position is unknown
Multiple-hypothesis (multi-belief) system
Known landmarks from a map
Calculate the position, incorporating the uncertainty in the landmark locations
Metric map
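One common way to maintain multiple position beliefs against a metric map is a particle-style filter: many candidate poses are kept, each weighted by how well it explains the observed landmarks, and implausible candidates are resampled away. The sketch below assumes hypothetical range observations to mapped landmarks; it illustrates the multi-belief idea, not the specific method in the survey.

    import numpy as np

    rng = np.random.default_rng(0)
    landmarks = np.array([[2.0, 1.0], [5.0, 4.0]])     # known map positions
    true_pose = np.array([3.0, 2.0])                   # unknown to the robot

    # Start with many hypotheses spread over the map (position is unknown).
    particles = rng.uniform(0.0, 6.0, size=(500, 2))
    weights = np.full(len(particles), 1.0 / len(particles))

    # One observation step: noisy ranges to each landmark.
    observed = np.linalg.norm(landmarks - true_pose, axis=1) + rng.normal(0, 0.1, 2)

    # Weight each hypothesis by how well it explains the observation,
    # with sigma standing in for landmark/measurement uncertainty.
    sigma = 0.2
    for i, p in enumerate(particles):
        expected = np.linalg.norm(landmarks - p, axis=1)
        weights[i] = np.exp(-np.sum((expected - observed) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum()

    # Resample so the surviving beliefs concentrate near plausible positions.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    print("belief mean:", particles.mean(axis=0))   # should approach true_pose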

Map based – incremental localization
Start position is known
Position uncertainty is projected into the camera image
Features are searched for only in their expected image regions
The position estimate is updated
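Because the start pose is known, only a small part of the image needs to be searched: the pose uncertainty is propagated through the camera projection to bound where each mapped feature can appear. A rough sketch under a simple pinhole model; the camera parameters, the feature position, and the uncertainty value are made up for illustration.

    import numpy as np

    FX, CX = 300.0, 160.0          # hypothetical focal length and principal point (px)

    def project_u(x_cam, z_cam):
        """Horizontal pixel coordinate of a point in camera coordinates (pinhole)."""
        return FX * x_cam / z_cam + CX

    # A mapped vertical edge at (x, z) = (0.4 m, 3.0 m) in the camera frame.
    feature = np.array([0.4, 3.0])

    # Lateral pose uncertainty of +/- 5 cm (1 sigma), sampled to bound the search.
    sigma_lateral = 0.05
    samples = np.random.normal(0.0, sigma_lateral, 200)
    u_candidates = [project_u(feature[0] - dx, feature[1]) for dx in samples]

    u_min, u_max = min(u_candidates), max(u_candidates)
    print(f"search for the edge only between columns {u_min:.0f} and {u_max:.0f}")
    # The detector runs only inside this window; a match there is fed back
    # to shrink the pose uncertainty (the update step).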

Map based – incremental localization [2]

Map based – Landmark tracking
Artificial landmarks
Natural landmarks
Geometric and even topological representations
Example: NEURO-NAV

Map building
A slow process
An additional problem on top of localization
Generates an occupancy grid, or a topological map with metric representations at the nodes
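For the occupancy-grid side of map building, each range reading incrementally raises the occupancy estimate of the cell it hits and lowers that of the cells the ray passes through. A minimal log-odds sketch along a single made-up 1-D beam; the constants are tuning values, not from the survey.

    import numpy as np

    CELL = 0.1                      # grid resolution in metres
    grid = np.zeros(50)             # log-odds of occupancy along a 5 m corridor line

    L_OCC, L_FREE = 0.9, -0.4       # log-odds increments (tuning constants)

    def integrate_beam(grid, measured_range):
        """Cells before the hit become more likely free, the hit cell more likely occupied."""
        hit = int(measured_range / CELL)
        grid[:hit] += L_FREE
        if hit < len(grid):
            grid[hit] += L_OCC
        return grid

    for r in (3.02, 2.98, 3.01):    # three noisy readings of a wall at ~3 m
        grid = integrate_beam(grid, r)

    prob = 1.0 / (1.0 + np.exp(-grid))        # convert log-odds back to probability
    print("most likely wall cell:", np.argmax(prob) * CELL, "m")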

Mapless navigation
No explicit map
Instructions are stored as direct associations with perception

Mapless navigation – optical flow
Corridor following
Cameras view sideways, measuring the apparent surface speed and thus the proximity of both walls
Direction is determined by a PID controller
Problems with walls that have few visible features
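The corridor-following idea is to keep the apparent image motion of the left and right walls balanced: if the left wall streams past faster, the robot is too close to it and should steer away. A sketch of that balance law driving a simple PID controller; the flow magnitudes below are made-up numbers standing in for flow measured from successive frames.

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=0.8, ki=0.05, kd=0.1)

    # Average optical-flow magnitude on the left and right image halves
    # (placeholder values; a real system computes these from image pairs).
    flow_samples = [(2.1, 1.4), (1.9, 1.5), (1.7, 1.6)]

    for left_flow, right_flow in flow_samples:
        # Positive error: the left wall moves faster, i.e. the robot is closer to it.
        error = left_flow - right_flow
        steering = controller.step(error, dt=0.1)   # positive steers away from the left wall
        print(f"steering command: {steering:+.2f}")
    # With featureless walls the flow estimate degrades, which is the
    # failure mode mentioned on the slide.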

Mapless navigation – appearance-based matching
Memorise the environment as stored images
Associate commands or controls with these images
Like a train with a movie as its "track"
Can be simplified by matching only vertical edges
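Appearance-based navigation stores snapshots taken along a taught route together with the command issued at each one; at run time the robot retrieves the best-matching stored view and replays its command. A toy sketch using a plain sum-of-squared-differences score as the match; simplifications such as matching only vertical edges would replace this comparison.

    import numpy as np

    rng = np.random.default_rng(1)

    # Teaching phase: memorised images along the route and the command taken at each.
    route = [(rng.random((24, 32)), "forward"),
             (rng.random((24, 32)), "turn_left"),
             (rng.random((24, 32)), "forward")]

    def best_command(current, route):
        """Return the command associated with the most similar memorised view."""
        scores = [np.sum((current - snapshot) ** 2) for snapshot, _ in route]
        return route[int(np.argmin(scores))][1]

    # Replay phase: a view close to the second snapshot triggers its command.
    current_view = route[1][0] + rng.normal(0.0, 0.02, (24, 32))
    print(best_command(current_view, route))        # -> "turn_left"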

Outdoor navigation
Changing lighting is challenging
The main application is car automation

Outdoor navigation – Structured environments
Navlab's ALVINN
A neural network with the image, or a Hough-transformed image, as input
Lighting and shadows are a problem
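ALVINN mapped a coarse input image directly to a steering output with a small feed-forward network. The sketch below reproduces that shape (low-resolution image in, a row of steering units out) with untrained random weights; the layer sizes are illustrative and not necessarily ALVINN's exact architecture.

    import numpy as np

    rng = np.random.default_rng(2)

    # Roughly ALVINN-shaped: a coarse 30x32 image in, a row of steering units out.
    N_IN, N_HIDDEN, N_OUT = 30 * 32, 4, 30

    # Untrained random weights, purely to show the forward-pass structure.
    W1 = rng.normal(0, 0.1, (N_HIDDEN, N_IN))
    W2 = rng.normal(0, 0.1, (N_OUT, N_HIDDEN))

    def steering_distribution(image):
        """Forward pass: image -> hidden layer -> activation over steering directions."""
        x = image.reshape(-1)
        hidden = np.tanh(W1 @ x)
        return np.tanh(W2 @ hidden)

    image = rng.random((30, 32))          # placeholder for the (Hough-transformed) input
    out = steering_distribution(image)
    # The most active output unit encodes the chosen steering direction,
    # from hard left (unit 0) to hard right (unit 29).
    print("steering unit:", int(np.argmax(out)))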

Outdoor navigation – Structured environments [2]
Virtual camera images, extracted from the original camera image
Red and blue contrasts
Speed is required for automotive applications
Hue / intensity images
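The usual motivation for hue/intensity images is that hue stays relatively stable under brightness changes, so road/non-road contrast survives shadows, and the conversion is cheap enough for the speed the slide asks for. A small per-pixel sketch of that conversion (colorsys is used only for brevity; a real pipeline would vectorise this over the whole virtual camera image, and the pixel values are hypothetical):

    import colorsys

    def hue_intensity(r, g, b):
        """Convert one RGB pixel (values in 0..1) to a (hue, intensity) pair."""
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)
        intensity = (r + g + b) / 3.0
        return h, intensity

    # The same surface colour under bright and dim illumination: the hue barely
    # changes, while the brightness difference is carried separately by intensity.
    print(hue_intensity(0.60, 0.55, 0.50))   # bright road patch (hypothetical values)
    print(hue_intensity(0.30, 0.275, 0.25))  # the same patch in shadow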

Outdoor navigation – Unstructured environments
Measure the local environment metrically
Example: the Pathfinder rover and lander

Conclusions
In controlled environments, a lot can be achieved with current knowledge
In free or unpredictable environments, there is still a long way to go