Dynamic Composition of Tracking Primitives for Interactive Vision-Guided Navigation. D. Burschka and G. Hager, Computational Interaction and Robotics Laboratory (CIRL), Johns Hopkins University. SPIE'01.

Presentation transcript:

Slide 1: Dynamic Composition of Tracking Primitives for Interactive Vision-Guided Navigation
D. Burschka and G. Hager
Computational Interaction and Robotics Laboratory (CIRL), Johns Hopkins University

Slide 2: Outline
- Introduction
  - Motivation: Navigation Strategies
- Tracking-System Architecture
  - Pre-Processing
  - New Tracking Definition
  - Feature Identification
- Results
- Conclusions

Slide 3: Navigation Strategies
- Sensor-Based Control: control signals for the robot are generated directly from the visual input.
- Map-Based Navigation: pre-processed sensor data is stored in a geometrical representation of the environment (a map); path-planning and strategy algorithms are then used to define the actions of the robot.
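The sensor-based case can be illustrated with a minimal control loop in which a steering command is derived directly from an image-space error, with no map in between. This is an illustrative sketch, not code from the original system; the function name and gain value are assumptions:

```python
def sensor_based_control(target_x, image_width, gain=0.005):
    """Map the horizontal image-space error of a tracked target
    directly to a turn-rate command (no map, no planner)."""
    error = target_x - image_width / 2.0   # pixels off image center
    return -gain * error                   # steer so the target re-centers

# A target left of center yields a positive turn rate (turn left),
# a target right of center a negative one.
```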

Slide 4: Tracking Primitives
Dynamic Vision (XVision) algorithms:
- Color tracking
- Pattern tracking
- Disparity tracking
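For such primitives to be composed dynamically, it helps if they share a common interface so the composition layer can treat them uniformly. The sketch below is a hypothetical interface, not XVision's actual API; all class and method names are invented for illustration:

```python
class Tracker:
    """Minimal common interface for tracking primitives."""
    def init(self, image, region):
        """Bind the tracker to an initial region in the image."""
    def update(self, image):
        """Return (region, confidence) for the new frame."""
        raise NotImplementedError

class ColorTracker(Tracker):
    """Tracks a blob of pixels whose hue falls in a given range."""
    def __init__(self, hue_range):
        self.hue_range = hue_range
    def update(self, image):
        # Real system: segment in-range pixels, return bounding region
        # and a confidence score; stubbed out here.
        return None, 0.0

class DisparityTracker(Tracker):
    """Tracks a region within a given stereo-disparity range."""
    def __init__(self, disparity_range):
        self.disparity_range = disparity_range
    def update(self, image):
        return None, 0.0
```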

Slide 5: XVision as Tracking Tool
Dynamic Vision (XVision) algorithms and their applications.

Slide 6: Tracking-System Architecture

Slide 7: Dynamic Composition of Tracking Cues
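One plausible reading of dynamic composition is that the position estimates of the active cues are fused at run time, weighted by their confidences, so a cue that loses the target simply contributes zero weight. A hedged sketch; the fusion rule is an assumption, not the paper's exact method:

```python
def compose_cues(estimates):
    """Fuse (position, confidence) estimates from several tracking cues
    into one confidence-weighted position.
    `estimates` is a list of ((x, y), confidence) pairs;
    returns ((x, y), combined_confidence) or (None, 0.0) if all cues failed."""
    total = sum(c for _, c in estimates)
    if total == 0:
        return None, 0.0                    # every cue has lost the target
    x = sum(p[0] * c for p, c in estimates) / total
    y = sum(p[1] * c for p, c in estimates) / total
    return (x, y), total / len(estimates)   # mean confidence over cues
```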

Slide 8: Tracking-System Architecture

Slide 9: Segmentation in the Color Space
- HSI (Hue, Saturation, Intensity) representation of the color space
- Variable-resolution gridding of the space
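As a concrete illustration, the sketch below converts RGB to HSI and maps the result into a grid whose resolution differs per axis, binning hue finely and saturation/intensity coarsely. The bin counts are illustrative assumptions, not the values used in the system:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert r, g, b in [0, 1] to (hue_degrees, saturation, intensity)
    using the standard HSI formulas."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(num / den))
    if b > g:                       # hue lives in [0, 360)
        h = 360.0 - h
    return h, s, i

def hsi_cell(h, s, i, hue_bins=18, sat_bins=4, int_bins=2):
    """Variable-resolution gridding: hue is binned finely, saturation
    and intensity coarsely (bin counts are illustrative)."""
    return (min(int(h / 360.0 * hue_bins), hue_bins - 1),
            min(int(s * sat_bins), sat_bins - 1),
            min(int(i * int_bins), int_bins - 1))
```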

Slide 10: Segmentation in the Disparity Domain
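Segmentation in the disparity domain can be sketched as thresholding the disparity map (large disparity means a close object) and grouping the surviving pixels into connected regions. A minimal illustrative version, not the system's implementation:

```python
def segment_by_disparity(disparity, near_threshold):
    """Label pixels whose disparity exceeds a threshold (objects closer
    than some range) and group them into 4-connected regions.
    `disparity` is a 2-D list of floats; returns a list of regions,
    each a list of (row, col) pixels."""
    rows, cols = len(disparity), len(disparity[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r0 in range(rows):
        for c0 in range(cols):
            if seen[r0][c0] or disparity[r0][c0] < near_threshold:
                continue
            stack, region = [(r0, c0)], []   # flood fill one region
            seen[r0][c0] = True
            while stack:
                r, c = stack.pop()
                region.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols
                            and not seen[rr][cc]
                            and disparity[rr][cc] >= near_threshold):
                        seen[rr][cc] = True
                        stack.append((rr, cc))
            regions.append(region)
    return regions
```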

Slide 11: Tracking-System Architecture

Slide 12: State Transitions in the Tracking Process
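A typical tracking state machine cycles between searching for the object, tracking it, and a transient lost state that triggers re-acquisition. The sketch below shows one plausible set of transitions; the state names, threshold, and timeout are assumptions, since the slide itself only carried a diagram:

```python
SEARCH, TRACK, LOST = "search", "track", "lost"

def next_state(state, confidence, found, lost_frames, max_lost=10):
    """One step of an illustrative tracking state machine:
    SEARCH -> TRACK when feature identification finds the object,
    TRACK  -> LOST  when tracking confidence drops,
    LOST   -> TRACK if the object is re-acquired,
    LOST   -> SEARCH after too many frames without re-acquisition."""
    if state == SEARCH:
        return TRACK if found else SEARCH
    if state == TRACK:
        return TRACK if confidence > 0.5 else LOST
    # state == LOST
    if found:
        return TRACK
    return SEARCH if lost_frames > max_lost else LOST
```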

Slide 13: State Information Saved in the Tracking Module
Information about the object in the real scene is shared between the different Image Identifications:
- Position in the image
- Size of the region
- Range in the current image domain
- Shape ratio in the image
- Compactness of the region
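These shared descriptors can be computed directly from a region's pixel set. In this sketch, compactness is approximated as region area over bounding-box area, which is one of several common definitions and an assumption here; range is omitted since it needs the disparity data:

```python
class RegionState:
    """Shared state for a tracked region, computed from its pixels,
    given as a list of (x, y) coordinates."""
    def __init__(self, pixels):
        xs = [x for x, _ in pixels]
        ys = [y for _, y in pixels]
        self.position = (sum(xs) / len(xs), sum(ys) / len(ys))  # centroid
        self.size = len(pixels)                 # area in pixels
        w = max(xs) - min(xs) + 1               # bounding-box width
        h = max(ys) - min(ys) + 1               # bounding-box height
        self.shape_ratio = w / h                # aspect ratio of the region
        # Compactness proxy: how much of the bounding box the region fills.
        self.compactness = len(pixels) / (w * h)
```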

Slide 14: Tracking-System Architecture

Slide 15: Quality Value for Initial Search
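A quality value for the initial search can be formed by comparing a candidate region's descriptors against the stored object state and combining the per-descriptor similarities into one score. The weights and the ratio-based similarity below are illustrative assumptions, not the paper's formula:

```python
def quality(candidate, reference, weights=(0.4, 0.3, 0.3)):
    """Score a candidate region against the stored object state.
    `candidate` and `reference` are (size, shape_ratio, compactness)
    tuples; each term is a ratio in [0, 1], 1.0 being a perfect match,
    and the weighted sum is the overall quality value."""
    def ratio(a, b):
        if a == 0 or b == 0:
            return 0.0
        return min(a, b) / max(a, b)
    return sum(w * ratio(c, r)
               for w, c, r in zip(weights, candidate, reference))
```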

Slide 16: Problem in the Disparity Domain

Slide 17: Ground Plane Suppression
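For a stereo camera looking over flat ground, the disparity of ground-plane pixels varies approximately linearly with image row (the relation the v-disparity representation exploits). Ground plane suppression can then be sketched as zeroing pixels whose disparity matches that linear model, leaving only points that stick out of the plane. The linear model and tolerance here are assumptions:

```python
def suppress_ground(disparity, a, b, tolerance=1.0):
    """Zero out pixels consistent with a flat ground plane, for which
    the expected disparity is linear in the image row:
    d_ground(row) = a * row + b, with a, b from calibration or a fit
    (e.g. a line fit to the v-disparity image). Surviving nonzero
    pixels belong to obstacles above the ground."""
    out = []
    for row, line in enumerate(disparity):
        expected = a * row + b
        out.append([0.0 if abs(d - expected) <= tolerance else d
                    for d in line])
    return out
```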

Slide 18: Results: Obstacle Detection

Slide 19: Results: Dynamic Composition

Slide 20: Conclusions and Future Work
- Dynamic composition of the two basic Feature Identification tools allowed robust initial selection and navigation through a door.
- Extension to the entire set of Feature Identification tools is our next step.
- The developed algorithms allow robust obstacle avoidance.

Slide 21: Additional Information
Web: