Presentation transcript:

Monocular Visual Servo Tracking of Wheeled Mobile Robots

Background and Motivation

To determine the position and orientation of a robot with respect to a fixed location, various strategies have been investigated, including Global Positioning Systems and laser and sound ranging techniques. Due to shortcomings of these techniques in some environments, researchers have explored the use of camera-based vision systems. Vision systems can provide an intelligent sense of perception for robotic systems that are required to operate in the unknown and/or dynamically changing environments endemic to decontamination and decommissioning (D&D) operations. Typically, a stereo pair of cameras is required to extract depth information about a scene. Unfortunately, the position and orientation of each camera must be precisely calibrated, and the requirement for an additional camera leads to increased cost, complexity, and processing burden.

Introduction

Based on the success of image extraction/interpretation technology and advances in control theory, more recent research has focused on the use of a monocular vision system for acquiring depth information, for path planning, and for servo control. To achieve this objective, corresponding feature points between a prerecorded image trajectory and the current image are compared and a homography is constructed. By decomposing the homography, rotation and translation information can be decoupled from the images and used as feedback in an adaptive, Lyapunov-based control strategy. To demonstrate the performance of this technique, a mobile robot platform typically used in Department of Energy D&D operations was equipped with a single camera and an additional computer.

In this project, the performance of a recently developed homography-based visual servo controller was examined that enables a wheeled mobile robot (WMR) to track a prerecorded desired time-varying feature point trajectory. The goal of this project is to demonstrate the ability to use spatial geometric (homographic) relationships between multiple images to compensate for the lack of depth information inherent in a monocular camera system mounted on a WMR.

Methodology and Work Description

The proof-of-principle testbed was a modified Cybermotion K2A mobile robot carrying an on-board camera and an on-board computer for image capturing and processing. The work involved the following steps (illustrative sketches of the feature extraction, homography decomposition, and control steps follow the list):

1. A target with four feature points was constructed.
2. An intensity-thresholding image processing algorithm was developed and implemented to capture the image features.
3. A desired image trajectory was recorded.
4. A Singular Value Decomposition (SVD)-based algorithm was implemented to extract rotation and translation error information from a comparison of the prerecorded and current images.
5. A controller was implemented to force the robot to follow the desired image-space trajectory.
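To make step 2 concrete, here is a minimal sketch of intensity-thresholding feature extraction in Python/OpenCV. It is not the project's implementation; the threshold value, the bright-blob assumption, and the function name `find_feature_points` are all illustrative.

```python
import cv2
import numpy as np

def find_feature_points(gray, thresh=200):
    """Return the centroids of the four brightest blobs in a grayscale image.

    Assumes the target's feature points appear as bright regions against
    a darker background; the threshold value is illustrative.
    """
    # Binarize: pixels brighter than `thresh` become candidate feature pixels.
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)

    # Group candidate pixels into connected blobs (label 0 is the background).
    _, _, stats, centroids = cv2.connectedComponentsWithStats(binary)

    # Keep the four largest foreground blobs.
    order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1
    pts = centroids[order[:4]]

    # Sort top-to-bottom, then left-to-right, so the point ordering stays
    # consistent between the prerecorded and current images.
    return pts[np.lexsort((pts[:, 0], pts[:, 1]))]  # shape (4, 2), (x, y) pixels
```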
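Step 4's homography construction and decomposition can be sketched as follows. The project used its own SVD-based decomposition; this example leans on OpenCV's `findHomography` and `decomposeHomographyMat`, which perform comparable computations, so it illustrates the idea rather than reproducing the exact routine. The camera intrinsic matrix `K` is assumed to be known.

```python
import cv2

def pose_error_from_homography(pts_desired, pts_current, K):
    """Recover candidate rotation/translation between desired and current views.

    pts_desired, pts_current: (4, 2) arrays of corresponding pixel
    coordinates of the coplanar target features; K: 3x3 intrinsic matrix.
    """
    # Four coplanar correspondences are the minimum needed for a homography.
    H, _ = cv2.findHomography(pts_desired, pts_current, method=0)

    # Decompose into candidate (R, t/d, n) triples. The translation is only
    # recovered up to the unknown distance d to the target plane; this is
    # the depth ambiguity of a monocular camera that the controller must
    # compensate for. OpenCV returns up to four solutions, and physically
    # invalid ones must be pruned (e.g., the plane normal must face the camera).
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return rotations, translations, normals
```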
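The control law itself (step 5) is developed in Chen et al. (see References) and is not reproduced here. For intuition only, the sketch below shows a classic kinematic tracking law for a unicycle-type WMR (Kanayama-style); the gains are illustrative, and this simplified law is a stand-in for, not a reproduction of, the adaptive Lyapunov-based controller used in the project.

```python
import numpy as np

def unicycle_tracking_law(e, v_d, w_d, k1=1.0, k2=2.0, k3=1.0):
    """Velocity commands driving a unicycle-type WMR onto a desired trajectory.

    e = (e1, e2, e3): tracking error expressed in the robot frame
    (longitudinal, lateral, heading); v_d, w_d: desired linear and angular
    velocities along the prerecorded trajectory. Gains k1, k2, k3 are
    illustrative.
    """
    e1, e2, e3 = e
    v = v_d * np.cos(e3) + k1 * e1                  # linear velocity command
    w = w_d + v_d * (k2 * e2 + k3 * np.sin(e3))     # angular velocity command
    return v, w
```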
Control Objective

[Figure: control objective. Drive the current position and orientation of the WMR to the desired time-varying position and orientation trajectory.]

Results

The performance of the homography-based visual servoing controller was experimentally demonstrated and is indicated by the following plots.

[Plots: translation errors e1(t) and e2(t) versus time [sec]; rotation error e3(t) [deg] versus time [sec].]

Future Work

In this project, the desired WMR trajectory was prerecorded through a “Teach-By-Showing” approach. Future work will target generating the desired motion from a “Teach-By-Zooming” approach that is performed on the fly with an adjustable focal length camera. Future work will also target generating navigation functions based on current image information to control a mobile robot in the presence of obstacles.

References

J. Chen, W. E. Dixon, D. M. Dawson, and M. McIntire, “Homography-based Visual Servo Tracking Control of a Wheeled Mobile Robot,” Proceedings of the 2003 IEEE International Conference on Intelligent Robots and Systems, accepted, to appear.

W. E. Dixon, “Teach by Zooming: Camera Independent Alternative to Teach by Showing Visual Servo Control,” Proceedings of the 2003 IEEE International Conference on Intelligent Robots and Systems, accepted, to appear.

E. Holcombe and W. E. Dixon, “Monocular Visual Servoing of Wheeled Mobile Robots,” Journal of Undergraduate Research, submitted for publication.