Abstract

Real-time, low-resource corridor reconstruction using a single consumer-grade RGB camera offers a fast, inexpensive solution to indoor mobility for a visually impaired person or a robot. The perspective and known geometry of a corridor are used to extract the important features of the image and create a 3D model from a single image. Multiple 3D models can be combined to increase confidence and provide a global 3D model. This poster presents our current work and results on 3D corridor modeling using single images.

Motivation

In the world built by humans, hallways are everywhere, and there are many cases where information about a corridor is important. Hallway detection is a key component of any indoor navigation system, including those used by the visually impaired: it can help a visually impaired person avoid the walls as well as find the doors and intersecting corridors needed to reach their destination. A mobile robot likewise needs to navigate its environment, and if that environment is the inside of a building, the robot will frequently encounter hallways. The most common way to extract corridor information in computer vision today is to use a range sensor and perform the computations necessary to find the walls of the corridor. Unfortunately, these methods require relatively expensive equipment, heavy processing, and high energy consumption. This makes them impractical for a robot that needs to move quickly, for a wearable system that needs to be built cheaply, or for a blind person who does not want to carry a laptop on their back processing data all day. The most recent relatively low-cost 3D sensors are RGB-D sensors such as the Microsoft Kinect and Asus Xtion Pro, but their sensing ranges are quite limited (0.5 to 4 meters), making them poorly suited for corridor detection. We propose a high-speed, low-resource, inexpensive solution to these needs.
We aim to provide effective, portable corridor detection with nothing more than a single consumer camera, such as a webcam or the camera found on a smartphone. This is done through the detection, localization, and 3D reconstruction of corridors from individual images, while multiple images can provide more detailed and more confident information. Furthermore, because an RGB camera does not share the range limitation of range sensors, the methods described here could also be combined with data already retrieved from range sensors to make inferences about what the camera can see beyond the range sensors' limits.

Approach

While there are plenty of differences from one hallway to the next, the property of having (mostly) straight walls makes single-image processing possible. The lines that define the boundaries between the floor and the walls, and between the walls and the ceiling, converge at the vanishing point. Lines that are perpendicular to the hallway (e.g., doorframes, floor tiles) do not pass through the vanishing point.

Basic steps of the reconstruction: We begin by performing Canny edge detection. The result is then put through a Hough transform to produce a map of lines. The position of the vanishing point is taken to be where the most lines from the Hough transform converge. From here, we search for surface features: parallel lines that do not run towards the vanishing point. This is done using a perspective-based Hough transform (PBHT), which retrieves line segments.

The perspective-based Hough transform (PBHT) algorithm: A hallway image may contain strongly defined lines (such as doorframes) far down the hallway, yet a regular Hough transform would normally exclude them because these lines are much smaller than lines physically closer to the camera. To account for this, the PBHT adds a weighting biased towards features closer to the vanishing point.
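The vanishing-point step in the pipeline (taking the point where the most Hough lines converge) can be approximated in closed form as the least-squares intersection of the detected lines. A minimal sketch using NumPy; the function name and the least-squares formulation are our assumptions, since the poster does not specify how convergence is measured:

```python
import numpy as np

def vanishing_point(rhos, thetas):
    """Least-squares intersection of Hough lines given in normal form,
    rho = x*cos(theta) + y*sin(theta).  Hypothetical helper: the poster
    only states that the vanishing point is where most lines converge."""
    A = np.column_stack([np.cos(thetas), np.sin(thetas)])
    b = np.asarray(rhos, dtype=float)
    # Solve A @ [x, y] ~= b for the point closest to all the lines.
    (x, y), *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(x), float(y)
```

In practice this would be run on the strongest Hough peaks only, or wrapped in a robust scheme (e.g., voting over pairwise intersections) to reject lines that do not belong to the corridor direction.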
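The PBHT weighting itself is only described qualitatively (votes are biased towards features closer to the vanishing point), so any concrete form is an assumption. One simple choice is a linear falloff with image distance from the vanishing point:

```python
import math

def pbht_vote_weight(px, py, vp, d_max, alpha=3.0):
    """Weight for the Hough vote cast by edge pixel (px, py).
    Pixels near the vanishing point vp (i.e., far down the hallway) get up
    to (1 + alpha) times the weight of pixels at distance d_max or beyond.
    The linear form and alpha are assumptions, not the poster's formula."""
    d = min(math.hypot(px - vp[0], py - vp[1]), d_max)
    return 1.0 + alpha * (1.0 - d / d_max)
```

During accumulation, each edge pixel would then add this weight, instead of 1, to every accumulator cell it votes for, so that short but consistent structures near the vanishing point can still produce strong peaks.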
The line segments found serve two purposes. First, a segment lies on one of the surfaces, so its endpoint can be used to estimate which line through the vanishing point defines the boundary between one surface and the next. Second, the segments can be used to find features; the most obvious example is the outline of a door or a connecting hallway. From this information, a 3D wireframe model of the hallway can be built.

Progress and Results

Our current progress consists of successfully extracting useful 3D data for a corridor model from a single image. Some of this data is visualized in the images below: the red lines are the relevant lines running to the vanishing point, the yellow lines are the detected transitions between the floor and the walls, the purple lines are the detected transitions between the walls and the ceiling, and the blue lines are detected wall feature lines. The data from the first corridor image has also been annotated and used to generate a 3D wireframe model (right). While the current implementation is not perfect, it validates the potential of the approach. In virtually every case, the floor-to-wall transitions are detected with reasonable accuracy. Most of the false positives in the examples below occur in the detection of wall features, and we expect to remedy them by combining wireframe data from multiple successive images.

Future Work

- Create reliable single-image 3D reconstruction of corridors using a smartphone.
- Use multiple reconstructions to increase confidence and obtain absolute distance information.
- Conduct case studies to rigorously test the accuracy, speed, and resource consumption of the methods described here in comparison to other common methods.
- Integrate these methods with existing methods and technologies.
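The first use of the detected segments described in the Approach section, anchoring a surface boundary, reduces to constructing the line through the vanishing point and a segment endpoint. A minimal sketch; the helper name and the (a, b, c) line representation are our choices, not the poster's:

```python
import math

def boundary_through_vp(vp, endpoint):
    """Line through the vanishing point and a detected segment endpoint,
    returned as normalized coefficients (a, b, c) with a*x + b*y + c = 0.
    Such a line is a candidate floor/wall (or wall/ceiling) boundary."""
    (vx, vy), (ex, ey) = vp, endpoint
    a, b = vy - ey, ex - vx  # normal to the direction (ex - vx, ey - vy)
    c = -(a * vx + b * vy)
    n = math.hypot(a, b)
    return a / n, b / n, c / n
```

Choosing the actual boundary then amounts to selecting, among the candidate lines through the vanishing point, the one best supported by the segment endpoints on each side of the image.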
3D Hallway Modeling Using a Single Image
Greg Olmschenk and Zhigang Zhu
Department of Computer Science, CUNY Graduate Center and City College

Conclusions

We have presented a real-time, low-resource 3D corridor reconstruction method using a single camera. The perspective-based Hough transform algorithm allows fast and reliable reconstruction of accurate 3D models with low resource usage. With this, the current approach should be useful as an aid to the visually impaired, both as a standalone approach and as part of a combined approach.

Acknowledgments

This work is partially supported by the US National Science Foundation Emerging Frontiers in Research and Innovation Program under Award No. EFRI. The proposed future work of this thesis has been granted a PSC-CUNY enhanced research grant (Award # ). Additionally, the members of the CCNY Visual Computing Lab have given continual support and help.