Presentation transcript:

Mohammed Rizwan Adil, Chidambaram Alagappan, and Swathi Dumpala Basaveswara

Robots: Robots are gaining immense importance, and their presence is being felt in all walks of life. Image detection has become a prerequisite for effective navigation: the robot should be able to 'extract' all the necessary information from its sensors.

Image detection: Conventional 2D images capture brightness but not depth, so 3D Time-of-Flight (ToF) cameras are used instead. The depth information is depicted using color codes. 3D ToF cameras combine accurate distance measurement with a camera-based system. The talk closes with a discussion of PMD devices and the pseudo-four-phase-shift algorithm.

Introduction: The four building blocks of navigation:
1. Perception: the robot must be able to interpret meaningful data from its sensors.
2. Localization: the robot must be able to determine its position with respect to the environment.
3. Cognition: the robot must be able to plan its path.
4. Motion control: the mechanical traversal along the planned path.

Simultaneous Localization and Mapping (SLAM): In most cases, the processes of exploring an unknown environment through maps and determining the robot's relative position are performed simultaneously, through a process known as Simultaneous Localization and Mapping [3].

Several methods to obtain 3D images: An image from a stereo vision camera, which provides 3D details of an object, can be fused with the measurements of a 2D laser range finder. However, stereo vision requires complicated algorithms and powerful sensors to construct its occupancy grid, and despite all this it remains prone to error.

SfM = Structure from Motion: Works on the assumption that the object moves relative to the camera; the trajectories of tracked points are used to estimate dimensions. The technique will not work if the object itself is dynamic (like flowing water).

Stereo Vision vs. Kinetic Depth: In stereo vision, the image and the laser range finder data corresponding to the same instant have to be overlapped to obtain 3D vision. In the kinetic depth technique, images of the same object have to be taken at two different times. Either way, both techniques require data fusion, which costs computing power.

Laser Range Scanners: A laser range scanner works on the principle of calculating the distance from the observer to a particular point. Laser range scanners provide sparse data sets, rely on mechanical components, and cannot produce a full 3D image from a single capture.
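The underlying distance calculation is simple round-trip timing. A minimal sketch of the pulsed ranging principle (the timing value is hypothetical, for illustration only):

```python
# Pulsed ranging principle: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting point from the measured round-trip time."""
    return C * t_seconds / 2.0

print(range_from_round_trip(33.4e-9))  # ~5 m for a 33.4 ns round trip
```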

ToF cameras: Time-of-flight cameras combine the features of active range sensors and camera-based approaches, providing a composite image that contains both the intensity and the distance of each and every point. There is no fusion of data from two separate sources, and the data is gathered continuously.

Principle behind the time-of-flight cameras: Light reflected from points farther from the camera takes longer to return. The distance to the object is calculated using properties of light and the phase shift of the modulation envelope of the light source. The phase and amplitude of the reflected light can be detected using various signal processing techniques. Usually, CCD-based sensors are employed to obtain a high-resolution image.
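The relationship between the measured phase shift and distance follows directly from the modulation frequency. A sketch of the standard continuous-wave ToF formula (the 20 MHz modulation frequency is an assumed example, not taken from the slides):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(phi: float, f_mod: float) -> float:
    """Distance from the phase shift phi (radians) of the modulation
    envelope: d = c * phi / (4 * pi * f_mod)."""
    return C * phi / (4.0 * math.pi * f_mod)

f_mod = 20e6  # assumed 20 MHz modulation frequency
print(distance_from_phase(math.pi / 2, f_mod))  # ~1.87 m
print(C / (2 * f_mod))  # unambiguous range at 20 MHz: ~7.5 m
```

Since phase wraps at 2*pi, distances beyond c / (2 * f_mod) alias back into the unambiguous range, which is why the modulation frequency trades off range against precision.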

CMOS ToF camera: CMOS chip-based cameras appear most widely in the literature.

CMOS sensors usually have a 64x64 pixel array and are implemented on a single chip using an ordinary, low-cost CMOS process. The chip also needs an ADC and a mechanism to generate high-speed modulation signals. The main part of the sensor design is the unique pixel structure.

Unique pixel structure

The differential structure accumulates photogenerated charges in two collection nodes using two modulated gates. The gate modulation signals are synchronized with the light source, so depending on the phase of the incoming light, one node collects more charge than the other.
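To make the demodulation concrete, here is a hedged numerical sketch (the sinusoidal correlation model is an idealization, not from the slides) of how gate/light correlation samples taken at four phase offsets recover the phase of the incoming light via the standard four-bucket formula:

```python
import math

def correlation_sample(phi: float, offset: float,
                       amplitude: float = 1.0, background: float = 0.5) -> float:
    """Idealized charge collected by a gate whose modulation is shifted by
    `offset` relative to the light source; phi is the unknown phase delay."""
    return background + amplitude * math.cos(phi - offset)

def phase_from_four_samples(a0, a1, a2, a3):
    """Standard four-phase (four-bucket) formula for samples taken at
    gate offsets of 0, 90, 180 and 270 degrees."""
    return math.atan2(a1 - a3, a0 - a2)

true_phi = 0.7  # radians, arbitrary test value
samples = [correlation_sample(true_phi, k * math.pi / 2) for k in range(4)]
print(phase_from_four_samples(*samples))  # recovers ~0.7
```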

Calculating the depth resolution

Resolution (cont'd)
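The resolution equations on these slides were not transcribed. As a hedged substitute, the following sketch shows the one relationship that follows directly from the distance formula above: depth uncertainty scales linearly with phase noise and inversely with modulation frequency (the phase-noise value here is assumed, purely illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_sigma(phase_sigma: float, f_mod: float) -> float:
    """Propagating phase noise through d = c*phi/(4*pi*f_mod) gives
    sigma_d = c * sigma_phi / (4 * pi * f_mod)."""
    return C * phase_sigma / (4.0 * math.pi * f_mod)

# Illustrative numbers: 20 MHz modulation, 1 degree of phase noise.
print(depth_sigma(math.radians(1.0), 20e6))  # ~2.1 cm
```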

Enhancement of Depth Images: Optical noise, unmatched boundaries, and temporal inconsistency are the three critical problems from which a ToF image suffers. Techniques like Gaussian smoothing and quadratic Bezier curves are used for static 3D images. For the enhancement of dynamic images, however, newly designed joint bilateral filtering, color-segmentation-based boundary refinement, and motion-estimation-based temporal consistency are used.

Bilateral Filter: Constructed using both color and depth information at the same time. After color-segmenting a color image, the color segment set is extracted to detect object boundaries. To minimize temporal depth-flickering artifacts on stationary objects, the previous and current frame color images are matched.
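A minimal sketch of joint (cross) bilateral filtering of a depth map guided by a registered color image, assuming numpy arrays; the window radius and sigmas are illustrative, and the brute-force loop is written for clarity rather than speed:

```python
import numpy as np

def joint_bilateral_filter(depth, color, radius=3,
                           sigma_space=2.0, sigma_color=10.0):
    """Smooth `depth` (H x W) using spatial closeness and color similarity
    from the registered `color` image (H x W x 3), so that smoothing does
    not cross color edges, which tend to coincide with depth edges."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_space**2))
    color = color.astype(np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch_d = depth[y0:y1, x0:x1]
            patch_c = color[y0:y1, x0:x1]
            diff = patch_c - color[y, x]
            range_w = np.exp(-np.sum(diff**2, axis=2)
                             / (2.0 * sigma_color**2))
            sw = spatial[y0 - y + radius:y1 - y + radius,
                         x0 - x + radius:x1 - x + radius]
            weights = sw * range_w
            out[y, x] = np.sum(weights * patch_d) / np.sum(weights)
    return out
```

On a real ToF-plus-RGB rig the two images must be registered before filtering, which is what the frame-matching step on this slide assumes.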

Review of latest developments: These cameras provide registered dense depth and intensity images, complete image acquisition at a high frame rate, and a small, compact design. They need no moving parts and carry their own illumination.

Errors and Compensations for ToF cameras. Systematic errors:
1. Depth distortion
2. Integration-time-related error
3. Built-in pixel-related errors
4. Amplitude-related errors
5. Temperature-related errors

Non-systematic errors:
1. Signal-to-noise ratio (SNR)
2. Multiple light reception
3. Light scattering
4. Motion blurring

Photonic Mixer Devices

PMD cont'd: Photonic Mixer Devices are also based on the ToF principle and can produce a 3D image without complex electronics, similar to a CMOS device. In a PMD, instead of a single laser beam (which would have to be scanned over the scene to obtain 3D), the entire scene is illuminated with modulated light.

Pseudo-Four-Phase-Shift Algorithm for Performance Enhancement of 3D-TOF Vision Systems

Only two image captures, instead of four, are required to calculate the phase difference φ. The frame rate of PMD ToF sensors is thereby doubled without changing the integration time Tint.
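A hedged sketch of the idea (the pairing below is an assumption consistent with the differential pixel described earlier, not a transcription of the paper's algorithm): because each differential capture delivers two complementary samples 180 degrees apart, one capture at 0 degrees and one at 90 degrees already supply all four samples of the four-phase formula:

```python
import math

def phase_from_two_captures(cap0, cap90):
    """cap0 = (node A, node B) at gate offset 0 deg -> samples A0, A2.
    cap90 = (node A, node B) at gate offset 90 deg -> samples A1, A3."""
    a0, a2 = cap0
    a1, a3 = cap90
    return math.atan2(a1 - a3, a0 - a2)

# Reusing the idealized correlation model from the pixel sketch above:
def sample(phi, offset, A=1.0, B=0.5):
    return B + A * math.cos(phi - offset)

true_phi = 1.2
cap0 = (sample(true_phi, 0.0), sample(true_phi, math.pi))
cap90 = (sample(true_phi, math.pi / 2), sample(true_phi, 3 * math.pi / 2))
print(phase_from_two_captures(cap0, cap90))  # recovers ~1.2
```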

Thanks