DEPTH RANGE ACCURACY FOR PLENOPTIC CAMERAS


DEPTH RANGE ACCURACY FOR PLENOPTIC CAMERAS
N. Monteiro, S. Marto, J. Barreto and J. Gaspar
ISR LISBOA / ISR COIMBRA / LARSyS

Abstract
Plenoptic cameras capture the directional information of the light distribution from a scene. This is accomplished by positioning a microlens array between the main lens and the sensor. This configuration yields multiple projections of each scene point, which allows the point's depth to be retrieved from a single exposure. In recent years, several studies have recovered depth and shape from lightfield data using a variety of cues, but these studies focus on non-metric reconstruction or relative depth. In this work, we formalize a projection model and define a metric reconstruction methodology for a calibrated standard plenoptic camera in order to evaluate the depth estimation accuracy of these cameras. The metric reconstruction methodology is applied to a public dataset containing calibration grids placed at different depths. The results indicate that these cameras are capable of reconstructing depth with high metric accuracy for points near the camera, and that accuracy is further improved by imposing consistency on the input data.

Projection and Reconstruction
Let us consider the lightfield parameterized by one direction (u,v) and one position (s,t). Let us also consider the intrinsic matrix H that maps the lightfield on the image sensor, indexed by (i,j,k,l), to the lightfield in the object space (s,t,u,v). Combining this mapping with the relation between a ray (s,t,u,v) and a point in the object space yields the projection equation. Treating each position (s,t) as a pinhole, reconstruction can then be posed as a multi-view stereo problem: given a set of correspondences across the views, the point's depth is recovered by triangulation.

Figure: Estimated (red) and ground truth (blue) calibration grid points obtained from the calibration datasets A, B, and C provided by Dansereau et al. [1] for different poses of the calibration pattern.
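The mapping and triangulation described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes the common two-plane parameterization in which a ray starts at position (s,t) on the z = 0 plane with direction slopes (u,v), so a point at depth z satisfies x = s + z*u and y = t + z*v, and it assumes H is the 5x5 homogeneous intrinsic matrix of Dansereau et al. [1]. The function names are hypothetical.

```python
import numpy as np

def rays_from_pixels(H, ijkl):
    """Map sensor lightfield indices (i,j,k,l) to object-space rays (s,t,u,v)
    using the 5x5 intrinsic matrix H in homogeneous coordinates."""
    n = ijkl.shape[0]
    homog = np.hstack([ijkl, np.ones((n, 1))])   # (n, 5) homogeneous indices
    stuv = (H @ homog.T).T                        # apply intrinsic mapping
    return stuv[:, :4] / stuv[:, 4:5]             # dehomogenize

def triangulate(stuv):
    """Least-squares intersection of rays x = s + z*u, y = t + z*v.
    Each ray contributes two linear equations in the unknowns (x, y, z)."""
    A, b = [], []
    for s, t, u, v in stuv:
        A.append([1.0, 0.0, -u]); b.append(s)     # x - z*u = s
        A.append([0.0, 1.0, -v]); b.append(t)     # y - z*v = t
    P, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return P  # (x, y, z) in metric units
```

With noise-free correspondences generated from a known 3D point, the least-squares solution recovers that point exactly; with real correspondences the residual grows with depth, which is consistent with the accuracy behavior reported below.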
Depth Estimation Experiments
The reconstruction methodology is applied to calibration datasets where the calibration grids are placed at a known depth. The reconstruction is complemented with the reconstruction of randomly selected scene points obtained after projection. The findings suggest a usable depth range between 0 and 1.5 m, which limits the reconstruction to points near the camera.

Figure: Reconstructed depth for datasets A (green), B (blue), and C (cyan) superimposed with the estimates obtained for the calibration grid points of poses 1, 5, and 9 of each dataset (red dots).

References
[1] Dansereau, Donald G., Oscar Pizarro, and Stefan B. Williams. "Decoding, calibration and rectification for lenselet-based plenoptic cameras." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
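An evaluation of this kind reduces to comparing estimated and ground-truth depths inside the usable range. The sketch below is a hypothetical helper (not from the poster) that computes RMS and mean absolute depth error over points whose ground-truth depth falls within the 0 to 1.5 m range suggested by the experiments; the 1.5 m cutoff is taken from the findings above, and the function name and error metrics are assumptions.

```python
import numpy as np

def depth_error_stats(P_est, P_gt, max_range=1.5):
    """Depth error statistics between estimated and ground-truth 3D points,
    restricted to points whose true depth lies within the usable range.
    P_est, P_gt: (n, 3) arrays of (x, y, z) in metres."""
    z_est, z_gt = P_est[:, 2], P_gt[:, 2]
    mask = z_gt <= max_range                      # keep points near the camera
    err = z_est[mask] - z_gt[mask]
    return {"rmse": float(np.sqrt(np.mean(err ** 2))),
            "mae": float(np.mean(np.abs(err))),
            "n_points": int(mask.sum())}
```

Restricting the statistics to the usable range keeps far points, where the disparity between microlens views vanishes, from dominating the error figures.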