Precise Omnidirectional Camera Calibration
Dennis Strelow, Jeffrey Mishler, David Koes, and Sanjiv Singh


Overview (1)
- Projection model for omnidirectional cameras that accounts for the full rotation and translation between camera and mirror
- Projection model handles noncentral omnidirectional cameras
- Calibration algorithm determines relative position from one omnidirectional image of known 3D targets

Overview (2)
- One image sufficient for accurate calibration of the transformation
- Full calibration allows shape-from-motion and epipolar matching even if camera-mirror misalignment is severe
- Full model improves shape-from-motion and epipolar geometry results even if the camera and mirror are closely aligned

Omnidirectional cameras

Omnidirectional projections (1) The mirror point m determines the projection

Omnidirectional projections (2)
Finding the mirror point is…
- one-dimensional (z only) if the mirror and camera are assumed aligned
- closed form for aligned single-viewpoint cameras
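
For the aligned case, a minimal numerical sketch of the idea is below: parametrize the rotationally symmetric mirror profile by a single variable, search for the point where the reflection law holds, and project that point through the conventional pinhole model. The mirror profile, focal length, and scene point are hypothetical stand-ins rather than the paper's actual camera, and the search here runs over the profile radius, which is equivalent to the one-dimensional search in z.

```python
# Minimal sketch of the aligned case: camera pinhole at the origin, optical
# axis and mirror axis both along +z, so the mirror point can be found by a
# one-dimensional search along the mirror profile.  Profile, intrinsics, and
# scene point are hypothetical, not the values used in the paper.
import numpy as np
from scipy.optimize import brentq

def profile(r):            # hypothetical convex mirror profile z = f(r)
    return 0.20 + 5.0 * r**2

def profile_slope(r):      # df/dr for the profile above
    return 10.0 * r

def mirror_point_aligned(P, r_max=0.15):
    """Find the mirror point for scene point P (camera frame, shared +z axis)."""
    phi = np.arctan2(P[1], P[0])          # by symmetry, m lies in this plane
    rho_P, z_P = np.hypot(P[0], P[1]), P[2]

    def residual(r):
        m = np.array([r, profile(r)])     # in-plane (radius, height) coords
        to_cam = -m                       # toward the pinhole at the origin
        to_scene = np.array([rho_P, z_P]) - m
        to_cam = to_cam / np.linalg.norm(to_cam)
        to_scene = to_scene / np.linalg.norm(to_scene)
        n = np.array([-profile_slope(r), 1.0])  # surface normal direction
        b = to_cam + to_scene                   # bisector of the two rays
        return b[0] * n[1] - b[1] * n[0]        # zero when n bisects the rays

    r = brentq(residual, 1e-4, r_max)      # one-dimensional root find
    return np.array([r * np.cos(phi), r * np.sin(phi), profile(r)])

def project_pinhole(m, f=800.0, cx=320.0, cy=240.0):
    """Conventional perspective projection of the mirror point."""
    return np.array([f * m[0] / m[2] + cx, f * m[1] / m[2] + cy])

P = np.array([2.0, -1.0, 0.0])             # hypothetical scene point (metres)
m = mirror_point_aligned(P)
print("mirror point:", m, "image point:", project_pinhole(m))
```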

Omnidirectional projections (3) If the camera and mirror are not aligned, then two constraints determine m
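
When the camera and mirror are misaligned, the sketch below is one way to impose the two constraints numerically: the mirror point has two degrees of freedom (radius and azimuth on the surface), and the reflection law (the surface normal must bisect the rays toward the camera center and toward the scene point) pins it down. The mirror profile, the mirror-to-camera transform, and the scene point are hypothetical, and the small nonlinear least-squares solve shown is an illustration, not necessarily the paper's solution method.

```python
# Sketch of the misaligned case: the mirror point has two degrees of freedom
# (r, phi) on the surface, and the reflection law supplies the constraints.
# Mirror profile, mirror pose, and scene point are hypothetical.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def profile(r):            # hypothetical convex profile z = f(r), mirror frame
    return 0.20 + 5.0 * r**2

def profile_slope(r):      # df/dr
    return 10.0 * r

# Hypothetical mirror-to-camera transform (small tilt and offset).
R_mc = Rotation.from_euler("xyz", [3.0, -2.0, 0.0], degrees=True).as_matrix()
t_mc = np.array([0.01, -0.005, 0.0])

def surface_point(r, phi):
    """Mirror point and its (camera-facing) normal, both in the camera frame."""
    s = np.array([r * np.cos(phi), r * np.sin(phi), profile(r)])
    # Normal of z = f(r): proportional to (f'(r) cos(phi), f'(r) sin(phi), -1).
    n = np.array([profile_slope(r) * np.cos(phi),
                  profile_slope(r) * np.sin(phi), -1.0])
    return R_mc @ s + t_mc, R_mc @ n

def reflection_residual(params, P):
    r, phi = params
    m, n = surface_point(r, phi)
    to_cam = -m / np.linalg.norm(m)            # camera pinhole at the origin
    to_scene = (P - m) / np.linalg.norm(P - m)
    b = to_cam + to_scene                      # bisector of the two rays
    # Reflection law: the normal is parallel to the bisector, i.e. their
    # cross product vanishes (two independent constraints).
    return np.cross(b / np.linalg.norm(b), n / np.linalg.norm(n))

P = np.array([2.0, -1.0, 0.0])                 # hypothetical scene point
sol = least_squares(reflection_residual,
                    x0=[0.05, np.arctan2(P[1], P[0])], args=(P,),
                    bounds=([1e-4, -np.pi], [0.15, np.pi]))
m, _ = surface_point(*sol.x)
print("mirror point (camera frame):", m, "residual cost:", sol.cost)
```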

Equiangular Cameras (1)

Equiangular Cameras (2)
In an equiangular camera, the elevation angle of the incoming scene ray varies linearly with the radial distance of its projection from the image center. Relative rotation and translation between the camera and mirror axes breaks this symmetry and distorts the projections.

Calibration (1)

Calibration (2)
Least squares error to be minimized:
E = Σ_i || x_i − Π(R_c p_i + t_c) ||^2
Known: 2D projections x_i, 3D points p_i
Unknown: camera position R_c, t_c; mirror-to-camera transformation (implicit in Π)
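
One way this objective could be minimized is sketched below: a nonlinear least-squares fit over the camera pose and the mirror-to-camera transform, with Π implemented as a compact version of the noncentral projection sketched earlier (mirror-point solve followed by a pinhole projection). The mirror profile, intrinsics, target layout, parameterization, and the "true" values used to synthesize observations for the toy check are all hypothetical, and the paper's parameterization and optimizer may differ.

```python
# Sketch of the calibration objective: nonlinear least squares over the camera
# pose (R_c, t_c) and the mirror-to-camera transform, with the noncentral
# projection Pi evaluated by solving for the mirror point per target.  All
# numeric values below are hypothetical.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def profile(r):                      # hypothetical mirror profile (mirror frame)
    return 0.20 + 5.0 * r**2

def project(p_cam, mirror_params, f=800.0, cx=320.0, cy=240.0):
    """Pi: find the mirror point for a camera-frame scene point, then pinhole."""
    R_mc = Rotation.from_rotvec(mirror_params[:3]).as_matrix()
    t_mc = mirror_params[3:]

    def refl(q):                     # reflection-law residual on the surface
        r, phi = q
        s = np.array([r * np.cos(phi), r * np.sin(phi), profile(r)])
        n = np.array([10.0 * r * np.cos(phi), 10.0 * r * np.sin(phi), -1.0])
        m = R_mc @ s + t_mc
        b = -m / np.linalg.norm(m) + (p_cam - m) / np.linalg.norm(p_cam - m)
        return np.cross(b, R_mc @ n)

    sol = least_squares(refl, x0=[0.05, np.arctan2(p_cam[1], p_cam[0])],
                        bounds=([1e-4, -np.pi], [0.15, np.pi]))
    r, phi = sol.x
    m = R_mc @ np.array([r * np.cos(phi), r * np.sin(phi), profile(r)]) + t_mc
    return np.array([f * m[0] / m[2] + cx, f * m[1] / m[2] + cy])

def residuals(params, points_world, observations):
    """Stack reprojection errors over all known 3D targets."""
    R_c = Rotation.from_rotvec(params[:3]).as_matrix()
    t_c, mirror = params[3:6], params[6:12]
    return np.concatenate([project(R_c @ p + t_c, mirror) - x
                           for p, x in zip(points_world, observations)])

# Hypothetical known 3D targets and "true" parameters used only to synthesize
# observations for this toy consistency check.
points_world = [np.array([x, y, 0.0]) for x in (1.5, 2.0, 2.5)
                                      for y in (-1.0, 0.0, 1.0)]
true = np.concatenate([[0.02, -0.01, 0.03], [0.10, -0.05, 0.02],   # R_c, t_c
                       [0.03, -0.02, 0.00], [0.01, -0.005, 0.00]]) # mirror
obs = [project(Rotation.from_rotvec(true[:3]).as_matrix() @ p + true[3:6],
               true[6:12]) for p in points_world]

x0 = true + 0.01                     # perturbed initial guess
# diff_step keeps the finite-difference step above the noise of the inner solve.
fit = least_squares(residuals, x0, args=(points_world, obs), diff_step=1e-3)
print("final cost:", fit.cost, "recovered mirror transform:", fit.x[6:12])
```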

Experiments (1)
Basic questions about the calibration:
1. Does the calibration produce the correct mirror-to-camera transformation?
2. Is the model correct, e.g., is it possible to perform SFM with a misaligned mirror?
3. Is the full model worthwhile if the mirror is nearly aligned?

Experiments (2)
Three lab sequences; the mirror and camera axes are:
1. Closely aligned
2. Moderately misaligned
3. Severely misaligned

Experiments (3)

Experiments (4)
Performed shape-from-motion on each sequence using each of three calibrations:
A. Calibrate nothing
B. Calibrate mirror-camera distance
C. Calibrate rotation and translation
Calibration B is interesting because computing the projection in this case is a one-dimensional problem.

Experiments (5): residuals
Difference between observed target image location and reprojected location (pixels)

         Cal. A    Cal. B    Cal. C
Seq 1
Seq 2
Seq 3

Experiments (6): values
Sequences differ mainly in t_x
Standard deviations are small

         t_x (cm)
Seq 1    ±
Seq 2    ±
Seq 3    ±

Experiments (7): apex reprojection
Difference between observed screw center and reprojected mirror apex

         Difference (pixels)
Seq 1
Seq 2
Seq 3

Experiments (8): SFM
Shape-from-motion average reprojection errors (pixels) and depth errors (cm)

         Cal. A      Cal. B      Cal. C
Seq 1    /           /           / 2.0
Seq 2    /           /           / 1.9
Seq 3    /           /           / 1.9

Experiments (9): epipolar error
Average distance in pixels from epipolar line to correct match

         Cal. A    Cal. B    Cal. C
Seq 1
Seq 2
Seq 3