Today: Calibration. What are the camera parameters?

Today: Calibration
What are the camera parameters?
Where are the light sources?
What is the mapping from radiance to pixel color?

Why Calibrate?
We want to solve for 3D geometry.
Alternative approach: solve for 3D shape without known cameras
- structure from motion (unknown extrinsics)
- self-calibration (unknown intrinsics & extrinsics)
Why bother pre-calibrating the camera?
- it simplifies the 3D reconstruction problem (fewer parameters to solve for later on)
- it improves accuracy
- it is not too hard to do
- it eliminates certain ambiguities (e.g., the overall scale of the scene)

Applications
- 3D modeling (images courtesy of Brett Allen, "Vision for Graphics", Winter '01)
- match move
- image-based rendering, e.g. light field capture and rendering (Levoy & Hanrahan, 1996)

Camera Parameters
So far we have talked about:
- focal length
- principal (and nodal) point
- radial distortion
- CCD dimensions
- aperture
There is also:
- optical center
- orientation
- digitizer parameters

Do we need all this stuff?
Usually we simplify to the "computable stuff".
Intrinsics:
- scale factor ("focal length")
- aspect ratio
- principal point
- radial distortion
Extrinsics:
- optical center
- camera orientation
How does this relate to the projection matrix?
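As a hedged illustration of how these intrinsics are usually packed into a matrix (the symbols f, a, s, u_c, v_c are my notation, not from the slides; radial distortion does not fit this linear model and is handled separately):

$$ A \;=\; \begin{bmatrix} f & s & u_c \\ 0 & a\,f & v_c \\ 0 & 0 & 1 \end{bmatrix} $$

where f is the scale factor ("focal length" in pixels), a the aspect ratio, s the skew (often assumed to be 0), and (u_c, v_c) the principal point.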

Projection Models
- orthographic
- weak perspective
- affine
- perspective
- projective

The Projection Matrix
Matrix projection (in homogeneous coordinates): x ≃ M X, where X is a 3D scene point and x its image.
M can be decomposed into intrinsics, projection, orientation, and position:

$$ M \;=\; \underbrace{A}_{\text{intrinsics}}\; \underbrace{\begin{bmatrix} I_{3\times 3} & 0 \end{bmatrix}}_{\text{projection}}\; \underbrace{\begin{bmatrix} R & 0 \\ 0 & 1 \end{bmatrix}}_{\text{orientation}}\; \underbrace{\begin{bmatrix} I_{3\times 3} & t \\ 0 & 1 \end{bmatrix}}_{\text{position}} $$
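A small NumPy sketch of this decomposition, composing M and projecting one point; every numeric value below is a made-up example, not something from the slides:

```python
# Compose M = A * [I | 0] * [R 0; 0 1] * [I t; 0 1] and project one 3D point.
# All parameter values are illustrative assumptions.
import numpy as np

A = np.array([[800.0,   0.0, 320.0],   # intrinsics: focal length (px), principal point
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

P0 = np.hstack([np.eye(3), np.zeros((3, 1))])        # canonical projection [I | 0]

theta = np.deg2rad(10.0)                              # example rotation about the y-axis
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
Rh = np.block([[R, np.zeros((3, 1))],
               [np.zeros((1, 3)), np.ones((1, 1))]])  # 4x4 orientation

t = np.array([[0.1], [0.0], [2.0]])                   # example translation
Th = np.block([[np.eye(3), t],
               [np.zeros((1, 3)), np.ones((1, 1))]])  # 4x4 position

M = A @ P0 @ Rh @ Th                                  # 3x4 projection matrix

X = np.array([0.5, 0.2, 3.0, 1.0])                    # homogeneous 3D point
x = M @ X
u, v = x[0] / x[2], x[1] / x[2]                       # pixel coordinates
print(f"projects to ({u:.1f}, {v:.1f})")
```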

Goal of Calibration
Learn the mapping from 3D to 2D. It can take different forms:
- a projection matrix
- explicit camera parameters (intrinsics and extrinsics)
- a general mapping

Calibration: Basic Idea
Place a known object in the scene:
- identify correspondences between image and scene points
- compute the mapping from scene to image
Problem: we must know the object's geometry very accurately. How do we get this information?

Alternative: Multi-plane calibration
(Images courtesy of Jean-Yves Bouguet, Intel Corp.)
Advantages:
- only requires a planar pattern
- the plane positions and orientations don't have to be known
- good code is available online:
  Zhengyou Zhang's web site: http://research.microsoft.com/~zhang/Calib/
  Intel's OpenCV library: http://www.intel.com/research/mrl/research/opencv/
  Matlab version by Jean-Yves Bouguet: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html
Disadvantages?
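For concreteness, here is a minimal sketch of multi-plane calibration using OpenCV's chessboard utilities (in the spirit of the toolkits linked above); the board dimensions, square size, and image folder are assumptions for illustration:

```python
# Zhang-style multi-plane calibration from chessboard images (sketch).
# pattern_size, square_size, and the image folder are illustrative assumptions.
import glob
import cv2
import numpy as np

pattern_size = (9, 6)      # inner corners per row / column of the chessboard
square_size = 0.025        # edge length of one square, in meters

# 3D corner coordinates in the board's own frame (the plane Z = 0)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Solves for the intrinsic matrix, distortion coefficients, and one (R, t) per view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```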

Alternative: Multi-plane calibration
(Images courtesy of Jean-Yves Bouguet, Intel Corp.)
We still need the 3D -> 2D correspondences. They can be:
- user provided (lots of clicking)
- user seeded (some clicking)
- fully automatic?

Chromaglyphs
Courtesy of Bruce Culbertson, HP Labs
http://www.hpl.hp.com/personal/Bruce_Culbertson/ibr98/chromagl.htm

Projector Calibration
A projector is the "inverse" of a camera:
- it has the same parameters, light just flows in reverse
- but how do we figure out where the projector is?
Basic idea:
- first calibrate the camera with respect to the projection screen
- now we can compute the 3D coordinates of each projected point
- use standard camera calibration routines to find the projector parameters, since we now know the 3D -> projector mapping

Calibration Approaches
Possible approaches (not comprehensive!):
Experimental design:
- planar patterns
- non-planar grids
Optimization techniques:
- direct linear regression
- non-linear optimization
Cues:
- 3D -> 2D correspondences
- vanishing points
- special camera motions (panorama stitching, circular camera movement)
We want both accuracy and ease of use; there is usually a trade-off between them.

Properties of Projection
Preserves:
- lines and conics
- incidence
- invariants (the cross-ratio)
One can show that the only transformations that preserve lines and incidence are the projective transformations.
Does not preserve:
- lengths
- angles
- parallelism
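As a reminder of the invariant mentioned above (one common convention; the formula is standard and not taken from the slides), the cross-ratio of four collinear points P_1, ..., P_4 is

$$ \mathrm{CR}(P_1, P_2; P_3, P_4) \;=\; \frac{\overline{P_1 P_3}\;\overline{P_2 P_4}}{\overline{P_1 P_4}\;\overline{P_2 P_3}}, $$

computed with signed (directed) lengths along the line; this value is unchanged by any projective transformation.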

Estimating the Projection Matrix
Place a known object in the scene:
- identify correspondences between image and scene points
- compute the mapping from scene to image

Direct Linear Calibration

Direct Linear Calibration
What error function are we minimizing? We can solve for the entries m_ij of M by linear least squares.
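As a hedged sketch of the linear system this refers to (the index convention for the entries m_jk is my own), each correspondence between a 3D point (X_i, Y_i, Z_i) and its image (u_i, v_i) gives

$$ u_i = \frac{m_{00}X_i + m_{01}Y_i + m_{02}Z_i + m_{03}}{m_{20}X_i + m_{21}Y_i + m_{22}Z_i + m_{23}}, \qquad v_i = \frac{m_{10}X_i + m_{11}Y_i + m_{12}Z_i + m_{13}}{m_{20}X_i + m_{21}Y_i + m_{22}Z_i + m_{23}} . $$

Cross-multiplying yields two equations per correspondence that are linear in the m_jk; stacking them over all correspondences gives a homogeneous system that can be solved via the smallest singular vector (or by least squares after fixing one entry of M). Note that this minimizes an algebraic error rather than the image-space reprojection error, which is exactly the question the slide raises.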

Nonlinear estimation
Start from the feature measurement equations and minimize the "image-space error" e(M).
How do we minimize e(M)? By non-linear regression (least squares); a popular choice is Levenberg-Marquardt [Press '92].
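A minimal sketch of this refinement with SciPy's Levenberg-Marquardt solver; the arrays X (N x 3 world points) and x_obs (N x 2 image points) and the helper names are assumptions, not part of the slides:

```python
# Minimize the image-space (reprojection) error e(M) with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(m, X, x_obs):
    M = m.reshape(3, 4)
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])    # homogeneous world points
    proj = Xh @ M.T                                   # N x 3
    pred = proj[:, :2] / proj[:, 2:3]                 # perspective divide
    return (pred - x_obs).ravel()                     # stacked u, v residuals

def refine_projection_matrix(m0, X, x_obs):
    # m0: initial guess for the 12 entries of M, e.g. from the direct linear solution
    result = least_squares(reprojection_residuals, m0, args=(X, x_obs), method="lm")
    return result.x.reshape(3, 4)
```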

Statistical estimation
Starting from the feature measurement equations, write down the likelihood of the measurements given M, and let C be the negative log-likelihood. Minimizing C with respect to M gives the maximum likelihood estimate (MLE); the covariance of the estimate is given by the inverse Hessian, i.e. the inverse of the matrix of second derivatives of C.
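To connect this with the previous slide, assuming independent Gaussian noise of standard deviation sigma on the measured feature locations x_i (an assumption of mine, not stated explicitly here):

$$ C(M) \;=\; -\log \prod_i p(x_i \mid M) \;=\; \frac{1}{2\sigma^2} \sum_i \bigl\| x_i - \hat{x}_i(M) \bigr\|^2 \;+\; \text{const}, $$

so under that noise model the MLE coincides with the least-squares (image-space error) estimate.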

Camera matrix calibration
Advantages:
- very simple to formulate and solve
- can recover K [R | t] from M using an RQ decomposition [Golub & Van Loan '96]
Disadvantages:
- doesn't model radial distortion
- more unknowns than true degrees of freedom
- (sometimes) need a separate camera matrix for each new view
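A brief sketch of the K [R | t] recovery mentioned above, using SciPy's RQ factorization; the sign-fixing convention is a common choice, not something specified on the slide:

```python
# Recover K, R, t from a 3x4 projection matrix M = K [R | t] via RQ decomposition.
import numpy as np
from scipy.linalg import rq

def decompose_projection_matrix(M):
    M = M / np.linalg.norm(M[2, :3])   # fix the overall scale so K[2, 2] ends up ~1
    K, R = rq(M[:, :3])                # RQ factorization of the left 3x3 block
    # Flip signs so that the diagonal of K is positive
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, M[:, 3])    # last column of M is K t
    if np.linalg.det(R) < 0:           # ensure a proper rotation (M is defined up to sign)
        R, t = -R, -t
    return K, R, t
```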

Separate intrinsics / extrinsics
New feature measurement equations, where i indexes features and j indexes images.
Use non-linear minimization, e.g. Levenberg-Marquardt [Press '92]. This is the standard technique in photogrammetry, computer vision, and computer graphics.
- [Tsai '87] also estimates k1 (freeware at CMU): http://www.cs.cmu.edu/afs/cs/project/cil/ftp/html/v-source.html
- [Zhang '99] estimates k1 and k2 and is easier to use than Tsai's code; available from Zhang's web site and in Intel's OpenCV:
  http://research.microsoft.com/~zhang/Calib/
  http://www.intel.com/research/mrl/research/opencv/
- Matlab version by Jean-Yves Bouguet: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html

Calibration from (unknown) Planes
What is the image of a plane under perspective projection? A homography (a 3x3 projective transformation), which preserves lines, incidence, and conics.
H depends on the camera parameters (A, R, t): for points on the plane Z = 0, H ≃ A [r1 r2 t], where r1 and r2 are the first two columns of R.
Given 3 homographies, we can compute A, R, and t.
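To make "given 3 homographies, can compute A, R, t" concrete, here is a hedged sketch of the standard argument from Zhang's method (the notation h_1, h_2 for the first two columns of H is mine): writing H = [h_1 h_2 h_3] ≃ A [r_1 r_2 t] and using the orthonormality of r_1 and r_2,

$$ h_1^\top A^{-\top} A^{-1} h_2 = 0, \qquad h_1^\top A^{-\top} A^{-1} h_1 = h_2^\top A^{-\top} A^{-1} h_2 . $$

Each plane thus contributes two linear constraints on the symmetric matrix B = A^{-T} A^{-1}, which has five unknowns up to scale, so three planes determine B and hence A; R and t then follow from each H once A is known.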

Calibration from Planes
1. Compute a homography Hi for 3+ planes (see the sketch after this list).
   - doesn't require knowing the 3D geometry
   - does require correspondences between at least 4 points on the plane and in the image (both expressed in 2D plane coordinates)
2. Solve for A, R, t from H1, H2, H3.
   - 1 plane suffices if only f is unknown
   - 2 planes if (f, uc, vc) are unknown
   - 3+ planes for the full K
3. Introduce a radial distortion model and solve for A, R, t, k1, k2 by nonlinear optimization (using Levenberg-Marquardt).
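As referenced in step 1 above, a small sketch of estimating one per-plane homography from four or more plane-to-image correspondences with OpenCV; the array names are assumptions:

```python
# Step 1 sketch: estimate a homography H_i from >= 4 correspondences between
# points on the calibration plane (2D plane coordinates) and their image positions.
import cv2
import numpy as np

def estimate_plane_homography(plane_pts, image_pts):
    plane_pts = np.asarray(plane_pts, dtype=np.float32)   # N x 2, N >= 4
    image_pts = np.asarray(image_pts, dtype=np.float32)   # N x 2
    # RANSAC keeps the estimate robust if a few correspondences are wrong
    H, inliers = cv2.findHomography(plane_pts, image_pts, cv2.RANSAC, 3.0)
    return H
```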