Vision Review: Image Formation Course web page: www.cis.udel.edu/~cer/arv September 10, 2002.

Announcements Lecture on Thursday will be about Matlab; next Tuesday will be “Image Processing” The dates on which some early papers will be presented are posted; who is presenting which is not yet (except that I'm presenting the first two) In particular, read the “Video Mosaics” paper up to the “Projective Depth Recovery” section Supporting readings: Chapters 1, 3 (through the “Hue, Saturation, and Value” subsection), 5 (through 5.3.2), and 7.4 of Forsyth & Ponce

Computer Vision Review Outline –Image formation –Image processing –Motion & estimation –Classification

The Image Formation Pipeline

Outline: Image Formation Geometry –Coordinate systems, transformations –Perspective projection –Lenses Radiometry –Light emission, interaction with surfaces Analog → Digital –Spatial sampling –Dynamic range –Temporal integration

Coordinate System Conventions Unit vectors i, j, k along the positive x, y, z axes, respectively; Right- vs. left-handed coordinates; Local coordinate systems: camera, world, etc.

Homogeneous Coordinates (Projective Space) Let (x, y, z) be a point in Euclidean space Change to homogeneous coordinates: (x, y, z) → (x, y, z, 1) Defined up to scale: (x, y, z, w) ≡ (λx, λy, λz, λw) for any λ ≠ 0 Can go back to the non-homogeneous representation as follows: (x, y, z, w) → (x/w, y/w, z/w)
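The two conversions above are one-liners; a minimal NumPy sketch (in Python rather than the course's Matlab, and the function names are my own):

```python
import numpy as np

def to_homogeneous(p):
    """Append w = 1: (x, y, z) -> (x, y, z, 1)."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def from_homogeneous(ph):
    """Divide through by w: (x, y, z, w) -> (x/w, y/w, z/w)."""
    ph = np.asarray(ph, dtype=float)
    return ph[:-1] / ph[-1]
```

Note the scale invariance: multiplying a homogeneous vector by any nonzero λ leaves the recovered Euclidean point unchanged.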

3-D Transformations: Translation Ordinarily, a translation between points is expressed as a vector addition x' = x + t Homogeneous coordinates allow it to be written as a matrix multiplication: (x', 1)ᵀ = [I t; 0ᵀ 1] (x, 1)ᵀ
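A minimal sketch of that 4 x 4 translation matrix (NumPy; the helper name is my own):

```python
import numpy as np

def translation_matrix(t):
    """Build the 4 x 4 homogeneous matrix [I t; 0 1], so that
    multiplying (x, 1) by it implements x' = x + t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T
```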

3-D Rotations: Euler Angles Can decompose a rotation about an arbitrary 3-D axis into rotations about the coordinate axes (“yaw-roll-pitch”): R = Rz(γ) Ry(β) Rx(α), where each factor is a rotation about a single axis (clockwise when looking toward the origin)
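The three elementary rotations and their composition can be sketched as follows (NumPy; which angle is called yaw, pitch, or roll varies by convention, so take the names as assumptions):

```python
import numpy as np

def rot_x(a):
    """Rotation about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    """Rotation about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    """Rotation about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def euler_rotation(alpha, beta, gamma):
    """Compose the elementary rotations into one 3-D rotation."""
    return rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)
```

Any such composition is orthogonal with determinant +1, which makes a quick sanity check.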

3-D Transformations: Rotation A rotation of a point about an arbitrary axis, normally expressed as a multiplication by the rotation matrix (x' = Rx), is written with homogeneous coordinates as follows: (x', 1)ᵀ = [R 0; 0ᵀ 1] (x, 1)ᵀ

3-D Transformations: Change of Coordinates Any rigid transformation can be written as a combined rotation and translation: x' = Rx + t, i.e., as the single homogeneous matrix [R t; 0ᵀ 1]

3-D Transformations: Change of Coordinates Points in one coordinate system are transformed to the other as follows: x_cam = R (X_world − C) (taking the camera to the world origin) The pair (R, t), with t = −RC, represents the camera's extrinsic parameters
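A sketch of the world-to-camera transform (NumPy; assumes a known rotation R and camera center C, with t = −RC, and the function name is my own):

```python
import numpy as np

def extrinsic_matrix(R, C):
    """Return [R | t] with t = -R C, mapping homogeneous world points
    into the camera frame: x_cam = R (X - C)."""
    t = -R @ np.asarray(C, dtype=float)
    return np.hstack([R, t.reshape(3, 1)])
```

As expected, the camera center itself maps to the origin of the camera frame.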

Pinhole Camera Model (figure showing: camera center, principal point, image point, camera point, image plane, focal length, optical axis)

Pinhole Perspective Projection Letting the camera coordinates of the projected point be (x, y, z), similar triangles lead to the image coordinates: u = f x / z, v = f y / z

Projection Matrix Using homogeneous coordinates, we can describe perspective projection with a linear equation: (u, v, 1)ᵀ ≅ [f 0 0 0; 0 f 0 0; 0 0 1 0] (x, y, z, 1)ᵀ, where ≅ denotes equality up to scale (by the rule for converting between homogeneous and regular coordinates, this recovers u = f x / z and v = f y / z)
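The projection equation is easy to check numerically (NumPy sketch; names are my own):

```python
import numpy as np

def projection_matrix(f):
    """The 3 x 4 perspective projection [f 0 0 0; 0 f 0 0; 0 0 1 0]."""
    return np.array([[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]], dtype=float)

def project(P, Xh):
    """Apply P to a homogeneous 3-D point and dehomogenize."""
    x = P @ Xh
    return x[:2] / x[2]
```

For f = 2 and camera point (3, 4, 5), this yields (f·x/z, f·y/z) = (1.2, 1.6).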

Camera Calibration Matrix A more general projection matrix allows: –Image coordinates with an offset origin (x0, y0) (e.g., the convention of the upper left corner) –Non-square pixels (scale factors αx ≠ αy) –Skewed coordinate axes (skew s) This gives the calibration matrix K = [αx s x0; 0 αy y0; 0 0 1]; these five variables are known as the camera's intrinsic parameters

Combining Intrinsic & Extrinsic Parameters The transformation performed by a pinhole camera on an arbitrary point X can thus be written as: x = K [R | t] X P = K [R | t] is called the camera matrix
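Putting the pieces together, a minimal sketch of P = K [R | t] (NumPy; assumes the five intrinsics of the previous slide, and the helper names are my own):

```python
import numpy as np

def intrinsic_matrix(ax, ay, x0, y0, s=0.0):
    """K with focal lengths ax, ay (in pixels), principal point (x0, y0),
    and skew s: the five intrinsic parameters."""
    return np.array([[ax, s, x0], [0.0, ay, y0], [0.0, 0.0, 1.0]])

def camera_matrix(K, R, t):
    """The 3 x 4 camera matrix P = K [R | t]."""
    return K @ np.hstack([R, np.reshape(t, (3, 1))])
```

With R = I, t = 0, and square pixels this reduces to the bare projection matrix of the earlier slide.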

Homographies 2-D to 2-D projective transformation mapping points from plane to plane (e.g., the image of a plane) A 3 x 3 homogeneous matrix defines the homography H such that x' = Hx for any pair of corresponding points x and x' (from Hartley & Zisserman)

Computing the Homography 8 degrees of freedom in H, so 4 pairs of 2-D points are sufficient to determine it –Other combinations of points and lines also work 3 collinear points in either image are a degenerate configuration preventing a unique solution Direct Linear Transformation (DLT) algorithm: a least-squares method for estimating H

DLT Homography Estimation: Each of n Correspondences Since the vectors are homogeneous, x'_i and Hx_i are parallel, so x'_i × Hx_i = 0 Let h_jᵀ be row j of H, and h = (h_1ᵀ, h_2ᵀ, h_3ᵀ)ᵀ the stacked rows Expanding and rearranging the cross product above, we obtain A_i h = 0, where, writing x_i = (x_i, y_i, w_i)ᵀ and x'_i = (x'_i, y'_i, w'_i)ᵀ: A_i = [0ᵀ, −w'_i x_iᵀ, y'_i x_iᵀ; w'_i x_iᵀ, 0ᵀ, −x'_i x_iᵀ; −y'_i x_iᵀ, x'_i x_iᵀ, 0ᵀ]

DLT Homography Estimation: Solve System Only 2 of the 3 equations in each A_i are linearly independent, so leave out the 3rd row to make A_i 2 x 9 Stack every A_i to get a 2n x 9 matrix A Solve Ah = 0 by computing the singular value decomposition A = UDVᵀ; h is the last column of V (the singular vector of the smallest singular value) The solution is improved by normalizing image coordinates before applying the DLT
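The whole DLT recipe fits in a few lines (NumPy sketch; the normalization step mentioned above is omitted for brevity, which is harmless on exact synthetic data but matters for noisy measurements):

```python
import numpy as np

def dlt_homography(pts, pts_prime):
    """Estimate H (up to scale) from n >= 4 point correspondences.

    pts, pts_prime: (n, 2) arrays of matching image points.
    Builds the two independent rows of A_i per correspondence,
    stacks them into a 2n x 9 matrix, and takes the right singular
    vector associated with the smallest singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(pts, pts_prime):
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)  # last row of Vt = last column of V
```

Since H is only determined up to scale, compare an estimate to ground truth after normalizing, e.g. by the (3, 3) entry.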

Applying Homographies to Remove Perspective Distortion 4 point correspondences suffice for the planar building facade (from Hartley & Zisserman)

Homographies for Bird’s-eye Views from Hartley & Zisserman

Homographies for Mosaicing from Hartley & Zisserman

Camera Calibration For a given camera, how do we deduce the camera matrix P so that we can accurately predict the image locations of known points in the world? Basic idea: take images of measured 3-D objects, and estimate the camera parameters that minimize the difference between observations and predictions

Estimating K Now we have a 3-D to 2-D projective transformation described by x = PX Follow the approach of the DLT used for homography estimation, except now: –P is 3 x 4, so we need 5 1/2 point correspondences –Degeneracy occurs when the 3-D points are coplanar, or lie on a twisted cubic (space curve) together with the camera center –Use RQ decomposition to separate the estimated P into K and [R | t]
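The same machinery extends to the 3 x 4 case, followed by an RQ decomposition (NumPy sketch; since NumPy has no RQ routine, `rq` is implemented here via a flipped QR, and all names are my own):

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate the 3 x 4 matrix P (up to scale) from n >= 6
    non-degenerate 3-D points X[i] and their 2-D images x[i]."""
    rows = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(np.asarray(Xw, dtype=float), 1.0)
        rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def rq(M):
    """RQ decomposition of a 3 x 3 matrix via a flipped QR."""
    Q, R = np.linalg.qr(np.flipud(M).T)
    K, rot = np.fliplr(np.flipud(R.T)), np.flipud(Q.T)
    D = np.diag(np.sign(np.diag(K)))  # force a positive diagonal on K
    return K @ D, D @ rot             # D is its own inverse

def decompose(P):
    """Split an estimated P into intrinsics K, rotation R, translation t."""
    K, R = rq(P[:, :3])
    t = np.linalg.solve(K, P[:, 3])
    if np.linalg.det(R) < 0:  # the overall sign of P is arbitrary
        R, t = -R, -t
    return K / K[2, 2], R, t
```

On exact synthetic data this recovers the ground-truth K, R, and t up to the usual scale normalization of K.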

Real Pinhole Cameras Actual pinhole cameras place the camera center between the image plane and the scene, reversing the image Problem: the size of the hole leads to a sharpness vs. dimness trade-off, and a really small hole introduces diffraction effects Solution: a light-gathering lens (figure: a fuzzy pinhole image, courtesy of Paul Debevec)

Lenses Benefits: Increase light-gathering power by focusing bundles of rays from scene points onto image points Complications –Limited depth of field –Radial, tangential distortion: straight lines appear curved –Vignetting: image darker at the edges –Chromatic aberration: focal length is a function of wavelength

Correcting Radial Distortion (figure: distorted image vs. after correction, courtesy of Shawn Becker)

Modeling Radial Distortion A function of the distance r to the camera center Approximate with a polynomial in r, e.g. x_distorted = x (1 + k1 r² + k2 r⁴) Necessitates nonlinear camera calibration
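A common two-coefficient polynomial model (similar in spirit to what calibration toolboxes use; the function name and coefficient count are my own simplification) can be sketched as:

```python
import numpy as np

def radial_distort(xn, k1, k2):
    """Apply polynomial radial distortion to normalized image points.

    xn: (n, 2) ideal coordinates relative to the distortion center;
    k1, k2: radial coefficients. x_distorted = x * (1 + k1 r^2 + k2 r^4).
    """
    r2 = np.sum(xn ** 2, axis=1, keepdims=True)
    return xn * (1.0 + k1 * r2 + k2 * r2 ** 2)
```

Because the model is nonlinear in r, inverting it and fitting k1, k2 during calibration require iterative nonlinear optimization.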

Camera Calibration Software Examples of camera calibration software in Links section of course web page Will discuss Bouguet’s Matlab toolbox on Thursday courtesy of Jean-Yves Bouguet