Peripheral drift illusion

Multiple views: stereo vision, structure from motion, optical flow. Slide credits: Hartley and Zisserman; Lowe.

Why multiple views? Structure and depth are inherently ambiguous from single views. Images from Lana Lazebnik

Why multiple views? Structure and depth are inherently ambiguous from single views. [Figure: two scene points P1 and P2 on the same ray through the optical center project to the same image point, P1′ = P2′.]

Camera parameters
Intrinsic parameters: image coordinates relative to the camera → pixel coordinates. These include the focal length, pixel sizes (mm), image center point, and radial distortion parameters.
Extrinsic parameters: camera frame 1 → camera frame 2, given by a rotation matrix and a translation vector.
We'll assume for now that these parameters are given and fixed.

Geometry for a simple stereo system. First, assume parallel optical axes and known camera parameters (i.e., calibrated cameras):

[Figure: simple stereo setup with baseline T between the left and right optical centers O_l and O_r, focal length f, a world point P, its left and right image points p_l and p_r, and the depth Z of P.]

Geometry for a simple stereo system. Assume parallel optical axes and known camera parameters (i.e., calibrated cameras). What is the expression for Z? Similar triangles (p_l, P, p_r) and (O_l, P, O_r) give

T / Z = (T − (x_l − x_r)) / (Z − f)

and solving for depth:

Z = f · T / (x_l − x_r)

where x_l − x_r is the disparity.

Depth from disparity. Given image I(x, y) and image I′(x′, y′), the disparity map D(x, y) relates corresponding points by (x′, y′) = (x + D(x, y), y). So if we could find the corresponding points in two images, we could estimate relative depth…
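Given the relation Z = f·T / disparity from above, the conversion is one line of code. A minimal MATLAB sketch, assuming a disparity map D in pixels, a focal length f in pixels, and a baseline T in meters (all names illustrative):

    function Z = depth_from_disparity(D, f, T)
    % Z = f*T ./ disparity for a parallel-axis stereo rig.
    % D: disparity map (pixels); f: focal length (pixels); T: baseline (m).
    Z = f * T ./ max(D, eps);   % clamp zero disparity to avoid division by zero
    end

Example usage: Z = depth_from_disparity(D, 700, 0.12);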

What did we need to know?
– Correspondence for every pixel. Sort of like Project 2, but Project 2 is "sparse" and we need "dense" correspondence.
– Calibration for the cameras.

Where do we need to search?

How do we calibrate a camera?

World vs Camera coordinates

Image formation. Let's design a camera.
– Idea 1: put a piece of film in front of an object
– Do we get a reasonable image?
Slide source: Seitz

Pinhole camera. Idea 2: add a barrier to block off most of the rays.
– This reduces blurring
– The opening is known as the aperture
Slide source: Seitz

Pinhole camera. [Figure from Forsyth: pinhole geometry with f = focal length and c = center of the camera.]

Projection: world coordinates → image coordinates. A scene point X = (X, Y, Z) projects through the pinhole to the image point (f·X/Z, f·Y/Z) for focal length f.

Homogeneous coordinates
Converting to homogeneous coordinates: (x, y) → [x, y, 1]ᵀ (homogeneous image coordinates); (x, y, z) → [x, y, z, 1]ᵀ (homogeneous scene coordinates).
Converting from homogeneous coordinates: [x, y, w]ᵀ → (x/w, y/w); [x, y, z, w]ᵀ → (x/w, y/w, z/w).
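A small MATLAB sketch of these conversions, assuming points are stored one per column (uses implicit expansion, R2016b+):

    X  = [1 2; 3 4; 5 6];            % 3xN scene points, one per column
    Xh = [X; ones(1, size(X, 2))];   % to homogeneous: append a row of ones

    xh = [2 4; 6 8; 2 2];            % 3xN homogeneous image points
    x  = xh(1:2, :) ./ xh(3, :);     % from homogeneous: divide by w, drop it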

World vs Camera coordinates

Projection matrix: x = K [R t] X
x: image coordinates (u, v, 1)
K: intrinsic matrix (3×3)
R: rotation (3×3)
t: translation (3×1)
X: world coordinates (X, Y, Z, 1)
[Figure: world frame with origin O_w and axes i_w, j_w, k_w, related to the camera by R, t.]
Slide credit: Savarese

Pinhole Camera Model: x = K [R t] X
x: image coordinates (u, v, 1)
K: intrinsic matrix (3×3)
R: rotation (3×3)
t: translation (3×1)
X: world coordinates (X, Y, Z, 1)

Projection matrix under the simplest assumptions.
Intrinsic assumptions: unit aspect ratio, optical center at (0, 0), no skew.
Extrinsic assumptions: no rotation, camera at (0, 0, 0).
Then x = K [I 0] X with K = [f 0 0; 0 f 0; 0 0 1].
Slide credit: Savarese

Remove assumption: known optical center.
Intrinsic assumptions: unit aspect ratio, no skew.
Extrinsic assumptions: no rotation, camera at (0, 0, 0).
K = [f 0 u0; 0 f v0; 0 0 1]

Remove assumption: square pixels.
Intrinsic assumptions: no skew.
Extrinsic assumptions: no rotation, camera at (0, 0, 0).
K = [α 0 u0; 0 β v0; 0 0 1]

Remove assumption: non-skewed pixels.
Intrinsic assumptions: none remaining.
Extrinsic assumptions: no rotation, camera at (0, 0, 0).
K = [α s u0; 0 β v0; 0 0 1]
Note: different books use different notation for the parameters.
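Putting the removed assumptions together, a MATLAB sketch of assembling K; all parameter values below are placeholders, not calibrated numbers:

    alpha = 800;  beta = 820;   % focal lengths in pixels along x and y (assumed)
    s     = 0;                  % skew, usually near zero for modern sensors
    u0    = 320;  v0 = 240;     % optical center in pixels (assumed)
    K = [alpha  s     u0;
         0      beta  v0;
         0      0     1];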

Oriented and Translated Camera
[Figure: world frame with origin O_w and axes i_w, j_w, k_w; the camera is displaced from it by rotation R and translation t.]

Allow camera translation.
Intrinsic assumptions: none.
Extrinsic assumptions: no rotation.
x = K [I t] X

3D Rotation of Points. Rotation of a point p to p′ around the coordinate axes, counter-clockwise by angles α, β, γ:

R_x(α) = [1 0 0; 0 cos α −sin α; 0 sin α cos α]
R_y(β) = [cos β 0 sin β; 0 1 0; −sin β 0 cos β]
R_z(γ) = [cos γ −sin γ 0; sin γ cos γ 0; 0 0 1]

Slide credit: Savarese
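These three matrices are easy to write down as MATLAB anonymous functions (a sketch; angles in radians):

    Rx = @(a) [1 0 0; 0 cos(a) -sin(a); 0 sin(a) cos(a)];   % about x
    Ry = @(b) [cos(b) 0 sin(b); 0 1 0; -sin(b) 0 cos(b)];   % about y
    Rz = @(g) [cos(g) -sin(g) 0; sin(g) cos(g) 0; 0 0 1];   % about z
    R  = Rz(0.1) * Ry(0.2) * Rx(0.3);   % composed rotation; order matters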

Allow camera rotation: x = K [R t] X, the general pinhole projection.
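With K, R, and t in hand, the full projection is a few lines of MATLAB. A sketch, reusing K and R from the snippets above and assuming t is a 3×1 translation and Xw a 3×N matrix of world points (illustrative names):

    P  = K * [R, t];                    % 3x4 projection matrix
    Xh = [Xw; ones(1, size(Xw, 2))];    % homogeneous world points (4xN)
    xh = P * Xh;                        % homogeneous image points (3xN)
    uv = xh(1:2, :) ./ xh(3, :);        % dehomogenize to pixel coordinates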

Degrees of freedom: 5 intrinsic (in K) + 6 extrinsic (R, t) = 11 for the full projection matrix.

Beyond Pinholes: Radial Distortion
Common in wide-angle lenses or for special applications (e.g., security cameras). Creates non-linear terms in the projection. Usually handled by solving for the non-linear terms and then correcting the image.
[Figure from Martin Habbecke: barrel distortion and the corrected image.]
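One common low-order radial model, as a sketch: distortion scales normalized image coordinates by a polynomial in the squared radius. Here xn, yn and the coefficients k1, k2 are assumed inputs, not values from this lecture:

    r2    = xn.^2 + yn.^2;              % squared radius from the optical axis
    scale = 1 + k1*r2 + k2*r2.^2;       % radial scaling factor
    xd = scale .* xn;                   % distorted normalized coordinates
    yd = scale .* yn;
    % Undistortion inverts this mapping, typically by fixed-point iteration.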

How to calibrate the camera?

Calibrating the Camera
Use a scene with known geometry:
– Correspond image points to 3D points
– Get a least-squares solution (or a non-linear solution)

How do we calibrate a camera?

Method 1 – homogeneous linear system. Solve for the entries of m using linear least squares in Ax = 0 form; the solution is the right singular vector of A associated with the smallest singular value:

    [U, S, V] = svd(A);       % singular value decomposition of A
    M = V(:, end);            % right singular vector for the smallest singular value
    M = reshape(M, [], 3)';   % 12-vector -> 3x4 camera matrix
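The slide leaves the construction of A implicit. As a sketch, each 2D↔3D correspondence contributes two rows, ordered to match the reshape above; Xw (3×N world points) and uv (2×N image points) are illustrative names:

    n = size(Xw, 2);
    A = zeros(2*n, 12);
    for i = 1:n
        X = Xw(:, i)';  u = uv(1, i);  v = uv(2, i);
        A(2*i-1, :) = [X 1  0 0 0 0  -u*X  -u];   % row for the u equation
        A(2*i,   :) = [0 0 0 0  X 1  -v*X  -v];   % row for the v equation
    end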

Method 2 – non-homogeneous linear system. Fix the scale by setting m34 = 1 and solve for the remaining entries of m using linear least squares in Ax = b form:

    M = A \ Y;                % least-squares solution of the overdetermined system
    M = [M; 1];               % append the fixed m34 = 1
    M = reshape(M, [], 3)';   % 12-vector -> 3x4 camera matrix
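A matching sketch for this form, reusing Xw, uv, and n from the previous sketch: the m34 column is dropped (fixed to 1) and the measurements move to the right-hand side:

    A = zeros(2*n, 11);  Y = zeros(2*n, 1);
    for i = 1:n
        X = Xw(:, i)';  u = uv(1, i);  v = uv(2, i);
        A(2*i-1, :) = [X 1  0 0 0 0  -u*X];
        A(2*i,   :) = [0 0 0 0  X 1  -v*X];
        Y(2*i-1) = u;  Y(2*i) = v;
    end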

Calibration with the linear method
Advantages:
– Easy to formulate and solve
– Provides initialization for non-linear methods
Disadvantages:
– Doesn't directly give you the camera parameters
– Doesn't model radial distortion
– Can't impose constraints, such as a known focal length
Non-linear methods are preferred:
– Define the error as the difference between projected points and measured points
– Minimize the error using Newton's method or another non-linear optimizer
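As a sketch of that non-linear step, one can minimize the summed squared reprojection error over the entries of m, starting from the linear solution. Xw, uv, and M are taken from the sketches above; fminsearch is base MATLAB, though a Gauss-Newton or Levenberg-Marquardt solver would converge faster (and distortion coefficients could be appended to m and applied inside the error function, which the linear methods cannot do):

    m0 = reshape(M', [], 1);            % 12-vector from the linear solution
    err = @(m) reprojection_error(m, Xw, uv);
    m_refined = fminsearch(err, m0);    % local refinement of all entries

    function e = reprojection_error(m, Xw, uv)
    % Sum of squared distances between measured and projected points.
    M  = reshape(m, [], 3)';                        % 3x4 camera matrix
    xh = M * [Xw; ones(1, size(Xw, 2))];            % project (homogeneous)
    e  = sum(sum((uv - xh(1:2, :) ./ xh(3, :)).^2));
    end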