Computer Vision Spring 2006 15-385, -685 Instructor: S. Narasimhan Wean 5403 T-R 3:00pm – 4:20pm Lecture #15

Binocular Stereo - Calibration Lecture #15

Binocular Stereo

Stereo Reconstruction - RECAP The Stereo Problem – shape from two (or more) images taken from known camera viewpoints – biological motivation (binocular vision)

Disparity and Depth - RECAP [Figure: a scene point imaged by the left and right cameras, which are separated by the baseline b.] Assume that we know that the left-image point x_l corresponds to the right-image point x_r. From perspective projection (with the coordinate system defined as in the figure): x_l = f X / Z and x_r = f (X - b) / Z.

Disparity and Depth - RECAP The disparity between corresponding left and right image points is d = x_l - x_r = b f / Z. Disparity increases with the baseline b and is inversely proportional to the depth Z.
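To make the recap concrete, here is a minimal sketch of recovering depth from disparity using Z = b f / d. It assumes the disparity is already known per pixel; the function and variable names (depth_from_disparity, baseline_m, focal_px) are illustrative, not from the lecture.

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Depth Z = b * f / d for each pixel with positive disparity.

    disparity_px: disparities in pixels, baseline_m: baseline b in meters,
    focal_px: focal length f expressed in pixels.
    """
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)       # zero disparity -> point at infinity
    valid = d > 0
    depth[valid] = baseline_m * focal_px / d[valid]
    return depth

# Example: 0.1 m baseline, 700-pixel focal length, 5-pixel disparity -> 14 m
print(depth_from_disparity([5.0], 0.1, 700.0))
```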

Vergence [Figure: fields of view of the two cameras and the one-pixel uncertainty of a scene point.] The optical axes of the two cameras need not be parallel. The field of view of the stereo system decreases as baseline and vergence increase, while accuracy increases with baseline and vergence.

Binocular Stereo Calibration RELATIVE ORIENTATION: We need to know the position and orientation of one camera with respect to the other before we can compute the depth of scene points. ABSOLUTE ORIENTATION: We may also need to know the position and orientation of a stereo system with respect to some external system (for example, a 3D scanner).

Binocular Stereo Calibration - Notation Let x_l denote the coordinates of a scene point in the left camera frame and x_r its coordinates in the right camera frame. We need to transform one coordinate frame into the other. The transformation consists of a rotation and a translation: x_r = R x_l + t, where t is the translation of the left frame w.r.t. the right and R is the rotation of the left frame w.r.t. the right.

Binocular Stereo Calibration - Notation In matrix notation, we can write this as (x_r, y_r, z_r)^T = R (x_l, y_l, z_l)^T + (t1, t2, t3)^T, where R = [r_ij] is a 3x3 rotation matrix and (t1, t2, t3) is the translation vector.

Binocular Stereo Calibration - Notation We can expand this into three scalar equations:
x_r = r11 x_l + r12 y_l + r13 z_l + t1
y_r = r21 x_l + r22 y_l + r23 z_l + t2
z_r = r31 x_l + r32 y_l + r33 z_l + t3
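As a small illustration of the notation, a sketch that applies this transformation to a batch of points with numpy; R is assumed to be given as a 3x3 array and t as a length-3 vector (the names are illustrative):

```python
import numpy as np

def left_to_right(points_left, R, t):
    """Map points from the left camera frame to the right frame: x_r = R x_l + t.

    points_left: (n, 3) array of 3D points in the left frame.
    R: (3, 3) rotation matrix, t: (3,) translation vector.
    """
    return points_left @ R.T + t

def as_homogeneous(R, t):
    """Equivalent 4x4 homogeneous matrix, convenient for chaining transformations."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```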

Orthonormality Constraints (a) The rows of R are mutually perpendicular vectors (3 equations). (b) Each row of R is a unit vector (3 equations). NOTE: These constraints are NON-LINEAR! They are used only once (they do not change with the scene points).
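For illustration, a hypothetical helper that checks these constraints numerically on an estimated R (the determinant test additionally rules out reflections, which the six constraints alone do not):

```python
import numpy as np

def is_rotation(R, tol=1e-6):
    """True if R is orthonormal (R @ R.T == I) and has determinant +1."""
    R = np.asarray(R, dtype=float)
    return (np.allclose(R @ R.T, np.eye(3), atol=tol)
            and np.isclose(np.linalg.det(R), 1.0, atol=tol))
```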

Absolute Orientation We have measured the same scene points using stereo system L and stereo system R. Find the orientation (translation and rotation) of system L w.r.t. system R. This is useful for merging partial depth information from different views. Problem: Given the 3D coordinates x_l^i of the scene points in system L and x_r^i in system R, find the R and t that map one set onto the other: x_r^i = R x_l^i + t.

How many scene points are needed? Each scene point gives 3 equations (one vector equation x_r^i = R x_l^i + t). Six additional equations come from the orthonormality of the rotation matrix. For n scene points, we therefore have (3n + 6) equations and 12 unknowns (9 in R and 3 in t). It appears that 2 scene points suffice, but the orthonormality constraints are non-linear. THREE NON-COLLINEAR SCENE POINTS ARE SUFFICIENT (see Horn).

Solving an Over-determined System Generally, more than 3 points are used to find the 12 unknowns. Formulate the error for scene point i as e_i = x_r^i - (R x_l^i + t). Find the R and t that minimize the sum of squared errors, sum_i ||e_i||^2, subject to the orthonormality constraint R R^T = I.
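The slide leaves the solver unspecified. One standard closed-form way to minimize this error subject to the orthonormality constraint is the SVD-based solution (Kabsch/Horn style); the sketch below is one possible implementation, not necessarily the method used in the lecture.

```python
import numpy as np

def absolute_orientation(points_left, points_right):
    """Least-squares R, t such that points_right[i] ~= R @ points_left[i] + t.

    points_left, points_right: (n, 3) arrays of corresponding 3D points,
    with n >= 3 and the points not all collinear.
    """
    P = np.asarray(points_left, dtype=float)
    Q = np.asarray(points_right, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    # Flip the last axis if needed so that det(R) = +1 (no reflection)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cq - R @ cp
    return R, t
```

Because the rotation is recovered directly as an orthonormal matrix, the constraint is satisfied exactly rather than being imposed on 9 free parameters.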

Relative Orientation

Find the orientation (translation and rotation) between the two cameras of the same stereo system. We must do this before using the stereo system. Problem: Here, we DO NOT know the 3D coordinates x_l and x_r of the scene points in either camera frame. We only know that the image coordinates in the two cameras CORRESPOND to the same scene point!

Relative Orientation Again, we start with x_r = R x_l + t, or equivalently x_l = R^T (x_r - t). Assume we know the focal lengths of both cameras. Then perspective projection gives the image coordinates u_l = f_l x_l / z_l and v_l = f_l y_l / z_l. Hence x_l = (z_l / f_l) u_l and y_l = (z_l / f_l) v_l, so the 3D point in the left frame is determined by its known image coordinates and its unknown depth z_l (and the same holds for the right camera).

Problem Formulation We know: the corresponding image coordinates (u_l^i, v_l^i) and (u_r^i, v_r^i) for each scene point, and the focal lengths f_l and f_r. Find: R, t, and the depths z_l^i, z_r^i that satisfy x_r^i = R x_l^i + t for every point, and that satisfy the 6 orthonormality constraints.

Scale Ambiguity The same image coordinates can be generated by doubling all the depths and the translation t. Hence, we can recover the solution only up to a scale factor! So, fix the scale by using the constraint ||t|| = 1 (1 additional equation).

How many scene points are needed? If we have n pairs of image coordinates: Number of equations: 3n + 6 + 1 = 3n + 7 (3 per point, 6 from orthonormality, 1 from the scale constraint). Number of unknowns: 2n + 12 (2 depths per point, 9 in R, 3 in t). Setting 3n + 7 = 2n + 12 gives n = 5: five points give an equal number of equations and unknowns. In theory, 5 points are indeed enough if chosen carefully. In practice, however, more points are used and an over-determined system of equations is solved, as before.
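The slides do not name a solver for this over-determined non-linear system. One way to sketch it is a non-linear least-squares fit over R, t, and the per-point depths, parameterizing the rotation by a rotation vector so that the orthonormality constraints hold by construction (3 rotation unknowns instead of 9 parameters plus 6 constraints, so the bookkeeping differs from the slide but the problem is the same) and adding ||t|| = 1 as an extra residual. Image coordinates are assumed to be measured relative to the principal point, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def relative_orientation(uv_left, uv_right, f_left, f_right):
    """Estimate R and t (up to scale, ||t|| = 1) from n >= 5 corresponding points.

    uv_left, uv_right: (n, 2) arrays of image coordinates in the two cameras.
    f_left, f_right: focal lengths of the left and right cameras.
    """
    n = len(uv_left)
    # Ray directions: x_l = z_l * d_l with d_l = (u/f, v/f, 1); likewise on the right
    d_l = np.column_stack([np.asarray(uv_left, float) / f_left, np.ones(n)])
    d_r = np.column_stack([np.asarray(uv_right, float) / f_right, np.ones(n)])

    def residuals(p):
        rotvec, t = p[:3], p[3:6]
        z_l, z_r = p[6:6 + n], p[6 + n:]
        R = Rotation.from_rotvec(rotvec).as_matrix()
        X_l = z_l[:, None] * d_l            # 3D points in the left camera frame
        X_r = z_r[:, None] * d_r            # the same points in the right frame
        geom = (X_l @ R.T + t) - X_r        # x_r = R x_l + t: 3 residuals per point
        scale = np.linalg.norm(t) - 1.0     # fixes the overall scale
        return np.concatenate([geom.ravel(), [scale]])

    # Crude initial guess: identity rotation, unit translation along x, unit depths
    p0 = np.concatenate([np.zeros(3), [1.0, 0.0, 0.0], np.ones(2 * n)])
    sol = least_squares(residuals, p0)
    R = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    t = sol.x[3:6]
    return R, t / np.linalg.norm(t)
```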

Eye and the Brain

Next Class Optical Flow and Motion Reading: Horn, Chapter 12.