CSE 185 Introduction to Computer Vision Stereo 2

Depth from disparity
For a rectified stereo pair with camera centers C and C' separated by the baseline, and focal length f, a world point X projects to x in the left image and x' in the right image. Similar triangles give (x − x') / f = baseline / z, so x − x' = (baseline · f) / z and z = (baseline · f) / (x − x'). The disparity is d = x − x', and z is inversely proportional to d.
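A quick numeric sanity check of the relation (made-up values for the focal length, baseline, and disparity):

    # Hypothetical values: focal length f = 700 px, baseline B = 0.10 m,
    # measured disparity d = x - x' = 35 px.
    f, B, d = 700.0, 0.10, 35.0
    z = f * B / d      # depth in meters
    print(z)           # 2.0 -- doubling the disparity would halve the depth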

Outline Human stereopsis Stereograms Epipolar geometry and the epipolar constraint –Case example with parallel optical axes –General case with calibrated cameras

General case with calibrated cameras: the two cameras need not have parallel optical axes (parallel configuration vs. converging configuration).

Given p in left image, where can corresponding point p’ be? Stereo correspondence constraint

Stereo correspondence constraints

Geometry of two views constrains where the corresponding pixel for some image point in the first view must occur in the second view. It must be on the line carved out by a plane connecting the world point and optical centers. Epipolar constraint

Epipolar geometry (figure: baseline, epipoles, epipolar plane, epipolar lines)

Epipolar geometry: terms
Baseline: line joining the camera centers
Epipole: point of intersection of the baseline with the image plane
Epipolar plane: plane containing the baseline and the world point
Epipolar line: intersection of an epipolar plane with the image plane
All epipolar lines intersect at the epipole; an epipolar plane intersects the left and right image planes in epipolar lines.
Why is the epipolar constraint useful?

This is useful because it reduces the correspondence problem to a 1D search along an epipolar line. Epipolar constraint

Example

What do the epipolar lines look like? (figure: two camera configurations with centers O_l and O_r)

Example: Converging cameras

Where are the epipoles? Example: Parallel cameras

What would the epipolar lines look like if the camera moves directly forward? Example: Forward motion

The epipole has the same coordinates in both images. Points move along lines radiating from e: the "focus of expansion".

Epipolar Constraint: Calibrated Case
Essential Matrix (Longuet-Higgins, 1981): with the first camera as the reference and the second camera given by M' = (R, t), normalized image coordinates p and p' of the same world point satisfy p'^T E p = 0, where E = [t]_x R and [t]_x is the 3×3 skew-symmetric matrix of t. E has rank 2.
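A small numpy check of this constraint (a sketch with made-up pose values; it assumes the convention that a point X in camera-1 coordinates maps to R X + t in camera-2 coordinates):

    import numpy as np

    def skew(t):
        # 3x3 skew-symmetric matrix [t]_x such that skew(t) @ v = t x v
        return np.array([[0, -t[2], t[1]],
                         [t[2], 0, -t[0]],
                         [-t[1], t[0], 0]])

    # Made-up relative pose: small rotation about the y-axis, translation t
    theta = np.deg2rad(10)
    R = np.array([[np.cos(theta), 0, np.sin(theta)],
                  [0, 1, 0],
                  [-np.sin(theta), 0, np.cos(theta)]])
    t = np.array([1.0, 0.0, 0.1])
    E = skew(t) @ R                     # essential matrix, rank 2 by construction

    X1 = np.array([0.5, -0.2, 4.0])     # a world point in camera-1 coordinates
    X2 = R @ X1 + t                     # the same point in camera-2 coordinates
    p, p2 = X1 / X1[2], X2 / X2[2]      # normalized image coordinates
    print(np.linalg.matrix_rank(E))     # 2
    print(p2 @ E @ p)                   # ~0: the epipolar constraint holds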

Epipolar Constraint: Uncalibrated Case
Fundamental Matrix (Faugeras and Luong, 1992): if K and K' are the (unknown) intrinsic calibration matrices, then p = K^-1 x and p' = K'^-1 x' are normalized image coordinates, and substituting them into the calibrated constraint gives x'^T F x = 0 with F = K'^-T E K^-1.

Fundamental matrix
Let p be a point in the left image and p' the corresponding point in the right image. Epipolar relation: –p maps to the epipolar line l' = F p –p' maps to the epipolar line l = F^T p' The epipolar mapping is described by a 3x3 matrix F, and it follows that p'^T F p = 0. What is the physical meaning?

Fundamental matrix
This matrix F is called –Essential Matrix when the image intrinsic parameters are known –Fundamental Matrix more generally (uncalibrated case) Can solve for F from point correspondences –Each (p, p') pair gives one linear equation in the entries of F –F has 9 entries, but it is defined only up to scale and must have rank 2, leaving 7 degrees of freedom (8 if the rank constraint is ignored) –With 8 points it is simple to solve for F linearly; it is also possible with 7 points
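For concreteness, a minimal (unnormalized) eight-point solve written as a sketch; the function and variable names are illustrative, and in practice coordinates should be normalized and a robust estimator used, e.g. OpenCV's cv2.findFundamentalMat with RANSAC:

    import numpy as np

    def eight_point(p, p2):
        # Linear eight-point estimate of F from N >= 8 correspondences.
        # p, p2: (N, 2) pixel coordinates in the left and right images.
        N = p.shape[0]
        A = np.zeros((N, 9))
        for i in range(N):
            x, y = p[i]
            x2, y2 = p2[i]
            # each correspondence gives one equation p2^T F p = 0
            A[i] = [x2*x, x2*y, x2, y2*x, y2*y, y2, x, y, 1]
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)        # null vector of A (up to scale)
        U, S, Vt = np.linalg.svd(F)     # enforce rank 2
        S[2] = 0
        return U @ np.diag(S) @ Vt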

Stereo image rectification

Reproject the image planes onto a common plane parallel to the line between the camera centers. Pixel motion is horizontal after this transformation. Two homographies (3×3 transforms), one for each input image.
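A hedged sketch of rectification with OpenCV, assuming the cameras have already been calibrated; the intrinsics K1, K2, distortions d1, d2, relative pose (R, T), image size, and file names below are placeholders:

    import cv2
    import numpy as np

    # Placeholder calibration (normally obtained from cv2.stereoCalibrate)
    K1 = K2 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
    d1 = d2 = np.zeros(5)
    R = np.eye(3)
    T = np.array([0.1, 0.0, 0.0])     # 10 cm baseline along x
    size = (640, 480)

    # R1, R2 rotate each camera onto the common rectified plane;
    # P1, P2 are the new projection matrices, Q maps disparity to depth.
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

    map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)

    left = cv2.imread("left.png")     # placeholder file names
    right = cv2.imread("right.png")
    left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
    right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)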

Rectification example

The correspondence problem Epipolar geometry constrains our search, but we still have a difficult correspondence problem.

Basic stereo matching algorithm If necessary, rectify the two stereo images to transform epipolar lines into scanlines For each pixel x in the first image –Find corresponding epipolar scanline in the right image –Examine all pixels on the scanline and pick the best match x’ –Compute disparity x-x’ and set depth(x) = fB/(x-x’)
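A brute-force version of this window search, as a teaching sketch (rectified grayscale numpy arrays, SSD cost; real systems use optimized implementations such as cv2.StereoBM or cv2.StereoSGBM):

    import numpy as np

    def block_match_disparity(left, right, max_disp=64, win=5):
        # left, right: rectified grayscale images (same shape, float-valued)
        h, w = left.shape
        half = win // 2
        disp = np.zeros((h, w))
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                ref = left[y - half:y + half + 1, x - half:x + half + 1]
                best_d, best_cost = 0, np.inf
                for d in range(max_disp):              # search along the scanline
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1]
                    cost = np.sum((ref - cand) ** 2)   # SSD matching cost
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d
        return disp

    # depth from disparity: depth = f * B / disp  (f: focal length in pixels, B: baseline)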

Correspondence search: slide a window along the right scanline and compare the contents of that window with the reference window in the left image. Matching cost: SSD or normalized correlation.

Correspondence search (figure: SSD matching cost along the right scanline)

Correspondence search (figure: normalized correlation along the right scanline)
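The two matching costs named above, written out as a minimal sketch (window contents as equally sized numpy arrays; SSD is minimized, normalized correlation is maximized):

    import numpy as np

    def ssd(a, b):
        # sum of squared differences between two windows
        return np.sum((a - b) ** 2)

    def ncc(a, b):
        # normalized cross-correlation; invariant to affine brightness changes
        a = a - a.mean()
        b = b - b.mean()
        return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)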

Effect of window size (W = 3 vs. W = 20): Smaller window: +more detail, –more noise. Larger window: +smoother disparity maps, –less detail.

Failure cases: textureless surfaces; occlusions and repetition; non-Lambertian surfaces and specularities.

Results with window search (figure: input data, window-based matching result, ground truth)

Beyond window-based matching So far, matches are independent for each point What constraints or priors can we add?

Priors and constraints
Uniqueness –For any point in one image, there should be at most one matching point in the other image Ordering –Corresponding points should be in the same order in both views (note: the ordering constraint does not always hold) Smoothness –We expect disparity values to change slowly (for the most part)

Scanline stereo: try to coherently match pixels on the entire scanline. Different scanlines are still optimized independently.

“Shortest paths” for scanline stereo: matching a left/right scanline pair can be implemented with dynamic programming (figure: a path through the correspondence grid whose moves are correspondence, left occlusion, and right occlusion).
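An illustrative dynamic-programming sketch for one scanline pair, using per-pixel squared differences and a constant occlusion penalty (these cost choices are assumptions, not necessarily the formulation behind the figure):

    import numpy as np

    def scanline_dp(left_row, right_row, occ=10.0):
        # left_row, right_row: float-valued intensity scanlines (1-D arrays).
        # Edit-distance-style DP; returns, for each left pixel, the column
        # offset to its matched right pixel, or -1 where it is marked occluded.
        n, m = len(left_row), len(right_row)
        C = np.zeros((n + 1, m + 1))
        C[:, 0] = np.arange(n + 1) * occ       # leading left occlusions
        C[0, :] = np.arange(m + 1) * occ       # leading right occlusions
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = C[i-1, j-1] + (left_row[i-1] - right_row[j-1]) ** 2
                C[i, j] = min(match, C[i-1, j] + occ, C[i, j-1] + occ)
        disp = -np.ones(n, dtype=int)          # backtrack the best path
        i, j = n, m
        while i > 0 and j > 0:
            match = C[i-1, j-1] + (left_row[i-1] - right_row[j-1]) ** 2
            if np.isclose(C[i, j], match):
                disp[i-1] = (i - 1) - (j - 1)
                i, j = i - 1, j - 1
            elif np.isclose(C[i, j], C[i-1, j] + occ):
                i -= 1
            else:
                j -= 1
        return disp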

Coherent stereo on a 2D grid: scanline stereo generates streaking artifacts. Dynamic programming cannot be used to find spatially coherent disparities/correspondences on a 2D grid.

Stereo matching as energy minimization
Given images I_1 and I_2 and a disparity field D, minimize E(D) = Σ_i ( W_1(i) − W_2(i + D(i)) )² + λ Σ_{neighbors i, j} ρ( D(i) − D(j) ), where the first sum is the data term and the second the smoothness term. Random field interpretation; energy functions of this form can be minimized using graph cuts.

Graph cuts: many of these constraints can be encoded in an energy function and solved using graph cuts (figure: before vs. graph cuts vs. ground truth). For the latest and greatest: Y. Boykov, O. Veksler, and R. Zabih, "Fast Approximate Energy Minimization via Graph Cuts," PAMI 2001.

Active stereo with structured light: project "structured" light patterns onto the object –Simplifies the correspondence problem –Allows us to use only one camera (plus a projector). L. Zhang, B. Curless, and S. M. Seitz, "Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming," 3DPVT 2002.

Kinect: Structured infrared light

Summary: epipolar constraint. Potential matches for x have to lie on the corresponding epipolar line l'. Potential matches for x' have to lie on the corresponding epipolar line l.

Summary
Epipolar geometry –Epipoles are the intersections of the baseline with the image planes –The matching point in the second image lies on a line passing through its epipole –The fundamental matrix maps a point in one image to a line (its epipolar line) in the other –Can solve for F given corresponding points (e.g., interest points)
Stereo depth estimation –Estimate disparity by finding corresponding points along scanlines –Depth is inversely proportional to disparity

Structure from motion: given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates (figure: cameras 1–3 with unknown poses R_1, t_1, …, R_3, t_3 and unknown 3D points).

Structure from motion ambiguity
If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by the factor 1/k, the projections of the scene points in the images remain exactly the same: x = P X = (P / k)(k X). It is impossible to recover the absolute scale of the scene!
More generally: if we transform the scene using a transformation Q and apply the inverse transformation to the camera matrices, then the images do not change: x = P X = (P Q^-1)(Q X).
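A quick numerical check of this ambiguity (made-up camera matrix, point, and transformation Q):

    import numpy as np

    P = np.hstack([np.eye(3), np.zeros((3, 1))])      # a made-up 3x4 camera matrix
    X = np.array([0.5, -0.2, 4.0, 1.0])               # a homogeneous 3D point
    Q = np.diag([1.0, 2.0, 0.5, 1.0]); Q[0, 3] = 3.0  # an invertible 4x4 transformation

    x1 = P @ X
    x2 = (P @ np.linalg.inv(Q)) @ (Q @ X)
    print(np.allclose(x1, x2))                        # True: the projections are identical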

Projective structure from motion
Given: m images of n fixed 3D points, x_ij = P_i X_j, i = 1, …, m, j = 1, …, n
Problem: estimate the m projection matrices P_i and the n 3D points X_j from the mn corresponding points x_ij
With no calibration information, cameras and points can only be recovered up to a 4x4 projective transformation Q: X → QX, P → PQ^-1
We can solve for structure and motion when 2mn ≥ 11m + 3n − 15
For two cameras, at least 7 points are needed
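Checking the counting argument for the two-camera case (each observed image point gives two equations; each camera has 11 degrees of freedom up to scale, each point has 3, and 15 are removed by the projective ambiguity):

    2mn ≥ 11m + 3n − 15,  with m = 2:  4n ≥ 22 + 3n − 15,  so n ≥ 7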

Projective ambiguity

Bundle adjustment
Non-linear method for refining structure and motion by minimizing the reprojection error: E(P, X) = Σ_i Σ_j d( x_ij, P_i X_j )², where x_ij is the observed projection of point X_j in image i and P_i X_j is its predicted projection (figure: observed vs. reprojected points for cameras P_1, P_2, P_3 and point X_j).
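A minimal sketch of such a refinement using scipy's least_squares; the parameterization, variable names, and known-intrinsics assumption are illustrative, not the lecture's implementation:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project(r, t, X, K):
        # project 3D points X (N, 3) through a camera with rotation vector r,
        # translation t, and known intrinsics K
        Xc = Rotation.from_rotvec(r).apply(X) + t
        x = Xc @ K.T
        return x[:, :2] / x[:, 2:3]

    def residuals(params, n_cams, n_pts, K, cam_idx, pt_idx, obs):
        # params packs 6 pose parameters per camera followed by 3 per point;
        # the residual is (predicted - observed) pixel position per observation
        cams = params[:n_cams * 6].reshape(n_cams, 6)
        pts = params[n_cams * 6:].reshape(n_pts, 3)
        res = []
        for c, p, xy in zip(cam_idx, pt_idx, obs):
            proj = project(cams[c, :3], cams[c, 3:], pts[p][None, :], K)[0]
            res.append(proj - xy)
        return np.concatenate(res)

    # Given initial estimates cams0 (n_cams, 6) and pts0 (n_pts, 3):
    # x0 = np.hstack([cams0.ravel(), pts0.ravel()])
    # sol = least_squares(residuals, x0, args=(n_cams, n_pts, K, cam_idx, pt_idx, obs))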

Photosynth: Noah Snavely, Steven M. Seitz, Richard Szeliski, "Photo tourism: Exploring photo collections in 3D," SIGGRAPH 2006.