Epipolar Geometry and Stereo Vision Computer Vision ECE 5554 Virginia Tech Devi Parikh 10/15/13 These slides have been borrowed from Derek Hoiem and Kristen Grauman. Derek adapted many slides from Lana Lazebnik, Silvio Saverese, Steve Seitz, and took many figures from Hartley & Zisserman. 1

Announcements
PS1 grades are out – Max: 120 – Min: 42 – Median and mean: 98 – 17 students > 100
PS3 due Monday (10/21) night
Progress on projects
Feedback forms 2

Recall: Image formation geometry
Vanishing points and vanishing lines
Pinhole camera model and camera projection matrix
Homogeneous coordinates
(Figure: a scene annotated with two vanishing points, a vanishing line, and the vertical vanishing point at infinity.)
Slide source: Derek Hoiem 3

This class: Two-View Geometry Epipolar geometry – Relates cameras from two positions Next time: Stereo depth estimation – Recover depth from two images 4

Why multiple views? Structure and depth are inherently ambiguous from single views. Images from Lana Lazebnik 5 Slide credit: Kristen Grauman

Why multiple views? Structure and depth are inherently ambiguous from single views. (Figure: two scene points X1 and X2 on the same viewing ray through the optical center project to the same image point, x1' = x2'.) 6 Slide credit: Kristen Grauman

What cues help us to perceive 3d shape and depth? 7 Slide credit: Kristen Grauman

Shading [Figure from Prados & Faugeras 2006] 8 Slide credit: Kristen Grauman

Focus/defocus [figs from H. Jin and P. Favaro, 2002] Images taken from the same point of view but with different camera parameters yield 3D shape / depth estimates. 9 Slide credit: Kristen Grauman

Texture [From A.M. Loh, The recovery of 3-D structure using visual texture patterns, PhD thesis] 10 Slide credit: Kristen Grauman

Perspective effects Image credit: S. Seitz 11 Slide credit: Kristen Grauman

Motion Figures from L. Zhang 12 Slide credit: Kristen Grauman

Stereo photography and stereo viewers. Take two pictures of the same subject from two slightly different viewpoints and display them so that each eye sees only one of the images. Invented by Sir Charles Wheatstone, 1838. Image from fisher-price.com. Slide credit: Kristen Grauman 13

Slide credit: Kristen Grauman 14

Slide credit: Kristen Grauman 15

Public Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923 Slide credit: Kristen Grauman 16

Slide credit: Kristen Grauman 17

Recall: Image Stitching. Two images with rotation/zoom but no translation. (Figure labels: focal lengths f and f', image points x and x', scene point X.) 18

Estimating depth with stereo. Stereo: shape from “motion” between two views. (Figure labels: scene point, optical center, image plane.) 19

Stereo vision: either two cameras taking simultaneous views, or a single moving camera and a static scene. 20

Depth from Stereo. Goal: recover depth by finding the image coordinate x' that corresponds to x. (Figure: cameras at C and C' separated by baseline B, focal length f, scene point X at depth z, imaged at x and x'.) For this setup the disparity is x − x' = B·f / z. Disparity demo: hold up your finger, view it with one eye at a time, and alternate. 21
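As a small illustration of this relation (not from the slides), a few lines of Matlab that turn a disparity into a depth for a rectified pair; the focal length, baseline, and coordinates are made-up example values.

  f  = 700;          % focal length in pixels (assumed example value)
  B  = 0.12;         % baseline in meters (assumed example value)
  x  = 320;          % x-coordinate of the point in image 1 (pixels)
  xp = 290;          % corresponding x-coordinate in image 2 (pixels)
  d  = x - xp;       % disparity in pixels
  z  = f * B / d;    % depth: z = f*B / (x - x'); larger disparity means a closer point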

Depth from Stereo. Goal: recover depth by finding the image coordinate x' that corresponds to x. Sub-problems: 1. Calibration: how do we recover the relation of the cameras (if not already known)? 2. Correspondence: how do we search for the matching point x'? 22

Correspondence Problem. We have two images taken from cameras with different intrinsic and extrinsic parameters. How do we match a point x in the first image to a point in the second? How can we constrain our search? 23

Key idea: Epipolar constraint 24

Key idea: Epipolar constraint. Potential matches for x have to lie on the corresponding line l'. Potential matches for x' have to lie on the corresponding line l. (Because X can lie anywhere along the ray through the first camera center and x, its projection in the second image sweeps out the line l'.) 25

Epipolar geometry: notation
Baseline – line connecting the two camera centers
Epipolar Plane – plane containing the baseline (a 1D family)
Epipoles – intersections of the baseline with the image planes; equivalently, projections of the other camera center 26

Epipolar geometry: notation
Baseline – line connecting the two camera centers
Epipolar Plane – plane containing the baseline (a 1D family)
Epipoles – intersections of the baseline with the image planes; equivalently, projections of the other camera center
Epipolar Lines – intersections of the epipolar plane with the image planes (they always come in corresponding pairs) 27

Example: Converging cameras 28

Example: Motion parallel to image plane 29

Example: Forward motion What would the epipolar lines look like if the camera moves directly forward? 30

Example: Forward motion. The epipole has the same coordinates in both images (e = e'). Points move along lines radiating from e: the “focus of expansion”. 31

Epipolar constraint: Calibrated case. Given the intrinsic parameters of the cameras:
1. Convert to normalized coordinates by pre-multiplying all points with the inverse of the calibration matrix, and set the first camera's coordinate system to the world coordinate system. (The slide's annotations: 2D pixel coordinate (homogeneous); homogeneous 2D normalized point, i.e. the 3D ray towards X; 3D scene point; the 3D scene point in the 2nd camera's coordinates. The normalization step is written out below.) 32
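The equations on this slide are images and did not survive the transcript; the normalization step described in the text is the standard one:

  x̂ = K^{-1} x,   x̂' = K'^{-1} x'

where x and x' are homogeneous pixel coordinates in the two images, K and K' are the calibration (intrinsic) matrices, and x̂ can be read as the direction of the 3D ray towards X.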

Epipolar constraint: Calibrated case. Given the intrinsic parameters of the cameras:
1. Convert to normalized coordinates by pre-multiplying all points with the inverse of the calibration matrix, and set the first camera's coordinate system to the world coordinate system.
2. Define some R and t that relate X to X' as below (the relation is reconstructed after this slide). 33
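The relation itself is also missing from the transcript. As an assumption, one common convention that stays consistent with the formulas used later in this deck (x^T E x' = 0 and E = K^T F K') is

  X = R X' + t

where X is the scene point in the first camera's coordinates, X' is the same point in the second camera's coordinates, and t is the second camera's center expressed in the first camera's frame; the slide's exact parameterization may differ.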

An aside: cross product Vector cross product takes two vectors and returns a third vector that’s perpendicular to both inputs. So here, c is perpendicular to both a and b, which means the dot product = 0. Slide source: Kristen Grauman 34
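Written out, the property the slide is appealing to:

  c = a × b   =>   a · c = 0  and  b · c = 0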

Epipolar constraint: Calibrated case. Algebraically… (the derivation on the slide is reconstructed below). 35
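A sketch of the missing algebra, under the X = R X' + t convention assumed above: the vectors x̂, t, and R x̂' all lie in the epipolar plane, so their scalar triple product vanishes:

  x̂ · ( t × (R x̂') ) = 0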

Another aside: Matrix form of cross product Can be expressed as a matrix multiplication. Slide source: Kristen Grauman 36
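The matrix on the slide is missing from the transcript; the standard skew-symmetric ("cross-product") form is:

  a × b = [a]_x b,   where   [a]_x = [  0   -a3   a2
                                        a3    0   -a1
                                       -a2   a1    0  ]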

Essential Matrix (Longuet-Higgins, 1981). (The boxed definition on the slide is reconstructed below.) 37
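Combining the previous two slides (still under the assumed X = R X' + t convention), the coplanarity condition becomes the essential-matrix constraint in normalized coordinates:

  x̂^T E x̂' = 0,   with   E = [t]_x R

E captures only the relative rotation and translation between the two cameras.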

Properties of the Essential matrix
E x' is the epipolar line associated with x' (l = E x')
E^T x is the epipolar line associated with x (l' = E^T x)
E e' = 0 and E^T e = 0
E is singular (rank two)
E has five degrees of freedom (3 for R, 2 for t, because t is defined only up to scale)
(The ^ is dropped below to simplify notation.) 38

Epipolar constraint: Uncalibrated case. If we don't know K and K', then we can write the epipolar constraint in terms of the unknown normalized coordinates: x̂^T E x̂' = 0 with x̂ = K^{-1} x and x̂' = K'^{-1} x'. 39

The Fundamental Matrix (Faugeras and Luong, 1992). Without knowing K and K', we can define a similar relation in terms of the unknown normalized coordinates; the result, reconstructed below, relates the pixel coordinates directly. 40
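The boxed equations on this slide are missing from the transcript; substituting x̂ = K^{-1} x and x̂' = K'^{-1} x' into the calibrated constraint gives, in this deck's convention:

  x^T F x' = 0,   with   F = K^{-T} E K'^{-1}   (equivalently, E = K^T F K')

Because F relates raw pixel coordinates directly, it can be estimated without knowing K or K', which is what the algorithms on the next slides do.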

Properties of the Fundamental matrix
F x' is the epipolar line associated with x' (l = F x')
F^T x is the epipolar line associated with x (l' = F^T x)
F e' = 0 and F^T e = 0
F is singular (rank two): det(F) = 0
F has seven degrees of freedom: 9 entries, but defined only up to scale, and det(F) = 0 41

Estimating the Fundamental Matrix
8-point algorithm – least-squares solution using SVD on equations from 8 pairs of correspondences; enforce the det(F) = 0 constraint using SVD on F
7-point algorithm – use least squares to solve for the null space (two vectors) using SVD and 7 pairs of correspondences; solve for the linear combination of the null-space vectors that satisfies det(F) = 0
Minimize reprojection error – non-linear least squares
Note: estimation of F (or E) is degenerate for a planar scene. 42

8-point algorithm
1. Solve a system of homogeneous linear equations
  a. Write down the system of equations (one equation per correspondence; reconstructed below) 43
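The system itself is an image on the slide. With x = (u, v, 1), x' = (u', v', 1), and F stacked row by row into a 9-vector f (the ordering that the reshape on the next slide assumes), each correspondence contributes one row of A in Af = 0:

  x^T F x' = 0   <=>   [ u·u', u·v', u, v·u', v·v', v, u', v', 1 ] f = 0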

8-point algorithm
1. Solve a system of homogeneous linear equations
  a. Write down the system of equations
  b. Solve for f from Af = 0 using SVD
Matlab: [U, S, V] = svd(A); f = V(:, end); F = reshape(f, [3 3])'; 44

Need to enforce singularity constraint 45

8-point algorithm
1. Solve a system of homogeneous linear equations
  a. Write down the system of equations
  b. Solve for f from Af = 0 using SVD
Matlab: [U, S, V] = svd(A); f = V(:, end); F = reshape(f, [3 3])';
2. Resolve the det(F) = 0 constraint using SVD
Matlab: [U, S, V] = svd(F); S(3,3) = 0; F = U*S*V'; 46

8-point algorithm
1. Solve a system of homogeneous linear equations
  a. Write down the system of equations
  b. Solve for f from Af = 0 using SVD
2. Resolve the det(F) = 0 constraint by SVD
Notes:
Use RANSAC to deal with outliers (sample 8 points). How to test for outliers? (e.g., by the distance of each point to its predicted epipolar line)
Solve in normalized coordinates: mean = 0, standard deviation ~= (1, 1, 1), just like when estimating the homography for stitching. (A Matlab sketch of the full procedure follows.) 47
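To tie the steps and notes above together, here is a minimal Matlab sketch of the normalized 8-point algorithm (RANSAC omitted). It assumes x1 and x2 are 3xN homogeneous pixel coordinates of matched points, with x1 from the first image, following this deck's x^T F x' = 0 convention; the function and variable names are illustrative, not from the slides.

  function F = eight_point(x1, x2)
  % Normalized 8-point algorithm (sketch): estimates F with x1(:,i)' * F * x2(:,i) = 0.
  [x1n, T1] = normalize_pts(x1);                 % zero-mean, roughly unit-spread coordinates
  [x2n, T2] = normalize_pts(x2);
  u  = x1n(1,:)';  v  = x1n(2,:)';
  up = x2n(1,:)';  vp = x2n(2,:)';
  % One row of A per correspondence; f stacks F row by row.
  A = [u.*up, u.*vp, u, v.*up, v.*vp, v, up, vp, ones(size(u))];
  [~, ~, V] = svd(A);
  F = reshape(V(:, end), [3 3])';                % least-squares solution of A*f = 0
  [U, S, V] = svd(F);                            % enforce the rank-2 constraint det(F) = 0
  S(3,3) = 0;
  F = U * S * V';
  F = T1' * F * T2;                              % de-normalize back to pixel coordinates
  end

  function [xn, T] = normalize_pts(x)
  % Similarity transform T: zero mean and average distance sqrt(2) from the origin.
  x  = x ./ repmat(x(3,:), 3, 1);                % force homogeneous scale to 1
  mu = mean(x(1:2,:), 2);
  d  = mean(sqrt(sum((x(1:2,:) - repmat(mu, 1, size(x,2))).^2, 1)));
  s  = sqrt(2) / d;
  T  = [s 0 -s*mu(1); 0 s -s*mu(2); 0 0 1];
  xn = T * x;
  end

In a full pipeline, the A-building and solve would sit inside a RANSAC loop that samples 8 correspondences at a time and scores the remaining matches by their distance to the predicted epipolar lines.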

Comparison of homography estimation and the 8-point algorithm. Columns: Homography (No Translation) vs. Fundamental Matrix (Translation). Assume we have matched points x, x' with outliers. 48

Comparison of homography estimation and the 8-point algorithm. Assume we have matched points x, x' with outliers.
Homography (No Translation) – correspondence relation between x and x':
1. Normalize image coordinates
2. RANSAC with 4 points – solution via SVD
3. De-normalize 49

Comparison of homography estimation and the 8-point algorithm. Assume we have matched points x, x' with outliers.
Homography (No Translation) – correspondence relation:
1. Normalize image coordinates
2. RANSAC with 4 points – solution via SVD
3. De-normalize
Fundamental Matrix (Translation) – correspondence relation:
1. Normalize image coordinates
2. RANSAC with 8 points – initial solution via SVD – enforce det(F) = 0 by SVD
3. De-normalize 50

7-point algorithm: faster and potentially more robust because it needs fewer points, but we also need to check for degenerate cases. 51

“Gold standard” algorithm
Use the 8-point algorithm to get an initial value of F
Use F to solve for P and P' (discussed later)
Jointly solve for the 3d points X and F that minimize the squared re-projection error
See Algorithm 11.2 and Algorithm 11.3 in HZ for details 52

Comparison of estimation algorithms (columns: 8-point, Normalized 8-point, Nonlinear least squares)
Av. Dist.: … pixels / 0.92 pixel / 0.86 pixel
Av. Dist.: … pixels / 0.85 pixel / 0.80 pixel 53

We can get the projection matrices P and P' up to a projective ambiguity. Code:
function P = vgg_P_from_F(F)
% Second camera matrix from F (VGG toolbox); the first camera is taken as [I | 0].
[U,S,V] = svd(F);
e = U(:,3);                     % left null vector of F: an epipole
P = [-vgg_contreps(e)*F e];     % vgg_contreps(e) is the skew-symmetric (cross-product) matrix built from e
See HZ, p. …. (The slide annotates the blocks of the second camera matrix as K' times the rotation and K' times the translation.) If we know the intrinsic matrices (K and K'), we can resolve the ambiguity. 54

From epipolar geometry to camera calibration. Estimating the fundamental matrix is known as “weak calibration”. If we know the calibration matrices of the two cameras, we can estimate the essential matrix: E = K^T F K'. The essential matrix gives us the relative rotation and translation between the cameras, or their extrinsic parameters. 55
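A small Matlab sketch of this step (not from the slides): it forms E from an estimated F via the relation above and projects it onto a valid essential matrix (two equal singular values and one zero). Here Kp stands for the second camera's intrinsic matrix K' (renamed because the prime clashes with Matlab's transpose operator); K, Kp, and F are assumed to be given.

  E = K' * F * Kp;                  % E = K^T F K'  (Matlab's ' is the transpose)
  [U, S, V] = svd(E);
  disp(diag(S)');                   % ideally: two equal singular values and a zero
  s = (S(1,1) + S(2,2)) / 2;        % project onto the nearest valid essential matrix
  E = U * diag([s, s, 0]) * V';
  e1 = U(:, 3);                     % epipole in image 1 (normalized coords): E^T * e1 = 0
  e2 = V(:, 3);                     % epipole in image 2 (normalized coords): E * e2 = 0

The further decomposition of E into R and t (a four-way ambiguity, resolved by requiring points in front of both cameras) is covered in HZ.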

Let’s recap… Fundamental matrix song 56

Next time: Stereo. Fuse a calibrated binocular stereo pair to produce a depth image. (Figure: image 1, image 2, and the resulting dense depth map.) 57

Next class: structure from motion 58

Questions? See you Thursday! 59 Slide credit: Devi Parikh