Structure from Motion Computer Vision CS 143, Brown James Hays 11/18/11 Many slides adapted from Derek Hoiem, Lana Lazebnik, Silvio Saverese, Steve Seitz, and Martial Hebert

This class: structure from motion Recap of epipolar geometry – Depth from two views Affine structure from motion

Recap: Epipoles. Point x in the left image corresponds to an epipolar line l’ in the right image. The epipolar line passes through the epipole (the intersection of the cameras’ baseline with the image plane).

Recap: Fundamental Matrix. The fundamental matrix maps from a point in one image to a line in the other. If x and x’ correspond to the same 3D point X: x’ᵀ F x = 0.
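The epipolar constraint can be checked numerically. A minimal sketch (Python/NumPy; the rotation, translation, and point data are made up, and identity intrinsics are assumed so that F reduces to the essential matrix [t]ₓR):

```python
import numpy as np

def skew(t):
    """3x3 cross-product matrix, so skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

rng = np.random.default_rng(0)

# Camera 1 at the origin; camera 2 related by rotation R and translation t.
theta = 0.2
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.1, 0.05])
F = skew(t) @ R                       # fundamental matrix for K = K' = I

# Random 3D points in front of both cameras.
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(10, 3)).T   # 3 x 10
x1 = X / X[2]                         # homogeneous image points, camera 1
X2 = R @ X + t[:, None]
x2 = X2 / X2[2]                       # homogeneous image points, camera 2

# x2^T F x1 for every correspondence; should vanish for true matches.
residuals = np.einsum('in,ij,jn->n', x2, F, x1)
```

For every correspondence the residual is zero up to floating-point error, which is exactly the statement that x’ lies on the epipolar line F x.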

Structure from motion. Given a set of corresponding points in two or more images, compute the camera parameters (the rotation R_i and translation t_i of each camera) and the 3D point coordinates. Slide credit: Noah Snavely

Structure from motion ambiguity. If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by a factor of 1/k, the projections of the scene points in the images remain exactly the same. It is impossible to recover the absolute scale of the scene!

How do we know the scale of image content?

Structure from motion ambiguity If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by the factor of 1/k, the projections of the scene points in the image remain exactly the same More generally: if we transform the scene using a transformation Q and apply the inverse transformation to the camera matrices, then the images do not change
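This invariance is easy to verify numerically. A minimal sketch (Python/NumPy; the camera, points, and transformation are all synthetic): applying an arbitrary invertible 4×4 transformation Q to the homogeneous scene points and Q⁻¹ to the camera matrix leaves the dehomogenized image points unchanged. The pure scale ambiguity above is the special case Q = k·I.

```python
import numpy as np

rng = np.random.default_rng(1)

P = rng.normal(size=(3, 4))            # arbitrary projection matrix
X = rng.normal(size=(4, 12))           # homogeneous 3D points (columns)
Q = rng.normal(size=(4, 4))
Q += 5 * np.eye(4)                     # nudge toward safe invertibility

def dehomog(x):
    """Convert homogeneous image points (3 x n) to inhomogeneous (2 x n)."""
    return x[:2] / x[2]

x_orig = dehomog(P @ X)                           # original projections
x_tran = dehomog((P @ np.linalg.inv(Q)) @ (Q @ X))  # transformed scene + camera
max_diff = np.max(np.abs(x_orig - x_tran))
```

Since P Q⁻¹ Q X = P X, the two sets of image points agree to machine precision: the images alone cannot distinguish the two reconstructions.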

Projective structure from motion. Given: m images of n fixed 3D points, x_ij = P_i X_j, i = 1, …, m, j = 1, …, n. Problem: estimate the m projection matrices P_i and the n 3D points X_j from the mn correspondences x_ij. Slides from Lana Lazebnik

Projective structure from motion. Given: m images of n fixed 3D points, x_ij = P_i X_j, i = 1, …, m, j = 1, …, n. Problem: estimate the m projection matrices P_i and the n 3D points X_j from the mn correspondences x_ij. With no calibration info, cameras and points can only be recovered up to a 4×4 projective transformation Q: X → QX, P → PQ⁻¹. We can solve for structure and motion when 2mn >= 11m + 3n − 15. For two cameras, at least 7 points are needed.
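The counting argument can be checked directly: each observed point gives 2 knowns, each camera has 11 unknowns, each point 3, and 15 degrees of freedom are absorbed by the projective ambiguity. A small sketch (the helper name is mine):

```python
def min_points(m):
    """Smallest n with 2*m*n >= 11*m + 3*n - 15 (knowns >= unknowns)."""
    n = 1
    while 2 * m * n < 11 * m + 3 * n - 15:
        n += 1
    return n

# min_points(2) -> 7: two cameras need at least seven points,
# matching the seven-point algorithm for the fundamental matrix.
```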

Types of ambiguity.
Projective (15 dof): preserves intersection and tangency.
Affine (12 dof): preserves parallelism and volume ratios.
Similarity (7 dof): preserves angles and ratios of lengths.
Euclidean (6 dof): preserves angles and lengths.
With no constraints on the camera calibration matrix or on the scene, we get a projective reconstruction. We need additional information to upgrade the reconstruction to affine, similarity, or Euclidean.

Projective ambiguity

Affine ambiguity Affine

Affine ambiguity

Similarity ambiguity

Bundle adjustment. Non-linear method for refining structure and motion by minimizing the reprojection error between the observed points x_ij and the predicted projections P_i X_j.
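The objective being minimized can be written down in a few lines. This sketch (Python/NumPy; cameras and points are synthetic) evaluates the reprojection error Σ_ij ‖x_ij − proj(P_i X_j)‖²; an actual bundle adjuster would feed this residual to a non-linear least-squares solver and optimize over all P_i and X_j jointly.

```python
import numpy as np

rng = np.random.default_rng(2)

m, n = 3, 15
Ps = rng.normal(size=(m, 3, 4))                   # projection matrices
X = np.vstack([rng.normal(size=(3, n)),
               np.ones((1, n))])                  # homogeneous 3D points

def reprojection_error(Ps, X, observations):
    """Sum of squared distances between observed and predicted projections."""
    err = 0.0
    for P, obs in zip(Ps, observations):
        proj = P @ X
        proj = proj[:2] / proj[2]                 # dehomogenize
        err += np.sum((obs - proj) ** 2)
    return err

# Observations generated exactly from the model -> zero error.
observations = [(P @ X)[:2] / (P @ X)[2] for P in Ps]
perfect = reprojection_error(Ps, X, observations)

# Perturbing the structure makes the objective positive; bundle adjustment
# searches over cameras and points to drive it back down.
perturbed = reprojection_error(Ps, X + 0.01 * rng.normal(size=X.shape),
                               observations)
```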

Photosynth. Noah Snavely, Steven M. Seitz, Richard Szeliski, “Photo tourism: Exploring photo collections in 3D,” SIGGRAPH 2006.

Structure from motion under orthographic projection. 3D Reconstruction of a Rotating Ping-Pong Ball. C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), November 1992. Reasonable choice when: the change in depth of points in the scene is much smaller than the distance to the camera, and the cameras do not move towards or away from the scene.

Structure from motion. Let’s start with affine cameras (projection center at infinity; the math is easier).

Affine projection for a rotated/translated camera: x = AX + b, where the rows of A are built from the camera axes a_1, a_2.

Affine structure from motion. Affine projection is a linear mapping plus a translation in inhomogeneous coordinates: x = AX + b, where b is the projection of the world origin. 1. We are given corresponding 2D points (x) in several frames. 2. We want to estimate the 3D points (X) and the affine parameters of each camera (A).

Step 1: Simplify by getting rid of the translation t: shift to the centroid of the points for each camera. Each 2D normalized point (observed) is then related to the 3D normalized point by a purely linear (affine) mapping.

Affine structure from motion. Centering: subtract the centroid of the image points. For simplicity, assume that the origin of the world coordinate system is at the centroid of the 3D points. After centering, each normalized point x̂_ij is related to the 3D point X_j by x̂_ij = A_i X_j.
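The centering identity is worth verifying: subtracting each camera’s image centroid cancels the translation exactly, because x − mean(x) = (AX + b) − (A·mean(X) + b) = A(X − mean(X)). A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)

A = rng.normal(size=(2, 3))            # affine camera
b = rng.normal(size=(2, 1))            # translation (projection of world origin)
X = rng.normal(size=(3, 30))           # 3D points (columns)

x = A @ X + b                          # affine projection
x_hat = x - x.mean(axis=1, keepdims=True)    # centered image points
X_hat = X - X.mean(axis=1, keepdims=True)    # centered 3D points

# After centering, the translation is gone: x_hat == A @ X_hat.
diff = np.max(np.abs(x_hat - A @ X_hat))
```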

Suppose we know the 3D points and the affine camera parameters … then we can compute the observed 2D positions of each point: stacking the camera parameters into a 2m×3 matrix and the 3D points into a 3×n matrix gives the 2m×n matrix of 2D image points.

Affine structure from motion. Given: m images of n fixed 3D points, x_ij = A_i X_j + b_i, i = 1, …, m, j = 1, …, n. Problem: use the mn correspondences x_ij to estimate the m projection matrices A_i and translation vectors b_i, and the n points X_j. The reconstruction is defined up to an arbitrary affine transformation Q (12 degrees of freedom). We have 2mn knowns and 8m + 3n unknowns (minus 12 dof for the affine ambiguity). Thus, we must have 2mn >= 8m + 3n − 12. For two views, we need four point correspondences.

What if we instead observe corresponding 2D image points? Can we recover the camera parameters and 3D points? Stack the observations into a matrix with 2m rows (cameras) and n columns (points). What rank is the matrix of 2D points?

Affine structure from motion. Let’s create a 2m × n data (measurement) matrix D. It factors into cameras (2m × 3) times points (3 × n), so the measurement matrix D = MS must have rank 3! C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), November 1992.
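The rank-3 claim follows from the factorization itself: D is a product of a 2m×3 and a 3×n matrix, so its rank is at most 3 (and generically exactly 3). A quick numerical sketch with synthetic cameras and points:

```python
import numpy as np

rng = np.random.default_rng(4)

m, n = 6, 25
M = rng.normal(size=(2 * m, 3))        # stacked affine cameras (2m x 3)
S = rng.normal(size=(3, n))            # 3D points (3 x n)
D = M @ S                              # centered measurement matrix (2m x n)

rank = np.linalg.matrix_rank(D)        # rank of the 12 x 25 matrix
```

However many views and points we add, the 2m×n matrix never exceeds rank 3; this is the structure that the factorization method exploits.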

Factorizing the measurement matrix: we want to factor D into camera and point matrices, D = AX. Source: M. Hebert

Factorizing the measurement matrix. Singular value decomposition of D: D = UWVᵀ. Source: M. Hebert

Factorizing the measurement matrix. Obtaining a factorization from the SVD: keep the top three singular values and take M = U₃W₃^½, S = W₃^½V₃ᵀ. This decomposition minimizes ‖D − MS‖². Source: M. Hebert

Affine ambiguity. The decomposition is not unique: we get the same D by using any invertible 3×3 matrix C and applying the transformations A → AC, X → C⁻¹X. That is because we have only an affine reconstruction and we have not enforced any Euclidean constraints (like forcing the image axes to be perpendicular, for example). Source: M. Hebert

Eliminating the affine ambiguity. Orthographic projection: the image axes are perpendicular and the scale is 1, i.e. a₁ · a₂ = 0 and |a₁|² = |a₂|² = 1. This translates into 3m equations in L = CCᵀ: A_i L A_iᵀ = Id, i = 1, …, m. Solve for L, recover C from L by Cholesky decomposition (L = CCᵀ), then update M and S: M → MC, S → C⁻¹S. Source: M. Hebert
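The metric-upgrade step above can be sketched end to end. This is a Python/NumPy illustration with synthetic orthographic cameras (all variable names are mine): the constraints A_i L A_iᵀ = I are linear in the six entries of the symmetric matrix L, so we stack them and solve by least squares, then take the Cholesky factor.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_rotation(rng):
    # QR of a random matrix gives an orthogonal Q; fix the sign so det = +1.
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.linalg.det(q))

m, n = 6, 30
S_true = rng.normal(size=(3, n))
S_true -= S_true.mean(axis=1, keepdims=True)
M_true = np.vstack([random_rotation(rng)[:2] for _ in range(m)])  # 2m x 3
D = M_true @ S_true                  # centered orthographic measurements

# Rank-3 factorization (defined only up to an invertible 3x3 C).
U, w, Vt = np.linalg.svd(D, full_matrices=False)
Mh = U[:, :3] * np.sqrt(w[:3])       # candidate motion, 2m x 3

def row(a, b):
    """Coefficients of a^T L b in the 6 parameters of symmetric L."""
    return np.array([a[0]*b[0], a[0]*b[1] + a[1]*b[0], a[0]*b[2] + a[2]*b[0],
                     a[1]*b[1], a[1]*b[2] + a[2]*b[1], a[2]*b[2]])

G, h = [], []
for i in range(m):                   # 3 equations per camera
    a1, a2 = Mh[2*i], Mh[2*i + 1]
    G += [row(a1, a1), row(a2, a2), row(a1, a2)]
    h += [1.0, 1.0, 0.0]
l = np.linalg.lstsq(np.array(G), np.array(h), rcond=None)[0]
L = np.array([[l[0], l[1], l[2]],
              [l[1], l[3], l[4]],
              [l[2], l[4], l[5]]])
C = np.linalg.cholesky(L)            # L = C C^T
M_metric = Mh @ C                    # rows now form orthonormal pairs
```

After the update, each camera’s two rows are unit length and mutually perpendicular, as the orthographic constraints demand.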

Algorithm summary. Given: m images and n tracked features x_ij.
For each image i, center the feature coordinates.
Construct a 2m × n measurement matrix D:
– Column j contains the projection of point j in all views
– Row i contains one coordinate of the projections of all n points in image i
Factorize D:
– Compute the SVD: D = UWVᵀ
– Create U₃ by taking the first 3 columns of U
– Create V₃ by taking the first 3 columns of V
– Create W₃ by taking the upper-left 3 × 3 block of W
Create the motion (affine) and shape (3D) matrices: A = U₃W₃^½ and X = W₃^½V₃ᵀ.
Eliminate the affine ambiguity.
Source: M. Hebert
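The summary above (minus the metric upgrade) fits in one small function. A Python/NumPy sketch with synthetic affine measurements; the function name and data are mine:

```python
import numpy as np

rng = np.random.default_rng(6)

def affine_sfm(D):
    """Tomasi-Kanade factorization of a 2m x n track matrix.

    Returns motion A (2m x 3), shape X (3 x n), and the centered matrix.
    The metric (affine-ambiguity) upgrade is omitted here.
    """
    D_c = D - D.mean(axis=1, keepdims=True)      # center each row
    U, w, Vt = np.linalg.svd(D_c, full_matrices=False)
    w3_sqrt = np.sqrt(w[:3])
    A = U[:, :3] * w3_sqrt                       # A = U3 W3^(1/2)
    X = w3_sqrt[:, None] * Vt[:3]                # X = W3^(1/2) V3^T
    return A, X, D_c

# Synthetic affine measurements: m cameras observing n points.
m, n = 4, 18
X_true = rng.normal(size=(3, n))
D = np.vstack([rng.normal(size=(2, 3)) @ X_true + rng.normal(size=(2, 1))
               for _ in range(m)])

A, X, D_c = affine_sfm(D)
recon_err = np.linalg.norm(D_c - A @ X)          # ~0 for noise-free tracks
```

With noise-free tracks the rank-3 reconstruction is exact; with real tracks the SVD gives the best rank-3 approximation in the least-squares sense.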

Dealing with missing data. So far, we have assumed that all points are visible in all views. In reality, the measurement matrix (cameras × points) typically has many missing entries. One solution: solve using a dense submatrix of visible points, then iteratively add new cameras.

A nice short explanation: class notes from Lischinski and Gruber.

Reconstruction results (project 5). C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), November 1992.

Project 5. 1. Detect interest points (e.g., Harris):
1. Image derivatives I_x, I_y
2. Squares of derivatives I_x², I_y², I_xI_y
3. Gaussian filter: g(I_x²), g(I_y²), g(I_xI_y)
4. Cornerness function (large when both eigenvalues are strong): har = g(I_x²)g(I_y²) − [g(I_xI_y)]² − α[g(I_x²) + g(I_y²)]²
5. Non-maxima suppression
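Steps 1–4 above can be sketched compactly. The project itself is in MATLAB; this is a hedged Python/NumPy illustration (function names and the α value are my choices; non-maxima suppression is omitted), tested on a white square whose corners should give strong positive responses while edges go negative and flat regions stay at zero:

```python
import numpy as np

def gauss1d(sigma):
    """Normalized 1D Gaussian kernel with radius 3*sigma."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian smoothing via two 1D convolutions."""
    k = gauss1d(sigma)
    img = np.apply_along_axis(np.convolve, 0, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 1, img, k, mode='same')

def harris(img, sigma=2.0, alpha=0.05):
    Iy, Ix = np.gradient(img)                    # 1. image derivatives
    Ix2, Iy2, Ixy = Ix * Ix, Iy * Iy, Ix * Iy    # 2. squares of derivatives
    gIx2, gIy2, gIxy = (smooth(a, sigma) for a in (Ix2, Iy2, Ixy))  # 3. blur
    # 4. cornerness: det - alpha * trace^2 of the smoothed 2nd-moment matrix
    return gIx2 * gIy2 - gIxy**2 - alpha * (gIx2 + gIy2)**2

# White square on black: responses peak at the square's corners.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
har = harris(img)
```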

Project 5. 2. Correspondence via Lucas-Kanade tracking:
a) Initialize (x’, y’) = (x, y)
b) Compute (u, v) from the 2nd moment matrix of the feature patch in the first image and the temporal difference I_t = I(x’, y’, t+1) − I(x, y, t)
c) Shift the window by (u, v): x’ = x’ + u; y’ = y’ + v
d) (extra credit) Recalculate I_t
e) (extra credit) Repeat steps b–d until the change is small
Use interpolation for subpixel values.
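A single Lucas-Kanade step (step b above) can be illustrated on a synthetic pair of frames. This Python/NumPy sketch (the project uses MATLAB; image, window, and shift values are mine) builds a smooth blob, shifts it by a known sub-pixel amount, and recovers the shift from the 2nd moment matrix and the temporal difference:

```python
import numpy as np

def gaussian_img(shift_x=0.0, shift_y=0.0, size=50, sigma=6.0):
    """Smooth Gaussian blob; shifting the center simulates image motion."""
    y, x = np.mgrid[0:size, 0:size]
    cx = cy = size / 2
    return np.exp(-((x - shift_x - cx)**2 + (y - shift_y - cy)**2)
                  / (2 * sigma**2))

u_true, v_true = 0.6, 0.4
I1 = gaussian_img()
I2 = gaussian_img(u_true, v_true)       # frame t+1: I1 moved by (u, v)

Iy, Ix = np.gradient(I1)                # spatial gradients of the first frame
It = I2 - I1                            # temporal difference

# 2nd moment matrix and right-hand side, summed over a window on the blob.
win = (slice(15, 36), slice(15, 36))
M = np.array([[np.sum(Ix[win] * Ix[win]), np.sum(Ix[win] * Iy[win])],
              [np.sum(Ix[win] * Iy[win]), np.sum(Iy[win] * Iy[win])]])
rhs = -np.array([np.sum(Ix[win] * It[win]), np.sum(Iy[win] * It[win])])
u, v = np.linalg.solve(M, rhs)          # one Lucas-Kanade update
```

The estimate lands close to the true sub-pixel shift; iterating (steps c–e) with interpolated image values refines it further.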

Project 5. 3. Get the affine camera matrices and 3D points using Tomasi-Kanade factorization; solve for the orthographic constraints.

Project 5 Tips:
– Helpful MATLAB functions: interp2, meshgrid, ordfilt2 (for getting the local maximum), svd, chol
– When selecting interest points, choose an appropriate threshold on the Harris criterion or the smaller eigenvalue, or choose the top N points
– Vectorize to make tracking fast (interp2 will be the bottleneck)
– Get tracking working on one point for a few frames before trying to get it working for all points