Lecture 11: Structure from Motion

Lecture 11: Structure from Motion
C280, Computer Vision
Prof. Trevor Darrell, trevor@eecs.berkeley.edu

Roadmap
Previous: Image formation, filtering, local features, (Texture)…
Tues: Feature-based Alignment: stitching images together; homographies, RANSAC, warping, blending; global alignment of planar models
Today: Dense Motion Models: local motion / feature displacement; parametric optic flow
No classes next week: ICCV conference
Oct 6th: Stereo / 'Multi-view': estimating depth with known inter-camera pose
Oct 8th: 'Structure-from-motion': estimation of pose and 3D structure; factorization approaches; global alignment with 3D point models

Last time: Stereo
Human stereopsis & stereograms
Epipolar geometry and the epipolar constraint
Case example with parallel optical axes
General case with calibrated cameras
Correspondence search
The Essential and the Fundamental Matrix
Multi-view stereo

Today: SFM
SFM problem statement
Factorization
Projective SFM
Bundle Adjustment
Photo Tourism
"Rome in a day"

Structure from motion Lazebnik

Multiple-view geometry questions
Scene geometry (structure): Given 2D point matches in two or more images, where are the corresponding points in 3D?
Correspondence (stereo matching): Given a point in just one image, how does it constrain the position of the corresponding point in another image?
Camera geometry (motion): Given a set of corresponding points in two or more images, what are the camera matrices for these views?
Lazebnik

Structure from motion
Given: m images of n fixed 3D points, xij = Pi Xj, i = 1, …, m, j = 1, …, n
Problem: estimate the m projection matrices Pi and the n 3D points Xj from the mn correspondences xij
[Figure: a single point Xj projected into three images as x1j, x2j, x3j by cameras P1, P2, P3]
Lazebnik

Structure from motion ambiguity If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by the factor of 1/k, the projections of the scene points in the image remain exactly the same. It is impossible to recover the absolute scale of the scene! Lazebnik

Structure from motion ambiguity If we scale the entire scene by some factor k and, at the same time, scale the camera matrices by the factor of 1/k, the projections of the scene points in the image remain exactly the same More generally: if we transform the scene using a transformation Q and apply the inverse transformation to the camera matrices, then the images do not change Lazebnik
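
Written out (a small reconstruction of the slide's missing equations, using the notation of the problem statement above), the two ambiguities are:

$$
x_{ij} = P_i X_j = \Bigl(\tfrac{1}{k} P_i\Bigr)\bigl(k\,X_j\bigr)
\qquad\text{and, more generally,}\qquad
x_{ij} = P_i X_j = \bigl(P_i Q^{-1}\bigr)\bigl(Q X_j\bigr).
$$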

Projective ambiguity Lazebnik

Projective ambiguity Lazebnik

Affine ambiguity Lazebnik

Affine ambiguity Lazebnik

Similarity ambiguity Lazebnik

Similarity ambiguity Lazebnik

Hierarchy of 3D transformations
Projective (15 dof): preserves intersection and tangency
Affine (12 dof): preserves parallelism and volume ratios
Similarity (7 dof): preserves angles and ratios of lengths
Euclidean (6 dof): preserves angles and lengths
With no constraints on the camera calibration matrix or on the scene, we get a projective reconstruction. We need additional information to upgrade the reconstruction to affine, similarity, or Euclidean.
Lazebnik

Structure from motion Let's start with affine cameras (the math is easier; the center of projection is at infinity). Lazebnik

Recall: Orthographic Projection
A special case of perspective projection in which the distance from the center of projection to the image plane is infinite.
Projection matrix: see the block below.
Lazebnik Slide by Steve Seitz
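
The slide's projection matrix is the standard orthographic one (a reconstruction from the definition above, written in homogeneous coordinates):

$$
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \;\sim\;
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
$$

i.e. (x, y) = (X, Y): the depth coordinate is simply dropped.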

Affine cameras Orthographic Projection Parallel Projection Lazebnik

Affine cameras A general affine camera combines the effects of an affine transformation of the 3D space, orthographic projection, and an affine transformation of the image. Affine projection is a linear mapping plus a translation in inhomogeneous coordinates; the projection of the world origin is the translation vector. Lazebnik

Affine structure from motion
Given: m images of n fixed 3D points, xij = Ai Xj + bi, i = 1, …, m, j = 1, …, n
Problem: use the mn correspondences xij to estimate the m projection matrices Ai and translation vectors bi, and the n points Xj
The reconstruction is defined up to an arbitrary affine transformation Q (12 degrees of freedom).
We have 2mn knowns and 8m + 3n unknowns (minus 12 dof for the affine ambiguity), so we must have 2mn >= 8m + 3n - 12.
For two views, we need four point correspondences.
Lazebnik
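
Plugging m = 2 into the counting argument above confirms the four-point claim:

$$ 2mn \ge 8m + 3n - 12 \;\;\Longrightarrow\;\; 4n \ge 16 + 3n - 12 = 3n + 4 \;\;\Longrightarrow\;\; n \ge 4. $$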

Affine structure from motion Centering: subtract the centroid of the image points. For simplicity, assume that the origin of the world coordinate system is at the centroid of the 3D points. After centering, each normalized point x̂ij is related to the 3D point Xj by x̂ij = Ai Xj.
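
A minimal sketch of the centering step in code, assuming the tracked points are stored as an array x of shape (m, n, 2); the name and data layout are illustrative, not from the slides:

```python
import numpy as np

def center_points(x):
    """Subtract each image's centroid so that the translations b_i drop out
    and the centered points satisfy x_hat_ij = A_i X_j."""
    centroids = x.mean(axis=1, keepdims=True)   # per-image centroid, shape (m, 1, 2)
    return x - centroids
```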

Affine structure from motion Let's create a 2m × n data (measurement) matrix D, with two rows per camera (2m rows in total) and one column per point (n columns). C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992.

Affine structure from motion The same measurement matrix factors into cameras (2m × 3) times points (3 × n): the measurement matrix D = MS must have rank 3! C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992.
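
Written out (a reconstruction of the slide's equation from the definitions above), the factorization is:

$$
D \;=\;
\begin{bmatrix}
\hat{x}_{11} & \cdots & \hat{x}_{1n}\\
\vdots & & \vdots\\
\hat{x}_{m1} & \cdots & \hat{x}_{mn}
\end{bmatrix}
\;=\;
\begin{bmatrix} A_1 \\ \vdots \\ A_m \end{bmatrix}
\begin{bmatrix} X_1 & \cdots & X_n \end{bmatrix}
\;=\; MS,
$$

where each x̂ij is a centered 2-vector, each Ai is 2 × 3, and each Xj is a 3-vector, so the 2m × n matrix D is the product of a 2m × 3 matrix and a 3 × n matrix and has rank at most 3.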

Factorizing the measurement matrix Lazebnik Source: M. Hebert

Factorizing the measurement matrix Singular value decomposition of D: D = U W VT Lazebnik Source: M. Hebert

Factorizing the measurement matrix Obtaining a factorization from SVD: keep the three largest singular values and the corresponding columns of U and V. This decomposition minimizes |D - MS|² Lazebnik Source: M. Hebert

Affine ambiguity The decomposition is not unique. We get the same D by using any invertible 3×3 matrix C and applying the transformations M → MC, S → C-1S. That is because we have only an affine transformation and we have not enforced any Euclidean constraints (like forcing the image axes to be perpendicular, for example). Lazebnik Source: M. Hebert

Eliminating the affine ambiguity Orthographic: the image axes are perpendicular and the scale is 1, i.e. a1 · a2 = 0 and |a1|2 = |a2|2 = 1 for each camera's two rows a1, a2. This translates into 3m equations in L = CCT: Ai L AiT = Id, i = 1, …, m. Solve for L, recover C from L by Cholesky decomposition (L = CCT), and update M and S: M → MC, S → C-1S. Lazebnik Source: M. Hebert
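
Spelled out per camera (a reconstruction of the slide's constraint equations from the definitions above), with Ai consisting of the two rows a_{i1}, a_{i2} of camera i in M:

$$
a_{i1}^\top L\, a_{i1} = 1,\qquad
a_{i2}^\top L\, a_{i2} = 1,\qquad
a_{i1}^\top L\, a_{i2} = 0,\qquad
i = 1,\dots,m,\qquad L = CC^\top .
$$

Each camera contributes three linear equations in the six unknown entries of the symmetric matrix L.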

Algorithm summary
Given: m images and n features xij
1. For each image i, center the feature coordinates.
2. Construct a 2m × n measurement matrix D: column j contains the projections of point j in all views; each row contains one coordinate of the projections of all n points in one image.
3. Factorize D: compute the SVD D = U W VT; create U3 from the first 3 columns of U, V3 from the first 3 columns of V, and W3 from the upper-left 3 × 3 block of W.
4. Create the motion and shape matrices: M = U3W3½ and S = W3½V3T (or M = U3 and S = W3V3T).
5. Eliminate the affine ambiguity.
Lazebnik Source: M. Hebert
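
A compact sketch of the whole factorization pipeline above in code (Tomasi-Kanade style), assuming every point is visible in every view and that the tracked points are stored as an array x of shape (m, n, 2); the variable names and dense-visibility assumption are ours, not from the slides:

```python
import numpy as np

def affine_sfm(x):
    """x: (m, n, 2) array of tracked points, all visible in all views."""
    m, n = x.shape[0], x.shape[1]
    # 1. Center the feature coordinates in each image.
    xc = x - x.mean(axis=1, keepdims=True)
    # 2. Build the 2m x n measurement matrix D (two rows per camera).
    D = np.concatenate([xc[i].T for i in range(m)], axis=0)
    # 3. Rank-3 factorization by SVD.
    U, w, Vt = np.linalg.svd(D, full_matrices=False)
    M = U[:, :3] * np.sqrt(w[:3])            # motion, 2m x 3
    S = np.sqrt(w[:3])[:, None] * Vt[:3]     # shape, 3 x n
    # 4. Metric upgrade: solve the linear system A_i L A_i^T = Id for the
    #    symmetric 3x3 matrix L (6 unknowns, 3 equations per camera).
    rows, rhs = [], []
    for i in range(m):
        a1, a2 = M[2 * i], M[2 * i + 1]
        for u, v, t in [(a1, a1, 1.0), (a2, a2, 1.0), (a1, a2, 0.0)]:
            rows.append([u[0]*v[0], u[0]*v[1] + u[1]*v[0], u[0]*v[2] + u[2]*v[0],
                         u[1]*v[1], u[1]*v[2] + u[2]*v[1], u[2]*v[2]])
            rhs.append(t)
    l = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    L = np.array([[l[0], l[1], l[2]],
                  [l[1], l[3], l[4]],
                  [l[2], l[4], l[5]]])
    # 5. Recover C by Cholesky (assumes L is positive definite; with noisy
    #    data L may first need projecting onto the positive-definite cone).
    C = np.linalg.cholesky(L)
    return M @ C, np.linalg.inv(C) @ S       # metric motion and shape
```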

Reconstruction results C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992.

Dealing with missing data So far, we have assumed that all points are visible in all views. In reality, the measurement matrix typically has a sparse block pattern: each camera observes only a subset of the points. Lazebnik

Dealing with missing data Possible solution: decompose the matrix into dense sub-blocks, factorize each sub-block, and fuse the results. Finding dense maximal sub-blocks of the matrix is NP-complete (equivalent to finding maximal cliques in a graph). Incremental bilinear refinement: (1) perform factorization on a dense sub-block; (2) solve for a new 3D point visible in at least two known cameras (linear least squares); (3) solve for a new camera that sees at least three known 3D points (linear least squares). F. Rothganger, S. Lazebnik, C. Schmid, and J. Ponce. Segmenting, Modeling, and Matching Video Clips Containing Multiple Moving Objects. PAMI 2007.
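
A small sketch of step (2) above: triangulating a new 3D point from already-known affine cameras as a linear least-squares problem. The function name and data layout are illustrative assumptions:

```python
import numpy as np

def triangulate_affine(A_list, b_list, x_list):
    """A_list: known 2x3 affine camera matrices, b_list: their 2-vector
    translations, x_list: the observed 2D point in each of those views."""
    rows, rhs = [], []
    for A, b, x in zip(A_list, b_list, x_list):
        rows.append(A)                       # each view contributes x = A X + b
        rhs.append(np.asarray(x) - np.asarray(b))
    M = np.vstack(rows)                      # stacked (2k x 3) coefficient matrix
    y = np.concatenate(rhs)
    X, *_ = np.linalg.lstsq(M, y, rcond=None)
    return X                                 # least-squares estimate of the 3D point
```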

Further Factorization work
Factorization with uncertainty (Irani & Anandan, IJCV'02)
Factorization for indep. moving objects (Costeira and Kanade '94) (now)
Factorization for articulated objects (Yan and Pollefeys '05) (now)
Factorization for dynamic objects (Bregler et al. 2000, Brand 2001) (now)
Perspective factorization (Sturm & Triggs 1996, …) (next week)
Factorization with outliers and missing pts. (Jacobs '97 (affine), Martinek & Pajdla '01, Aanaes '02 (perspective))
Pollefeys

Structure from motion of multiple moving objects Pollefeys

Structure from motion of multiple moving objects Pollefeys

Shape interaction matrix The shape interaction matrix for articulated objects loses its block-diagonal structure, so Costeira and Kanade's approach is not usable for articulated bodies (it assumes independent motions). Pollefeys

Articulated motion subspaces Motion subspaces for articulated bodies intersect (Yan and Pollefeys, CVPR'05; Tresadern and Reid, CVPR'05). Joint: 1D intersection (joint = origin, rank = 8 - 1). Hinge: 2D intersection (hinge = z-axis, rank = 8 - 2). Exploit the rank constraint to obtain a better estimate; the same idea also applies to non-rigid parts under an additional condition (Yan & Pollefeys, 06?)

Results [Figures: segmentation and subspace-intersection results on a toy truck sequence and a student sequence] Pollefeys

Articulated shape and motion factorization (Yan and Pollefeys, 2006?) Automated kinematic chain building for articulated and non-rigid objects: estimate the principal angles between subspaces, compute affinities based on these principal angles, and compute a minimum spanning tree. Pollefeys

Structure from motion of deforming objects (Bregler et al ’00; Brand ‘01) Extend factorization approaches to deal with dynamic shapes Pollefeys

Representing dynamic shapes (figures: M. Brand) Represent a dynamic shape as a time-varying linear combination of basis shapes. Pollefeys
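
In symbols (our paraphrase of the Bregler et al. / Brand basis-shape model, not an equation taken from the slides), the shape in frame t and its image projection are

$$
S_t \;=\; \sum_{k=1}^{K} c_{t,k}\, S_k ,
\qquad
x_t \;=\; R_t \Bigl(\sum_{k=1}^{K} c_{t,k}\, S_k\Bigr) + d_t ,
$$

where the S_k are the basis shapes, the c_{t,k} are per-frame mixing coefficients, and R_t, d_t are the (scaled-orthographic) camera rotation and translation for frame t.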

Projecting dynamic shapes (figs. M.Brand) Rewrite: Pollefeys

Dynamic image sequences One image: (figs. M.Brand) Multiple images Pollefeys

Dynamic SfM factorization? Problem: find J so that M has proper structure Pollefeys

Dynamic SfM factorization (Bregler et al ’00) Assumption: SVD preserves order and orientation of basis shape components Pollefeys

Results (Bregler et al ’00) Pollefeys

Dynamic SfM factorization (Brand '01) Constraints to be satisfied for M are used to compute J; this is hard (different methods are possible, but they are neither simple nor optimal). Pollefeys

Non-rigid 3D subspace flow The same is also possible using optical flow instead of features, and it also takes uncertainty into account (Brand '01). Pollefeys

Results (Brand ’01) Pollefeys

Results (Bregler et al ’01) Pollefeys

Projective structure from motion
Given: m images of n fixed 3D points, zij xij = Pi Xj, i = 1, …, m, j = 1, …, n
Problem: estimate the m projection matrices Pi and the n 3D points Xj from the mn correspondences xij
[Figure: a single point Xj projected into three images as x1j, x2j, x3j by cameras P1, P2, P3]
Lazebnik

Projective structure from motion
Given: m images of n fixed 3D points, zij xij = Pi Xj, i = 1, …, m, j = 1, …, n
Problem: estimate the m projection matrices Pi and the n 3D points Xj from the mn correspondences xij
With no calibration info, cameras and points can only be recovered up to a 4×4 projective transformation Q: X → QX, P → PQ-1
We can solve for structure and motion when 2mn >= 11m + 3n - 15
For two cameras, at least 7 points are needed
Lazebnik
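
Plugging m = 2 into the counting inequality above confirms the seven-point claim:

$$ 2mn \ge 11m + 3n - 15 \;\;\Longrightarrow\;\; 4n \ge 22 + 3n - 15 = 3n + 7 \;\;\Longrightarrow\;\; n \ge 7. $$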

Projective SFM: Two-camera case Compute the fundamental matrix F between the two views. Take the first camera matrix to be [I|0] and the second to be [A|b], where b is the epipole (FTb = 0) and A = –[b×]F. Lazebnik F&P sec. 13.3.1
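
A hedged sketch of this construction in code, following the convention stated on the slide (FTb = 0, A = -[b×]F); it assumes F has already been estimated, and the function names are ours:

```python
import numpy as np

def skew(v):
    """3x3 cross-product (skew-symmetric) matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def cameras_from_F(F):
    """Canonical projective camera pair from a fundamental matrix F."""
    # Epipole b in the second image: the null vector of F^T (F^T b = 0).
    _, _, Vt = np.linalg.svd(F.T)
    b = Vt[-1]
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera [I | 0]
    A = -skew(b) @ F                                # second camera [A | b]
    P2 = np.hstack([A, b.reshape(3, 1)])
    return P1, P2
```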

Projective factorization The scaled measurement matrix D = MS has rank 4, where M stacks the cameras (3m × 4) and S the homogeneous points (4 × n). If we knew the depths z, we could factorize D to estimate M and S; if we knew M and S, we could solve for z. Solution: an iterative approach that alternates between the two steps. Lazebnik
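
A hedged sketch of that alternation (Sturm & Triggs-style iterative projective factorization), assuming homogeneous image points stored as an array x of shape (m, n, 3); names are illustrative, and a practical implementation would also renormalize the depth matrix between iterations:

```python
import numpy as np

def projective_factorization(x, n_iter=10):
    m, n = x.shape[0], x.shape[1]
    z = np.ones((m, n))                           # initial projective depths
    for _ in range(n_iter):
        # Scaled 3m x n measurement matrix with blocks z_ij * x_ij.
        D = np.concatenate([z[i][None, :] * x[i].T for i in range(m)], axis=0)
        # Rank-4 factorization: D ~ M S with M (3m x 4), S (4 x n).
        U, w, Vt = np.linalg.svd(D, full_matrices=False)
        M = U[:, :4] * w[:4]
        S = Vt[:4]
        # Re-estimate each depth as the third coordinate of the reprojection.
        for i in range(m):
            z[i] = (M[3 * i:3 * i + 3] @ S)[2]
    cameras = [M[3 * i:3 * i + 3] for i in range(m)]
    return cameras, S                             # projective cameras and points
```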

Sequential structure from motion
Initialize motion from two images using the fundamental matrix, and initialize structure.
For each additional view:
- Determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration / resectioning)
- Refine and extend the structure: compute new 3D points, and re-optimize existing points that are also seen by this camera (triangulation)
Refine structure and motion: bundle adjustment
Lazebnik

Bundle adjustment A non-linear method for refining structure and motion by minimizing the reprojection error, i.e. the discrepancy between each observed image point xij and the reprojection Pi Xj of its 3D point. Lazebnik
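
A minimal sketch of bundle adjustment as a non-linear least-squares problem, using scipy.optimize.least_squares. The parameterization here (raw 3x4 camera matrices and 3D points, no robust loss, dense Jacobian) is a simplifying assumption of ours; real systems use rotation parameterizations, robust losses, and sparse solvers:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, m, n, observations):
    """observations: list of (i, j, u, v), one entry per measured image point."""
    P = params[:12 * m].reshape(m, 3, 4)         # camera matrices
    X = params[12 * m:].reshape(n, 3)            # 3D points
    res = []
    for i, j, u, v in observations:
        Xh = np.append(X[j], 1.0)                # homogeneous 3D point
        x = P[i] @ Xh
        res.extend([x[0] / x[2] - u, x[1] / x[2] - v])
    return np.array(res)

def bundle_adjust(P0, X0, observations):
    m, n = len(P0), len(X0)
    x0 = np.concatenate([np.ravel(P0), np.ravel(X0)])
    sol = least_squares(reprojection_residuals, x0, args=(m, n, observations))
    return sol.x[:12 * m].reshape(m, 3, 4), sol.x[12 * m:].reshape(n, 3)
```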

Self-calibration Self-calibration (auto-calibration) is the process of determining intrinsic camera parameters directly from uncalibrated images. For example, when the images are acquired by a single moving camera, we can use the constraint that the intrinsic parameter matrix remains fixed for all the images: compute an initial projective reconstruction and find a 3D projective transformation matrix Q such that all camera matrices take the form Pi = K [Ri | ti]. We can also use constraints on the form of the calibration matrix, such as zero skew. Lazebnik

Summary: Structure from motion Ambiguity Affine structure from motion: factorization Dealing with missing data Projective structure from motion: two views Projective structure from motion: iterative factorization Bundle adjustment Self-calibration Lazebnik

Photo Tourism: Exploring Photo Collections in 3D (included presentation, available from http://phototour.cs.washington.edu/) Noah Snavely, Steven M. Seitz (University of Washington), Richard Szeliski (Microsoft Research) © 2006 Noah Snavely

http://grail.cs.washington.edu/rome/

“Rome in a day”: Coliseum video http://grail.cs.washington.edu/rome/

“Rome in a day”: Trevi video http://grail.cs.washington.edu/rome/

“Rome in a day”: St. Peters video http://grail.cs.washington.edu/rome/

Slide Credits Svetlana Lazebnik Marc Pollefeys Noah Snavely & co-authors

Today: SFM
SFM problem statement
Factorization
Projective SFM
Bundle Adjustment
Photo Tourism
"Rome in a day"

Roadmap
Previous: Image formation, filtering, local features, (Texture)…
Tues: Feature-based Alignment: stitching images together; homographies, RANSAC, warping, blending; global alignment of planar models
Today: Dense Motion Models: local motion / feature displacement; parametric optic flow
No classes next week: ICCV conference
Oct 6th: Stereo / 'Multi-view': estimating depth with known inter-camera pose
Oct 8th: 'Structure-from-motion': estimation of pose and 3D structure; factorization approaches; global alignment with 3D point models