1 Global Alignment and Structure from Motion Computer Vision CSE455, Winter 2008 Noah Snavely

2 Readings
Snavely, Seitz, Szeliski. Photo Tourism: Exploring Photo Collections in 3D. SIGGRAPH 2006. http://phototour.cs.washington.edu/Photo_Tourism.pdf
Supplementary reading: Szeliski and Kang. Recovering 3D shape and motion from image streams using non-linear least squares. J. Visual Communication and Image Representation, 1993. http://hpl.hp.com/techreports/Compaq-DEC/CRL-93-3.pdf

3 Problem: Drift
[Figure: strip of aligned images from (x_1, y_1) to (x_n, y_n), with a copy of the first image at the end]
– add another copy of the first image at the end
– this gives a constraint: y_n = y_1
– there are several ways to solve this problem:
  add displacement of (y_1 – y_n)/(n – 1) to each image after the first
  compute a global warp: y′ = y + ax
  run a big optimization problem, incorporating this constraint
    – best solution, but more complicated
    – known as "bundle adjustment"
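A minimal sketch of the first fix (distributing the loop-closure error), assuming the per-image vertical offsets are already known; the function name and the linear-ramp reading of "(y_1 – y_n)/(n – 1) per image" are choices made for the example.

    import numpy as np

    def distribute_drift(ty):
        """Spread the end-to-start error (y1 - yn) evenly over the sequence so
        that the appended copy of the first image lines up with the last one.
        ty: length-n array of cumulative vertical offsets, one per image."""
        ty = np.asarray(ty, dtype=float)
        n = len(ty)
        correction = (ty[0] - ty[-1]) / (n - 1)   # (y1 - yn) / (n - 1)
        return ty + correction * np.arange(n)     # image k shifts by k * correction

    # example: five images that drifted down by 8 pixels over the loop
    print(distribute_drift([0.0, -2.0, -4.0, -6.0, -8.0]))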

4 Global optimization
Minimize a global energy function:
– What are the variables? The translation t_j = (x_j, y_j) for each image
– What is the objective function? We have a set of matched features p_{i,j} = (u_{i,j}, v_{i,j})
– For each point match (p_{i,j}, p_{i,j+1}): p_{i,j+1} – p_{i,j} = t_{j+1} – t_j
[Figure: images I_1 … I_4 with matched features p_{1,1} … p_{4,4}]

5 Global optimization
[Figure: images I_1 … I_4 with matched features p_{1,1} … p_{4,4}]
p_{1,2} – p_{1,1} = t_2 – t_1
p_{1,3} – p_{1,2} = t_3 – t_2
p_{2,3} – p_{2,2} = t_3 – t_2
…
v_{4,1} – v_{4,4} = y_1 – y_4
minimize  Σ_{i,j} w_{i,j} ‖(t_{j+1} – t_j) – (p_{i,j+1} – p_{i,j})‖²
where w_{i,j} = 1 if feature i is visible in images j and j+1, 0 otherwise

6 Global optimization
Stacking all of these constraints gives a linear system A x = b, where A is 2m × 2n, x is 2n × 1 (the unknown translations), and b is 2m × 1 (the measured feature displacements)
[Figure: images I_1 … I_4 with matched features, and the block structure of A, x, b]

7 Global optimization
Defines a least squares problem: minimize ‖A x – b‖², with A 2m × 2n, x 2n × 1, b 2m × 1
Solution: x = (AᵀA)⁻¹ Aᵀ b (the normal equations)
Problem: there is no unique solution for x! (det(AᵀA) = 0)
We can add a global offset to a solution and get the same error

8 Ambiguity in global location
[Figure: the same reconstruction placed at (0,0), (–100,–100), and (200,–200)]
Each of these solutions has the same error
Called the gauge ambiguity
Solution: fix the position of one image (e.g., make the origin of the 1st image (0,0))
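A small sketch of the whole translation solve from slides 4–8, assuming matches are given as hypothetical (feature, image, displacement) tuples: each match contributes one pair of rows of A, and the gauge ambiguity is removed by pinning the first image at (0, 0).

    import numpy as np

    def align_translations(matches, n_images):
        """matches: list of (i, j, dx, dy), where (dx, dy) = p_{i,j+1} - p_{i,j}.
        Returns an (n_images, 2) array of translations t_j."""
        rows, rhs = [], []
        for _, j, dx, dy in matches:
            for axis, d in ((0, dx), (1, dy)):
                r = np.zeros(2 * n_images)
                r[2 * (j + 1) + axis] = 1.0   # +t_{j+1}
                r[2 * j + axis] = -1.0        # -t_j
                rows.append(r)
                rhs.append(d)
        # gauge fix: t_0 = (0, 0); without it A^T A is singular (det = 0)
        for axis in range(2):
            r = np.zeros(2 * n_images)
            r[axis] = 1.0
            rows.append(r)
            rhs.append(0.0)
        A = np.vstack(rows)                   # (2m + 2) x 2n
        b = np.asarray(rhs)
        t, *_ = np.linalg.lstsq(A, b, rcond=None)
        return t.reshape(n_images, 2)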

9 Solving for camera parameters
Recap: a camera is described by several parameters
– Translation t of the optical center from the origin of world coordinates
– Rotation R of the image plane
– focal length f, principal point (x′_c, y′_c), pixel size (s_x, s_y)
t and R are the "extrinsics"; f, the principal point, and the pixel size are the "intrinsics," collected in the matrix K
Projection equation: an image point x is the projection of a world point X through the 3×4 projection matrix Π

10 Solving for camera parameters
A camera is described by several parameters
– Translation T of the optical center from the origin of world coordinates
– Rotation R of the image plane
– focal length f, principal point (x′_c, y′_c), pixel size (s_x, s_y)
T and R are the "extrinsics"; f, the principal point, and the pixel size are the "intrinsics," collected in K
The projection matrix models the cumulative effect of all parameters
Useful to decompose it into a series of operations:
Π = K · [I_{3×3} | 0] · [R 0; 0ᵀ 1] · [I_{3×3} T; 0ᵀ 1]
(intrinsics · projection · rotation · translation, where I is the identity matrix)
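A sketch of how the pieces compose into a single 3×4 projection matrix; the f/s_x form of the intrinsics and the sign conventions are assumptions for the example, since these vary between texts.

    import numpy as np

    def projection_matrix(f, cx, cy, R, t, sx=1.0, sy=1.0):
        """Compose a 3x4 projection matrix from the parameters on this slide.
        Intrinsics K: focal length f, principal point (cx, cy), pixel sizes (sx, sy).
        Extrinsics:   rotation R (3x3), translation t (3,) of the optical center."""
        K = np.array([[f / sx, 0.0,    cx],
                      [0.0,    f / sy, cy],
                      [0.0,    0.0,    1.0]])
        Rt = np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])   # [R | t]
        return K @ Rt                                                   # 3x4

    def project(P, X):
        """Project a 3D point X (3,) to pixel coordinates via the homogeneous divide."""
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]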

11 Solving for camera rotation
Instead of spherically warping the images and solving for translations, we can directly solve for the rotation R_j of each camera
Can handle tilt / twist

12 Solving for camera rotation
What if we want to handle tilt / twist?
– [images here]
Instead of doing a spherical warp and solving for translations, directly solve for the rotation R_i of each camera
[Figure: a pixel (u, v, f) on the image plane at focal length f, mapped by R to a ray direction (x, y, z)]

13 Solving for rotations
[Figure: images I_1 and I_2 with rotations R_1, R_2, focal length f, and matched features (u_11, v_11) and (u_12, v_12)]
(u_11, v_11, f) = p_11
For a correct match, the rotated rays should agree: R_1 p_11 ≅ R_2 p_22

14 Solving for rotations
minimize  Σ_{i,j} w_{i,j} ‖R_{j+1} p_{i,j+1} – R_j p_{i,j}‖²
(w_{i,j} = 1 if feature i is matched between images j and j+1, 0 otherwise)

15 3D rotations How many degrees of freedom are there? How do we represent a rotation? – Rotation matrix (too many degrees of freedom) – Euler angles (e.g. yaw, pitch, and roll) – Quaternions (4-vector on unit sphere) Usually involves non-linear optimization
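A small illustration of these representations using SciPy's Rotation class (assumed available): the same 3-degree-of-freedom rotation expressed as Euler angles, a unit quaternion, and a 3×3 matrix.

    import numpy as np
    from scipy.spatial.transform import Rotation

    r = Rotation.from_euler("zyx", [30.0, 10.0, -5.0], degrees=True)  # yaw, pitch, roll

    q = r.as_quat()        # 4-vector (x, y, z, w) on the unit sphere
    R = r.as_matrix()      # 3x3 orthonormal matrix, det(R) = 1

    print(np.allclose(R @ R.T, np.eye(3)))    # True: rotation matrices are orthogonal
    print(np.isclose(np.linalg.norm(q), 1))   # True: the quaternion has unit norm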

16 Solving for rotations and translations: structure from motion (SfM)
Unlike with panoramas, we often need to solve for structure (3D point positions) as well as motion (camera parameters)

17 Structure from motion
[Figure: three cameras with poses (R_1, t_1), (R_2, t_2), (R_3, t_3) observing points p_1 … p_7]
minimize f(R, T, P)

18 [Figure: 3D points x_1 … x_7 projecting to features p_{1,1}, p_{1,2}, p_{1,3} in Images 1–3, with camera poses (R_1, t_1), (R_2, t_2), (R_3, t_3)]

19 Structure from motion
Input: images with points in correspondence p_{i,j} = (u_{i,j}, v_{i,j})
Output:
– structure: 3D location x_i for each point p_i
– motion: camera parameters R_j, t_j
Objective function: minimize reprojection error
[Figure: reconstruction shown from the side and from the top]

20 [Figure: 3D points x_1 … x_7 projecting to features p_{1,1}, p_{1,2}, p_{1,3} in Images 1–3, with camera poses (R_1, t_1), (R_2, t_2), (R_3, t_3)]

21 SfM objective function
Given a point x and rotation and translation R, t
Minimize the sum of squared reprojection errors:
g(X, R, T) = Σ_i Σ_j w_{i,j} ‖ P(x_i; R_j, t_j) – p_{i,j} ‖²
where P(x_i; R_j, t_j) is the predicted image location and p_{i,j} the observed image location
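A direct transcription of g as code, with the projection model P(x; R, t) = K(Rx + t) followed by the perspective divide; the container layout (a list of (i, j, u, v) observations) is an assumption made for the example.

    import numpy as np

    def project_point(K, R, t, X):
        """Predicted image location of 3D point X in camera (R, t)."""
        x = K @ (R @ X + t)
        return x[:2] / x[2]               # perspective division

    def reprojection_error(K, cameras, points3d, observations):
        """cameras: list of (R, t); points3d: dict i -> X_i;
        observations: list of (i, j, u, v) feature measurements p_ij."""
        g = 0.0
        for i, j, u, v in observations:
            R, t = cameras[j]
            pred = project_point(K, R, t, points3d[i])
            g += np.sum((pred - np.array([u, v])) ** 2)
        return g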

22 Solving structure from motion
Minimizing g is difficult:
– g is non-linear due to rotations, perspective division
– lots of parameters: 3 for each 3D point, 6 for each camera
– difficult to initialize
– gauge ambiguity: error is invariant to a similarity transform (translation, rotation, uniform scale)
Many techniques use non-linear least-squares (NLLS) optimization (bundle adjustment)
– Levenberg-Marquardt is one common algorithm for NLLS
– Lourakis, The Design and Implementation of a Generic Sparse Bundle Adjustment Software Package Based on the Levenberg-Marquardt Algorithm, http://www.ics.forth.gr/~lourakis/sba/
– http://en.wikipedia.org/wiki/Levenberg-Marquardt_algorithm
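A minimal bundle-adjustment sketch in the spirit of this slide, using SciPy's general-purpose non-linear least-squares solver rather than the SBA package: cameras are packed as axis-angle plus translation (6 parameters each), points as 3 parameters each, and the reprojection residuals are handed to the optimizer. K and the (i, j, u, v) observation list are assumed inputs; the robust loss option anticipates the next slide.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(params, K, n_cams, n_pts, observations):
        cams = params[:6 * n_cams].reshape(n_cams, 6)   # axis-angle + translation
        pts = params[6 * n_cams:].reshape(n_pts, 3)
        res = []
        for i, j, u, v in observations:
            R = Rotation.from_rotvec(cams[j, :3]).as_matrix()
            x = K @ (R @ pts[i] + cams[j, 3:])
            res.extend(x[:2] / x[2] - (u, v))           # predicted - observed
        return np.asarray(res)

    def bundle_adjust(params0, K, n_cams, n_pts, observations):
        # loss="huber" down-weights outlier matches; for large problems a sparse
        # Jacobian structure would be supplied as well.
        return least_squares(residuals, params0, method="trf", loss="huber",
                             args=(K, n_cams, n_pts, observations)).x

For problems the size of Photo Tourism, it is the sparsity of the Jacobian (each residual touches only one camera and one point) that makes the optimization tractable.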

23 Extensions to SfM Can also solve for intrinsic parameters (focal length, radial distortion, etc.) Can use a more robust function than squared error, to avoid fitting to outliers For more information, see: Triggs, et al, “Bundle Adjustment – A Modern Synthesis”, Vision Algorithms 2000.

24 Photo Tourism Structure from motion on Internet photo collections

25

26 Photo Tourism

27 Photo Tourism overview
Input photographs → Scene reconstruction → Photo Explorer
Scene reconstruction outputs: relative camera positions and orientations, point cloud, sparse correspondence
[Note: change to Trevi for consistency]

28 Scene reconstruction
Feature detection → Pairwise feature matching → Correspondence estimation → Incremental structure from motion

29 Feature detection Detect features using SIFT [Lowe, IJCV 2004]

30 Feature detection Detect features using SIFT [Lowe, IJCV 2004]

31 Feature detection Detect features using SIFT [Lowe, IJCV 2004]
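The detection step in a few lines of OpenCV (SIFT is available in the main module from OpenCV 4.4 onward); the filename is a placeholder.

    import cv2

    img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    print(len(keypoints), "SIFT features,", descriptors.shape, "descriptor array")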

32 Feature matching Match features between each pair of images
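One common way to realize this step, a sketch rather than the paper's exact pipeline: brute-force nearest-neighbour matching of SIFT descriptors with Lowe's ratio test to discard ambiguous matches ("img1.jpg"/"img2.jpg" are placeholder filenames).

    import cv2

    sift = cv2.SIFT_create()
    img1 = cv2.imread("img1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("img2.jpg", cv2.IMREAD_GRAYSCALE)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in candidates if m.distance < 0.75 * n.distance]   # ratio test
    print(len(good), "putative matches")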

33 Feature matching Refine matching using RANSAC [Fischler & Bolles 1987] to estimate fundamental matrices between pairs
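A sketch of the refinement step, continuing from the matching example above (kp1, kp2, good): estimate the fundamental matrix for the pair with RANSAC and keep only the inlier matches.

    import cv2
    import numpy as np

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # method = RANSAC, reprojection threshold = 3 px, confidence = 0.99
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]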

34 Incremental structure from motion

35

36

37 Problem size Trevi Fountain collection 466 input photos + > 100,000 3D points = very large optimization problem

38 Photo Tourism overview Scene reconstruction Photo Explorer Input photographs

39 Photo Explorer

40 Demo

41 Overhead map

42 Prague Old Town Square

43 Annotations

44 Reproduced with permission of Yahoo! Inc. © 2005 by Yahoo! Inc. YAHOO! and the YAHOO! logo are trademarks of Yahoo! Inc.

45 Annotations

46 Yosemite

47

48

49 Two-view structure from motion
Simpler case: can consider motion independent of structure
Let's first consider the case where K is known
– Each image point (u_{i,j}, v_{i,j}, 1) can be multiplied by K⁻¹ to form a 3D ray
– We call this the calibrated case
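The calibrated case in one line: with K known, a pixel measurement becomes a viewing ray by applying K⁻¹ to its homogeneous coordinates.

    import numpy as np

    def pixel_to_ray(K, u, v):
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        return ray / np.linalg.norm(ray)   # unit direction through the pixel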

50 Notes on two-view geometry
[Figure: epipolar plane and epipolar line through a matched pair p, p′]
How can we express the epipolar constraint?
Answer: there is a 3×3 matrix E such that p′ᵀ E p = 0
E is called the essential matrix

51 Properties of the essential matrix
[Figure: epipolar plane, epipolar line Ep, epipoles e and e′]
p′ᵀ E p = 0
Ep is the epipolar line associated with p
e and e′ are called epipoles: Ee = 0 and Eᵀe′ = 0
E can be solved for with 5 point matches
– see Nistér, An efficient solution to the five-point relative pose problem. PAMI 2004.
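A hedged sketch of the calibrated two-view solve with OpenCV: findEssentialMat runs a five-point solver inside RANSAC, and recoverPose factors E into the relative rotation and a unit-norm translation. pts1, pts2 (matched pixel coordinates) and the shared intrinsics K are assumed inputs.

    import cv2
    import numpy as np

    def relative_pose(pts1, pts2, K):
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t   # translation is recovered only up to scale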

52 Properties of the essential matrix
[Figure: epipolar plane, epipolar line Ep, epipoles e and e′]

53 The Fundamental matrix
If K is not known, then we use a related matrix called the Fundamental matrix, F
– Called the uncalibrated case
F = K⁻ᵀ E K⁻¹
F can be solved for linearly with eight points, or non-linearly with six or seven points

54 More information
Paper: "Photo Tourism: Exploring photo collections in 3D," http://phototour.cs.washington.edu/Photo_Tourism.pdf
http://phototour.cs.washington.edu
http://labs.live.com/photosynth

55 Properties of the Essential Matrix
Eᵀ p′ is the epipolar line associated with p′.
Ee = 0 and Eᵀe′ = 0.
E is singular.
E has two equal non-zero singular values (Huang and Faugeras, 1989).

56 Fundamental matrix

57 Give example here

58 Eight-point algorithm
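A minimal sketch of the normalized eight-point algorithm, assuming the inputs are N×2 arrays of matched pixel coordinates: each correspondence gives one linear constraint on the nine entries of F, the solution is the smallest right singular vector, and the rank-2 constraint is enforced afterwards.

    import numpy as np

    def normalize(pts):
        """One common normalization: center the points and rescale so the
        typical coordinate magnitude is about sqrt(2)."""
        mean, scale = pts.mean(axis=0), np.sqrt(2) / pts.std()
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1.0]])
        homog = np.column_stack([pts, np.ones(len(pts))])
        return (T @ homog.T).T, T

    def eight_point(pts1, pts2):
        x1, T1 = normalize(np.asarray(pts1, float))
        x2, T2 = normalize(np.asarray(pts2, float))
        # each match x2^T F x1 = 0 gives one row of the 9-column design matrix
        A = np.column_stack([x2[:, 0:1] * x1, x2[:, 1:2] * x1, x1])
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)
        U, S, Vt = np.linalg.svd(F)
        F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank 2
        F = T2.T @ F @ T1                         # undo the normalization
        return F / F[2, 2]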

59 Multi-view structure from motion

60 Factorization [Steal partly from Rick’s slides]
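A short sketch of Tomasi-Kanade-style affine factorization (a generic version, not taken from Rick's slides): stack the centered tracks into a 2m×n measurement matrix and read cameras and structure off its rank-3 SVD, leaving out the metric-upgrade step.

    import numpy as np

    def factorize(tracks):
        """tracks: array of shape (m, n, 2), point i observed in every view j."""
        m, n, _ = tracks.shape
        W = tracks.transpose(0, 2, 1).reshape(2 * m, n)  # (u-row, v-row) per view
        W = W - W.mean(axis=1, keepdims=True)            # subtract per-row centroid
        U, S, Vt = np.linalg.svd(W, full_matrices=False)
        M = U[:, :3] * np.sqrt(S[:3])                    # 2m x 3 affine cameras
        X = np.sqrt(S[:3])[:, None] * Vt[:3]             # 3 x n structure
        return M, X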

61 Bundle adjustment [copy some from Rick’s slides]

62 What about bad matches?

63 What if the frames aren’t given in order?

64 Photo Tourism [describe the process here]

65 Photo Tourism [show some results here]

