Structure from Motion (With a digression on SVD).


1 Structure from Motion (With a digression on SVD)

2 Structure from Motion
For now, static scene and moving camera
– Equivalently, rigidly moving scene and static camera
Limiting case of stereo with many cameras
Limiting case of multiview camera calibration with unknown target
Given n points and N camera positions, have 2nN equations and 3n + 6N unknowns

3 Approaches
Obtaining point correspondences
– Optical flow
– Stereo methods: correlation, feature matching
Solving for points and camera motion
– Nonlinear minimization (bundle adjustment)
– Various approximations…

4 Orthographic Approximation
Simplest SfM case: camera approximated by orthographic projection
[Figure: perspective vs. orthographic projection]

5 Weak Perspective
The orthographic assumption is often a good approximation when imaging through a telephoto lens (weak perspective)
[Figure: weak perspective projection]

6 Consequences of Orthographic Projection
Scene can be recovered up to scale
Translation perpendicular to the image plane can never be recovered

7 Orthographic Structure from Motion
Method due to Tomasi & Kanade, 1992
Assume n points in space, p_1 … p_n
Observed at N points in time, at image coordinates (x_ij, y_ij)
– Feature tracking, optical flow, etc.

8 Orthographic Structure from Motion
Write down the matrix of data (points → across the columns, frames ↓ down the rows):

W = [ x_11 … x_1n
       ⋮       ⋮
      x_N1 … x_Nn
      y_11 … y_1n
       ⋮       ⋮
      y_N1 … y_Nn ]     (W is 2N × n)

9 Orthographic Structure from Motion
Step 1: find translation
Translation parallel to the viewing direction cannot be obtained
Translation perpendicular to the viewing direction equals the motion of the average position of all points

10 Orthographic Structure from Motion
Subtract the average of each row:
x̃_ij = x_ij − (1/n) Σ_k x_ik,  ỹ_ij = y_ij − (1/n) Σ_k y_ik
Call the result the registered measurement matrix W̃
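A minimal NumPy sketch of this step, assuming the 2N × n measurement-matrix layout from slide 8 (x rows stacked above y rows); the function name is mine, not from the slides:

```python
import numpy as np

def center_measurements(W):
    """Subtract each row's mean from the 2N x n measurement matrix.

    Returns the registered matrix W_tilde and the row means t, which
    give each frame's translation perpendicular to the viewing
    direction (the average image position of the points).
    """
    t = W.mean(axis=1, keepdims=True)   # 2N x 1 row averages
    W_tilde = W - t                     # registered measurement matrix
    return W_tilde, t
```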

11 Orthographic Structure from Motion
Step 2: try to find rotation
Rotation at each frame i defines local coordinate axes î_i, ĵ_i, and k̂_i
Then, after subtracting translation:
x̃_ij = î_i · p_j,  ỹ_ij = ĵ_i · p_j

12 Orthographic Structure from Motion
So, can write W̃ = R S, where R is a 2N × 3 "rotation" matrix (the rows î_i^T stacked above the rows ĵ_i^T) and S is a 3 × n "shape" matrix (columns p_j)

13 Orthographic Structure from Motion
Goal is to factor W̃
Before we do, observe that rank(W̃) = 3 (in the ideal case with no noise)
Proof:
– Rank of R is 3 unless there is no rotation
– Rank of S is 3 iff we have noncoplanar points
– Product of two matrices of rank 3 has rank 3
With noise, rank(W̃) might be > 3

14 Digression: Singular Value Decomposition (SVD)
Handy mathematical technique that has application to many problems
Given any m × n matrix A, algorithm to find matrices U, V, and W such that
A = U W V^T
U is m × n and orthonormal
V is n × n and orthonormal
W is n × n and diagonal

15 SVD
Treat as black box: code widely available (svd(A,0) in Matlab)
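For instance, the NumPy analogue of Matlab's economy-size svd(A,0); the random A here is just an assumed example:

```python
import numpy as np

A = np.random.rand(6, 4)                          # any m x n matrix
U, w, Vt = np.linalg.svd(A, full_matrices=False)  # economy-size SVD
print(U.shape, w.shape, Vt.shape)                 # (6, 4) (4,) (4, 4)
print(np.allclose(A, U @ np.diag(w) @ Vt))        # True: A = U W V^T
```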

16 SVD
The w_i are called the singular values of A
If A is singular, some of the w_i will be 0
In general rank(A) = number of nonzero w_i
SVD is mostly unique (up to permutation of the singular values, or if some w_i are equal)

17 SVD and Inverses
Why is SVD so useful?
Application #1: inverses
A^-1 = (V^T)^-1 W^-1 U^-1 = V W^-1 U^T
This fails when some w_i are 0
– It's supposed to fail: singular matrix
Pseudoinverse: if w_i = 0, set 1/w_i to 0 (!)
– "Closest" matrix to inverse
– Defined for all (even non-square) matrices
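A sketch of that recipe, with a tolerance standing in for "exactly zero" since in floating point tiny w_i should be treated as 0 (np.linalg.pinv does essentially the same thing):

```python
import numpy as np

def pseudoinverse(A, tol=1e-10):
    """Moore-Penrose pseudoinverse via SVD: if w_i ~ 0, set 1/w_i to 0."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    w_inv = np.zeros_like(w)
    keep = w > tol * w.max()          # treat tiny singular values as zero
    w_inv[keep] = 1.0 / w[keep]
    return Vt.T @ np.diag(w_inv) @ U.T
```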

18 SVD and Least Squares
Solving Ax = b by least squares:
x = pseudoinverse(A) · b
Compute the pseudoinverse using SVD
– Lets you see if the data is singular
– Even if not singular, the ratio of max to min singular values (the condition number) tells you how stable the solution will be
– Set 1/w_i to 0 if w_i is small (even if not exactly 0)
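A sketch of the same recipe applied to Ax = b, reporting the condition number as the stability diagnostic the slide mentions:

```python
import numpy as np

def lstsq_svd(A, b, tol=1e-10):
    """Least-squares solution of Ax = b via the SVD pseudoinverse."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    cond = w.max() / w.min() if w.min() > 0 else np.inf
    print("condition number:", cond)   # large -> unstable solution
    w_inv = np.zeros_like(w)
    keep = w > tol * w.max()           # set 1/w_i to 0 if w_i is small
    w_inv[keep] = 1.0 / w[keep]
    return Vt.T @ (w_inv * (U.T @ b))
```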

19 SVD and Eigenvectors
Let A = U W V^T, and let x_i be the i-th column of V
Consider A^T A x_i:
A^T A x_i = (V W U^T)(U W V^T) x_i = V W^2 V^T x_i = V W^2 e_i = w_i^2 x_i
So the columns of V are eigenvectors of A^T A, and the elements of W are the square roots of its eigenvalues
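A quick numerical check of that identity (the random A is just an assumed example):

```python
import numpy as np

A = np.random.rand(5, 3)
U, w, Vt = np.linalg.svd(A, full_matrices=False)
x0 = Vt.T[:, 0]                                 # first column of V
print(np.allclose(A.T @ A @ x0, w[0]**2 * x0))  # True: A^T A x_i = w_i^2 x_i
```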

20 SVD and Eigenvectors
Eigenvectors of A^T A turned up in finding the least-squares solution to Ax = 0
Solution is the eigenvector corresponding to the smallest eigenvalue
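So, given the SVD, the constrained problem min ||Ax|| subject to ||x|| = 1 is one line; a sketch:

```python
import numpy as np

def solve_homogeneous(A):
    """Least-squares solution of Ax = 0 with ||x|| = 1:
    the right singular vector with the smallest singular value."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```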

21 SVD and Matrix Similarity
One common definition for the norm of a matrix is the Frobenius norm:
||A||_F^2 = Σ_i Σ_j a_ij^2
The Frobenius norm can be computed from the SVD:
||A||_F^2 = Σ_i w_i^2
So changes to a matrix can be evaluated by looking at changes to its singular values
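A quick check of the identity ||A||_F^2 = Σ w_i^2:

```python
import numpy as np

A = np.random.rand(4, 6)                        # assumed example matrix
w = np.linalg.svd(A, compute_uv=False)          # singular values only
print(np.allclose((A**2).sum(), (w**2).sum()))  # True: Frobenius identity
```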

22 SVD and Matrix Similarity
Suppose you want to find the best rank-k approximation to A
Answer: set all but the largest k singular values to zero
Can form a compact representation by eliminating the columns of U and V corresponding to the zeroed w_i
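A sketch of that truncation (this is the Eckart–Young result, stated here without proof):

```python
import numpy as np

def rank_k_approx(A, k):
    """Best rank-k approximation to A: zero all but the k largest w_i,
    keeping only the corresponding k columns of U and V."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(w[:k]) @ Vt[:k]
```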

23 SVD and PCA
Principal Components Analysis (PCA): approximating a high-dimensional data set with a lower-dimensional subspace
[Figure: 2D data points plotted against the original axes, with the first and second principal components drawn through them]

24 SVD and PCA
Take the data matrix with points as rows and compute its SVD
Columns of V_k are the principal components
Value of w_i gives the importance of each component
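A minimal PCA-via-SVD sketch; note that it centers the data first, a standard step the slide leaves implicit:

```python
import numpy as np

def pca(X, k):
    """PCA via SVD of the data matrix (points as rows).

    Returns the top-k principal components (columns of V) and the
    singular values w_i that rank each component's importance."""
    Xc = X - X.mean(axis=0)          # center the data first
    _, w, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T, w[:k]
```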

25 SVD and Orthogonalization
The matrix U V^T is the "closest" orthonormal matrix to A
Yet another useful application of the matrix-approximation properties of SVD
Much more stable numerically than Gram-Schmidt orthogonalization
Use: find the rotation given a general affine matrix
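A sketch of that last use, extracting the rotation from a general affine matrix; the determinant fix-up for reflections is a standard extra step not stated on the slide:

```python
import numpy as np

def nearest_rotation(A):
    """Closest orthonormal matrix to A: replace W by I in A = U W V^T."""
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:    # force a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R
```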

26 End of Digression
Goal is to factor W̃ into R and S
Let's apply SVD! W̃ = U W V^T (the diagonal W here is from the SVD, not the measurement matrix)
But W̃ should have rank 3 ⇒ all but 3 of the w_i should be 0
Extract the top 3 w_i, together with the corresponding columns of U and V

27 Factoring for Orthographic Structure from Motion
After extracting columns, U_3 has dimensions 2N × 3 (just what we wanted for R)
W_3 V_3^T has dimensions 3 × n (just what we wanted for S)
So, let R* = U_3, S* = W_3 V_3^T
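A sketch of the factorization step, assuming W_tilde is the registered matrix produced by center_measurements above:

```python
import numpy as np

def factor_rank3(W_tilde):
    """Factor the registered measurements: R* = U_3, S* = W_3 V_3^T."""
    U, w, Vt = np.linalg.svd(W_tilde, full_matrices=False)
    R_star = U[:, :3]                 # 2N x 3, candidate "rotation"
    S_star = np.diag(w[:3]) @ Vt[:3]  # 3 x n, candidate "shape"
    return R_star, S_star
```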

28 Affine Structure from Motion
The î and ĵ rows of R* are not, in general, unit length and perpendicular
We have found motion (and therefore shape) up to an affine transformation
This is the best we could do if we didn't assume an orthographic camera

29 Ensuring Orthogonality
Since W̃ can be factored as R* S*, it can also be factored as (R* Q)(Q^-1 S*), for any invertible 3 × 3 matrix Q
So, search for a Q such that R = R* Q has the properties we want

30 Ensuring Orthogonality
Want, for each frame i:
î_i^T î_i = 1,  ĵ_i^T ĵ_i = 1,  î_i^T ĵ_i = 0
or, writing a_i^T and b_i^T for the corresponding rows of R*:
a_i^T Q Q^T a_i = 1,  b_i^T Q Q^T b_i = 1,  a_i^T Q Q^T b_i = 0
Let T = Q Q^T
Linear equations for the elements of T – solve by least squares
Ambiguity (a global rotation remains) – add constraints

31 Ensuring Orthogonality
Have found T = Q Q^T
Find Q by taking the "square root" of T
– Cholesky decomposition if T is positive definite
– General algorithms (e.g. sqrtm in Matlab)
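A sketch of the whole metric-upgrade step under the slide-30 constraints; the helper name and the row layout (î rows stacked above ĵ rows, matching the earlier sketches) are my assumptions, and the linear system is my own encoding of those constraints:

```python
import numpy as np

def metric_upgrade(R_star):
    """Find Q with R = R* Q orthonormal: solve for T = Q Q^T by least
    squares, then take a 'square root' of T via Cholesky."""
    N = R_star.shape[0] // 2
    A_rows, b = [], []

    def quad_row(a, c):
        # Coefficients of a^T T c in the 6 unknowns of symmetric T:
        # T = [[t0, t1, t2], [t1, t3, t4], [t2, t4, t5]]
        return [a[0]*c[0],
                a[0]*c[1] + a[1]*c[0],
                a[0]*c[2] + a[2]*c[0],
                a[1]*c[1],
                a[1]*c[2] + a[2]*c[1],
                a[2]*c[2]]

    for f in range(N):
        a_f, b_f = R_star[f], R_star[N + f]   # i-hat and j-hat rows
        A_rows += [quad_row(a_f, a_f), quad_row(b_f, b_f), quad_row(a_f, b_f)]
        b += [1.0, 1.0, 0.0]

    t = np.linalg.lstsq(np.array(A_rows), np.array(b), rcond=None)[0]
    T = np.array([[t[0], t[1], t[2]],
                  [t[1], t[3], t[4]],
                  [t[2], t[4], t[5]]])
    return np.linalg.cholesky(T)   # works if T is positive definite
```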

32 Orthographic Structure from Motion
Let's recap:
– Write down the matrix of observations
– Find the translation from the average position
– Subtract the translation
– Factor the matrix using SVD
– Write down the equations for orthogonalization
– Solve using least squares and a matrix square root
At the end, get the matrix R = R* Q of camera rotations and the matrix S = Q^-1 S* of 3D points (an end-to-end sketch follows below)
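Putting the pieces together, an end-to-end sketch chaining the functions defined above (center_measurements, factor_rank3, and metric_upgrade are my names, not from the slides):

```python
import numpy as np

def tomasi_kanade(W):
    """Orthographic SfM on a 2N x n measurement matrix W."""
    W_tilde, t = center_measurements(W)     # steps 1-3: remove translation
    R_star, S_star = factor_rank3(W_tilde)  # step 4: SVD factorization
    Q = metric_upgrade(R_star)              # steps 5-6: orthogonalize
    R = R_star @ Q                          # camera rotations (2N x 3)
    S = np.linalg.inv(Q) @ S_star           # 3D points (3 x n)
    return R, S, t
```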

33 Results
Image sequence [Tomasi & Kanade]

34 Results
Tracked features [Tomasi & Kanade]

35 Results
Reconstructed shape: front view and top view [Tomasi & Kanade]

