1 3D Model Acquisition by Tracking 2D Wireframes (11/25/03). Presenter: Jing Han Shiau. Paper by M. Brown, T. Drummond and R. Cipolla, Department of Engineering, University of Cambridge.

2 Motivation 3D models are needed in graphics, reverse engineering and model-based tracking. We want to be able to do real-time tracking.

3 System Input/Output

4 Other Approaches
- Optical flow / structure from motion (Tomasi & Kanade, 1992): acquires a dense set of depth measurements, but is a batch method and not real-time.
- Point matching between images: feature extraction followed by geometric constraint enforcement.
- Edge extraction followed by line matching between 3 views using trifocal tensors.

5 Improvement Previous approaches used single line segments, each of which adds 4 degrees of freedom. 2D wireframes allow high-level user constraints that reduce the degrees of freedom to a 6-degree-of-freedom Euclidean motion constraint.

6 3D Positions of Lines
- Internal camera parameters are known.
- Initial and final camera matrices are known by querying the robot (arm) for the camera pose.
- Edge correspondence is preserved using tracking.
- The 3D positions of lines are computed by triangulation.

7 Single Line Tracking
- Sample points are initialized along each line segment.
- A search is made perpendicular to the line for local maxima of the intensity gradient.
- The new line position is chosen to minimize the sum of squared distances to the measured edge positions.
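
A minimal sketch of this perpendicular edge search, assuming a grayscale NumPy image; the function name, sampling density and search range are illustrative, not the authors' implementation:

```python
import numpy as np

def edge_offsets(image, p0, p1, n_samples=20, search=10):
    """Search perpendicular to the segment p0-p1 for the strongest
    intensity-gradient response at each sample point.  Returns the signed
    perpendicular offset (in pixels) of the best edge for every sample."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    direction = (p1 - p0) / np.linalg.norm(p1 - p0)
    normal = np.array([-direction[1], direction[0]])      # unit perpendicular

    offsets = []
    for t in np.linspace(0.0, 1.0, n_samples):
        centre = p0 + t * (p1 - p0)
        steps = np.arange(-search, search + 1)
        # Intensity profile along the perpendicular search line.
        xs = np.clip((centre[0] + steps * normal[0]).round().astype(int), 0, image.shape[1] - 1)
        ys = np.clip((centre[1] + steps * normal[1]).round().astype(int), 0, image.shape[0] - 1)
        profile = image[ys, xs].astype(float)
        offsets.append(steps[np.argmax(np.abs(np.gradient(profile)))])
    return np.array(offsets)
```

The new line position would then be the least-squares fit over these measured offsets, as described above.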

8 Single Line Tracking

9 Triangulation (Single Line Tracking) The 3D line is found by intersecting the rays corresponding to the ends of the line in the first image with the plane defined by the line in the second image.

10 Finding 3D Line Intersecting the world ray defined by the point (u, v) in the first image with the world plane defined by the line in the second image is equivalent to solving a set of linear equations.
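
A sketch of that ray-plane intersection in homogeneous coordinates, assuming 3×4 projection matrices and a homogeneous 3-vector for the 2D line in the second view; the helper names are mine, not the paper's:

```python
import numpy as np

def backproject_ray(P, x):
    """Back-project image point x = (u, v) from camera P (3x4) as a 3D ray:
    points on the ray are A + mu * C in homogeneous coordinates."""
    A = np.linalg.pinv(P) @ np.array([x[0], x[1], 1.0])   # one point on the ray
    C = np.linalg.svd(P)[2][-1]                           # camera centre (null vector of P)
    return A, C

def line_point_from_two_views(P1, x, P2, line2):
    """3D point where the ray through x in the first view meets the plane
    back-projected from the 2D line `line2` in the second view."""
    plane = P2.T @ np.asarray(line2, float)               # pi = P2^T l
    A, C = backproject_ray(P1, x)
    mu = -(plane @ A) / (plane @ C)
    X = A + mu * C
    return X[:3] / X[3]                                   # dehomogenise
```

Applying this to both endpoints of the tracked segment in the first image recovers the 3D line.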

11 Limitations Object edges which project to epipolar lines may not be tracked. In the case of a pure camera translation, epipolar lines move parallel to themselves (radially with respect to the epipole), but the component of a line's motion parallel to itself is not observable locally.

12 2D Wireframe Tracking As in single line tracking, a least squares method is used to minimize the sum of the squared edge measurements from the wireframe.

13 2D Wireframe Tracking The vertex image motions are stacked into the P-dimensional vector p, and the measurements are stacked into the D-dimensional vector d_0. d is the new measurement vector due to the motion p, and M is the D×P measurement matrix. Least squares is used to minimize the sum of squared measurement errors |d|².

14 2D Wireframe Tracking The least squares solution is in general not unique: it can contain arbitrary components in the right nullspace of M, corresponding to displacements of the vertex image positions that do not change the measurements. Adding a small constant to the diagonal prevents this instability.
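
A minimal version of that solve, written as regularised normal equations with the small constant added to the diagonal of MᵀM (one standard way to realise the stabilisation described above; the slide's exact formula is not reproduced in the transcript):

```python
import numpy as np

def wireframe_motion(M, d0, eps=1e-6):
    """Regularised least-squares estimate of the vertex image motion p.

    M   : D x P measurement matrix
    d0  : D-dimensional vector of edge measurements
    eps : small diagonal constant keeping the normal equations invertible
          when M has a right nullspace
    """
    P = M.shape[1]
    return np.linalg.solve(M.T @ M + eps * np.eye(P), M.T @ d0)
```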

15 3D Model Building
- 2D wireframe tracking preserves point correspondence.
- The 3D position of the vertices can be calculated from 2 views using triangulation.
- Observations from multiple views can be combined by maintaining a 3D pdf, p(X), for each vertex.
- The 3D pdf is updated on the basis of the tracked image position of the point and the known camera.

16 3D Model Building The 3D pdf has surfaces of constant probability defined by rays through a circle in the image plane. This pdf is approximated as a 3D Gaussian with infinite variance in the direction of the ray through the image point, and equal, finite variance in the perpendicular plane.
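
One way to encode such a pdf numerically is through its information (inverse covariance) matrix, which stays finite when the variance along the ray is infinite; this is an assumption about representation rather than the paper's stated parameterisation:

```python
import numpy as np

def ray_information_matrix(ray_dir, sigma):
    """Information matrix (inverse covariance) of a 3D Gaussian with infinite
    variance along the viewing ray and equal variance sigma^2 in the
    perpendicular plane.  Infinite variance along the ray corresponds to
    exactly zero information in that direction."""
    r = np.asarray(ray_dir, float)
    r = r / np.linalg.norm(r)
    return (np.eye(3) - np.outer(r, r)) / sigma**2
```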

17 3D Model Building The 3D pdf is the likelihood of the tracked point position, conditioned on the current 3D position estimate: p(w|X). Multiplying this by the prior pdf gives the posterior pdf.

18 3D Model Building X is Gaussian with mean m_p and covariance matrix C_p; w|X is Gaussian with mean m_l and covariance matrix C_l; and X|w is Gaussian with mean m and covariance matrix C. These are the Kalman filter equations used to maintain a 3D pdf for each point.
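
A sketch of the corresponding Gaussian fusion in information form, which is equivalent to the Kalman update for a linear Gaussian measurement and copes with a rank-deficient likelihood such as the ray pdf above; the function name is mine:

```python
import numpy as np

def fuse_gaussians(m_p, C_p, m_l, Lambda_l):
    """Posterior N(m, C) from a Gaussian prior N(m_p, C_p) and a measurement
    likelihood given by mean m_l and information matrix Lambda_l = C_l^{-1}
    (possibly rank-deficient, e.g. zero information along the viewing ray).

    Information-form update:
        C^{-1}   = C_p^{-1} + Lambda_l
        C^{-1} m = C_p^{-1} m_p + Lambda_l m_l
    """
    Lambda_p = np.linalg.inv(C_p)
    C = np.linalg.inv(Lambda_p + Lambda_l)
    m = C @ (Lambda_p @ m_p + Lambda_l @ m_l)
    return m, C
```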

19 Triangulation (3D Model Building) Instead of using multiple rays that pass through the image point, as in single line tracking, a probability distribution is used.

20 Combining Tracking and Model Building A rigid body has 6 degrees of freedom corresponding to its Euclidean position in space (3 translations and 3 rotations). A wireframe of P/2 points has a P-dimensional vector of vertex image positions.

21 Model-based 2D Tracking The velocity of an image point, for a normalized camera moving with translational velocity U and rotating with angular velocity ω about its optical center, is a linear function of U and ω whose coefficients depend on the depth Z_c in camera coordinates and on the image coordinates (u, v).
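
The equation itself is not reproduced in the transcript; the sketch below uses the standard 2×6 interaction matrix for a normalised camera from the visual-servoing literature, which appears to be the relation intended (sign conventions vary between texts):

```python
import numpy as np

def image_jacobian(u, v, Z):
    """2x6 interaction matrix mapping the camera velocity (U, omega) to the
    image velocity (du, dv) of a point at depth Z projecting to (u, v)."""
    return np.array([
        [-1.0 / Z, 0.0,      u / Z, u * v,     -(1 + u * u),  v],
        [0.0,      -1.0 / Z, v / Z, 1 + v * v, -u * v,       -u],
    ])

# Example: image velocity for a given camera velocity (illustrative numbers).
U = np.array([0.1, 0.0, 0.0])        # translational velocity
omega = np.array([0.0, 0.02, 0.0])   # angular velocity
du_dv = image_jacobian(0.3, -0.1, 2.0) @ np.concatenate([U, omega])
```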

22 Model-based 2D Tracking Stacking the image point velocities of all vertices into a P-dimensional vector gives one vector v_i for each of the 6 rigid-body motions. Together the vectors v_i form a basis for the 6D subspace of Euclidean motions in P-space.
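
Under the same assumptions as the previous sketch, that stacked basis could be assembled as a P×6 matrix whose columns are the v_i; the function name and ordering are illustrative:

```python
import numpy as np

def euclidean_motion_basis(points_uvZ):
    """P x 6 matrix whose columns v_1..v_6 span the image motions produced by
    rigid (Euclidean) camera motion.  Rows come in (du, dv) pairs, one pair
    per tracked vertex (u, v) at depth Z."""
    rows = []
    for (u, v, Z) in points_uvZ:
        rows.append([-1.0 / Z, 0.0,      u / Z, u * v,     -(1 + u * u),  v])
        rows.append([0.0,      -1.0 / Z, v / Z, 1 + v * v, -u * v,       -u])
    return np.array(rows)
```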

23 Model-based 2D Tracking
Pros: a P-degree-of-freedom tracking problem is converted into a 6-degree-of-freedom one.
Cons: the accuracy of the model (and hence of the subspace of its Euclidean motions) is poor initially.
Conclusion: accumulate 3D information from observations and progressively apply stronger constraints.

24 Probabilistic 2D Tracking A second Kalman filter is used to apply weighted constraints to the 2D tracking. The constraints are encoded in a full P×P prior covariance matrix. A Euclidean motion constraint can be included through the choice of this prior covariance matrix.

25 Probabilistic 2D Tracking Writing the prior covariance in terms of the Euclidean motion basis vectors, and assuming the coefficients λ_i are independent, the variance of the image motion is large in the directions corresponding to Euclidean motion and zero in all other directions. The weights can be adjusted to vary the strength of the constraints.
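
One way such a prior covariance could be assembled, assuming V is the P×6 Euclidean-motion basis from the earlier sketch and the weights are the variances of the independent coefficients λ_i; a sketch, not the paper's code:

```python
import numpy as np

def euclidean_prior_covariance(V, weights):
    """Prior covariance C = sum_i w_i * v_i v_i^T built from the columns v_i
    of the P x 6 basis V.  Large weights permit large image motion along the
    rigid-motion directions; every other direction gets zero variance, which
    is what enforces the constraint."""
    V = np.asarray(V, float)
    w = np.asarray(weights, float)
    return (V * w) @ V.T        # equals sum_i w[i] * outer(V[:, i], V[:, i])
```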

26 Probabilistic 2D Tracking To combine tracking and model building, errors due to incorrect estimation of depth are permitted, weighted by the uncertainty in the depth of the 3D point. Only the components of image motion due to camera translation depend on depth.

27 Probabilistic 2D Tracking For a 1-standard-deviation error in the inverse depth, the resulting image motions can be computed for each vertex. Stacking these image point velocities into a P-dimensional vector gives a corresponding basis vector for the prior.

28 Probabilistic 2D Tracking Ignoring terms due to coupling between points, the depth variance for each point can be computed from its 3D pdf as σ²_Zc = uᵀ C u, where u is a unit vector along the optical axis and C is the 3D covariance matrix.
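
A direct transcription of that depth-variance formula, assuming the optical-axis direction is supplied as a 3-vector in the same frame as C:

```python
import numpy as np

def depth_variance(C, optical_axis):
    """sigma_Zc^2 = u^T C u: variance of a vertex's depth along the camera's
    optical axis, taken from its 3x3 position covariance C."""
    u = np.asarray(optical_axis, float)
    u = u / np.linalg.norm(u)
    return u @ C @ u
```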

29 Probabilistic 2D Tracking The final form of the prior covariance matrix allows image motion due to Euclidean motion of the vertices in 3D, and also due to errors in the depth estimates of those vertices.

30 Basic Ideas 1. Wireframe geometry specification via user input. Can occur at any stage, allowing objects to be reconstructed in parts.

31 Basic Ideas 2. 2D tracking Kalman filter. Takes edge measurements and updates a pdf for the vertex image positions. Maintains a full P×P covariance matrix for the image positions.

32 Basic Ideas 3. 3D position Kalman filter. Takes the known camera and the estimated vertex image positions, and updates a pdf for the 3D vertex positions. Maintains separate 3×3 covariance matrices for the 3D positions.

33 Algorithm Flow Combined tracking and model building algorithm. 3D position updates are performed intermittently.

34 Results Real-time tracking and 3D reconstruction of a church image.

35 Results ME block: constructed in 2 stages exploiting weighted model-based tracking constraints.

36 Results Propagation of 3D pdfs. Evolution of the model from an initial planar hypothesis.

37 Results Objects reconstructed using the Model Acquisition System, with surfaces identified by hand. Computer-generated image using the reconstructed objects.

38 Thanks! Q&A. Happy Thanksgiving!

