
1 Structure and Motion from Line Segments in Multiple Images Camillo J. Taylor, David J. Kriegman Presented by David Lariviere

2 Primary Goal Given a series of images with known corresponding line segments, calculate the relative locations of the cameras imaging the scene and the three-dimensional locations of the line segments.

3 Some Previous Work –(1981) Longuet-Higgins. “A computer algorithm for reconstructing a scene from two projections.” –(1990) Vieville. “Estimation of 3D-motion and structure from tracking 2D-lines in a sequence of images.” –(1992) Tomasi, Kanade. “Shape and motion from image streams under orthography.”

4 Problem Characterization Instead of using generalized scenes and points, focus on rigid scenes with clear edges as features. Advantages of lines as features: –They occur frequently in man-made environments. –They are easily located and tracked. –They can be localized more accurately than points, because many edge pixels corroborate each line.

5 Algorithm Overview Determine a non-linear objective function whose minimization leads to an estimate of scene structure. In this case, estimate 3D camera locations/orientations and locations of line segments in 3D, and then reproject the lines onto the estimated image planes. The difference between the predicted projected lines and the actually observed lines is the error function to minimize.

6 Objective Function Notation: –p_i: the i-th 3D line. –q_j: the j-th camera position/orientation. –u_ij: observed edge i in image j. –m: number of images; n: number of lines. –F: reprojection error of line p_i onto the image plane of camera q_j.
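As a sketch of that objective, the code below sums a per-edge error F over all (line i, image j) pairs. The camera convention (a world point x maps to camera coordinates R @ (x - t)) and every helper name here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def project_line(v, d, R, t):
    """Predict the image-plane line normal m for 3D line (v, d) seen by camera (R, t).

    Assumed convention: a world point x maps to camera coordinates R @ (x - t).
    The plane through the camera center and the 3D line has a normal proportional
    to the cross product of a point on the line and the line direction, both
    expressed in camera coordinates.
    """
    v_cam = R @ v                # line direction in the camera frame
    p_cam = R @ (d - t)          # a point on the line in the camera frame
    m = np.cross(p_cam, v_cam)   # normal of the projection plane
    return m / np.linalg.norm(m)

def objective(lines, cameras, observations, edge_error):
    """Double sum of reprojection errors F over all (line i, image j) pairs.

    lines        : {i: (v_i, d_i)}
    cameras      : {j: (R_j, t_j)}
    observations : {(i, j): u_ij}  -- observed edge i in image j
    edge_error   : F(u_ij, m_ij), the per-edge error to accumulate
    """
    total = 0.0
    for (i, j), u_ij in observations.items():
        v, d = lines[i]
        R, t = cameras[j]
        total += edge_error(u_ij, project_line(v, d, R, t))
    return total
```

The predicted normal m is perpendicular to the line's camera-frame direction by construction, which is what later constraint equations exploit.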

7 Notation – Line Representation Represent a line in 3D space by (v, d): –v: unit vector pointing in the direction of the line. –d: vector from the origin to the closest point on the line. –m: normal vector of the plane defined by the camera center and the line. The edge in the image plane is defined by m_x x + m_y y + m_z = 0.
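A minimal sketch of building this (v, d) representation from two points on a line (the helper name is hypothetical); note that forcing d perpendicular to v is what reduces the line to 4 degrees of freedom:

```python
import numpy as np

def line_from_points(p1, p2):
    """Build the (v, d) line representation from two distinct 3D points.

    v is the unit direction; d is the point on the line closest to the
    origin, obtained by removing p1's component along v, so d @ v == 0.
    """
    v = p2 - p1
    v = v / np.linalg.norm(v)      # unit direction vector
    d = p1 - (p1 @ v) * v          # closest point on the line to the origin
    return v, d
```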

8 Notation – Reference Frames Relate location/orientation of each camera to some world base frame.

9 Summary of Parameters Camera Location (t_j): 3 DOF. Camera Orientation (R_j): 3 DOF. Line Location/Orientation (v_i, d_i): 4 DOF per line. Requires at least 6 edge correspondences across 3 images.
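The six-lines/three-images minimum can be checked with a parameter count (a sketch; it assumes a 7-DOF similarity gauge — rotation, translation, and scale — is fixed, and that each observed 2D edge supplies two constraints):

```latex
% Unknowns: 6 per camera, 4 per line, minus the 7-DOF similarity gauge.
% Measurements: each of the mn observed edges constrains 2 parameters.
2mn \;\ge\; 6m + 4n - 7
% For m = 3 images: 6n \ge 18 + 4n - 7, i.e. 2n \ge 11, so n \ge 6 lines.
```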

10 Reprojection Error Visible endpoints (x1, y1) and (x2, y2). Integrate the perpendicular distance between the observed and predicted lines over the interval between the endpoints. Normalize the error by dividing by the length of the observed edge.
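A hedged sketch of this error term: since the point-to-line distance varies linearly along the observed segment, integrating its square and normalizing by the segment length has the closed form (h1² + h1·h2 + h2²)/3, where h1 and h2 are the endpoint distances. The squared-distance choice and the function name are assumptions here:

```python
import numpy as np

def reprojection_error(m, x1, x2):
    """Length-normalized integrated squared distance from an observed edge
    to the predicted line m_x*x + m_y*y + m_z = 0.

    m      : predicted line coefficients (m_x, m_y, m_z)
    x1, x2 : observed endpoints (x, y)
    """
    scale = np.hypot(m[0], m[1])                       # normalize the line normal
    h1 = (m[0] * x1[0] + m[1] * x1[1] + m[2]) / scale  # endpoint 1 distance
    h2 = (m[0] * x2[0] + m[1] * x2[1] + m[2]) / scale  # endpoint 2 distance
    # Distance varies linearly between endpoints, so the normalized integral
    # of its square is (h1^2 + h1*h2 + h2^2) / 3.
    return (h1 * h1 + h1 * h2 + h2 * h2) / 3.0
```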

11 Algorithm Primary algorithm for minimizing the non-linear objective: minimize the line reprojection error through gradient descent to find a local minimum: –Randomly generate initial values. –Iteratively follow the function along the steepest descent to reach a local minimum. –If the local minimum's error is below a certain threshold, accept it. Else, generate new initial values and try again. The quality of the initial values heavily influences the number of iterations required before the function converges.
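The restart loop above can be sketched as follows (the step size, iteration limits, and function names are hypothetical; the paper does not specify them):

```python
import numpy as np

def minimize_with_restarts(f, grad_f, sample_init, threshold,
                           step=1e-2, iters=500, max_restarts=20):
    """Gradient descent with random restarts.

    f           : objective to minimize
    grad_f      : its gradient
    sample_init : draws a random initial parameter vector
    threshold   : accept the first local minimum whose error falls below this
    """
    best_x, best_val = None, np.inf
    for _ in range(max_restarts):
        x = sample_init()                 # randomly generate initial values
        for _ in range(iters):
            x = x - step * grad_f(x)      # follow the steepest descent
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
        if best_val < threshold:          # accept; else restart with new values
            break
    return best_x, best_val
```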

12 Initial Value Estimation To decrease computational cost, additional steps are added to acquire acceptable starting values for gradient descent: (1) The user inputs a range for the camera orientations (R_j), and values of R_j within that range are randomly chosen. (2) Holding the estimates from (1) constant, estimate v_i subject to a constraint equation. (3) Improve the estimate from (2) by minimizing the same constraint equation with both v_i and R_j as free parameters. (4) Generate initial estimates of d_i and t_j using a second constraint equation. (5) Provide the estimates from (3) and (4) as starting values for gradient descent.

13 Constraint Equations From the defined relations between the line parameters (v_i, d_i), the camera frames (R_j, t_j), and the observed normals m_ij, one can derive two constraint equations: m_ij must be perpendicular both to the line's direction and to the displacement from the camera center to the line, each expressed in camera coordinates.
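A hedged reconstruction of those two constraints (the exact sign and transform convention depends on the paper's camera model; here a world point x is assumed to map to camera coordinates R_j(x − t_j)):

```latex
% m_ij is normal to the plane through camera j's center and line i, so it is
% perpendicular to the line direction and to any point on the line, both
% expressed in camera coordinates:
m_{ij}^{\top}\,\bigl(R_j\,v_i\bigr) = 0
\qquad
m_{ij}^{\top}\,R_j\,\bigl(d_i - t_j\bigr) = 0
```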

14 Results (1) Simulation results: measuring tolerance to noise, the returns from increasing the number of images/features, and the rate of convergence of the global minimization; comparing the proposed method to previous linear methods. (2) Real-world results.

15 Simulation Results Main results: –The algorithm is much more sensitive to errors in edge endpoints than to errors in the calibrated camera center. –Holding the maximum baseline constant, increasing the number of images beyond 6 or the number of lines beyond 50 does not improve accuracy. –A small number of large-baseline images is superior to many small-baseline images. –The rate of convergence of the global descent minimization algorithm is highly dependent on the initial range of theta.

16 Simulation Results Continued

17 Comparison to Linear Method This method is significantly less sensitive to noise than the leading linear algorithm [1]. [1] J. Weng, Y. Liu, T. S. Huang, and N. Ahuja, “Estimating motion/structure from line correspondences.”

18 Real-world Results

19 Real-world Results…

20 Real-world Results: Hallway

21 Discussion Initial-estimation optimizations improve calculation speed. The algorithm is very insensitive to noise. Future improvements: –Automate edge-correspondence tracking by using video. –Impose edge-intersection and other geometric restrictions (coplanarity, parallelism, etc.).

22 Modeling and Rendering Architecture from Photographs: A hybrid geometry- and image-based approach Paul E. Debevec, Camillo J. Taylor, Jitendra Malik

23 Overview Apply the previous paper’s methods to modeling architectural scenes with restricted geometry. Utilize model-based stereo to extract precise geometry from a sparse set of large-baseline photographs. Utilize 3D models and view-dependent photographs to construct photorealistic computer-generated views.

24 Architectural Models: Blocks The user starts by choosing geometric primitives (blocks) to represent the basic geometry of the building. Block: “hierarchical model of a parametric polyhedral primitive” –Parametrized by a base vertex P_o and various other properties (width, height, length, etc.).

25 Block Relations A hierarchy of blocks is used to describe the various geometric primitives that make up the basic architecture. The user manually maps corresponding edges in the images to the edges of the blocks. Blocks are related by constraints on their relative location and orientation: –For example, ensure that the bottom of one block sits on the top of another block. Block parameters are stored symbolically, meaning that if one specifies a series of blocks to be parallel, only one variable is used to enforce this restriction across all of them. g_i(X): rigid transformation mapping one block to an adjacent block. P_w(X): block vertex in world coordinates. v_w(X): line orientation in world coordinates.
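The symbolic-parameter idea can be sketched as follows (the class and field names are hypothetical, not from the paper): two blocks that share a parameter object are automatically constrained to stay equal, so one variable governs both.

```python
from dataclasses import dataclass, field

@dataclass
class Symbol:
    """A named model parameter; blocks that share a Symbol share its value."""
    name: str
    value: float

@dataclass
class Block:
    """A parametric polyhedral primitive (sketch of the paper's idea).

    Dimensions are stored symbolically: two blocks constructed with the same
    width Symbol remain equally wide, enforced by a single variable.
    """
    width: Symbol
    height: Symbol
    length: Symbol
    children: list = field(default_factory=list)  # hierarchy of sub-blocks

# Two wings share one width variable: changing it updates both blocks.
w = Symbol("wing_width", 4.0)
left = Block(w, Symbol("h1", 10.0), Symbol("l1", 6.0))
right = Block(w, Symbol("h2", 12.0), Symbol("l2", 6.0))
w.value = 5.0
```

This is also why the minimization has far fewer free parameters than one-variable-per-edge formulations.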

26 Block Relations Continued…

27 Advantages of Blocks They model most architectural scenes well. They implicitly contain features commonly found in architecture (e.g., parallel edges, right angles). Manipulation by the user is easier due to the reduced number of parameters. Surfaces are pre-defined by the model, removing the need to calculate them from edges. The number of parameters is greatly reduced when minimizing the cost function.

28 Single Image Examples:

29 Estimation of 3D Structure Very similar to the previous paper: estimate the parameters of the cameras (R, t) and edges (v, d) which minimize the reprojection error. Differences: –Many edges are defined in relation to one another, meaning fewer variables. –Horizontal/vertical constraints on v_i are applied to estimate R_j more accurately. –Instead of gradient descent, the authors use the Newton-Raphson method to minimize the non-linear error function.
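A generic Newton-Raphson minimization step, for contrast with plain gradient descent (a sketch, not the paper's solver; it assumes the Hessian is available and invertible):

```python
import numpy as np

def newton_minimize(grad, hess, x0, iters=20):
    """Newton-Raphson minimization: solve H(x) dx = -g(x) at each step.

    Converges in one step on a quadratic objective, versus the many small
    steps gradient descent takes along the same error surface.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - np.linalg.solve(hess(x), grad(x))
    return x
```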

30 View-Dependent Texture Mapping Once the camera and edge locations/orientations are known, project the images onto the block models. If multiple images of the same area exist, apply weighted averaging to fuse them. –Weights are inversely proportional to the difference in angle between the virtual view being synthesized and the camera location/orientation that took the particular image. It is possible to divide planes into faces and calculate the weighted average once per face, applying it to the entire face.
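The inverse-angle weighting can be sketched like this (the function name and the small epsilon guarding division by zero are assumptions for illustration):

```python
import numpy as np

def blend_weights(virtual_dir, camera_dirs):
    """Blending weights for fusing multiple photographs of one surface point.

    Each weight is inversely proportional to the angle between the virtual
    viewing direction and the direction of the camera that took the image;
    a small epsilon keeps the weight finite when the angles coincide.
    """
    eps = 1e-6
    v = virtual_dir / np.linalg.norm(virtual_dir)
    angles = [np.arccos(np.clip(v @ (c / np.linalg.norm(c)), -1.0, 1.0))
              for c in camera_dirs]
    w = np.array([1.0 / (a + eps) for a in angles])
    return w / w.sum()            # normalize so the weights sum to 1
```

Cameras closest in direction to the synthesized view dominate the blend, which is what makes the textures view-dependent.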

31 Example of Texture-Mapping

32 Model-based Stereopsis Use the known scene geometry and camera locations to rectify large-baseline images before performing stereo. This avoids severe foreshortening problems, which can be very large when images are taken far apart. The epipolar constraint is maintained by projecting the offset image onto the model and then reprojecting it onto the key image’s image plane, creating a rectified image for use in stereopsis.

33 Model-based Stereopsis Example

34 Discussion For architectural scenes that generally fit the allowed geometric primitives, the approach works quite well. Possible future improvements: –Additional models: surfaces of revolution. –Estimate the BRDF. –Devise a method of selecting the best images to use for rendering novel views.

35 Questions?

