1 Panoramas

2 Creating Full View Panoramic Image Mosaics and Environment Maps
Richard Szeliski and Heung-Yeung Shum, Microsoft Research

3 Outline
Main Contributions, Introduction, Details, Results

4 Contributions
A novel approach to creating full-view panoramic mosaics from image sequences.
Not limited to pure horizontal camera panning; no controlled motions or constraints are required.
Image mosaics are represented by a set of transforms.
Fast and robust.
A method to recover the camera focal length.
Extraction of an environment map from the image mosaic.

5 Introduction
Image-based rendering (IBR): realistic rendering without geometry models.
IBR without depth information only supports user panning, rotation, and zoom.
Examples: QuickTime VR, Surround Video.
Representations: cylindrical images, spherical maps.

6 Introduction
Capturing panoramic images with hardware-intensive methods:
Use a panoramic camera to capture a cylindrical image directly.
Use a lens with a large field of view (fisheye lens).
Use mirrored pyramids or parabolic mirrors.
Alternative: take a series of regular pictures (or video) and stitch them together.
Traditionally this requires carefully controlled camera motion and produces only cylindrical images.

7 Novel Algorithms
A 3-parameter rotational motion model instead of the general 8-parameter planar perspective motion model: fewer unknowns, more robust.
The focal length is estimated from a set of 8-parameter perspective registrations.
Gap closing.

8 Cylindrical Panoramas
A cylindrical panorama is easy to construct: a coordinate transformation maps 3D points (X, Y, Z) onto the cylindrical image.

9 Cylindrical Panorama
With an ideal pinhole camera and a known focal length f, each image can be warped into cylindrical coordinates.
Distortion: horizontal lines become curved.
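A minimal sketch of the cylindrical warp, assuming the standard mapping x' = f * atan(x/f), y' = f * y / sqrt(x^2 + f^2) with pixel coordinates measured from the image center (the exact formulas are not reproduced in this transcript):

```python
import numpy as np

def warp_to_cylinder(image, f):
    """Warp a pinhole image onto a cylinder of radius f.

    Assumes x' = f*atan(x/f), y' = f*y/sqrt(x^2 + f^2),
    with coordinates measured from the image center.
    """
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    out = np.zeros_like(image)

    # For each output (cylindrical) pixel, find the source (planar) pixel.
    yc, xc = np.indices((h, w), dtype=np.float64)
    theta = (xc - cx) / f          # angle on the cylinder
    hgt = (yc - cy) / f            # scaled height on the cylinder

    # Inverse mapping back to the planar image.
    xs = np.tan(theta) * f + cx
    ys = hgt * f / np.cos(theta) + cy

    # Nearest-neighbour resampling; bilinear interpolation would be used in practice.
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    out[yc[valid].astype(int), xc[valid].astype(int)] = \
        image[ys[valid].astype(int), xs[valid].astype(int)]
    return out
```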

10 Spherical Panorama
A 3D point (X, Y, Z) is mapped onto the sphere analogously.

11 Motion Model
Warp each image to cylindrical coordinates.
For an ideal horizontal panning sequence, camera rotation becomes a pure translation in the angular coordinate.
In practice, a small vertical translation is also estimated (on the warped images) to compensate for vertical jitter and optical twist.

12 Motion Recovery
Estimate the incremental translation dt by minimizing the intensity error
E(dt) = sum_i [ I_1(x'_i + dt) - I_0(x_i) ]^2.
Using a first-order Taylor series expansion,
E(dt) ~ sum_i [ g_i^T dt + e_i ]^2,
where e_i = I_1(x'_i) - I_0(x_i) is the current intensity (or color) error and g_i is the image gradient of I_1 at position x'_i.

13 Motion Recovery
Minimizing E(dt) leads to a least-squares solution: the normal equations (sum_i g_i g_i^T) dt = -sum_i e_i g_i.
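A small sketch of one least-squares step, assuming grayscale float images I0 and I1 that have already been warped into rough alignment:

```python
import numpy as np

def incremental_translation(I0, I1):
    """One Gauss-Newton step for translational alignment of I1 to I0.

    Solves the normal equations (sum g g^T) dt = -sum e g, where e is the
    per-pixel intensity error and g is the gradient of I1.
    """
    gy, gx = np.gradient(I1.astype(np.float64))   # image gradients of I1
    e = (I1 - I0).astype(np.float64)              # current intensity error

    # Accumulate the 2x2 normal-equation matrix and the 2-vector.
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(e * gx), np.sum(e * gy)])

    # Incremental translation (dx, dy); in practice this step is iterated
    # inside a coarse-to-fine pyramid.
    return np.linalg.solve(A, b)
```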

14 Motion Recovery
To handle large initial displacements, a coarse-to-fine optimization scheme is used [J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani, Hierarchical Model-Based Motion Estimation].

15 Motion Recovery
Discontinuities in intensity or color can remain between the images being composited.
Feathering algorithm: each pixel is weighted by its distance-map value (distance to the boundary of its valid region) before blending.
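A sketch of distance-map feathering, assuming the images have already been warped onto the composite surface along with binary masks of their footprints (RGB inputs assumed):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_blend(warped_images, masks):
    """Blend overlapping warped images using distance-map feathering.

    Each image is weighted by the distance of every pixel to the boundary of
    its valid region, so contributions fade out smoothly near the seams.
    """
    h, w = masks[0].shape
    acc = np.zeros((h, w, 3), dtype=np.float64)
    wsum = np.zeros((h, w), dtype=np.float64)

    for img, mask in zip(warped_images, masks):
        # Distance (in pixels) to the nearest point outside the footprint.
        weight = distance_transform_edt(mask)
        acc += img.astype(np.float64) * weight[..., None]
        wsum += weight

    wsum[wsum == 0] = 1.0          # avoid division by zero where nothing maps
    return (acc / wsum[..., None]).astype(np.uint8)
```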

16 Limitations of Cylindrical or Spherical Panoramas
Only handle pure panning motion.
Poor sampling at the north and south poles causes large registration errors.
Require knowing the focal length f, and focal length estimated by registering images is not very accurate.

17 Perspective Panoramas
A planar perspective transform between images uses 8 parameters:
x' ~ M x, with M = [[m0, m1, m2], [m3, m4, m5], [m6, m7, 1]].
For example, with a pure translation only m2 and m5 are unknown.

18 Perspective Panoramas
Iteratively update the transform matrix using M <- (I + D) M, where D is an incremental deformation matrix.
Resample image I_1 with x'' = (I + D) M x to obtain the warped image used in the next error computation.

19 Perspective Panoramas
Minimize the intensity error
E(d) = sum_i [ I_1(x''_i) - I_0(x_i) ]^2 ~ sum_i [ g_i^T J_i d + e_i ]^2,
where d is the vector of incremental parameters and J_i is the Jacobian of the resampled point x''_i with respect to d.

20 Perspective Panoramas
Least-squares minimization gives the normal equations A d = -b, where
A = sum_i J_i^T g_i g_i^T J_i is the (Gauss-Newton) Hessian matrix and
b = sum_i e_i J_i^T g_i is the accumulated gradient or residual.
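A sketch of one Gauss-Newton step for the 8-parameter update, assuming grayscale images, that I_1 has already been resampled at y = M x, and that (y_x, y_y) hold the warped coordinates of every pixel; the Jacobian is linearized around D = I:

```python
import numpy as np

def perspective_update(I0, I1_warped, y_x, y_y):
    """One Gauss-Newton step for 8-parameter perspective registration (sketch)."""
    gy, gx = np.gradient(I1_warped.astype(np.float64))
    e = (I1_warped - I0).astype(np.float64).ravel()

    yx, yy = y_x.ravel(), y_y.ravel()
    gx, gy = gx.ravel(), gy.ravel()
    ones, zeros = np.ones_like(yx), np.zeros_like(yx)

    # Jacobians of x'' with respect to d = (d0..d7), evaluated at d = 0.
    Jx = np.stack([yx, yy, ones, zeros, zeros, zeros, -yx * yx, -yx * yy], axis=1)
    Jy = np.stack([zeros, zeros, zeros, yx, yy, ones, -yx * yy, -yy * yy], axis=1)

    # Chain rule: per-pixel gradient of the residual with respect to d.
    G = gx[:, None] * Jx + gy[:, None] * Jy        # N x 8

    A = G.T @ G                                    # accumulated Hessian
    b = G.T @ e                                    # accumulated residual
    d = np.linalg.solve(A, -b)                     # incremental parameters

    D = np.array([[1 + d[0], d[1], d[2]],
                  [d[3], 1 + d[4], d[5]],
                  [d[6], d[7], 1.0]])
    return D                                       # new estimate: M <- D @ M
```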

21 Perspective Panoramas
Works well if the initial estimate of the transformation is close enough.
Drawbacks: slow convergence and the risk of getting stuck in local minima.

22 Rotational Panoramas
All cameras are centered at the origin.
For simplicity of the rotation model, set c_x = c_y = 0, i.e. pixel coordinates are measured from the image center.

23 Rotational Panoramas
The camera rotates around its center of projection.
The focal length is known and the same for all images: V_k = V_l = V.
The incremental motion is then parameterized by an angular velocity (rotation) vector.

24 Rotational Panoramas
Incremental rotation matrix (Rodrigues' formula): R(n, theta) = I + sin(theta) X(n) + (1 - cos(theta)) X(n)^2, where X(n) is the cross-product (skew-symmetric) matrix of the rotation axis n.
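A direct implementation of Rodrigues' formula for the incremental rotation, with the rotation given as a 3-vector omega whose norm is the rotation angle:

```python
import numpy as np

def incremental_rotation(omega):
    """Incremental rotation matrix from a rotation vector via Rodrigues' formula.

    theta = |omega|, n = omega / theta, and
    R = I + sin(theta) X(n) + (1 - cos(theta)) X(n)^2.
    """
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)                       # no rotation
    n = omega / theta
    X = np.array([[0.0, -n[2], n[1]],          # cross-product (skew) matrix
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return np.eye(3) + np.sin(theta) * X + (1.0 - np.cos(theta)) * (X @ X)

# Example: the rotation estimate is updated as R_k <- incremental_rotation(omega) @ R_k
```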

25 Rotational Panoramas
For a small rotation omega, the deformation matrix becomes D ~ V X(omega) V^{-1}, and the Jacobian of the resampled point x'' with respect to omega follows by the chain rule.

26 Rotational Panoramas
Only 3 parameters: the incremental rotation vector omega.
Update R_k as R_k <- R(omega) R_k.
Much easier and more intuitive to adjust interactively.

27 Estimate Focal Length
The focal lengths f_0 and f_1 can be computed directly from the entries of the 8-parameter perspective transform M, since M ~ V_1 R V_0^{-1} with V_k = diag(f_k, f_k, 1).

28 Estimate Focal Length
If the focal length is fixed, take the average of f_0 and f_1.
If there are multiple estimates (one per image pair), use the median value for the final estimate.
The focal length can also be updated as part of the image registration process using a least-squares approach.
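A hedged sketch of reading a focal length off an 8-parameter homography; the closed-form expressions below are derived from the orthonormality of the rotation's rows in M ~ V_1 R V_0^{-1} (they are a reconstruction under that assumption, not copied from the slides):

```python
import numpy as np

def focal_from_homography(M):
    """Estimate a focal length from a planar perspective transform.

    Assumes M ~ V1 * R * V0^{-1} with V_k = diag(f_k, f_k, 1) and M scaled so
    that M[2, 2] = 1. The second expression is used when the first one is
    numerically degenerate.
    """
    m0, m1, m2 = M[0]
    m3, m4, m5 = M[1]

    denom = m0 ** 2 + m1 ** 2 - m3 ** 2 - m4 ** 2
    if abs(denom) > 1e-8:
        f_sq = (m5 ** 2 - m2 ** 2) / denom
    else:
        f_sq = -m2 * m5 / (m0 * m3 + m1 * m4)

    if f_sq <= 0:
        raise ValueError("degenerate homography for focal length estimation")
    return np.sqrt(f_sq)

# With several pairwise registrations, average f_0 and f_1 for a fixed focal
# length, or take the median over all pairs, as described on the slide.
```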

29 Closing the Gap in a Panorama
Match the first image against the last one and compute the gap angle.
Distribute the gap angle evenly across the whole sequence by slightly modifying each rotation.
Update the focal length accordingly.
This only works for a 1D panorama where the camera turns continuously in the same direction.
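A rough sketch of the even distribution of the gap angle, assuming the camera pans about the vertical (y) axis and that the gap angle has already been measured by registering the last image against the first; the axis choice is an assumption:

```python
import numpy as np

def rot_y(angle):
    """Rotation about the y (vertical) axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def close_gap(rotations, gap_angle):
    """Distribute the measured gap angle evenly over the whole sequence.

    rotations: list of 3x3 rotation matrices R_0 .. R_{n-1} from registration.
    gap_angle: angular mismatch (radians) between the last and first image.
    """
    n = len(rotations)
    corrected = []
    for k, R in enumerate(rotations):
        # Image k receives a fraction k/(n-1) of the total correction.
        corrected.append(rot_y(gap_angle * k / (n - 1)) @ R)
    return corrected
```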

30 Conclusion
Places no constraints on how the images are taken; hand-held cameras work.
Accurate and robust: only 3 rotation parameters are estimated instead of the 8 parameters of a general perspective transform, which increases accuracy, flexibility, and ease of use.
Includes focal length estimation.

31 Results

32 Photographing Long Scenes with Multi-Viewpoint Panoramas

33 Abstract
Multi-viewpoint panoramas of long, roughly planar scenes, e.g. the façades of buildings along a city street.
The panoramas are composed of relatively large regions of ordinary perspective.
User interaction identifies the dominant plane and provides strokes indicating various high-level goals.
The composition is computed with a Markov Random Field optimization.

34 Introduction
Long scenes are hard to photograph from a single point; a wider field of view introduces large distortion.
Single-perspective photographs are not very effective at conveying long scenes such as a street side in a city, the bank of a river, or the aisle of a grocery store.

35 Introduction
Capture: walk along the opposite side of the scene and take handheld photographs at intervals of one large step (roughly one meter).
Output: a single panorama that visualizes the entire extent of the scene captured in the input photographs and resembles what a person would see when walking along the street.

36 Contributions
A practical approach to creating high-quality, high-resolution, multi-viewpoint panoramas with a simple and casual capture method.
A number of novel techniques, including an objective function that describes the desirable properties of a multi-viewpoint panorama, and a technique for propagating user-drawn strokes that annotate 3D objects in the scene.

37 Related Work
Single-viewpoint panoramas: rotating a camera around its optical center.
Strip panoramas: a translating camera; orthographic projection along the horizontal axis, perspective along the vertical.
Variants vary the strip width using depth estimation or appearance optimization, or rely on high-speed video cameras and special setups.

38 Strip Panoramas
Exhibit distortion for scenes with varying depths, especially when the depth variations occur across the vertical axis of the image.
Are created from video sequences; still images extracted from video rarely match the quality of those captured by a still camera (low resolution, compression artifacts).
Capturing a suitable video can be cumbersome.

39 Approach
Inspired by the work of artist Michael Koller: multi-viewpoint panoramas of San Francisco streets, built from large regions of ordinary perspective photographs artfully seamed together to hide the transitions; attractive and informative.

40 Multi-Viewpoint Panoramas
Each object in the scene is rendered from a viewpoint roughly in front of it to avoid perspective distortion. The panoramas are composed of large regions of linear perspective seen from a viewpoint where a person would naturally stand (for example, a city block is viewed from across the street, rather than from some faraway viewpoint). Local perspective effects are evident; objects closer to the image plane are larger than objects further away, and multiple vanishing points can be seen. The seams between these perspective regions do not draw attention; that is, the image appears natural and continuous.

41 Properties
Assumes a dominant plane in the scene.
Does not attempt to create multi-viewpoint panoramas that turn around street corners or show all four sides of a building.

42 System Overview
Pre-processing: takes the source images, removes radial distortion, recovers the camera projection matrices, and compensates for exposure variation.
Picture surface: the user defines the picture surface on which the panorama is formed; the source photographs are then projected onto this surface.
Composition: the system selects a viewpoint for each pixel in the output panorama using an optimization; the user can optionally refine the result interactively by drawing strokes that express various types of constraints, which are used during additional iterations of the optimization.

43 Capture Images
Use a digital SLR camera with auto-focus, and manually control the exposure to avoid large exposure shifts.
For some datasets, use a fisheye lens to ensure a wide field of view.

44 Pre-Processing
Recover the projection matrix of each camera so that the source images can later be projected onto a picture surface.
Use a structure-from-motion system [Hartley and Zisserman 2004] built by Snavely et al. [2006], with SIFT for keypoint detection and matching, followed by bundle adjustment.
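A minimal sketch of the keypoint detection and matching front end using OpenCV; the original system uses Snavely et al.'s full SfM pipeline, and the ratio-test threshold here is an assumption:

```python
import cv2

def match_sift(img_path_a, img_path_b, ratio=0.75):
    """Detect SIFT keypoints in two images and return ratio-test matches."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kps_a, desc_a = sift.detectAndCompute(img_a, None)
    kps_b, desc_b = sift.detectAndCompute(img_b, None)

    # Brute-force matching with Lowe's ratio test to reject ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]

    # Matched pixel coordinates, ready to feed into bundle adjustment.
    pts_a = [kps_a[m.queryIdx].pt for m in good]
    pts_b = [kps_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b
```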

45 Exposure Compensation
Adjust the exposure of the various photographs so that they match better in overlapping regions.
One option: recover the radiometric response function of each photograph [Mitsunaga and Nayar 1999].
Simpler approach: solve a least-squares problem over all pairs of SIFT matches.
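A sketch of the simpler least-squares idea, assuming a single multiplicative gain per image (the slide does not spell out the paper's exact correction model): each SIFT match between images i and j contributes a constraint g_i * c_i = g_j * c_j, giving a homogeneous system solved up to scale.

```python
import numpy as np

def estimate_gains(matches, n_images):
    """Estimate one multiplicative gain per image from matched colors.

    matches: list of (i, j, c_i, c_j) tuples, where c_i and c_j are the mean
    intensities of a SIFT match observed in images i and j.
    Solves g_i * c_i - g_j * c_j = 0 in the least-squares sense.
    """
    rows = []
    for i, j, c_i, c_j in matches:
        row = np.zeros(n_images)
        row[i], row[j] = c_i, -c_j
        rows.append(row)
    A = np.array(rows)

    # The smallest right singular vector gives the nontrivial solution.
    _, _, vt = np.linalg.svd(A)
    g = vt[-1]
    g *= np.sign(g.sum())          # make the gains positive
    return g / g.mean()            # normalize so the average gain is 1
```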

46 Picture Surface Selection
The picture surface should be roughly aligned with the dominant plane of the scene.
It is defined as a curve that is extruded in the y direction.

47 Picture Surface Selection
First, define the coordinate system of the recovered 3D scene.
Automatic: fit a plane to the camera viewpoints using principal component analysis; the dimension of greatest variation (the first principal component) becomes the new x-axis, and the dimension of least variation the new y-axis.
Interactive: the user selects a few projected scene points that lie along the desired axes.
Then draw the curve in the xz plane that defines the picture surface.
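A small sketch of the automatic coordinate-frame choice, assuming the recovered camera centers are available as an N x 3 array; PCA here is just an eigen-decomposition of the covariance of the camera positions:

```python
import numpy as np

def scene_axes_from_cameras(camera_centers):
    """Choose scene axes by PCA over the recovered camera positions.

    camera_centers: (N, 3) array of camera optical centers.
    Returns (x_axis, y_axis, z_axis): greatest variation -> x,
    least variation -> y, and z completes the frame.
    """
    centered = camera_centers - camera_centers.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

    x_axis = eigvecs[:, 2]                   # direction of greatest variation
    y_axis = eigvecs[:, 0]                   # direction of least variation
    z_axis = np.cross(x_axis, y_axis)        # completes the coordinate frame
    return x_axis, y_axis, z_axis / np.linalg.norm(z_axis)
```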

48 Picture Surface Selection
For street scenes, the picture surface is easy to identify; for a river bank it is hard to specify by drawing strokes.
Instead, the user selects clusters of scene points that should lie along the picture surface, and the system fits a third-degree polynomial z(x) to the z-coordinates of these 3D scene points as a function of their x-coordinates.
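The polynomial fit itself is essentially a one-liner; a sketch assuming the selected scene points are already expressed in the scene coordinate frame:

```python
import numpy as np

def fit_picture_surface(scene_points):
    """Fit z(x) as a cubic polynomial to user-selected 3D scene points.

    scene_points: (N, 3) array of (x, y, z) points chosen to lie on the surface.
    Returns a callable z(x) defining the picture-surface curve in the xz plane.
    """
    x, z = scene_points[:, 0], scene_points[:, 2]
    coeffs = np.polyfit(x, z, deg=3)     # third-degree polynomial z(x)
    return np.poly1d(coeffs)

# The picture surface is this curve extruded along the y (vertical) direction.
```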

49 Sample Picture Surface
Project each picture-surface sample S(i, j) into the source photographs using the recovered camera matrices.
(Figure caption: one of the source images used to create the panorama in Figure 1, projected onto the picture surface shown in Figure 4 after a circular crop to remove poorly sampled areas at the edges of the fisheye image.)

50 Average Image
The average image of all the projected sources, and the average image after unwarping to straighten the ground plane and cropping.

51 Interactive Refinement
Small drifts can accumulate during structure-from-motion estimation and lead to ground planes that slowly curve.
The user clicks a few points along the average image to indicate y-values that should be warped straight; the system then unwarps, resamples, and crops the average image to straighten the ground plane.

52 Viewpoint Selection
How should the color of each panorama pixel p be chosen from one of the projected source images I_i(p)?
Determine a labeling L(p), where L(p) is the index of the source image used at pixel p.

53 Viewpoint Selection
Optimization over the labeling using a Markov Random Field (MRF).
The cost function favors a straight-on view for each pixel, encourages seamless transitions, and encourages the panorama to resemble the average image in areas where the scene geometry intersects the picture surface.

54 Viewpoint Selection
Minimize the overall cost function, which sums a term over each pixel and a term over each pair of neighboring pixels, using min-cut (graph-cut) optimization.
The panorama is first computed at a lower resolution so that the MRF optimization runs in reasonable time; higher-resolution versions are created using the hierarchical approach described by Agarwala et al. [2005].
The final panorama is composited in the gradient domain to smooth errors across the seams.
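A rough sketch of the MRF cost terms, assuming grayscale projected images and an x-coordinate per camera along the picture surface; the distance measures and weight `beta` are assumptions, and in practice the labeling L(p) would be found with a graph-cut (alpha-expansion) solver rather than exhaustive search:

```python
import numpy as np

def data_cost(pixel, label, camera_x, projected, average, beta=0.5):
    """Per-pixel term for assigning source image `label` to panorama pixel `pixel`.

    Sketches two of the slide's criteria: prefer a straight-on view (the chosen
    camera should sit roughly in front of this column of the picture surface)
    and resemblance to the average image.
    """
    y, x = pixel
    straight_on = abs(x - camera_x[label])
    resemblance = abs(float(projected[label][y, x]) - float(average[y, x]))
    return straight_on + beta * resemblance

def seam_cost(p, q, label_p, label_q, projected):
    """Pairwise term: color disagreement across a seam between two labels."""
    if label_p == label_q:
        return 0.0
    a, b = projected[label_p], projected[label_q]
    return abs(float(a[p]) - float(b[p])) + abs(float(a[q]) - float(b[q]))

# These unary and pairwise costs define the MRF over the labeling L(p);
# the min-cut optimization mentioned on the slide minimizes their sum.
```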

55 Viewpoint Selection
The resulting regions are not simple vertical strips.

56 Interactive Refinement
The user should be able to express desired changes to the panorama without tediously editing the exact seam locations.
Three types of strokes: view selection (use a certain viewpoint), seam suppression (no seam should pass through an object), and inpainting (eliminate undesirable features).

57 Interactive Refinement

58 Results
About 1 hour to capture the (roughly 100) images and 20 minutes of user interaction.
Not suitable for every scene, e.g. suburban scenes with a range of very different depths.
More results can be found at

