Slide 1: Last Time (03/09/05, © 2005 University of Wisconsin)
HDR Image Capture
Image Based Rendering
–Improved textures
–QuickTime VR
–View Morphing
NPR Papers: Just 2 days left
Projects: I would have expected to hear more from people by now

Slide 2: Today
Image Based Rendering

Slide 3: Plenoptic Modeling (McMillan and Bishop, 1995)
Input: Set of panoramic images gathered by panning a video camera about its vertical optical axis on a tripod
The algorithm:
–Determines the properties of each camera and registers the images associated with each pan
–Determines the relative locations of each camera
–Determines the depth of each point seen by multiple cameras
–Generates new views by reconstructing the plenoptic function from the available samples

Slide 4: Fitting Each Camera
Problem:
–Given several images all taken with the same camera rotated about its optical center
–Determine the camera parameters
–Determine the angle between each image
Approach:
–Multi-stage optimization
–First stage estimates angle between images and focal length
–Second stage refines and gets remaining parameters
When done, we can map a pixel in one image to its correct position in any other image
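To make the first optimization stage concrete, here is a minimal sketch (not the paper's actual optimizer) of the forward model it fits: images taken by a camera rotating about its optical center are related by the homography K R K⁻¹, so a candidate focal length and pan angle predict where each pixel should land in the neighboring image. The pinhole model, the pure vertical-axis rotation, and the function name are assumptions for illustration.

```python
import numpy as np

def rotate_view_pixel(x, y, f, theta, cx=0.0, cy=0.0):
    """Map pixel (x, y) from one image into another image taken by the same
    pinhole camera (focal length f, principal point (cx, cy)) after a pan of
    angle theta about its vertical axis: x' ~ K R K^-1 x."""
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])        # rotation about the vertical (y) axis
    H = K @ R @ np.linalg.inv(K)        # homography between the two views
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]     # back to inhomogeneous pixel coordinates

# A pixel near the image centre moves mostly sideways under a small pan:
print(rotate_view_pixel(10.0, 5.0, f=500.0, theta=np.radians(5.0)))
```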

Slide 5: Locating the Cameras
Given cylindrical projections (panoramic images) for two cameras
User identifies 100-500 tie-points (points seen in both images)
Each tie-point defines two rays – one from the center of each camera through the tie-point
–These rays should intersect at the world location for the point
Minimization step finds the camera locations and some other parameters that minimize the perpendicular distance between all the rays
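The paper solves for the camera locations and the tie-point geometry together; the sketch below shows only the inner geometric ingredient, assuming the camera centers are already fixed: the least-squares point that minimizes the summed perpendicular distance to a set of rays. Function names and the two-ray example are illustrative.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares 3D point closest to a set of rays.
    Each ray i has origin o_i and unit direction d_i; we minimize
    sum_i |(I - d_i d_i^T)(p - o_i)|^2, the summed squared perpendicular
    distances from p to the rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two rays that nearly intersect at (1, 1, 0):
origins    = [np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.1])]
directions = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(triangulate(origins, directions))   # ~ [1, 1, 0.05]
```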

Slide 6: Determining Disparity
The minimization algorithm gives us the world location of the tie-points, but what about the rest of the points in the image?
Use standard computer vision techniques to find the remaining disparities
–Disparity is the angular difference between the locations of a point in two images
–Directly related to the depth of the point
Makes heavy use of the epipolar constraint
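The paper works with angular disparity on cylindrical images; the sketch below uses the simpler rectified planar-stereo case only to make the disparity-to-depth relationship concrete (depth = focal length × baseline / disparity). The function and the numbers are illustrative assumptions.

```python
def depth_from_disparity(disparity_px, focal_px, baseline):
    """Depth from disparity for the simplest case: a rectified planar stereo
    pair, with focal length in pixels and baseline in world units.
    depth = focal * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline / disparity_px

# A point shifted 8 px between cameras 0.3 m apart, 600 px focal length:
print(depth_from_disparity(8.0, 600.0, 0.3))   # 22.5 m
```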

Slide 7: The Epipolar Constraint
The location of a point in one image constrains it to lie on a ray in space passing through the camera center and image point
This ray projects to a curve in the second image
–A line for planar projection
–A sine curve for cylindrical projection
Since the point must lie on the ray in world space, its match must lie on the curve in the second image
Reduces the search for correspondences to a one-dimensional search along the curve
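For the planar-projection case, the epipolar curve is a line that can be computed directly from a known fundamental matrix F as l' = F x. The sketch below assumes F is already known (here a toy F for a rectified pair); it is not the paper's cylindrical formulation, where the curve is a sine.

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l' = F @ x in the second (planar) image for homogeneous
    pixel x = (x, y, 1) in the first image. The returned l' = (a, b, c)
    satisfies a*x' + b*y' + c = 0 for every candidate match x'."""
    return F @ x

def point_line_distance(l, x):
    """Perpendicular distance of homogeneous point x from line l = (a, b, c)."""
    a, b, c = l
    return abs(a * x[0] + b * x[1] + c) / np.hypot(a, b)

# Toy fundamental matrix for a pure horizontal translation (rectified pair):
F = np.array([[0, 0, 0],
              [0, 0, -1],
              [0, 1, 0]], dtype=float)
x = np.array([100.0, 50.0, 1.0])
l = epipolar_line(F, x)            # the horizontal line y' = 50
print(l, point_line_distance(l, np.array([180.0, 50.0, 1.0])))  # distance 0
```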

Slide 8: Panoramas

Slide 9: Reconstructing a New View
The disparity from a known cylinder pair can be transferred to a new cylinder, and then re-projected onto a plane (the image)
–Can all be done in one step
Have to resolve occlusion problems
–Two points from the reference image could map to the same point in the output image
–Solution: Define a fill ordering that guarantees correct occlusion (an important contribution of this paper)
Also have to fill holes

Slide 10: Working from Photos
Image-based rendering obviously relies heavily on computer vision techniques
–Particularly: depth from stereo
–The problem is very hard with real images
–These techniques are not perfect!
Sampling remains a problem
–Images tend to appear blurry
–Relatively little work on reconstruction algorithms

Slide 11: Light Field Rendering, or Lumigraphs
Aims:
–Sample the plenoptic function, or light field, densely
–Store the samples in a data structure that is easy to access
–Rendering is simply averaging of samples
The plenoptic function gives the radiance passing through a point in space in a particular direction
In free space: gives the radiance along a line
–Recall that radiance is constant along a line

Slide 12: Storing Light Fields
Each sample of the light field represents radiance along a line
Required operations:
–Store the radiance associated with an oriented line
–Look up the radiance of lines that are “close” to a desired line
Hence, we need some way of describing, or parameterizing, oriented lines
–A line is a 4D object
–There are several possible parameterizations

Slide 13: Mental Exercise
What are some parameterizations of lines in 2D?
–Vectors?
–Implicit formulas?
–Others?
How many numbers does it take to describe a line in 2D?
Some parameterizations of lines in 3D?
–Vectors?
–Implicit?
–Plücker coordinates?
–Others?
How many numbers in 3D?

Slide 14: Parameterizing Oriented Lines
Desirable properties:
–Efficient conversion from lines to parameters
–Control over which subset of lines is of interest
–Ease of uniform sampling of lines in space
Parameterize lines by their intersection with two planes in arbitrary positions
–Take (s,t) as the intersection of the line with one plane, (u,v) as the intersection with the other: L(s,t,u,v)
–Light slab: use two quadrilaterals (squares) and restrict each of s,t,u,v to (0,1)
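A minimal sketch of the line-to-parameter conversion, assuming for simplicity two axis-aligned parallel planes at z = 0 and z = 1 (in general the two planes can be in arbitrary positions); the function name is illustrative.

```python
import numpy as np

def ray_to_slab_coords(origin, direction, z_st=0.0, z_uv=1.0):
    """Two-plane ('light slab') parameterization sketch: intersect a ray with
    the plane z = z_st to get (s, t) and with the plane z = z_uv to get (u, v).
    Axis-aligned parallel planes are an assumption made for simplicity."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    if np.isclose(d[2], 0.0):
        raise ValueError("ray is parallel to the parameter planes")
    t_st = (z_st - o[2]) / d[2]
    t_uv = (z_uv - o[2]) / d[2]
    s, t = (o + t_st * d)[:2]
    u, v = (o + t_uv * d)[:2]
    return s, t, u, v

# A ray from the eye at (0.2, 0.1, -1), slightly tilted, heading toward +z:
print(ray_to_slab_coords([0.2, 0.1, -1.0], [0.1, 0.0, 1.0]))
```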

Slide 15: A Slab

Slide 16: Line Space
An alternative parameterization is line space
–Better for looking at subsets of lines and verifying sampling patterns
–In 2D, parameterize lines by their angle with the x-axis and their perpendicular distance from the origin
–Extension to 3D is straightforward (really?)
Every line in space maps to a point in line space, and vice versa
–The two spaces are dual
–Some operations are much easier in one space than the other
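A small 2D sketch of the dual mapping, using one common convention (the normal form familiar from the Hough transform, where θ is the angle of the line's normal and r is its signed distance from the origin); the slide's angle-with-the-x-axis convention differs only by a 90° offset. Names are illustrative.

```python
import numpy as np

def line_space_coords(p0, p1):
    """Map the 2D line through points p0 and p1 to its line-space point
    (theta, r), using the normal form x*cos(theta) + y*sin(theta) = r."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    n = np.array([-d[1], d[0]])          # normal to the line direction
    n /= np.linalg.norm(n)
    theta = np.arctan2(n[1], n[0])
    r = float(n @ p0)                    # signed distance of the line from the origin
    return theta, r

# The horizontal line y = 2 maps to theta = 90 degrees, r = 2:
theta, r = line_space_coords([0, 2], [1, 2])
print(np.degrees(theta), r)
```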

Slide 17: Verifying Sampling Patterns

Slide 18: Light Field Visualization

Slide 19: Capturing Light Fields
Render synthetic images
Capture digitized photographs
–Use a gantry to carefully control which images are captured
  Makes it easy to control the light field sampling pattern
  Hard to build the gantry
–Use a video camera
  Easy to acquire the images
  Hard to control the sampling pattern

Slide 20: Tightly Controlled Capture
Use a computer controlled gantry to move a camera to fixed positions and take digital images
Looks in at an object from outside
–Must acquire multiple slabs to get full coverage
–Care must be taken with camera alignment and optics
Object is rotated in front of the gantry to get multiple slabs
–Must ensure lighting moves with the object
Effectively samples the light field on a regular grid, so rendering is easier

Slide 21: Gantry Capture

Slide 22: Capture from Hand Held Video
Place the object on a calibrated stage
–Colored to allow blue-screening
–Markers to allow easy determination of camera pose
Wave the camera around in front of the object
–Map to help guide where more samples are required
Camera must be calibrated beforehand
Output: a large number of non-uniform samples
Problem: have to re-sample to get regular sampling for rendering

Slide 23: Video Based Capture

Slide 24: Re-Sampling the Light Field
Basic problem:
–Input: the set of irregular samples from the video capture process
–Output: estimates of the radiance on a regular grid in parameter space
Algorithm outline:
–Use a multi-resolution algorithm to estimate radiance in under-sampled regions
–Use a binning algorithm to uniformly resample without bias
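A minimal sketch of the binning step only, under the assumption that the samples are already expressed as (s,t,u,v) coordinates in [0,1): irregular samples are averaged into a regular 4D grid, and empty cells are left for the multi-resolution (pull-push) pass, which is not shown.

```python
import numpy as np

def bin_resample(samples, values, resolution):
    """Average irregular light-field samples onto a regular grid.
    `samples` is (N, 4) with (s, t, u, v) in [0, 1); `values` is (N, 3) RGB
    radiance; `resolution` is the number of bins per axis. Cells receiving no
    samples come back as NaN (to be filled by a multi-resolution pass)."""
    samples = np.asarray(samples, float)
    values = np.asarray(values, float)
    idx = np.clip((samples * resolution).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution,) * 4 + (3,))
    counts = np.zeros((resolution,) * 4)
    for (i, j, k, l), v in zip(idx, values):
        grid[i, j, k, l] += v
        counts[i, j, k, l] += 1
    with np.errstate(invalid="ignore"):
        return grid / counts[..., None]   # NaN where counts == 0

# Three random samples binned onto a tiny 4^4 grid:
rng = np.random.default_rng(0)
out = bin_resample(rng.random((3, 4)), rng.random((3, 3)), 4)
print(out.shape, np.isnan(out).sum(), "empty channel entries")
```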

Slide 25: Compression
Light field samples must be dense for good rendering
Dense light fields are big: 1.6 GB
–When rendering, samples could come from any part of the light field
–All of the light field must be in memory for real-time rendering
–But lots of data redundancy, so compression should do well
Desirable compression scheme properties:
–Random access to compressed data
–Asymmetric – slow compression, fast decompression
Use vector quantization and Lempel-Ziv (gzip)
–Latter only for disk storage
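A hedged sketch of the vector-quantization idea (plain k-means training, index encoding, table-lookup decoding); it is not the paper's exact trainer or tile layout, and the tile and codebook sizes are illustrative. Note the asymmetry the slide asks for: training and encoding are slow, while decoding is a single array lookup and therefore supports random access.

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Minimal VQ training sketch (plain Lloyd/k-means).
    `vectors` is (N, D); returns a (k, D) codebook."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each vector to its nearest code word, then update the codes.
        dist = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        assign = dist.argmin(axis=1)
        for c in range(k):
            members = vectors[assign == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook

def encode(vectors, codebook):
    dist = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dist.argmin(axis=1).astype(np.uint16)   # one small index per tile

def decode(indices, codebook):
    return codebook[indices]                       # fast random-access lookup

# Quantize 2x2 RGB tiles (12 floats each) to a 64-entry codebook:
tiles = np.random.default_rng(1).random((2000, 12))
cb = train_codebook(tiles, 64)
idx = encode(tiles, cb)
print(idx.nbytes, "bytes of indices +", cb.nbytes, "bytes of codebook")
```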

Slide 26: Render synthetic images
Decide which line you wish to sample, and cast a ray, or
Render an array of images from points on the (u,v) plane – pixels in the images are points on the (s,t) plane
Antialiasing is essential, both in (s,t) and (u,v)
–Standard antialiasing and aperture filtering

Slide 27: Rendering
Ray tracing: for each pixel in the image:
–Determine the ray passing through the eye and the pixel
–Interpolate the radiance along that ray from the nearest rays in the light field
Texture mapping:
–Finding the (u,v) and (s,t) coordinates is exactly the texture mapping operation
–Use graphics hardware to do the job, or write a software texture mapper (maybe faster – only have to texture map two polygons)
Use various interpolation schemes to control aliasing
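A sketch of the ray-tracing lookup step, assuming the ray's (s,t,u,v) coordinates have already been computed (e.g. with the slab-intersection sketch above) and scaled to sample-index units; it performs a quadrilinear (bilinear-on-each-plane) interpolation, one of the interpolation schemes the slide alludes to. Array layout and names are assumptions.

```python
import numpy as np

def lightfield_lookup(field, s, t, u, v):
    """Return radiance for continuous coordinates (s, t, u, v) from a regularly
    sampled light slab `field` of shape (S, T, U, V, 3), by interpolating the
    16 nearest stored rays (quadrilinear interpolation)."""
    def corners(x, size):
        x0 = int(np.clip(np.floor(x), 0, size - 2))
        return x0, x0 + 1, x - x0
    (s0, s1, fs) = corners(s, field.shape[0])
    (t0, t1, ft) = corners(t, field.shape[1])
    (u0, u1, fu) = corners(u, field.shape[2])
    (v0, v1, fv) = corners(v, field.shape[3])
    radiance = np.zeros(3)
    for si, ws in ((s0, 1 - fs), (s1, fs)):
        for ti, wt in ((t0, 1 - ft), (t1, ft)):
            for ui, wu in ((u0, 1 - fu), (u1, fu)):
                for vi, wv in ((v0, 1 - fv), (v1, fv)):
                    radiance += ws * wt * wu * wv * field[si, ti, ui, vi]
    return radiance

# A tiny 4x4x4x4 light field; look up a ray that falls between stored samples:
field = np.random.default_rng(2).random((4, 4, 4, 4, 3))
print(lightfield_lookup(field, 1.5, 2.0, 0.25, 3.0))
```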

Slide 28: Results

Slide 29: Results

Slide 30: Exploiting Geometry
When using the video capture approach, build a geometric model
–Use a volume carving technique
When determining the “nearest” samples for rendering, use the geometry to choose better samples
This has been further extended:
–Surface point used for improving sampling determines focus
–By default, we want focus at the object, so use the object geometry
–Using other surfaces gives depth of field and variable focus
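A sketch of the depth-correction idea, with several illustrative assumptions: the stored cameras lie on the (u,v) plane at z = 0, the (s,t) plane sits at z = 1, and the geometry is summarized by a single known hit depth along the viewing ray. The corrected (s,t) is found by re-projecting the geometry hit point through a nearby stored camera, rather than taking the viewing ray's own slab intersection.

```python
import numpy as np

def depth_corrected_st(ray_origin, ray_dir, hit_depth, camera_uv,
                       z_uv=0.0, z_st=1.0):
    """Re-project the point where the viewing ray hits the geometry through a
    nearby stored camera position on the (u, v) plane, returning the corrected
    (s, t). Plane placement and the single hit depth are assumptions."""
    o, d = np.asarray(ray_origin, float), np.asarray(ray_dir, float)
    surface = o + hit_depth * d                    # point on the object geometry
    cam = np.array([camera_uv[0], camera_uv[1], z_uv])
    to_surface = surface - cam                     # ray from stored camera to that point
    step = (z_st - cam[2]) / to_surface[2]
    s, t = (cam + step * to_surface)[:2]
    return s, t

# Viewing ray toward +z hits the object 2.5 units from the eye; re-project the
# hit point through the stored camera at (u, v) = (0.3, 0.1):
print(depth_corrected_st([0.0, 0.0, -1.0], [0.05, 0.0, 1.0], 2.5, (0.3, 0.1)))
```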

Slide 31: Depth Correction

Slide 32: Surface Light Fields
Instead of storing the complete light field, store only lines emanating from the surface of interest
–Parameterize the surface mesh (standard technique)
–Choose sample points on the surface
–Sample the space of rays leaving the surface from those points
–When rendering, look up nearby sample points and the appropriate sample rays
Best for rendering complex BRDF models
–An example of view dependent texturing

Slide 33: Surface Light Field Set-Up

Slide 34: Surface Light Field System
Capture with range scanners and cameras
–Geometry and images
Build lumispheres and compress them
–Several compression options, discussed in some detail
Rendering methods
–A real-time version exists

Slide 35: Surface Light Field Results

Slide 36: Surface Light Field Results
(Side-by-side comparison figure: Photos vs. Renderings)

Slide 37: Surface Light Fields Analysis
Why doesn’t this solve the photorealistic rendering problem?
How could it be extended?
–Precomputed Radiance Transfer – SIGGRAPH 2002

Slide 38: Summary
Light fields capture very dense representations of the plenoptic function
–Fields can be stitched together to give walkthroughs
–The data requirements are large
–Sampling still not dense enough – filtering introduces blurring
Next time: Using domain specific knowledge

Slide 39: Next Time
More image based rendering

