Interactive Deformation of Light Fields Billy Chen, Marc Levoy Stanford University Eyal Ofek, Heung-Yeung Shum Microsoft Research Asia To appear in the Symposium on Interactive 3D Graphics and Games (I3D) 2005 conference proceedings

Abstract We present a software pipeline that enables an animator to deform light fields. Our pipeline consists of three stages:
– splitting the light field into sub-light fields
– deforming each one
– rendering them together
We demonstrate our deformation pipeline using synthetic and photographically acquired light fields.

Introduction A light field enables photo-realistic rendering of objects and scenes without knowing their geometry [Levoy and Hanrahan 1996]. The ability to deform an object has many uses, including modeling and animation. In this paper we describe such a technique for deforming light fields.

Introduction Figure 1: Light field deformation enables an animator to interactively deform photo-realistic objects.

Introduction - our method Stage 1: splitting the light field into sub-light fields. With each 3D subvolume we associate a subset of the light field, namely those rays that intersect objects lying inside that subvolume. We assume that subvolumes are rectangular parallelepipeds, and we henceforth refer to them as deformation boxes or simply boxes. Figure 2: An illustration of a sub-light field.

Introduction - our method Stage 2: a deformation is applied to each sub-light field. The animator specifies the deformation by moving the vertices of the corresponding box. We then apply a 3D warp that maps the undeformed box to a deformed one, transforming the rays associated with that box. Figure 3: The left image shows an undeformed box (in black) and several rays (pink arrows).

Introduction - our method Stage 3: rendering them together. The deformed sub-light fields are joined together and rendered using a technique that preserves the occlusion ordering of the subvolumes.

Coaxial light fields [Levoy and Hanrahan 1996] defines the 4D light field as radiance along rays as a function of position and direction in a scene under fixed lighting. [Debevec et al. 2000] defines the 4D reflectance field as radiance along a particular 2D set of rays, i.e. a fixed view of the world, as a function of (2D) direction to the light source.

Coaxial light fields If one could capture an object under both changing viewpoint and changing illumination, one would have an 8D function (recently captured by [Goesele et al. 2004]). We define a different 4D slice, which we call the coaxial light field. We capture different views of an object, but with the light source fixed to the camera as it moves.

Figure 4: The top two images are from a simulated coaxial light field. Figure 5: As in Figure 4, the top two images are from a simulated light field.

Coaxial light fields The advantage of coaxial viewing and illumination is that it ensures the correct appearance of objects under deformation, even though no geometry has been captured. This technique has several limitations:
– First, the object must be diffuse; specular highlights will look reasonable when deformed, but they will not be correct.
– Second, perfectly coaxial lighting contains no shadows.

Splitting the light field We split a light field so that each sub-light field can undergo a different deformation. We define a sub-light field as a collection of two objects:
– a bundle of rays
– an associated deformation box
Our goal is to associate a deformation box with each ray bundle.
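
This definition maps directly onto a small data structure. Below is a minimal sketch, assuming the two-point ray representation introduced later in the Deformation section; all class and field names are illustrative, not from the paper.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Ray:
    # A light-field ray stored as two points on its path
    # (the representation used in the Deformation section).
    p0: np.ndarray        # 3D entry point
    p1: np.ndarray        # 3D exit point
    radiance: np.ndarray  # RGB sample carried by the ray

@dataclass
class DeformationBox:
    # A rectangular parallelepiped given by its eight corner vertices.
    corners: np.ndarray   # shape (8, 3)

@dataclass
class SubLightField:
    # A bundle of rays together with its associated deformation box.
    rays: list[Ray]
    box: DeformationBox
```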

Splitting the light field For light fields from synthetic data, creating ray bundles is straightforward, since the object geometry is known. First, we split the triangle mesh algorithmically into parts. Then, in 3D, we place a deformation box over each sub-mesh. We color each sub-mesh with a different color and capture a light field of the object.

Splitting the light field Figure 6: One view of the colored mesh and the associated deformation boxes. The fish is split into three sub-light fields, each bounded by a deformation box. In this view, the rays belonging to each sub-light field are color-coded. The three parts of the fish can now undergo independent warps.

Splitting the light field For light fields from captured data, in the simplest case, scan lines in the images can be used to segment the light field into ray bundles. In Figure 1, for example, this would allow us to rotate only the statue's head while applying a different deformation to its body.

Projector-based segmentation of light fields To move beyond such scanline-based segmentation, one can use video projectors to designate the ray bundles. To segment our light field of the teddy bear we actually capture three light fields:
– 1) a coaxially illuminated one, used for rendering
– 2) a light field under color-coded illumination from multiple projectors
– 3) one under floodlit illumination from the same projectors

Projector-based segmentation of light fields Figure 8: Capturing with the Stanford Spherical Gantry. The camera (A) rotates in a circle in the horizontal plane, with a light source (B) mounted on it for coaxial illumination. Two projectors (C, only one shown) are placed above and outside of the gantry and illuminate the object (D).

Projector-based segmentation of light fields Figure 7: Segmenting a teddy bear light field using projector illumination. The figure also shows the normalized image B' for one camera's view of the teddy bear light field.

Projector-based segmentation of light fields The image that each projector emits is created by hand. A person sits at a computer that has two video outputs:
– One output is shown on a standard CRT monitor.
– The other output is displayed through a projector aimed at the teddy bear.
We use two projectors, aimed at the teddy bear's front and back respectively.

Projector-based segmentation of light fields A third light field is acquired under floodlit illumination from the projectors. These images are used to normalize the data from the color-coded light field. Normalization is computed per pixel and per channel as B' = B / W, where W is the floodlit image (8 bits per channel), B is the colored image, and B' is the normalized color image.
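
A minimal sketch of this per-channel normalization, assuming both images are loaded as 8-bit numpy arrays (the epsilon guard against division by zero is an added assumption, not from the paper):

```python
import numpy as np

def normalize_colored_image(B: np.ndarray, W: np.ndarray,
                            eps: float = 1e-6) -> np.ndarray:
    """Divide the color-coded image B by the floodlit image W,
    per pixel and per channel, factoring out surface albedo and
    projector falloff so only the projected color code remains."""
    B = B.astype(np.float32) / 255.0
    W = W.astype(np.float32) / 255.0
    B_prime = B / np.maximum(W, eps)   # guard against dark pixels
    return np.clip(B_prime, 0.0, 1.0)
```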

Projector-based segmentation of light fields Finally, deformation boxes are created to surround colored regions of the object in the light field. We first construct boxes in 3D using a modeling program like 3D Studio Max. Then, we project the boxes onto each camera's view and verify that these projections encompass the pixels that have the associated color.
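
As an illustration of this verification step, the sketch below projects the eight box corners through a 3x4 pinhole camera matrix and checks that every pixel carrying the box's color code falls inside the projected corners' bounding rectangle. The camera matrix and the bounding-rectangle test (rather than a true convex-hull test) are simplifying assumptions:

```python
import numpy as np

def project_points(P: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Project Nx3 world points X through a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous coordinates
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]                # perspective divide

def box_covers_color(P, box_corners, color_mask) -> bool:
    """color_mask: boolean HxW image marking pixels carrying this
    box's color code (assumed non-empty)."""
    uv = project_points(P, box_corners)        # (8, 2) pixel coordinates
    umin, vmin = uv.min(axis=0)
    umax, vmax = uv.max(axis=0)
    vs, us = np.nonzero(color_mask)            # rows = v, cols = u
    return (us.min() >= umin and us.max() <= umax and
            vs.min() >= vmin and vs.max() <= vmax)
```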

Projector-based segmentation of light fields We specified ray bundles using projectors, then associated a deformation box with each bundle. An alternative approach would be to first specify the deformation boxes and then associate ray bundles with each box. One way to do this would be to specify a box in 3D and project it onto each view in the captured light field.

Deformation By warping the rays, we create the visual effect of deforming the object. We represent a ray in the light field by two points on its path. The deformation function warps these two points and produces a new ray that passes through them. Figure 9: Warping a ray. Below is a conceptual illustration of the deformed boxes and the piecewise-linear deformed ray (shown in red).
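
Under this two-point representation, warping a ray reduces to warping both points and re-forming the ray through them. A minimal sketch, assuming a deformation function D that maps 3D points from the undeformed box to the deformed one (one such D is sketched after the trilinear-coordinates slide below):

```python
import numpy as np
from typing import Callable

Deform = Callable[[np.ndarray], np.ndarray]

def warp_ray(p0: np.ndarray, p1: np.ndarray, D: Deform):
    """Warp a ray stored as two points on its path: deform each point;
    the warped ray is the one passing through the two new points."""
    q0, q1 = D(p0), D(p1)
    direction = (q1 - q0) / np.linalg.norm(q1 - q0)  # unit direction
    return q0, direction
```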

Deformation We define a deformation box C as a set of eight 3D points. The animator can move any subset of C to form Cw, a set of eight warped points. The deformation D is thus a 3D function mapping C to Cw. We assume the animator does not move points to form self-intersecting polytopes.

Deformation While there are many ways to define such a deformation, any such formulation should satisfy the following three criteria:
1. D must map C to Cw
2. D must maintain C0 continuity across deformation boxes sharing adjacent faces
3. straight lines should be preserved
We implemented a deformation technique using trilinear coordinates that satisfies the above criteria and is fast and easy to compute.

Deformation Warping with trilinear coordinates: for any 3D point p, we can define it in terms of three interpolation coordinates u, v, and w. By trilinearly interpolating across the volume, p can be written in terms of the interpolation coordinates and the 8 vertices c_i of the cube:

p = Σ_{i=1..8} B_i(u, v, w) c_i

where each basis weight B_i(u, v, w) is the product of u or (1 - u), v or (1 - v), and w or (1 - w) corresponding to vertex c_i.
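
A minimal sketch of this warp, assuming the undeformed box is axis-aligned (so u, v, w come from a simple rescaling) and the eight warped vertices are given as Cw[i, j, k] for i, j, k in {0, 1}; names are illustrative:

```python
import numpy as np

def trilinear_warp(p, box_min, box_max, Cw):
    """Map point p from an axis-aligned undeformed box to the deformed
    box whose warped vertices are Cw (an array of shape (2, 2, 2, 3))."""
    # Interpolation coordinates of p inside the undeformed box.
    u, v, w = (p - box_min) / (box_max - box_min)
    out = np.zeros(3)
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                # Trilinear basis weight for corner (i, j, k).
                weight = ((u if i else 1 - u) *
                          (v if j else 1 - v) *
                          (w if k else 1 - w))
                out += weight * Cw[i, j, k]
    return out
```

Passing D = lambda p: trilinear_warp(p, box_min, box_max, Cw) to the earlier warp_ray sketch yields the per-box ray warp.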

Deformation [Warren et al.] present a technique for convex polytopes which reduces to the same formulation in the rectilinear case. In general, this transformation satisfies criteria 1 and 2, but it is not linear. We use it to warp rays by transforming the entry and exit points of a ray with respect to its associated deformation box. In addition, the warp is well defined as long as the deformation box does not self-intersect.

Joining and rendering We now describe our rendering algorithm, which resolves the visibility problem of joining the deformed sub-light fields. Each view ray vi is partitioned into segments by the deformation boxes. Let l1, …, ln be the segments along vi, traversed in the forward direction. We warp l1 with the inverse deformation and call this warped ray segment w1. s1 is the value obtained when rendering w1 using the colored light field; t1 is the value obtained when rendering w1 using the coaxial light field.
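
A hedged sketch of per-ray compositing consistent with this description: walk the segments front to back, use the colored (segmentation) light field to test whether the warped segment actually hits the object belonging to that box, and return the coaxial light field value for the first hit. The query callables and the equality test on color codes are assumptions, not the paper's exact procedure:

```python
def shade_view_ray(segments, inverse_warp, query_colored,
                   query_coaxial, background):
    """Composite one view ray. `segments` are the pieces l1..ln of
    the view ray clipped by the deformation boxes, in forward order;
    each segment carries the color code of its box."""
    for seg in segments:
        w = inverse_warp(seg)        # warped ray segment wi
        s = query_colored(w)         # code si from the segmentation light field
        if s == seg.color_code:      # the ray hits the object in this box
            return query_coaxial(w)  # value ti from the coaxial light field
    return background                # no object hit along this ray
```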

Figure 10: Rendering a view ray. The left pair of images shows the boxes before and after deformation.

Inverse ray warping To obtain the rays wi used in the above algorithm, we apply the inverse deformation to the ray segments li. In practice, evaluating the inverse is time-consuming, so we use interpolation to approximate the inverse point. We use a hardware-accelerated texture-mapping approach to quickly interpolate among the forward-warped 3D points.
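
A software analogue of this approximation (a sketch, not the paper's GPU implementation): forward-warp a regular grid of points sampled in the undeformed box, then answer inverse queries by interpolating the undeformed grid positions at the deformed locations:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def build_approximate_inverse(box_min, box_max, forward_warp, n=8):
    """Sample an n^3 grid in the undeformed box, forward-warp it, and
    return a function approximating the inverse deformation by
    piecewise-linear interpolation among the warped samples."""
    axes = [np.linspace(lo, hi, n) for lo, hi in zip(box_min, box_max)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
    warped = np.array([forward_warp(p) for p in grid])
    interp = LinearNDInterpolator(warped, grid)  # NaN outside the hull

    def inverse(q):
        # Approximate undeformed-space point that maps to q.
        return interp(q[None, :])[0]

    return inverse
```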

Light field rendering We use a cylindrical light field parameterization (CLF) that is well suited for inward-looking light fields having a large viewing angle and limited vertical parallax. Figure 12: Rendering from a cylindrical light field. r: a deformed view ray. m: the intersection point between r and the cylinder. a, b: the cameras nearest to m. p: a focal plane orthogonal to r. x: the intersection of r with p. a', b': the projections of x onto the image planes of the nearest cameras.
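
A sketch of the first two lookups in this parameterization, assuming a vertical camera cylinder of radius R centered on the y axis and cameras stored by azimuth angle; the geometry conventions here are assumptions, not the paper's exact setup:

```python
import numpy as np

def intersect_vertical_cylinder(o, d, R):
    """First forward intersection of the ray o + t*d with the infinite
    vertical cylinder x^2 + z^2 = R^2 (the camera surface)."""
    a = d[0]**2 + d[2]**2
    if a == 0.0:
        return None                        # ray is vertical, no hit
    b = 2.0 * (o[0]*d[0] + o[2]*d[2])
    c = o[0]**2 + o[2]**2 - R*R
    disc = b*b - 4.0*a*c
    if disc < 0.0:
        return None                        # ray misses the cylinder
    t = (-b + np.sqrt(disc)) / (2.0*a)     # outgoing hit for a ray inside
    return o + t*d if t > 0.0 else None

def nearest_cameras(m, camera_azimuths):
    """Indices of the two capture cameras whose azimuth is closest
    to the azimuth of the intersection point m."""
    theta = np.arctan2(m[2], m[0])
    diffs = np.angle(np.exp(1j * (camera_azimuths - theta)))  # wrapped
    order = np.argsort(np.abs(diffs))
    return order[0], order[1]
```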

Results Figure 13: Deforming a fish with three independent deformations. The middle box is being warped, while the front box (the head) and the back box (the tail) are rotated according to the bending of the middle box.

Results Figure 14: A deformation on a teddy bear. (a) shows a view of the original captured light field. (b) shows a deformation in which the head, arms and legs are all bent or twisted independently. (c) shows the deformation boxes used to specify the motion of the teddy bear.

Discussion and Future work We have presented a pipeline for deforming and rendering light fields. By splitting, deforming and joining sub-light fields, we can apply global and local deformation to photorealistic objects. In particular, the pipeline allows an animator to use key-frame interpolation of deformations in order to produce a light field animation.

Discussion and Future work There are limitations to deforming light fields. First, there is a trade-off between modeling effort and the level of animation control. Second, when splitting a light field, we assume there is a one-to-one mapping of rays to deformation boxes.

Discussion and Future work Our ray-warping algorithm has some limitations. To twist or taper a box, the animator needs to approximate the deformation by further tessellating the light field into smaller boxes. An interesting extension of this work is to estimate the minimum tessellation of a light field necessary to perform a given transformation, with minimal knowledge of the object geometry.

Discussion and Future work We introduced the coaxial light field, where the illumination is fixed near the camera as it moves. This has the limitation that shadows are reduced and that the image appears unnatural.