Presentation on theme: "Parameterized Environment Maps"— Presentation transcript:
1 Parameterized Environment Maps. Ziyad Hakura, Stanford University; John Snyder, Microsoft Research; Jed Lengyel, Microsoft Research. In this talk, I introduce parameterized environment maps, a representation for accurate, real-time rendering of shiny objects.
2 Static Environment Maps (EMs). Generated using standard techniques: photograph a physical sphere in an environment, or render the six faces of a cube from the object center. Traditional, or static, environment maps achieve a reasonable approximation of reflections and are easily supported in hardware. Note that we use the abbreviation EM for environment maps. They are constructed using standard techniques, such as taking a photograph of a physical sphere in an environment, or rendering the six faces of a cube from the object center.
3 Ray-Traced vs. Static EM. Self-reflections are missing. Unfortunately, static environment maps fail to accurately reproduce local reflections. This is because the reflector is approximated as a point and the reflected environment as infinitely distant. Notice the missing self-reflections in this image, generated using a static environment map.
4 Here we compare the original ray-traced sequence on the left with a sequence generated using a static environment map on the right. Note the missing self-reflections of the teapot spout and knob on the body and lid.
5 Parameterized Environment Maps (PEM). A parameterized environment map is a sequence of environment maps recorded over a set of viewpoints. In this example, we record a sequence of environment maps over a one-dimensional view space.
6 3-Step Process. 1) Preprocess: ray-trace images at each viewpoint. 2) Preprocess: infer environment maps (EMs). 3) Run-time: blend between the 2 nearest EMs. Creating and using parameterized environment maps is a 3-step process. In the first step, we ray-trace images of the reflective object from a sequence of sample viewpoints. In the second step, we infer a separate environment map to match each ray-traced image. Note that unlike traditional environment maps, these environment maps are specific to the reflective object for which they are inferred. In the third step, we reconstruct an image from a particular viewpoint by blending between the two nearest environment maps.
7 Environment Map Geometry. An environment map's geometry refers to how it approximates the reflecting environment. For example, an environment can be represented by a sphere at infinity, by a finite cube, or by a finite hemisphere with a planar bottom. By picking an environment map geometry that closely matches the actual environment, we obtain better predictions of how reflections move as the view changes. At the same time, we limit ourselves to simple geometry, such as cubes, spheres, and ellipsoids, for fast ray intersections. In this diagram, we show how environment map texture coordinates are generated for an individual vertex. This is done by intersecting the reflection ray with the simple geometry that approximates the environment, in this case an ellipsoid. We then map this intersection point using the environment texture mapping to find its (u,v) texture coordinates.
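The per-vertex texture-coordinate generation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it reflects the view ray about the surface normal and intersects it with a sphere standing in for the simple EM geometry (the function names, and the use of a sphere rather than an ellipsoid, are assumptions for clarity).

```python
import math

def reflect(v, n):
    """Reflect view direction v about unit normal n: r = v - 2(v.n)n."""
    d = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2.0 * d * ni for vi, ni in zip(v, n))

def intersect_sphere(origin, direction, center, radius):
    """Far intersection of a unit-length ray with a sphere approximating
    the environment; assumes the ray origin lies inside the sphere."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    t = -b + math.sqrt(b * b - c)   # far root: exit point of the ray
    return tuple(o + t * d for o, d in zip(origin, direction))
```

The hit point would then be fed into the environment texture mapping (e.g. the cubic mapping) to obtain (u,v) coordinates for the vertex.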
8 Why Parameterized Environment Maps? Captures view-dependent shading in the environment. Accounts for geometric error due to approximation of the environment with simple geometry. There are two benefits from parameterizing environment maps. First, we can capture view-dependent shading on specular objects in the environment. For example, if the teapot is reflecting an image of a cup, and this cup is itself reflective, then we can capture the view-dependent shading on the cup using multiple environment maps. Second, we can account for geometric error caused by approximating the environment with simpler geometry. We compute environment maps by inferring them to match a ray-traced image, rather than by rendering from the object center. Doing this at each viewpoint, we obtain a good match that compensates for geometric error.
9 How to Parameterize the Space? Experimental setup: 1D view space, 1˚ separation between views, 100 sampled viewpoints. In general, the author specifies the parameters; the space can be 1D, 2D, or more: viewpoint, light changes, object motions. In our experiments, we used a one-dimensional view space containing a total of 100 sampled viewpoints separated by 1 degree. In general, the content author specifies how to parameterize the space. The space can have more than one dimension, for example two view dimensions instead of just one. Furthermore, the space can be parameterized by parameters other than viewpoint, such as light changes and object motions.
10 Ray-Traced vs. PEM. Closely match local reflections like self-reflections. Using parameterized environment maps, we are able to closely match local reflections like the self-reflections evident in each ray-traced image.
11 Here we compare the original ray-traced sequence on the left with the approximating PEM sequence on the right. Notice the self-reflections of the teapot spout and knob on the body and lid.
12 Movement Away from Viewpoint Samples. Ray-Traced vs. PEM. Furthermore, we can move the viewpoint away from the ray-traced samples, plausibly maintaining these local reflection effects. [Pause]
13 We show the effectiveness of PEMs in moving off the manifold. Here, we interpolate between sample viewpoints along the 1-D circle of viewpoints. Here, we move above and below the plane of sample viewpoints. We move closer to and farther from the reflecting object. Finally, we take a path that combines horizontal and vertical motion.
14 Previous Work. Reflections on planar surfaces [Diefenbach96]. Reflections on curved surfaces [Ofek98]. Image-based rendering methods: light field, Lumigraph, surface light field, LDIs. Decoupling of geometry and illumination: [Cabral99], [Heidrich99]. Parameterized texture maps [Hakura00]. There has been much previous work on efficient techniques to produce realistic reflections, listed here. [Pause] I will describe in detail the differences with surface light fields and parameterized texture maps to make clear what our contribution is.
15 Surface Light Fields [Miller98, Wood00] vs. PEM: dense sampling over surface points of low-resolution lumispheres, versus sparse sampling over viewpoints of high-resolution EMs. Surface light fields represent the radiance field as a dense sampling, over surface points, of low-resolution lumispheres. Our approach instead shares a single high-resolution environment map over all surface points, parameterized over a sparse sampling of views. Because lumispheres have 2 to 3 orders of magnitude fewer samples than typical environment maps, it is not surprising that current results for surface light fields demonstrate only blurry highlights. As I have just shown, we achieve mirror-like reflections with our approach. Furthermore, surface light fields require an irregular scattering of samples over the entire 4D space to reconstruct an image. Our approach accesses the single environment map appropriate for a given view, or perhaps a few closest ones, thus achieving much more spatial coherence.
16 Parameterized Texture Maps [Hakura00]. Light / View. Captures realistic pre-rendered shading effects. Last year, we introduced parameterized texture maps, which record a texture map inferred from offline ray-traced imagery over a set of parameters like viewpoint, light changes, and object motions. This example illustrates a 2D space representing a 1D view space combined with a 1D light swinging motion to produce realistic textures for a glass goblet.
17 Comparison with Parameterized Texture Maps. Parameterized texture maps [Hakura00]: static texture coordinates; pasted-on look away from sampled views. Parameterized environment maps: bounce rays off, intersect simple geometry; layered maps for local and distant environment; better quality away from sampled views. Because parameterized texture maps use static texture coordinates, we get a pasted-on look when we move away from sampled views. In contrast, with parameterized environment maps, we compute texture coordinates on-the-fly by intersecting the rays that bounce off the surface with simple geometry that approximates the environment. In addition, we distinguish between local and distant elements in the environment and store them in separate layers to get parallax. Consequently, we achieve better quality away from the sampled views.
18 Compare PEMs on the left with PTMs on the right over the same off-the-manifold viewpoint trajectory. Notice the popping artifact and pasted-on look for parameterized texture maps.
19 EM Representations. EM geometry: how the reflected environment is approximated. Examples: sphere at infinity; finite cubes, spheres, and ellipsoids. EM mapping: how the geometry is represented in a 2D map. Examples: gazing-ball (OpenGL) mapping; cubic mapping. It is important to distinguish the geometry of an environment map from its mapping. As I mentioned earlier, an environment map's geometry refers to how it approximates the reflecting environment. An environment map's mapping, on the other hand, refers to how the geometry is represented in a 2D map. Examples include the gazing-ball and cubic mappings. We use the cubic mapping, which avoids any singularities that may become visible from non-sampled views.
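The cubic mapping mentioned above can be sketched as a lookup from a 3D direction to a cube face and (u,v) coordinates. This is an illustrative sketch only; the exact face ordering and (u,v) orientation follow a common OpenGL-style convention, which is an assumption, since hardware APIs differ in their conventions.

```python
def cube_map_lookup(d):
    """Map a 3D direction d = (x, y, z) to (face, u, v) with u, v in [0, 1].
    Faces are ordered +x, -x, +y, -y, +z, -z (assumed convention)."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                      # x-major face
        face = 0 if x > 0 else 1
        sc, tc, ma = (-z if x > 0 else z), -y, ax
    elif ay >= az:                                 # y-major face
        face = 2 if y > 0 else 3
        sc, tc, ma = x, (z if y > 0 else -z), ay
    else:                                          # z-major face
        face = 4 if z > 0 else 5
        sc, tc, ma = (x if z > 0 else -x), -y, az
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)
```

Because every direction lands in the interior of exactly one face, this mapping has no singular points, consistent with the talk's reason for preferring it over the gazing-ball mapping.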
20 Layered EMs. Segment environment into local and distant maps. Allows different EM geometries in each layer. Supports parallax between layers. We segment the environment into separate maps for local and distant elements. The separation allows different EM geometries to be used to better approximate each layer, and supports parallax between layers. In our work, we use a tightly bounding ellipsoid to represent self-reflections; the more distant environment is represented as a cube.
21 Segmented, Ray-Traced Images. Distant; Local Color; Local Alpha; Fresnel. EMs are inferred for each layer separately. The ray tracer segments images into local and distant layers. Environment maps are inferred for each layer separately. I will now discuss each of the layers in more detail.
22 Distant Layer. Ray directly reaches distant environment. The distant layer is constructed in the ray tracer from rays that immediately reach the distant environment, as shown in the diagram on the right.
23 Distant Layer. Ray bounces more times off reflector. However, it is possible for rays that bounce off the object to hit the object again after their first bounce. In this example, a ray hits the teapot two more times after the first bounce, reflecting off the spout and the teapot body.
24 Distant Layer. Ray propagated through reflector. For the distant layer, we ignore secondary bounces off the reflective object by allowing rays to pass through the object after their first bounce. In this case, an incoming ray bounces off the body and ignores the presence of the spout to reach the distant environment.
25 Local Layer. Local Color; Local Alpha. The local layer is constructed from rays that bounce off the reflective object more than once before reaching the distant environment. The local layer's image is a 4-channel image with transparency, because not all rays hit the teapot more than once. This transparency is encoded in the resulting environment map, so that we can see through the local layer to the distant geometry where the local geometry is absent.
26 Fresnel Layer. Fresnel modulation is generated at run-time. The ray tracer keeps the highly view-dependent Fresnel modulation separate from the incoming radiance. We generate the Fresnel modulation at run-time using a simple formula.
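The talk does not spell out which "simple formula" is used; Schlick's approximation is a common choice for run-time Fresnel and is sketched here as a stand-in (both the formula and the default f0 value are assumptions, not taken from the talk).

```python
def fresnel_schlick(cos_theta, f0=0.04):
    """Schlick's approximation to Fresnel reflectance.
    cos_theta: cosine of the angle between view direction and normal.
    f0: reflectance at normal incidence (material-dependent; 0.04 is
    a typical dielectric value, assumed here for illustration)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

A high-degree polynomial like this is awkward to interpolate per-vertex, which is consistent with the talk's later mention of baking the Fresnel term into a 1D texture map indexed at run-time.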
27 EM Inference: Ax = b. A: hardware filter coefficients; x: unknown EM texels; b: ray-traced image. Hardware render: EM texture to screen. Our inference approach is based on the observation that a texel contributes to zero or more display pixels. Neglecting quantization effects, a texel that is twice as bright contributes twice as much. So we can model the hardware as a linear system Ax = b, where the matrix A represents the hardware filter coefficients mapping texels to display pixels, the vector x represents the environment map to be solved for, and the vector b represents the ray-traced image to be matched. We determine the elements of the matrix A by performing test renderings on the hardware that isolate the contribution of each texel. In this simplified diagram, one texel in an environment map is set to one and all others are set to zero. On the right is shown the corresponding impulse response on the screen when the lens object is rendered with this environment map. Because the matrix A is sparse, we can solve for x using conjugate gradient.
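The conjugate-gradient solve can be sketched as follows. Since A is generally rectangular (pixels by texels), this sketch runs CG on the normal equations (CGNR), applying A and then A-transpose each iteration rather than forming A^T A explicitly. It uses a dense list-of-rows matrix for clarity; the real system is sparse, and `infer_em` is a hypothetical name, not the paper's code.

```python
def infer_em(A, b, iters=100, tol=1e-12):
    """Least-squares solve of A x = b via conjugate gradient on the
    normal equations A^T A x = A^T b (CGNR)."""
    m, n = len(A), len(A[0])
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    Av  = lambda v: [dot(row, v) for row in A]                               # A v
    Atv = lambda v: [sum(A[i][j] * v[i] for i in range(m)) for j in range(n)]  # A^T v
    x = [0.0] * n
    r = Atv([bi - yi for bi, yi in zip(b, Av(x))])   # normal-equation residual
    p, rs = r[:], dot(r, r)
    for _ in range(iters):
        q = Av(p)
        alpha = rs / dot(q, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * si for ri, si in zip(r, Atv(q))]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

With a sparse A, each iteration costs only one pass over the nonzero filter coefficients, which is what makes per-viewpoint inference practical.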
28 Inferred EMs per Viewpoint. Distant; Local Color; Local Alpha. Here we show environment maps inferred for one viewpoint. These environment maps consist of 6 mip-mapped images, since we use a cubic mapping. Though we have inferred parameterized environment maps for the distant layer, we can actually get away with using a static, non-parameterized environment map for this layer in this example, since the distant environment happens to be diffuse and far from the teapot.
29 Run-Time. "Over" blending mode to composite local/distant layers. Fresnel modulation, F, generated on-the-fly per vertex. Blend between neighboring viewpoint EMs. The teapot object requires 5 texture map accesses: 2 EMs (local/distant layers) at each of 2 viewpoints (for smooth interpolation), and 1 1D Fresnel map (for better polynomial interpolation). At run-time, we use the "over" blending mode to composite the local layer, subscript L, over the distant layer, subscript D, before modulating by the Fresnel term, F. We use multi-pass rendering to assemble the shading layers. A purely reflective object, such as the teapot, in a 1D view space requires 5 texture map accesses: 2 EMs for the local/distant pair at each of 2 viewpoints for smooth interpolation, and 1 access to a 1D map for better interpolation of the high-degree polynomial involved in the Fresnel term.
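The per-pixel compositing above can be sketched as follows. This is a CPU-side illustration of the math the multi-pass hardware rendering performs; the exact pass ordering (blend viewpoints first, then composite layers) and the use of non-premultiplied alpha in the "over" operator are assumptions.

```python
def shade(local0, local1, distant0, distant1, w, fresnel):
    """One pixel of the run-time compositing.
    local0/local1: (r, g, b, a) from the local-layer EMs at the two
    nearest sampled viewpoints; distant0/distant1: (r, g, b) from the
    distant-layer EMs; w: blend weight toward viewpoint 1; fresnel: F."""
    # Blend each layer between the two nearest sampled viewpoints.
    lr, lg, lb, la = [(1 - w) * c0 + w * c1 for c0, c1 in zip(local0, local1)]
    dr, dg, db = [(1 - w) * c0 + w * c1 for c0, c1 in zip(distant0, distant1)]
    # "Over" composite: local layer in front of the distant layer.
    out = [la * lc + (1 - la) * dc for lc, dc in zip((lr, lg, lb), (dr, dg, db))]
    # Modulate the reflected radiance by the Fresnel term.
    return [fresnel * c for c in out]
```

Where the local layer's alpha is zero (rays that never re-hit the reflector), the distant layer shows through unchanged, matching the transparency encoding described for the local layer.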
30 Video Results. Experimental setup: 1D view space, 1˚ separation between views, 100 sampled viewpoints. I will now show video results for this technique.
31 Layered PEM vs. Infinite-Sphere PEM. We first compare layered parameterized environment maps with a simpler version that approximates the environment with a single sphere at infinity.
32 Notice the wobble in the reflection of the knob on the lid. This is due to the approximation of the local environment as infinitely distant in the infinite-sphere PEM.
33 Real-Time Demo. We now show a sequence captured in real-time from our viewer.
34 Our viewer achieves an average of 20 frames per second, with blending between adjacent viewpoints, on a 700 MHz PC with an Nvidia GeForce graphics accelerator.
35 Summary. Parameterized environment maps are: layered; parameterized by viewpoint; inferred to match ray-traced imagery. They account for the environment's geometry and view-dependent shading; produce mirror-like, local reflections; and support hardware-accelerated display. In summary, we have introduced parameterized environment maps. This representation is layered, parameterized by viewpoint, and inferred to match ray-traced images. The advantage of this representation is that it accounts for the environment's geometry and view-dependent shading. It is well-suited for rendering mirror-like local reflections. Furthermore, this representation is easily supported in present-day graphics hardware, and has the desirable property of requiring only spatially coherent memory accesses.
36 Future Work. Placement/partitioning of multiple environment shells. Automatic selection of EM geometry. Incomplete imaging of environment "off the manifold". Refractive objects. Glossy surfaces. In future work, we are interested in further exploring the use of multiple environment map shells to better approximate the environment, including finding the optimal partitioning and placement of such shells. We also seek automatic selection of environment map geometry. Another area of future work is handling disocclusions, where the reflector fails to image parts of its environment that are revealed in nearby views. This can happen if parts of the reflector are occluded or if its normals incompletely cover the sphere. Finally, we would like to apply these methods to refractive objects and glossy surfaces.
38 Timing Results

                     On the Manifold   Off the Manifold
  # geometry passes         2                  3
  texgen time             35 ms              35 ms
  frame time              45 ms              57 ms
  FPS                       22               17.5
39 Texel Impulse Response. Hardware render: texture to screen. To measure the hardware impulse response, we render with a single texel set to 1. We use the hardware itself to characterize the mapping of the textured object through the 3D hardware. To capture the impulse response of a single texel, we set its value to 1 and render using the same graphics operations that the decoder uses. In this diagram, the size of the solid circle represents the intensity of the pixel value.
40 Single Texel Response. Hardware rendering with a 1 in the texture map produces a set of filter coefficients for the given texel.
41 Model for Single Texel. One column per texel; one row per screen pixel. Reading the filter coefficients back from the screen creates a single column in the A matrix.
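The column-by-column construction of A can be sketched as follows. Here `render` is a hypothetical stand-in for the hardware rendering-and-readback path (a callable from a flat texture to a list of screen pixel values); the real implementation would batch impulses and exploit sparsity rather than render one texel at a time.

```python
def build_filter_matrix(render, n_texels):
    """Assemble A one column at a time: set a single texel to 1,
    render, and read back the screen. Each readback is one column
    of A (the impulse response of that texel)."""
    cols = []
    for j in range(n_texels):
        texture = [0.0] * n_texels
        texture[j] = 1.0                  # impulse in texel j
        cols.append(render(texture))      # impulse response = column j
    # Transpose so rows index screen pixels and columns index texels.
    n_pixels = len(cols[0])
    return [[cols[j][i] for j in range(n_texels)] for i in range(n_pixels)]
```

Because each texel influences only a few screen pixels, the resulting columns are mostly zeros, which is the sparsity the conjugate-gradient solve relies on.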
43 Conclusion. PEMs provide: a faithful approximation to ray-traced images at pre-rendered viewpoint samples; plausible movement away from those samples; using real-time graphics hardware. Parameterized environment maps provide a faithful approximation to ray-traced images at pre-rendered viewpoint samples. In addition, we have demonstrated plausible movement away from those samples using real-time graphics hardware.
44 PEM vs. Static EM. We compare PEMs on the left with a static environment map on the right.