1 Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments Peter-Pike Sloan, Microsoft Research Jan Kautz, MPI Informatik John Snyder, Microsoft Research

2 Previous Work – Where We Fit
[Slide figure: a chart of lighting complexity (point lights, single area light, full environment map) versus transport complexity (simple, shadows, interreflections), placing [Crow77], [Williams78], [Stamminger02], [Heidrich00], [Malzbender01], [Tong02], [Moeller02], [Blinn76], [Miller84], [Latta02], [Ramamoorthi02], [Ashikhmin02], [Greger96], [Miller98, Wood00], [Chen02], [Matusik02], and our technique.]

Early interactive rendering techniques were limited to simple point-lighting environments and had almost no transport complexity: global effects like shadows were ignored. Algorithms to generate interactive shadows exist, most notably Crow's shadow-volume technique and the shadow z-buffer technique by Williams; there is a nice extension to this work at this year's SIGGRAPH. However, these techniques are restricted to point lights. Such algorithms can be combined with multipass rendering, for example using an accumulation buffer, to generate soft shadows, but the number of passes gets impractically large as the light sources get bigger. Heidrich et al. modeled soft shadows and interreflections with bump maps, but the technique is limited to point lights and performance isn't quite real-time. Polynomial texture maps can handle shadows and interreflections in real time, but they too are limited to point lights. Early work also extended lighting from point lights to general environment maps. You have just seen two papers that generalize the early ideas to fairly general isotropic BRDFs. But environment-map methods ignore transport complexity: they can't model shadows or interreflections. A paper at this year's rendering workshop has extended shadow volumes to deal with single spherical light sources, which may achieve real-time rates on the next generation of hardware. Ashikhmin has a TOG paper that just came out that allows you to steer a small light source over a diffuse object. Our work handles arbitrary, low-frequency lighting environments and provides transport complexity, including shadows, interreflections, and caustics. Note that it is precisely soft shadows from low-frequency lighting that current interactive techniques have the greatest difficulty with. Some earlier work had interesting transport complexity with frozen lighting environments. In particular, the irradiance volumes paper allowed you to move a diffuse object through a precomputed lighting environment, dynamically lighting the object; however, the object didn't shadow itself or the environment. Surface light fields can model complex transport and deal with interesting lighting environments, but the lighting environments are frozen; a recent example is Chen's paper this year. Matusik et al. have a nice paper here that captures both a surface light field and a reflectance field, allowing you to relight objects in novel lighting environments; the reconstruction is not interactive, however.

3 Motivation

Better light integration and transport: dynamic, area lights; self-shadowing; interreflections. For diffuse and glossy surfaces, at real-time rates.

[Slide figures: point light vs. area light (top row); area lighting without shadows vs. area lighting with shadows (bottom row).]

We think there's good reason to concentrate on this part of the lighting/transport complexity space. We wanted to handle dynamic and large area lights, including their self-shadowing and interreflection effects. Area lights can add a sense of realism that is difficult to obtain with point lights: in the top row we have a displacement-mapped sphere illuminated with a point light (casting hard shadows) and an area light (casting soft shadows). The object's shape is more effectively conveyed with soft shadows. Shadows also add much more realism than area lighting without shadows. In the bottom row we have a bust of the Max Planck head rendered on the left using Ravi's technique from last year (no shadows) and on the right using our technique. Though both are lit by an entire environment, the shadowed image on the right looks more realistic. We wanted to achieve these effects for both diffuse and glossy surfaces at real-time rendering rates.

4 Basic Idea

Preprocess for all i.

Given an object placed in a lighting environment (assumed to be at infinity for now), we want to compute the reflected radiance in some view direction v. For direct illumination this results in the following integral (over the hemisphere, using the variable s) evaluated at a given point on the surface with a normal. The first factor represents the lighting environment; the second factor is a visibility function, a binary function that is 1 if the lighting environment is visible from the given point and zero otherwise. There is a factor that represents the BRDF of the surface (how it reflects light), and finally a projected-area factor (the cosine factor). If we represent our lighting environment as coefficients in some basis, we can plug it directly into this integral. Rearranging factors, taking advantage of the linearity of light transport, results in a sum of the lighting coefficients times this integral, which depends only on the basis functions and the BRDF but no longer on the lights. If the BRDF is diffuse, or the view is frozen at a point (for example in a single image), this results in a scalar, so the object/image can be relit with a simple dot product per point.
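The slide's equation image is missing from this transcript; in standard PRT notation (symbols chosen here, not copied from the slide), the integral described above and its rearrangement are:

```latex
% Reflected radiance at point p with normal n_p, view direction v:
L_p(v) = \int_{\Omega} L_{env}(s)\, V_p(s)\, f_r(s,v)\, \max(0,\, n_p \cdot s)\, ds
% Projecting the lighting into a basis, L_{env}(s) \approx \sum_i l_i\, y_i(s):
L_p(v) = \sum_i l_i \int_{\Omega} y_i(s)\, V_p(s)\, f_r(s,v)\, \max(0,\, n_p \cdot s)\, ds
       = \sum_i l_i\, t_{p,i}
% Preprocess: compute t_{p,i} for all i. It is a scalar per basis function when
% the BRDF is diffuse or the view is frozen.
```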

5 Diffuse Self-Transfer
2D example: piecewise-constant basis, shadows only. Preprocess; project light; render.

This is a didactic example of the process for diffuse surfaces. Given a point, compute the integral from the previous slide for each basis function; this gives you a coefficient, and you end up with a vector of these coefficients (one per basis function). Repeat this at every point on the object. Then you project a lighting environment into this basis (giving you the l_i's from the previous slide). All you have to do to render is compute the sum, which amounts to a dot product between the light's coefficients and the point's coefficients (sketched in code below). Doing this for every point on the object gives you the shaded result.
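A minimal Python sketch of the projection and relighting steps (not the authors' code; array names are illustrative):

```python
import numpy as np

def project_lighting(env_radiance, sh_basis, weights):
    """Project a lighting environment into the basis: l_i = sum_s w_s * L(s) * y_i(s).

    env_radiance: (S,) radiance per sample direction
    sh_basis:     (S, n) basis values per sample direction
    weights:      (S,) quadrature weights (solid angle per sample)
    """
    return (weights[:, None] * env_radiance[:, None] * sh_basis).sum(axis=0)

def relight_diffuse(transfer, light_coeffs):
    """Per-point shading is just a dot product with the light's coefficient vector.

    transfer:     (num_points, n) precomputed transfer vectors
    light_coeffs: (n,) result of project_lighting
    """
    return transfer @ light_coeffs  # (num_points,) exit radiance
```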

6 Precomputation

[Slide figure: SH basis functions 16, 17, and 18 on the left ("illuminate"); the corresponding transfer-coefficient images of the head model on the right ("result").]

This slide illustrates the precomputation for direct lighting. Each image on the right is generated by placing the head model into a lighting environment that simply consists of the corresponding basis function (the SH basis in this case, illustrated on the left). This just requires rendering software that can deal with negative lights. The result is a spatially varying set of transfer coefficients, shown on the right. To reconstruct reflected radiance, just compute a linear combination of the transfer-coefficient images scaled by the corresponding coefficients for the lighting environment.
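A sketch of the equivalent per-point computation, assuming a hypothetical ray-cast query `visible(p, d)` against the object and a basis evaluator `sh_eval(d)` (a real-SH evaluator is sketched under slide 8):

```python
import numpy as np

def diffuse_transfer_vector(p, n, sh_eval, visible, num_samples=1024, n_coeffs=25):
    """Monte Carlo estimate of t_{p,i} = integral of y_i(s) V_p(s) max(0, n.s) ds.

    n_coeffs must match the length of the vector returned by sh_eval.
    """
    rng = np.random.default_rng(0)
    t = np.zeros(n_coeffs)
    for _ in range(num_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)           # uniform direction on the sphere
        cos = max(0.0, float(n @ d))     # projected-area (cosine) factor
        if cos > 0.0 and visible(p, d):  # binary visibility toward the environment
            t += sh_eval(d) * cos
    # pdf of uniform sphere sampling is 1/(4*pi), so scale by 4*pi/N
    return t * (4.0 * np.pi / num_samples)
```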

7 Previous Work – Scene Relighting
[Dorsey91] opera lighting design: adjusts intensity of fixed light sources
[Nimeroff94] natural environments: uses steerable functions for general skylight illumination
[Teo97] efficient linear re-rendering: generalizes to non-infinite sources, PCA to reduce basis
[Debevec00] reflectance field of a face: uses directional light basis for relighting faces
[Dobashi95] lighting design: uses SH basis for point-light intensity distribution

There has been an extensive amount of work on scene relighting; here we list a few of the papers. Most of these are just for re-rendering images (not objects) in novel lighting environments. One exception is the paper by Dobashi, where SH are used to represent the light-intensity distribution of a fixed point light source while allowing an arbitrary view of the diffuse scene.

8 Basis Functions

We use spherical harmonics (SH). SH have nice properties:
simple projection/reconstruction
rotationally invariant (no aliasing)
simple rotation
simple convolution
few basis functions → low frequencies

[Slide figure: SH basis functions from (l=0, m=0) through (l=4, m=-2).]

The basis functions we use are the spherical harmonics; they are the analogue of the Fourier basis on the plane, mapped to the sphere. They have several nice properties. Since the basis is orthonormal, projection is simple, and evaluation is also very simple. The most important property for our application is that they are rotationally invariant: rotating a lighting environment's SH projection gives identical results to rotating the original lighting environment and then projecting it into the SH basis. Rotation is simple (efficient evaluation formulae exist; it is just a linear operator on the SH coefficients). As Ravi and others have shown, convolution is also simple. We use a small number of basis functions; that's where the low-frequency restriction comes from. It should be noted that nothing we have shown so far requires you to use SH; other basis functions could be used instead.
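For concreteness, a sketch of evaluating the first three real SH bands (9 coefficients) in the Cartesian polynomial form common in the PRT literature; the normalization constants are the standard ones:

```python
import numpy as np

def sh_eval9(d):
    """Evaluate real spherical harmonics up to l=2 at unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,                        # l=0, m=0
        0.488603 * y,                    # l=1, m=-1
        0.488603 * z,                    # l=1, m=0
        0.488603 * x,                    # l=1, m=1
        1.092548 * x * y,                # l=2, m=-2
        1.092548 * y * z,                # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),  # l=2, m=0
        1.092548 * x * z,                # l=2, m=1
        0.546274 * (x * x - y * y),      # l=2, m=2
    ])
```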

9 Diffuse Transfer Results
[Image captions: No Shadows/Inter; Shadows; Shadows+Inter.]

This set of images shows the Buddha model lit in the same lighting environment: without shadows, with shadows, and with shadows and interreflections.

10 Glossy Self-Transfer exiting radiance is view-dependent
depends on BRDF (we use Phong)

If you recall the reflected-radiance equation from earlier on, when the surface is diffuse (or if you stick to a single view), this just results in a scalar per basis function. When the surface is glossy and you want to be able to change the view, you have to do something else. One option would be to build a transfer matrix that maps a spherical function (incident radiance) to another spherical function (exit radiance).

11 Glossy Self-Transfer exiting radiance is view-dependent
depends on BRDF (we use Phong)
represent transferred incident radiance, not exiting
accounts for shadows, interreflections
allows run-time BRDF changes

We instead represent a transfer matrix that maps incident radiance to transferred incident radiance, which can account for shadows and interreflections but doesn't include the BRDF. This matrix is just a linear operator M that maps incident radiance L to transferred incident radiance L_tran. Reflected radiance for a view direction v is now computed by integrating L_tran against the BRDF; the only difference is that L_tran subsumes the visibility factor V. One reason we factor things this way is that, for direct illumination, you can easily change the BRDF of the surface without redoing the precomputation.

12 Transfer Matrix

Precompute how global lighting maps to transferred lighting.

[Slide figure: the global lighting vector is multiplied by per-point transfer matrices to give transferred radiance at two points, p1 (in a concavity) and p2 (on the convex hull).]

We compute a transfer matrix that describes how global lighting gets mapped to transferred incident radiance at every point on the surface. This is illustrated by the two points p1 and p2 in the slide. Here we have a simple object with a concavity. To render this object in a given lighting environment (expressed in SH), at each point on the surface we multiply the lighting environment by that point's transfer matrix to generate transferred incident radiance. Since p1 is in a concavity, it has a narrow cone of visibility and so a lot of shadowing; note how its transfer matrix blocks out most of the lighting. Doing this for all points generates a spatially varying transferred field. Note that p2 is on the convex hull of the object, so it has a complete hemisphere of visibility.

13 Glossy Rendering

[Slide diagram: lighting environment in SH (vector) → transfer matrix (maps radiance to transferred incident radiance) → integrate incident radiance against the BRDF in direction v. Phong: convolve the light and evaluate in the reflection direction R. Annotations: freeze the light; freeze the view.]

Computing the reflected radiance at a point p given a viewing direction v requires evaluating this form. L is the projection of the lighting environment into SH (done once per frame). It is then operated on by the transfer matrix M, which maps global radiance to transferred radiance; in this example the point is slightly shadowed to the right. This local radiance sphere needs to be integrated against the BRDF given the fixed view direction v (this results in a vector of coefficients). In the SIGGRAPH paper we just implemented a Phong model, which amounts to convolving the lighting environment with the appropriate lobe and evaluating the spherical function in the reflection direction (basically the glossy equivalent of Ravi's paper from last year). This is clearly a lot more work than the diffuse case, but with constraints it can still result in an efficient run time. In particular, if you freeze the view, you can multiply the row vector b by the matrix at every point, so that reflected radiance still just requires a dot product. If you freeze the light, you can push the light through the matrix at every point, requiring only evaluation of b dynamically and a dot product to generate reflected radiance at a given point.

14 Glossy Transfer Results
[Image captions: No Shadows/Inter; Shadows; Shadows+Inter.]

These are examples of a glossy Buddha (50K mesh). It runs at under 4 Hz with no constraints, 16 Hz with a frozen lighting environment (we aren't completely taking advantage of graphics hardware in this case), and 125 Hz with a frozen view: 3.6/16/125 fps on a 2.2 GHz P4 with an ATI Radeon 8500.

15 Interreflections and Caustics
Transport paths: none, 1 bounce, 2 bounces, caustics. Runtime is independent of transport complexity.

We use an offline simulation to create the transfer vectors and matrices. When we run this simulation we are free to simulate more complex transport effects than just shadowing, including interreflections and caustics. The simulation produces different transfer signals but has no effect on the cost of the run time. So far we have only dealt (mathematically) with direct illumination, i.e. transport paths that start at the lighting environment (L) and directly reach the surface (P). A single interreflection also models transport paths that originate in the lighting environment and take one bounce off the surface before reaching a point; note the reflection of the spout onto the body of the teapot. The simulator can compute multiple bounces to get more accurate interreflections. By including transport paths that bounce off an arbitrary specular surface, denoted S, we can account for caustics as well. Again, these effects do not change the complexity of the run time. Now Jan will talk about various extensions to this method, beginning with a brief synopsis of how we extended this work to handle arbitrary BRDFs in a short paper at this year's rendering workshop.

16 Arbitrary BRDFs [Kautz02]
BRDF coefficients

Let me revisit the reflectance equation in order to explain how arbitrary BRDFs are incorporated. As seen before, in order to compute exit radiance, we basically have to integrate the transferred incident light against the BRDF and the cosine factor H_n. Expanding the incident light into the SH basis again and rewriting the integral, we see that the highlighted term can be precomputed, since it's independent of the lighting. Unfortunately, this term still depends on the viewing direction v for arbitrary BRDFs, so we simply compute it for all directions v and tabulate the results in texture maps. Rendering is not much different than before: we take the coefficient vector of the incident lighting and multiply it with the transfer matrix to get transferred radiance, taking self-shadowing into account. The resulting lighting vector is still in global coordinates, whereas the BRDF is expressed in the local tangent frame, so we rotate the lighting from global coordinates into the local coordinate frame of p using the matrix R_p. We then take the tabulated BRDF coefficients for the current viewing direction from the texture map and compute the inner product with the lighting coefficients, resulting in the final radiance leaving p in direction v. [Mention Westin!]
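A sketch of this run-time sequence; `R_p` stands for the SH rotation into p's tangent frame and `brdf_table.lookup` for the texture fetch of tabulated coefficients (both placeholders, not the paper's API):

```python
import numpy as np

def shade_arbitrary_brdf(M_p, R_p, brdf_table, L, v_local):
    """e_p(v) = brdf_coeffs(v)^T . R_p . (M_p . L)."""
    L_tran = M_p @ L                    # transferred incident radiance (global frame)
    L_local = R_p @ L_tran              # SH rotation into p's local tangent frame
    b_v = brdf_table.lookup(v_local)    # tabulated BRDF/cosine coefficients for view v
    return b_v @ L_local                # inner product = exit radiance toward v
```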

17 Arbitrary BRDF Results
[Image captions: Anisotropic BRDFs; Other BRDFs; Spatially Varying.]

Here are a few results. On the left you can see renderings with anisotropic BRDFs; in the middle, renderings with the Ashikhmin-Shirley BRDF and with measured vinyl; on the right, spatially varying BRDFs. All of these examples can be rendered at interactive rates.

18 Neighborhood Transfer
Lets us cast shadows/caustics onto arbitrary receivers. Store how the object scatters/blocks light around itself (transfer matrices on a grid).

[Slide figure: the lighting vector is multiplied by transfer matrices sampled on a grid around the object to give transferred radiance at receiver points.]

Up to this point all we've talked about is *surface* self-transfer, which represents how an object scatters and blocks light onto *itself*. We can generalize this to something we call *neighborhood* transfer, which lets us cast shadows or caustics from an object onto arbitrary receivers, which may also move around. The idea is to preprocess how an object scatters and blocks light *around itself*. As we've seen, transfer matrices store how global incident lighting transforms to local incident lighting, taking self-shadowing into account. We can now use them to store how an object transfers radiance to neighboring space, using a grid of transfer matrices sampled all around the object. After applying all the matrices to the global incident light, we get a coefficient vector for every point on the grid representing transferred radiance; each such vector represents the transferred lighting taking the shadow from the object into account. Now if we intersect a receiver with this grid, we know at every point on the receiver what the transferred radiance is, i.e. how light is shadowed because of the blue object here. No matter what the receiver is or how we move it around, we get proper shadows and even reflections on it cast by the blue object.
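A sketch of querying the neighborhood grid at a receiver point, using nearest-cell lookup for brevity (the paper's interpolation scheme is not reproduced here):

```python
import numpy as np

def receiver_radiance(grid_M, grid_origin, cell_size, L, p):
    """Transferred incident radiance at receiver point p from the caster's grid.

    grid_M: (Nx, Ny, Nz, n, n) transfer matrices sampled around the caster
    L:      (n,) SH projection of the global lighting environment
    """
    idx = np.floor((p - grid_origin) / cell_size).astype(int)   # containing cell
    i, j, k = np.clip(idx, 0, np.array(grid_M.shape[:3]) - 1)
    # Shadowing/scattering by the caster is baked into the cell's matrix;
    # shade the receiver from this SH vector as usual.
    return grid_M[i, j, k] @ L
```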

19 Neighborhood Transfer Results
64x64x8 neighborhood, diffuse receiver. Timings on a 2.2 GHz P4 with an ATI Radeon 8500: 4 fps if the light changes, 120 fps for constant light.

We can use neighborhood transfer to cast a shadow from this hang-glider onto a terrain. We use a 64x64x8 neighborhood around the glider to store transfer matrices, with which we intersect the terrain. As you can see, the closer the glider gets to the terrain, the sharper the shadow becomes. It runs at 120 fps if the lighting is fixed, and 4 fps if the lighting changes (because the matrix/vector multiplication has to be done for every point on the grid).

20 Volumes Diffuse volume: 32x32x32 grid
As an extension, we can also apply our technique to volumes. Here you can see a diffuse cloud (a 32x32x32 grid) illuminated by an area source, including self-shadowing and scattering; it runs at about 40 fps on a 2.2 GHz P4 with an ATI 8500. Here we've also used dynamic lighting, i.e. we don't use a precomputed environment map, but sample the incident lighting on the fly by rendering the emitter geometry into a cube map, reading it back, and projecting it into SH (which turns out to be very fast).

21 Local Lighting using Radiance Sampling
[Image captions: single sample (at center); multi-sample locations; multi-sample result.]

So far we've made the standard environment-map assumption that lighting is at infinity, but of course when a light source is close to the object, sampling incident radiance only at the object's center is inaccurate (as seen on the left). We instead sample the incident radiance at multiple points and blend between the samples. In this example, 8 sample points chosen over the object in a preprocess, using the well-known Iterated Closest Point (ICP) algorithm seeded from vector quantization (VQ), are sufficient to give good accuracy. Note that using multiple incident-radiance samples is correct for shadowing but not for interreflections: the shadowed result only depends on lighting incident at a point, whereas the interreflected result depends on lighting over the entire object.
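A sketch of one plausible blend, using inverse-distance weights (the paper's exact blending scheme is not spelled out in these notes):

```python
import numpy as np

def blend_light_samples(p, sample_points, sample_coeffs, eps=1e-6):
    """Blend SH lighting vectors sampled at several points into one vector for p.

    sample_points: (K, 3) precomputed sample locations (e.g. from ICP/VQ)
    sample_coeffs: (K, n) SH light vector sampled at each location
    """
    d = np.linalg.norm(sample_points - p, axis=1)   # distance to each sample
    w = 1.0 / (d + eps)                             # inverse-distance weights
    w /= w.sum()
    return w @ sample_coeffs                        # (n,) blended light vector for p
```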

22 Light Size vs. SH Order

[Slide figure: a grid of shadow renderings for light sizes 0°, 20°, and 40° at SH orders n=2 (linear), n=3 (quadratic), n=4 (cubic), n=5 (quartic), n=6 (quintic), n=26, and n=26 windowed, plus a ray-traced (RT) reference.]

Here you can see a table comparing light-source size and SH order. A single blocker polygon casts a shadow onto a diffuse receiver. Our precomputation was run for different SH orders up to n=26, and renderings were done with three different light-source sizes. We used quartic SH (n=5) for all our renderings, since this was a good compromise between quality and memory consumption; note that this uses a vector of 25 coefficients for diffuse transfer and 25x25 matrices for glossy. The point light should cast a rectangular shadow, as shown in the ray-traced image, whereas our method produces soft approximations to it: the more SH coefficients you use, the better the approximation. Higher-order SH can cause ringing artifacts, but these can be reduced by windowing, i.e. attenuating higher frequencies. For a 20-degree light source, n=26 produces a shadow indistinguishable from the ray-traced image, and even the quintics do a reasonable job. A 40-degree light source needs only cubic SH to get accurate results.

23 Results Live Demo (Radeon 9700)
Peter-Pike is now going to show a few live demos.

24 Conclusions Contributions: Fast, arbitrary dynamic lighting
on surfaces or in volumes
Includes shadows and interreflections
Works for diffuse and glossy BRDFs
Limitations:
Works only for low-frequency lighting
Rigid objects only, no deformation

In conclusion, we have presented a method for fast, arbitrary dynamic lighting on surfaces or in volumes, including shadows and interreflections, with no aliasing artifacts, working for both diffuse and glossy BRDFs. On the downside, the technique only works for low-frequency lighting and doesn't allow object deformations (unless precomputed).

25 Future Work

Practical glossy transfer: eliminate frozen view/light constraints; compress matrices/vectors
Enhanced preprocessing: subsurface scattering, dispersion; simulator optimization; adaptive sampling of transfer over the surface
Deformable objects

As for future work, we'd like to eliminate the frozen light/view constraints. Matrices and vectors should be compressed, since memory consumption is quite large at the moment. By extending the simulator we could easily include subsurface scattering and dispersion. Deformable objects, like articulated characters, are also important and haven't been addressed yet. Finally, we'd like to sample radiance in dynamic scenes to guarantee an accurate solution using as small a set of sample points as possible.

26 Questions? Acknowledgements Thanks to:
Jason Mitchell & Michael Doggett (ATI)
Matthew Papakipos (NVidia)
Paul Debevec for light probes
Stanford Graphics Lab for Buddha model
Michael Cohen, Chas Boyd, Hans-Peter Seidel for early discussions and support

We would like to thank the following people and institutions. Thanks for your attention. And we'll be happy to answer your questions…

27 Performance

Model    # Verts   GF4 4600 FPS   R300 FPS   Precompute
Max       50,060       215           304        1.1h
Buddha    49,990       191           269        2.5h
Tweety    48,668       240           326        1.2h
Tyra     100,000       118           179        2.4h
Teapot   152,413        93           154        4.4h

28 Matrix Formulation
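The equation image for this slide is missing from the transcript; based on slides 11 through 13, the matrix formulation is presumably (notation assumed, not copied from the slide):

```latex
% Transferred incident radiance at point p (shadows/interreflections baked into M_p):
L^{tran}_p = \mathbf{M}_p\, \mathbf{l}
% Exit radiance toward view v, with \mathbf{b}(v) the BRDF coefficient vector:
e_p(v) = \mathbf{b}(v)^{\mathsf{T}}\, \mathbf{M}_p\, \mathbf{l}
```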

29 Results – Preprocessing

Model    Type     Sampling     Preproc.   FPS
head     diffuse  50K vert.    1.1h       129
ring     diffuse  256x256 t.   8m         94
buddha   diffuse  50K vert.    2.5h       125
buddha   glossy   50K vert.    2.5h       ..125
tyra     diffuse  100K vert.   2.4h       83
tyra     glossy   100K vert.   2.4h       ..83
teapot   glossy   150K vert.   4.4h       ..49
cloud    volume   32x32x32     15m        40
glider   neighb.  64x64x8      3h         ..120

30 Previous Work – Precomputed Transport
[Greger96] irradiance volumes: move a diffuse object through precomputed lighting
[Miller98, Wood00, Chen02] surface light fields: frozen lighting environments
[Ashikhmin02] steerable illumination textures: steers a small light source over a diffuse object
[Matusik02] image-based 3D photography: surface light field + reflectance field, not interactive

There has been a reasonable amount of previous and concurrent work dealing with precomputed radiance transfer. In the irradiance volumes paper, a diffuse receiver can move through a complex (but frozen) lighting environment; however, the object is lit without any transport complexity and cannot influence the lighting environment. There have been several papers dealing with surface light fields, including a nice one by Chen at this year's SIGGRAPH; these papers, however, assume a frozen lighting environment as well. Ashikhmin has a TOG paper that just came out that allows you to steer a small light source over a diffuse object. Matusik et al. have a nice paper here that captures both a surface light field and a reflectance field, allowing you to relight objects in novel lighting environments; the reconstruction is not interactive, however.

31 Dynamic Lighting

Sample incident lighting on the fly:
precompute textures for SH basis functions
use cube-map parameterization
render into 6 cube-map faces around p
read images back
projection: dot product between cube maps (sketched below)
Results: low-resolution cube maps are sufficient (6x16x16); average error 0.2%, worst case 0.5%; takes 1.16 ms on a P3-933 MHz with an ATI 8500.

Many rendering techniques use a single, static environment to illuminate a scene. We'd like to allow the lighting to change over time; for example, lighting can change as clouds move across the sky, or as lights or shadow-casting objects are moved around a room. Dynamic lighting is easily handled in our approach by sampling it on the fly, every frame. Sampling is done by rendering a cube map of the surrounding emitters using graphics hardware, then reading it back and projecting it into SH. In the video you can see an example of two emitters moving around the head; every frame we sample the incident radiance from these emitters to generate correct area lighting on the head, including shadows. As it turns out, low-resolution cube maps are sufficient for sampling radiance: a 6x16x16 map has an average error of 0.2% and a worst-case error of 0.5%, and it takes about 1.16 ms to read it back and project it into SH.
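A sketch of that dot-product projection, assuming a flattened list of cube-map texels; the solid-angle weight is the standard cube-face formula:

```python
import numpy as np

def texel_solid_angle(u, v, du, dv):
    """Approximate solid angle of a texel centered at (u, v) on a z=1 cube face."""
    return du * dv / (u * u + v * v + 1.0) ** 1.5

def precompute_projection_weights(face_dirs, texel_area, sh_eval):
    """Per-texel weights w = y_i(d) * dω, computed once for the fixed cube-map layout.

    face_dirs:  (T, 3) unit direction of each texel across all 6 faces
    texel_area: (T,) solid angle dω of each texel (e.g. from texel_solid_angle)
    """
    return np.stack([sh_eval(d) for d in face_dirs]) * texel_area[:, None]  # (T, n)

def project_cubemap(radiance, weights):
    """Per-frame projection: l_i = sum over texels of L * y_i * dω, i.e. a dot product."""
    # radiance: (T,) values read back from the rendered cube faces
    return radiance @ weights  # (n,) SH lighting coefficients
```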

