# Normal Mapping for Precomputed Radiance Transfer

Peter-Pike Sloan, Microsoft

Inspiration: [McTaggart04] Half-Life 2 "Radiosity Normal Mapping"

Related Work
- [Willmott99] Vector irradiance formulation of radiosity (for accelerating computation)
- [Tabellion04] Lighting model similar to the above (dominant light direction)
- [Good2005] "Spherical Harmonic Light Maps"

Goal: develop lightweight techniques to decouple normal variation (à la HL2) from parameterized models of lighting (PRT), for rigid objects.
Non-goals:
- Modeling the "local GI" of the bumps [Sig05]
- Masking effects
- Glossy materials

PRT Now that we've seen a demo, we are going to define some of the terms we will use for the rest of the talk. We use the abbreviation "PRT" for precomputed radiance transfer. In PRT, the source radiance function represents the distant lighting that will illuminate the object. It is a spherical function represented by a vector of coefficients with respect to a given basis.

PRT At every point over the surface of the object, this source radiance function is attenuated due to shadowing and increased due to inter-reflections. We call the result transferred incident radiance. It represents the local environment lighting at each point "p".

PRT This transferred incident radiance function can be integrated against the surface material properties to produce the exit radiance emanating from that point. Exit radiance is a spherical function that is parameterized by view direction.

In order to get that global transport, we introduce bi-scale radiance transfer. It uses PRT to model macro-scale transport: the source radiance is transformed into a local lighting vector at a coarse sampling over the object. This is then interpolated and used to light a radiance transfer texture (RTT) that models the meso-scale effects.

Notation:
- l : source radiance spherical function (vector of coefficients)
- Mp : 25x25 transfer matrix at point p (source → transferred incident)
- q(xp) : ID map (2D → 2D, maps the RTT patch over the surface)
- b(x,v) : RTT (4D → 25D, tabulated over a small spatial patch)

Mathematically, source radiance l is transferred to incident radiance using the transfer matrix Mp. An ID map "q" defined over the whole surface indexes into a small RTT "b". The resulting RTT sample is then dotted with the transferred incident radiance vector, generating exit radiance "e". This essentially applies macro-scale transferred radiance to the meso-scale RTT.
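The pipeline above can be sketched in a few lines. This is a toy illustration under stated assumptions: the container shapes, the `exit_radiance` name, and the dictionary-based RTT/ID-map lookups are inventions for clarity, not the paper's code.

```python
import numpy as np

# Exit radiance at a surface point: e = b(q(x), v) . (Mp @ l)
def exit_radiance(l, Mp, rtt, id_map, x, v):
    """l: 25-vector source radiance; Mp: 25x25 transfer matrix;
    rtt: maps (patch coordinate, view direction) -> 25-vector RTT sample;
    id_map: maps a surface location x to a patch coordinate."""
    transferred = Mp @ l              # macro-scale: source -> transferred incident
    b_sample = rtt[(id_map[x], v)]    # meso-scale RTT lookup via the ID map
    return float(b_sample @ transferred)

# Toy usage: identity transfer, a single RTT entry, one surface point.
l = np.zeros(25); l[0] = 1.0
Mp = np.eye(25)
rtt = {((0, 0), 0): np.full(25, 0.5)}
id_map = {(3, 7): (0, 0)}
e = exit_radiance(l, Mp, rtt, id_map, (3, 7), 0)   # -> 0.5
```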


Limitations:
- Expensive: stores 64 "response vectors" that are 9-36D (x3 for spectral)
- Parallax mapping is a cheaper way of getting masking
- Local radiance is too much data (9-36 x 3) for low-res textures / per-vertex storage
Add constraints:
- Just model normal variation
- Diffuse only

Diffuse + Normal Maps: Quadratic SH [Ramamoorthi2001]. Distant lighting environment.

Diffuse + Normal Maps: Quadratic SH [Ramamoorthi2001]. Distant radiance to transferred incident radiance, in the local frame.

Diffuse + Normal Maps: Quadratic SH [Ramamoorthi2001]. Diagonal convolution matrix (clamped cosine kernel).
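Because the clamped-cosine kernel is circularly symmetric, convolving SH lighting with it is diagonal in the SH basis: each band l is scaled by a single constant (pi, 2pi/3, pi/4 for bands 0..2, per [Ramamoorthi2001]). A minimal sketch, assuming a 9-coefficient quadratic SH lighting vector:

```python
import numpy as np

# Per-coefficient weights of the diagonal convolution matrix:
# band 0 (1 coeff), band 1 (3 coeffs), band 2 (5 coeffs).
BAND_WEIGHTS = np.array([np.pi] + [2 * np.pi / 3] * 3 + [np.pi / 4] * 5)

def convolve_clamped_cosine(light_sh9):
    """Apply the diagonal clamped-cosine convolution to quadratic SH lighting."""
    return BAND_WEIGHTS * np.asarray(light_sh9, dtype=float)
```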

Diffuse + Normal Maps: Quadratic SH [Ramamoorthi2001]. Irradiance environment map.

Diffuse + Normal Maps: Quadratic SH [Ramamoorthi2001]. Evaluate the SH basis with the normal.
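Evaluating the nine real quadratic SH basis functions at the (unit) normal and dotting with the convolved lighting vector gives the irradiance. A self-contained sketch using the standard real-SH polynomial forms (function names are mine):

```python
import numpy as np

def eval_sh9(n):
    """The 9 real quadratic SH basis functions at a unit direction n."""
    x, y, z = n
    return np.array([
        0.282095,                      # Y00
        0.488603 * y,                  # Y1-1
        0.488603 * z,                  # Y10
        0.488603 * x,                  # Y11
        1.092548 * x * y,              # Y2-2
        1.092548 * y * z,              # Y2-1
        0.315392 * (3 * z * z - 1),    # Y20
        1.092548 * x * z,              # Y21
        0.546274 * (x * x - y * y),    # Y22
    ])

def irradiance(convolved_light_sh9, n):
    """Dot the convolved lighting vector with the SH basis at the normal."""
    return float(np.dot(convolved_light_sh9, eval_sh9(n)))
```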

Concerns
- A lot of data at low resolution: 9xO2 matrices (x3 with color bleeding); can be compressed using CPCA [Sig03]
- Too much data passed from low res to high res: irradiance environment map (27 numbers, 7 interpolators)
Alternatives:
- Project into an analytic basis
- Separable approximation

Analytic Basis: project into a new basis (fewer rows).

Shifted Associated Legendre Polynomials [Gautron2005]

Half-Life 2 Basis
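The HL2 basis stores lighting along three tangent-space directions, as described in [McTaggart04]. A sketch of reconstruction at a shaded normal; the squared-clamped-dot blend weights follow the common Source-engine formulation, but treat that exact weighting, and the function names, as assumptions here:

```python
import numpy as np

# The three orthonormal HL2 tangent-space basis directions [McTaggart04].
HL2_BASIS = np.array([
    [-1 / np.sqrt(6),  1 / np.sqrt(2), 1 / np.sqrt(3)],
    [-1 / np.sqrt(6), -1 / np.sqrt(2), 1 / np.sqrt(3)],
    [np.sqrt(2 / 3),   0.0,            1 / np.sqrt(3)],
])

def hl2_irradiance(normal_ts, light_colors):
    """Blend three per-direction lightmap colors by the squared, clamped
    dot products of the tangent-space normal with the basis directions."""
    n = np.asarray(normal_ts, dtype=float)
    n = n / np.linalg.norm(n)
    w = np.maximum(HL2_BASIS @ n, 0.0) ** 2
    w = w / w.sum()                       # normalize the blend weights
    return w @ np.asarray(light_colors)   # (3,) weights x (3, C) colors
```

For the unperturbed normal (0, 0, 1) all three weights are equal, so the result is the average of the three lightmap colors.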

Comparison: PRT gold standard, HL2, SAL

Normal Mapping for PRT: use the same ideas as BRDF factorization [Kautz and McCool 1999], which Jaakko talked about earlier. Build a matrix "A" by sampling the convolved lighting environment over a hemisphere of normals: the rows represent the normal directions, the columns the quadratic SH lighting environment. Bi-linear basis functions are used over the hemisphere (at most 4 non-zero per normal), and entry Aij equals the convolved light basis function "j" evaluated in normal direction "i".

Normal Mapping for PRT This slide gives more intuition for the matrix A. You have some sampling of the hemisphere, and each row corresponds to a sample. The columns correspond to illumination from the 9 quadratic SH basis functions. A coefficient in the matrix represents the integral of a cosine kernel in the given direction against the lighting environment. Note that the lighting environment should be clamped to the hemisphere before this integral happens. Bi-linear basis functions are used on the unit disk and then mapped to the hemisphere to generate a value at any point on the hemisphere (so there are at most 4 non-zero entries for a given normal).

Normal Mapping for PRT: compute the SVD of A. This yields three terms: a matrix U (Nx9, where each column is a "normal basis" texture), a 9x9 diagonal matrix S (the singular values), and a 9x9 matrix Vt (for quadratic SH).
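A sketch of building A and factoring it. Assumptions: normals are drawn randomly rather than via the paper's bi-linear hemisphere parameterization, and the diagonal clamped-cosine convolution stands in for the clamped-hemisphere integral the previous slide describes. All names are illustrative.

```python
import numpy as np

def sh9(n):
    """The 9 real quadratic SH basis functions at a unit direction n."""
    x, y, z = n
    return np.array([0.282095,
                     0.488603 * y, 0.488603 * z, 0.488603 * x,
                     1.092548 * x * y, 1.092548 * y * z,
                     0.315392 * (3 * z * z - 1),
                     1.092548 * x * z, 0.546274 * (x * x - y * y)])

# Clamped-cosine convolution weights per coefficient (bands 0..2).
conv = np.array([np.pi] + [2 * np.pi / 3] * 3 + [np.pi / 4] * 5)

# Sample normal directions over the upper hemisphere.
rng = np.random.default_rng(0)
v = rng.normal(size=(256, 3))
normals = v / np.linalg.norm(v, axis=1, keepdims=True)
normals[:, 2] = np.abs(normals[:, 2])   # fold onto the upper hemisphere

# Rows = normal directions, columns = convolved SH light basis functions.
A = np.stack([conv * sh9(n) for n in normals])        # N x 9
U, S, Vt = np.linalg.svd(A, full_matrices=False)
# U: Nx9 ("normal basis" textures), S: 9 singular values, Vt: 9x9
```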

Normal Mapping for PRT: old equation → new equation. This generates a new equation that simply replaces the evaluation of the quadratic SH in the normal direction.

Normal Mapping for PRT: use only the first M singular values. Instead of the full matrices, this keeps an MxO2 matrix and an M-channel "normal direction" texture.
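The truncation itself is a standard rank-M approximation: keep the first M columns of U (the M-channel "normal direction" texture) and fold the top singular values into the 9-column matrix. A small sketch under those assumptions, with a random stand-in for A:

```python
import numpy as np

def truncate(U, S, Vt, M):
    """Keep the first M singular values: U[:, :M] is the M-channel
    "normal direction" texture, diag(S[:M]) @ Vt[:M] the MxO2 matrix."""
    return U[:, :M], np.diag(S[:M]) @ Vt[:M, :]

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 9))            # stand-in for the sampled matrix
U, S, Vt = np.linalg.svd(A, full_matrices=False)
tex, mat = truncate(U, S, Vt, 4)

approx = tex @ mat                      # best rank-4 approximation of A
err = np.linalg.norm(A - approx)        # Frobenius-norm truncation error
```

By the Eckart-Young theorem, `err` equals the root-sum-square of the discarded singular values, so M trades texture/interpolator cost directly against approximation error.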

Pixel Shader

```hlsl
void StandardSVDPS( VS_OUT In, out float3 rgb : COLOR )
{
    // Tangent-space normal (xy), remapped to [0,1] to index the
    // SVD "normal basis" texture U.
    float2 Normal = tex2D(NormalSampler, In.TexCoord);
    float2 vTex   = Normal * 0.5 + float2(0.5, 0.5);

    // First four "normal basis" values for this normal direction.
    float4 vU = tex2D(USampler, vTex);

    // Dot with the per-channel transferred lighting coefficients.
    rgb.r = dot(In.cR, vU);
    rgb.g = dot(In.cG, vU);
    rgb.b = dot(In.cB, vU);

    // Modulate by the albedo map.
    rgb *= tex2D(AlbedoSampler, In.TexCoord);
}
```

Comparison

Demo

Conclusions
- Lightweight form of normal mapping for PRT, inspired by Half-Life 2
- For static objects, diffuse only
- HL2 basis and separable basis seem to be best
- Experiment with CPCA more [Sig03]
- Integrate with other techniques: parallax mapping for masking, ambient occlusion for local effects

Acknowledgments
- Gary McTaggart for HL2 images
- Shanon Drone for models
- Paul Debevec for light probes