1
INB382/INN382 Real-Time Rendering Techniques Lecture 9: Global Illumination
Ross Brown
2
Lecture Contents Global Illumination Radiosity
Pre-computed Radiosity Transfer Spherical Harmonics
3
Global Illumination “There are one-story intellects, two-story intellects, and three-story intellects with skylights. All fact collectors with no aim beyond their facts are one-story men. Two-story men compare, reason and generalize, using labours of the fact collectors as well as their own. Three- story men idealize, imagine, and predict. Their best illuminations come from above through the skylight.” - Oliver Wendell Holmes
4
Lecture Context As mentioned before, we have covered the direct forms of illumination on object surfaces. Last week we moved further into more indirect illumination issues by modelling effects such as reflection and refraction. We then presented a more generalised approach known as ray tracing – simple but costly – not practical for real-time just yet. We now cover two relevant forms of global illumination in real-time rendering – Radiosity and Pre-computed Radiance Transfer
5
Global Illumination In most rendering, the use of a local lighting model is the norm, meaning that only the surface data at the visible point is used in the lighting calculations. This is useful for object-precision GPU systems: you can easily parallelise SIMD instructions on a stream of vertex and pixel data. Data is discarded once rendered. Problematic for global illumination, as we showed last week
6
Global Illumination Beginning
We have covered reflection, refraction and area lighting, which are global illumination techniques because the rest of the scene influences the lighting on the object. Lighting needs to be thought of as the paths that the photons take from light sources to the eye. Local lighting only accounts for one reflection: from the light source to the object, then onto the eye
7
Global Illumination Beginning
Global illumination calculates the results of the paths of photons from any reflecting object in the room – specular, transparent or diffuse. The contributions are summed for the object being rendered. This process is contained within the rendering equation developed by Kajiya [2] – more later. You can thus sum all the possible light paths to the surface – the more paths, the greater the level of realism
8
Radiosity and Ray Tracing
Thus global illumination is divided into two main approaches. Radiosity – calculation of light as radiation transfer through a volume of air. Ray Tracing – calculation of light as a set of discrete ray samples through a volume. We have covered the general idea of ray tracing. Now we illustrate the main principles of Radiosity using a series of lecture notes from Brown University [1]
9
Radiosity for Inter-Object Diffuse Reflection
Color Bleeding
Soft Shadows
No Ambient Term
View Independent
Used in other areas of Engineering
10
Pretty Pictures Reality (actual photograph)…
Minus radiosity rendering… equals the difference (or error) image – mostly due to mis-calibration
11
The Radiosity Technique: An Overview
Model scene as “patches” Each patch has an initial luminance value: all but luminaires (light sources) are probably zero Iteratively determine how much luminance travels from each patch to each other patch until entire system converges to stable values We can then render scene from any angle without recomputing these final patch luminances – they are viewer independent!
12
Overview of Radiosity The radiometric term radiosity means the rate at which energy leaves a surface: the sum of the rates at which the surface emits energy and reflects (or transmits) energy received from all other surfaces. Radiosity simulations are based on the thermal engineering model of emission and reflection of radiation using Finite Element Analysis (FEA). First determine all light interactions in a view-independent way, then render one or more views. Consider a room with only a floor and a ceiling:
13
Overview of Radiosity Suppose ceiling is actually a fluorescent drop- panel ceiling which emits light… Floor gets some of this light and reflects it back Ceiling gets some of this reflected light and sends it back… Simulation mimics these successive “bounces” through progressive refinement – each iteration contributes less energy so process converges
14
Kajiya’s [2] Rendering Equation
Light energy travelling from point i to j is equal to the light emitted from i to j, plus the integral over S (all points on all surfaces) of the reflectance from point k to i to j, times the light from k to i, all attenuated by a geometry factor:

L(i → j) = G(i ↔ j) [ Le(i → j) + ∫S f(k → i → j, λ) L(k → i) dk ]

L(i → j) is the amount of light travelling along the ray from point i to point j. Le is the amount of light emitted by the surface (luminance). f(k → i → j, λ) is the Bidirectional Reflectance Distribution Function (BRDF) of the surface: it describes how much of the light incident on the surface at i from the direction of k leaves the surface in the direction of j. λ is the wavelength of light – use a different function for R, G, & B. G(i ↔ j) is a geometry term which involves occlusion, distance, and the angle between the surfaces
15
Kajiya’s Rendering Equation
A more complete model of light transport than either raytracing or polygonal rendering, but it does omit some things, e.g., subsurface scattering. How do we evaluate this function? It is very difficult to solve such complicated integral equations analytically
16
Rendering a Scene A scene has:
geometry, luminaires (light sources), and an observation point. Light transport to the camera must be computed for every incoming angle to the observation point, bouncing off all geometry (in all directions combined). This is far too hard
17
Rendering a Scene Both raytracing and radiosity are crude approximations to the rendering equation. Raytracing: consider only a very small, finite number of rays and ignore diffuse inter-object reflections, perhaps using an ambient hack to approximate that lost component. Can use either a BRDF or a simpler (e.g., Phong) model for luminaires. Radiosity: approximate the integral over differential source areas dA(i) with a finite sum over finite areas; consider light transport not on the basis of individual rays but on the basis of energy transport between finite patches (e.g., quads or triangles that result from (adaptive) meshing) – remember Lecture 5???
18
Adaptive Meshes for Radiosity
19
What Can We Do? Design an alternative simulation of real transfer of light energy With any luck, will be more accurate, but accuracy is relative hall of mirrors is specular raytracing museum with latex-painted walls is diffuse radiosity Best solutions are hybrid techniques: use raytracing for specular components and radiosity for diffuse components
20
What Can We Do? Radiosity approximates global diffuse inter-object reflection by considering how each pair of surface elements (patches) in the scene send and receive light energy, an O(n²) operation best accomplished by progressive refinement
21
Some Important Symbols
Energy = light energy = radiosity for our purposes – it should really be a rate, i.e., energy/unit time…
Ei: initial amount of energy radiating from the i'th patch
Bi: final amount of energy radiating from the i'th patch
Bj: final amount of energy radiating from the j'th patch
Fj-i: fraction of the energy Bj emitted by the j'th patch that is gathered by the i'th patch (a relationship between the i'th and j'th patches based on geometry: distance, relative angles)
Fj-i Bj: total amount of patch j's energy sent to patch i
ρi: fraction of incoming energy to a patch that is then exported in the next iteration
22
Let’s Arrange Those Symbols
The amount of light/radiosity/energy a patch finally emits is its initial emission plus the sum of emissions due to the other n−1 patches in the scene emitting to this patch – a recursive definition:

Bi = Ei + ρi (F1-i B1 + F2-i B2 + … + Fn-i Bn)

Note: F1-1 is zero only for planar patches! Why is it zero only for planar patches?
23
Arranging Those Symbols
Thus: Bi − ρi Σj Fj-i Bj = Ei
Rewrite as a vector product: Bi − ρi (F1-i, F2-i, …, Fn-i) · (B1, B2, …, Bn) = Ei
And the whole system: one such row per patch, collected into a single n × n matrix equation in B
24
Arranging Those Symbols Some More
Decompose the first matrix as I − D(ρ)F, so the system can be rewritten: (I – D(ρ)F)B = E, where D(ρ) is the diagonal matrix with ρi as its ith diagonal entry, and F is called a Form Factor Matrix and is based on the geometry between patches. If we know E, D(ρ), and F, we can determine B. Let A = I – D(ρ)F
25
Arranging Those Symbols Even More
Then we are solving (for B) the equation AB = E. This is a linear system, and methods for solving these are well-known, e.g. Gaussian elimination or Gauss–Seidel iteration (although which method is best depends on the nature of matrix A) – part of introductory linear algebra courses. Typically we want B, knowing E and A
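As a concrete sketch of the linear-system view, the small script below builds A = I − D(ρ)F for a hypothetical three-patch scene (the emissions, reflectivities and form factors are made-up illustrative numbers, not from the lecture) and solves AB = E directly:

```python
import numpy as np

# Hypothetical 3-patch scene: emissions, reflectivities, and a
# made-up form-factor matrix whose rows sum to less than 1.
E   = np.array([10.0, 0.0, 0.0])
rho = np.array([0.6, 0.7, 0.5])
F   = np.array([[0.0, 0.3, 0.2],
                [0.3, 0.0, 0.4],
                [0.2, 0.4, 0.0]])

# A = I - D(rho) F, then solve A B = E directly.
A = np.eye(3) - np.diag(rho) @ F
B = np.linalg.solve(A, E)

# Sanity check: B satisfies the radiosity equation B = E + D(rho) F B.
assert np.allclose(B, E + np.diag(rho) @ (F @ B))
```

Only patch 0 emits, yet all three patches end up with non-zero radiosity – the reflected "bounce" light that a local model would miss.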
26
Progressive Refinement
Let us look at the floor–ceiling problem again. Both ceiling and floor act as lights, emitting and reflecting light uniformly over their areas (all surfaces are considered such in radiosity):
C emits 12
C reflects 75%
F reflects 50%
C gets 1/3 of F
F gets 1/3 of C
27
Progressive Refinement
Let the ceiling emit 12 units of light per second. Let the floor get 1/3 of the light from the ceiling (based on geometry) and reflect 50% of what it gets. Let the ceiling get 1/3 of the floor's light (based on geometry), and reflect 75% of what it gets. Writing B1 for the ceiling's total light, B2 for the floor's, and E1 and E2 for the light generated by each:
Ceiling: B1 = E1 + ρ1 F2-1 B2
Floor: B2 = E2 + ρ2 F1-2 B1
28
Progressive Refinement
thus: B1 = E1 + ρ1(F2-1(E2 + ρ2(F1-2 B1)))
becoming: B1 = E1 + ρ1 F2-1 E2 + ρ1 ρ2 F2-1 F1-2 B1
which simplifies to: B1 = (E1 + ρ1 F2-1 E2) / (1 − ρ1 ρ2 F2-1 F1-2)
In general, this algebra is too complex, but we can find the solution iteratively using progressive refinement
29
Progressive Refinement
Iterative method 1, gathering energy: send out light from emitters everywhere, accumulate it, resend from all patches… Each iteration uses the radiosity values from the previous iteration as estimates for the recursive form. Iterate by rows.
Bk = E + D(ρ)FBk-1
B1 = E
B2 = E + D(ρ)FB1
B3 = E + D(ρ)FB2
Where Bk is your kth guess at the radiosity values B1, B2…
30
Progressive Refinement
Results for our example (notice what happens to the values):
{12, 0} = {B1, B2} = {E1, E2}
{12, 2}
{12.5, 2}
{12.5, 2.0833}
{12.5208, 2.0833}
{12.5208, 2.0868}
{12.5217, 2.0868}
{12.5217, 2.0870}
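The gathering iteration Bk = E + D(ρ)F Bk-1 for the ceiling/floor example can be checked in a few lines (a minimal sketch using the lecture's numbers: E1 = 12, ρ1 = 0.75, ρ2 = 0.5, F2-1 = F1-2 = 1/3):

```python
# Iterate B^k = E + D(rho) F B^(k-1) for the two-patch ceiling/floor scene.
E = (12.0, 0.0)
rho = (0.75, 0.5)
F = 1.0 / 3.0          # F2-1 = F1-2 = 1/3

B1, B2 = E             # first guess: emitted light only
for k in range(8):
    # simultaneous (Jacobi-style) update using the previous estimates
    B1, B2 = E[0] + rho[0] * F * B2, E[1] + rho[1] * F * B1
print(round(B1, 3), round(B2, 3))   # approaches (12.522, 2.087)
```

Eight iterations already agree with the closed-form fixed point B1 = 288/23 ≈ 12.5217, B2 = 48/23 ≈ 2.0870 to several decimal places – each bounce contributes a factor of ρ1ρ2F2-1F1-2 = 1/24 less energy, so convergence is fast.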
31
General Radiosity Equation
32
General Radiosity Equation
The radiosity equation for normalized unit areas of Lambertian diffuse patches is:

Bi = Ei + ρi Σj Fj-i Bj Aj / Ai

Bi is the total radiosity in watts/m² (i.e. energy/unit-time/unit-area) radiating from patch i. Note that we are now calculating Bi (and Ei) per unit area. Ei is the light emitted in watts/m². ρi is the fraction of incident energy reflected by patch i (related to the diffuse reflection coefficient kd in the simple lighting model). Aj is the area of the jth patch. (Bj Aj) is the total energy radiated by patch j with area Aj (i.e., unit radiosity × area)
33
General Radiosity Equation
Fj-i is the fraction of energy leaving (“exported by”) patch j that arrives at patch i. Fj-i is a dimensionless form factor that takes into account the shape and relative orientation of each patch, and occlusion by other patches. It is a function of r, θi, and θj. Geometrically, Fj-i is the relative area receiver patch i subtends in sender patch j's “view”, a hemisphere centred over patch j
34
General Radiosity Equation
Patches may be concave and self-reflect, so Fi-i ≥ 0. For all i, the form factors leaving a patch sum to at most 1 – a convex linear combination (conservation of energy). Fj-i Bj Aj is the total amount of energy leaving patch j arriving at patch i; Fj-i Bj Aj / Ai is the total amount of energy leaving patch j arriving at a unit area of patch i
35
General Radiosity Equation
Reciprocity relationship between Fi-j and Fj-i: Ai Fi-j = Aj Fj-i, which means form factors scaled for unit area of the receiver patch are equal
36
General Radiosity Equation
Therefore Bi = Ei + ρi Σj Fi-j Bj, which is easier to deal with, if less intuitive: it says that the radiosity of receiver patch i is the energy emitted by that patch + an attenuated sum of each sender j's radiosity times the form factor from i to j (not from j to i). For each receiver row i, iterate across all sender columns j to gather the energy those senders emit, attenuated by Fi-j. Note: we should calculate this for all wavelengths – approximate with BiR, BiG, BiB
37
Computing Form Factors
The form factor from differential sending area dAi to differential receiving area dAj is:

FdAi-dAj = (cos θi cos θj) / (π r²) · Hij dAj

for a ray of length r between the patches, at angles θi, θj to the normals of the areas. Hij is 1 if dAj is visible from dAi and 0 otherwise. We will motivate this equation…
38
Computing Form Factors
When two patches directly face each other, maximum energy is transmitted from Ai to Aj. Their normal vectors are parallel: cos θj = 1, cos θi = 1, since θi = θj = 0°. Rotate Aj so that it is perpendicular to Ai: now cos θi is still 1, but cos θj = 0 since θj = 90°. In general, we calculate the energy fraction by multiplying by cos θj; tilting Ai means multiplying by cos θi. Same as Lambertian diffuse reflection from direct lighting
39
Computing Form Factors
From where does the r² term arise? The inverse-square law of light propagation. Consider a patch A1, at a distance R = 1 from light source L. If P photons hit area A1, their density is P/A1. These same P photons pass through A2. Since A2 is twice as far from L, by similar triangles it has four times the area of A1. Therefore each similar patch on A2 receives 1/4 of the photons
40
Computing Form Factors
The π in the formula is a normalizing factor, matching the area of the unit circle (the projected area of the unit hemisphere)
41
Computing Form Factors
Now consider a differential patch dAi radiating to a finite patch Aj. Fdi-j can be computed by projecting those parts of Aj visible from dAi onto the unit hemisphere centred about dAi. The form factor is effectively the ratio of the curved patch area to the total surface area of the hemisphere; the total surface area encompasses all energy emitted by dAi. But it is costly to compute in this form
42
Approximating Form Factors
Rather than projecting Aj onto a hemisphere, Cohen and Greenberg proposed projecting onto the upper half of a cube (a “hemicube”) centred about dAi, with its top face parallel to the surface. Each face of the hemicube is divided into equal-sized square cells. Think of each face of the cube as a film plane which records what the patch dAi “sees” in each of the five directions; the centre of dAi acts as the Centre Of Projection. In other words, think of the cells on the faces as pixels, and use the frustum formed by projecting the vertices of Aj onto each face. What does this remind you of?
43
Approximating Form Factors
The identity of the closest intersecting patch is recorded at each cell (the survivor of the z-buffer algorithm). Each hemicube cell p is associated with a precomputed delta form factor value

ΔFp = (cos θi cos θp) / (π r²) · ΔA

where θp is the angle between p's surface normal and the vector of length r between dAi and p, and where ΔA is the area of the cell. We can approximate Fdi-j for any patch j by summing the ΔFp associated with each cell p covered by patch Aj's hemicube projection. This provides occlusion determination of patches through the z buffer (albeit approximately, limited by the granularity/resolution of the cells on each face), and eliminates the need to compute Hi-j
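The delta form factors for the top face of a unit hemicube can be tabulated in a few lines (a sketch under stated assumptions: hemicube half-width 1, so the top face is the square [-1,1]×[-1,1] at height z = 1 above dAi; for a cell centred at (x, y, 1), cos θi = cos θp = 1/r with r² = x² + y² + 1, giving the standard top-face specialisation ΔF = ΔA / (π (x² + y² + 1)²)):

```python
import math

def top_face_delta_form_factors(res):
    """Delta form factors for the res x res cells of a unit hemicube's
    top face, using dF = dA / (pi * (x^2 + y^2 + 1)^2)."""
    dA = (2.0 / res) ** 2            # area of one square cell
    dF = []
    for ix in range(res):
        for iy in range(res):
            # cell centre in [-1, 1] x [-1, 1]
            x = -1.0 + (ix + 0.5) * 2.0 / res
            y = -1.0 + (iy + 0.5) * 2.0 / res
            dF.append(dA / (math.pi * (x * x + y * y + 1.0) ** 2))
    return dF

dF = top_face_delta_form_factors(64)
total = sum(dF)
# The top face alone gathers roughly 0.55 of the hemisphere's energy;
# the remainder arrives through the four half side faces.
```

Summing the ΔFp of the cells a patch covers then approximates Fdi-j, exactly as described above; the side faces get the analogous (but different) per-cell formula.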
44
Faster Progressive Refinement: Shooting
Gathering: must process consecutive rows of receivers, for each receiver looping through each column/sender; all rows must be processed for each iteration of the algorithm, and all form factors must be calculated before the first iteration of Gauss–Seidel occurs. Shooting: shoot energy in order from the brightest to the least bright patch (i.e., most significant light sources first) and accumulate it at the receivers; iteratively shoot from the patch that has the largest amount of “unshot” radiosity (e.g., for a single light source, the patch which has the largest form factor with that source will be the next patch to shoot from)
45
Faster Progressive Refinement: Shooting
Form factors can be computed using a single hemicube per shooter, which can be computed and then discarded (shown next) – this solves the O(n²) processing/storage problem of each gathering iteration, and shooting converges faster than gathering. In shooting, iterate by column, not from left to right but in order of the patch with the most “unshot” radiosity: we can form an estimate of the light on all destination patches with only the first column shot, while gathering by row from all senders lights up only that row's patch. Iterating over the column of the brightest emitter lights up all patches reachable by the emitter as a zeroth-order approximation, improved by each successive emitter (and then the brightest unshot patch)
46
Details on Shooting Each row of the matrix used in “gathering”, (I – D(ρ)F), represents an estimate of patch i's radiosity Bi based on estimates of the other patch radiosities: each term in the summation gathers light from patch j, for all j. Therefore

Bi = Ei + ρi Σj Fi-j Bj

For shooting, shoot from patch i to each patch j in turn; again for each receiver j, keep adding radiosity from successive sources i in order of decreasing radiosity. So given an estimate of Bi from shooter patch i, we can estimate its impact on all receiving patches j, at the cost of computing Fj-i for each receiver patch j, i.e., via n hemicubes. But that is still too much work
47
Details on Shooting Video: https://youtu.be/AiMBG8-rYE4
Using reciprocity again, Fj-i = Fi-j Ai / Aj, so the contribution shot from patch i to each receiver j can be computed from i's own form factors – which requires only the hemicube over patch i! Thus only a single hemicube and its n form factors need be computed each pass. Note that for a given shooter i, we loop through all receiving patches j; given our notation for the form factor matrix, holding i constant and looping through all j corresponds to traversing a column
48
Radiosity Pseudocode Algorithm for fast progressive refinement through shooting:
U0 = E; // unshot energy, initialised to the emission values
B0 = E;
t = 1;
do {
  i = index_of(MAX(Ut-1)); // patch with the most unshot radiosity
  precalculate hemicube_i; // yields Fi-j for all j
  for (j = 1; j <= n; ++j) { // shoot the radiance to each receiver
    bj_t = ui_t-1 Fi-j Ai/Aj + bj_t-1;
    uj_t = ui_t-1 Fi-j Ai/Aj + uj_t-1;
  }
  ui_t = 0; // patch i's unshot radiosity has now all been shot
  ++t;
} while (Bt − Bt-1 > tolerance); // i.e. the estimates are still changing
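The shooting loop can be sketched in runnable form (a sketch under two assumptions not in the slide's pseudocode: the form-factor matrix F is taken as already known, skipping the hemicube step, and the receiver's reflectivity ρj is applied when its radiosity is updated):

```python
import numpy as np

def shoot(E, rho, F, A, tol=1e-6):
    """Progressive refinement by shooting.

    E, rho : (n,) emission and reflectivity per patch
    F      : (n, n) form factors, F[i, j] = F_{i-j}
    A      : (n,) patch areas
    """
    B = E.astype(float).copy()      # current radiosity estimate
    U = E.astype(float).copy()      # unshot radiosity per patch
    while U.max() > tol:
        i = int(np.argmax(U))       # brightest unshot patch shoots first
        for j in range(len(B)):
            if j == i:
                continue
            # energy landing on j, via reciprocity A_i F_i-j = A_j F_j-i
            dB = rho[j] * U[i] * F[i, j] * A[i] / A[j]
            B[j] += dB
            U[j] += dB
        U[i] = 0.0                  # patch i's unshot energy is spent
    return B

# Ceiling/floor example from the lecture:
E   = np.array([12.0, 0.0])
rho = np.array([0.75, 0.5])
F   = np.array([[0.0, 1/3], [1/3, 0.0]])
A   = np.array([1.0, 1.0])
B = shoot(E, rho, F, A)
```

On the two-patch scene this reproduces the gathering result (B ≈ [12.522, 2.087]), but notice that after the very first shot every patch reachable from the brightest emitter already has a usable estimate.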
49
Limitations of Radiosity
Assumption that radiation is uniform in all directions Assumption that radiosity is piecewise constant usual renderings make this assumption, but then interpolate cheaply to fake a nice-looking answer this introduces quantifiable errors Computation of form factors Fi-j can be tough especially with intervening surfaces, etc. Assumption that reflectivity is independent of directions to source and destination
50
Limitations of Radiosity
No volumetric objects (though there are equations and algorithms for calculating surface-to-volume form factors) No transparency or translucency Independence from wavelength – no fluorescence or phosphorescence Independence from phase – no diffraction Enormity of matrices! For large scenes, 10K x 10K matrices are not uncommon (shooting reduces need to have it all memory resident)
51
More Comments Even with these limitations, it produces great pictures
For n surface patches, we have to build an n × n matrix and solve Ax = b, which takes at least O(n²) – this gets rather expensive for large scenes. Could we do it in O(n) instead? The answer, for lots of nice scenes, is “Yes”. The Google search engine uses a system much like radiosity to rank its pages: site rankings are determined not only by the number of links from various sources, but by the number of links coming into those sources (and so on). After multiple iterations through the link network, site rankings stabilize. Site importance is like luminance, and every site is initially considered an “emitter”
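The ranking analogy can be made concrete with a radiosity-flavoured iteration (a hypothetical toy model to illustrate the analogy, not Google's actual algorithm; the damping value 0.85 and the link graph are made-up):

```python
def rank(links, damping=0.85, iters=50):
    """Every site starts as an 'emitter', then repeatedly gathers a damped
    share of importance from the sites linking to it.

    links[j] = list of site indices that site j links to.
    """
    n = len(links)
    r = [1.0] * n
    for _ in range(iters):
        new = [1.0 - damping] * n              # base 'emission' per site
        for j, outs in enumerate(links):
            for i in outs:
                new[i] += damping * r[j] / len(outs)  # j shares its rank
        r = new
    return r

# Toy web: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1; site 1 is linked to twice.
r = rank([[1], [2], [0, 1]])
# Site 1, with the most incoming 'energy', ends up ranked highest.
```

As with radiosity's gathering iteration, each pass redistributes a damped fraction of the previous estimates until the values stabilise.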
52
Making Radiosity Fast One approach is importance driven radiosity: if I turn on a bright light in the graphics lab with the door open, it’ll lighten my office a little… …but not much By taking each light source and asking “what’s illuminated by this, really?” we can follow a “shooting” strategy in which unshot radiosity is weighted by its importance, i.e., how likely it is to affect the scene from my point of view No longer a view-independent solution…but much faster
53
Precomputed Radiance Transfer PRT
Imagine that time is stationary. Picture a volume of space filled with photons – each cube of space can be said to have a constant photon density. Picture this field of photons in linear motion. We need to find out how many photons collide with a stationary surface per unit of time – a value called flux, which is measured in joules/second, or watts. Dividing the flux by the differential area, we get a value called the irradiance, measured in watts/m². We model this using Spherical Harmonic functions – the rest of PRT is based on the paper Spherical Harmonic Lighting: The Gritty Details, Robin Green [4]
54
Spherical Harmonics (SH)
The SH lighting paper assumes knowledge of the use of Basis Functions. Basis Functions are small pieces of signal that can be scaled and combined to produce an approximation to an original function. The process of working out how much of each basis function to sum is called Projection. To approximate a function using basis functions we must work out a scalar value that represents how much the original function f(x) is like each basis function Bi(x). We do this by integrating the product f(x)Bi(x) over the full domain of f – use a summation for an approximation
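Projection and reconstruction can be demonstrated in one dimension before moving to the sphere (a minimal sketch: the interval [0, 2π), a made-up three-function orthonormal Fourier-style basis, and a toy signal are all our own illustrative choices):

```python
import math

def project(f, basis, xs):
    """Approximate c_i = integral of f(x) * B_i(x) dx by a Riemann sum
    over sample points xs (assumed uniformly spaced on [0, 2*pi))."""
    dx = xs[1] - xs[0]
    return [sum(f(x) * b(x) for x in xs) * dx for b in basis]

# Orthonormal basis over [0, 2*pi): constant, cosine, sine.
basis = [lambda x: 1.0 / math.sqrt(2 * math.pi),
         lambda x: math.cos(x) / math.sqrt(math.pi),
         lambda x: math.sin(x) / math.sqrt(math.pi)]

f  = lambda x: 3.0 * math.sin(x) + 1.0          # toy signal to approximate
xs = [2 * math.pi * k / 1000 for k in range(1000)]
c  = project(f, basis, xs)

# Reconstruction: the weighted sum of basis functions recovers f.
approx = lambda x: sum(ci * b(x) for ci, b in zip(c, basis))
```

Because the toy signal lies entirely inside the span of the basis, the reconstruction matches it almost exactly; a signal outside the span would come back band-limited, which is exactly what happens with low-order SH lighting.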
55
Legendre Polynomials The associated Legendre polynomials are at the heart of the Spherical Harmonics, a mathematical system analogous to the Fourier transform but defined across the surface of a sphere. SH functions in general are defined over the complex numbers, but we are only interested in approximating real functions over the sphere (i.e. light intensity fields), so this lecture will work only with the Real Spherical Harmonics
56
SH series for varying values of l and m
57
Projecting into SH Space
Note how the first band is just a constant positive value. If you render a self-shadowing model using just the 0-band coefficients, the result looks just like an accessibility shader, with points deep in crevices (high curvature) shaded darker than points on flat surfaces – see Lecture 8. The l = 1 band coefficients cover signals that have only one cycle per sphere, and each one points along the x, y, or z-axis. Linear combinations of just these functions give us very good approximations to the cosine term in the diffuse surface reflectance model
58
Projection Process The process for projecting a spherical function into SH coefficients is simple. To calculate a single coefficient for a specific band, you just integrate the product of your function f and the SH function y – working out how much your function is “like” the basis function (Slide 58)
59
Example Complex SH Function Approximations
60
Approximating Lighting using SH
To project a function into SH coefficients we want to integrate the product of the function and an SH function. We must evaluate this integral using Monte Carlo integration:

ci ≈ (4π/N) Σj f(xj)

where xj is our array of pre-calculated samples (Slide 59) and the integrand f is the product f(xj) = light(xj) yi(xj). An example lighting function displayed as a color (left) and a spherical plot (right). Example (left) of a physically sampled scene using a spherical silver ball and a camera
61
Spherical Harmonic Sampling Basis
Uses a set of orthonormal basis functions on the surface of a sphere – just like the X, Y, Z axes – and the rendering equation that we want to integrate over the surface of a sphere. So all we need to do is generate evenly distributed points (more technically, unbiased random samples) over the surface of the sphere. Taking a pair of independent canonical random numbers ξx and ξy, we can map this “square” of random values into spherical coordinates using the transform

θ = 2 arccos(√(1 − ξx)),  φ = 2π ξy

This forms a light sample basis for the later calculations
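The transform above can be sketched and sanity-checked in a few lines (the check relies on the fact that, for samples uniform over the sphere, cos θ is uniform on [−1, 1], so its sample mean should approach zero):

```python
import math, random

def sphere_sample(xi_x, xi_y):
    """Map two canonical uniform randoms in [0, 1) to spherical
    coordinates distributed uniformly over the unit sphere:
    theta = 2 * acos(sqrt(1 - xi_x)), phi = 2 * pi * xi_y."""
    theta = 2.0 * math.acos(math.sqrt(1.0 - xi_x))
    phi = 2.0 * math.pi * xi_y
    return theta, phi

# Sanity check: cos(theta) should be uniform on [-1, 1], mean near 0.
random.seed(1)
zs = [math.cos(sphere_sample(random.random(), random.random())[0])
      for _ in range(20000)]
mean_z = sum(zs) / len(zs)
```

Note that ξx = 0.5 maps to θ = π/2, the equator, as expected: half of a uniform sphere distribution lies in each hemisphere.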
62
Generating SH Coefficients
Applying this process to the light source we defined earlier with 10,000 samples over 4 bands gives us this vector of coefficients: [ , , , , , , , , , , , , , , ]. Reconstructing the SH functions from these coefficients, for checking purposes, is simply a case of calculating a weighted sum of the basis functions. An example approximated lighting function displayed as a color and a spherical plot
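The projection-then-reconstruction round trip can be sketched for the first two bands (the constants 0.282095 and 0.488603 are the standard real SH basis values for l = 0 and l = 1; the hemispherical light function light = max(0, cos θ) is our own illustrative choice, not the lecture's light source):

```python
import math, random

def sh_basis(x, y, z):
    """First two bands (l = 0, 1) of the real spherical harmonics,
    evaluated at a point (x, y, z) on the unit sphere."""
    return [0.282095,            # Y_0^0
            0.488603 * y,        # Y_1^-1
            0.488603 * z,        # Y_1^0
            0.488603 * x]        # Y_1^1

def project_light(light, n=50000):
    """Monte Carlo projection: c_i = (4*pi / N) * sum light(s) * y_i(s)
    over uniform unit-sphere samples s."""
    random.seed(0)
    coeffs = [0.0] * 4
    for _ in range(n):
        z = 2.0 * random.random() - 1.0      # cos(theta) uniform in [-1,1]
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(1.0 - z * z)
        x, y = r * math.cos(phi), r * math.sin(phi)
        for i, yi in enumerate(sh_basis(x, y, z)):
            coeffs[i] += light(x, y, z) * yi
    return [c * 4.0 * math.pi / n for c in coeffs]

# Hemispherical light along +z: light = max(0, cos theta).
c = project_light(lambda x, y, z: max(0.0, z))

# Reconstruction at the pole: the l = 0,1 band-limited clamped cosine
# evaluates to about 0.75 there (higher bands would push it toward 1).
rec = sum(ci * yi for ci, yi in zip(c, sh_basis(0.0, 0.0, 1.0)))
```

By symmetry the x and y coefficients come out near zero; only the constant and the z-aligned l = 1 function carry energy for this light.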
63
Can also model shadows with SH
The transfer function (occlusion factor) in the lighting model can also be stored as a 16-coefficient SH function. As you can see, this is a different way of recording shadowing at a point on a model without forcing us to do the final integral. We can rotate the object and still get correct values for shadows on the surface without recasting the rays – an improvement on the last lecture's approach. Light Distribution × Transfer (Occlusion) = Final Light Distribution
64
Indirect Lighting Steps
Geometrically, the idea of interreflected light is simple. Each point on the model already knows how much direct illumination it has, encoded in the form of an SH transfer function. Fire rays to find sample points that can reflect light back onto our position, and add a cosine-weighted copy of that transfer function back into our own. For example, point A in the illustration has fired a ray and hit point B
65
Shadowed Indirect Lighting
A rendering using 5th order diffuse shadowed SH transfer functions Note the soft shadowing from the constant hemisphere light source NB: This is how light probes work in Unity 5
66
Real-Time Rendering using SH
Now that we have a set of SH coefficients for each vertex, how do we build a renderer using current graphics hardware that will give us real-time frame rates? The basic calculation for SH lighting is the dot product between an SH-projected light source and the SH transfer function at each vertex. Approximate the complete solution over the object by filling in the gaps between vertices using Gouraud shading. Typically we only need 4 coefficients per vertex – one extra 4-value colour
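The per-vertex shading step really is just a dot product (a minimal sketch; the two 4-coefficient vectors below are hypothetical stand-ins for a projected light and a precomputed vertex transfer function):

```python
def shade_vertex(light_coeffs, transfer_coeffs):
    """SH lighting at a vertex: the dot product of the projected light
    vector and the vertex's precomputed transfer vector."""
    return sum(l * t for l, t in zip(light_coeffs, transfer_coeffs))

# Hypothetical 4-coefficient vectors, packed like one extra 4-value colour:
light    = [0.886, 0.0, 1.023, 0.0]   # projected light source
transfer = [0.282, 0.0, 0.489, 0.0]   # per-vertex transfer function
brightness = shade_vertex(light, transfer)
```

Since each vertex needs only one multiply-add per coefficient, this maps directly onto a vertex shader, with Gouraud interpolation filling in the triangle interiors.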
67
Real-Time Rendering using SH
Can rotate the SH coefficients so that the object is able to have its first pass lighting updated in real time Now have a cheap rendering technique for generalised area light sources in a scene Can add a specular term and/or environment maps to give required appearance
68
Direct X Browser PRT Demonstration
Within the DirectX SDK is a demo browser. Run the PRT demo to see something similar to the accompanying movie. Choose scene 4 to obtain the head figure
69
Radiosity and Unity 5 For years this has been a grand challenge of CG: to run radiosity in real time. Enlighten's implementation [3] performs a partial version in real time on a tablet in Unity 5! See https://www.youtube.com/watch?v=Wrt5aLHI8ME
70
Unity 5 GI Basics [5] Unity GI is performed in the background and baked in for static objects. Precomputed Realtime GI encodes all possible bounces into a lightmap data structure texture [6]. A Directional Light is bounced into the scene via this data structure in real time, since a directional light imposes no position requirements
71
Dynamic Objects and GI Dynamic objects cannot cast light into the scene, but they can use light probes to sample the bouncing light in the scene and be lit themselves. Light probes are placed where dynamic objects are moving, and each is a spherical panoramic view of the environment
72
Dynamic Objects and GI Light probes are stored as Spherical Harmonic approximations, interpolated internally to light a dynamic object at its position. Indirect, probe and direct light are blended to provide the final lighting model in the scene
73
Unity 5 Demonstration Video
74
References
[1] John F. Hughes, Andries van Dam, Introduction to Radiosity, Brown University lecture notes, accessed 29/04/2007
[2] J. Kajiya, “The Rendering Equation”, SIGGRAPH 1986, pp. 143–150, accessed 29/04/2007
[3] content/uploads/2014/03/radiosity_architecture.pdf
[4] Robin Green, Spherical Harmonic Lighting: The Gritty Details, www1.cs.columbia.edu/~cs4162/slides/spherical-harmonic-lighting.pdf, accessed 04/05/2015