1
Advanced Graphics Algorithms Ying Zhu Georgia State University
Ray Tracing and Radiosity
2
Outline Basics Recursive ray tracing Intersection Shading
Ray tracing vs. rasterization
3
By Gilles Tran (www.povray.org)
What is ray tracing? Ray tracing is a rendering technique that calculates an image of a scene by simulating the way rays of light travel in the real world, computing the paths of light rays between the light sources and the camera. However, it does its job backwards.
4
Follow the Photon Follow each photon through its chain of bounces, until it either: runs out of energy, departs the predefined space, or strikes the image, in which case we add its contribution to the appropriate pixel. The trajectories of photons look like rays, so this process is called ray tracing.
5
Forward Ray Tracing The process mentioned above is called forward ray tracing. Problems with forward ray tracing: many rays never reach the image, so we waste a lot of time following photons that make no contribution to the image. The solution: backward ray tracing. This is what we use in computer graphics.
6
Backward Ray Tracing A ray of light is traced starting from the camera into the scene, and back through interactions with geometry, to see if it ends up back at a light source. At least one ray per pixel.
7
Backward Ray Tracing Backward recursive ray tracing:
1. Draw a line (primary ray) through each image pixel and the eye point and follow that line outward.
2. If the primary ray happens to hit a light source, then we are done: the pixel color is the same as the light source color.
3. If the primary ray hits nothing and flies straight through into empty space, we are done: the pixel color is the same as the background color.
4. If the primary ray hits a surface, we apply a local illumination model to this point to calculate its color. Then we shoot out more secondary rays (reflection and/or refraction) and recursively apply steps 2, 3, and 4.
5. The color of the primary ray is the weighted sum of the colors of all of its secondary rays. The color of each secondary ray is the weighted sum of the colors of all of its subsequent secondary rays, and so on.
8
Ray Casting Non-recursive ray tracing is called ray casting.
For every pixel (x, y)
    Construct a ray from the eye;
    color[x, y] = castRay(ray);
    plot(x, y, color[x, y]);
Complexity? O(n * m), where n is the number of objects and m is the number of pixels.
9
Why use ray tracing? More elegant than polygon scan conversion
Testbed for numerous techniques and effects: modeling, rendering, texturing. Relatively easy to implement.
10
Ray Tracing Example More pictures at
11
Ray Tracing Example
12
Ray Tracing Example
13
Ray Tracing Example
14
Where does ray tracing come from?
The ray tracing technique has been used for lens design since the 1900s. Geometric ray tracing is used to describe the propagation of light rays through a lens system or optical instrument, and to optimize the design of the instrument before it is built. Gamma ray tracing software was developed (by MAG Inc.) in the late 1960s for military applications, to help design radiation-proof military vehicles.
15
Where does ray tracing come from?
A. Appel, “On Calculating the Illusion of Reality”, IFIP Congress 1968: first to introduce ray casting. T. Whitted, “An Improved Illumination Model for Shaded Display”, Communications of the ACM, 1980: first global illumination model; first to simulate specular reflection and refractive transmission.
16
Ray Tracing Overview For each pixel, trace a primary ray to the first visible surface For each intersection, trace secondary rays: Shadow ray (in direction to light source) Reflection ray Transmission ray (refraction)
17
Ray Tracing Overview Primary rays: the ones that bring light directly to the image. Shadow rays: bring light (or shadow, if blocked by other objects) directly from a light source to an object surface. Reflected rays: carry light reflected from a surface. Transmission rays: carry light passing through an object.
18
Ray Tracing Overview [Diagram: a primary ray spawning reflection rays and refraction rays]
19
Recursive Ray Tracing Steps (1)
Construct a primary ray (eye-pixel ray): backward tracking of the photons that could have arrived along the primary ray. Intersect it with all objects in the scene and determine the nearest object and intersection point. Generate secondary rays: to the light sources (shadow rays), in reflection-associated directions, and in refraction-associated directions.
20
Recursive Ray Tracing Steps (2)
Continue recursively for each secondary ray. Terminate after a certain number of levels. Collect color information from the secondary rays and accumulate suitably averaged information for the primary ray. Assign the primary ray color to the pixel.
21
Ray Trees [Diagram: a ray tree rooted at the eye] Ni: surface normal; Ri: reflected ray; Li: shadow ray; Ti: transmitted (refracted) ray.
22
Ray Casting Similar to ray tracing, but stop before generating secondary rays Apply illumination model at nearest object intersection with no regard to light occlusion Only a local illumination model Only consider light that is directly emitted from the source and the light ray’s intersection with an isolated surface
23
Intersection A typical ray tracing algorithm has three basic elements:
Ray-object intersection, shading, and secondary ray generation. Intersection determines the performance of ray tracing. How do we represent rays and objects? Object representation is covered in the lectures about modeling. Definition of a ray: E + t(P – E), where E is the eye point and P is the pixel point.
24
Construct Primary Rays
We can use a convenient camera model to compute primary eye rays. For example, the OpenGL look-at model: Camera position Look-at point Up vector Field-of-view angle Width/height Given pixel coordinates (i, j), we can compute the ray direction d.
25
Construct Primary Rays
Direction of the ray
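As a hedged illustration of this computation (a minimal sketch under stated assumptions, not code from the slides), the following C++ fragment builds a camera basis from the look-at parameters and computes the normalized direction d of the primary ray through pixel (i, j). The Vec3 type and the dot/cross/normalize helpers are assumptions introduced for the example and are reused by the later sketches.

// Minimal vector helpers (assumed for these sketches, not from the slides).
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& v) const { return {x + v.x, y + v.y, z + v.z}; }
    Vec3 operator-(const Vec3& v) const { return {x - v.x, y - v.y, z - v.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Primary ray direction from the look-at camera model:
// eye = camera position, lookAt = look-at point, up = up vector,
// fovY = vertical field of view (radians), width/height = image size in pixels.
Vec3 primaryRayDirection(Vec3 eye, Vec3 lookAt, Vec3 up, double fovY,
                         int width, int height, int i, int j) {
    Vec3 w = normalize(lookAt - eye);           // viewing direction
    Vec3 u = normalize(cross(w, up));           // camera "right" axis
    Vec3 v = cross(u, w);                       // camera "up" axis
    double halfH = std::tan(fovY / 2.0);
    double halfW = halfH * double(width) / double(height);
    // Map pixel (i, j) to the image plane at distance 1 in front of the eye.
    double px = (2.0 * (i + 0.5) / width - 1.0) * halfW;
    double py = (1.0 - 2.0 * (j + 0.5) / height) * halfH;
    return normalize(w + u * px + v * py);      // direction d of the primary ray
}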
26
Object Representation
Scene object: direct implicit form. Express as f(Q) = 0, where Q is a surface point and f is a given formula. Intersection computation is an equation to solve: find t such that f(E + t(P – E)) = 0.
27
Quadric Surfaces Surfaces given by quadratic implicit equations, e.g.: ellipsoid: x²/a² + y²/b² + z²/c² = 1; hyperboloid of one sheet: x²/a² + y²/b² − z²/c² = 1; cone: x²/a² + y²/b² − z²/c² = 0; elliptic cylinder: x²/a² + y²/b² = 1.
28
Quadric Surfaces The ray is given parametrically by (x, y, z) = E + t(P – E). Substitute the ray's x, y, z into the surface formula; a quadratic equation in t results, and solving it gives the intersection point.
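As an illustration of the quadratic solve (a minimal sketch reusing the Vec3 helpers from the primary-ray sketch above; the slides give no code for this step), here is a ray-sphere intersection, the simplest quadric case:

// Ray: X(t) = origin + t * dir; sphere: |X - center|^2 - radius^2 = 0.
// Substituting the ray gives a*t^2 + b*t + c = 0; return the nearest t > 0, or -1.0 on a miss.
double intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
    Vec3 oc = origin - center;
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;                 // no real roots: the ray misses the sphere
    double sq = std::sqrt(disc);
    double t0 = (-b - sq) / (2.0 * a);
    double t1 = (-b + sq) / (2.0 * a);
    if (t0 > 1e-6) return t0;                    // nearest intersection in front of the origin
    if (t1 > 1e-6) return t1;                    // origin is inside the sphere
    return -1.0;                                 // both intersections are behind the origin
}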
29
Polygons The plane of the polygon should be known as Ax + By + Cz + D = 0, where (A, B, C) is the normal vector. How to come up with the plane formula? Pick three successive vertices; the normal vector is the cross product of the two edge vectors they define. Once you have the normal vector, you have values for A, B, C (and D from any vertex on the plane). Substitute the ray's x, y, z into the plane formula; a linear equation in t results, and solving it gives the intersection point.
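A minimal sketch of the linear solve for t (again reusing the assumed Vec3 helpers; the subsequent point-in-polygon test is omitted):

// Plane: Ax + By + Cz + D = 0 with normal n = (A, B, C); ray: X(t) = origin + t * dir.
// Returns t, or -1.0 if the ray is parallel to the plane or hits it behind the origin.
double intersectPlane(Vec3 origin, Vec3 dir, Vec3 n, double D) {
    double denom = dot(n, dir);
    if (std::fabs(denom) < 1e-9) return -1.0;    // ray (nearly) parallel to the plane
    double t = -(dot(n, origin) + D) / denom;    // from n . (origin + t*dir) + D = 0
    return (t > 1e-6) ? t : -1.0;
}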
30
The shading process If the primary ray hits a light, then the pixel color is the light’s color. If the ray hits a surface, then the pixel color I(P, v) as seen from point P along direction v is I(P, v) = Id + Kr*Ir + Kt*It, where Id is computed from shadow rays using local illumination model, Ir is the reflection ray color It is refraction ray color.
31
Shadow Rays We can easily render shadow in recursive ray tracing.
We cast shadow rays to determine if a particular light source is visible to a ray-object intersection point. If a light source is visible, we use local illumination model to calculate its contribution to the pixel color Can use a similar lighting equation as in OpenGL If it is not visible, which means this point is in the shadow of this light source, then we omit the contribution of this light source when applying the local illumination model.
32
Shadow Rays

Color castRay(ray)
    For every object ob
        ob->intersect(ray, hit);               // find nearest intersection
    // calculate direct (ambient) component
    Color col = ambient * hit->getMaterial()->getDiffuse();
    For every light L
        // cast shadow ray toward the light
        Ray ray2(hitPoint, L->getDir());
        For every object ob
            ob->intersect(ray2, hit2, 0);      // find nearest shadow-ray intersection
        // if the light is closer than the shadow ray intersection, the light is visible
        If (hit2->getT() > L->getDist())
            // include this light source in the shading calculation
            col = col + hit->getMaterial()->shade(ray, hit, L->getDir(), L->getColor());
    Return col;
33
Reflection Rays Compute the mirror contribution: cast a ray in the direction symmetric to the view direction V with respect to the normal N, and multiply the result by the reflection coefficient Kr. [Diagram: V and the reflected ray R make equal angles θ with the normal N]
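As a small sketch of the reflected direction (an assumption-level helper using the Vec3 type above, not the slides' code), with dir the incoming ray direction pointing toward the surface and n the unit surface normal:

// Mirror reflection: R = V - 2 (V . N) N
Vec3 reflect(Vec3 dir, Vec3 n) {
    return dir - n * (2.0 * dot(dir, n));
}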
34
Refraction Rays Compute transmitted contribution Cast ray
In refracted direction Multiply by transparency coefficient (Kt)
35
Refraction The transmitted direction is given by the Snell-Descartes law: n1 sin θ1 = n2 sin θ2. Note that I is the negative of the incoming ray direction. Don't forget to normalize the vectors. Total internal reflection occurs when the square root in the refraction formula is imaginary.
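As a sketch of the refracted direction (a hedged example using the Vec3 helpers above; eta is the ratio n1/n2 of refractive indices):

// dir: unit incoming direction (toward the surface); n: unit normal pointing against dir.
// Returns false on total internal reflection (negative radicand, i.e. imaginary square root).
bool refract(Vec3 dir, Vec3 n, double eta, Vec3& out) {
    double cosi = -dot(dir, n);
    double k = 1.0 - eta * eta * (1.0 - cosi * cosi);   // radicand from Snell's law
    if (k < 0.0) return false;                          // total internal reflection
    out = dir * eta + n * (eta * cosi - std::sqrt(k));  // transmitted direction
    return true;
}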
36
Putting It Together I(P, v) = Id + Kr*Ir + Kt*It
Id can be calculated using Phong illumination model (depending on whether the intersection point is in shadow) Kr controls global mirror reflections Kt controls global specular transmission Ir and It are the global contributions computed using recursive ray tracing.
37
Recap: Recursive Ray Tracing
traceRay
    Intersect all objects;
    Ambient shading;
    For every light
        Shadow ray;
        Shading (local illumination);
    If mirror
        Trace reflected ray;
    If transparent
        Trace transmitted ray;
    Add local and global illumination contributions;
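In the same pseudocode style as the castRay sketch earlier (a structural sketch only; Kr, Kt, and maxDepth correspond to the shading formula above, and the class interfaces are assumed):

Color traceRay(ray, depth)
    For every object ob
        ob->intersect(ray, hit);                       // keep the nearest intersection
    If nothing was hit
        Return backgroundColor;
    Color col = ambient * hit->getMaterial()->getDiffuse();   // ambient shading
    For every light L
        Ray ray2(hitPoint, L->getDir());               // shadow ray
        For every object ob
            ob->intersect(ray2, hit2, 0);
        If (hit2->getT() > L->getDist())               // light is visible from the hit point
            col = col + hit->getMaterial()->shade(ray, hit, L->getDir(), L->getColor());
    If (depth < maxDepth)
        If the material is a mirror
            col = col + Kr * traceRay(reflectedRay, depth + 1);
        If the material is transparent
            col = col + Kt * traceRay(transmittedRay, depth + 1);
    Return col;                                        // local + global contributions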
38
Rasterization vs. Ray Tracing
What’s the difference between ray tracing and rasterization? What are the benefits and drawbacks? Will ray tracing eventually replace rasterization?
39
Rasterization

for each global illumination pass {
    for each object {
        for covered pixels {
            shade
            update framebuffer
        }
    }
}
// Objects are processed one at a time

Loop through all objects for each illumination effect: reflections, shadows (shadow maps, or drawing shadow volumes). All pixels are conceptually done for one object before the next one. Alpha sorting is a problem.
40
Rasterization Interactive rendering is possible
Most global effects will be sacrificed, but some can be approximated: Reflections, refractions, bump-mapping Various lighting models, shadows Motion blur, anti-aliasing, depth-of-field Technically, can reproduce all effects ray-tracing has through pre-rendering and texture mapping
41
Ray-tracing

for all pixels {
    for each ray in path {
        find intersected object { shade }
    }
    update framebuffer
}
// Pixels are processed one at a time
42
Ray-tracing Each pixel rendered fully and independently
43
Ray-tracing Complex effects easy to implement
Can render anything that can be intersected with a ray, but this may require significant math per intersection. Secondary interactions (e.g. caustics, color bleeding) need help; solutions exist, such as photon mapping.
44
Ray-tracing Time complexity
Acceleration is necessary Usually too slow for interactive application Hard to implement in hardware Must fit entire scene in memory
45
Ray-tracing vs. Rasterization
Rasterization is FAST > 100 million polygons per second > 1 billion pixels per second Pipelined, parallelized hardware Ray-tracing is SLOW (on CPU) tens of millions of raw triangle intersections/sec Not including texture accesses, shading...
46
Ray-tracing vs. Rasterization
Ray tracing is actually easier to implement. Ray tracing can rely on Moore's law: CPUs will become faster and faster. The quality of rasterization depends on cleverness: vertex shaders, pixel shaders, multipass rendering. Rasterization is so fast it's almost fast enough! It will soon matter less how fast the program runs and more how high the quality is. In my mind, RT is easier to implement and understand, but a whole generation of programmers are growing up with high-speed depth buffer cards in their PCs with programmable pixel processing. Will they care about ray tracing?
47
Ray-tracing vs. Rasterization
Will ray tracing replace rasterization? Not likely, but we will see more and more ray-tracing implementations on graphics hardware
48
Is real-time ray tracing possible?
Ray tracing on graphics hardware: “Ray Tracing on a Stream Processor”, T. J. Purcell, PhD dissertation, Stanford University, March 2004. “Ray Tracing on Programmable Graphics Hardware”, T. J. Purcell et al., SIGGRAPH 2002.
49
Is real-time ray tracing possible?
“Realtime Ray Tracing of Dynamic Scenes on an FPGA Chip”, J. Schmittler et al., Graphics Hardware 2004. “The Ray Engine”, Nathan Carr et al., UIUC.
50
Ray Tracing Resources Persistence of Vision Raytracer (POV-Ray, at www.povray.org): a free ray tracer. It uses a scene description language to define the scene: the user specifies the location of the camera, light sources, and objects, as well as the surface texture properties of objects, their interiors (if transparent), and any atmospheric media such as fog, haze, or fire. It includes many pre-defined scenes.
51
Example of a POV-Ray Scene File
#include "colors.inc"
background { color Cyan }
camera {
  location <0, 2, -3>
  look_at <0, 1, 2>
}
sphere {
  <0, 1, 2>, 2
  texture { pigment { color Yellow } }
}
light_source { <2, 4, -3> color White }
52
Summary What is ray tracing and why use it? Recursive ray tracing
Intersection Shading and secondary rays Shadow rays, reflection rays, and refraction rays Ray tracing vs. rasterization
53
Radiosity
54
Outline What is radiosity? Radiosity process
Form factor Solve radiosity Advantages and limitations of radiosity
55
Background Radiosity method is based on the theory of thermal heat transfer. Heat transfer theory describes radiation as the transfer of energy from a surface when that surface has been thermally excited. This thermal radiation theory can be used to describe the transfer of many kinds of energy between surfaces, including light energy.
56
What is radiosity? The radiosity of a surface is the rate at which energy leaves that surface (energy per unit time per unit area). It includes the energy emitted by a surface as well as the energy reflected from other surfaces. Radiosity methods allow the intensity of radiant energy arriving at a surface to be computed. These intensities can then be used to determine the shading of the surface. In graphics, the term “radiosity” refers to the rendering algorithm that models the lighting process as energy transfer
57
Radiosity Process Group mesh surfaces into patches
Calculate form factors for patches Solve radiosity Display patches
58
The Radiosity Equation
The "radiosity equation" describes the amount of energy which can be emitted from a surface, as the sum of the energy inherent in the surface (a light source, for example) and the energy which strikes the surface, being emitted from some other surface. The energy which leaves a surface (surface "j") and strikes another surface (surface "i") is attenuated by two factors: the "form factor" between surfaces "i" and "j", which accounts for the physical relationship between the two surfaces the reflectivity of surface "i", which will absorb a certain percentage of light energy which strikes the surface.
59
The Radiosity Equation
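In its standard form (reproduced here for reference, consistent with the description above), the radiosity equation for surface i is

    Bi = Ei + ρi * Σj ( Fij * Bj )

where Bi is the radiosity of surface i, Ei is the energy it emits itself (nonzero only for light sources), ρi is its reflectivity, and Fij is the form factor between surfaces i and j.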
60
Form Factor The "form factor" describes the fraction of energy which leaves one surface and arrives at a second surface. It takes into account the distance between the surfaces, computed as the distance between the center of each of the surfaces. It also takes into account their orientation in space relative to each other, computed as the angle between each surface's normal vector and a vector drawn from the center of one surface to the center of the other surface.
61
Form Factor To use this form factor with surfaces which have a positive area, the equation must be integrated over one or both surface areas. The form factor between a point on one surface and another surface with positive area can be used if the assumption is made that the single point is representative of all of the points on the surface.
62
The Form Factor
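In its standard form (added here for reference), the form factor between surfaces i and j is the double area integral

    Fij = (1 / Ai) ∫Ai ∫Aj ( cos θi * cos θj ) / ( π r² ) dAj dAi

where r is the distance between the two differential areas and θi, θj are the angles between each surface's normal and the line connecting them. The point-to-area form factor mentioned above drops the outer integral over Ai.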
63
The Nusselt Analog Direct integration of the basic form factor equation is difficult even for simple surfaces. Nusselt developed a geometric analog which allows the simple and accurate calculation of the form factor between a surface and a point on a second surface. The "Nusselt analog" involves placing a hemispherical projection body, with unit radius, at a point on a surface. The second surface is spherically projected onto the projection body, then cylindrically projected onto the base of the hemisphere. The form factor is, then, the area projected on the base of the hemisphere divided by the area of the base of the hemisphere.
64
The Nusselt Analog
65
The Hemicube Approximation
In the "hemicube" form factor calculation, the center of a cube is placed at a point on a surface The upper half of the cube is used as a projection body as defined by the "Nusselt analog." Each face of the hemicube is subdivided into a set of small, usually square ("discrete") areas, each of which has a pre-computed form factor value. When a surface is projected onto the hemicube, the sum of the form factor values of the discrete areas of the hemicube faces which are covered by the projection of the surface is the form factor between the point on the first surface (about which the cube is placed) and the second surface (the one which was projected).
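For a hemicube of unit half-width, the pre-computed delta form factor of a cell of area ΔA follows directly from the form factor equation:

    top face, cell at (x, y, 1):    ΔF = ΔA / ( π (x² + y² + 1)² )
    side face, cell at (1, y, z):   ΔF = z * ΔA / ( π (y² + z² + 1)² )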
66
The Hemicube Approximation
The speed and accuracy of this method of form factor calculation can be affected by changing the size and number of discrete areas on the faces of the hemicube.
67
The Hemicube Analog This illustration demonstrates the calculation of form factors between a particular surface on the wall of a room and several surfaces of objects in the room.
68
The Hemicube Analog A radiosity algorithm will compute the form factors from a point on a surface to all other surfaces, by projecting all other surfaces onto the hemicube and storing, at each discrete area, the identifying index of the surface that is closest to the point. When all surfaces have been projected onto the hemicube, the discrete areas contain the indices of the surfaces which are ultimately visible to the point. From there the form factors between the point and the surfaces are calculated. For greater accuracy, a large surface would typically be broken into a set of small surfaces before any form factor calculation is performed.
69
Radiosity form factor learning tools
Brown University Exploratories project: Radiosity Form Factor: This applet shows how the form factor between two patches varies in magnitude depending on the relative geometry of the patches.
70
The Full Matrix Radiosity Algorithm
Two classes of radiosity algorithms have been developed which will calculate the energy equilibrium solution in an environment. The "full matrix" radiosity solution calculates the form factors between each pair of surfaces in the environment, then forms a series of simultaneous linear equations. This matrix equation is solved for the "B" values, which can be used as the final intensity (or color) value of each surface.
71
The Full Matrix Radiosity Algorithm
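Written out per surface, the system is

    Bi − ρi * Σj ( Fij * Bj ) = Ei,   for i = 1 … n

i.e. the linear system (I − diag(ρ) F) B = E, which is solved (for example by Gauss-Seidel iteration) for the radiosity vector B.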
72
The Full Matrix Radiosity Algorithm
This method produces a complete solution, at the substantial cost of first calculating the form factors between each pair of surfaces and then solving the matrix equation. Each of these steps can be quite expensive if the number of surfaces is large: complex environments typically have upwards of ten thousand surfaces, and environments with one million surfaces are not uncommon. This leads to substantial costs not only in computation time but in storage.
73
The Progressive Radiosity Algorithm
The "progressive" radiosity solution is an incremental method, yielding intermediate results at much lower computation and storage costs. Each iteration of the algorithm requires the calculation of form factors between a point on a single surface and all other surfaces, rather than all N-squared form factors (where "N" is the number of surfaces in the environment). After the form factor calculation, radiosity values for the surfaces of the environment are updated.
74
The Progressive Radiosity Algorithm
This method will eventually produce the same complete solution as the "full matrix" method, though, unlike the "full matrix" method, it will also produce intermediate results, each more accurate than the last. It can be halted when the desired approximation is reached. It also exacts no large (again, N-squared) storage cost.
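As an illustration only (a minimal C++ sketch, not the slides' implementation), one "shooting and sorting" iteration might look like the following; it assumes the form factors F[i][j] from patch i to patch j are available and uses a single scalar channel instead of RGB:

#include <cstddef>
#include <vector>

// Hypothetical patch record: emission E, reflectivity rho, area A,
// accumulated radiosity B, and unshot radiosity dB
// (B and dB should be initialized to E before the first iteration).
struct Patch {
    double E = 0.0, rho = 0.0, A = 1.0;
    double B = 0.0, dB = 0.0;
};

// One progressive iteration: pick the patch with the greatest unshot energy
// and distribute ("shoot") that energy to every other patch.
void shootIteration(std::vector<Patch>& patches,
                    const std::vector<std::vector<double>>& F) {
    // The "sorting" step: find the patch with the most unshot energy (radiosity * area).
    std::size_t s = 0;
    for (std::size_t i = 1; i < patches.size(); ++i)
        if (patches[i].dB * patches[i].A > patches[s].dB * patches[s].A) s = i;

    double unshot = patches[s].dB;
    patches[s].dB = 0.0;                              // this energy has now been shot

    for (std::size_t j = 0; j < patches.size(); ++j) {
        if (j == s) continue;
        // Radiosity received by patch j, attenuated by the form factor,
        // the area ratio, and patch j's reflectivity.
        double dRad = patches[j].rho * F[s][j] * unshot * patches[s].A / patches[j].A;
        patches[j].B  += dRad;                        // update the displayed solution
        patches[j].dB += dRad;                        // and j's unshot portion
    }
}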
75
The Progressive Radiosity Algorithm
76
Progressive Radiosity Example
77
Progressive Radiosity Variants
Several variations on the basic progressive radiosity algorithm have been developed. “Gathering” variant “Shooting” variant The "gathering" variant updates one surface by collecting light energy from all other surfaces. In this variant, as well as the "shooting" variant, the "base" surface is arbitrarily chosen.
78
Progressive Radiosity Variants
The "shooting" variant updates all surfaces by distributing light energy from one surface The "shooting and sorting" variant chooses the surface with the greatest unshot light energy and distributes its light energy to the surfaces in the environment. The "shooting and sorting" method is the most desirable, as it finds the surface with the greatest potential contribution to the intensity solution and updates all other surfaces in the environment with its energy. In addition to these, an initial "ambient" term can be approximated for the environment and adjusted at each iteration, gradually replaced by the true ambient contribution to the rendered image.
79
Radiosity Learning Tool
Brown University Exploratories project: Radiosity Shooting vs Gathering This applet shows the graphic difference between two radiosity algorithms: shooting and gathering. The algorithms are animated for each iteration of the algorithms and the resulting visualization of a discretized scene and of its form factor matrix can be viewed as the animation progresses.
80
Comparison of Progressive Variants
[Images compare four variants: shooting; gathering; shooting & sorting; shooting & sorting & ambient]
81
The Two-Pass Radiosity Solution
View independent, global diffuse illumination computed with radiosity pre-process. View dependent, global specular illumination computed with ray tracing post-process. Combining the strengths of radiosity and ray tracing achieves a more accurate and efficient solution to the problem of light transport.
82
Participating Media Another variation on the basic diffuse radiosity solution adds the contribution of light passing through a participating medium, such as smoke, fog, or water vapor in the air, creating volumetric lighting effects. In this algorithm, light energy is sent through a three-dimensional volume representing the participating medium, which both attenuates the light energy and adds to the intensity solution through illumination of the participating medium.
83
Participating Media
84
Advantages of Radiosity
Photorealistic image quality Accurate simulation of energy transfer Can simulate color bleeding. Light reflected from a surface is attenuated by the reflectivity of the surface, which is closely associated with the color of the surface. The reflected light energy often is colored, to some small extent, by the color of the surface from which it was reflected.
85
Advantages of Radiosity
The reflection of light energy in an environment often produces a phenomenon known as "color bleeding," where a brightly colored surface's color will "bleed" onto adjacent surfaces. The image in this slide illustrates this phenomenon, as both the red and blue walls "bleed" their color onto the white walls, ceiling and floor.
86
Radiosity vs. Ray Tracing
Simulate soft shadows: shadows with soft edges Ray tracing often creates hard-edged shadows Viewpoint independent: the solution will be the same regardless of the viewpoint of the image Ray tracing is viewpoint dependent Radiosity is an object space algorithm while ray tracing is an image space algorithm
87
Limitations of Radiosity
Large computational and storage costs Must preprocess polygonal environments Difficult to compute form factor Does not consider transparent/translucent surfaces Non-diffuse components of light not represented Specular highlights hard to achieve
88
Radiosity Examples
89
Radiosity Examples Over 1 million polygons
90
Radiosity Examples
91
Radiosity tools Pixar’s PRMan Radiosity is supported in POV-Ray
92
Summary What is radiosity? Radiosity process
Form factor Solve radiosity Advantages and limitations of radiosity