
1 Chapter X Advanced Texturing

2 Environment Mapping Environment mapping simulates a shiny object reflecting its surrounding environment. Remember the morphing cyborg in T2!!

3 Environment Mapping (cont’d)
The first task for environment mapping is to capture the environment images in a texture called an environment map. The most popular implementation uses a cube map and is therefore called cube mapping.

4 Cube Mapping To determine the reflected color at p, a ray is fired from the viewpoint toward p. The ray, denoted by I, is reflected with respect to the surface normal n at p to produce the reflection vector R. The HLSL library function texCUBE() takes a cube map and R as input, and returns a color.
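A minimal pixel-shader sketch of this lookup, written in Direct3D 9-style HLSL; the variable names such as gEnvMap and gEyePosW are illustrative, not from the slides:

```hlsl
samplerCUBE gEnvMap;     // cube map capturing the environment
float3 gEyePosW;         // camera position in world space

float4 PS(float3 posW    : TEXCOORD0,   // surface point p (world space)
          float3 normalW : TEXCOORD1)   // surface normal n at p
          : COLOR
{
    float3 I = normalize(posW - gEyePosW);      // ray from the viewpoint toward p
    float3 R = reflect(I, normalize(normalW));  // I reflected about n
    return texCUBE(gEnvMap, R);                 // color fetched from the cube map along R
}
```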

5 Cube Mapping (cont’d) The cube map faces are named {+x, -x, +y, -y, +z, -z}.

6 Cube Mapping (cont’d) The cube map face intersected by R is identified using R's coordinate that has the largest absolute value. The remaining coordinates are divided by the largest absolute value, and then range-converted from [-1,1] to [0,1].
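An illustrative sketch of this computation for the +x face; it is normally done by the hardware, and the exact sign conventions vary across APIs, so the ones below are only one possibility:

```hlsl
// Assume R = (rx, ry, rz) with |rx| the largest absolute coordinate and rx > 0,
// so the +x face is intersected.
float2 CubeFaceUV_PosX(float3 R)
{
    float ma = abs(R.x);                 // largest absolute coordinate
    float u  = -R.z / ma;                // remaining coordinates divided by |ma| ...
    float v  = -R.y / ma;
    return float2(u, v) * 0.5f + 0.5f;   // ... then range-converted from [-1,1] to [0,1]
}
```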

7 Cube Mapping (cont’d) Strictly speaking, we should sample the texel (u,v) that the ray fired from p along R actually hits, but the cube-map lookup interprets R as emanating from the cube's center and therefore returns (u',v').
Note that (u,v) and (u',v') produce the same filtering result only when the environment is infinitely far away from p. Fortunately, people are fairly forgiving about the physical incorrectness that results when the environment is not far away.

8 Dynamic Cube Mapping A cube map is created at run time, and then immediately used for environment mapping. It is a two-pass algorithm:
1st pass: The geometry shader replicates the incoming primitive into six separate primitives, and each primitive is rasterized on its own render target, which refers to a general off-screen buffer. Six images are generated. For this pass, multiple render targets (MRT) enable the rendering pipeline to produce images on multiple render-target textures at once. A sketch of such a geometry shader is given below.
2nd pass: The scene is rendered using the cube map.
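A minimal geometry-shader sketch for the first pass, assuming the cube map is bound as a render-target array of six faces and the six per-face view-projection matrices are provided in a constant buffer; names such as gCubeViewProj are illustrative:

```hlsl
cbuffer cbPerCube
{
    float4x4 gCubeViewProj[6];   // view-projection matrix for each cube face
};

struct GS_IN  { float4 posW : POSITION; };            // world-space position from the VS
struct GS_OUT { float4 posH : SV_POSITION;            // clip-space position
                uint   face : SV_RenderTargetArrayIndex; };  // selects the render target

[maxvertexcount(18)]                                   // 6 faces x 3 vertices
void GS(triangle GS_IN input[3], inout TriangleStream<GS_OUT> stream)
{
    for (uint f = 0; f < 6; ++f)                       // replicate the primitive six times
    {
        for (uint v = 0; v < 3; ++v)
        {
            GS_OUT o;
            o.posH = mul(input[v].posW, gCubeViewProj[f]);
            o.face = f;                                // rasterize into the f-th face
            stream.Append(o);
        }
        stream.RestartStrip();
    }
}
```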

9 Light Mapping Suppose a character navigates a static environment illuminated by static light sources. Even though the viewpoint moves, the diffuse reflection remains constant over the environment surfaces. Then, we can pre-compute part of the diffuse reflection ahead of time, store the result in a light map, and look up the light map at run time.

10 Light Mapping (cont’d)
When creating a light map, there is no real-time constraint. Therefore, the light map is computed using a global illumination algorithm, typically the radiosity algorithm. The light map is combined with the image texture at run time, i.e., the diffuse reflectance is read from the image texture, the incoming light is read from the light map, and they are multiplied.
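A minimal sketch of this run-time combination; the texture and sampler names are illustrative:

```hlsl
Texture2D    gDiffuseTex;   // image texture: diffuse reflectance
Texture2D    gLightMap;     // pre-computed incoming light (radiosity result)
SamplerState gLinear;

float4 PS(float2 uv : TEXCOORD0, float2 uvLight : TEXCOORD1) : SV_Target
{
    float3 reflectance = gDiffuseTex.Sample(gLinear, uv).rgb;
    float3 incoming    = gLightMap.Sample(gLinear, uvLight).rgb;
    return float4(reflectance * incoming, 1.0f);   // diffuse reflection = reflectance x light
}
```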

11 Light Mapping (cont’d)
Quake II, released in 1997, was among the first commercial games to use light maps.

12 Radiosity Normal Mapping
Radiosity computation adopts the concept of a hemisphere whose orientation is defined by the surface normal np. See below. Now consider combining light mapping and normal mapping. The normal n(u,v) fetched from the normal map is a perturbed instance of np. If lighting were computed on the fly, n(u,v) would be used. However, the light map that is going to partly replace the lighting computation was created using the unperturbed normal np. A simple solution to this discrepancy would be to use the perturbed normals stored in the normal map when the light map is created. Then, however, the light map would have the same resolution as the normal map, which is unnecessarily large.

13 Radiosity Normal Mapping (cont’d)
In the solution of Half-Life 2, three vectors, v0, v1, and v2, which form an orthonormal basis, are pre-defined in the tangent space. Each vi is transformed into the world space, a hemisphere is placed, and the radiosity preprocessor is run. Then, incoming light is computed per vi, and three different colors are computed for p. If we repeat this for all sampled points of the object surface, we obtain three light maps for the object. Each light map is often called a directional light map.

14 Radiosity Normal Mapping (cont’d)
At run time, the directional light maps are blended. The normal fetched from the normal map, (nx,ny,nz), is a tangent-space normal and can be redefined in terms of the basis {v0,v1,v2}. Let (n'x,n'y,n'z) denote the transformed normal. Its components are the coordinates with respect to the basis {v0,v1,v2}, and they are used for computing the weights needed for blending the three light maps.
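A hedged sketch of this blend; the basis constants follow one commonly cited form of the Half-Life 2 tangent-space basis, and the squared-weight scheme is only one variant, so both are assumptions rather than the exact implementation:

```hlsl
// Approximate tangent-space basis (assumed values, ~(-1/sqrt(6), 1/sqrt(2), 1/sqrt(3)) etc.)
static const float3 gBasis[3] = {
    float3(-0.40825f,  0.70711f, 0.57735f),   // v0
    float3(-0.40825f, -0.70711f, 0.57735f),   // v1
    float3( 0.81650f,  0.0f,     0.57735f)    // v2
};

float3 BlendDirectionalLightMaps(float3 n,                    // tangent-space normal from the normal map
                                 float3 light0, float3 light1, float3 light2) // directional light map samples
{
    // Coordinates of n with respect to {v0, v1, v2} serve as blending weights.
    float3 w = float3(dot(n, gBasis[0]), dot(n, gBasis[1]), dot(n, gBasis[2]));
    w  = max(w, 0.0f);
    w *= w;                                   // squaring sharpens the weights (assumed variant)
    w /= (w.x + w.y + w.z + 1e-6f);           // normalize so the weights sum to 1
    return w.x * light0 + w.y * light1 + w.z * light2;
}
```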

15 Radiosity Normal Mapping (cont’d)

16 Radiosity Normal Mapping (cont’d)
Compare the resolutions of the directional light maps and the normal map. In (a), the macrostructure is densely sampled for creating the normal map. In (b), the macrostructure is sparsely sampled for creating the directional light maps. When two fragments, f1 and f2, are processed at run time, the directional light maps provide appropriate colors for both f1 and f2. In (c), a traditional light map with the same resolution as the directional light maps would provide an incorrect light color for f2.

17 Why Shadows? Shadows help us understand the spatial relationships among objects in a scene, especially between the occluders and receivers. (Occluders cast shadows onto receivers.)

18 Why Shadows? (cont’d) Shadows increase the realism.

19 Terminologies in Shadows
A point light source generates only fully shadowed regions, called hard shadows. An area light source generates soft shadows, which consist of
Umbra: fully shadowed regions
Penumbra: partially shadowed regions
This chapter focuses on hard shadows even though many real-time algorithms for soft-shadow generation are available.

20 Shadow Mapping Two-pass algorithm Pass 1
Render the scene from the position of the light source. Store the depths into the shadow map, which is a depth map with respect to the light source.

21 Shadow Mapping (cont’d)
Pass 2: Render the scene from the camera position; this is the real rendering pass. For each pixel, compare its distance d to the light source with the depth value z stored in the shadow map. If d > z, the pixel is in shadow. Otherwise, the pixel is lit.
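A minimal sketch of the second-pass comparison, assuming the fragment's light-space clip coordinates are passed in from the vertex shader; the texture, sampler, and variable names are illustrative:

```hlsl
Texture2D    gShadowMap;      // depth map rendered from the light source (pass 1)
SamplerState gPointSampler;   // nearest-point sampling

float ShadowTest(float4 posLight)   // fragment position in the light's clip space
{
    float3 proj = posLight.xyz / posLight.w;            // perspective division
    float2 uv   = proj.xy * float2(0.5f, -0.5f) + 0.5f; // clip space -> texture space
    float  d    = proj.z;                               // distance d to the light source
    float  z    = gShadowMap.Sample(gPointSampler, uv).r; // stored depth z
    // A small bias (discussed in the following slides) is normally added to avoid surface acne.
    return (d > z) ? 0.0f : 1.0f;                       // 0: in shadow, 1: lit
}
```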

22 Shadow Mapping Artifact – Surface Acne
The algorithm works, but its brute-force implementation suffers from two major problems:
Surface acne: a mixture of shadowed and lit areas
Shadow-map resolution sensitivity

23 Shadow Mapping Artifact – Surface Acne (cont’d)
What is the problem? Shadowed and lit pixels coexist on a surface area that should be entirely lit. Note that the scene points sampled at the second pass are usually different from the scene points sampled at the first pass. Suppose nearest-point sampling. In the example, p1 should be lit, which requires d1 ≤ z1, but it is judged to be in shadow because d1 > z1.

24 Shadow Mapping Artifact – Surface Acne (cont’d)
At the second pass, add a small bias value to z1, obtaining z1´, such that d1 < z1´.

25 Shadow Mapping Artifact – Surface Acne (cont’d)
The bias value is usually fixed after a few trials. With an appropriate bias, the surface acne problem is largely resolved.

26 Shadow Mapping Artifact – Resolution Sensitivity
If the resolution of the shadow map is not high enough, multiple pixels may be mapped to a single texel of the shadow map. This is a magnification case, but bilinear interpolation would not help, as you will see shortly.

27 Shadow Mapping Artifact – Resolution Sensitivity (cont’d)
A simple but expensive solution is to increase the shadow map resolution. Many other solutions have been proposed.

28 Shadow Mapping – Shader
After every vertex is transformed into the world space, the view parameters are specified with respect to the light source. Consequently, the view-transformed vertices are defined in the so-called light space. The view frustum is then specified in the light space.
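A minimal first-pass vertex shader sketch under this setup; the matrix and constant-buffer names such as gLightViewProj are illustrative:

```hlsl
cbuffer cbLight
{
    float4x4 gWorld;          // object space -> world space
    float4x4 gLightViewProj;  // world space -> light space -> light's clip space
};

float4 VS(float3 posL : POSITION) : SV_POSITION
{
    float4 posW = mul(float4(posL, 1.0f), gWorld);   // world-space vertex
    return mul(posW, gLightViewProj);                // light-space view + projection
    // No color output is needed; only the depths are written into the shadow map.
}
```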

29 Shadow Mapping – Shader (cont’d)

30 Shadow Mapping – Shader (cont’d)

31 Shadow Mapping – Shader (cont’d)

32 Shadow Mapping – Filtering
Recall that, for filtering the shadow map, we assumed nearest-point sampling. An alternative would be bilinear interpolation. The problems of bilinear interpolation:
It is not good to have completely different results depending on the filtering option.
The shadow quality is not improved at all; the shadow still reveals jagged edges.

33 Shadow Mapping – Filtering (cont’d)
A solution to this problem is to first determine the visibilities of the pixel with respect to the four texels, and then interpolate the visibilities. This value is taken as the “degree of being lit.” In general, the technique of taking multiple texels from the shadow map and blending the pixel's visibilities computed against those texels is called percentage closer filtering (PCF).

34 Shadow Mapping – Filtering (cont’d)
Direct3D 10 supports PCF through a new function SampleCmpLevelZero(). In the example, the visibility is computed for each of nine texels, and the visibilities are averaged.
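A hedged 3×3 PCF sketch built around SampleCmpLevelZero(); the kernel size, texel-offset scheme, and names are illustrative:

```hlsl
Texture2D               gShadowMap;
SamplerComparisonState  gCmpSampler;   // comparison sampler (e.g., LESS_EQUAL)

float PCF3x3(float2 uv, float d, float2 texelSize)
{
    float lit = 0.0f;
    [unroll]
    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x)
        {
            // Each call compares d against one shadow-map texel and returns the
            // visibility (0 or 1, or an in-between value with hardware filtering).
            lit += gShadowMap.SampleCmpLevelZero(gCmpSampler,
                                                 uv + float2(x, y) * texelSize, d);
        }
    return lit / 9.0f;    // average of the nine visibilities
}
```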

35 Ambient Occlusion We have assumed that the ambient light arrives at a surface point from all directions. In reality, however, some of those directions may be occluded by the external environment. The ambient occlusion algorithm computes how much of the ambient light is occluded, which we call the occlusion degree. Imagine casting rays along the opposite directions of the incident light. Some rays intersect the scene geometry. The occlusion degree is defined as the ratio of the intersected rays to all cast rays. However, this ray casting is expensive.

36 Screen Space Ambient Occlusion
The occlusion degree can be approximated as the ratio between the portion of the hemisphere occupied by the scene geometry and the entire hemisphere. Instead of computing these volumes, let's take a set of samples and then determine whether each sample is inside or outside the geometry.

37 Screen Space Ambient Occlusion (cont’d)
Take the z-buffer of the scene (seen from the camera) as a discrete approximation of the scene geometry. It is a depth map that can be created using the algorithm presented for the first pass of shadow mapping. At the second pass, a quad covering the entire screen (more precisely, the viewport) is rendered such that each fragment of the quad corresponds to a pixel of the z-buffer. We then perform a comparison similar to the one used in shadow mapping. In the example, s1 is outside because d1 < z1, whereas s2 is inside because d2 > z2. The occlusion degree is 3/8. Each occlusion degree can be used to modulate sa in the ambient reflection term.
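A heavily simplified sketch of the per-fragment test, assuming the hemisphere samples are already given in the camera (view) space and that the depth map stores normalized depths comparable to the projected sample depth; all names are illustrative:

```hlsl
Texture2D    gDepthMap;               // z-buffer of the scene seen from the camera
SamplerState gPointSampler;
static const uint NUM_SAMPLES = 8;

float OcclusionDegree(float3 samplesV[NUM_SAMPLES],   // hemisphere samples around the fragment (view space)
                      float4x4 proj)                  // camera projection matrix
{
    uint inside = 0;
    for (uint i = 0; i < NUM_SAMPLES; ++i)
    {
        float4 s  = mul(float4(samplesV[i], 1.0f), proj);    // project the sample
        float2 uv = (s.xy / s.w) * float2(0.5f, -0.5f) + 0.5f;
        float  d  = s.z / s.w;                               // sample's depth
        float  z  = gDepthMap.Sample(gPointSampler, uv).r;   // scene depth at that pixel
        if (d > z) ++inside;                                 // sample lies behind the scene geometry
    }
    return (float)inside / NUM_SAMPLES;   // occlusion degree, e.g., 3/8 in the slide's example
}
```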

38 Screen Space Ambient Occlusion (cont’d)
The result is impressive. However, the approximation introduces errors.

39 Screen Space Ambient Occlusion (cont’d)
Gears of War example. Lighting is one of the elements most influenced by GPU evolution. The state of the art in real-time lighting has been gradually moving away from the classic implementation of the Phong model.

40 Deferred Shading One of the most notable techniques that extensively utilize MRT is deferred shading. This technique does not shade a fragment that will fail the depth test, i.e., shading is deferred until all visible surfaces of the scene are determined. It is split into a geometry pass and lighting passes. At the geometry pass, various per-pixel attributes are computed and stored into the MRT by a fragment shader. The per-pixel attributes may include texture colors, depths, and normals of the visible surfaces.

41 Deferred Shading (cont’d)
The MRT filled at the geometry pass is often called the G-buffer, which stands for geometric buffer. It typically contains:
Screen-space image texture
Depth map
Screen-space normal map
Shading is then performed using the G-buffer.

42 Deferred Shading (cont’d)
At the lighting passes, a large number of light sources can be applied to the G-buffer. All light sources are iterated. For each light source, a full-screen quad is rendered (as was done by the SSAO algorithm) such that each fragment of the quad corresponds to a pixel on the screen. The fragment shader takes the G-buffer as input, computes lighting, and determines the color of the current pixel. This color is blended with the current content of the color buffer, which was computed using the previous light sources.
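A minimal sketch of the per-light fragment shader run over the full-screen quad; the G-buffer layout, the directional-light simplification, and all names are assumptions for illustration. The output would be additively blended with the color buffer via the blend state:

```hlsl
Texture2D    gAlbedoMap;    // screen-space image texture (diffuse reflectance)
Texture2D    gNormalMap;    // screen-space normal map
Texture2D    gDepthMap;     // depth map
SamplerState gPointSampler;

cbuffer cbLight
{
    float3 gLightDirW;      // one directional light per lighting pass
    float3 gLightColor;
};

float4 PS(float2 uv : TEXCOORD0) : SV_Target
{
    float3 albedo = gAlbedoMap.Sample(gPointSampler, uv).rgb;
    float3 n      = normalize(gNormalMap.Sample(gPointSampler, uv).xyz * 2.0f - 1.0f);
    // The depth could be used to reconstruct the world-space position for point lights;
    // a directional light keeps this sketch short.
    float ndotl = max(dot(n, -gLightDirW), 0.0f);          // diffuse term
    return float4(albedo * gLightColor * ndotl, 1.0f);     // added to the previous lights' result
}
```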

