1
Polygon Rendering Techniques
Swapping the Loops
2
Review: Ray Tracing Pipeline
Ray tracer produces visible samples of the scene; samples are convolved with a filter to form the pixel image.
Ray tracing pipeline:
Preprocessing (scene graph): traverse the scene graph, accumulate the CTM, and spatially organize objects into an Acceleration Data Structure (ADS) suitable for ray tracing
Ray tracing: generate a ray for each sample; traverse the ADS from nearest cell to farthest cell, intersecting the objects in each cell with the ray and returning the nearest intersection in the cell (otherwise continue to the next cell); evaluate the lighting equation at the sample; generate reflection rays
Post-processing: convolve all samples with the filter to produce the final image pixels
3
Rendering Polygons
Often need to render triangle meshes
Ray tracing works well for implicitly defined surfaces. Many existing models and modeling apps are based on polygon meshes – can we render them by ray tracing the polygons?
- Easy to do: ray–polygon intersection is a simple calculation (use barycentric coordinates)
- Very inefficient: it is common for an object to have thousands of triangles, and for a scene to have hundreds of thousands or even millions of triangles – each needs to be considered in intersection tests
The traditional hardware pipeline is more efficient for many triangles:
- Process the scene polygon by polygon, using a “z-buffer” to determine visibility
- Local illumination model
- Use crude interpolation shading approximations to compute the color and/or uv of most pixels – fine for small triangles
4
Traditional Fixed Function Pipeline (disappearing)
Polygons (usually triangles) approximate the desired geometry.
One polygon at a time: each vertex is transformed into screen space and the illumination model is evaluated at each vertex. No global illumination (except through the ambient-term hack).
One pixel of the polygon at a time (the pipeline is highly simplified): depth is compared against the z-buffer; if the pixel is visible, its color value is computed using either linear interpolation between vertex color values (Gouraud shading) or a local lighting model at each pixel (e.g., Phong shading using interpolated normals and various types of hardware-assisted maps).
Geometry processing (world coordinates):
- Selective traversal of the acceleration data structure (or traverse the scene graph) to get the CTM
- Transform vertices to the canonical view volume
- Trivial accept/reject and back-face culling
- Lighting: calculate light intensity at vertices (lighting model of choice)
- View volume clipping
Rendering / pixel processing (screen coordinates):
- Image-precision VSD: compare pixel depth (z-buffer)
- Shading: interpolate vertex color values (Gouraud) or normals (Phong)
5
Visible Surface Determination - review
Back-face culling: often we don’t need to render triangles “facing away” from the camera – skip a triangle if its screen-space vertex positions aren’t in counter-clockwise order (a sketch of the test follows below)
View volume clipping: if a triangle lies entirely outside the view volume, there is no need to render it. Some special cases require polygon clipping: large triangles, and triangles that intersect faces of the view volume
Occlusion culling: triangles that are behind other opaque triangles don’t need to be rendered
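A minimal GLSL sketch of the winding-order test (the fixed-function pipeline and glCullFace do this for you; the function and parameter names here are illustrative, not from the course code):

// Back-face test on projected, screen-space vertices a, b, c.
// With a counter-clockwise front-face convention, a non-positive signed
// area means the triangle faces away from the camera and can be skipped.
bool isBackFacing(vec2 a, vec2 b, vec2 c) {
    float twiceSignedArea = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    return twiceSignedArea <= 0.0;
}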
6
Programmable Shader-Based Pipeline
Allows the programmer to fine-tune and optimize various stages of the pipeline
This is what we have used in labs and projects this semester
Shaders are fast! (They use multiple ultra-fast ALUs – Arithmetic Logic Units.) Shaders can do all the techniques mentioned in this lecture in real time
For example, physical phenomena such as shadows and light refraction can be emulated using shaders
The image on the right is rendered with a custom shader. Note that only the right image has realistic shadows. A normal map is also used to add detail to the model.
7
Classical Shading Models Review (1/5)
Constant shading: no interpolation – pick a single representative intensity and propagate it over the entire object. Loses almost all depth cues.
Pixar “Shutterbug” images from:
8
Classical Shading Models Review (2/5)
Faceted shading: a single illumination value per polygon. With many polygons approximating a curved surface, this creates an undesirable faceted look. Facets are exaggerated by the “Mach banding” effect.
9
Classical Shading Models Review (3/5)
Gouraud shading: linear interpolation of intensity across triangles to eliminate edge discontinuities
- Eliminates intensity discontinuities at polygon edges; gradient discontinuities remain
- Mach banding is largely ameliorated, not eliminated
- Must differentiate desired creases from tessellation artifacts (edges of a cube vs. edges on a tessellated sphere)
- Can miss specular highlights between vertices because it interpolates vertex colors instead of calculating intensity directly at each point
10
Classical Shading Models Review (4/5)
Gouraud shading can miss specular highlights because it interpolates vertex colors instead of calculating intensity directly at each point: N_a and N_b have small specular components, whereas N_c’s specular component is large when the view ray aligns with the reflection ray. Interpolating between I_a and I_b misses the highlight that evaluating I at c using N_c would catch.
Phong shading (interpolating vertex normals) is better than Gouraud: the interpolated normal comes close to the actual normal of the true surface at a given point, and it reduces the temporal “jumping” of a highlight, e.g., when animating a rotating sphere (example on the next slide).
11
Classical Shading Models Review (5/5)
Phong shading: linear interpolation of normal vectors instead of color values
- Always captures the specular highlight, but is more computationally expensive
- At each pixel, the interpolated normal is re-normalized and used to evaluate the lighting model
- Looks much better than Gouraud, but still no global effects
(Figure: Gouraud vs. Phong comparison)
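A minimal GLSL fragment-shader sketch of per-pixel (Phong) shading; uniform and varying names are illustrative, not from the course code. Gouraud shading would instead evaluate the same sum in the vertex shader and interpolate the resulting color:

#version 330 core
in vec3 worldNormal;          // interpolated vertex normal
in vec3 worldPosition;
uniform vec3 lightPos, eyePos;
uniform vec3 ka, kd, ks;      // ambient, diffuse, specular coefficients
uniform float shininess;
out vec4 fragColor;

void main() {
    vec3 N = normalize(worldNormal);              // re-normalize after interpolation
    vec3 L = normalize(lightPos - worldPosition);
    vec3 V = normalize(eyePos - worldPosition);
    vec3 R = reflect(-L, N);                      // reflection of the light direction
    vec3 color = ka
               + kd * max(dot(N, L), 0.0)
               + ks * pow(max(dot(R, V), 0.0), shininess);
    fragColor = vec4(color, 1.0);
}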
12
More Sophisticated Techniques
What we have so far doesn’t cover most scenes – we use hacks to approximate the desired features
Representing complicated objects: complicated color and geometry – texture mapping, normal and bump mapping, etc.
Hacks for global illumination effects: reflections, shadows – environment mapping, shadow mapping and stencil shadow volumes
13
Maps Overview
Originally designed for offline rendering to improve quality; now used in shader programming to provide real-time quality improvements
Maps can contain many different kinds of information. The simplest kind contains color information, i.e., a texture map; maps can also contain depth, normal vectors, specularity, etc. (examples coming…)
Can use multiple maps in combination (e.g., use a texture map to determine object color, then a normal map to perform the lighting calculation, etc.)
Indexing into maps – what happens between pixels in the original image? Depends! OpenGL allows you to specify the filtering method:
- GL_NEAREST: returns the nearest pixel
- GL_LINEAR: triangle (linear) filtering of the nearest pixels
14
Mipmapping
Choosing a texture map resolution: we want high-resolution textures for nearby objects, but high-resolution textures are inefficient for distant objects
Simple idea: mipmapping (MIP: multum in parvo, “much in little”)
- Maintain multiple texture maps at different resolutions (typically powers of 2); use lower-resolution textures for objects farther away
- An example of the “level of detail” (LOD) management common in CG
- Reduces aliasing artifacts from down- and up-sampling
(Figure: mipmap for a brick texture)
Switching between mipmap levels can be a bit jarring, so mipmapping is often combined with trilinear filtering.
15
Trilinear and Anisotropic Filtering (1/3)
But… transitions between levels in mipmaps can be jarring. Solution? Filter!
Trilinear filtering: suppose the mipmap levels correspond to a wall at 2 ft, 4 ft, etc. To map a wall at, say, 3.5 ft, select the corresponding sample points for filtering in the two bracketing maps (2 ft and 4 ft) and blend them using linear interpolation (a sketch follows below)
Doesn’t compensate for perspective distortion and the higher spatial frequencies it introduces
(Filter comparison figure: bilinear vs. trilinear vs. anisotropic – note the blue wall line receding in the distance)
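What trilinear filtering does, written out by hand as a GLSL sketch (normally you just request GL_LINEAR_MIPMAP_LINEAR and the hardware picks the level of detail from screen-space derivatives; the sampler and parameter names are illustrative):

uniform sampler2D brickTexture;

vec4 trilinearSample(vec2 uv, float lod) {
    float lower = floor(lod);                       // e.g., the "2 ft" map
    float upper = lower + 1.0;                      // e.g., the "4 ft" map
    vec4 a = textureLod(brickTexture, uv, lower);   // bilinear sample in each level
    vec4 b = textureLod(brickTexture, uv, upper);
    return mix(a, b, fract(lod));                   // linear blend between the two levels
}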
16
Trilinear and Anisotropic Filtering (2/3)
Mipmapping assumes we are viewing the texture head-on (isotropic sampling). What happens to our rectangular texture as we rotate it from the xy plane (a vertical wall) into xz (the ground plane)? The rectangle becomes a trapezoid, and spatial frequencies increase with z!
Anisotropic filtering deals with this perspective distortion and the higher spatial frequencies by sampling at a higher rate (in modern hardware, a maximum of 8 or 16 samples per pixel is common)
The best part? Mipmapping and anisotropic filtering take only a few lines to enable (with the texture bound to GL_TEXTURE_2D):
glBindTexture(GL_TEXTURE_2D, m_texture);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, numMaxSamples);
Please use this simple code for final projects in TextureParameters.cpp!
17
Trilinear and Anisotropic Filtering (3/3)
18
Shadows (1/5) – Simplest Hack
Render each object twice:
- First pass: render normally
- Second pass: use transformations to project the object onto the ground plane, then render it completely black (a sketch of the projection follows below)
Pros: easy, and can be convincing – the eye is more sensitive to the presence of a shadow than to the shadow’s exact shape
Cons: becomes a complex computational-geometry problem in anything but the simplest case
- Easy: projecting onto a flat, infinite ground plane
- Hard(er): how would you implement this for projection onto stairs? Rolling hills?
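A GLSL sketch of the flat-ground case, assuming the ground plane is y = 0 and the point light sits above it at L (names are illustrative, not the course API):

// Project a world-space point P onto the plane y = 0 along the ray from the light L.
// Rendering the projected geometry in black gives the hacked shadow.
vec3 projectToGround(vec3 P, vec3 L) {
    float t = L.y / (L.y - P.y);   // parameter where the ray L + t*(P - L) reaches y = 0
    return L + t * (P - L);        // the "shadow" position on the ground plane
}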
19
Shadows (2/5) – More Advanced
For each light L:
  For each point P in the scene:
    If P is in shadow cast by L:   // how to compute?
      Use only indirect lighting (e.g., the ambient term for Phong lighting)
    Else:
      Evaluate the full lighting model (e.g., ambient, diffuse, specular for Phong)
Next: different methods for computing whether P is in shadow cast by L
(Image: stencil shadow volumes implemented by former CS123 TA Dr. Kevin Egan and former PhD student and book co-author Prof. Morgan McGuire, on an nVidia chip)
20
Shadows (3/5) – Stencil Shadow Volumes
For each light + object pair, compute a mesh enclosing the region where the object occludes the light:
- Find the silhouette from the light’s perspective: it includes every edge shared by two triangles such that one triangle faces the light source and the other faces away – the transition from visible to not visible from the light. On a torus, this is where the angle between the normal vector and the vector to the light becomes > 90°
- Project the silhouette along the light rays
- Generate triangles bridging the silhouette and its projection to obtain the shadow volume
A point P is in shadow from light L if any shadow volume V computed for L contains P. This can be determined quickly using multiple passes and a “stencil buffer”
More here on Stencil Buffers, Stencil Shadow Volumes
(Figures: silhouette, projected silhouette, and an example shadow volume – the yellow mesh)
21
Shadows (4/5) – Another Multi-Pass Technique: Shadow Maps
Render the scene using each light as the center of projection, saving only its z-buffer. The resulting 2D images are “shadow maps”, one per light.
Next, render the scene from the camera’s point of view. To determine if a point P on an object is in shadow:
- compute the distance d_P from P to the light source
- convert P from world coordinates to shadow-map coordinates using the viewing and projection matrices used to create the shadow map
- look up the minimum distance d_min in the shadow map
- P is in shadow if d_P > d_min, i.e., it lies behind a closer object
(Figure: shadow map on the right, obtained by rendering from the light’s point of view – darker is closer)
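A GLSL fragment-shader sketch of the lookup, assuming the light's view and projection matrices are available as a single uniform and the shadow map is bound as a depth texture (names are illustrative; the small bias is a common addition, not from the slides, to avoid self-shadowing "acne"):

uniform sampler2D shadowMap;
uniform mat4 lightViewProj;   // same matrices used when rendering the shadow map
uniform float bias;           // small depth offset to avoid self-shadowing

float shadowFactor(vec3 P) {                         // P in world coordinates
    vec4 lightSpace = lightViewProj * vec4(P, 1.0);
    vec3 ndc = lightSpace.xyz / lightSpace.w;        // perspective divide
    vec3 coords = ndc * 0.5 + 0.5;                   // [-1, 1] -> [0, 1] texture coordinates
    float dMin = texture(shadowMap, coords.xy).r;    // closest depth the light saw
    float dP   = coords.z;                           // this point's depth from the light
    return (dP - bias > dMin) ? 0.0 : 1.0;           // 0 = in shadow, 1 = lit
}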
22
Shadows (5/5) – Shadow Map Tradeoffs
Pro: can be extended to support soft shadows (shadows with “soft”, blurred edges); stencil shadow volumes are only useful for hard-edged shadows
Con: a naïve implementation has impressively bad aliasing problems – when the camera is closer to the scene than the light, many screen pixels may be covered by only one shadow-map pixel (e.g., the sun shining on the Eiffel Tower)
Many workarounds for the aliasing issues:
- Percentage-Closer Filtering: for each screen pixel, sample the shadow map in multiple places to see how many are in and out of shadow, then average
- Cascaded Shadow Maps: multiple shadow maps, with higher resolution closer to the viewer (like mipmapping!)
- Variance Shadow Maps: store the average depth and variance at each texel, and use Chebyshev’s inequality to approximate the percentage of samples with greater depth than the current sample (a sketch follows below). Loudon recommends this highly for your final project! He did this in a couple of afternoons using this tutorial
Please, please use Variance over PCF, as it is better in practically every way – and if you do Variance right, you can essentially get soft shadows for free!
(Figures: hard shadows vs. soft shadows; aliasing produced by naïve shadow mapping)
reves/Marc.Stamminger/psm/
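A GLSL sketch of the variance shadow map test using Chebyshev's inequality, assuming the map stores (depth, depth²) per texel so it can be filtered and blurred like an ordinary texture (names are illustrative):

uniform sampler2D momentsMap;   // filtered (E[d], E[d^2]) per texel

float vsmVisibility(vec2 shadowUV, float dP) {
    vec2 m = texture(momentsMap, shadowUV).rg;
    if (dP <= m.x) return 1.0;                      // closer than the mean depth: fully lit
    float variance = max(m.y - m.x * m.x, 1e-5);    // sigma^2 = E[d^2] - E[d]^2
    float diff = dP - m.x;
    return variance / (variance + diff * diff);     // Chebyshev upper bound on the lit fraction
}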
23
Environment Mapping (for specular reflections) (1/2)
Approximate reflections by creating a skybox, a.k.a. environment map, a.k.a. reflection map (see Lab 10!)
A hack to approximate recursive ray tracing: place a highly specular object in the center of the scene and see what the reflection ray intersects in the environment – that’s what the viewer sees
Often represented as the six faces of a cube surrounding the scene; can also be a large sphere surrounding the scene, etc.
To create an environment map, render the entire scene from the center of an object, one face at a time; a single environment map can be used for a cluster of objects
Can do this offline for static geometry, but must generate at runtime for moving objects; rendering the environment map at runtime is expensive (compared to using a pre-computed texture)
Can also use photographic panoramas instead of rendering
(Figure: skybox surrounding an object)
24
Environment Mapping (2/2)
To sample the environment-map reflection at point P (see the sketch below):
- Compute the vector E from P to the eye
- Reflect E about the normal N to obtain R
- Use the direction of R to compute the intersection point with the environment map; treat P as the center of the map, i.e., move the environment map so P is at its center
See more here: _tutorial_chapter07.html
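A GLSL sketch of the lookup, assuming the environment map is bound as a cube map (names are illustrative):

uniform samplerCube skybox;

vec3 environmentReflection(vec3 P, vec3 N, vec3 eye) {
    vec3 E = normalize(eye - P);     // vector from P to the eye
    vec3 R = reflect(-E, N);         // reflect the view direction about the normal
    return texture(skybox, R).rgb;   // treat P as the center of the cube map
}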
25
Overview: Surface Detail
Observation: what if we replaced the 3D sphere on the right with a 2D circle? The circle would have fewer triangles (and thus render faster), and if we kept the sphere’s normals, the circle would still look like a sphere!
This works because the human visual system infers shape from patterns of light and dark regions (“shape from shading”): the brightness at any point is determined by the normal vector, not by the actual geometry of the model
Image credit: Dave Kilian, ’13
26
Idea: Surface Detail
Two ideas: use a texture map either to vary the normals or to vary the mesh geometry itself. First, let’s vary normals:
- Start with a high-poly (high polygon count) model
- Decimate the mesh (remove triangles)
- Encode the high-poly normal information into a texture
- Map the texture (i.e., the normals) onto the low-poly mesh
- In the fragment shader: look up the normal for each texel in the normal texture map; for Phong shading (normal interpolation), calculate the sample’s normal and use a second texture map for color
(Figures: original high-poly model; low-poly model with the high-poly model’s normals preserved)
27
Normal Mapping
Idea: fill a texture map with normals. The fragment shader samples the 2D color texture and interprets R as N_x, G as N_y, and B as N_z
Easiest to render, hardest to produce: usually needs specialized modeling tools like Pixologic’s ZBrush (or the baking algorithm on slide 30)
Variants – object-space: store object-space normal vectors
(Figures: original mesh; object-space normal map; mesh with normal map applied – colors still there for visualization purposes)
28
Normal Mapping: Another Variant
Tangent-space: store normals in tangent space
Tangent space: a UVW system where W is aligned with the original normal, U is along a (positive) texture-coordinate direction, and V is orthonormal to U and W
RGB stores the tangent-space normal (e.g., R = N_U, G = N_V, B = N_W); as with object space, (N_U, N_V, N_W) replaces N
The tangent-space coordinate system at a point is defined by the object’s normal, computed from the gradient at that point. Thus tangent-space normals continue to match the object if a distortion (such as a non-uniform scale or a mesh deformation) changes the gradients (not true for object-space normal mapping)
The matrix [U V W] sends the object-space basis vectors e1, e2, e3 to U, V, W; use its inverse to convert back to object space, and from there convert to world or camera space to do the lighting calculation (see Intersect or Lab 4). A fragment-shader sketch follows below
Blue is predominant in a tangent-space normal map because the surface normals typically face the viewer, ~(0, 0, 1), and are stored as ~(0.5, 0.5, 1) in RGB
(Figures: tangent-space normal map; tangent space)
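A GLSL fragment-shader sketch of tangent-space normal mapping, assuming the tangent (U), bitangent (V), and normal (W) directions have been passed in from the vertex shader (names are illustrative):

in vec3 worldTangent;    // U axis
in vec3 worldBitangent;  // V axis
in vec3 worldNormal;     // W axis (the original surface normal)
in vec2 uv;
uniform sampler2D normalMap;

vec3 perturbedWorldNormal() {
    vec3 n = texture(normalMap, uv).rgb * 2.0 - 1.0;   // RGB in [0,1] -> (Nu, Nv, Nw) in [-1,1]
    mat3 TBN = mat3(normalize(worldTangent),
                    normalize(worldBitangent),
                    normalize(worldNormal));           // columns map tangent space -> world space
    return normalize(TBN * n);                         // use this normal in the lighting calculation
}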
29
Normal Mapping Example
Normal mapping can completely alter the perceived geometry of a model Image courtesy of
30
Creating Normal Maps
Algorithm (“baking the texture map”):
- Original mesh M is simplified to mesh S
- For each pixel in the normal map for S: find the corresponding point on S, P_S; find the closest point on the original mesh, P_M; average the nearby normals in M and save the value in the normal map
Manual:
- The artist starts with S and uses specialized tools to draw directly onto the normal map (“sculpting”)
Each normal vector’s X, Y, Z components are stored as R, G, B values in the texture
(Figure: P_S on S; the resulting normal map)
31
Normal Mapping Video https://youtu.be/m-6Yu-nTbUU?t=2m21s
32
Bump Mapping: Example
Original object (plain sphere) + bump map (a height map used to perturb the normals – see next slide) = sphere with bump-mapped normals
33
Bump Mapping, Another Way to Perturb Normals
Idea: instead of encoding the normals themselves in the map, encode relative heights (or “bumps”): black = minimum height delta, white = maximum height delta. Much easier to create than normal maps
How to compute a normal from a height map? (A sketch follows below.)
- Collect several height samples from the texture
- Convert the height samples to 3D coordinates and calculate the average normal vector at the given point (see the diagram below)
- You computed normals like this for a terrain mesh in Lab 5 (terrain)!
- Transform the computed normal from tangent space to object space using the inverse of [U V W], as we did earlier
(Figure: nearby values in a (1D) bump map; original tangent-space normal = (0, 0, 1); bump map visualized as tangent-space height deltas; transformed tangent-space normal; normal vectors for triangles neighboring a point – each dot corresponds to a pixel in the bump map)
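A GLSL sketch of deriving a tangent-space normal from a height map with finite differences, roughly what the steps above describe (the sampler, texel size, and scale names are illustrative):

uniform sampler2D heightMap;
uniform vec2 texelSize;     // 1.0 / texture resolution
uniform float bumpScale;    // how strongly the heights tilt the normal

vec3 bumpedTangentNormal(vec2 uv) {
    float hL = texture(heightMap, uv - vec2(texelSize.x, 0.0)).r;   // left/right neighbors
    float hR = texture(heightMap, uv + vec2(texelSize.x, 0.0)).r;
    float hD = texture(heightMap, uv - vec2(0.0, texelSize.y)).r;   // down/up neighbors
    float hU = texture(heightMap, uv + vec2(0.0, texelSize.y)).r;
    // Height slopes tilt the original tangent-space normal (0, 0, 1)
    return normalize(vec3(bumpScale * (hL - hR), bumpScale * (hD - hU), 1.0));
}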
34
Other Techniques: Displacement Mapping
Actually move the vertices along their normals by looking up height deltas in a height map (a vertex-shader sketch follows below)
Displacing the vertices of the mesh deforms it, producing different vertex normals because the face normals change
Unlike bump/normal mapping, this produces correct silhouettes and self-shadowing
By default it does not provide detail between vertices the way normal/bump mapping does; to increase the detail level we can subdivide the original mesh, which can become very costly since it creates additional vertices
(Figure: displacement map on a plane at different levels of subdivision)
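A GLSL vertex-shader sketch of displacement mapping (attribute, sampler, and uniform names are illustrative):

in vec3 position;     // object-space vertex position
in vec3 normal;       // object-space vertex normal
in vec2 uv;
uniform sampler2D heightMap;
uniform float displacementScale;
uniform mat4 modelViewProjection;

void main() {
    float h = texture(heightMap, uv).r;                            // height delta from the map
    vec3 displaced = position + normal * h * displacementScale;   // move the vertex along its normal
    gl_Position = modelViewProjection * vec4(displaced, 1.0);
}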
35
Other Techniques: Parallax Mapping (1/2)
An extension to normal/bump mapping, used in conjunction with anisotropic filtering
Perturbs the texture coordinates right before sampling, as a function of the normal and the eye vector (next slide)
Example below: looking at a stone sidewalk at an angle
- Texture coordinates are stretched along the near edges of the stones, which are “facing” the viewer
- Similarly, texture coordinates are compressed along the far edges of the stones, where you shouldn’t be able to see the “backside” of the stones
36
Other Techniques: Parallax Mapping (2/2)
Modify the original texture coordinates (u, v) to better approximate where the eye ray would intersect the actual bumpy surface
Option 1:
- Sample the discrete height map at point (u, v) to get height h (this may require linear interpolation of adjacent heights)
- Approximate the region around (u, v) with a surface of constant height h
- Intersect the eye ray with the approximate surface to get new texture coordinates (u', v') – again a sample location, not necessarily a pixel location
- Use (u', v'), sample the height map there and in the local neighborhood, and compute a normal from those heights, just like on slide 33
Option 2:
- Sample the height map in the region around (u, v) to get the normal vector N' (use the same normal-averaging technique we used in bump mapping)
- Approximate the region around (u, v) with the tangent plane
- Intersect the eye ray with the approximate surface to get (u', v')
Both produce artifacts when viewed from a steep angle. Other options are discussed here: Premecz-Matyas.pdf
For illustration purposes, the figure uses a smooth curve to show the varying heights in the neighborhood of point (u, v). A sketch of option 1 follows below.
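A GLSL sketch of option 1 in its simplest (constant-height) form, assuming the view direction has already been transformed into tangent space (names are illustrative):

uniform sampler2D heightMap;
uniform float heightScale;

vec2 parallaxUV(vec2 uv, vec3 viewDirTangentSpace) {
    float h = texture(heightMap, uv).r * heightScale;   // height at (u, v)
    vec3 V = normalize(viewDirTangentSpace);            // from the surface toward the eye
    vec2 offset = V.xy / V.z * h;                       // where the eye ray meets a flat surface at height h
    return uv - offset;                                 // (u', v'): sample the color/normal maps here instead
}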
37
Other Techniques: Steep Parallax Mapping
Traditional parallax mapping only works for low-frequency bumps; it does not handle very steep, high-frequency bumps
Using option 1 from the previous slide: the black eye ray correctly approximates the bump surface, but the gray eye ray misses the first intersection point on the high-frequency bump
Steep parallax mapping: instead of approximating the bump surface, iteratively step along the eye ray (ray marching); at each step, check the height-map value to see if we have intersected the surface (a sketch follows below). More costly than naïve parallax mapping
Invented by Brown PhD ’06 Morgan McGuire and Max McGuire; adds support for self-shadowing
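A GLSL sketch of the ray march, treating the height map as a depth field below the surface and stepping until the eye ray drops beneath it (the step count and names are illustrative):

uniform sampler2D heightMap;
uniform float heightScale;
const int NUM_STEPS = 32;

vec2 steepParallaxUV(vec2 uv, vec3 viewDirTangentSpace) {
    vec3 V = normalize(viewDirTangentSpace);
    float layerDepth = 1.0 / float(NUM_STEPS);
    vec2 uvStep = (V.xy / V.z * heightScale) * layerDepth;           // horizontal step per layer
    float rayDepth = 0.0;                                            // 0 = top surface, 1 = deepest bump
    vec2 currentUV = uv;
    float mapDepth = 1.0 - textureLod(heightMap, currentUV, 0.0).r;  // constant LOD inside the loop
    while (rayDepth < mapDepth && rayDepth < 1.0) {
        currentUV -= uvStep;                                         // march along the eye ray
        mapDepth = 1.0 - textureLod(heightMap, currentUV, 0.0).r;
        rayDepth += layerDepth;
    }
    return currentUV;                                                // first sample at or below the surface
}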
38
Comparison
(Screenshots, left to right: creases between bricks look flat; creases look progressively deeper; objects have self-shadowing)
39
Final Project Proposals!
Good places to find ideas: the Final Project Handout, the Polygon Rendering Techniques and Realism lectures, and past projects on the CS1230 YouTube channel!
Your goal this week: find group members (use Piazza’s find-group-members functionality!), figure out what topics you’re interested in, and submit a brief description
- Brief = a few sentences or bullet points
- This is not a contract – you can still change your mind for the Final Plan if you took on too much / too little
Note: if you want to do scientific simulations (fluids, Newtonian physics, biological or chemical reactions, etc.), you are completely responsible for understanding the science, the math, and the numerical simulation of the phenomenon you are modeling – we TAs are not equipped to supervise that work
Our goal: match you with a mentor TA and check that your ideas are appropriate for the timeframe
Next week: work with your mentor TA to create a detailed Final Plan
Don’t forget Ray!