Polygon Rendering Techniques


Polygon Rendering Techniques: Swapping the Loops

Review: Ray Tracing Pipeline
A ray tracer produces visible samples of the scene; the samples are convolved with a filter to form the pixel image. The ray tracing pipeline:
- Preprocessing: traverse the scene graph, accumulate the CTM, and spatially organize the objects into an Acceleration Data Structure (ADS) suitable for ray tracing
- Ray tracing: generate a ray for each sample; traverse the ADS from the nearest cell to the farthest cell, intersecting the objects in each cell with the ray and returning the nearest intersection in the cell, otherwise continuing to the next cell; evaluate the lighting equation at the sample; generate reflection rays
- Postprocessing: convolve all samples with the filter to produce the final image pixels

Rendering Polygons
- Often need to render triangle meshes
- Ray tracing works well for implicitly defined surfaces
- Many existing models and modeling apps are based on polygon meshes; can we render them by ray tracing the polygons?
  - Easy to do: ray-polygon intersection is a simple calculation (use barycentric coordinates; a sketch follows below)
  - Very inefficient: it's common for an object to have thousands of triangles, and therefore for a scene to have hundreds of thousands or even millions of triangles, each of which must be considered in intersection tests
- The traditional hardware pipeline is more efficient for many triangles:
  - Process the scene polygon by polygon, using a "z-buffer" to determine visibility
  - Local illumination model
  - Use a crude interpolation shading approximation to compute the color of most pixels; fine for small triangles
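
As a concrete illustration of the "easy to do" point, here is a minimal GLSL-style sketch of the barycentric ray-triangle test (the Möller-Trumbore formulation); all names are illustrative, not from the lecture:

    // Sketch: ray-triangle intersection using barycentric coordinates
    // (Moller-Trumbore). Returns true and the ray parameter t on a hit.
    bool intersectTriangle(vec3 o, vec3 d, vec3 v0, vec3 v1, vec3 v2, out float t)
    {
        vec3 e1 = v1 - v0, e2 = v2 - v0;
        vec3 p = cross(d, e2);
        float det = dot(e1, p);
        if (abs(det) < 1e-8) return false;      // ray parallel to triangle plane
        float inv = 1.0 / det;
        vec3 s = o - v0;
        float u = dot(s, p) * inv;              // first barycentric coordinate
        if (u < 0.0 || u > 1.0) return false;
        vec3 q = cross(s, e1);
        float v = dot(d, q) * inv;              // second barycentric coordinate
        if (v < 0.0 || u + v > 1.0) return false;
        t = dot(e2, q) * inv;                   // distance along the ray
        return t > 0.0;
    }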

Traditional Fixed-Function Pipeline (disappearing)
- Polygons (usually triangles) approximate the desired geometry
- One polygon at a time: each vertex is transformed into screen space, and the illumination model is evaluated at each vertex. No global illumination (except through the ambient-term hack)
- One pixel of the polygon at a time (pipeline is highly simplified): depth is compared against the z-buffer; if the pixel is visible, its color value is computed using either linear interpolation between vertex color values (Gouraud shading) or a local lighting model for each pixel (e.g., Phong shading using interpolated normals)
- Geometry processing (world coordinates): selective traversal of the acceleration data structure (or traverse the scene graph) to get the CTM; transform vertices to the canonical view volume; trivial accept/reject and back-face culling; lighting (calculate light intensity at vertices with the lighting model of choice); view-volume clipping
- Rendering/pixel processing (screen coordinates): image-precision VSD (compare pixel depth via the z-buffer); shading (interpolate vertex color values for Gouraud, or normals for Phong)

Visible Surface Determination
- Back-face culling: often don't need to render triangles "facing away" from the camera; skip a triangle if its screen-space vertex positions aren't in counterclockwise order (a sketch of the test follows below)
- View-volume clipping: if a triangle lies entirely outside the view volume, there is no need to render it. Some special cases require polygon clipping: large triangles, and triangles that intersect faces of the view volume
- Occlusion culling: triangles that are behind other opaque triangles don't need to be rendered
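
A minimal sketch of the winding test, assuming a counterclockwise front-face convention with y pointing up in screen space (the sign flips for y-down window coordinates):

    // Sketch: back-face test on projected (screen-space) vertices. A
    // non-positive signed area means the vertices wind clockwise, i.e.,
    // the triangle faces away from the camera.
    bool isBackFacing(vec2 a, vec2 b, vec2 c)
    {
        float signedArea = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
        return signedArea <= 0.0;
    }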

Programmable Shader-Based Pipeline
- Allows the programmer to fine-tune and optimize various stages of the pipeline
- This is what we have used in labs and projects this semester
- Shaders are fast! (They use multiple ultra-fast ALUs, or arithmetic logic units.) Using shaders, all the techniques mentioned in this lecture can be done in real time
- For example, physical phenomena such as shadows and light refraction can be emulated using shaders
- The image on the right is rendered with a custom shader; note that only the right image has realistic shadows. A normal map is also used to add detail to the model

Shading Models Review (1/6)
- Constant shading: no interpolation; pick a single representative intensity and propagate it over the entire object
- Loses almost all depth cues
- Pixar "Shutterbug" images from: www.siggraph.org/education/materials/HyperGraph/scanline/shade_models/shading.htm

Shading Models Review (2/6)
- Faceted shading: a single illumination value per polygon
- With many polygons approximating a curved surface, this creates an undesirable faceted look
- Facets are exaggerated by the "Mach banding" effect

Shading Models Review (3/6)
- Gouraud shading: linear interpolation of intensity across triangles to eliminate edge discontinuity
- Eliminates intensity discontinuities at polygon edges; still has gradient discontinuities
- Mach banding is largely ameliorated, not eliminated
- Must differentiate desired creases from tessellation artifacts (edges of a cube vs. edges on a tessellated sphere)
- Can miss specular highlights between vertices because it interpolates vertex colors instead of calculating intensity directly at each point

Shading Models Review (4/6)
- Gouraud shading can miss specular highlights because it interpolates vertex colors instead of calculating intensity directly at each point, or even interpolating vertex normals (Phong shading)
- Normals N_a and N_b would cause no appreciable specular component, whereas N_c would, with the view ray aligned with the reflection ray. Interpolating between intensities I_a and I_b misses the highlight that evaluating I at c using N_c would catch
- Phong shading: the interpolated normal comes close to the actual normal of the true curved surface at a given point
- Reduces the temporal "jumping" effect of the highlight, e.g., when rotating a sphere during animation (example on next slide)

Shading Models Review (5/6)
- Phong shading: linear interpolation of normal vectors instead of color values
- Always captures the specular highlight, but is more computationally expensive
- At each pixel, the interpolated normal is re-normalized and used to compute the lighting model (a fragment-shader sketch follows below)
- Looks much better than Gouraud, but still no global effects
- (Images: Gouraud vs. Phong comparisons) http://www.cgchannel.com/2010/11/cg-science-for-artists-part-2-the-real-time-rendering-pipeline/
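
A minimal sketch of per-pixel (Phong) shading as a GLSL fragment shader; the uniform and varying names are illustrative, not the course's actual interface:

    // Sketch: Phong shading. The rasterizer interpolates the vertex normal;
    // the fragment shader re-normalizes it before evaluating the lighting model.
    #version 330 core
    in vec3 worldNormal;                 // interpolated vertex normal
    in vec3 worldPos;
    uniform vec3 lightPos, eyePos;
    uniform vec3 ka, kd, ks;             // ambient, diffuse, specular coefficients
    uniform float shininess;
    out vec4 fragColor;

    void main() {
        vec3 N = normalize(worldNormal); // re-normalize after interpolation
        vec3 L = normalize(lightPos - worldPos);
        vec3 V = normalize(eyePos - worldPos);
        vec3 R = reflect(-L, N);         // mirror the light direction about N
        vec3 color = ka
                   + kd * max(dot(N, L), 0.0)
                   + ks * pow(max(dot(R, V), 0.0), shininess);
        fragColor = vec4(color, 1.0);
    }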

Shading Models Review (6/6)
- Global illumination models: objects enhanced using texture, shadow, bump, reflection, and other kinds of mapping
- These are still hacks, but more realistic ones

Maps Overview
- Originally designed for offline rendering to improve quality; now used in shader programming to provide real-time quality improvements
- Maps can contain many different kinds of information. The simplest kind contains color information, i.e., a texture map; maps can also contain depth, normal vectors, specularity, etc. (examples coming...)
- Can use multiple maps in combination (e.g., use a texture map to determine object color, then a normal map to perform the lighting calculation, etc.)
- Indexing into maps: what happens at non-integer locations? Depends! OpenGL allows you to specify the filtering method: GL_NEAREST = point sampling, GL_LINEAR = triangle filtering

Mipmapping
- Choosing a texture map resolution: want high-resolution textures for nearby objects, but high-resolution textures are inefficient for distant objects
- Simple idea: mipmapping. MIP = multum in parvo, "much in little"
- Maintain multiple texture maps at different resolutions; use lower-resolution textures for objects farther away (a level-selection sketch follows below)
- An example of the "level of detail" (LOD) management common in CG
- (Image: mipmap for a brick texture) http://flylib.com/books/en/1.541.1.66/1/
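
Hardware normally picks the mip level from the screen-space footprint of a pixel in texels; this fragment-shader sketch reproduces that idea with an explicit lookup (the sampler name is illustrative):

    // Sketch: explicit mip level selection from uv derivatives.
    uniform sampler2D brickTex;
    in vec2 uv;

    vec4 sampleWithExplicitLod()
    {
        vec2 texSize = vec2(textureSize(brickTex, 0));
        vec2 dx = dFdx(uv) * texSize;    // texel footprint across one pixel in x
        vec2 dy = dFdy(uv) * texSize;    // ... and in y
        float lod = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));
        return textureLod(brickTex, uv, lod);
    }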

Shadows (1/5) – Simplest Hack
- Render each object twice: first pass, render normally; second pass, use transformations to project the object onto the ground plane and render it completely black (a projection-matrix sketch follows below)
- Pros: easy, and can be convincing; the eye is more sensitive to the presence of a shadow than to the shadow's exact shape
- Cons: becomes a complex computational-geometry problem in anything but the simplest case. Easy: projecting onto a flat, infinite ground plane. How to implement for projection onto stairs? Rolling hills?
- http://web.cs.wpi.edu/~matt/courses/cs563/talks/shadow/shadow.html
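
For the flat, infinite ground plane case, the "project and blacken" pass can use a single matrix. A sketch assuming the plane y = 0 and a point light at l (this would typically be built CPU-side; GLSL is used here for consistency with the other examples):

    // Sketch: matrix that flattens geometry onto the plane y = 0 as seen
    // from a point light at l. Derived from similar triangles:
    //   x' = (l.y*x - l.x*y) / (l.y - y),  z' = (l.y*z - l.z*y) / (l.y - y)
    mat4 planarShadowMatrix(vec3 l)
    {
        return mat4( l.y,  0.0,  0.0,  0.0,    // column 0
                    -l.x,  0.0, -l.z, -1.0,    // column 1 (multiplies y)
                     0.0,  0.0,  l.y,  0.0,    // column 2
                     0.0,  0.0,  0.0,  l.y);   // column 3
    }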

Shadows (2/5) – More Advanced
For each light L:
  For each point P in the scene:
    If P is in shadow cast by L:   // how to compute?
      Only use indirect lighting (e.g., ambient term for Phong lighting)
    Else:
      Evaluate the full lighting model (e.g., ambient, diffuse, specular for Phong)
Next: different methods for computing whether P is in shadow cast by L
(Image: stencil shadow volumes implemented by former CS123 TA and recent PhD Kevin Egan and former PhD student and book co-author Prof. Morgan McGuire, on an NVIDIA chip)
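
The same loop in shader form, as a hedged sketch: inShadow() stands in for whichever test the following slides describe, and both helper functions are hypothetical:

    // Sketch of the pseudocode above as a per-fragment loop.
    bool inShadow(int light, vec3 p);               // forward declaration (hypothetical)
    vec3 directLighting(int light, vec3 p, vec3 n); // diffuse + specular (hypothetical)

    vec3 shade(vec3 p, vec3 n, vec3 ambient, int numLights)
    {
        vec3 color = ambient;                       // indirect term only
        for (int i = 0; i < numLights; ++i)
            if (!inShadow(i, p))                    // occluded: skip direct light
                color += directLighting(i, p, n);
        return color;
    }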

Shadows (3/5) – Shadow Volumes
- For each light + object pair, compute a mesh enclosing the region where the object occludes the light
- Find the silhouette from the light's perspective: every edge shared by two triangles such that one triangle faces the light source and the other faces away (an edge-test sketch follows below). On a torus, this is where the angle between the normal vector and the vector to the light becomes >90°
- Project the silhouette along the light rays
- Generate triangles bridging the silhouette and its projection to obtain the shadow volume
- A point P is in shadow from light L if any shadow volume V computed for L contains P. This can be determined quickly using multiple passes and a "stencil buffer"
- More here on Stencil Buffers, Stencil Shadow Volumes
- (Image: original silhouette, projected silhouette, and an example shadow volume as a yellow mesh) http://www.ozone3d.net/tutorials/images/stencil_shadow_volumes/shadow_volume.jpg
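
A minimal sketch of the silhouette-edge criterion; n1 and n2 are the face normals of the two triangles sharing the edge, and toLight points from the edge toward the light:

    // Sketch: an edge is on the silhouette when exactly one of its two
    // adjacent triangles faces the light.
    bool isSilhouetteEdge(vec3 n1, vec3 n2, vec3 toLight)
    {
        return (dot(n1, toLight) > 0.0) != (dot(n2, toLight) > 0.0);
    }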

Shadows (4/5) – Another Multi-Pass Technique: Shadow Maps
- Render the scene using each light as the center of projection, saving only its z-buffer. The resulting 2D images are "shadow maps", one per light
- Next, render the scene from the camera's point of view
- To determine if a point P on an object is in shadow (a sketch follows below):
  - Compute the distance d_P from P to the light source
  - Convert P from world coordinates to shadow-map coordinates using the viewing and projection matrices used to create the shadow map
  - Look up the minimum distance d_min in the shadow map
  - P is in shadow if d_P > d_min, i.e., it lies behind a closer object
- (Image: shadow map obtained by rendering from the light's point of view; darker is closer)
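
A minimal sketch of the comparison in a fragment shader; lightVP is assumed to be the view-projection matrix used when the map was rendered, and all names are illustrative:

    // Sketch: shadow-map test for a world-space point.
    uniform sampler2D shadowMap;
    uniform mat4 lightVP;

    bool inShadow(vec3 worldP)
    {
        vec4 lp = lightVP * vec4(worldP, 1.0);
        vec3 ndc = lp.xyz / lp.w;                  // light-space NDC in [-1, 1]
        vec3 uvz = ndc * 0.5 + 0.5;                // remap to [0, 1]
        float dMin = texture(shadowMap, uvz.xy).r; // closest depth seen by the light
        float bias = 0.005;                        // offset to avoid self-shadow "acne"
        return uvz.z - bias > dMin;                // P lies behind a closer occluder
    }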

Shadows (5/5) – Shadow Map Tradeoffs
- Pro: can extend to support soft shadows (shadows with "soft" edges); stencil shadow volumes are only useful for hard-edged shadows
- Con: a naïve implementation has impressively bad aliasing problems. When the camera is closer to the scene than the light, many screen pixels may be covered by only one shadow-map pixel (e.g., sun shining on the Eiffel Tower)
- Many workarounds for the aliasing issues:
  - Percentage-closer filtering: for each screen pixel, sample the shadow map in multiple places and average (a sketch follows below)
  - Cascaded shadow maps: multiple shadow maps, with higher resolution closer to the viewer
  - Variance shadow maps: use statistical modeling instead of a simple depth comparison
- (Images: hard vs. soft shadows; aliasing produced by naïve shadow mapping) http://http.developer.nvidia.com/GPUGems3/gpugems3_ch08.html http://www-sop.inria.fr/reves/Marc.Stamminger/psm/
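
A minimal sketch of percentage-closer filtering: instead of one binary comparison, average the test over a small neighborhood of the shadow map:

    // Sketch: 3x3 PCF. Returns the fraction of samples that are lit,
    // giving a soft 0..1 visibility value instead of a hard 0 or 1.
    float pcfVisibility(sampler2D shadowMap, vec2 uv, float depth, float bias)
    {
        vec2 texel = 1.0 / vec2(textureSize(shadowMap, 0));
        float lit = 0.0;
        for (int x = -1; x <= 1; ++x)
            for (int y = -1; y <= 1; ++y) {
                float dMin = texture(shadowMap, uv + vec2(x, y) * texel).r;
                lit += (depth - bias > dMin) ? 0.0 : 1.0;
            }
        return lit / 9.0;
    }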

Environment Mapping (for specular reflections) (1/2)
- Approximate reflections by creating a skybox, a.k.a. environment map, a.k.a. reflection map
- Often represented as the six faces of a cube surrounding the scene; can also be a large sphere surrounding the scene, etc.
- To create the environment map, render the entire scene from the center point, one face at a time
- Can do this offline for static geometry, but must generate at runtime for moving objects; rendering an environment map at runtime is expensive (compared to using a pre-computed texture)
- Can also use photographic panoramas
- (Diagram: skybox surrounding an object)

Environment Mapping (2/2)
- To sample the environment map reflection at point P (a sketch follows below):
  - Compute the vector E from P to the eye
  - Reflect E about the normal N to obtain R
  - Use the direction of R to compute the intersection point with the environment map
- Treat P as being at the center of the map; equivalently, treat the environment map as being infinitely large
- (Diagram: eye, E, N, R at point P)
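
A minimal cube-map sketch of this lookup. Note that GLSL's reflect() expects the incidence vector pointing toward the surface, so the eye-to-point direction (-E) is used; names are illustrative:

    // Sketch: sampling a cube-map environment map for the reflection at P.
    #version 330 core
    uniform samplerCube envMap;
    uniform vec3 eyePos;
    in vec3 worldPos, worldNormal;
    out vec4 fragColor;

    void main() {
        vec3 I = normalize(worldPos - eyePos);        // incidence direction (-E)
        vec3 R = reflect(I, normalize(worldNormal));  // mirror about the normal
        // Direction-only lookup: the map is treated as infinitely far away
        fragColor = texture(envMap, R);
    }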

Overview: Surface Detail
- Observation: what if we replaced the 3D sphere on the right with a 2D circle?
- The circle would have fewer triangles (and thus render faster)
- If we kept the sphere's normals, the circle would still look like a sphere!
- This works because the human visual system infers shape from patterns of light and dark regions ("shape from shading"). Brightness at any point is determined by the normal vector, not by the actual geometry of the model
- Image credit: Dave Kilian, '13

Idea: Surface Detail
- Start with a hi-poly (high polygon count) model
- Decimate the mesh (remove triangles)
- Encode the hi-poly normal information into a texture
- Map the texture (i.e., the normals) onto the lo-poly mesh
- In the fragment shader: look up the normal for each pixel (obtained from the texture) and use Phong shading to calculate the pixel color
- (Images: original hi-poly model; lo-poly model with the hi-poly model's normals preserved)

Normal Mapping
- Idea: fill a texture map with normals. The fragment shader samples a 2D color texture and interprets R as N_x, G as N_y, B as N_z
- Easiest to render, hardest to produce: usually needs specialized modeling tools like Pixologic's ZBrush (algorithm on slide 31)
- Variants:
  - Object-space: store raw, object-space normal vectors
  - Tangent-space: store normals in tangent space, a UVW system where W is aligned with the original normal
- Have to convert normals to world space to do the lighting calculation (a sketch follows below)
- (Images: original mesh; object-space normal map; mesh with the normal map applied, colors kept for visualization purposes; tangent-space normal map) http://www.3dvf.com/forum/3dvf/Blender-2/modelisation-organique-tutoriel-sujet_16_1.htm
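
A minimal sketch of the tangent-space variant: unpack the stored normal and move it into world space with a TBN basis. The varyings are assumed to come from the vertex shader; names are illustrative:

    // Sketch: tangent-space normal mapping.
    uniform sampler2D normalMap;
    in vec2 uv;
    in vec3 T, B, N;   // tangent, bitangent, vertex normal (world space)

    vec3 worldSpaceNormal()
    {
        // Unpack color channels from [0,1] into a normal in [-1,1]
        vec3 n = texture(normalMap, uv).rgb * 2.0 - 1.0;
        mat3 tbn = mat3(normalize(T), normalize(B), normalize(N));
        return normalize(tbn * n);  // tangent space -> world space
    }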

Normal Mapping Example
- Render showing simple underlying geometry; the normal map gives the model finer detail
- Normal mapping can completely alter the perceived geometry of a model
- Image courtesy of www.anticz.com

Normal Mapping Video http://www.youtube.com/watch?v=RSmjxcAhkfE

Creating Normal Maps
- Algorithm:
  - Original mesh M is simplified to mesh S
  - For each pixel in the normal map for S: find the corresponding point on S (P_S), find the closest point on the original mesh (P_M), then average nearby normals in M and save the value in the normal map
- Manual:
  - The artist starts with S and uses specialized tools to draw directly onto the normal map ("sculpting")
- Each normal vector's X, Y, Z components are stored as the R, G, B values in the texture

Bump Mapping: Example
- Original object (plain sphere) + bump map (height map used to perturb the normals; see next slide) = sphere with bump-mapped normals
- http://cse.csusb.edu/tong/courses/cs520/notes/texture.php

Bump Mapping, Another Way to Perturb Normals
- Idea: instead of encoding the normals themselves in the map, encode relative heights (or "bumps"): black = minimum height delta, white = maximum height delta. Much easier to create than normal maps
- How to compute a normal from a height map? (A sketch follows below.)
  - Collect several height samples from the texture
  - Convert the height samples to 3D coordinates to calculate the average normal vector at the given point
  - Transform the computed normal from tangent space to object space (and from there into world space)
- You computed normals like this for a terrain mesh in Lab 4!
- (Images: bump map visualized as tangent-space height deltas; the original tangent-space normal (0, 0, 1) and the transformed tangent-space normal; normal vectors for triangles neighboring a point, where each dot corresponds to a pixel in the bump map)
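
A minimal sketch using central differences: the height gradient perturbs the flat tangent-space normal (0, 0, 1). The sampler name and bumpScale strength factor are illustrative:

    // Sketch: tangent-space normal from a height (bump) map.
    uniform sampler2D heightMap;
    uniform float bumpScale;

    vec3 bumpNormal(vec2 uv)
    {
        vec2 texel = 1.0 / vec2(textureSize(heightMap, 0));
        float hL = texture(heightMap, uv - vec2(texel.x, 0.0)).r;
        float hR = texture(heightMap, uv + vec2(texel.x, 0.0)).r;
        float hD = texture(heightMap, uv - vec2(0.0, texel.y)).r;
        float hU = texture(heightMap, uv + vec2(0.0, texel.y)).r;
        return normalize(vec3((hL - hR) * bumpScale,  // gradient along u
                              (hD - hU) * bumpScale,  // gradient along v
                              1.0));                  // perturbed flat normal
    }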

Other Techniques: Displacement Mapping
- Actually move the vertices along their normals by looking up height deltas in a height map (a vertex-shader sketch follows below)
- Displacing the vertices deforms the mesh, producing different vertex normals because the face normals change
- Unlike bump/normal mapping, this produces correct silhouettes and self-shadowing
- By default, it does not provide detail between vertices the way normal/bump mapping does; to increase the detail level we can subdivide the original mesh, which can become very costly since it creates additional vertices
- (Image: displacement map on a plane at different levels of subdivision) http://en.wikipedia.org/wiki/Displacement_mapping http://www.nvidia.com/object/tessellation.html https://support.solidangle.com/display/AFMUG/Displacement
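
A minimal vertex-shader sketch: each vertex moves along its normal by the sampled height. All names are illustrative:

    // Sketch: displacement mapping in a vertex shader.
    #version 330 core
    uniform sampler2D dispMap;
    uniform float dispScale;
    uniform mat4 mvp;
    in vec3 position;
    in vec3 normal;
    in vec2 texCoord;

    void main() {
        // textureLod: vertex shaders have no implicit derivatives for LOD
        float h = textureLod(dispMap, texCoord, 0.0).r;   // height delta in [0,1]
        vec3 displaced = position + normal * h * dispScale;
        gl_Position = mvp * vec4(displaced, 1.0);
    }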

Other Techniques: Parallax Mapping (1/2)
- An extension to normal/bump mapping
- Distorts the texture coordinates right before sampling, as a function of the normal and the eye vector (next slide)
- Example below: looking at a stone sidewalk at an angle. Texture coordinates are stretched along the near edges of the stones, which are "facing" the viewer; similarly, texture coordinates are compressed along the far edges of the stones, where you shouldn't be able to see the "backside" of the stones
- http://amdstaging.wpengine.com/wordpress/media/2012/10/I3D2006-Tatarchuk-POM.pdf

Other Techniques: Parallax Mapping (2/2)
- We would like to modify the original texture coordinates (u,v) to better approximate where the intersection point would be on the bumpy surface
- Option 1 (a sketch follows below):
  - Sample the height map at point (u,v) to get height h
  - Approximate the region around (u,v) with a surface of constant height h
  - Intersect the eye ray with the approximate surface to get new texture coordinates (u',v')
- Option 2:
  - Sample the height map in the region around (u,v) to get the normal vector N' (using the same normal-averaging technique we used in bump mapping)
  - Approximate the region around (u,v) with the tangent plane
  - Intersect the eye ray with the approximate surface to get (u',v')
- Both produce artifacts when viewed from a steep angle. Other options are discussed here: http://www.cescg.org/CESCG-2006/papers/TUBudapest-Premecz-Matyas.pdf
- (For illustration purposes, a smooth curve shows the varying heights in the neighborhood of point (u,v))
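
A minimal sketch of option 1: shift the texture coordinates along the eye direction, expressed in tangent space, by the sampled height. The sign convention depends on whether the map stores height or depth; names are illustrative:

    // Sketch: constant-height parallax offset.
    uniform sampler2D heightMap;
    uniform float heightScale;

    vec2 parallaxUV(vec2 uv, vec3 eyeTangent)   // eyeTangent: surface -> eye
    {
        float h = texture(heightMap, uv).r * heightScale;
        // Intersect the eye ray with a plane of constant height h
        return uv + (eyeTangent.xy / eyeTangent.z) * h;
    }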

Other Techniques: Steep Parallax Mapping
- Traditional parallax mapping only works for low-frequency bumps; it does not handle very steep, high-frequency bumps
- Using option 1 from the previous slide: the black eye ray correctly approximates the bump surface, but the gray eye ray misses the first intersection point on the high-frequency bump
- Steep parallax mapping: instead of approximating the bump surface, iteratively step along the eye ray; at each step, check the height-map value to see if we have intersected the surface (a sketch follows below). More costly than naïve parallax mapping
- Invented by Brown PhD '06 Morgan McGuire and Max McGuire; adds support for self-shadowing
- (Screenshots: a short description under each indicates what to look at to notice the differences) http://graphics.cs.brown.edu/games/SteepParallax/mcguire-steepparallax.pdf
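
A minimal sketch of the ray march in tangent space, treating the map as depth below the surface (0 = top). textureLod is used because implicit derivatives are undefined inside the loop; names are illustrative:

    // Sketch: steep parallax mapping by stepping along the eye ray.
    uniform sampler2D depthMap;
    uniform float heightScale;

    vec2 steepParallaxUV(vec2 uv, vec3 eyeTangent, int numSteps)
    {
        vec2 duv = (eyeTangent.xy / eyeTangent.z) * heightScale / float(numSteps);
        float dDepth = 1.0 / float(numSteps);
        float rayDepth = 0.0;
        float surfaceDepth = textureLod(depthMap, uv, 0.0).r;
        while (rayDepth < surfaceDepth) {      // still above the height field
            uv -= duv;                         // one step along the ray
            rayDepth += dDepth;
            surfaceDepth = textureLod(depthMap, uv, 0.0).r;
        }
        return uv;                             // first sample below the surface
    }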

Comparison
- Creases between bricks look flat
- Creases look progressively deeper
- Objects have self-shadowing