Slide 1: Polygon Rendering Techniques (Swapping the Loops)
cs123 Introduction to Computer Graphics, Andries van Dam © 11/11/2014 (34 slides)
Slide 2: Review: Ray Tracing Pipeline
A ray tracer produces visible samples of the scene; the samples are convolved with a filter to form the pixel image.
Preprocessing: traverse the scene graph, accumulate the CTM, and spatially organize the objects into an acceleration data structure (ADS) suitable for ray tracing.
Ray tracing: generate a ray for each sample; traverse the ADS from the nearest cell to the farthest, intersect the objects in each cell with the ray, and return the nearest intersection in the cell, otherwise continue to the next cell. Evaluate the lighting equation at each sample and generate reflection rays.
Post-processing: convolve all samples with the filter to produce the final image pixels.
Slide 3: Rendering Polygons
We often need to render triangle meshes. Ray tracing works well for implicitly defined surfaces, but many existing models and modeling apps are based on polygon meshes; can we render them by ray tracing the polygons?
Easy to do: ray-polygon intersection is a simple calculation (use barycentric coordinates).
Very inefficient: it is common for an object to have thousands of triangles, and therefore for a scene to have hundreds of thousands or even millions of triangles; each must be considered in intersection tests.
The traditional hardware pipeline is more efficient for many triangles: process the scene polygon by polygon, using a "z-buffer" to determine visibility; use a local illumination model; and use a crude interpolation-shading approximation to compute the color of most pixels, which is fine for small triangles.
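The "simple calculation" above can be sketched as follows. This is an illustrative implementation (not the course's code) of ray-triangle intersection using the Moller-Trumbore algorithm, which produces the barycentric coordinates (u, v) of the hit point directly; all names are made up for the example.

```python
# Ray-triangle intersection via Moller-Trumbore. The triangle is given by
# vertices v0, v1, v2; the ray by an origin and a direction d. The hit
# point is v0 + u*(v1-v0) + v*(v2-v0), with (u, v) barycentric.

def ray_triangle_intersect(orig, d, v0, v1, v2, eps=1e-9):
    """Return (t, u, v) for a hit with t > eps, or None on a miss."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) / det          # first barycentric coordinate
    if u < 0 or u > 1:
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) / det             # second barycentric coordinate
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, qvec) / det            # distance along the ray
    return (t, u, v) if t > eps else None
```

The barycentric check (u >= 0, v >= 0, u + v <= 1) is exactly the "is the hit inside the triangle" test the slide refers to; the same (u, v) can later be reused to interpolate vertex attributes.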
Slide 4: Traditional Fixed-Function Pipeline (disappearing)
Polygons (usually triangles) approximate the desired geometry. One polygon at a time: each vertex is transformed into screen space and the illumination model is evaluated at each vertex. There is no global illumination (except through the ambient-term hack). One pixel of the polygon at a time (the pipeline is highly simplified here): depth is compared against the z-buffer; if the pixel is visible, its color value is computed using either linear interpolation between vertex color values (Gouraud shading) or a local lighting model for each pixel (e.g., Phong shading using interpolated normals).
Geometry processing (world coordinates): selective traversal of the acceleration data structure (or traversal of the scene graph) to get the CTM; transform vertices to the canonical view volume; lighting: calculate light intensity at vertices (lighting model of choice); view-volume clipping; trivial accept/reject and back-face culling.
Rendering / pixel processing (screen coordinates): image-precision VSD: compare pixel depth (z-buffer); shading: interpolate vertex color values (Gouraud) or normals (Phong).
Slide 5: Visible Surface Determination
Back-face culling: we often don't need to render triangles "facing away" from the camera. Skip a triangle if its screen-space vertex positions aren't in counterclockwise order.
View-volume clipping: if a triangle lies entirely outside the view volume, there is no need to render it. Some special cases require polygon clipping: large triangles, and triangles that intersect faces of the view volume.
Occlusion culling: triangles that are behind other opaque triangles don't need to be rendered.
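The winding-order test above amounts to a signed-area computation on the projected vertices. A minimal sketch (names illustrative), assuming screen coordinates with the y-axis pointing up:

```python
# Back-face culling in screen space: a triangle whose projected vertices
# wind clockwise (non-positive signed area, y-axis up) faces away from
# the camera and can be skipped before rasterization.

def signed_area(a, b, c):
    """Twice the signed area of screen-space triangle abc."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def is_back_facing(a, b, c):
    """True if the triangle is wound clockwise (or is degenerate)."""
    return signed_area(a, b, c) <= 0
```

Note that in window coordinate systems where y grows downward the sign convention flips, which is why APIs let you configure the front-face winding.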
Slide 6: Programmable Shader-Based Pipeline
Allows the programmer to fine-tune and optimize various stages of the pipeline. This is what we have used in labs and projects this semester. Shaders are fast! (They use multiple ultra-fast ALUs, or arithmetic logic units.) Using shaders, all the techniques mentioned in this lecture can be done in real time; for example, physical phenomena such as shadows and light refraction can be emulated using shaders.
The image on the right is rendered with a custom shader. Note that only the right image has realistic shadows; a normal map is also used to add detail to the model.
Slide 7: Shading Models Review (1/6)
Constant shading: no interpolation; pick a single representative intensity and propagate it over the entire object. Loses almost all depth cues.
Pixar "Shutterbug" images from: /scanline/shade_models/shading.htm
Slide 8: Shading Models Review (2/6)
Faceted shading: a single illumination value per polygon. With many polygons approximating a curved surface, this creates an undesirable faceted look; the facets are exaggerated by the "Mach banding" effect.
Slide 9: Shading Models Review (3/6)
Gouraud shading: linear interpolation of intensity across triangles to eliminate edge discontinuities. Eliminates intensity discontinuities at polygon edges, but gradient discontinuities remain; Mach banding is largely ameliorated, not eliminated. Must differentiate desired creases from tessellation artifacts (the edges of a cube vs. the edges on a tessellated sphere). Can miss specular highlights between vertices because it interpolates vertex colors instead of calculating intensity directly at each point.
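The interpolation step can be sketched in a few lines (illustrative names, not the course's code): lighting is evaluated only at the three vertices, and every interior pixel just blends those three results with its barycentric weights, which is why a highlight that peaks between vertices never appears.

```python
# Gouraud shading's per-pixel work: blend precomputed vertex colors
# c0, c1, c2 with the pixel's barycentric weights (w0, w1, w2).

def gouraud_color(bary, c0, c1, c2):
    """Interpolate vertex colors; weights are assumed to sum to 1."""
    w0, w1, w2 = bary
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))
```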
Slide 10: Shading Models Review (4/6)
Phong shading: the interpolated normal comes close to the actual normal of the true curved surface at a given point. Reduces the temporal "jumping" effect of a highlight, e.g., when rotating a sphere during animation (example on the next slide).
Slide 11: Shading Models Review (5/6)
Phong shading: linear interpolation of normal vectors instead of color values. At each pixel, the interpolated normal is re-normalized and used to compute the lighting model. Always captures the specular highlight, but is more computationally expensive. Looks much better than Gouraud, but still provides no global effects. (Images: Gouraud vs. Phong.)
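The per-pixel evaluation described above can be sketched as follows; this is a simplified illustration (a Lambert diffuse term plus a Phong specular term, with made-up function names), not the exact lighting model the course uses. The key point is that the normal is interpolated and re-normalized before lighting is evaluated.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def phong_pixel(bary, n0, n1, n2, light_dir, view_dir, shininess=32):
    """Diffuse and specular terms at one pixel from interpolated normals."""
    w0, w1, w2 = bary
    # Interpolate the vertex normals, then re-normalize (the Phong step).
    n = normalize([w0*a + w1*b + w2*c for a, b, c in zip(n0, n1, n2)])
    l = normalize(light_dir)
    ndotl = sum(ni * li for ni, li in zip(n, l))
    diffuse = max(0.0, ndotl)
    r = [2 * ndotl * ni - li for ni, li in zip(n, l)]   # reflect l about n
    v = normalize(view_dir)
    specular = max(0.0, sum(ri * vi for ri, vi in zip(r, v))) ** shininess
    return diffuse, specular
```

Gouraud shading would instead run this computation only at the three vertices and interpolate the resulting colors, which is exactly how it misses interior highlights.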
Slide 12: Shading Models Review (6/6)
Global illumination models: objects enhanced using texture, shadow, bump, reflection, etc., mapping. These are still hacks, but more realistic ones.
Slide 13: Maps Overview
Originally designed for offline rendering to improve quality; now used in shader programming to provide real-time quality improvements. Maps can contain many different kinds of information. The simplest kind contains color information, i.e., a texture map; maps can also contain depth, normal vectors, specularity, etc. (examples coming). Multiple maps can be used in combination (e.g., use a texture map to determine object color, then a normal map to perform the lighting calculation, etc.).
Indexing into maps: what happens at non-integer locations? Depends! OpenGL allows you to specify the filtering method: GL_NEAREST = point sampling; GL_LINEAR = triangle filtering.
Slide 14: Mipmapping
Choosing a texture map resolution: we want high-resolution textures for nearby objects, but high-resolution textures are inefficient for distant objects. Simple idea: mipmapping (MIP: multum in parvo, "much in little"). Maintain multiple texture maps at different resolutions and use lower-resolution textures for objects farther away. This is an example of the "level of detail" (LOD) management common in CG. (Image: mipmap for a brick texture.)
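The level choice can be sketched numerically. This is an illustrative simplification: real hardware estimates the texel footprint from screen-space texture-coordinate derivatives and blends between adjacent levels, but the core rule is that a pixel covering roughly `footprint` texels on a side should use mip level log2(footprint).

```python
import math

def mip_level(footprint_texels, num_levels):
    """Pick a mip level: 0 is the full-resolution texture."""
    if footprint_texels <= 1.0:
        return 0                      # texture is magnified: finest level
    level = math.log2(footprint_texels)
    return min(int(level), num_levels - 1)   # clamp to coarsest level
```

A 1024x1024 texture has 11 levels (1024 down to 1); a distant object whose pixels each cover about 8 texels would sample level 3, a 128x128 version.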
Slide 15: Shadows (1/5): Simplest Hack
Render each object twice. First pass: render normally. Second pass: use transformations to project the object onto the ground plane, and render it completely black.
Pros: easy, and can be convincing; the eye is more sensitive to the presence of a shadow than to the shadow's exact shape.
Cons: becomes a complex computational-geometry problem in anything but the simplest case. Easy: projecting onto a flat, infinite ground plane. But how would you implement it for projection onto stairs? Rolling hills?
Slide 16: Shadows (2/5): More Advanced
Stencil shadow volumes, implemented on an nVidia chip by former cs123 TA and recent Ph.D. Kevin Egan and former Ph.D. student and book co-author Prof. Morgan McGuire.
Slide 17: Shadows (3/5): Shadow Volumes
For each light + object pair, compute a mesh enclosing the region where the object occludes the light:
- Find the silhouette from the light's perspective: every edge shared by two triangles such that one triangle faces the light source and the other faces away. On a torus, this is where the angle between the normal vector and the vector to the light becomes >90°.
- Project the silhouette along the light rays.
- Generate triangles bridging the silhouette and its projection to obtain the shadow volume.
A point P is in shadow from light L if any shadow volume V computed for L contains P. This can be determined quickly using multiple passes and a "stencil buffer". More on stencil buffers and stencil shadow volumes in the linked references.
(Images: original silhouette, projected silhouette, and an example shadow volume shown as a yellow mesh.)
Slide 18: Shadows (4/5): Another Multi-Pass Technique: Shadow Maps
Render the scene using each light as the center of projection, saving only its z-buffer. The resultant 2D images are "shadow maps", one per light. (The shadow map on the right was obtained by rendering from the light's point of view; darker is closer.)
Next, render the scene from the camera's point of view. To determine whether a point P on an object is in shadow:
- compute the distance d_P from P to the light source
- convert P from world coordinates to shadow-map coordinates, using the viewing and projection matrices used to create the shadow map
- look up the minimum distance d_min in the shadow map
- P is in shadow if d_P > d_min, i.e., it lies behind a closer object.
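The per-pixel comparison in the steps above can be sketched as follows (illustrative names; the map here is just a 2D grid of stored d_min values, standing in for the light's saved z-buffer). A small depth bias, an assumption not on the slide, is included because practical implementations need it to avoid false self-shadowing ("shadow acne").

```python
# Shadow-map lookup: (u, v) are the point's shadow-map coordinates in
# [0, 1], d_p its distance to the light. The point is in shadow if a
# closer occluder was recorded along the same light-space ray.

def in_shadow(shadow_map, u, v, d_p, bias=1e-3):
    """shadow_map: 2D list (rows) of d_min values per texel."""
    h, w = len(shadow_map), len(shadow_map[0])
    x = min(int(u * w), w - 1)        # nearest-texel lookup
    y = min(int(v * h), h - 1)
    d_min = shadow_map[y][x]
    return d_p > d_min + bias
```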
Slide 19: Shadows (5/5): Shadow Map Tradeoffs
Pro: can be extended to support soft shadows (shadows with "soft" edges); stencil shadow volumes are only useful for hard-edged shadows.
Con: a naïve implementation has impressively bad aliasing problems. When the camera is closer to the scene than the light, many screen pixels may be covered by only one shadow-map pixel (e.g., the sun shining on the Eiffel Tower).
There are many workarounds for the aliasing issues:
- Percentage-closer filtering: for each screen pixel, sample the shadow map in multiple places and average.
- Cascaded shadow maps: multiple shadow maps, with higher resolution closer to the viewer.
- Variance shadow maps: use statistical modeling instead of a simple depth comparison.
(Images: hard vs. soft shadows; aliasing produced by naïve shadow mapping.)
Slide 20: Environment Mapping for Specular Reflections (1/2)
Approximate reflections by creating a skybox, a.k.a. environment map, a.k.a. reflection map. It is often represented as the six faces of a cube surrounding the scene, but it can also be a large sphere surrounding the scene, etc.
To create an environment map, render the entire scene from the center point, one face at a time. This can be done offline for static geometry, but the map must be generated at runtime for moving objects, and rendering an environment map at runtime is expensive compared to using a pre-computed texture. Photographic panoramas can also be used.
Slide 21: Environment Mapping (2/2)
To sample the environment-map reflection at point P:
- Compute the vector E from P to the eye.
- Reflect E about the normal N to obtain R.
- Use the direction of R to compute the intersection point with the environment map. Treat P as the center of the map; equivalently, treat the environment map as infinitely large.
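The reflection step is the standard mirror formula R = 2(N·E)N - E, which follows from decomposing E into components parallel and perpendicular to N and negating the perpendicular part. A minimal sketch (illustrative names; N is assumed unit-length):

```python
# Reflect the to-eye vector e about the unit surface normal n:
# r = 2 * dot(n, e) * n - e. The direction of r indexes the
# (conceptually infinite) environment map.

def reflect_about_normal(e, n):
    d = sum(ei * ni for ei, ni in zip(e, n))
    return [2 * d * ni - ei for ei, ni in zip(e, n)]
```

For a cube-map skybox, the largest-magnitude component of R picks the face, and the other two components (divided by it) give the 2D texture coordinates on that face.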
Slide 22: Overview: Surface Detail
Observation: what if we replaced the 3D sphere on the right with a 2D circle? The circle would have fewer triangles (and thus render faster), and if we kept the sphere's normals, the circle would still look like a sphere! This works because the human visual system infers shape from patterns of light and dark regions ("shape from shading"): the brightness at any point is determined by the normal vector, not by the actual geometry of the model. (Image credit: Dave Kilian, '13.)
Slide 23: Idea: Surface Detail
Start with a hi-poly (high polygon count) model. Decimate the mesh (remove triangles) and encode the hi-poly normal information into a texture. Map the texture (i.e., the normals) onto the lo-poly mesh. In the fragment shader: look up the normal for each pixel (obtained from the texture), then use Phong shading to calculate the pixel color. (Images: original hi-poly model; lo-poly model with the hi-poly model's normals preserved.)
Slide 24: Normal Mapping
Idea: fill a texture map with normals. The fragment shader samples the 2D color texture and interprets R as N_x, G as N_y, and B as N_z. Easiest to render, hardest to produce: you usually need specialized modeling tools like Pixologic's ZBrush (algorithm on the "Creating Normal Maps" slide).
Variants:
- Object-space: store raw, object-space normal vectors.
- Tangent-space: store normals in tangent space, a UVW system where W is aligned with the original normal.
The normals have to be converted to world space to do the lighting calculation.
(Images: original mesh; object-space normal map; tangent-space normal map; mesh with the normal map applied, colors kept for visualization purposes.)
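The sampling and conversion can be sketched as follows (illustrative names). The decoding assumes the common convention that a channel value c in [0, 255] encodes the component 2*(c/255) - 1; the tangent-space-to-world conversion multiplies by the TBN basis (tangent, bitangent, normal), which is how the W axis ends up aligned with the original surface normal.

```python
# Decode a normal-map texel and take it from tangent space to world space.

def decode_normal(r, g, b):
    """Map 8-bit channels to components in [-1, 1] (2c - 1 convention)."""
    return [2 * (c / 255.0) - 1 for c in (r, g, b)]

def tangent_to_world(n_t, t, b, n):
    """t, b, n: world-space tangent, bitangent, normal (the TBN columns)."""
    return [n_t[0] * t[i] + n_t[1] * b[i] + n_t[2] * n[i] for i in range(3)]
```

This is why tangent-space normal maps look mostly blue: an unperturbed normal (0, 0, 1) encodes to roughly (128, 128, 255).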
Slide 25: Normal Mapping Example
Normal mapping can completely alter the perceived geometry of a model. (Images: a render showing the simple underlying geometry; the normal map gives the model finer detail.)
Slide 26: Normal Mapping Video
Slide 27: Creating Normal Maps
Algorithm: the original mesh M is simplified to a mesh S. For each pixel in the normal map for S: find the corresponding point on S (P_S); find the closest point on the original mesh (P_M); average the nearby normals in M and save the value in the normal map. Each normal vector's X, Y, Z components are stored as R, G, B values in the texture.
Manual: the artist starts with S and uses specialized tools to draw directly onto the normal map ("sculpting").
Slide 28: Bump Mapping: Example
Original object (a plain sphere) + bump map (a height map used to perturb the normals; see the next slide) = sphere with bump-mapped normals.
Slide 29: Bump Mapping, Another Way to Perturb Normals
Idea: instead of encoding the normals themselves in the map, encode relative heights (or "bumps"): black is the minimum height delta, white the maximum. Much easier to create than normal maps.
How do we compute a normal from a height map? Collect several height samples from the texture; convert the height samples to 3D coordinates to calculate the average normal vector at the given point; then transform the computed normal from tangent space to object space (and from there into world space). You computed normals like this for a terrain mesh in Lab 4!
(Figures: nearby values in a 1D bump map; the original tangent-space normal (0, 0, 1); the bump map visualized as tangent-space height deltas; the transformed tangent-space normal; normal vectors for the triangles neighboring a point, where each dot corresponds to a pixel in the bump map.)
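One common way to carry out the height-samples-to-normal step is central differencing, sketched here with illustrative names (the slide's method averages neighboring triangle normals, which gives an equivalent result for a regular grid). The slopes in u and v tilt the unperturbed tangent-space normal (0, 0, 1); the assumed `scale` parameter controls bump strength.

```python
import math

def bump_normal(height, x, y, scale=1.0):
    """Tangent-space normal at texel (x, y) of a 2D height grid."""
    h, w = len(height), len(height[0])
    # Central differences, clamped at the borders of the map.
    dhdx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * 0.5
    dhdy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * 0.5
    n = [-scale * dhdx, -scale * dhdy, 1.0]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```

A flat height field leaves the normal at (0, 0, 1); a ramp rising in +x tilts the normal back toward -x, exactly the perturbation the sphere example above relies on.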
Slide 30: Other Techniques: Displacement Mapping
Actually move the vertices along their normals by looking up height deltas in a height map. Displacing the vertices deforms the mesh, producing different vertex normals because the face normals change. Unlike bump/normal mapping, this produces correct silhouettes and self-shadowing. By default it does not provide detail between vertices the way normal/bump mapping does; to increase the level of detail we can subdivide the original mesh, which can become very costly since it creates additional vertices.
(Image: a displacement map on a plane at different levels of subdivision. https://support.solidangle.com/display/AFMUG/Displacement)
Slide 31: Other Techniques: Parallax Mapping (1/2)
An extension to normal/bump mapping: distort the texture coordinates right before sampling, as a function of the normal and the eye vector (next slide). Example below: looking at a stone sidewalk at an angle. The texture coordinates are stretched along the near edges of the stones, which are "facing" the viewer; similarly, the texture coordinates are compressed along the far edges of the stones, where you shouldn't be able to see the "backside" of the stones. (Tatarchuk-POM.pdf)
Slide 32: Other Techniques: Parallax Mapping (2/2)
We would like to modify the original texture coordinates (u, v) to better approximate where the intersection point would be on the bumpy surface.
Option 1: sample the height map at point (u, v) to get height h; approximate the region around (u, v) with a surface of constant height h; intersect the eye ray with this approximate surface to get new texture coordinates (u', v').
Option 2: sample the height map in the region around (u, v) to get the normal vector N' (using the same normal-averaging technique we used in bump mapping); approximate the region around (u, v) with the tangent plane; intersect the eye ray with this approximate surface to get (u', v').
Both options produce artifacts when viewed from a steep angle. Other options are discussed here: 2006/papers/TUBudapest-Premecz-Matyas.pdf
(For illustration purposes, a smooth curve shows the varying heights in the neighborhood of point (u, v).)
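Option 1 reduces to a one-line offset once everything is in tangent space. A sketch with illustrative names, assuming the height map stores bumps above the base surface and `view_ts` is the tangent-space vector toward the eye (z pointing away from the surface); intersecting the eye ray with the constant-height plane z = h shifts the coordinates by (view.xy / view.z) * h:

```python
# Parallax mapping, "option 1": slide the texture coordinates along the
# eye ray's projection onto the surface, proportionally to the sampled
# height. height_scale (an assumed tuning knob) controls the effect.

def parallax_offset(u, v, height, view_ts, height_scale=0.05):
    h = height * height_scale
    du = view_ts[0] / view_ts[2] * h
    dv = view_ts[1] / view_ts[2] * h
    return u + du, v + dv
```

Viewed head-on (view_ts = (0, 0, 1)) the coordinates don't move; the more grazing the view, the larger view.xy / view.z becomes, which is also why the single-sample approximation breaks down at steep angles.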
Slide 33: Other Techniques: Steep Parallax Mapping
Traditional parallax mapping only works for low-frequency bumps; it does not handle very steep, high-frequency bumps. Using option 1 from the previous slide: the black eye ray correctly approximates the bump surface, but the gray eye ray misses the first intersection point on the high-frequency bump.
Steep parallax mapping: instead of approximating the bump surface, iteratively step along the eye ray; at each step, check the height-map value to see whether we have intersected the surface. This is more costly than naïve parallax mapping, but it adds support for self-shadowing. Invented by Brown Ph.D. '06 Morgan McGuire and Max McGuire.
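The iterative stepping can be sketched as a small ray march (illustrative names, not the authors' code). Here the surface is stored as a depth map (0 = base surface, 1 = deepest recess, a common convention), `sample_depth` stands in for a texture fetch, and `num_layers` trades cost for accuracy; the loop stops at the first layer where the ray has sunk below the stored surface, which is the intersection the single-sample approximation can miss.

```python
# Steep parallax mapping: march along the tangent-space eye ray through
# num_layers depth slices, advancing the texture coordinates each step,
# until the ray dips below the height field.

def steep_parallax(u, v, view_ts, sample_depth, num_layers=32,
                   height_scale=0.05):
    """sample_depth(u, v) in [0, 1]; view_ts points toward the eye."""
    step = 1.0 / num_layers
    du = view_ts[0] / view_ts[2] * height_scale / num_layers
    dv = view_ts[1] / view_ts[2] * height_scale / num_layers
    depth = 0.0
    while depth < 1.0 and depth < sample_depth(u, v):
        u -= du                       # step deeper along the eye ray
        v -= dv
        depth += step
    return u, v
```

A head-on view leaves the coordinates unchanged; a tilted view walks them across the map until the surface is hit, so each fragment now costs up to `num_layers` texture fetches instead of one.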
Slide 34: Comparison
(Image captions, left to right: creases between bricks look flat; creases look progressively deeper; objects have self-shadowing.)