
1 Week 2 - Friday

2 • What did we talk about last time? • Graphics rendering pipeline • Geometry Stage

3 [image-only slide]

4 [image-only slide]

5 [image-only slide]

6 [image-only slide]

7 • I did not properly describe an important optimization done in the Geometry Stage: backface culling • Backface culling removes all polygons that are not facing toward the screen • A simple dot product is all that is needed • This step is done in hardware in SharpDX and OpenGL • You just have to turn it on • Beware: if you screw up your normals, polygons could vanish
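
In OpenGL, for example, turning it on is a few lines of state setup. A minimal sketch, assuming an active OpenGL context (the cull face and winding values shown are the defaults):

    // Enable backface culling: triangles that face away from the viewer
    // are discarded before any per-pixel work is done.
    glEnable(GL_CULL_FACE);   // culling is off by default
    glCullFace(GL_BACK);      // discard back-facing triangles
    glFrontFace(GL_CCW);      // counter-clockwise vertex order = front-facing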

8 • For API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline • This pipeline contains three conceptual stages: the Application stage produces material to be rendered, the Geometry stage decides what, how, and where to render, and the Rasterizer stage renders the final image

9 [image-only slide]

10 [image-only slide]

11 • The goal of the Rasterizer Stage is to take all the transformed geometric data and set colors for all the pixels in screen space • Doing so is called rasterization or scan conversion • Note that the word pixel is actually a portmanteau of "picture element"

12 • As you should expect, the Rasterizer Stage is also divided into a pipeline of several functional stages: Triangle Setup, Triangle Traversal, Pixel Shading, and Merging

13 • Data for each triangle is computed • This could include normals • This is boring anyway because fixed-operation (non-customizable) hardware does all the work

14 • Each pixel whose center is overlapped by a triangle must have a fragment generated for the part of the triangle that overlaps the pixel • The properties of this fragment are created by interpolating data from the vertices • Again, boring, fixed-operation hardware does this
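
To make that interpolation concrete, here is a small CPU-side sketch in C++ (hypothetical types and names; real GPUs do all of this in fixed-operation hardware). It computes barycentric weights for a pixel center and uses them to blend a per-vertex color:

    struct Vec2  { float x, y; };
    struct Color { float r, g, b; };

    // Signed twice-area of triangle (a, b, c); the sign encodes winding.
    float edge(Vec2 a, Vec2 b, Vec2 c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // If pixel center p lies inside the (non-degenerate) triangle abc,
    // write the color interpolated from the vertex colors and return true.
    bool shadeFragment(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                       Color ca, Color cb, Color cc, Color& out) {
        float area = edge(a, b, c);
        float w0 = edge(b, c, p) / area;   // weight of vertex a at p
        float w1 = edge(c, a, p) / area;   // weight of vertex b at p
        float w2 = edge(a, b, p) / area;   // weight of vertex c at p
        if (w0 < 0 || w1 < 0 || w2 < 0) return false;  // p is not covered
        out = { w0 * ca.r + w1 * cb.r + w2 * cc.r,
                w0 * ca.g + w1 * cb.g + w2 * cc.g,
                w0 * ca.b + w1 * cb.b + w2 * cc.b };
        return true;
    }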

15 • This is where the magic happens • Given the data from the other stages, per-pixel shading (coloring) happens here • This stage is programmable, allowing for many different shading effects to be applied • Perhaps the most important effect is texturing or texture mapping
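
As a small illustration of that programmability, here is a minimal GLSL fragment shader embedded in a C++ string (the variable names and the tint effect are hypothetical). It runs once per fragment and decides that fragment's color:

    // A fragment shader that samples a texture and tints the result.
    const char* fragmentSrc = R"GLSL(
    #version 330 core
    in vec2 vTexCoord;            // texture coordinates interpolated from the vertices
    uniform sampler2D uTexture;   // the currently bound texture
    uniform vec3 uTint;           // a per-draw tint color
    out vec4 fragColor;
    void main() {
        fragColor = vec4(uTint, 1.0) * texture(uTexture, vTexCoord);
    }
    )GLSL";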

16 • Texturing is gluing a (usually) 2D image onto a polygon • To do so, we map texture coordinates onto polygon coordinates • Pixels in a texture are called texels • This is fully supported in hardware • Multiple textures can be applied in some cases
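
A sketch of that hardware support in OpenGL, assuming a 3.3+ context and an RGBA image already decoded into pixels with dimensions width x height (those three names are placeholders):

    // Create a texture object and upload the image so the pixel shader
    // can sample it through a sampler2D uniform.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // Linear filtering; without mipmaps, the default minification
    // filter would leave the texture incomplete.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);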

17 • The final screen data containing the colors for each pixel is stored in the color buffer • The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel • Deeply linked with merging is visibility: the final color of the pixel should be the one corresponding to a visible polygon (and not one behind it)

18 • To deal with the question of visibility, most modern systems use a Z-buffer or depth buffer • The Z-buffer keeps track of the z-values for each pixel on the screen • As a fragment is rendered, its color is put into the color buffer only if its z-value is closer than the current value in the Z-buffer (which is then updated) • This is called a depth test
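
Per fragment, the depth test amounts to a compare-and-update. A conceptual sketch with hypothetical buffer names (in OpenGL the hardware version is simply switched on with glEnable(GL_DEPTH_TEST)):

    // Depth test for a fragment at pixel (x, y): smaller z = closer.
    if (fragment.z < zBuffer[y][x]) {        // closer than what is stored?
        colorBuffer[y][x] = fragment.color;  // keep this fragment's color
        zBuffer[y][x]     = fragment.z;      // remember the new closest depth
    }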

19 • Pros: • Polygons can usually be rendered in any order • Universal hardware support is available • Cons: • Partially transparent objects must be rendered in back-to-front order (painter's algorithm) • Completely transparent values can mess up the Z-buffer unless they are checked • Z-fighting can occur when two polygons have the same (or nearly the same) z-values
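
A common way to honor those transparency caveats in OpenGL (a sketch; the two draw helpers are hypothetical): draw opaque geometry first, then draw transparent geometry back to front with blending enabled and depth writes disabled, so it is still depth-tested but never corrupts the Z-buffer:

    drawOpaqueObjects();                     // hypothetical helper

    glEnable(GL_BLEND);                      // blend fragments with the color buffer
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);                   // still test depth, but do not write it
    drawTransparentObjectsBackToFront();     // hypothetical helper
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);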

20 • A stencil buffer can be used to record a rendered polygon • This stores the part of the screen covered by the polygon and can be used for special effects • Frame buffer is a general term for the set of all buffers • Different images can be rendered to an accumulation buffer and then averaged together to achieve special effects like blurring or antialiasing • A back buffer allows us to render off screen to avoid popping and tearing
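
A sketch of the stencil idea in OpenGL (hypothetical draw helpers; assumes the stencil buffer was cleared to 0): record a polygon's footprint in a first pass, then restrict a second pass to that footprint, as one might for a mirror or portal effect:

    glEnable(GL_STENCIL_TEST);

    // Pass 1: write 1 into the stencil buffer wherever the polygon lands.
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawMaskPolygon();                       // hypothetical helper

    // Pass 2: draw only where the stencil value equals 1.
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawMaskedScene();                       // hypothetical helper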

21 • This pipeline is focused on interactive graphics • Micropolygon pipelines are usually used for film production • Predictive rendering applications usually use ray tracing renderers • The old model was the fixed-function pipeline, which gave little control over the application of shading functions • The book focuses on programmable GPUs, which allow all kinds of tricks to be done in hardware

22 [image-only slide]

23 • GPU architecture • Programmable shading

24 • Read Chapter 3 • Start on Assignment 1, due next Friday, January 30, by 11:59 • Keep working on Project 1, due Friday, February 6, by 11:59

