# Part I: Basics of Computer Graphics: Rendering Polygonal Objects (Read Chapter 1 of Advanced Animation and Rendering Techniques), Chapter 4

## Polygonal Objects

Most renderers work with objects represented by a set of polygons.

Advantages:
- Geometric information can be stored at the vertices.
- Other geometric representations (e.g., spline surfaces) can be converted to polygons by tessellation.
- Fast shading is available in graphics hardware.

Disadvantages:
- Texture-mapping 2D images onto an arbitrary polygonal object is difficult.
- Converting other geometric representations, such as bicubic surface patches, to polygons is really a sampling process, so aliasing appears upon closer examination.
- Many polygons are needed to represent complex objects; e.g., a human skull requires 500,000+ triangles.

## Rendering Steps

1. Polygons are extracted from the database and transformed to world space.
2. The 3D scene is transformed into eye/camera space.
3. Visibility test: backface culling.
4. Unculled polygons are clipped against the 3D viewing frustum.
5. Clipped polygons are projected onto a view plane (image plane).
6. Hidden-surface removal (Z-buffering): projected polygons are shaded by an incremental shading algorithm that consists of
   - rasterization (scan conversion),
   - hidden-surface calculation (depth buffering or depth sorting),
   - shading calculation (what is the color of the pixel?).

Note: Steps 1 to 5 are standard for most renderers, while there are many variations on Step 6, e.g., ray tracing or a radiosity solver.
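The steps above can be sketched for a single triangle (a minimal sketch, not the renderer's actual code: NumPy row-vector convention, hypothetical helper names, and step 4 skipped by assuming the triangle is fully in view):

```python
import numpy as np

def render_triangle(verts_obj, obj_to_world, world_to_eye, project):
    """Steps 1-5 for one triangle given as homogeneous row vectors.
    Step 4 (clipping) is skipped by assuming the triangle is in view;
    step 6 (rasterize, z-buffer, shade) would consume the result."""
    eye = verts_obj @ obj_to_world @ world_to_eye        # steps 1-2
    n = np.cross(eye[1, :3] - eye[0, :3], eye[2, :3] - eye[0, :3])
    if np.dot(n, eye[0, :3]) > 0:                        # step 3: backface cull
        return None
    return project(eye)                                  # step 5: to view plane

tri = np.array([[0., 1., 5., 1.], [1., 0., 5., 1.], [0., 0., 5., 1.]])
identity = np.eye(4)                                     # trivial transforms
project = lambda v: v[:, :2] / v[:, 2:3]                 # toy projection, D = 1
screen = render_triangle(tri, identity, identity, project)
```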

## From Database to World Space

Information stored:
- List of polygon vertices (in object space).
- Connectivity of the vertices.
- Other attributes at each vertex: vertex normal (in object space), color, texture coordinates (in texture space), and any other attribute needed for interpolation.

It is easy to transform the vertices, but what about the vertex normals? In general, a normal transformed by the same matrix as the vertices is no longer perpendicular to the surface (consider, for example, a non-uniform scale). One solution is to recalculate the vertex normals after all vertices have been transformed, but that is time consuming.

## Transforming Vertex Normal

A tangent vector is the difference between two points on the surface: T = P2 - P1.

Property of tangent vectors: given a transformation M (applied to points as row vectors, P' = P M), tangents transform by the same matrix. Proof: just transform the two points on the surface and take their difference:

T' = P2 M - P1 M = (P2 - P1) M = T M

A normal N satisfies N . T = 0 for every tangent T at that point. To preserve this, the normal must be transformed by the inverse transpose:

N' = N (M^-1)^T

Hence N' . T' = N (M^-1)^T M^T T^T = N (M M^-1)^T T^T = N . T = 0.

## Transforming Vertex Normal

The inverse of an arbitrary 3x3 matrix M with columns a, b, c may be obtained from the cross products of pairs of its columns; the rows of the inverse are

M^-1 = (1 / det M) [ b x c ; c x a ; a x b ]

where det M = a . (b x c).
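Both results can be checked numerically; a sketch using NumPy, where the matrix and vectors are made-up examples (a non-uniform scale is exactly the case in which transforming a normal like a tangent fails):

```python
import numpy as np

M = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],          # non-uniform scale along x
              [0.0, 0.0, 1.0]])

# Inverse from cross products of the columns, as in the formula above.
a, b, c = M[:, 0], M[:, 1], M[:, 2]
det = np.dot(a, np.cross(b, c))
M_inv = np.vstack([np.cross(b, c), np.cross(c, a), np.cross(a, b)]) / det

# Tangents transform by M; normals by the inverse transpose.
t = np.array([1.0, -1.0, 0.0])          # a tangent of the plane x + y = const
n = np.array([1.0, 1.0, 0.0])           # its normal: n . t == 0
t_new = t @ M
n_new = n @ M_inv.T
```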

## Backface Elimination (Culling)

Eye space is the most convenient space in which to cull polygons: remove all polygons that face away from the viewer.

For a scene consisting of a single closed convex object, culling solves the hidden-surface problem completely. In most cases it is a preprocess that eliminates invisible polygons.

Visibility test: cull the polygon if N_p . V > 0, where N_p is the polygon normal and V is the line-of-sight (viewing) vector.

How to define the polygon normal: for example, as the cross product of two polygon edges, using a consistent vertex winding order.
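As a sketch, assuming the eye at the origin of eye space and the polygon normal taken from the cross product of two edges (the triangle is a made-up example):

```python
import numpy as np

def backfacing(verts):
    """Cull test N_p . V > 0: N_p from the cross product of two edges,
    V the line-of-sight vector from the eye (at the origin) to the polygon."""
    N_p = np.cross(verts[1] - verts[0], verts[2] - verts[0])
    V = verts[0]                        # eye sits at the origin of eye space
    return np.dot(N_p, V) > 0

tri = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 1.0, 5.0]])
```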

## To Screen Space

Perspective projection describes how light rays reach the eye (or camera). The basic principle follows from similar triangles: a point (x_e, y_e, z_e) in eye space projects onto a view plane at distance D as

x = D x_e / z_e,  y = D y_e / z_e

(these are then normalized by the half-width h of the view window to give x_s and y_s on the following slides).

Screen space is defined within a closed volume, the viewing frustum: the volume of space that is to be rendered.

To Screen Space Why don’t we simply drop the z coordinate as we project everything onto the screen at z=D? We need the depth values (the distance to the eye) to perform hidden surface calculation Consistent with the transformation equations for x s and y s given in the previous slide, it would be nice if we map z e to z s in the following form: z s = A + B / z e, A, B are constants Constraints: 1.B<0, so when z e increases, z s also increases, i.e. if one point is farther than another in eye space (it has larger z e ) it also has larger z value in screen space. Hence, hidden surface removal can be done correctly. 2.Normalize the range of z s values so that z e  [D,F] maps into the range z s  [0,1] 4-9

## To Screen Space

Full perspective transformation:

x_s = D x_e / (h z_e)
y_s = D y_e / (h z_e)
z_s = F (1 - D / z_e) / (F - D)

Check:
- z_e = D gives z_s = 0; z_e = F gives z_s = 1.
- x_e = -h z_e / D gives x_s = -1; x_e = h z_e / D gives x_s = 1.
- y_e = -h z_e / D gives y_s = -1; y_e = h z_e / D gives y_s = 1.
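These checks can be run directly; D, F, h below are example values:

```python
def x_screen(x_e, z_e, D, h):
    """x_s = D x_e / (h z_e) from the full perspective transformation."""
    return D * x_e / (h * z_e)

def z_screen(z_e, D, F):
    """z_s = F (1 - D / z_e) / (F - D)."""
    return F * (1.0 - D / z_e) / (F - D)

D, F, h = 1.0, 100.0, 1.0   # example near/far distances and window half-width
```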

## To Screen Space

Perspective projection is non-linear, so how can it be expressed in matrix form? Separate the transformation into two steps:

1. A linear step:

   x = x_e
   y = y_e
   z = h F z_e / (D (F - D)) - h F / (F - D)
   w = h z_e / D

2. The non-linear perspective division:

   x_s = x / w,  i.e. x_s = D x_e / (h z_e)
   y_s = y / w,  i.e. y_s = D y_e / (h z_e)
   z_s = z / w,  i.e. z_s = F (1 - D / z_e) / (F - D)

One extra coordinate w is added; (x, y, z, w) are homogeneous coordinates. This is also why we need the fourth row in the transformation matrix. We call this a homogeneous transformation.
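The linear step can be packed into a single 4x4 matrix acting on row vectors; a sketch matching the formulas above, with made-up values for D, F, h and the test point:

```python
import numpy as np

def perspective_matrix(D, F, h):
    """Step 1 as a 4x4 matrix acting on row vectors [x_e, y_e, z_e, 1]."""
    P = np.zeros((4, 4))
    P[0, 0] = P[1, 1] = 1.0
    P[2, 2] = h * F / (D * (F - D))   # z = h F z_e / (D (F - D)) - h F / (F - D)
    P[3, 2] = -h * F / (F - D)
    P[2, 3] = h / D                   # w = h z_e / D
    return P

D, F, h = 1.0, 100.0, 1.0
x, y, z, w = np.array([0.5, -0.25, 10.0, 1.0]) @ perspective_matrix(D, F, h)
xs, ys, zs = x / w, y / w, z / w      # step 2: non-linear perspective division
```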

## To Screen Space

Now every transformation in the rendering pipeline can be expressed as a 4x4 matrix.

Interpolating along a line in eye space is not the same as interpolating that line in screen space: as z_e approaches the far clipping plane, z_s approaches 1 more rapidly, so objects in screen space get pushed and distorted towards the back of the viewing frustum.

Why is screen space well suited to the hidden-surface calculation? The calculation need only be performed on points that share the same (x_s, y_s) coordinates, and a simple comparison of z_s values determines which point is in front.

## Clipping

Why clipping? The view point is an arbitrary point in world space, and we don't want to handle objects that do not contribute to the final image. Moreover, the screen-space transformation is not well defined outside the viewing frustum (e.g., when z_e = 0).

Three possible cases in a clipping process:
- The object lies completely outside the viewing frustum: discard it.
- The object lies completely inside: transform and render it.
- The object intersects the viewing frustum: clip, then transform the inside portion and render it.

The clipping operation must be performed on the homogeneous coordinates, before the perspective division:

-w <= x <= w
-w <= y <= w
 0 <= z <= w
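The containment part of the test, as a sketch on homogeneous coordinates (the full clipping step would also compute intersection points for partially inside polygons):

```python
def inside_frustum(p):
    """True if a homogeneous point (x, y, z, w) lies inside the frustum;
    applied before the perspective division, as noted above."""
    x, y, z, w = p
    return -w <= x <= w and -w <= y <= w and 0 <= z <= w
```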

## Pixel-Level Processes

- Rasterization
- Hidden-surface removal (assume Z-buffering)
- Shading calculation

All three processes can be viewed as 2D linear (bilinear) interpolation problems:
- Rasterization: interpolate between vertices to find the x coordinates that define the limits of a span.
- Hidden-surface removal: interpolate screen-space z values to obtain a depth value for each pixel from the vertices' depth values.
- Shading: interpolate the vertex intensities to find an intensity for each pixel.

## Rasterization

Overall structure of the rendering process:

```
for each polygon
    transform each vertex to screen space
    for each scanline within the polygon
        find the x-span by interpolation and rasterize the span
        for each pixel within the span
            perform hidden surface removal
            shade it
```

## Hidden Surface Removal

The practical solution and de facto standard in the graphics community: Z-buffering.

Depth-buffering algorithm:
1. Initialize the Z-buffer to the maximum depth value; initialize the frame buffer to the background color.
2. For each polygon, for each point (x, y) on that polygon:
   a. calculate the depth value z by interpolation;
   b. if z is less than the current Z-buffer value, store z in the Z-buffer, then calculate the pixel color and store it in the frame buffer.
3. Display the frame buffer.
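A minimal sketch of the algorithm on a tiny image, with each "polygon" simplified to a full-screen layer of constant depth (hypothetical data, not real rasterized spans):

```python
import numpy as np

W = H = 4
zbuf = np.full((H, W), np.inf)          # step 1: init to max depth
frame = np.zeros((H, W), dtype=int)     # background color 0

def draw(depth, color):
    """Step 2: keep a fragment only where it is nearer than the stored depth."""
    mask = depth < zbuf
    zbuf[mask] = depth[mask]            # store the new depth
    frame[mask] = color                 # and the pixel color

far = np.full((H, W), 10.0)
near = np.full((H, W), 5.0)
draw(far, 1)                            # farther layer drawn first
draw(near, 2)                           # nearer layer overwrites it
draw(far, 3)                            # drawn last, but fails the depth test
```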

## Shading

Interpolative shading recovers (approximately) the visual appearance of a curved surface that is now represented by flat polygons.

Assumptions:
1. An approximate normal to the original smooth surface is given, or can be computed at each vertex by averaging the normals of the polygons sharing that vertex, e.g. for a vertex shared by four polygons:

   N_v = (N_1 + N_2 + N_3 + N_4) / 4

2. The shading of a particular pixel can be obtained by bilinear interpolation of the appropriate quantities from the adjacent vertices.

## Gouraud Shading

Calculate the intensity at each vertex using a local reflection model. Intensities of interior pixels are then determined by linearly interpolating the vertex intensities. The interpolation equations are:

I_a = [I_1 (y_s - y_2) + I_2 (y_1 - y_s)] / (y_1 - y_2)
I_b = [I_1 (y_s - y_3) + I_3 (y_1 - y_s)] / (y_1 - y_3)
I_s = [I_a (x_b - x_s) + I_b (x_s - x_a)] / (x_b - x_a)
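The three equations, evaluated on a made-up triangle (the vertex intensities and coordinates are arbitrary example values):

```python
def edge(I_top, I_bot, y_top, y_bot, y_s):
    """Edge interpolation in the form of the I_a / I_b equations above."""
    return (I_top * (y_s - y_bot) + I_bot * (y_top - y_s)) / (y_top - y_bot)

I1, I2, I3 = 1.0, 0.0, 0.5              # vertex intensities
y1, y2, y3, ys = 10.0, 0.0, 0.0, 5.0    # scanline halfway down the triangle
Ia = edge(I1, I2, y1, y2, ys)           # left edge
Ib = edge(I1, I3, y1, y3, ys)           # right edge
xa, xb, xs = 2.0, 8.0, 5.0              # span ends and the pixel's x
Is = (Ia * (xb - xs) + Ib * (xs - xa)) / (xb - xa)
```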

## Gouraud Shading

Flaws of Gouraud shading:
1. Highlight anomalies: if a highlight falls in the interior of a polygon, Gouraud shading may fail to reproduce it, because no highlighted intensities are recorded or calculated at the vertices.
2. Mach banding: the human visual system emphasizes intensity changes that occur at a boundary, which creates a banding effect. The bands can be obvious if too few polygons are used to model areas of high curvature.

## Phong Shading

Instead of interpolating the vertex intensities, Phong shading interpolates the vertex normal vectors, which solves the interior-highlight problem. The interpolation equations are:

N_a = [N_1 (y_s - y_2) + N_2 (y_1 - y_s)] / (y_1 - y_2)
N_b = [N_1 (y_s - y_3) + N_3 (y_1 - y_s)] / (y_1 - y_3)
N_s = [N_a (x_b - x_s) + N_b (x_s - x_a)] / (x_b - x_a)

Since the illumination calculation has to be invoked at each interior surface point, Phong shading is more expensive than Gouraud shading.
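A sketch of the per-pixel step: interpolate the normal (here with a simple parametric lerp, which is equivalent in form to the span equations above), re-normalize it, then evaluate the lighting at the pixel. The edge normals and light direction are made-up examples:

```python
import numpy as np

def lerp(A, B, t):
    """Linear interpolation, applied componentwise to whole normal vectors."""
    return (1.0 - t) * A + t * B

N_a = np.array([0.0, 1.0, 0.0])            # interpolated edge normals
N_b = np.array([1.0, 0.0, 0.0])
N_s = lerp(N_a, N_b, 0.5)                  # normal at the pixel
N_s = N_s / np.linalg.norm(N_s)            # re-normalize: lerp shortens vectors
L = np.array([0.0, 1.0, 0.0])              # hypothetical light direction
diffuse = max(float(np.dot(N_s, L)), 0.0)  # Lambertian term, per pixel
```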

## Defects in Phong Shading

1. Interpolation inaccuracies: interpolation done in screen space is not equivalent to interpolation in world space; remember that perspective projection is non-linear. Hence the interpolation is orientation dependent: rotating a polygon on screen changes which vertices each pixel interpolates between, and thus changes its shading.

## Defects in Phong Shading

2. Vertex-normal inaccuracies: the correct interior normal vectors cannot, in general, be recovered by linear interpolation of the vertex normal vectors, so there is no guarantee that the intensity transitions smoothly. For example, the normals interpolated across one patch might not agree with the normal vector stored at a shared vertex P, producing a discontinuity in the calculated intensities.
