si31_2001

8.2 The Story So Far
- We now understand:
  - how to model objects as a set of polygonal facets and create a 3D world (Lectures 1 & 2)
  - how to view these worlds with a camera model, projecting the facets to 2D (Lectures 3 & 4)
  - how to calculate reflection off a surface (Lectures 5 & 6)
  - how to shade a single projected facet using the reflection calculation (Lecture 7)
- Next step: rendering a set of facets
8.3 First a Word on Normals
- A polygon has two normals. If the polygon is part of a solid object, one normal will face out and one will face in. We need a way of distinguishing them.
8.4 Surface Normals
- Each polygon facet is considered to have an inside, an outside, and a single normal
- Which side is which is determined by the order in which the vertices of the facet are specified:
  - look at the object from outside
  - if the polygon vertices P1, P2, P3, P4 are specified in anti-clockwise order, then the normal points from inside to outside
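The winding convention above can be sketched in code (this is my own illustration, not from the lecture): with the vertices listed anti-clockwise as seen from outside, the cross product of two successive edges gives the outward-pointing normal.

```python
# Derive the outward normal from the anti-clockwise vertex ordering:
# the cross product of edge P1->P2 with edge P1->P3 points outwards.

def polygon_normal(p1, p2, p3):
    """Unit normal of a facet from three anti-clockwise vertices."""
    ux, uy, uz = (p2[0]-p1[0], p2[1]-p1[1], p2[2]-p1[2])   # edge P1 -> P2
    vx, vy, vz = (p3[0]-p1[0], p3[1]-p1[1], p3[2]-p1[2])   # edge P1 -> P3
    nx, ny, nz = (uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx)  # u x v
    length = (nx*nx + ny*ny + nz*nz) ** 0.5
    return (nx/length, ny/length, nz/length)

# A unit square in the xy-plane, vertices anti-clockwise when viewed
# from the +z side, so the normal points along +z:
print(polygon_normal((0, 0, 0), (1, 0, 0), (1, 1, 0)))   # (0.0, 0.0, 1.0)
```

Listing the same vertices in the opposite (clockwise) order flips the sign of the cross product, which is exactly why the convention distinguishes the two normals.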
8.5 Rendering Polygons
- We are now ready to consider rendering a set of polygon facets
- For efficiency, we only want to render those that are visible to the camera
8.6 Back Face Culling
- If the facets belong to a solid object (a polyhedron), we do not need to render back-facing polygons
- Here only three facets need to be drawn: those that face towards the camera
8.7 Back Face Culling
- A polygon faces away from the viewer if the angle between the surface normal N and the viewing direction V is less than 90 degrees, ie if V.N > 0
8.8 Back Face Culling
- It is efficient to carry this out in the viewing co-ordinate system:
  - the camera is on the z-axis pointing in the negative z-direction
  - so V = (0, 0, -1)
- Thus the V.N > 0 test becomes a test only on the z-component of the normal vector: Nz < 0
  - ie test whether the z-component of the normal points in the negative z-direction
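The reduced test is a one-liner; a minimal sketch (function name my own), assuming normals are already in viewing co-ordinates with V = (0, 0, -1):

```python
# Back-face culling in viewing co-ordinates: with the camera looking down
# the negative z-axis, V.N > 0 reduces to checking the sign of Nz.

def is_back_facing(normal):
    """True if the facet faces away from the camera and can be culled."""
    nx, ny, nz = normal
    return nz < 0   # equivalent to V.N > 0 with V = (0, 0, -1)

print(is_back_facing((0.0, 0.0, -1.0)))  # True  - faces away, cull it
print(is_back_facing((0.0, 0.0, 1.0)))   # False - faces the camera, keep it
```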
8.9 Back Face Culling
- Back face culling is an extremely important efficiency gain in rendering and is typically the first step in visibility processing
- We are left with a set of front-facing polygons...
8.10 The Next Problem
- Some facets will be obscured by others - we only want to draw (ie shade) the visible polygons
8.11 Solution - Z Buffer Algorithm
- Suppose the polygons have been passed through the projection transformation, with the z-co-ordinate retained (ie the depth information), and suppose z is normalized to the range 0 to 1
- For each pixel (x, y) of the view-plane window, we want to draw the polygon nearest the camera, ie the one with largest z
8.12 Z Buffer Algorithm
- We require two buffers:
  - a frame buffer to hold the colour of each pixel (in terms of RGB)... typically 24 bits
  - a z-buffer to hold the depth information for each pixel... typically 32 bits
- Initialize:
  - the frame buffer to the background colour of the scene: colour(x,y) = (I_RED, I_GREEN, I_BLUE)_background
  - the z-buffer to zero (the back clipping plane): depth(x,y) = 0
8.13 Z Buffer Algorithm
- As each polygon is scan converted and shaded using Gouraud or Phong shading:
  - calculate the depth z for each pixel (x,y) in the polygon
  - if z > depth(x,y), then set: depth(x,y) = z; colour(x,y) = (I_RED, I_GREEN, I_BLUE)_gouraud/phong
- After all polygons are processed, the depth buffer contains the depth of the visible surfaces, and the frame buffer the colour of these surfaces
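The update rule can be sketched as follows. This is a minimal illustration with my own names; polygons are given as pre-computed lists of (x, y, z, colour) pixel fragments, standing in for the scan-conversion step described above.

```python
# Z-buffer core loop: depths run 0 (back clipping plane) to 1 (nearest),
# so a fragment wins the pixel when its z exceeds the stored depth.

WIDTH, HEIGHT = 4, 4
BACKGROUND = (0, 0, 0)

depth = [[0.0] * WIDTH for _ in range(HEIGHT)]          # initialized to 0
colour = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]  # background colour

def draw(fragments):
    """Keep, per pixel, the fragment nearest the camera (largest z)."""
    for x, y, z, rgb in fragments:
        if z > depth[y][x]:
            depth[y][x] = z
            colour[y][x] = rgb

draw([(1, 1, 0.4, (255, 0, 0))])   # far red fragment drawn first
draw([(1, 1, 0.9, (0, 0, 255))])   # nearer blue fragment overwrites it
print(colour[1][1])                # (0, 0, 255)
```

Note that the result is independent of the order in which the polygons are drawn, which is the algorithm's key property.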
8.14 Z Buffer - Strengths and Weaknesses
- A major advantage of the z-buffer algorithm is its simplicity
- A weakness (of now decreasing importance) is the amount of memory required
- Depth calculations have limited precision in complex scenes (the perspective effect is again a problem)
8.15 Transparency
- In practice, polygons may be opaque or semi-transparent
  - in OpenGL, alpha = 1 represents opaque
- Simple rendering:
  - render the opaque polygons first, generating colour(x,y)
  - for each semi-transparent polygon (with opacity alpha), render it into another buffer as polygon(x,y)
  - combine using: (1 - alpha) * colour(x,y) + alpha * polygon(x,y)
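The compositing rule in the last bullet, sketched per RGB channel (function name my own):

```python
# Blend a semi-transparent polygon over the opaque frame-buffer colour,
# weighted by the polygon's opacity alpha (alpha = 1 means fully opaque).

def composite(colour_xy, polygon_xy, alpha):
    """(1 - alpha) * colour + alpha * polygon, per channel."""
    return tuple((1 - alpha) * c + alpha * p
                 for c, p in zip(colour_xy, polygon_xy))

# A half-opaque white polygon over a black background gives mid-grey:
print(composite((0, 0, 0), (255, 255, 255), 0.5))  # (127.5, 127.5, 127.5)
```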
8.16 Better Transparency
- Better results are obtained by storing, for each pixel, the depth and transparency of each surface
- The surfaces can then be composited in back-to-front order to give more accurate images
8.17 Shadows
- Z-buffers also give us a nice way of doing shadows
- The z-buffer is a way of determining what is visible to the camera
- For shadows, we need a way of determining what is visible to the light source
8.18 Shadow Z Buffer
- We require a second z-buffer, called a shadow z-buffer
- Two-step algorithm:
  - the scene is rendered from the light source as viewpoint, with depth information stored in the shadow z-buffer (no need to calculate intensities)
  - the scene is rendered from the camera position, using Gouraud or Phong shading with a z-buffer algorithm... but we need to adjust the colour if a point is in shadow
8.19 Shadow Z Buffer
- To determine whether a point is in shadow:
  - take its position (xO, yO, zO) in the camera view, and transform it into the corresponding position (x'O, y'O, z'O) in the light source view
  - look up the depth, say zL, stored in the shadow z-buffer at the position (x'O, y'O)
  - if zL is closer to the light than z'O, this means some object is nearer the light and therefore the point is in shadow... in this case only the ambient reflection is shown at that point
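A hedged sketch of that lookup (names and the small bias are my own additions; the bias is a common guard against self-shadowing from depth rounding). Depths follow the slides' convention that larger z means nearer.

```python
# Shadow test: the point, already transformed into light-source
# co-ordinates, is compared against the depth the light recorded there.

def in_shadow(point_light_view, shadow_z_buffer, bias=1e-3):
    """point_light_view is (x, y, z) in the light source's view."""
    x, y, z = point_light_view
    z_l = shadow_z_buffer[y][x]   # nearest depth seen by the light at (x, y)
    return z_l > z + bias         # something nearer the light occludes the point

shadow_buf = [[0.8]]  # the light sees a surface at depth 0.8 at pixel (0, 0)
print(in_shadow((0, 0, 0.3), shadow_buf))  # True  - an occluder is nearer the light
print(in_shadow((0, 0, 0.8), shadow_buf))  # False - the point itself is what the light sees
```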