Visible Surface Detection
Review: Rendering Pipeline
We are almost finished with the rendering pipeline:
● Modeling transformations
● Viewing transformations
● Projection transformations
● Clipping
● Scan conversion
We now know everything about how to draw a polygon on the screen, except visible surface detection.
Invisible Primitives
Why might a polygon be invisible?
● The polygon is outside the field of view.
● The polygon is backfacing.
● The polygon is occluded by objects nearer the viewpoint.
For efficiency reasons, we want to avoid spending work on polygons that are outside the field of view or backfacing. For efficiency and correctness reasons, we need to know when polygons are occluded.
View Frustum Clipping
● Remove polygons entirely outside the frustum. Note that this includes polygons "behind" the eye (actually behind the near plane).
● Pass through polygons entirely inside the frustum.
● Modify the remaining polygons so that only the portions intersecting the view frustum pass through.
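As a sketch, the trivial accept/reject part of this test against the canonical view volume [-1, 1]^3 might look like the following C code (the Vec3 type and function names are illustrative, not taken from the slides; polygons that straddle the boundary still need a full clipping pass such as Sutherland-Hodgman):

    /* Trivial accept/reject of a polygon against the canonical view volume
       [-1,1]^3 (illustrative sketch; names are not from the slides). */
    typedef struct { float x, y, z; } Vec3;

    /* Returns 1 = entirely inside (pass through), -1 = entirely outside one
       plane (reject), 0 = straddles the boundary (must be clipped). */
    int classify_polygon(const Vec3 *v, int n)
    {
        int all_inside = 1;
        unsigned and_code = ~0u;            /* planes violated by ALL vertices */
        for (int i = 0; i < n; ++i) {
            unsigned code = 0;              /* planes violated by this vertex */
            if (v[i].x < -1.0f) code |= 1;  if (v[i].x > 1.0f) code |= 2;
            if (v[i].y < -1.0f) code |= 4;  if (v[i].y > 1.0f) code |= 8;
            if (v[i].z < -1.0f) code |= 16; if (v[i].z > 1.0f) code |= 32;
            if (code) all_inside = 0;
            and_code &= code;
        }
        if (all_inside) return 1;
        if (and_code)   return -1;
        return 0;
    }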
View Frustum Clipping: Canonical View Volumes
Remember how we defined cameras: an eye point, a look-at point, and a view-up vector, for either an orthographic or a perspective projection. Remember how we defined the viewport: width and height (or field of view and aspect ratio). Together these two things define the rendered volume of space. We standardize the height, length, and width of the view volumes.
View Frustum Clipping: Canonical View Volumes
Review: Rendering Pipeline
With canonical view volumes, the clipping equations are simplified, and perspective and orthographic (parallel) projections have a consistent representation.
Perspective Viewing Transformation
Remember the viewing transformation for perspective projection:
● Translate the eye point to the origin.
● Rotate so that the projection (viewing) direction matches the -z axis.
● Rotate so that the up vector matches y.
To this we add a final step where we scale the volume.
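One standard way to write this composition (a sketch using the usual look-at construction; the symbols below are not taken from the slides):

\[
\mathbf{n} = \frac{\mathbf{P}_{eye} - \mathbf{P}_{lookat}}{\lVert \mathbf{P}_{eye} - \mathbf{P}_{lookat} \rVert}, \qquad
\mathbf{u} = \frac{\mathbf{V}_{up} \times \mathbf{n}}{\lVert \mathbf{V}_{up} \times \mathbf{n} \rVert}, \qquad
\mathbf{v} = \mathbf{n} \times \mathbf{u}
\]
\[
M_{view} = S \, R \, T, \qquad
R = \begin{pmatrix} u_x & u_y & u_z & 0 \\ v_x & v_y & v_z & 0 \\ n_x & n_y & n_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad
T = \begin{pmatrix} 1 & 0 & 0 & -eye_x \\ 0 & 1 & 0 & -eye_y \\ 0 & 0 & 1 & -eye_z \\ 0 & 0 & 0 & 1 \end{pmatrix}
\]

Here T translates the eye to the origin, R aligns the viewing direction with -z and the up vector with y, and S is the final scale to the canonical volume.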
Canonical Perspective Volume Scaling
Clipping
Because both camera types are represented by the same canonical viewing volume, clipping is simplified even further.
Visible Surface Detection
Many algorithms have been developed for visible surface detection.
● Some methods involve more processing time.
● Some methods require more memory.
● Some others apply only to special types of objects.
Classification of Visible-Surface Detection Algorithms
They are classified according to whether they deal with object definitions or with their projected images:
● Object-space methods
● Image-space methods
Most visible-surface algorithms use the image-space method.
Back-Face Detection
Most objects in a scene are typically "solid".
Back-Face Detection (cont.)
On the surface of a solid object, polygons whose normals point away from the camera are always occluded.
Note: back-face detection alone doesn't solve the hidden-surface problem!
Back-Face Detection
This test is based on the inside-outside test: a point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0. We can simplify this test by considering the normal vector N to the polygon surface, which has Cartesian components (A, B, C). If V is a vector in the viewing direction from the eye, then this polygon is a back face if V·N > 0. If the object descriptions have been converted to projection coordinates and the viewing direction is parallel to the zv axis, then V = (0, 0, Vz) and V·N = Vz C, so that we only need to consider the sign of C.
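A minimal sketch of this test in C (the Vec3 type and function name are illustrative, not from the slides):

    /* Back-face test sketch (illustrative names). */
    typedef struct { double x, y, z; } Vec3;

    /* Returns nonzero if the polygon with normal N = (A, B, C) is a back
       face for a viewer looking along direction V (V points from the eye
       into the scene). */
    int is_back_face(Vec3 N, Vec3 V)
    {
        double dot = V.x * N.x + V.y * N.y + V.z * N.z;
        return dot > 0.0;   /* V·N > 0: the normal points away from the viewer */
    }

In projection coordinates with the viewing direction parallel to the zv axis, the dot product reduces to Vz·C, so only the sign of C needs to be checked.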
Depth-Buffer (z-Buffer) Method
This method compares surface depths at each pixel position on the projection plane. Each surface is processed separately, one point at a time across the surface. Of the surfaces S1, S2, and S3 overlapping pixel position (x, y), surface S1 is closest to the view plane, so its surface intensity value at (x, y) is the one saved.
Steps for the Depth-Buffer (z-Buffer) Method (cont.)
Initialize the depth buffer and refresh buffer such that for all buffer positions (x, y): depth(x, y) = 0 and refresh(x, y) = Ibackground.
Steps for the Depth-Buffer (z-Buffer) Method (cont.)
For each position on each polygon surface, compare depth values to the previously stored value in the depth buffer to determine visibility.
● Calculate the depth z for each (x, y) position on the polygon.
● If z > depth(x, y), then set depth(x, y) = z and refresh(x, y) = Isurf(x, y),
where Ibackground is the background intensity value and Isurf(x, y) is the projected intensity value for the surface at (x, y).
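A compact sketch of these steps in C (the buffer sizes, Color type, and function names are assumptions for illustration; the per-polygon scan conversion that produces the (x, y, z) samples is not shown):

    /* Depth-buffer sketch. Depth is normalized so that 0 is the back
       clipping plane; a greater z value means closer to the viewer,
       matching the comparison z > depth(x, y) above. */
    #define WIDTH  640
    #define HEIGHT 480

    typedef struct { unsigned char r, g, b; } Color;

    static double depth_buf[HEIGHT][WIDTH];     /* depth(x, y) */
    static Color  frame_buf[HEIGHT][WIDTH];     /* refresh(x, y) */

    void init_buffers(Color background)
    {
        for (int y = 0; y < HEIGHT; ++y)
            for (int x = 0; x < WIDTH; ++x) {
                depth_buf[y][x] = 0.0;           /* farthest possible depth */
                frame_buf[y][x] = background;    /* Ibackground */
            }
    }

    /* Called for every pixel (x, y) a polygon covers during scan
       conversion, with that polygon's depth z and shaded color. */
    void write_pixel(int x, int y, double z, Color surface_color)
    {
        if (z > depth_buf[y][x]) {               /* closer than stored value */
            depth_buf[y][x] = z;
            frame_buf[y][x] = surface_color;     /* Isurf(x, y) */
        }
    }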
Depth-Buffer (z-Buffer) Calculations
Depth values for a surface position (x, y) are calculated from the plane equation of the surface:
z = (-Ax - By - D) / C
The z value for the next horizontal position (x + 1, y) on the same scan line follows incrementally:
z' = z - A/C
Going down an edge (starting at the top vertex), the intersection with the next scan line is at (x - 1/m, y - 1), where m is the slope of the edge, so the z value down the edge is:
z' = z + (A/m + B) / C
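As a sketch, the incremental update across one scan line might be coded like this (A, B, C, D are the polygon's plane coefficients; write_pixel() is the depth test from the previous sketch; names are illustrative):

    /* Incremental depth across one scan-line span (sketch). */
    typedef struct { unsigned char r, g, b; } Color;
    void write_pixel(int x, int y, double z, Color c);   /* previous sketch */

    void rasterize_span(double A, double B, double C, double D,
                        int y, int x_left, int x_right, Color c)
    {
        /* depth at the left end of the span, from the plane equation */
        double z = (-A * x_left - B * y - D) / C;
        for (int x = x_left; x <= x_right; ++x) {
            write_pixel(x, y, z, c);
            z -= A / C;     /* z' = z - A/C for the next pixel (x + 1, y) */
        }
    }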
Scan-Line Method
This method processes the scene one scan line at a time: for each scan line, all polygon surfaces intersecting that line are examined, and depth comparisons determine which surface is visible at each pixel along the line.
[Figure: two overlapping surfaces S1 and S2 crossed by scan lines 1 and 2, with edge intersections labeled A through G.]
Depth-Sorting Algorithm (Painter's Algorithm)
This method performs the following basic functions:
● Surfaces are sorted in order of decreasing depth.
● Surfaces are scan converted in order, starting with the surface of greatest depth.
Depth-Sorting Algorithm (Painter’s Algorithm) Simple approach: render the polygons from back to front, “painting over” previous polygons:
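A minimal sketch of this back-to-front approach in C (the Polygon type, the depth key, and draw_polygon() are stand-ins for illustration; a real implementation must also handle the ordering problems described below):

    /* Painter's algorithm sketch: sort by decreasing depth, then draw.
       "Depth" here is distance from the viewer, so the largest depth
       (farthest surface) is drawn first. */
    #include <stdlib.h>

    typedef struct { /* vertices, color, ... */ double max_depth; } Polygon;

    static int by_decreasing_depth(const void *a, const void *b)
    {
        double za = ((const Polygon *)a)->max_depth;
        double zb = ((const Polygon *)b)->max_depth;
        return (za < zb) - (za > zb);   /* farthest first */
    }

    void draw_polygon(const Polygon *p);    /* scan converts one polygon */

    void painters_algorithm(Polygon *polys, size_t n)
    {
        qsort(polys, n, sizeof(Polygon), by_decreasing_depth);
        for (size_t i = 0; i < n; ++i)
            draw_polygon(&polys[i]);        /* later polygons paint over earlier ones */
    }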
Painter's Algorithm: Problems
Intersecting polygons present a problem. Even non-intersecting polygons can form a cycle with no valid visibility order.
Analytic Visibility Algorithms
Early visibility algorithms computed the set of visible polygon fragments directly, then rendered the fragments to a display. These are now known as analytic visibility algorithms.
Analytic Visibility Algorithms
What is the minimum worst-case cost of computing the fragments for a scene composed of n polygons?
Answer: O(n^2), since in the worst case each of the n polygons can be split into fragments by every other polygon.
OpenGL Depth-Cueing Function
We can vary the brightness of an object with its distance from the viewing position:
glEnable ( GL_FOG );
glFogi ( GL_FOG_MODE, mode );   // modes: GL_LINEAR, GL_EXP or GL_EXP2
. . .
glDisable ( GL_FOG );
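As a usage sketch (the fog color and the start/end distances below are arbitrary illustrative values, not from the slides), linear depth cueing can be configured with the standard fog parameters:

    /* Linear depth cueing: full brightness at depth 1.0, fully faded to the
       fog color at depth 10.0 (distances chosen only for illustration). */
    GLfloat fog_color[4] = { 0.0f, 0.0f, 0.0f, 1.0f };   /* fade to black */

    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_LINEAR);
    glFogfv(GL_FOG_COLOR, fog_color);
    glFogf(GL_FOG_START, 1.0f);    /* eye-space depth where fading begins */
    glFogf(GL_FOG_END, 10.0f);     /* depth at which objects reach the fog color */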