
1 Chapter 8: Implementation of a Renderer

2 Rendering as a Black Box

3 Object-Oriented vs. Image-Oriented
Object-oriented: for (each_object) render(object);
Image-oriented:  for (each_pixel) assign_a_color(pixel);

4 Four Major Tasks
- Modeling
- Geometric processing
- Rasterization
- Display

5 Modeling
- Chapter 6: modeling of a sphere
- Chapter 9: hierarchical modeling
- Chapter 10: curves and surfaces
- Chapter 11: procedural modeling
- Modeling can be combined with clipping to reduce the burden of the renderer

6 Geometric Processing
- Normalization
- Clipping
- Hidden-surface removal (visible-surface determination)
- Shading (normals are combined with lighting information to compute the color at each vertex)

7 Rasterization
- Also called scan conversion
- Texture values are not needed until rasterization

8 Display
- Usually not the concern of the application program
- Dealing with aliasing is one possible task at this stage
- Half-toning (dithering)
- Color correction

9 Implementation of Transformations
Object (world) coordinates → Eye (camera) coordinates → Clip coordinates → Normalized device coordinates → Window (screen) coordinates

10 Viewport Transformation
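The slide's figure is not reproduced here. As a minimal sketch (function and parameter names are illustrative, not from the slides), the viewport transformation maps normalized device coordinates in [-1, 1] to window coordinates for a viewport set with glViewport(x0, y0, w, h):

    void ndc_to_window(float xn, float yn,
                       float x0, float y0, float w, float h,
                       float *xw, float *yw)
    {
        /* scale [-1, 1] to [0, w] (or [0, h]), then offset by the viewport origin */
        *xw = x0 + (xn + 1.0f) * 0.5f * w;
        *yw = y0 + (yn + 1.0f) * 0.5f * h;
    }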

11 Line-Segment Clipping
Primitives that pass through the clipper are accepted; otherwise they are rejected or culled.

12 Cohen-Sutherland Clipping
Replaces most of the expensive floating-point multiplications and divisions with a combination of floating-point subtractions and bit operations.

13 Breaking Up Spaces
Each region is represented by a 4-bit outcode b0 b1 b2 b3, one bit per side of the clipping window (above, below, right, left).

14 Four Possible Cases
Given a line segment, let o1 = outcode(x1, y1) and o2 = outcode(x2, y2); a sketch of the outcode computation follows this list:
1. o1 = o2 = 0: both endpoints are inside the clipping window; accept the segment (AB)
2. o1 ≠ 0 and o2 = 0, or vice versa: one or two intersections must be computed, and the outcode of the intersection point is re-examined (CD)
3. o1 & o2 ≠ 0: both endpoints are on the same outside side of the window; reject the segment (EF)
4. o1 & o2 = 0, with both nonzero: cannot tell; find the outcode of one intersection point (GH, IJ)
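A minimal sketch of the outcode computation (the bit assignments and the window bounds xmin, xmax, ymin, ymax are assumptions for illustration, not the book's code):

    #define ABOVE 8   /* y > ymax */
    #define BELOW 4   /* y < ymin */
    #define RIGHT 2   /* x > xmax */
    #define LEFT  1   /* x < xmin */

    int outcode(double x, double y,
                double xmin, double xmax, double ymin, double ymax)
    {
        int code = 0;
        if (y > ymax)      code |= ABOVE;
        else if (y < ymin) code |= BELOW;
        if (x > xmax)      code |= RIGHT;
        else if (x < xmin) code |= LEFT;
        return code;
    }

    /* Trivial accept: o1 == 0 && o2 == 0.  Trivial reject: (o1 & o2) != 0. */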

15 Discussion
- The Cohen-Sutherland algorithm works best when there are many line segments but few are actually displayed
- The main disadvantage is that it must be used recursively (a segment may be clipped several times)
- How do we compute intersections? The form y = mx + h cannot represent a vertical line

16 Liang-Barsky Clipping
- Represent a line segment parametrically (the form is written out below)
- This form is robust and needs no changes for horizontal or vertical lines
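The formula on the slide was an image; the standard parametric form, with endpoints p1 = (x1, y1) and p2 = (x2, y2), is

    p(α) = (1 − α) p1 + α p2,   0 ≤ α ≤ 1
    x(α) = (1 − α) x1 + α x2
    y(α) = (1 − α) y1 + α y2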

17 Examples
Two segments and the parameter values α1, α2, α3, α4 at which they cross the four edge lines:
- Case 1: 1 > α4 > α3 > α2 > α1 > 0
- Case 2: 1 > α4 > α2 > α3 > α1 > 0

18 Avoid Computing Intersections
- For an intersection with the top of the window: α = (ymax − y1) / (y2 − y1)
- All the tests required by the algorithm can be done by comparing Δymax = ymax − y1 and Δy = y2 − y1; only if an intersection is needed, because a segment has to be shortened, is the division done (see the sketch after this list)
- This way we avoid multiple shortenings of line segments and re-execution of the clipping algorithm
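A minimal sketch of the idea for the top edge (function and variable names are assumptions; assumes y2 > y1 for simplicity):

    /* Decide whether the segment (x1,y1)-(x2,y2) crosses the top edge y = ymax,
       dividing only when the crossing is real and the segment must be shortened. */
    int crosses_top(double x1, double y1, double x2, double y2,
                    double ymax, double *xi)
    {
        double dy    = y2 - y1;      /* denominator of alpha */
        double dymax = ymax - y1;    /* numerator of alpha   */
        if (dymax < 0.0 || dymax > dy)
            return 0;                /* alpha outside [0,1]: no crossing, no division */
        double alpha = dymax / dy;   /* divide only now                              */
        *xi = x1 + alpha * (x2 - x1);
        return 1;
    }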

19 Polygon Clipping
Figure: creation of a single polygon.

20 Dealing with Concave Polygons
Forbid the use of concave polygons, or tessellate them.

21 Sutherland-Hodgman Algorithm
A line-segment clipper can be envisioned as a black box; a minimal sketch of one such box, clipping against a single edge, follows.
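A hedged sketch of clipping a polygon against one edge (here x ≤ xmax); the point type and names are assumptions, and one stage like this is cascaded per window edge:

    typedef struct { double x, y; } point2;

    int clip_against_right(const point2 *in, int n, double xmax, point2 *out)
    {
        int m = 0;
        for (int i = 0; i < n; i++) {
            point2 a = in[i], b = in[(i + 1) % n];   /* polygon edge a -> b        */
            int a_in = (a.x <= xmax), b_in = (b.x <= xmax);
            if (a_in) out[m++] = a;                   /* keep vertices inside       */
            if (a_in != b_in) {                       /* edge crosses the boundary  */
                double t = (xmax - a.x) / (b.x - a.x);
                point2 p = { xmax, a.y + t * (b.y - a.y) };
                out[m++] = p;                         /* emit the intersection point */
            }
        }
        return m;   /* number of output vertices */
    }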

22 Clipping Against the Four Sides

23 Example 1

24 Example 2

25 Clipping of Other Primitives
- Bounding boxes and volumes
- Curves, surfaces, and text
- Clipping in the frame buffer

26 Bounding Boxes and Volumes
- Axis-aligned bounding box (extent)
- Can be used in collision detection! (see the sketch below)
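A minimal sketch (assumed struct and names) of the two quick tests an axis-aligned bounding box allows: trivial rejection against a clipping window, and the box-overlap test used in collision detection:

    typedef struct { double xmin, xmax, ymin, ymax; } aabb;

    int aabb_outside_window(aabb b, aabb win)   /* trivially rejectable? */
    {
        return b.xmax < win.xmin || b.xmin > win.xmax ||
               b.ymax < win.ymin || b.ymin > win.ymax;
    }

    int aabb_overlap(aabb a, aabb b)            /* do the boxes intersect? */
    {
        return !(a.xmax < b.xmin || b.xmax < a.xmin ||
                 a.ymax < b.ymin || b.ymax < a.ymin);
    }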

27 Clipping for Curves and Surfaces
Avoid complex intersection computations by approximating curves with line segments and surfaces with planar polygons, and perform the exact calculation only when it is necessary.

28 Clipping for Text
- Text can be treated as bitmaps and dealt with in the frame buffer
- Or defined like any other geometric object and processed through the standard viewing pipeline
- OpenGL allows both: pixel operations on bitmapped characters, and standard primitives for stroke characters

29 Clipping in the Frame Buffer
- Usually known as scissoring
- It is usually better to clip geometric entities before the vertices reach the frame buffer
- Thus clipping within the frame buffer is required only for raster objects (blocks of pixels)

30 Clipping in Three Dimensions

31 Cohen-Sutherland 3D Clipping
Replace the 4-bit outcode with a 6-bit outcode (the two extra bits are for the front and back clipping planes).

32 Liang-Barsky and Pipeline Clipper
- Liang-Barsky: add the equation z(α) = (1 − α) z1 + α z2
- Pipeline clipper: add clippers for the front and back faces

33 Intersections in 3D
Requires six multiplications and one division.
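The count comes from intersecting the parametric line p(α) = (1 − α) p1 + α p2 with a plane n · (p − p0) = 0, which gives

    α = n · (p0 − p1) / n · (p2 − p1)

that is, two 3D dot products (six multiplications) and one division.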

34 Clipping for Different Viewings
- Orthographic viewing
- Oblique viewing
- Only six divisions are needed!

35 OpenGL Normalization

36 Hidden-Surface Removal
- Object-space approaches
- Image-space approaches

37 Object-Space Approach
For a pair of polygons A and B:
1. A completely obscures B from the camera: display only A
2. B obscures A: display only B
3. A and B are both completely visible: display both A and B
4. A and B partially obscure each other: we must calculate the visible parts of each polygon
With k polygons, this pairwise comparison is O(k²)!

38 Image-Space Approach
- Assuming n × m pixels, the Z-buffer algorithm takes nmk running time, which is O(k)
- May create a more jagged rendering result

39 Back-Face Removal
- In normalized device coordinates, the viewer looks along the z axis, so if the polygon lies in the plane ax + by + cz + d = 0 we just need to check the sign of c
- In OpenGL, use glCullFace() to turn on back-face removal (see the calls sketched below)
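The slide names glCullFace(); in practice culling must also be enabled. A minimal sketch of the usual initialization calls:

    glEnable(GL_CULL_FACE);   /* turn face culling on                   */
    glCullFace(GL_BACK);      /* discard back-facing polygons           */
    glFrontFace(GL_CCW);      /* counter-clockwise winding = front face */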

40 The z-Buffer Algorithm
- The frame buffer is initialized to the background color
- The depth buffer is initialized to the farthest distance
- Normalization may affect the depth accuracy
- Use glDepthFunc() to determine what to do if distances are equal (a sketch of the per-fragment test follows)
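A minimal sketch (not the book's code) of the per-fragment z-buffer test; the buffer layout, resolution, and the convention that smaller z is nearer are assumptions for illustration:

    #define W 640
    #define H 480
    float        depth[H][W];   /* initialized to the farthest distance */
    unsigned int color[H][W];   /* initialized to the background color  */

    void z_buffer_fragment(int x, int y, float z, unsigned int c)
    {
        if (z < depth[y][x]) {   /* closer than what is already stored? */
            depth[y][x] = z;     /* record the new nearest depth        */
            color[y][x] = c;     /* and its color                       */
        }
    }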

41 Incremental z-Buffer Algorithm
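The slide's figure is not reproduced; the standard observation is that for a polygon lying in the plane ax + by + cz + d = 0, moving one pixel along a scan line (a step Δx with y fixed) changes the depth by a constant Δz = −(a/c) Δx, so the depth at successive pixels can be obtained with a single addition each.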

42 Painter's Algorithm
Back-to-front rendering.

43 Depth Sorting – 1/2

44 Depth Sorting – 2/2

45 Two Troublesome Cases for Depth Sorting
These cases may be resolved by partitioning/clipping.

46 The Scan-Line Algorithm
Scan line by scan line, or polygon by polygon?

47 DDA (Digital Differential Analyzer) Algorithm
Pseudocode (m and y are floats, ix is an int; assumes 0 ≤ m ≤ 1):

    float m = (y2 - y1) / (x2 - x1);
    float y = y1;
    for (int ix = x1; ix <= x2; ix++) {
        write_pixel(ix, round(y), line_color);
        y += m;
    }

48 Using Symmetry
- Without using symmetry, lines with m > 1 are undersampled
- With symmetry (swapping the roles of x and y), the case m > 1 is handled

49 Bresenham's Algorithm – 1/4
- The DDA algorithm, although simple, still requires a floating-point addition for each pixel generated
- Bresenham derived a line-rasterization algorithm that avoids all floating-point calculations and has become the standard algorithm used in hardware and software rasterizers

50 Bresenham's Algorithm – 2/4
- Assume 0 ≤ m ≤ 1, and assume we have just placed a pixel at (i + 1/2, j + 1/2)
- Assume the line is y = mx + h
- At x = i + 1/2, this line must pass within one-half the length of the pixel at (i + 1/2, j + 1/2)

51 Bresenham's Algorithm – 3/4

52 Bresenham's Algorithm – 4/4
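The derivations on slides 50-52 were figures and are not reproduced here. As a hedged sketch of the resulting integer-only algorithm for 0 ≤ m ≤ 1 (names are assumptions; write_pixel is the helper assumed from the DDA slide):

    extern void write_pixel(int x, int y, unsigned int color);

    void bresenham(int x1, int y1, int x2, int y2, unsigned int color)
    {
        int dx = x2 - x1, dy = y2 - y1;
        int d  = 2 * dy - dx;              /* initial decision variable       */
        int y  = y1;
        for (int x = x1; x <= x2; x++) {
            write_pixel(x, y, color);
            if (d > 0) {                   /* line passed above the midpoint  */
                y += 1;
                d -= 2 * dx;
            }
            d += 2 * dy;                   /* only integer additions per pixel */
        }
    }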

53 Scan Conversion of Polygons
- One of the major advantages that the first raster systems brought to users was the ability to display filled polygons
- Here, rasterizing a polygon (polygon scan conversion) means filling the polygon with a single color

54 Inside-Outside Testing
Crossing (odd-even) test: draw a semi-infinite ray starting from the point and count the number of intersections with the polygon's edges; an odd count means the point is inside (a sketch follows).
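A minimal sketch (assumed names) of the crossing test: cast a ray in the +x direction from the test point and toggle a flag at every edge crossing; an odd number of crossings means the point is inside:

    int point_in_polygon(const double *x, const double *y, int n,
                         double px, double py)
    {
        int inside = 0;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            /* does edge (j -> i) straddle the horizontal line y = py ... */
            if ((y[i] > py) != (y[j] > py)) {
                /* ... and cross it to the right of the test point? */
                double xi = x[j] + (py - y[j]) * (x[i] - x[j]) / (y[i] - y[j]);
                if (px < xi)
                    inside = !inside;
            }
        }
        return inside;
    }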

55 Winding Number
Color a region if its winding number is not zero.

56 OpenGL and Concave Polygons
Declare a tessellator object and feed it the contour vertices:

    mytess = gluNewTess();
    gluTessBeginPolygon(mytess, NULL);
    gluTessBeginContour(mytess);
    for (i = 0; i < nvertices; i++)
        gluTessVertex(mytess, vertex[i], vertex[i]);
    gluTessEndContour(mytess);
    gluTessEndPolygon(mytess);

57 Polygon Tessellation

58 Scan Conversion with the z-Buffer
- We process each polygon, one scan line at a time
- We use the normalized-device-coordinate line to determine depths incrementally

59 Polygon Filling Algorithms
- Flood fill
- Scan-line fill
- Odd-even fill

60 Flood Fill
First find a seed point inside the polygon, then recursively color its neighbors:

    void flood_fill(int x, int y)
    {
        if (read_pixel(x, y) == WHITE) {
            write_pixel(x, y, BLACK);
            flood_fill(x - 1, y);
            flood_fill(x + 1, y);
            flood_fill(x, y - 1);
            flood_fill(x, y + 1);
        }
    }

The recursion can be removed by working on one scan line at a time.

61 Scan-Line Algorithms
Generate the intersections of each scan line with the polygon edges.

62 Y-X Algorithm
Bucket sort the edge intersections for each scan line.

63 Singularities
We can rule out singularities by ensuring that no vertex has an integer y value:
- Perturb its location slightly, or
- Consider a virtual frame buffer of twice the resolution of the real frame buffer; in the virtual frame buffer, pixels are located only at even values of y and all vertices only at odd values of y
Placing pixel centers halfway between integers, as OpenGL does, is equivalent to using this approach.

64 Antialiasing of Lines
Antialiasing by area averaging.

65 Antialiasing of Polygons
Assign a color based on an area-weighted average of the colors of the three triangles (use the accumulation buffer as in Chapter 7).

66 Time-Domain (Temporal) Aliasing
- Solution: use more than one ray per pixel
- This is often done off-line, as antialiasing is computation-intensive

67 Color Systems
- The same color values may give different impressions on two displays
- With C1 = [R1, G1, B1]^T and C2 = [R2, G2, B2]^T, there is a color-conversion matrix M such that C2 = M C1
- The printing industry usually uses the CMYK color system rather than RGB
- The distance between colors in the color cube is not a measure of how far apart the colors are perceptually; for example, humans are more sensitive to color shifts in blue (hence YUV, Lab)

68 Chromaticity Coordinates
For the tristimulus values T1, T2, T3 of a particular RGB color, its chromaticity coordinates are given below.
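The formula on the slide was an image; the standard definition is

    t1 = T1 / (T1 + T2 + T3),   t2 = T2 / (T1 + T2 + T3),   t3 = T3 / (T1 + T2 + T3)

so t1 + t2 + t3 = 1 and only two of the chromaticity coordinates are independent.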

69 Visible Colors and the Color Gamut of a Display

70 The HLS Color System
Hue, Lightness, and Saturation.

71 The Color Matrix
- It can be viewed as part of the pipeline that converts a color rgba to a new color r′g′b′a′ by a matrix multiplication
- For example, with a suitable definition (shown below), it converts the additive representation of a color to its subtractive representation
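The matrix on the slide was an image; the usual additive-to-subtractive conversion is c′ = 1 − c for each of r, g, b, which (assuming a = 1) can be written as the matrix multiplication

    [ r′ ]   [ −1  0  0  1 ] [ r ]
    [ g′ ] = [  0 −1  0  1 ] [ g ]
    [ b′ ]   [  0  0 −1  1 ] [ b ]
    [ a′ ]   [  0  0  0  1 ] [ a ]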

72 Gamma Correction – 1/2
- The human visual system perceives intensity in a logarithmic manner
- If we want the brightness steps to appear uniformly spaced, the intensities we assign to pixels should increase exponentially

73 Gamma Correction – 2/2
- The intensity I of a CRT is related to the applied voltage V by I ∝ V^γ, or log I = c0 + γ log V, where the constants γ and c0 are properties of the particular CRT
- Two CRTs may have different values for these constants; a lookup table can be used to correct for the difference (a sketch follows)
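A minimal sketch (assumed names) of correcting via a lookup table, mapping each 8-bit value through v_out = 255 · (v_in / 255)^(1/γ):

    #include <math.h>

    void build_gamma_lut(double gamma, unsigned char lut[256])
    {
        for (int i = 0; i < 256; i++)
            lut[i] = (unsigned char)(255.0 * pow(i / 255.0, 1.0 / gamma) + 0.5);
    }
    /* Each value written to the display is then replaced by lut[value]. */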

74 Dithering and Halftoning
- Trade spatial resolution for gray-scale or color resolution
- For a 4×4 group of 1-bit pixels there are 17 dither patterns (0 to 16 pixels on), instead of 2^16 arbitrary patterns
- We should avoid always using the same patterns, which may cause beat or moiré patterns
- glEnable(GL_DITHER) (dithering is normally enabled)
- With dithering, reading pixels may return values different from the ones that were written

