Advanced Topics: Unit 6
Topics to cover: visible-surface detection concepts, back-face detection, the depth-buffer method, illumination, light sources, illumination methods (ambient light, diffuse reflection, specular reflection), and color models: properties of light and the XYZ, RGB, YIQ, and CMY color models.
Introduction
Purpose
- Identify the parts of a scene that are visible from a chosen viewing position.
Classification
- Object-space methods: compare objects, and parts of objects, to each other within the scene definition.
- Image-space methods: visibility is decided point by point at each pixel position on the projection plane.
Two techniques used by most algorithms:
- Sorting: facilitates depth comparisons.
- Coherence: takes advantage of regularities in a scene.
Visible Surface Detection
- A major consideration in generating realistic graphics is determining what is visible within a scene from a chosen viewing position.
- Algorithms that detect visible objects are referred to as visible-surface detection methods.
Visible Surface Detection Concepts
Object-space algorithms: determine which objects are in front of others.
- A resize does not require recalculation.
- Work well for static scenes.
- Visibility relations may be difficult to determine.
Image-space algorithms: determine which object is visible at each pixel.
- A resize requires recalculation.
- Work for dynamic scenes.
Back-Face Detection
- A fast and simple object-space method for locating back faces.
- A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, D if Ax + By + Cz + D < 0.
- When an inside point is along the line of sight to the surface, the polygon must be a back face and so cannot be seen.
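The test above can be sketched as a dot product between the polygon normal N = (A, B, C) and the viewing direction. This is a minimal illustration, not code from the slides; the names `Vec3` and `isBackFace` are invented:

```cpp
// Back-face test sketch: a face is a back face when the viewing
// direction V and the surface normal N satisfy V . N >= 0.
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

bool isBackFace(const Vec3& normal, const Vec3& viewDir) {
    return dot(viewDir, normal) >= 0.0;
}
```

For a viewing system looking along the negative z axis, V = (0, 0, -1), so the test reduces to checking the sign of the normal's z component C: the face is a back face when C <= 0.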
Back-Face Culling
- Back-face removal is expected to eliminate about half of the polygon surfaces in a scene.
- It removes all hidden surfaces only when the scene contains nothing but nonoverlapping convex polyhedra.
Depth-Buffer (z-Buffer) Method
- Object depth is measured from the view plane along the z axis of the viewing system.
- Each surface is processed separately.
- Can be applied to planar and nonplanar surfaces.
- Requires two buffers: a depth buffer and a frame buffer.
Basic concept:
- Project each surface onto the view plane.
- For each pixel position covered, compare depth values.
- Keep the intensity value of the nearest surface.
Depth-Buffer Method: initial states
- Depth buffer: infinity (1.0 in normalized depth coordinates)
- Frame buffer: background color
Depth-Buffer Method
- A commonly used image-space approach.
- Each surface is processed separately, one point at a time.
- Also called the z-buffer method.
- Generally implemented in hardware.
Depth-Buffer Method
- Three surfaces overlap at (x, y); S1 has the smallest depth value and is therefore visible at that pixel.
Depth-Buffer Method
Two buffers are needed:
- Depth buffer (distance information)
- Frame buffer (intensity/color information)
Depth-Buffer Method
Depth-buffer algorithm:
    for all (x, y):
        depthBuff(x, y) = 1.0
        frameBuff(x, y) = backgndColor
    for each polygon P:
        for each position (x, y) on polygon P:
            calculate depth z
            if z < depthBuff(x, y):
                depthBuff(x, y) = z
                frameBuff(x, y) = surfColor(x, y)
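The update step of the algorithm above can be sketched as follows. This is a minimal illustration under assumptions not in the slides: the frame buffer stores one integer color per pixel, and surfaces are supplied as per-pixel depth/color pairs (real scan conversion would compute z from the polygon's plane equation):

```cpp
#include <vector>

// Minimal depth-buffer sketch: keep the color of the nearest
// surface seen so far at each pixel.
struct Buffers {
    int width, height;
    std::vector<double> depth;  // initialized to 1.0, the farthest normalized depth
    std::vector<int> frame;     // initialized to the background color

    Buffers(int w, int h, int backgroundColor)
        : width(w), height(h),
          depth(w * h, 1.0), frame(w * h, backgroundColor) {}

    void plot(int x, int y, double z, int color) {
        int i = y * width + x;
        if (z < depth[i]) {     // nearer than anything drawn so far
            depth[i] = z;
            frame[i] = color;
        }
    }
};
```

Plotting a nearer surface overwrites the stored color; a farther one is simply ignored, so surfaces can be processed in any order.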
Scan-Line Method
- An image-space method.
- An extension of the scan-line algorithm for polygon filling.
- As each scan line is processed, all polygon surface projections intersecting that line are examined to determine which are visible.
Scan-Line Method: example
A-Buffer Method
- An extension of depth-buffer ideas (Antialiased, Area-averaged, Accumulation buffer).
- Drawback of the depth-buffer method: it deals only with opaque surfaces.
- The A-buffer instead references a linked list of surfaces at each position.
- Each position has two fields:
  - Depth field: stores a positive or negative real number.
  - Intensity field: stores surface-intensity information or a pointer value.
A-Buffer Method: organization of an A-buffer
- Single-surface overlap
- Multiple-surface overlap
Scan-Line Method
- An extension of scan-line polygon filling that fills multiple surfaces at the same time.
- Across each scan line, depth calculations are made for each overlapping surface.
Scan-Line Method: surface tables
- Edge table: coordinate endpoints, inverse slope, and pointers into the polygon table.
- Polygon table: plane coefficients, intensity information, pointers into the edge table, and a flag.
Taking advantage of coherence:
- Passing from one scan line to the next, if the active list is unchanged, no new depth calculations are needed.
- Care is needed when surfaces cut through or cyclically overlap each other.
Scan-Line Method: overlapping polygon surfaces
Depth-Sorting Method (Painter's Algorithm)
- Surfaces are sorted in order of decreasing depth.
- Surfaces are scan converted in order, starting with the surface of greatest depth.
- Simplest case: two surfaces with no depth overlap.
Painter's Algorithm
Draw surfaces from back (farthest away) to front (closest):
- Sort surfaces/polygons by their depth (z value).
- Draw objects in order, farthest to closest.
- Closer objects paint over the top of farther objects.
Depth-Sorting Method
Tests for each surface that overlaps with S in depth:
1. The bounding rectangles in the xy plane for the two surfaces do not overlap.
2. Surface S is completely behind the overlapping surface relative to the viewing position.
3. The overlapping surface is completely in front of S relative to the viewing position.
4. The projections of the two surfaces onto the view plane do not overlap.
If any one of these tests is true, no reordering is necessary.
Depth-Sorting Method
- Two surfaces with depth overlap but no overlap in the x direction.
Depth-Sorting Method
- Overlapping surface S' is completely in front of surface S, but S is not completely behind S'.
Depth-Sorting Method
- Two surfaces with overlapping bounding rectangles in the xy plane.
Depth-Sorting Method
- Three surfaces entered into the sorted surface list in the order S, S', S'' should be reordered S', S'', S.
- Surface S has greater depth but obscures surface S'.
BSP-Tree Method
Binary space-partitioning (BSP) tree:
- An efficient method for determining object visibility by painting surfaces onto the screen from back to front.
- Particularly useful when the view reference point changes but the scene is fixed; in VR applications, BSP trees are commonly used for visibility preprocessing.
Basic concept:
- Identify surfaces that are "inside" and "outside" the partitioning plane, relative to the viewing direction.
BSP-Tree Method
BSP-Tree Method
Partitioning planes:
- For polygonal objects, we may choose the planes of the object facets.
- Any polygon intersected by a partitioning plane is split into two parts.
When the BSP tree is complete:
- Surfaces are selected for display from back to front.
- Hardware implementations exist for constructing and processing BSP trees.
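The back-to-front selection can be sketched with a recursive traversal. This structure is invented for illustration (a toy plane of the form z = planeZ stands in for an arbitrary partitioning plane): at each node, paint the subtree on the far side of the plane, then the node's own polygon, then the near subtree.

```cpp
#include <string>
#include <vector>

// One BSP node: a polygon lying in the partitioning plane plus the
// two half-space subtrees.
struct BspNode {
    std::string polygon;
    double planeZ;              // toy partitioning plane: z = planeZ
    BspNode* front = nullptr;   // subtree on the z > planeZ side
    BspNode* back = nullptr;    // subtree on the z < planeZ side
};

// Paint far subtree, then this node's polygon, then the near subtree.
// Which child is "far" depends on which side of the plane the viewer is on.
void backToFront(const BspNode* n, double viewerZ, std::vector<std::string>& out) {
    if (!n) return;
    bool viewerInFront = viewerZ > n->planeZ;
    backToFront(viewerInFront ? n->back : n->front, viewerZ, out);
    out.push_back(n->polygon);
    backToFront(viewerInFront ? n->front : n->back, viewerZ, out);
}
```

Because only the traversal order depends on the viewpoint, the tree itself can be built once and reused as the viewer moves, which is exactly why the method suits fixed scenes with a changing view reference point.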
Area-Subdivision Method
- Takes advantage of area coherence in a scene by locating view areas that represent part of a single surface.
- Recursively divides the total viewing area into smaller and smaller rectangles.
Fundamental tests:
- Identify an area as part of a single surface by comparing surfaces to the boundary of the area.
- If an area is too complex to analyze easily, subdivide it into smaller rectangles.
Area-Subdivision Method
Four possible relationships that a surface can have with a specified area boundary:
- Surrounding surface: one that completely encloses the area.
- Overlapping surface: one that is partly inside and partly outside the area.
- Inside surface: one that is completely inside the area.
- Outside surface: one that is completely outside the area.
Area-Subdivision Method
- Surrounding, overlapping, inside, and outside surfaces.
Area-Subdivision Method
No further subdivision of a specified area is needed if one of the following conditions is true:
1. All surfaces are outside surfaces with respect to the area (check the bounding rectangles of all surfaces).
2. Only one inside, overlapping, or surrounding surface is in the area (check bounding rectangles first, then apply further checks).
3. A surrounding surface obscures all other surfaces within the area boundaries.
Area-Subdivision Method
Identifying an obscuring surrounding surface, method 1: depth sorting
- Order surfaces according to their minimum depth from the view plane.
- Compute the maximum depth of each surrounding surface.
Area-Subdivision Method
Identifying an obscuring surrounding surface, method 2: plane equations
- Calculate depth values at the four vertices of the area for all surrounding, overlapping, and inside surfaces.
- If the calculated depths for one of the surrounding surfaces are less than those for all other surfaces, that surface obscures the area.
- If both identification methods fail, it is faster to subdivide the area than to continue with more complex testing.
Area-Subdivision Method
Subdividing areas along surface boundaries:
- Sort the surfaces according to minimum depth.
- Use the surface with the smallest depth value to subdivide a given area.
Octree Method
- Uses an octree representation of the scene.
- Octree nodes are projected onto the viewing surface in front-to-back order.
Ray-Casting Method
Algorithm: cast a ray from the viewpoint through each pixel and find the front-most surface it intersects.
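The per-pixel step can be sketched as follows: among all surfaces hit by a pixel's ray, the smallest positive intersection distance wins. Spheres are used here only because their intersection test is a simple quadratic; the names and structures are invented for illustration:

```cpp
#include <cmath>

struct Ray    { double ox, oy, oz, dx, dy, dz; };  // origin + unit direction
struct Sphere { double cx, cy, cz, r; int color; };

// Nearest hit distance along the ray, or a negative value for a miss.
double hitDistance(const Ray& ray, const Sphere& s) {
    double ox = ray.ox - s.cx, oy = ray.oy - s.cy, oz = ray.oz - s.cz;
    double b = 2.0 * (ox * ray.dx + oy * ray.dy + oz * ray.dz);
    double c = ox * ox + oy * oy + oz * oz - s.r * s.r;
    double disc = b * b - 4.0 * c;            // a == 1 for a unit direction
    if (disc < 0.0) return -1.0;
    return (-b - std::sqrt(disc)) / 2.0;      // nearer root
}

// Front-most surface for one pixel's ray: smallest positive t wins.
int shadePixel(const Ray& ray, const Sphere* spheres, int n, int background) {
    double best = 1e30;
    int color = background;
    for (int i = 0; i < n; ++i) {
        double t = hitDistance(ray, spheres[i]);
        if (t > 0.0 && t < best) { best = t; color = spheres[i].color; }
    }
    return color;
}
```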
Ray-Casting Method
Comments:
- May (or may not) exploit pixel-to-pixel coherence.
- Conceptually simple, but not generally used on its own.
Curved Surfaces
Visibility detection for objects with curved surfaces:
- Ray casting: calculate the ray-surface intersections and locate the smallest intersection distance.
- Octree representation.
- Approximation: replace each curved surface with a polygon mesh and use one of the other hidden-surface methods.
Curved Surfaces
Curved-surface representations:
- Explicit surface equations.
Surface contour plots:
- Plot the visible-surface contour lines and eliminate the contour sections hidden by the visible parts of the surface.
- One approach maintains a list of ymin and ymax values; a contour point with ymin <= y <= ymax is not visible.
Wireframe Methods
- Wireframe visibility methods are also called visible-line or hidden-line detection methods.
- Visibility tests are applied to surface edges.
Direct approach:
- Line clipping: for each line, depth values are compared against the surfaces to remove hidden sections.
- Hidden-line removal can also use the scan-line or depth-sorting methods, with polygon interiors set to the background color.
Wireframe Methods
- (a) A line passing behind a surface. (b) A line penetrating a surface.
Properties of Light
What is light?
- "Light" is a narrow frequency band of the electromagnetic spectrum:
- Red: 3.8 x 10^14 Hz
- Violet: 7.9 x 10^14 Hz
Spectrum of Light
- Monochromatic light can be described by its frequency f or wavelength λ, related by c = λf (c = the speed of light).
- Normally, a ray of light contains many different waves with individual frequencies.
- The associated distribution of intensity per wavelength is referred to as the spectrum of a given ray or light source.
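The relation c = λf can be exercised numerically with the slide's frequencies. This is a quick illustrative helper, not part of the original material:

```cpp
// Wavelength from frequency via c = lambda * f.
constexpr double kSpeedOfLight = 2.998e8;  // m/s

double wavelengthMeters(double frequencyHz) {
    return kSpeedOfLight / frequencyHz;
}
```

For the violet end at 7.9 x 10^14 Hz, this gives a wavelength of roughly 3.8 x 10^-7 m (about 380 nm).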
The Human Eye
Psychological Characteristics of Color
- Dominant frequency: the hue, the perceived color.
- Brightness: the total light energy (the area under the spectral curve).
- Purity (saturation): how close a light appears to be to a pure spectral color such as red; purity = E_D - E_W, where E_D is the dominant energy density and E_W the white-light energy density.
- Chromaticity: refers collectively to the two properties describing color characteristics, purity and dominant frequency.
Intuitive Color Concepts
- Artists create color mixtures by combining color pigments (hues) with white and black pigments, producing the shades, tints, and tones seen in a scene.
- Shades: add black pigment to a pure color; the more black pigment, the darker the shade.
- Tints: add white pigment to the original color, making it lighter as more white is added.
- Tones: produced by adding both black and white pigments.
Color-Matching Experiments
- Observers had to match a test light by combining three fixed primaries.
- Goal: find the unique RGB coordinates for each stimulus.
Tristimulus Values
- The values R_Q, G_Q, and B_Q for a stimulus Q that fulfill Q = R_Q R + G_Q G + B_Q B are called the tristimulus values of Q.
- The CIE RGB primaries: R = 700 nm, G = 546.1 nm, B = 435.8 nm.
Negative Light in a Color-Matching Experiment
- If a match using only positive RGB values proved impossible, observers could simulate subtracting red from the match side by adding it to the test side.
Color Models
- A color model is a method for explaining the properties or behavior of color within some particular context.
- Combining light from two or more sources with different dominant frequencies, and varying the intensities, generates a range of additional colors.
Primary colors:
- Three primaries are sufficient for most purposes; they are the hues we choose for the sources.
- The color gamut is the set of all colors that can be produced from the primary colors.
- Complementary colors are pairs that combine to produce white: red and cyan, green and magenta, blue and yellow.
Color Matching
- Colors in the vicinity of 500 nm can be matched only by subtracting an amount of red light from a combination of blue and green lights.
- Thus, an RGB color monitor cannot display colors in the neighborhood of 500 nm.
CIE XYZ
Problem solution: the XYZ color system
- A tristimulus system derived from RGB, standardized in 1931 by the CIE (Commission Internationale de l'Eclairage).
- Based on three imaginary primaries; all three lie outside the human visual gamut.
- Only positive XYZ values can occur.
Transformation from CIE RGB to XYZ
- A projective transformation specifically designed so that Y = V, the luminous-efficiency function.
- XYZ to CIE RGB uses the inverse matrix.
- Any XYZ-to-RGB matrix for a particular display is device dependent.
- X = 0.723 R + … G + … B
- Y = 0.265 R + … G + … B
- Z = 0.000 R + … G + … B
The XYZ Model
- The set of CIE primaries is referred to as the XYZ model.
- In XYZ color space, a color C(λ) is represented as C(λ) = (X, Y, Z), where the tristimulus values are computed from the color-matching functions:
  X = k ∫ I(λ) f_X(λ) dλ, and similarly for Y and Z
- k = 683 lumens/watt, I(λ) = spectral radiance, f_X, f_Y, f_Z = color-matching functions.
The XYZ Model: normalized values
- Normalize the amounts of each primary against the sum X + Y + Z, which represents the total light energy:
  x = X / (X + Y + Z), y = Y / (X + Y + Z), z = Z / (X + Y + Z)
- Since z = 1 - x - y, a color can be represented with just x and y.
- x and y are called the chromaticity values; they depend only on hue and purity.
- Y carries the luminance.
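The normalization is a direct computation; the sketch below transcribes it (names invented for illustration):

```cpp
// Chromaticity coordinates from XYZ tristimulus values:
// x = X/(X+Y+Z), y = Y/(X+Y+Z); z follows as 1 - x - y.
struct Chromaticity { double x, y; };

Chromaticity chromaticity(double X, double Y, double Z) {
    double sum = X + Y + Z;     // total light energy
    return { X / sum, Y / sum };
}
```

Because the sum is divided out, any uniform scaling of (X, Y, Z) maps to the same (x, y) point, which is exactly why luminance is lost in the chromaticity diagram discussed below.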
RGB vs. XYZ
The CIE Chromaticity Diagram
- A tongue-shaped curve formed by plotting the normalized amounts x and y for the colors in the visible spectrum.
- Points along the curve are the spectral (pure) colors.
- The purple line is the line joining the red and violet spectral points.
- Illuminant C is plotted for a white light source and used as a standard approximation for average daylight.
The CIE Chromaticity Diagram
- Luminance values are not available in the diagram because of the normalization: colors with different luminance but the same chromaticity map to the same point.
Uses of the CIE chromaticity diagram:
- Comparing color gamuts for different sets of primaries.
- Identifying complementary colors.
- Determining the purity and dominant wavelength of a given color.
- Color gamuts appear on the diagram as straight-line segments or polygon regions.
The CIE Chromaticity Diagram
Color gamuts:
- All colors along the straight line joining C1 and C2 can be obtained by mixing C1 and C2; the greater the proportion of C1, the closer the resultant color is to C1.
- Three primaries C3, C4, C5 generate the colors inside or on the edges of their triangle.
- No set of three primaries can be combined to generate all visible colors.
The CIE Chromaticity Diagram
Complementary colors:
- Represented on the diagram as two points on opposite sides of, and collinear with, illuminant C.
- The distances of the two colors C1 and C2 from C determine the amount of each needed to produce white light.
The CIE Chromaticity Diagram
Dominant wavelength:
- Draw a straight line from C through the color point to a spectral color on the curve; that spectral color is the dominant wavelength.
- Special case: for a point Cp between C and the purple line, take the complementary spectral color Csp as the dominant wavelength.
Purity:
- For a point C1, purity is the relative distance of C1 from C along the straight line joining C to the spectral point Cs: purity = d_C1 / d_Cs.
Complementary Colors
Additive:
- Blue is one-third of white light; yellow (red + green) is two-thirds.
- When blue and yellow light are added together, they produce white light.
- Pairs of complementary colors: blue and yellow, green and magenta, red and cyan.
Subtractive:
- Orange (between red and yellow) <-> cyan-blue; green-cyan <-> magenta-red.
The RGB Color Model
Basis of the RGB color model:
- The tristimulus theory of vision: human eyes perceive color through the stimulation of three visual pigments in the cones of the retina, with sensitivities near red, green, and blue.
- The model can be represented by the unit cube defined on the R, G, and B axes.
The RGB Color Model
- An additive model, as with the XYZ color system.
- Each color point within the unit cube can be represented as a weighted vector sum of the primary colors, using unit vectors R, G, and B: C(λ) = (R, G, B) = R·R + G·G + B·B.
- Chromaticity coordinates for the National Television System Committee (NTSC) standard define the RGB primaries.
Subtractive RGB Colors
- Yellow absorbs blue; magenta absorbs green; cyan absorbs red.
- White minus blue minus green = red.
The CMY and CMYK Color Models
- Color models for hard-copy devices, such as printers, which produce a color picture by coating paper with color pigments.
- The color patterns on the paper are seen by reflected light, a subtractive process.
The CMY parameters:
- A subtractive color model formed with the primary colors cyan, magenta, and yellow.
- In the unit-cube representation for the CMY model, white is at the origin.
The CMY and CMYK Color Models
Transformation between the RGB and CMY color spaces:
- RGB to CMY: (C, M, Y) = (1, 1, 1) - (R, G, B)
- CMY to RGB: (R, G, B) = (1, 1, 1) - (C, M, Y)
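With all components in [0, 1], each space is simply the complement of the other, so the conversion is one subtraction per channel. A minimal sketch (type names invented):

```cpp
// RGB <-> CMY: each space is the complement of the other,
// assuming components in [0, 1].
struct Rgb { double r, g, b; };
struct Cmy { double c, m, y; };

Cmy rgbToCmy(const Rgb& in) { return { 1.0 - in.r, 1.0 - in.g, 1.0 - in.b }; }
Rgb cmyToRgb(const Cmy& in) { return { 1.0 - in.c, 1.0 - in.m, 1.0 - in.y }; }
```

For example, pure red (1, 0, 0) maps to (0, 1, 1): no cyan ink (cyan absorbs red), full magenta and yellow.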
The YIQ and Related Color Models
- YIQ is the NTSC color encoding used to form the composite video signal.
YIQ parameters:
- Y is the luminance, the same as the Y component in the CIE XYZ color space, calculated from RGB as Y = 0.299 R + 0.587 G + 0.114 B.
- Chromaticity information (hue and purity) is carried in the I and Q parameters, calculated by subtracting the luminance from the red and blue components: I = R - Y, Q = B - Y.
- Separating luminance (brightness) from color is useful because we perceive brightness differences better than color differences.
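The luminance and the simplified color differences above can be transcribed directly. This sketch follows the slide's simplified I = R - Y, Q = B - Y form (the full NTSC I and Q also apply a rotation and scaling); the names are invented:

```cpp
// NTSC luminance plus the simplified chrominance differences
// from the slide: I = R - Y, Q = B - Y.
struct Yiq { double y, i, q; };

Yiq rgbToYiqSimple(double r, double g, double b) {
    double y = 0.299 * r + 0.587 * g + 0.114 * b;  // standard NTSC luminance
    return { y, r - y, b - y };
}
```

Note that for any gray input (r = g = b), both differences vanish: all the information sits in Y, which is why the encoding degrades gracefully on black-and-white receivers.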
The YIQ and Related Color Models (2)
Transformation between the RGB and YIQ color spaces:
- A transformation matrix converts from RGB to YIQ.
- The YIQ-to-RGB conversion uses the inverse of that matrix.
The HSV Color Model
- Interfaces for selecting colors often use a color model based on intuitive concepts rather than a set of primary colors.
The HSV parameters:
- The color parameters are hue (H), saturation (S), and value (V).
- They are derived by relating the HSV parameters to directions in the RGB cube: viewing the RGB cube along the diagonal from the white vertex to the origin yields a color hexagon.
The HSV Color Model: the HSV hexcone
- Hue is represented as an angle about the vertical axis, ranging from 0° at red around to 360°.
- The saturation parameter designates the purity of a color.
- Value is measured along the vertical axis through the center of the hexcone.
HSV Color Model Hexcone
Color components:
- Hue (H) in [0°, 360°]
- Saturation (S) in [0, 1]
- Value (V) in [0, 1]
HSV Color Definition
- Select a hue, with S = 1 and V = 1.
- Adding black pigment corresponds to decreasing V.
- Adding white pigment corresponds to decreasing S.
- A cross section of the HSV hexcone shows the regions for shades, tints, and tones.
HSV
- Hue is the most obvious characteristic of a color.
- Chroma is the purity of a color: high-chroma colors look rich and full, while low-chroma colors look dull and grayish. Chroma is sometimes called saturation.
- Value is the lightness or darkness of a color. Light colors are sometimes called tints, and dark colors shades.
Transformation
To move from RGB space to HSV space, can we use a matrix? No: the transformation is non-linear.
The HSV Color Model: transformation between HSV and RGB color spaces
Procedure for mapping HSV into RGB:

    #include <cmath>

    class rgbSpace { public: float r, g, b; };
    class hsvSpace { public: float h, s, v; };

    void hsvToRgb (const hsvSpace& hsv, rgbSpace& rgb)
    {
        /* HSV and RGB values are in the range from 0 to 1.0 */
        float h = hsv.h, s = hsv.s, v = hsv.v;
        if (s <= 0.0)                       /* gray scale if s = 0 */
            rgb.r = rgb.g = rgb.b = v;
        else {
            if (h >= 1.0) h = 0.0;
            h *= 6.0;                       /* hue sector 0..5 */
            int k = (int) floor (h);
            float f = h - k;
            float aa = v * (1.0 - s);
            float bb = v * (1.0 - s * f);
            float cc = v * (1.0 - s * (1.0 - f));
            switch (k) {
                case 0: rgb.r = v;  rgb.g = cc; rgb.b = aa; break;
                case 1: rgb.r = bb; rgb.g = v;  rgb.b = aa; break;
                case 2: rgb.r = aa; rgb.g = v;  rgb.b = cc; break;
                case 3: rgb.r = aa; rgb.g = bb; rgb.b = v;  break;
                case 4: rgb.r = cc; rgb.g = aa; rgb.b = v;  break;
                case 5: rgb.r = v;  rgb.g = aa; rgb.b = bb; break;
            }
        }
    }
The HSV Color Model: transformation between HSV and RGB color spaces
Procedure for mapping RGB into HSV:

    class rgbSpace { public: float r, g, b; };
    class hsvSpace { public: float h, s, v; };
    const float noHue = -1.0;

    inline float minF (float a, float b) { return (a < b) ? a : b; }
    inline float maxF (float a, float b) { return (a > b) ? a : b; }

    void rgbToHsv (const rgbSpace& rgb, hsvSpace& hsv)
    {
        float r = rgb.r, g = rgb.g, b = rgb.b;
        float minRGB = minF (r, minF (g, b));
        float maxRGB = maxF (r, maxF (g, b));
        float deltaRGB = maxRGB - minRGB;

        hsv.v = maxRGB;
        hsv.s = (maxRGB != 0.0) ? deltaRGB / maxRGB : 0.0;
        if (hsv.s <= 0.0)
            hsv.h = noHue;                  /* achromatic: hue undefined */
        else {
            float h;
            if (r == maxRGB)      h = (g - b) / deltaRGB;
            else if (g == maxRGB) h = 2.0 + (b - r) / deltaRGB;
            else                  h = 4.0 + (r - g) / deltaRGB;
            h *= 60.0;
            if (h < 0.0) h += 360.0;
            hsv.h = h / 360.0;              /* normalize hue to [0, 1) */
        }
    }
The HLS Color Model
- Another model based on intuitive color parameters, used by the Tektronix Corporation.
- The color space has a double-cone representation.
- Uses hue (H), lightness (L), and saturation (S) as parameters.
Color Model Summary
Colorimetry:
- CIE XYZ: contains all visible colors.
Device color systems:
- RGB: additive device color space (monitors).
- CMYK: subtractive device color space (printers).
- YIQ: NTSC television (Y = luminance, I = R - Y, Q = B - Y).
Color ordering systems:
- HSV, HLS: for user interfaces.
Comparison of the RGB, CMY(K), YIQ, HSV, and HSL models
Color Selection and Applications
- Graphics packages provide color capabilities in a way that aids users in making color selections, for example sliders and color wheels for the RGB components instead of numerical values.
Color application guidelines:
- Displaying a blue pattern next to a red pattern can cause eye fatigue; prevent this by separating the colors or by using colors from one half or less of the color hexagon in the HSV model.
- A smaller number of colors produces a better-looking display.
- Tints and shades tend to blend better than pure hues.
- Gray, or the complement of one of the foreground colors, is usually best for the background.
How different are the colors of square A and square B?
They are the same! Don't believe me?
What color is this blue cube? How about this yellow cube?
Want to see it slower? What color is this blue cube? How about this yellow cube?
Even slower? How about this yellow cube? What color is this blue cube?
So what color is it? What color is this blue cube? How about this yellow cube? It's gray!
Humans Only Perceive Relative Brightness
Cornsweet Illusion
- The left part of the picture seems darker than the right; in fact they have the same brightness.
- In the same image with the edge in the middle hidden, the left and right parts now appear to be the same color.
Self-Animated Images