
1 Computer Graphics Week 13 – Shading Models

2 Shading Models Flat Shading Model: In this technique, each surface is assumed to have one normal vector (usually the average of its vertex normals), as shown in the figure. This normal vector is used in the lighting calculation, and the resulting color is assigned to the entire surface.

3 Notice how flat shading causes a sudden transition in color from polygon to polygon. This happens because each polygon is rendered with a normal vector that changes abruptly between neighboring polygons. Flat shading is fast to render and is used to preview rendering effects or to shade objects that are very small.

4 Flat shading: one normal per polygon
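The flat shading model above can be sketched in a few lines. This is an illustrative example (the vectors, light direction, and base color are made up, not from the slides): the vertex normals are averaged into one face normal, a single diffuse (Lambert) lighting calculation is run, and that one color is assigned to the whole polygon.

```python
# Flat shading sketch: one normal per polygon, one lighting calculation,
# one color for every pixel of the face. All scene values are assumptions.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def face_normal(vertex_normals):
    # Average the vertex normals to get a single normal for the polygon.
    summed = [sum(n[i] for n in vertex_normals) for i in range(3)]
    return normalize(summed)

def flat_shade(vertex_normals, light_dir, base_color):
    n = face_normal(vertex_normals)
    l = normalize(light_dir)
    # Lambert's cosine term, clamped at zero when the light is behind.
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * intensity for c in base_color)

# Every pixel of this polygon receives the same color:
color = flat_shade(
    vertex_normals=[(0, 0, 1), (0.1, 0, 0.99), (-0.1, 0, 0.99)],
    light_dir=(0, 0, 1),
    base_color=(0.8, 0.4, 0.2),
)
```

Because the lighting runs once per polygon rather than once per pixel, this is about as cheap as shading gets, which is why it suits previews and very small objects.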

5 Gouraud Shading Gouraud shading is used when smooth shading effects are needed. In this process, each vertex defines its own normal vector, which is used to calculate the color at that vertex. The resulting vertex colors are interpolated across the interior of the surface to achieve a smooth render. Since adjacent polygons share vertices (and hence the same normal values), polygon edges blend together to produce a smooth look.

6 Gouraud shading: one normal per vertex

7 Gouraud shading produces smooth renders but takes longer to compute than flat shading. Even more sophisticated rendering techniques exist, such as ray tracing, radiosity, and volume rendering.
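The Gouraud process described above can be sketched as follows. This is an illustrative example, not a real rendering API: a diffuse color is computed at each vertex from its own normal, then the three vertex colors are linearly blended into an interior point using barycentric weights (which sum to one).

```python
# Gouraud shading sketch: one lighting calculation per vertex, then linear
# interpolation of the vertex colors across the triangle's interior.
# All scene values below are assumptions for illustration.

def lambert(normal, light_dir, base_color):
    dot = max(0.0, sum(a * b for a, b in zip(normal, light_dir)))
    return tuple(c * dot for c in base_color)

def gouraud_interior_color(vertex_normals, weights, light_dir, base_color):
    # Light each vertex with its own normal ...
    vertex_colors = [lambert(n, light_dir, base_color) for n in vertex_normals]
    # ... then take the weighted average (w0 + w1 + w2 == 1) at the point.
    return tuple(
        sum(w * vc[i] for w, vc in zip(weights, vertex_colors))
        for i in range(3)
    )

# At the centroid (equal weights) the color blends all three vertices:
c = gouraud_interior_color(
    vertex_normals=[(0, 0, 1), (1, 0, 0), (0, 1, 0)],
    weights=(1 / 3, 1 / 3, 1 / 3),
    light_dir=(0, 0, 1),
    base_color=(0.9, 0.9, 0.9),
)
```

The extra cost over flat shading is visible here: lighting runs per vertex instead of per polygon, and an interpolation runs per interior pixel.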

8 Texture Mapping Texture mapping can dramatically alter the surface characteristics of an object. It adds vitality to models and can provide great visual cues for even simple models. For example, if we map a wood grain image onto a model of a chair, the chair will look like it is made of fine wood grain. Texture mapping an image of a brick wall onto a single polygon would give the impression that the polygon had been modeled with many individual bricks, as shown in the figure.

9 Texture mapping is a crucial element in today's games and graphics-oriented programs. Without texture mapping, rendered models would be far from aesthetically pleasing.

Texture mapping a polygon

10 Because texture mapping is so useful, it is provided as a standard rendering technique both in graphics software interfaces and in computer graphics hardware.

11 The Basics of 2D Texture Mapping When an image is mapped onto an object, the color of the object at each pixel is modified by a corresponding color from the image. The image is called a texture map, and its individual elements (pixels) are called texels. The texture map resides in its own texture coordinate space, often referred to as (s,t) space. Typically s and t range from 0 to 1, defining the lower and upper bounds of the image rectangle.

12 The simplest version of texture mapping proceeds as follows. The corner points of the surface element (in world coordinates) are mapped into texture coordinate space using a predetermined mapping scheme. The four resulting points in (s,t) space define a quadrilateral.

13 The color value for the surface element is calculated as a weighted average of the texels that lie within this quadrilateral. For polygonal surfaces, texture coordinates are calculated at the vertices of the defining polygons, and the texture coordinate values within the polygon are linearly interpolated from the vertex values.

14 Texture mapping: from texels to pixels
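The per-polygon steps above can be sketched concretely. This is an illustrative example (the 2x2 texture and the vertex coordinates are made up): (s,t) values assigned at the triangle's vertices are linearly interpolated to an interior point with barycentric weights, and the nearest texel is fetched from the image.

```python
# Sketch: interpolate per-vertex (s, t) coordinates across a triangle,
# then fetch the nearest texel. The tiny 2x2 RGB texture is hypothetical.

texture = [
    [(255, 0, 0), (0, 255, 0)],    # row at t = 0
    [(0, 0, 255), (255, 255, 0)],  # row at t = 1
]

def interpolate_st(vertex_st, weights):
    # Barycentric blend of the three vertices' texture coordinates.
    s = sum(w * st[0] for w, st in zip(weights, vertex_st))
    t = sum(w * st[1] for w, st in zip(weights, vertex_st))
    return s, t

def nearest_texel(tex, s, t):
    height, width = len(tex), len(tex[0])
    # Map [0, 1] texture space onto integer texel indices.
    x = min(int(s * width), width - 1)
    y = min(int(t * height), height - 1)
    return tex[y][x]

# Midpoint of the edge between vertices with (s,t) = (0,0) and (1,0):
s, t = interpolate_st([(0, 0), (1, 0), (0, 1)], (0.5, 0.5, 0.0))
texel = nearest_texel(texture, s, t)
```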

15 The texture space is defined to range from 0 to 1. For texture coordinates outside the range [0,1], the texture data can either clamp or repeat along s and t, as shown in the figure.

16 A texture image, repeated along t and clamped along s
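The two wrap modes behave as small functions of a single coordinate; a minimal sketch (not tied to any real graphics API) of the repeat-along-t, clamp-along-s combination shown in the figure:

```python
# Sketch of the two common wrap modes for coordinates outside [0, 1]:
# "repeat" keeps only the fractional part, "clamp" pins to the edge.

def wrap_repeat(coord):
    # Python's % operator already wraps negatives into [0, 1),
    # e.g. -0.25 % 1.0 == 0.75.
    return coord % 1.0

def wrap_clamp(coord):
    return min(max(coord, 0.0), 1.0)

def wrap_st(s, t):
    # Clamp along s, repeat along t, as in the figure above.
    return wrap_clamp(s), wrap_repeat(t)
```

With `wrap_st(1.3, 2.25)`, the s coordinate pins to the texture's right edge (1.0) while the t coordinate tiles back to 0.25.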

17 After being mapped to a polygon or surface and transformed into screen coordinates, the individual texels of a texture rarely correspond to individual pixels of the final screen image. Depending on the transformations used and the texture mapping applied, a single pixel on the screen can correspond to anything from a tiny portion of a texel (magnification) to a large collection of texels (minification), as shown in the figure.

18 Filtering operations are used to determine which texel values should be used and how they should be averaged or interpolated.

Magnification and minification of texels
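One common filtering operation for the magnification case is bilinear filtering. The sketch below is illustrative (the grayscale texture is hypothetical and texel centers are assumed at half-integer positions): the four texels surrounding the sample point are blended by their distances to it.

```python
# Bilinear filtering sketch: when one screen pixel covers only part of a
# texel (magnification), blend the four surrounding texels by distance.
import math

def bilinear(tex, s, t):
    height, width = len(tex), len(tex[0])
    # Continuous texel-space position (texel centers at integer + 0.5).
    x = s * width - 0.5
    y = t * height - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0

    def texel(i, j):
        # Clamp indices at the texture border.
        return tex[min(max(j, 0), height - 1)][min(max(i, 0), width - 1)]

    c00, c10 = texel(x0, y0), texel(x0 + 1, y0)
    c01, c11 = texel(x0, y0 + 1), texel(x0 + 1, y0 + 1)
    top = [(1 - fx) * a + fx * b for a, b in zip(c00, c10)]
    bot = [(1 - fx) * a + fx * b for a, b in zip(c01, c11)]
    return tuple((1 - fy) * a + fy * b for a, b in zip(top, bot))

# Sampling exactly between a black and a white texel yields mid-gray:
gray = [[(0.0,), (1.0,)], [(0.0,), (1.0,)]]
center = bilinear(gray, 0.5, 0.5)
```

For minification, where many texels fall under one pixel, real systems typically precompute averaged versions of the texture (mip maps) rather than summing texels per pixel.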

19 There are a number of generalizations of this basic texture-mapping scheme. The texture image to be mapped need not be two-dimensional: the same sampling and filtering techniques apply to one- and three-dimensional images as well.

20 In fact, 1D texture mapping is a specialized version of 2D mapping. The texture need not be stored as an array; it may be procedurally generated. Finally, the texture need not represent color at all: it may instead describe transparency or other surface properties to be used in lighting or shading calculations.

21 Mapping Schemes The central question in (2D) texture mapping is how to map the two-dimensional texture image onto an object. In other words, for each polygonal vertex (or other surface facet) of an object, we must answer the question, "Where do I look in the texture map to find its color?"

22 Basic Mapping Scheme Typically, basic shapes are used to define the mapping from world space to texture space. Depending on the situation, we project the object's coordinates onto the geometry of a basic shape such as a plane, a cube, or a sphere. It is useful to transform the bounding geometry so that its coordinates range from zero to one and to use these values as the (s,t) coordinates into texture space.

23 Planar texture mapping: mapping an image to a vase

24 For example, for a planar map shape, we take the (x,y,z) value of a point on the object and throw away (project out) one of the components, which leaves its two-dimensional (planar) coordinates. We then normalize these coordinates by the maximum extents of the object to obtain values between 0 and 1, which serve as the (s,t) lookup into the texture map. The previous figure shows such a mapping.
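The planar scheme just described fits in a few lines. This sketch is illustrative (the bounding box and sample point are made up): one axis of (x,y,z) is dropped, and the remaining two coordinates are normalized by the object's bounding extents into [0,1].

```python
# Planar texture mapping sketch: drop one axis (here z by default), then
# normalize the remaining two coordinates by the object's bounding box so
# the result lands in [0, 1] (s, t) space. All values are assumptions.

def planar_st(point, bounds_min, bounds_max, drop_axis=2):
    # The two axes that survive the projection.
    s_axis, t_axis = [a for a in range(3) if a != drop_axis]
    s = (point[s_axis] - bounds_min[s_axis]) / (bounds_max[s_axis] - bounds_min[s_axis])
    t = (point[t_axis] - bounds_min[t_axis]) / (bounds_max[t_axis] - bounds_min[t_axis])
    return s, t

# A point halfway across a 4 x 2 x 3 bounding box, dropping z:
st = planar_st((0.0, 1.0, 0.7), (-2.0, 0.0, 0.0), (2.0, 2.0, 3.0))
```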

25 Environment Mapping If you look around your room, you may notice that many objects reflect their surroundings. A water bottle, a mobile phone, a CD cover, a picture frame, and so on are only a few examples of reflective objects that could be found in any 3D scene. To make the 3D world more realistic, objects should show reflections of their environment. This reflection is often achieved by a texture-mapping method called environment mapping.

26 The goal of environment mapping is to render an object as if it were reflective, so that the colors on its surface are those reflected from its surroundings. In other words, if you were to look at a perfectly polished, perfectly reflective silver object in a room, you would see the walls, floor, and other objects in the room reflected off the object.

27 (A classic example of environment mapping is the evil, morphing cyborg in the film Terminator 2.) The reflections you see depend on the position of your eye and on the position and surface angles of the silver object. Of course, objects are usually not completely reflective, so the color of the reflection is modulated with the object's own color for a realistic look.

28 Ray tracing could compute such reflections exactly, but it is a very expensive option and usually unnecessary. More often, certain tricks are used to achieve reflective effects. One method often employed is called cube environment mapping, and the idea behind it is very simple.

29 For an object with a reflective surface, you first generate six images of the environment, one in each direction (front, back, up, down, left, right). That is, imagine placing a camera at the center of the object and taking six photographs of the world around it in those directions. Based on the normal vector of the vertex under consideration, the appropriate image is selected.

30 This image is then projected onto the object using planar mapping, as shown in the figure.

Cube mapping

31 The resulting object looks as if it is reflecting its environment! Mathematically, the cube face is identified from the normal vector (nx, ny, nz) at the surface of the object. The component with the greatest magnitude identifies where the surface is "looking", and hence the cube face and texture image to use. The other two components are used to select the texel from that texture by a simple 3D-to-2D projection.

32 If, say, ny is the largest component, then we divide the other components by ny, giving (nx/ny, nz/ny). These values are normalized to give the (s,t) lookup into the texture map.
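The face-selection and projection steps above can be sketched as follows. This is a simplified, illustrative version of the slides' scheme (it divides by the magnitude of the largest component and remaps [-1, 1] to [0, 1]; real cube-map conventions, e.g. in hardware APIs, additionally flip and swap axes per face, which is omitted here).

```python
# Cube-map lookup sketch: the normal's largest-magnitude component picks
# the face; the other two components, divided by it and remapped from
# [-1, 1] to [0, 1], give the (s, t) coordinates into that face's image.

def cube_face_st(n):
    mags = [abs(c) for c in n]
    axis = mags.index(max(mags))          # 0 = x, 1 = y, 2 = z
    major = n[axis]
    sign = "+" if major > 0 else "-"
    face = sign + "xyz"[axis]             # e.g. "+y" for an upward normal
    # Project the other two components onto the face ...
    others = [n[a] for a in range(3) if a != axis]
    # ... and normalize each from [-1, 1] into [0, 1].
    s = (others[0] / abs(major) + 1) / 2
    t = (others[1] / abs(major) + 1) / 2
    return face, s, t

# A mostly-upward normal selects the +y face:
face, s, t = cube_face_st((0.2, 0.9, -0.1))
```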

