© University of Wisconsin, CS559 Spring 2004

1 Last Time
Local lighting model: diffuse illumination, specular illumination, ambient illumination.
4/1/04 © University of Wisconsin, CS559 Spring 2004

2 Today
Light sources
Shading interpolation
Mapping techniques

3 Light Sources
Two aspects of light sources are important for a local shading model:
Where is the light coming from (the L vector)?
How much light is coming (the I values)?
Various light source types give different answers to these questions:
Point light source: light from a specific point
Directional: light from a specific direction
Spotlight: light from a specific point, with intensity that depends on the direction
Area light: light from a continuum of points (later in the course)

4 Point and Directional Sources
Point light: The L vector depends on where the surface point is located, so it must be normalized per point - slightly expensive. To specify an OpenGL light at (1,1,1):

GLfloat light_position[] = { 1.0, 1.0, 1.0, 1.0 };
glLightfv(GL_LIGHT0, GL_POSITION, light_position);

Directional light: L(x) = Llight; the L vector does not change over points in the world. An OpenGL light traveling in direction (1,1,1) (L is in the opposite direction):

GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 };
glLightfv(GL_LIGHT0, GL_POSITION, light_position);

5 Spotlights
Point source, but the intensity depends on L:
Requires a position: the location of the source
Requires a direction: the center axis of the light
Requires a cut-off: how broad the beam is
Requires an exponent: how the light tapers off at the edges of the cone
Intensity scaled by (L·D)^n

glLightfv(GL_LIGHT0, GL_POSITION, light_posn);
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, light_dir);
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0);
glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, 1.0);

6 Shading so Far
So far, we have discussed illuminating a single point.
We have assumed that we know:
The point
The surface normal
The viewer location (or direction)
The light location (or direction)
But commonly, normal vectors are only given at the vertices.
It is also expensive to compute lighting for every point.

7 Shading Interpolation
Take information specified or computed at the vertices, and somehow propagate it across the polygon (triangle).
Several options:
Flat shading
Gouraud interpolation
Phong interpolation

8 Flat Shading
Compute shading at a representative point and apply it to the whole polygon; OpenGL uses one of the vertices.
Advantages: Fast - one shading computation per polygon, fill the entire polygon with the same color.
Disadvantages: Inaccurate. What are the artifacts?

9 Gouraud Shading
Shade each vertex with its own location and normal, then linearly interpolate the color across the face.
Advantages:
Fast: incremental calculations when rasterizing
Much smoother - use one normal per shared vertex to get continuity between faces
Disadvantages: What are the artifacts? Is it accurate?
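The interpolation step can be written in closed form (a sketch with our own names; real rasterizers do this incrementally per scanline): a pixel with barycentric coordinates (b0, b1, b2) in the triangle gets the barycentric blend of the three vertex colors, one channel at a time.

```c
/* Gouraud interpolation of one color channel across a triangle:
   b0+b1+b2 = 1 are the pixel's barycentric coordinates, and
   c0, c1, c2 are the colors computed at the three vertices. */
static double gouraud(double b0, double b1, double b2,
                      double c0, double c1, double c2) {
    return b0*c0 + b1*c1 + b2*c2;   /* linear in the barycentrics */
}
```

At a vertex the blend returns that vertex's color exactly; at the centroid it returns the average of the three.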

10 Phong Interpolation
Interpolate normals across faces, and shade each pixel individually.
Advantages: High quality, narrow specularities.
Disadvantages:
Expensive
Still an approximation for most surfaces
Not to be confused with Phong's specularity model.


12 Shading and OpenGL
OpenGL defines two particular shading models, which control how colors are assigned to pixels:
glShadeModel(GL_SMOOTH) interpolates between the colors at the vertices (the default: Gouraud shading)
glShadeModel(GL_FLAT) uses a constant color across the polygon
Phong shading requires a significantly greater programming effort - beyond the scope of this class - and also requires fragment shaders on programmable graphics hardware.

13 The Current Generation
Current hardware allows you to break from the standard illumination model.
Programmable vertex shaders and fragment shaders let you write a small program that determines how the color of a vertex or pixel is computed.
Your program has access to the surface normal and position, plus anything else you care to give it (like the light).
You can add, subtract, take dot products, and so on.
Fragment shaders are most useful for lighting because they operate on every pixel.

14 The Full Story
We have only touched on the complexities of illuminating surfaces; the common model is hopelessly inadequate for accurate lighting (but it's fast and simple).
Consider two sub-problems of illumination:
Where does the light go? Light transport
What happens at surfaces? Reflectance models
Other algorithms address the transport problem, the reflectance problem, or both.
Much later in class, or a separate course (CS 779).

15 Mapping Techniques
Consider the problem of rendering a soup can:
The geometry is very simple - a cylinder
But the color changes rapidly, with sharp edges
With the local shading model so far, the only place to specify color is at the vertices, so a soup can would need thousands of polygons for a simple shape.
The same goes for an orange: a simple shape, but complex normal vectors.
Solution: mapping techniques use simple geometry modified by a detail map of some type.

16 Texture Mapping
The soup tin is easily described by pasting a label on the plain cylinder.
Texture mapping associates the color of a surface point with the color at a point in an image: the texture.
Soup tin: each point on the cylinder gets the label's color.
Question to address: which point of the texture do we use for a given point on the surface?
Establish a mapping from surface points to image points.
Different mappings are common for different shapes.
We will, for now, just look at triangles (polygons).

17 Example Mappings

18 Basic Mapping
The texture lives in a 2D space.
Parameterize points in the texture with 2 coordinates: (s,t). These are just what we would call (x,y) if we were talking about an image, but we wish to avoid confusion with the world (x,y,z).
Define the mapping from (x,y,z) in world space to (s,t) in texture space.
To find the color in the texture, take an (x,y,z) point on the surface, map it into texture space, and use the result to look up the color of the texture.
Samples in a texture are called texels, to distinguish them from pixels in the final image.
With polygons:
Specify (s,t) coordinates at vertices
Interpolate (s,t) for other points based on the given vertices

19 Texture Interpolation
Specify where the vertices in world space map to in texture space.
A texture coordinate is the location in texture space that corresponds to the vertex.
Linearly interpolate the mapping for other points in world space: straight lines in world space go to straight lines in texture space.
(Figure: a texture map with (s,t) texture coordinates, and a triangle in world space with vertex coordinates.)

20 Interpolating Coordinates
(Figure: a triangle whose vertices carry both world and texture coordinates - (x1,y1) with (s1,t1), (x2,y2) with (s2,t2), (x3,y3) with (s3,t3).)

21 Steps in Texture Mapping
Polygons (triangles) are specified with texture coordinates at the vertices. This is a modeling step, but there are ways to automate it for common shapes.
When rasterizing, interpolate the texture coordinates to get the texture coordinate at the current pixel (previous slide).
Look up the texture map using those coordinates: just round the texture coordinates to integers and index the image.
Take the color from the map and put it in the pixel. There are many ways to put it into a pixel (more later).
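The lookup step above can be sketched as a nearest-neighbor fetch (our own helper, with an invented row-major grayscale texture): scale (s,t) in [0,1] by the texture dimensions and round to the nearest texel.

```c
/* Nearest-neighbor texel lookup: (s,t) in [0,1] indexes a w x h
   row-major single-channel texture.  Rounding to the nearest
   integer is the "just round the coordinates" step above. */
static unsigned char texel(const unsigned char *tex, int w, int h,
                           double s, double t) {
    int i = (int)(s * (w - 1) + 0.5);   /* round to nearest column */
    int j = (int)(t * (h - 1) + 0.5);   /* round to nearest row */
    return tex[j * w + i];
}
```

The corners of texture space hit the corner texels exactly.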

22 Pipelines and Texture Mapping
Texture mapping is done in canonical screen space as the polygon is rasterized.
When describing a scene, you assume that texture interpolation will be done in world space.
Something goes wrong…
(Figure: a textured square rendered with interpolation then projection vs. projection then interpolation.)

23 Perspective Correct Mapping
Which property of perspective projection means that the "wrong thing" will happen if we apply the simple interpolation from the previous slide?
Perspective correct texture mapping does the right thing, but at a cost: interpolate the homogeneous coordinate w and divide it out just before indexing the texture.
Is it a problem with orthographic viewing?
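The "interpolate and divide out w" idea can be sketched for one coordinate along an edge (a simplified 1D version with our own names): interpolate s/w and 1/w linearly in screen space, then divide. When the two endpoints have different w, this gives a different answer from naively interpolating s itself - that difference is exactly the artifact.

```c
/* Perspective-correct interpolation of a texture coordinate s
   between two screen-space endpoints, at parameter f in [0,1].
   s0,w0 and s1,w1 are the endpoint coordinate and homogeneous w. */
static double persp_correct(double f, double s0, double w0,
                            double s1, double w1) {
    double s_over_w   = (1 - f) * (s0 / w0) + f * (s1 / w1);
    double one_over_w = (1 - f) * (1.0 / w0) + f * (1.0 / w1);
    return s_over_w / one_over_w;   /* divide out w at the end */
}
```

With equal w at both ends (the orthographic case) this reduces to plain linear interpolation, which answers the slide's last question.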

24 Basic OpenGL Texturing
Specify texture coordinates for the polygon with glTexCoord2f(s,t) before each vertex, e.g.:

glTexCoord2f(0,0); glVertex3f(x,y,z);

Create a texture object and fill it with texture data:
glGenTextures(num, &indices) to get identifiers for the objects
glBindTexture(GL_TEXTURE_2D, identifier) to bind the texture; following texture commands refer to the bound texture
glTexParameteri(GL_TEXTURE_2D, …, …) to specify parameters for use when applying the texture
glTexImage2D(GL_TEXTURE_2D, …) to specify the texture data (the image itself)
MORE…

25 Basic OpenGL Texturing (cont)
Enable texturing: glEnable(GL_TEXTURE_2D)
State how the texture will be used: glTexEnvf(…)
Texturing is done after lighting.
You're ready to go…

26 Nasty Details
There is a large range of functions for controlling the layout of texture data:
You must state how the data in your image is arranged. E.g., glPixelStorei(GL_UNPACK_ALIGNMENT, 1) tells OpenGL not to skip bytes at the end of a row.
You must state how you want the texture stored in memory: how many bits per "pixel", which channels, …
Texture width and height must each be a power of 2; common sizes are 32x32, 64x64, 256x256.
Smaller textures use less memory, and there is a finite amount of texture memory on graphics cards.

27 Controlling Different Parameters
The "pixels" in the texture map may be interpreted as many different things, for example:
As colors in RGB or RGBA format
As grayscale intensity
As alpha values only
The data can be applied to the polygon in many different ways:
Replace: replace the polygon color with the texture color
Modulate: multiply the polygon color by the texture color or intensity
Similar to compositing: composite the texture with the base color using an operator

28 Example: Diffuse Shading and Texture
Say you want an object textured, with the texture appearing diffusely lit.
Problem: the texture is applied after lighting, so how do you adjust the texture's brightness?
Solution:
Make the polygon white and light it normally
Use glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)
Use GL_RGB for the internal format
Then the texture color is multiplied by the surface (fragment) color, and alpha is taken from the fragment.
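Per channel, the modulate step is just a multiply (a sketch of the arithmetic, not OpenGL code): the fragment's lit color from the white polygon scales the texel color, so a texel under a dimly lit fragment comes out dim.

```c
/* One channel of GL_MODULATE-style combining: the lit fragment
   color and the texel color, both in [0,1], are multiplied. */
static double modulate(double lit_color, double texel_color) {
    return lit_color * texel_color;
}
```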

29 Specular Color
Typically, texture mapping happens after lighting; this is more useful in general.
Recall plastic surfaces and specularities: the highlight should be the color of the light.
But if texturing happens after the lighting, the color of the specularity will be modified by the texture - the wrong thing.
OpenGL lets you do the specular lighting after the texture.

30 Some Other Uses
There is a "decal" mode for textures, which replaces the surface color with the texture color, as if you stuck on a decal. But texturing happens after lighting, so the lighting information is lost.
BUT, you can use the texture to store lighting information and generate better-looking lighting:
Put the color information in the polygon, and use the texture for the brightness information
These are called "light maps"
Normally, use multiple texture layers: one for color, one for light

