# Course Note Credit: Some of the slides are extracted from the course notes of Prof. Mathieu Desbrun (USC) and Prof. Han-Wei Shen (Ohio State University).


CSC 830 Computer Graphics, Lecture 6: Texture & more

Announcements: Term Project Proposal due April 14th (next class). Assignments 4 [Shading] & 5 [Texture].

Can you do this …

Texture Mapping: Surfaces "in the wild" are very complex; we cannot model all the fine variations. We need to find ways to add surface detail. How?

Texture Mapping: Particles and fractals gave us lots of detail, but they are not easy to model, and they are mathematically and computationally challenging.

Texture Mapping: Of course, one can model the exact micro-geometry and material properties to control the look and feel of a surface, but that may get extremely costly. So graphics uses a more practical approach: texture mapping.

Texture Mapping Solution (it's really a cheat!!). How? MAP surface detail from a predefined multi-dimensional table (the "texture") onto a simple polygon.

Slide Courtesy of Leonard McMillan & Jovan Popovic, MIT

OpenGL functions - demo. During initialization, read in or create the texture image and place it into the OpenGL state:

```c
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imageWidth, imageHeight, 0,
             GL_RGB, GL_UNSIGNED_BYTE, imageData);
```

Before rendering your textured object, enable texture mapping and tell the system to use this particular texture:

```c
glBindTexture(GL_TEXTURE_2D, 13);
```

OpenGL functions. During rendering, give the Cartesian coordinates and the texture coordinates for each vertex:

```c
glBegin(GL_QUADS);
  glTexCoord2f(0.0, 0.0); glVertex3f( 0.0,  0.0, 0.0);
  glTexCoord2f(1.0, 0.0); glVertex3f(10.0,  0.0, 0.0);
  glTexCoord2f(1.0, 1.0); glVertex3f(10.0, 10.0, 0.0);
  glTexCoord2f(0.0, 1.0); glVertex3f( 0.0, 10.0, 0.0);
glEnd();
```

What happens outside the 0-1 range? (u,v) should be in the range 0 to 1. What happens when you request (1.5, 2.3)?
– Tile/repeat (OpenGL): the integer part of the value is dropped, and the image repeats itself across the surface
– Mirror: the image repeats itself but is mirrored (flipped) on every other repetition
– Clamp to edge: values outside the range are clamped to it
– Clamp to border: everything outside the range is rendered with a separately defined border color

Methods for modifying the surface: after a texture value is retrieved (and possibly further transformed), the resulting values are used to modify one or more surface attributes. These are called combine functions or texture blending operations:
– Replace: replace the surface color with the texture color
– Decal: blend the surface color with the texture color using the texture's alpha value; the alpha component in the framebuffer is not modified
– Modulate: multiply the surface color by the texture color (shaded + textured surface)

Slide Courtesy of Leonard McMillan & Jovan Popovic, MIT. Okay, then how can you implement it?

Texture and Texel: each pixel in a texture map is called a texel. Each texel is associated with a (u,v) 2D texture coordinate. The range of u and v is [0.0, 1.0] due to normalization.

(u,v) tuple: for any (u,v) in the range (0-1, 0-1), multiplied by the texture image width and height, we can find the corresponding value in the texture map.

How do we get F(u,v)? We are given a discrete set of values F[i,j] for i = 0,…,N, j = 0,…,M.
– Nearest neighbor: F(u,v) = F[round(N*u), round(M*v)]
– Linear interpolation: i = floor(N*u), j = floor(M*v); interpolate from F[i,j], F[i+1,j], F[i,j+1], F[i+1,j+1]
This is filtering in general!

Interpolation: nearest neighbor vs. linear interpolation.

Filtering Textures: the texture footprint changes from pixel to pixel, so no single filter works everywhere. From resampling theory: magnification calls for interpolation; minification calls for averaging. We would like a constant cost per pixel.

Mip Mapping [Williams]. MIP = Multum in Parvo = "many things in a small place." (Figure: the RGB mip pyramid packed into one image, indexed by u, v, and level d.) Trilinear interpolation.

Mip Mapping - Example. Courtesy of John Hart.

Was it working correctly?

Without perspective correction vs. with perspective correction.

Probably the most common form of perspective texturing is done via a divide by Z. It's a very simple algorithm: instead of interpolating U and V, we interpolate U/Z and V/Z; 1/Z is also interpolated. At each pixel, we take our texture coordinates and divide them by Z. Hang on, you're thinking: if we divide by the same number twice (Z), don't we get back to where we started, like a double reciprocal? Well, sort of. Z is also interpolated, so we're not dividing by the same Z twice. We then take the new U and V values, index into our texture map, and plot the pixel. Pseudo-code might be:

```
su = Screen-U = U/Z
sv = Screen-V = V/Z
sz = Screen-Z = 1/Z
for x = startx to endx
    u = su / sz
    v = sv / sz
    PutPixel(x, y, texture[v][u])
    su += deltasu
    sv += deltasv
    sz += deltasz
end
```

Very simple, and very slow.

Are correct_s & correct_t integers?

Useful links (Google: "perspective correct texture"):
http://www.whisqu.se/per/docs/graphics16.htm
http://p205.ezboard.com/fyabasicprogrammingfrm20.showMessage?topicID=20.topic (Perspective correction with Z-buffering)
http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/&toc=comp/proceedings/cgiv/2005/2392/00/2392toc.xml&DOI=10.1109/CGIV.2005.58 (Perspective Correct Normal Vectors for Phong Shading)
http://easyweb.easynet.co.uk/~mrmeanie/tmap/tmap.htm

Procedural Texture: periodic checkerboard, with scale s = 10. If (floor(u * s) + floor(v * s)) mod 2 == 0, texture(u,v) = 0 (black); else texture(u,v) = 1 (white).

Slide Courtesy of Leonard McMillan & Jovan Popovic, MIT


Environment Maps Use texture to represent reflected color Texture indexed by reflection vector Approximation works when objects are far away from the reflective object

Environment Maps Using a spherical environment map Spatially variant resolution

Environment Maps Using a cubical environment map

Environment Mapping: environment mapping produces reflections on shiny objects. Texture is transferred onto the object in the direction of the reflected ray from the environment map. Reflected ray: R = 2(N·V)N - V. What is in the map? (Figure: object, viewer, reflected ray, environment map.)

Approximations Made The map should contain a view of the world with the point of interest on the object as the eye –We can’t store a separate map for each point, so one map is used with the eye at the center of the object –Introduces distortions in the reflection, but the eye doesn’t notice –Distortions are minimized for a small object in a large room The object will not reflect itself The mapping can be computed at each pixel, or only at the vertices

Environment Maps The environment map may take one of several forms: –Cubic mapping –Spherical mapping (two variants) –Parabolic mapping Describes the shape of the surface on which the map “resides” Determines how the map is generated and how it is indexed What are some of the issues in choosing the map?

Example

Refraction Maps Use texture to represent refraction

Opacity Maps Use texture to represent opacity Useful for billboarding

Illumination Maps Use texture to represent illumination footprint

Illumination Maps Quake light maps

Bump Mapping: use texture to perturb normals, creating a bump-like effect (original surface + bump map = modified surface). Does not change silhouette edges.

Bump Mapping Many textures are the result of small perturbations in the surface geometry Modeling these changes would result in an explosion in the number of geometric primitives. Bump mapping attempts to alter the lighting across a polygon to provide the illusion of texture.

Bump Mapping This modifies the surface normals.

Bump Mapping

Consider the lighting for a modeled surface.

Bump Mapping We can model this as deviations from some base surface. The question is then how these deviations change the lighting. N

Bump Mapping Step 1: Putting everything into the same coordinate frame as B(u,v). –x(u,v), y(u,v), z(u,v) – this is given for parametric surfaces, but easy to derive for other analytical surfaces. –Or O(u,v)

Bump Mapping: define the tangent plane to the surface at a point (u,v) using the two vectors O_u and O_v. The normal is then given by: N = O_u × O_v.

Bump Mapping: the new surface positions are then given by O'(u,v) = O(u,v) + B(u,v) N̂, where N̂ = N/|N|. Differentiating leads to: O'_u = O_u + B_u N̂ + B N̂_u ≈ O_u + B_u N̂ and O'_v = O_v + B_v N̂ + B N̂_v ≈ O_v + B_v N̂, if B is small.

Bump Mapping: this leads to a new normal: N'(u,v) = O_u × O_v + B_u (N̂ × O_v) - B_v (N̂ × O_u) + B_u B_v (N̂ × N̂) = N + B_u (N̂ × O_v) - B_v (N̂ × O_u) = N + D, since the last term vanishes (N̂ × N̂ = 0).

Bump Mapping: for efficiency, B_u and B_v can be stored in a two-component texture map; the cross products are geometry terms only. N' will of course need to be normalized after the calculation and before lighting. This floating-point square root and division makes it difficult to embed in hardware.

Displacement Mapping: use texture to displace the surface geometry. Bump mapping only affects the normals; displacement mapping changes the entire surface (including the silhouette).

3D Textures Use a 3D mapping Usually stored procedurally Can simulate an object carved from a material

3D Textures - Noise: pseudo-random, band-limited (few overly high or low frequencies), controllable.

3D Textures - Noise: create an nD integer-aligned lattice of random numbers. For any nD point p, noise is defined as: noise(p): find the 2^n lattice neighbors of p, linearly interpolate the neighbors' table values, and return the interpolated value.

Turbulence Noise with self-similarity Add together many octaves of noise

Turbulence

Animating Turbulence Use an extra dimension as time

Slide Courtesy of Leonard McMillan & Jovan Popovic, MIT
