
Slide 1 (Lecture 15, 6.837 Fall 2002) — Texture Mapping
Why texture map?
How to do it
How to do it right
Spilling the beans
A couple of tricks
Difficulties with texture mapping
Projective mapping
Shadow mapping
Environment mapping
Most slides courtesy of Leonard McMillan and Jovan Popovic.

Slide 2 — Administrative
Office hours: Durand & Teller by appointment; Ngan, Thursday 4-7 in W20-575; Yu, Friday 2-5 in NE43-256.
Deadline for proposal: Friday, Nov 1.
Meetings with faculty & staff about proposals next week; sign up for an appointment on the web page.

Slide 3 — The Quest for Visual Realism
For more info on the computer artwork of Jeremy Birn, see http://www.3drender.com/jbirn/productions.html

Slide 4 — Photo-textures
The concept is very simple!

Slide 5 — Texture Coordinates
Specify a texture coordinate at each vertex, (s, t) or (u, v).
Canonical coordinates: u and v lie between 0 and 1. [Figure: canonical texture square with corners (0,0), (1,0), (0,1)]
Only simple modifications to the triangle rasterizer are needed:

void EdgeRec::init() {
    ...
    // Note: here we use w = 1/z
    wstart = 1./z1; wend = 1./z2; dw = wend - wstart; wcurr = wstart;
    sstart = s1;    send = s2;    ds = send - sstart; scurr = sstart;
    tstart = t1;    tend = t2;    dt = tend - tstart; tcurr = tstart;
}

void EdgeRec::update() {
    ycurr += 1; xcurr += dx;
    wcurr += dw; scurr += ds; tcurr += dt;
}

static void RenderScanLine( ... ) {
    ...
    for (e1 = AEL->ToFront(); e1 != NULL; e1 = AEL->Next()) {
        e2 = AEL->NextPolyEdge(e1->poly);
        x1 = e1->xcurr; x2 = e2->xcurr; dx = x2 - x1;
        w1 = e1->wcurr; w2 = e2->wcurr; dw = (w2 - w1)/dx;
        s1 = e1->scurr; s2 = e2->scurr; ds = (s2 - s1)/dx;
        t1 = e1->tcurr; t2 = e2->tcurr; dt = (t2 - t1)/dx;
        w = w1; s = s1; t = t1;
        for (int x = x1; x < x2; x++) {
            if (w < wbuffer[x]) {
                wbuffer[x] = w;
                raster.setPixel(x, texturemap[s, t]);
            }
            w += dw; s += ds; t += dt;
        }
        raster->write(y);
    }
}

Slide 6 — The Result
Wait a minute... that doesn't look right. What's going on here? Let's try it out: Texture mapping applet (image).
Notice how the texture seems to bend and warp along the diagonal triangle edges. Let's take a closer look at what is going on, and try again with a simpler texture: Texture mapping applet (simple texture).

Slide 7 — Looking at One Edge
First, let's consider one edge from a given triangle. This edge and its projection onto our viewport lie in a single common plane. For the moment, let's look only at that plane, which is illustrated below:

Slide 8 — Visualizing the Problem
Notice that uniform steps on the image plane do not correspond to uniform steps along the edge. Let's assume that the viewport is located 1 unit away from the center of projection.

Slide 9 — Linear Interpolation in Screen Space
Compare linear interpolation in screen space...

Slide 10 — Linear Interpolation in 3-Space
...to interpolation in 3-space.

Slide 11 — How to Make Them Mesh
We still need to scan convert in screen space, so we need a mapping from t values (screen space) to s values (3-space). We know that all points on the 3-space edge project onto our screen-space line, so we can set up an equality between the two parameterizations and solve for s in terms of t:

    s = t*w2 / ((1 - t)*w1 + t*w2)

Unfortunately, at this point in the pipeline (after projection) we no longer have z1 and z2 lingering around (why?). However, we do have w1 = 1/z1 and w2 = 1/z2, which is exactly what this expression needs.
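The t-to-s mapping can be checked numerically. A minimal sketch (not from the lecture; the edge endpoints and the viewport at z = 1 are hypothetical choices): step uniformly in screen space with parameter t, recover the 3-space parameter s from the w values alone, and confirm the recovered 3-space point projects back onto the interpolated screen point.

```python
def screen_to_edge(t, w1, w2):
    # Map screen-space parameter t to the 3-space edge parameter s,
    # using only w1 = 1/z1 and w2 = 1/z2 (no z values needed).
    return t * w2 / ((1 - t) * w1 + t * w2)

def project(p):
    # Perspective projection onto a viewport 1 unit from the eye.
    x, y, z = p
    return (x / z, y / z)

def lerp(a, b, t):
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

# A hypothetical edge whose endpoints lie at different depths.
p1, p2 = (0.0, 1.0, 2.0), (3.0, 1.0, 8.0)
w1, w2 = 1.0 / p1[2], 1.0 / p2[2]
q1, q2 = project(p1), project(p2)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    s = screen_to_edge(t, w1, w2)
    # The 3-space point at parameter s must project to the
    # screen point at parameter t.
    px, py = project(lerp(p1, p2, s))
    qx, qy = lerp(q1, q2, t)
    assert abs(px - qx) < 1e-9 and abs(py - qy) < 1e-9
```

Note how at t = 0.5 the recovered s is only 0.2: uniform screen steps cover the near part of the edge much faster than the far part.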

Slide 12 — Interpolating Parameters
We can now use this expression for s to interpolate arbitrary parameters, such as texture indices (u, v), over our 3-space triangle. This is accomplished by substituting our solution for s given t into the parameter interpolation.
Therefore, if we premultiply all parameters that we wish to interpolate in 3-space by their corresponding w value, and add a new plane equation to interpolate the w values themselves, we can interpolate the numerators and the denominator in screen space. We then need to perform a divide at each step to map the screen-space interpolants back to their corresponding 3-space values. Once more, this is a simple modification to our existing triangle rasterizer.
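The premultiply-then-divide recipe can be sketched directly. This toy (not the lecture's code; attribute and depth values are illustrative) interpolates a*w and w linearly in screen space, divides per "pixel", and checks the result against ground-truth linear interpolation in 3-space:

```python
def perspective_correct(a1, a2, z1, z2, t):
    """Interpolate attribute a at screen-space fraction t by linearly
    interpolating a*w and w (with w = 1/z), then dividing."""
    w1, w2 = 1.0 / z1, 1.0 / z2
    num = (1 - t) * a1 * w1 + t * a2 * w2   # screen-space lerp of a*w
    den = (1 - t) * w1 + t * w2             # screen-space lerp of w
    return num / den                        # one divide per pixel

def true_value(a1, a2, z1, z2, t):
    # Ground truth: map screen fraction t to the 3-space edge
    # parameter s, then interpolate the attribute linearly in 3-space.
    w1, w2 = 1.0 / z1, 1.0 / z2
    s = t * w2 / ((1 - t) * w1 + t * w2)
    return a1 + s * (a2 - a1)

for t in (0.0, 0.3, 0.5, 0.9, 1.0):
    assert abs(perspective_correct(0.0, 1.0, 2.0, 8.0, t)
               - true_value(0.0, 1.0, 2.0, 8.0, t)) < 1e-12

# A plain screen-space lerp would give 0.5 at t = 0.5; the correct
# value is biased toward the nearer endpoint's attribute:
print(perspective_correct(0.0, 1.0, 2.0, 8.0, 0.5))  # 0.2
```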

Slide 13 — Modified Rasterizer

void EdgeRec::init() {
    ...
    wstart = 1./z1;  wend = 1./z2;  dw  = wend - wstart;    wcurr = wstart;
    swstart = s1*w1; swend = s2*w2; dsw = swend - swstart;  swcurr = swstart;
    twstart = t1*w1; twend = t2*w2; dtw = twend - twstart;  twcurr = twstart;
}

void EdgeRec::update() {
    ycurr += 1; xcurr += dx;
    wcurr += dw; swcurr += dsw; twcurr += dtw;
}

static void RenderScanLine( ... ) {
    ...
    for (e1 = AEL->ToFront(); e1 != NULL; e1 = AEL->Next()) {
        e2 = AEL->NextPolyEdge(e1->poly);
        x1 = e1->xcurr;   x2 = e2->xcurr;   dx = x2 - x1;
        w1 = e1->wcurr;   w2 = e2->wcurr;   dw  = (w2 - w1)/dx;
        sw1 = e1->swcurr; sw2 = e2->swcurr; dsw = (sw2 - sw1)/dx;
        tw1 = e1->twcurr; tw2 = e2->twcurr; dtw = (tw2 - tw1)/dx;
        w = w1; sw = sw1; tw = tw1;
        for (int x = x1; x < x2; x++) {
            float denom = 1.0f / w;   // one divide per pixel
            correct_s = sw*denom;     // recover s from s*w
            correct_t = tw*denom;     // recover t from t*w
            if (w < wbuffer[x]) {
                wbuffer[x] = w;
                raster.setPixel(x, texturemap[correct_s, correct_t]);
            }
            w += dw; sw += dsw; tw += dtw;
        }
        raster->write(y);
    }
}

Slide 14 — Demonstration
For obvious reasons this method of interpolation is called perspective-correct interpolation. The fact is, the name could be shortened to simply correct interpolation. You should be aware that not all 3-D graphics APIs implement perspective-correct interpolation. Applet with correct interpolation.
You can reduce the perceived artifacts of non-perspective-correct interpolation by subdividing the texture-mapped triangles into smaller triangles (why does this work?). But fundamentally, the screen-space interpolation of projected parameters is inherently flawed. Applet with subdivided triangles.

Slide 15 — Remind You of Something?
When we did Gouraud shading, didn't we interpolate illumination values that we found at each vertex using screen-space interpolation? Didn't I just say that screen-space interpolation is wrong (I believe "inherently flawed" were my exact words)? Does that mean that Gouraud shading is wrong?

Slide 16 — Gouraud Is a Big Simplification
Gouraud shading is wrong. However, you usually will not notice, because the transition in colors is very smooth (and we don't know what the right color should be anyway; all we care about is a pretty picture). There are some cases where the errors in Gouraud shading become obvious:
When switching between different levels-of-detail representations.
At "T" joints.
Applet showing errors in Gouraud shading.

Slide 17 — Texture Tiling
Often it is useful to repeat or tile a texture over the surface of a polygon.

...
if (w < wbuffer[x]) {
    wbuffer[x] = w;
    raster.setPixel(x, texturemap[tile(correct_s, max), tile(correct_t, max)]);
}
...

int tile(int val, int size) {
    while (val >= size) val -= size;
    while (val < 0)     val += size;
    return val;
}

Tiling applet. Can also use symmetries...
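For integer coordinates, the loop-based tile function is just a wrap into [0, size), which a modulo expresses directly; the "symmetries" variant (mirrored repeat, so tile edges match up seamlessly) is a small extension. A sketch, not the lecture's code:

```python
def tile(val, size):
    # Wrap val into [0, size); equivalent to the subtract/add loops
    # for integer coordinates (Python's % already handles negatives).
    return val % size

def mirror_tile(val, size):
    # Symmetric tiling: every other copy of the texture is reflected,
    # so adjacent tiles meet without a visible seam.
    period = 2 * size
    v = val % period
    return v if v < size else period - 1 - v

assert [tile(v, 4) for v in range(-2, 6)] == [2, 3, 0, 1, 2, 3, 0, 1]
assert [mirror_tile(v, 4) for v in range(8)] == [0, 1, 2, 3, 3, 2, 1, 0]
```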

Slide 18 — Texture Transparency
There was also a little code snippet to handle texture transparency:

...
if (w < wbuffer[x]) {
    oldColor = raster.getPixel(x);
    textureColor = texturemap[tile(correct_s, max), tile(correct_t, max)];
    float t = textureColor.transparency();
    newColor = t*textureColor + (1 - t)*oldColor;
    raster.setPixel(x, newColor);
    wbuffer[x] = w;   // NOT SO SIMPLE... BEYOND THE SCOPE OF THIS LECTURE
}
...

Applet showing texture transparency.
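The blend itself is a per-channel linear mix. A minimal sketch (note the slide's t acts as an opacity: t = 1 keeps the texel, t = 0 keeps the old pixel):

```python
def blend(texture_color, old_color, t):
    # newColor = t*textureColor + (1 - t)*oldColor, per channel.
    return tuple(t * tc + (1 - t) * oc
                 for tc, oc in zip(texture_color, old_color))

# A half-opaque red texel over a blue background gives purple:
assert blend((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5) == (0.5, 0.0, 0.5)
```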

Slide 19 — Summary of Label Textures
Increases the apparent complexity of simple geometry.
Must specify texture coordinates for each vertex.
Requires perspective correction (can't linearly interpolate in screen space).
Specifies variations in shading within a primitive.
Two aspects of shading: illumination and surface reflectance.
Label textures can handle both kinds of shading effects, but it gets tedious.
Acquiring label textures is surprisingly tough.

Slide 20 — Difficulties with Label Textures
Tedious to specify texture coordinates for every triangle.
Textures are attached to the geometry: easier to model variations in reflectance than in illumination.
Can't use just any image as a label texture: the "texture" can't have projective distortions.
Reminder: linear interpolation in image space is not equivalent to linear interpolation in 3-space (this is why we need "perspective-correct" texturing). The converse is also true, which makes it hard to use pictures as textures.

Slide 21 — Projective Textures
Treat the texture as a light source (like a slide projector).
No need to specify texture coordinates explicitly.
A good model for shading variations due to illumination.
A fair model for reflectance (can use pictures).

Slide 22 — Projective Textures [Segal et al. 1992]
A texture projected onto the interior of a cube.

Slide 23 — The Mapping Process
During the illumination process, for each vertex of the triangle (in world or lighting space):
Compute the ray from the projective texture's origin to the point.
Compute the homogeneous texture coordinate [ti, tj, t] (using the projective matrix).
During scan conversion (in projected screen space):
Interpolate all three texture coordinates in 3-space (premultiply by the w of the vertex).
Do the normalization at each rendered pixel: i = ti / t, j = tj / t.
Access the projected texture.
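The per-vertex step can be sketched with a toy projector. The setup below is entirely hypothetical (a projector at the origin looking down -z with a 90-degree frustum, not a matrix from the lecture); it only illustrates forming [ti, tj, t] and doing the divide at the pixel:

```python
def tex_coords(p):
    # Homogeneous texture coordinates for a hypothetical projector at
    # the origin looking down -z: a perspective divide by -z combined
    # with a remap of [-1, 1] to [0, 1], kept in homogeneous form.
    x, y, z = p
    ti = 0.5 * x + 0.5 * (-z)
    tj = 0.5 * y + 0.5 * (-z)
    t = -z
    return ti, tj, t

def normalize(ti, tj, t):
    # The per-pixel normalization: i = ti / t, j = tj / t.
    return ti / t, tj / t

# A point straight ahead of the projector lands at the texture center:
ti, tj, t = tex_coords((0.0, 0.0, -5.0))
assert normalize(ti, tj, t) == (0.5, 0.5)
```

In a real pipeline ti, tj, and t would be interpolated across the triangle (premultiplied by the vertex w), with the divide deferred to each pixel, exactly as in perspective-correct texturing.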

Slide 24 — Projective Texture Examples
Modeling from photographs: using input photos as textures [Debevec et al. 1996].

Slide 25 — Adding Texture Mapping to Illumination
Texture mapping can be used to alter some or all of the constants in the illumination equation. We can simply use the texture as the final color for the pixel, or we can use it as the diffuse color, or we can use the texture to alter the normal, or... the possibilities are endless!
[Figures: Phong's illumination model; constant diffuse color vs. diffuse texture color; texture used as label vs. texture used as diffuse color]
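Using the texture as the diffuse color can be sketched with a scalar Phong evaluation where kd is fetched from a texel instead of being a per-surface constant. The coefficient values and names below are illustrative, not from the lecture:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(kd, N, L, V, ka=0.1, Ia=1.0, Il=1.0, ks=0.5, shininess=16):
    # I = ka*Ia + kd*(N.L)*Il + ks*(R.V)^n*Il, with kd supplied by
    # the texture lookup. N, L, V are assumed unit vectors.
    diffuse = max(dot(N, L), 0.0)
    # Reflect L about N: R = 2(N.L)N - L
    R = tuple(2 * dot(N, L) * n - l for n, l in zip(N, L))
    specular = max(dot(R, V), 0.0) ** shininess
    return ka * Ia + kd * diffuse * Il + ks * specular * Il

# kd from a "texture": a brighter texel reflects more diffuse light.
N = (0.0, 0.0, 1.0)
L = V = (0.0, 0.0, 1.0)
assert phong(0.9, N, L, V) > phong(0.1, N, L, V)
```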

Slide 26 — Texture-Mapping Tricks
Textures and shading
Combining textures
Bump mapping
Solid textures

Slide 27 — Shadow Maps
Projective textures with depth: textures can also be used to generate shadows. This will be discussed in a later lecture.

Slide 28 — Environment Maps
If, instead of using the ray from the surface point to the projected texture's center, we use the direction of the reflected ray to index a texture map, we can simulate reflections. This approach is not completely accurate: it assumes that all reflected rays begin from the same point.
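The direction used for the lookup is the standard mirror reflection of the view ray about the surface normal. A small sketch of just that vector computation (the environment-map lookup itself is omitted):

```python
def reflect(d, n):
    # Reflect incoming direction d about unit normal n:
    # r = d - 2 (d . n) n
    k = 2 * sum(di * ni for di, ni in zip(d, n))
    return tuple(di - k * ni for di, ni in zip(d, n))

# A ray heading straight down onto an upward-facing floor bounces straight up:
assert reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)) == (0.0, 1.0, 0.0)
# A 45-degree hit keeps its horizontal motion and flips the vertical:
assert reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)) == (1.0, 1.0, 0.0)
```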

Slide 29 — What's the Best Chart?

Slide 30 — Reflection Mapping

Slide 31 — Questions?
Image computed using the Dali ray tracer by Henrik Wann Jensen. Environment map by Paul Debevec.

Slide 32 — [image slide]

Slide 33 — Texture Mapping Modes
Label textures
Projective textures
Environment maps
Shadow maps
...

Slide 34 — The Best of All Worlds
All these texture mapping modes are great! The problem is, no one of them does everything well. Suppose we allowed several textures to be applied to each primitive during rasterization.

Slide 35 — Multipass vs. Multitexture
Multipass (the old way): render the image in multiple passes and "add" the results.
Multitexture: make multiple texture accesses within the rasterizing loop and "blend" the results.
Blending approaches: texture modulation, alpha attenuation, additive textures, weird modes.

Slide 36 — Texture Mapping in Quake
Quake uses light maps in addition to texture maps. Texture maps are used to add detail to surfaces, and light maps are used to store precomputed illumination. The two are multiplied together at run time, and cached for efficiency.

             Texture Maps   Light Maps
Data         RGB            Intensity
Instanced    Yes            No
Resolution   High           Low

Light map image by Nick Chirkov.
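The combine itself is a per-channel multiply of the RGB texel by the scalar light-map intensity. A minimal sketch (the sample values and the single-entry maps are illustrative; in the real engine the light map is far coarser than the texture and is filtered, not looked up exactly):

```python
def light_texture(texel, intensity):
    # Modulate an RGB texel by a scalar light-map intensity.
    r, g, b = texel
    return (r * intensity, g * intensity, b * intensity)

texture_map = {(0, 0): (0.8, 0.6, 0.4)}   # high-res surface detail (RGB)
light_map = {(0, 0): 0.5}                 # low-res precomputed light (scalar)

lit = light_texture(texture_map[(0, 0)], light_map[(0, 0)])
assert all(abs(a - b) < 1e-12 for a, b in zip(lit, (0.4, 0.3, 0.2)))
```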

Slide 37 — Bump Mapping
Textures can be used to alter the surface normal of an object. This does not change the actual shape of the surface -- we are only shading it as if it were a different shape! This technique is called bump mapping.
The texture map is treated as a single-valued height function. The value of the function is not actually used, just its partial derivatives. The partial derivatives tell how to alter the true surface normal at each point on the surface to make the object appear as if it were deformed by the height function.
Since the actual shape of the object does not change, the silhouette edge of the object will not change. Bump mapping also assumes that the illumination model is applied at every pixel (as in Phong shading or ray tracing).
[Figures: sphere w/diffuse texture; swirly bump map; sphere w/diffuse texture & bump map]

Slide 38 — Bump Mapping
Compute the bump map partials Bu and Bv by numerical differentiation.
[Figure: initial point, normal, simulated elevated point after the bump, and the variation of the normal in the u and v directions]
If Pu and Pv are orthogonal and N is normalized, the perturbed normal is:

    N' = N + Bu (N x Pv) - Bv (N x Pu)

Slide 39 — Bump Mapping (General Case)
Compute the bump map partials Bu and Bv by numerical differentiation.
[Figure: initial point, normal, simulated elevated point after the bump, and the variation of the normal in the u and v directions]
In the general case, the perturbation is scaled by the length of N:

    N' = N + (Bu (N x Pv) - Bv (N x Pu)) / |N|
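A sketch of the whole recipe on the simplest surface. For a flat patch in the xy-plane we have Pu = (1,0,0), Pv = (0,1,0), N = (0,0,1), so the perturbed normal reduces to (-Bu, -Bv, 1), normalized. The height function and step size here are illustrative choices, not from the lecture:

```python
import math

def height(u, v):
    # An arbitrary "swirly" bump map standing in for a texture lookup.
    return 0.1 * math.sin(6.0 * u) * math.sin(6.0 * v)

def bumped_normal(u, v, eps=1e-4):
    # Bump partials Bu, Bv by central differences.
    bu = (height(u + eps, v) - height(u - eps, v)) / (2 * eps)
    bv = (height(u, v + eps) - height(u, v - eps)) / (2 * eps)
    # Flat-patch special case of N' = N + Bu (N x Pv) - Bv (N x Pu).
    n = (-bu, -bv, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

n = bumped_normal(0.3, 0.3)
assert abs(sum(c * c for c in n) - 1.0) < 1e-9   # unit length
assert n != (0.0, 0.0, 1.0)                      # actually perturbed
```

Only the normal changes; the surface point itself stays put, which is why the silhouette is unaffected.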

Slide 40 — Bump Mapping Derivation
[Derivation given as equations on the slide; not transcribed]

Slide 41 — More Bump Map Examples
[Figures: cylinder w/diffuse texture map; bump map; cylinder w/texture map & bump map]

Slide 42 — One More Bump Map Example
Notice that the shadow boundaries remain unchanged.

Slide 43 — Displacement Mapping
We can instead use the texture map to actually move the surface point. This is called displacement mapping. How is this fundamentally different from bump mapping? The geometry must be displaced before visibility is determined. Is this easily done in the graphics pipeline? In a ray tracer?
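The core operation is a per-vertex move along the normal by the sampled height, so the silhouette (and visibility) really change. A minimal sketch with an illustrative vertex and height:

```python
def displace(vertex, normal, h):
    # Move the surface point along its normal by the sampled height h.
    return tuple(p + h * n for p, n in zip(vertex, normal))

# A vertex on a unit sphere pushed outward by a height of 0.2;
# on a sphere the unit normal equals the position direction.
v = (1.0, 0.0, 0.0)
n = (1.0, 0.0, 0.0)
moved = displace(v, n, 0.2)
assert all(abs(a - b) < 1e-12 for a, b in zip(moved, (1.2, 0.0, 0.0)))
```

Contrast with bump mapping, which would leave v where it is and only perturb n for shading.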

Slide 44 — Displacement Mapping Example
It is possible to use displacement maps when ray tracing. Image from "Geometry Caching for Ray-Tracing Displacement Maps" by Matt Pharr and Pat Hanrahan.

Slide 45 — Another Displacement Mapping Example
Image from Ken Musgrave.

Slide 46 — Three-Dimensional or Solid Textures
The textures that we have discussed to this point are two-dimensional functions mapped onto two-dimensional surfaces. Another approach is to consider a texture as a function defined throughout a three-dimensional volume. Textures of this type are called solid textures.
Solid textures are very effective at representing some types of materials, such as marble and wood. Generally, solid textures are defined as procedural functions rather than tabulated or sampled functions as used in 2-D (any guesses why?).
The approach that we will explore is based on "An Image Synthesizer" by Ken Perlin, SIGGRAPH '85. The vase to the right is from this paper.

Slide 47 — Noise and Turbulence
When we say we want to create an "interesting" texture, we usually don't care exactly what it looks like -- we're only concerned with the overall appearance. We want to add random variations to our texture, but in a controlled way. Noise and turbulence are very useful tools for doing just that.
A noise function is a continuous function that varies throughout space at a uniform frequency. To create a simple noise function, consider a 3D lattice with a random value assigned to each triple of integer coordinates:
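The lattice idea can be sketched as value noise: random values at integer lattice points, trilinearly interpolated in between. This is a simplification of what Perlin actually does (he uses gradients and a smoother fade), offered only to make the lattice construction concrete; it assumes non-negative coordinates:

```python
import random

random.seed(0)
LATTICE = {}

def lattice(i, j, k):
    # Lazily assign a fixed random value in [0, 1) to each integer triple.
    return LATTICE.setdefault((i, j, k), random.random())

def lerp(a, b, t):
    return a + t * (b - a)

def noise(x, y, z):
    # Trilinear interpolation of the eight surrounding lattice values.
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    c = [[[lattice(i + di, j + dj, k + dk) for dk in (0, 1)]
          for dj in (0, 1)] for di in (0, 1)]
    return lerp(lerp(lerp(c[0][0][0], c[0][0][1], fz),
                     lerp(c[0][1][0], c[0][1][1], fz), fy),
                lerp(lerp(c[1][0][0], c[1][0][1], fz),
                     lerp(c[1][1][0], c[1][1][1], fz), fy), fx)

# Continuous, bounded, and exact at lattice points:
assert noise(2.0, 3.0, 4.0) == lattice(2, 3, 4)
assert 0.0 <= noise(1.3, 2.7, 0.5) < 1.0
```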

Slide 48 — Turbulence
Noise is a good start, but it looks pretty ugly all by itself. We can use noise to make a more interesting function called turbulence. A simple turbulence function can be computed by summing many different frequencies of noise functions:
[Figures: one frequency; two frequencies; three frequencies; four frequencies]
Now we're getting somewhere. But even turbulence is rarely used all by itself. We can use turbulence to build even more fancy 3D textures...

Slide 49 — Marble Example
We can use turbulence to generate beautiful 3D marble textures, such as the marble vase created by Ken Perlin. The idea is simple: we fill space with black and white stripes using a sine wave function, then use turbulence at each point to distort those planes.

    Marble = sin(f * (x + A*Turb(x,y,z)))

Varying the frequency f of the sine function gives a few thick veins or many thin veins; varying the amplitude A of the turbulence function controls how distorted the veins will be.
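The slide's formula can be sketched end to end. To keep the block self-contained, a toy 1D value-noise stands in for a full 3D Noise() (an assumption, clearly not Perlin's noise); what matters here is the structure: octaves of noise summed into turbulence, then a sine distorted by it:

```python
import math
import random

random.seed(1)
VALUES = [random.random() for _ in range(256)]

def noise1(x):
    # Toy 1D value noise: linear interpolation between 256 random values.
    i = int(x) % 256
    f = x - int(x)
    a, b = VALUES[i], VALUES[(i + 1) % 256]
    return a + f * (b - a)

def turbulence(x, octaves=4):
    # Sum octaves of noise: each doubles the frequency, halves the amplitude.
    return sum(noise1(x * 2**o) / 2**o for o in range(octaves))

def marble(x, y, z, f=3.0, A=1.5):
    # Marble = sin(f * (x + A * Turb(x, y, z))); this toy turbulence
    # only looks at x.
    return math.sin(f * (x + A * turbulence(x)))

assert -1.0 <= marble(0.7, 0.0, 0.0) <= 1.0
assert 0.0 <= turbulence(2.3) < 2.0   # bounded by the sum of amplitudes
```

Raising f packs more stripes into the same volume (thinner veins); raising A feeds more turbulence into the sine's argument (more distorted veins), matching the slide's description.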

