
# Week 7 - Wednesday

- What did we talk about last time?
  - Transparency
  - Gamma correction
  - Started texturing

- We've got polygons, but they are all one color
  - At most, we could have a different color at each vertex
- We want to "paint" a picture on the polygon
  - Because the surface is supposed to be colorful
  - To give the appearance of greater complexity than there is (a texture of bricks rather than a complex geometry of bricks)
  - To apply other effects to the surface, such as changes in material or normal

- We never get tired of pipelines
  - Projector function: go from object space to parameter space
  - Corresponder function: go from parameter space to texture space
  - Obtain value: get the texture value from texture space
  - Value transform function: transform the texture value into the final, transformed value
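The stages above can be sketched in code. This is a minimal illustration, not any real graphics API: all function names are hypothetical, the "texture" is a tiny nested list, and the projector is the simplest possible planar projection.

```python
def projector(p):
    """Project a 3D surface point to a (u, v) parameter-space coordinate.
    Here: a planar projection that simply drops the z coordinate."""
    x, y, z = p
    return (x, y)

def corresponder(u, v, width, height):
    """Map (u, v) in [0, 1] to integer texel coordinates in texture space."""
    return (int(u * (width - 1)), int(v * (height - 1)))

def obtain_value(texture, tx, ty):
    """Fetch the texel value (e.g., an RGB triple)."""
    return texture[ty][tx]

def value_transform(rgb, scale=1.0):
    """Optionally transform the fetched value (e.g., a brightness scale)."""
    return tuple(min(1.0, c * scale) for c in rgb)

# A tiny 2x2 "texture" of RGB triples
texture = [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
           [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]]

u, v = projector((0.0, 1.0, 5.0))
tx, ty = corresponder(u, v, 2, 2)
print(value_transform(obtain_value(texture, tx, ty)))  # -> (0.0, 0.0, 1.0)
```

Each stage feeds the next, which is exactly why the book presents texturing as a pipeline.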

- The projector function goes from model space (a 3D location on a surface) to a 2D (u,v) coordinate on a texture
- Usually, this is based on a map from the model to the texture, made by an artist
  - Tools exist to help artists "unwrap" the model
  - Different kinds of mapping make this easier
- In other scenarios, a mapping could be determined at run time

- From (u,v) coordinates, we have to find the corresponding texture pixel (or texel)
- Often this just maps (u,v) ∈ [0,1] directly to a pixel in the full width × height range
  - But matrix transformations can be applied
  - Also, values outside of [0,1] can be given, with different choices of interpretation
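Those "different choices of interpretation" for coordinates outside [0,1] are the standard wrap modes. A minimal sketch, assuming the three most common choices (the names here are descriptive, not any particular API's constants):

```python
import math

def wrap(u, mode):
    """Interpret a texture coordinate u outside [0, 1] under common wrap modes."""
    if mode == "repeat":
        # Tile the texture: keep only the fractional part
        return u - math.floor(u)
    elif mode == "mirror":
        # Reflect the texture on every other repetition
        t = u - math.floor(u)
        return t if int(math.floor(u)) % 2 == 0 else 1.0 - t
    elif mode == "clamp":
        # Clamp to the edge of the texture
        return min(max(u, 0.0), 1.0)
    raise ValueError(f"unknown wrap mode: {mode}")
```

For example, `wrap(1.25, "repeat")` tiles back to `0.25`, while `wrap(1.25, "mirror")` reflects to `0.75`. The same function would apply independently to v.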

- Usually the texture value is just an RGB triple (or an RGBα value)
- But it could be procedurally generated
- It could be bump map data or other surface data
- It might need some transformation after retrieval

- Usually, texturing is image texturing: gluing a 2D image onto a polygon
- Recall that textures have certain limitations
  - Usually 2^m × 2^n texels
  - Some old cards require square textures
  - Most new cards don't require powers of 2
  - Maximum sizes vary:
    - 2048 × 2048 might be all your laptop can do
    - 8192 × 8192 is required by DirectX 10

- Sometimes a small texture will cover a much larger area on screen
  - This effect is called magnification
- Sometimes a large texture will cover very little area on the screen
  - This effect is called minification
- Different techniques exist to overcome these problems

- Magnification is often done by filtering the source texture in one of several ways:
  - Nearest neighbor (the worst) takes the closest texel to the one needed
  - Bilinear interpolation linearly interpolates between the four neighbors
  - Bicubic interpolation probably gives the best visual quality at greater computational expense (and is generally not directly supported)
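Bilinear interpolation is just two horizontal lerps followed by one vertical lerp between the four neighboring texels. A minimal single-channel sketch (real hardware does this per color channel, and combines it with the wrap modes above):

```python
def bilinear(tex, u, v):
    """Bilinearly interpolate a single-channel texture at continuous
    texel coordinates (u, v), with u in [0, w-1] and v in [0, h-1]."""
    h, w = len(tex), len(tex[0])
    # Integer corner of the 2x2 neighborhood, clamped so x0+1/y0+1 stay in range
    x0 = min(int(u), w - 2)
    y0 = min(int(v), h - 2)
    fx, fy = u - x0, v - y0
    # Blend horizontally along the top and bottom rows, then vertically
    top = tex[y0][x0] * (1 - fx) + tex[y0][x0 + 1] * fx
    bot = tex[y0 + 1][x0] * (1 - fx) + tex[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling at the exact center of four texels returns their average; sampling exactly on a texel returns that texel's value, which is why magnified textures look smooth but blurry.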

- Perhaps unsurprisingly, the book gives a pretty good explanation of bilinear interpolation on p. 159
- I guess I should direct people there in the future

- Bilinear interpolation tends to blur sharp edges, but you can interpolate non-linearly, remapping bright colors to a bright value and dark colors to a dark value
- Another alternative is detail textures
  - Overlay higher-resolution textures (representing details like scratches) onto a magnified texture
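One way to picture the non-linear remapping idea: instead of using the interpolation fraction directly, push it toward 0 or 1 so each pixel ends up dominated by its nearest texel, keeping edges sharp. This is only an illustrative sketch of the idea; the particular remap function and its `sharpness` parameter are my own choices, not from the book.

```python
def sharp_fraction(f, sharpness=8.0):
    """Remap an interpolation fraction f in [0, 1] non-linearly:
    values near 0 are pushed toward 0 and values near 1 toward 1,
    so bright texels stay bright and dark texels stay dark."""
    a = f ** sharpness
    b = (1.0 - f) ** sharpness
    return a / (a + b)
```

Feeding `sharp_fraction(fx)` and `sharp_fraction(fy)` into a bilinear blend instead of the raw fractions keeps the midpoint unchanged (`sharp_fraction(0.5) == 0.5`) but makes the transition between texels much steeper.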

- Minification is just as big of a problem (if not bigger)
- Bilinear interpolation can work
  - But an onscreen pixel might be influenced by many more than just its four neighbors
- We want to, if possible, have only a single texel per pixel
- Main techniques:
  - Mipmapping
  - Summed-area tables
  - Anisotropic filtering

- Mipmapping is the most popular texture antialiasing solution
  - Mip = "multum in parvo," meaning "many things in a small place"
- The trouble with minification is that a single pixel needs to be colored by lots of texels
- The solution: when loading the texture, create many smaller filtered versions of the texture, then use the appropriate one for rendering

- Typically a chain of mipmaps is created, each half the size of the previous
  - That's why cards like square power-of-2 textures
  - Often each filtered version is made with a box filter, but better filters exist
- The trick is figuring out which mipmap level to use
  - The level d can be computed based on the change in u relative to a change in x
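Both halves of the slide above can be sketched directly: building the chain by repeated 2×2 box filtering, and picking d from how many texels land under one screen pixel. This is a minimal single-channel sketch assuming a square power-of-2 texture; `texels_per_pixel` stands in for the screen-space derivative (the change in u relative to a change in x) that real hardware computes.

```python
import math

def build_mip_chain(base):
    """Build a mipmap chain from a square power-of-2, single-channel
    texture by repeated 2x2 box filtering, down to a 1x1 level."""
    chain = [base]
    tex = base
    while len(tex) > 1:
        half = len(tex) // 2
        # Each new texel is the average of a 2x2 block in the level above
        tex = [[(tex[2 * y][2 * x] + tex[2 * y][2 * x + 1] +
                 tex[2 * y + 1][2 * x] + tex[2 * y + 1][2 * x + 1]) / 4.0
                for x in range(half)] for y in range(half)]
        chain.append(tex)
    return chain

def mip_level(texels_per_pixel):
    """Choose the mipmap level d from the texel footprint of one pixel:
    d = log2(footprint), clamped at 0 (magnification uses the base level)."""
    return max(0.0, math.log2(texels_per_pixel))
```

With 4 texels per pixel this picks level 2 (each level halves the resolution, quartering the texel count), and anything at or below 1 texel per pixel stays at the full-resolution base level.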

- One way to improve quality is to interpolate the texel values fetched at (u, v) from the nearest two d levels
- Picking d can also be affected by a level-of-detail bias term, which may vary for the kind of texture being used

- Sometimes we are magnifying in one axis of the texture and minifying in the other
- Summed-area tables are another method to reduce the resulting overblurring
  - They sum up the relevant texel values in the texture
  - They work by precomputing, for each texel, the sum of the rectangle from the origin to that texel, so the sum over any axis-aligned rectangle can be found with just four lookups
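The four-lookup trick is worth seeing concretely. A minimal single-channel sketch (real summed-area tables store per-channel sums and need wider precision than the source texture):

```python
def summed_area_table(img):
    """Precompute sat[y][x] = sum of img over all rows <= y and cols <= x."""
    h, w = len(img), len(img[0])
    sat = [[0.0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += img[y][x]
            # Running row sum plus everything already summed above this row
            sat[y][x] = row_sum + (sat[y - 1][x] if y > 0 else 0.0)
    return sat

def rect_average(sat, x0, y0, x1, y1):
    """Average over the inclusive rectangle [x0,x1] x [y0,y1] using
    four table lookups instead of summing every texel inside it."""
    total = sat[y1][x1]
    if x0 > 0:
        total -= sat[y1][x0 - 1]            # remove everything left of x0
    if y0 > 0:
        total -= sat[y0 - 1][x1]            # remove everything above y0
    if x0 > 0 and y0 > 0:
        total += sat[y0 - 1][x0 - 1]        # add back the doubly removed corner
    return total / ((x1 - x0 + 1) * (y1 - y0 + 1))
```

Because any rectangle's average is constant-time, the filter footprint can be wide in one axis and narrow in the other, which is exactly what the mixed magnify/minify case needs.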

- Summed-area tables work poorly for non-rectangular projections into texture space
- Modern hardware uses unconstrained anisotropic filtering
  - The shorter side of the projected area determines d, the mipmap index
  - The longer side of the projected area is a line of anisotropy
  - Multiple samples are taken along this line
  - Memory requirements are no greater than regular mipmapping
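The sampling step can be sketched in a few lines. This is a deliberately simplified illustration, not how any real GPU implements it: it takes n nearest-texel samples evenly spread along the line of anisotropy, where real hardware would take filtered mipmap samples whose level comes from the footprint's shorter side.

```python
def anisotropic_sample(tex, cu, cv, du, dv, n=4):
    """Average n nearest-texel samples spread along the line of
    anisotropy with direction (du, dv), centered at texel (cu, cv)."""
    h, w = len(tex), len(tex[0])
    total = 0.0
    for i in range(n):
        # Evenly spaced offsets in [-0.5, 0.5) along the line
        t = (i + 0.5) / n - 0.5
        x = min(max(int(round(cu + t * du)), 0), w - 1)
        y = min(max(int(round(cv + t * dv)), 0), h - 1)
        total += tex[y][x]
    return total / n
```

Averaging several samples along the long axis is what removes the overblurring: the short axis still gets a tight filter, while the long axis is covered by the sample line instead of by jumping to a blurrier mip level.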

- SharpDX examples
- Textures in shader code
- Project 2 and Assignment 3 work day?

- Keep working on Project 2
- Finish Assignment 3
- Keep reading Chapter 6
- Internship opportunity:
  - Naval Supply Systems Command
  - Student Trainee Position as Information Technology Specialist
  - For more information: https://www.usajobs.gov/GetJob/ViewDetails/395513700

