Display The output from the pipeline is reduced to a series of pixels - blobs of colour of varying intensity - which are burned onto the screen (even an HMD consists of two flat screens). The final stage is the process of forming the pixels on the display device. What devices are used to display graphics?
Colour representations on computers are based on trichromatic theory. This theory states that any colour can be recreated by a suitable combination of primary colours (e.g. Red, Green, and Blue). This is represented in computers by the screen RGB values.
Colour Each RGB value represents the intensity of each of the separate components of a colour. Intensity is usually related to the voltage passed to the electron guns when drawing the image on the screen. To understand intensity we start by looking at greylevel images.
Grey Level Pixels In a grey level image, each pixel value g_mn is a single integer number (e.g. a bright pixel might have the value 235, a dark one 15).
Grey Level Pixels Grey Level pixels are usually represented as integer numbers, ranging from 0 (black) to the highest possible value for the bit depth of the image. The bit depth for many greyscale images is 8 (8 bits= 1 byte of storage for each pixel). This gives a range of 256 grey levels ranging from 0 to 255.
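The relationship between bit depth and the range of grey levels can be sketched in a few lines of Python (an illustrative helper, not part of any graphics API):

```python
def grey_range(bit_depth):
    """Number of grey levels, and the maximum pixel value, for a bit depth."""
    levels = 2 ** bit_depth
    return levels, levels - 1

print(grey_range(8))   # (256, 255): 8 bits give 256 levels, 0..255
```

For 8-bit images this reproduces the range 0 to 255 quoted above.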
Colour Pixels In this case each pixel of the image is stored as a triplet of values (R, G, B).
Colour Pixels In this case bit depth refers to the number of bits required to store all of the available colours. For example, 24 bit colour allows for 16.7 million different colours. When this is the case, the number of bits is divided by three to give the number of bits for each separate channel (24 bits = 8 R bits + 8 G bits + 8 B bits).
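The 24-bit figure, and the split into three 8-bit channels, can be checked with a small sketch (the pack/unpack helpers are hypothetical, for illustration only):

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit channel values into one 24-bit integer."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Split a 24-bit colour back into its (R, G, B) channels."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

print(2 ** 24)                            # 16777216 colours (~16.7 million)
print(unpack_rgb(pack_rgb(235, 15, 100)))  # (235, 15, 100)
```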
Interleaved and Non- Interleaved colour images Interleaved images Each pixel stored as consecutive triplets in the image file “RGBRGBRGB…” Non Interleaved Images – Each channel recorded as a separate image “RRRRRRRRRR…” “GGGGGGGGG…” “BBBBBBBBBB…”
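Converting between the two layouts is a simple reordering; a minimal sketch in Python (channel data here is just a flat list of sample values, an assumption made for illustration):

```python
def deinterleave(pixels):
    """Split an interleaved [R,G,B,R,G,B,...] list into three channel planes."""
    return pixels[0::3], pixels[1::3], pixels[2::3]

def interleave(r, g, b):
    """Merge three channel planes back into RGBRGB... order."""
    out = []
    for triplet in zip(r, g, b):
        out.extend(triplet)
    return out
```

Round-tripping through both functions returns the original interleaved data.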
Colour Pixels – Alpha Channels Colour pixels sometimes have four channels. In this case the fourth channel (the Alpha channel, A) determines the transparency of the pixel. If A=1 the object is opaque; if A=0 the object is transparent. A can take intermediate values, acting as a colour filter.
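The effect of intermediate alpha values can be sketched with the standard "over" blend, out = A·src + (1-A)·dst (a generic compositing formula, not taken from any particular API):

```python
def composite_over(src, alpha, dst):
    """Blend a source colour over a destination: alpha*src + (1-alpha)*dst.
    alpha=1 gives the opaque source; alpha=0 leaves the destination."""
    return tuple(round(alpha * s + (1 - alpha) * d) for s, d in zip(src, dst))

print(composite_over((255, 0, 0), 1.0, (0, 0, 255)))  # (255, 0, 0): opaque
print(composite_over((255, 0, 0), 0.0, (0, 0, 255)))  # (0, 0, 255): transparent
```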
Alternative Colour Representations Mostly colour is represented as RGB (or RGBA); however, other colour representations exist. Why are they necessary?
RGB Colour Space Conceptually a cube: three orthogonal axes R, G and B, with black at the origin (0) and the secondary colours magenta (M), cyan (C) and yellow (Y) at the opposite corners. The R, G and B components are independent.
RGB Colour Space - Problems Cone sensitivities: notice the overlap between the red and green cone responses. This overlap implies that red and green are not truly orthogonal.
Other Representation Schemes Hue, Saturation and Intensity (HSI): a transformation of the RGB colour space. Viewed from the side it is conceptually a double cone, with intensity running along the axis from black to white, saturation (S) as the radial distance, and hue as the angle around the axis (Red = 0°, followed by Yellow, Green, Cyan, Blue and Magenta).
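The transformation from RGB to HSI can be sketched using the common textbook formulas (one of several HSI variants; the formulas below are an assumption, with hue measured from red as on the cone):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert RGB values in [0,1] to (hue in degrees, saturation, intensity).
    Hue is measured from red, so pure red gives hue = 0 degrees."""
    i = (r + g + b) / 3.0
    if i == 0:
        return 0.0, 0.0, 0.0            # black: hue and saturation undefined
    s = 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        return 0.0, s, i                # grey: hue undefined
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i
```

Pure red (1, 0, 0) maps to hue 0°, consistent with Red = 0° on the diagram; pure blue maps to 240°.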
Texture Definition Texture is an elusive concept. Most people know it when they see it!! Textures are often described as “coarse” or “fine”, “rough” or “smooth”, and often defined in terms of how particular objects feel to the touch.
Visual Texture These slides have visual texture. There is a pattern in the background that forms the visual texture. Arguably the text contributes to this. Texture is an important surface property of an object.
Textured Objects Given that texture is a surface property, it is important in the generation of VR worlds in order to add verisimilitude. Furthermore, texture is an important part of the tactile experience of the world, and some haptic devices allow texture to be experienced in particular ways.
Texture In the development of VR worlds, texture is an important part of the experience. In this lecture the issue of adding texture to the geometric objects we have created is discussed. The process used is called Texture Mapping.
Each texture is stored as a two dimensional image. The texture can be generated artificially by computer graphics, scanned from a photograph, or obtained by whatever means achieves the requisite level of realism.
Texels The bitmaps of the images are stored as pixels, although we call these pixels texels in order to differentiate texture elements from the pixels used to define the screen.
Texture Mapping Wrapping a 2D texture around a 3D object is usually effected using affine transformations similar to those discussed in earlier lectures. The precise details of how textures are mapped can be found in various text books (eg. Angel - Interactive Computer Graphics) The Open GL API simplifies much of the task.
However large the textured image is in texels, it is usually convenient to normalise its size in terms of s and t. Say our image is 512x512; we define the texture coordinate for, say, (256,300) as (256/512, 300/512), so that the coordinates are expressed on the interval [0,1].
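The normalisation step is a single division per axis; a minimal sketch (the function name is hypothetical):

```python
def normalise_texel(u, v, width, height):
    """Map integer texel indices to (s, t) texture coordinates on [0, 1]."""
    return u / width, v / height

print(normalise_texel(256, 300, 512, 512))   # (0.5, 0.5859375)
```

This reproduces the 512x512 example above: texel (256, 300) becomes (0.5, 0.5859375).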
Texture Coordinates Applying a texture to the surface of an object requires that the size of the object is scaled appropriately on the [0,1] interval. This helps to simplify the mapping process. The corners of the texture are then at (0,0), (1,0), (0,1) and (1,1).
Magnification Texture is smaller than surface to be textured
Minification Surface is smaller than the texture
Shrinking and Stretching The shrinking and stretching required to perform minification and magnification involve the use of filters to effect the transition. These filters are usually based on some form of interpolation in order to make the textured image fit onto the surface.
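A common choice of interpolating filter is bilinear interpolation between the four nearest texels; a sketch for a grey-level texture held as a list of rows (an illustrative implementation, not OpenGL's):

```python
def bilinear(texture, s, t):
    """Sample a 2D grey-level texture at fractional coordinates (s, t) in
    [0,1], blending the four nearest texels by their distance."""
    h, w = len(texture), len(texture[0])
    x, y = s * (w - 1), t * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * texture[y0][x0] + fx * texture[y0][x1]
    bot = (1 - fx) * texture[y1][x0] + fx * texture[y1][x1]
    return (1 - fy) * top + fy * bot
```

Sampling the centre of a 2x2 texture with values 0, 100, 100, 200 returns 100, the average of the four corners.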
Mipmapping Mipmapped textures are often used in VR worlds and games, particularly where movement of some description is necessary. The level of detail seen will vary according to the distance of the viewer from the object. Several different textured images are stored corresponding to the necessary level of detail.
Mipmapping Store the textured image at different scales.
Mipmapping Depending on the distance from the textured object, the most appropriate scale of image is selected in order to represent the texture.
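The idea can be sketched as a chain of successively halved images plus a level-selection rule (the log2 rule below is a common simplification; real renderers estimate the texel footprint per screen pixel):

```python
import math

def build_mipmaps(size):
    """Return the side lengths of successively halved mipmap levels."""
    levels = [size]
    while size > 1:
        size //= 2
        levels.append(size)
    return levels

def select_level(texels_per_pixel):
    """Pick the mipmap level whose scale best matches the screen footprint:
    roughly log2 of the number of texels covered per screen pixel."""
    return max(0, round(math.log2(max(texels_per_pixel, 1))))

print(build_mipmaps(512))   # [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
```

A distant object covering 4 texels per pixel would use level 2, i.e. the texture at quarter resolution.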
Storing Textures. Textured images often require considerable amounts of storage. Textured images are usually stored in the memory of the computer. To represent a fully textured 3D world can require significant overheads. Dedicated hardware is often used.
Tutorial Discuss how to perform texture mapping in Open GL.
Defining Textured Images Textures could be scanned from photographs. They are stored in memory as arrays, whose elements are texels (as opposed to pixels). The Open GL standard requires that the dimensions of these bitmap images are an integer power of 2.
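A scanned image of arbitrary size therefore needs checking, and possibly padding, before use; a small sketch (the helpers are hypothetical, not Open GL calls):

```python
def is_power_of_two(n):
    """True when n is a positive integer power of two."""
    return n > 0 and (n & (n - 1)) == 0

def next_power_of_two(n):
    """Smallest power of two >= n, e.g. a target size for padding a scan."""
    p = 1
    while p < n:
        p *= 2
    return p

print(is_power_of_two(512), next_power_of_two(300))   # True 512
```

A 300x300 scan would be padded (or resampled) up to 512x512 before being handed to Open GL.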
Texture Maps A texture map associates a unique point of the texture with each point on a geometric object, which is itself mapped to screen coordinates for display purposes. If the object is represented in spatial coordinates as [x,y,z], then there are functions such that x=x(s,t), y=y(s,t), z=z(s,t).
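As a concrete (hypothetical) example of such functions, a sphere gives a simple closed form: s wraps around the equator and t runs from pole to pole.

```python
import math

def sphere_point(s, t, radius=1.0):
    """Example of the functions x(s,t), y(s,t), z(s,t): map texture
    coordinates (s, t) in [0,1] onto a sphere of the given radius."""
    theta = 2.0 * math.pi * s      # longitude: s wraps around the equator
    phi = math.pi * t              # latitude: t runs from pole to pole
    x = radius * math.sin(phi) * math.cos(theta)
    y = radius * math.sin(phi) * math.sin(theta)
    z = radius * math.cos(phi)
    return x, y, z
```

For instance, (s, t) = (0, 0.5) lands on the equator at (1, 0, 0); t = 0 and t = 1 map to the two poles.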
Texture Mapping One of the problems is finding the precise nature of this mapping. We also have to consider the process of going from world coordinates to screen coordinates. Texture mapping can be thought of as two mappings: first from texture coordinates to parametric coordinates, and second from parametric coordinates to geometric coordinates.
Texture Mapping Conceptually the process is simple. An area of texture pattern is mapped on to an area of the geometric surface, corresponding to a pixel in the final image. This leads to a number of difficulties.
Texture Mapping A 2D texture is defined as a rectangular image. The mapping from this rectangle to an arbitrary shape in a 3D world might involve complicated processing – squaring the circle.