More on Advanced Interfaces, Image Basics Glenn G. Chappell U. of Alaska Fairbanks CS 381 Lecture Notes Friday, November 21, 2003.


Slide 1: More on Advanced Interfaces, Image Basics. Glenn G. Chappell (CHAPPELLG@member.ams.org), U. of Alaska Fairbanks. CS 381 Lecture Notes, Friday, November 21, 2003.

Slide 2 (21 Nov 2003, CS 381): Review: CS 481; Lighting Details
Last time we looked at:
- CS 481 preview. If you have questions about CS 481/681, feel free to ask!
- Some OpenGL lighting details:
  - Two-sided lighting. Front side of polygons: use the front material. Back side: use the back material and reverse the normals.
  - Spotlights: lights with a predominant direction. Exponents are better than cutoffs!
  - Local-viewer mode: compute the viewing direction correctly in the Phong-model specular computation. Often not worth doing; not a bad idea when flying; necessary in VR.
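
The "exponents are better than cutoffs" point can be illustrated with a small sketch (hypothetical code, not from the lecture): a hard cutoff switches the spotlight off abruptly at the cone edge, while an exponent, as with OpenGL's GL_SPOT_EXPONENT, fades the intensity smoothly as the angle from the spot direction grows.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical illustration of spotlight falloff. "cosAngle" is the cosine
// of the angle between the spot direction and the direction to the surface
// point; larger cosAngle means closer to the spotlight's axis.

// Hard cutoff: full intensity inside the cone, none outside -- a visible edge.
double spotHardCutoff(double cosAngle, double cosCutoff) {
    return (cosAngle >= cosCutoff) ? 1.0 : 0.0;
}

// Exponent: intensity falls off smoothly toward the cone edge.
double spotExponent(double cosAngle, double exponent) {
    return std::pow(std::max(cosAngle, 0.0), exponent);
}
```

A higher exponent concentrates the light more tightly along the axis, with no abrupt boundary to call attention to itself.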

Slide 3: One Last OpenGL Lighting Detail: Light-Model Ambient
In OpenGL, ambient light comes either from a light source or from the light-model ambient.
- Set the former with glLightfv(GL_LIGHT…, GL_AMBIENT, …);
- Set the latter with something like this:
    GLfloat lmodel_ambient[] = { 0., 0., 0., 1. };
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient);
The light-model ambient defaults to (0.2, 0.2, 0.2, 1). If you have ambient in your light sources, you may want to turn the light-model ambient off (as above).
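
As a rough sketch of how the two ambient sources combine (the function and type names here are mine, not OpenGL's API): in fixed-function lighting, the material's ambient color multiplies both the light-model ambient and each enabled light's GL_AMBIENT term, which is why leaving both on can wash out a scene.

```cpp
#include <array>
#include <cassert>

using Color = std::array<double, 3>;  // RGB

// Hypothetical sketch of the combined ambient contribution for one light
// (ignoring attenuation and emission): the material ambient scales the sum
// of the light-model ambient and the light's own ambient term.
Color combinedAmbient(const Color& materialAmbient,
                      const Color& lightModelAmbient,  // defaults to 0.2 gray
                      const Color& lightAmbient) {     // per-light GL_AMBIENT
    Color out{};
    for (int i = 0; i < 3; ++i)
        out[i] = materialAmbient[i] * (lightModelAmbient[i] + lightAmbient[i]);
    return out;
}
```

With the default light-model ambient of (0.2, 0.2, 0.2), even a light with zero ambient still produces a 20% ambient floor on a white material, per the formula above.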

Slide 4: More on Advanced Interfaces: Introduction
Recall our discussion of advanced interfaces (object manipulation, driving, flying, etc.). Often the hardest interfaces to write are those that involve arbitrary camera motions (e.g., flying).
Oddly, the implementations we discussed did not keep a record of the camera location; it was hidden somewhere in the viewing matrix.
- This means that fancy interfaces can be written without knowledge of the camera location.
- But if a program needs to know the camera location for some other purpose, then you have a problem. Shortly we will look at a partial solution.

Slide 5: More on Advanced Interfaces: Camera Location — When?
When might a program need to know the camera location? When doing collision detection. Other uses are all variations on this theme: Am I in a room? Am I near an object? Have I walked through a wall? Etc.

Slide 6: More on Advanced Interfaces: Finding the Camera
How do we determine the camera location?
- Assume the model/view and projection transformations are being used "correctly". Then the camera is the world-space point that lands at (0, 0, 0) after the viewing transformation is done.
- So we want to know what point we can multiply by the model/view matrix to get (0, 0, 0). Finding this is equivalent to solving a system of linear equations.
- Fortunately, the hard work is done for us: gluUnProject.
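
The linear-system idea can be sketched directly. This is a hypothetical stand-alone version of what gluUnProject does for us: the camera is the point c satisfying M · c = (0, 0, 0, 1) in homogeneous coordinates, where M is the model/view matrix, so we solve a 4×4 system.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <utility>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>;  // row-major

// Solve m * x = b by Gaussian elimination with partial pivoting.
Vec4 solve(Mat4 m, Vec4 b) {
    for (int col = 0; col < 4; ++col) {
        // Pivot: move the largest remaining entry in this column to the top.
        int piv = col;
        for (int r = col + 1; r < 4; ++r)
            if (std::abs(m[r][col]) > std::abs(m[piv][col])) piv = r;
        std::swap(m[col], m[piv]);
        std::swap(b[col], b[piv]);
        // Eliminate this column from the rows below.
        for (int r = col + 1; r < 4; ++r) {
            double f = m[r][col] / m[col][col];
            for (int c = col; c < 4; ++c) m[r][c] -= f * m[col][c];
            b[r] -= f * b[col];
        }
    }
    // Back substitution.
    Vec4 x{};
    for (int r = 3; r >= 0; --r) {
        double s = b[r];
        for (int c = r + 1; c < 4; ++c) s -= m[r][c] * x[c];
        x[r] = s / m[r][r];
    }
    return x;
}

// Camera location in world coordinates: the point the model/view matrix
// sends to the eye-space origin.
Vec4 cameraLocation(const Mat4& modelview) {
    return solve(modelview, Vec4{0.0, 0.0, 0.0, 1.0});
}
```

For example, a viewing transformation that translates the world by (0, 0, -5) corresponds to a camera at (0, 0, 5).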

Slide 7: More on Advanced Interfaces: Using gluUnProject
The function gluUnProject takes a point in window coordinates backwards through all three transformations (model/view, projection, viewport). But we only want to go back through model/view.
- Solution: create "dummy" projection and viewport transformations, then call gluUnProject.
See whereami.cpp (on the web page) for sample code. This program implements flying through a lit world; the code to determine the camera location is in the function whereami.

Slide 8: More on Advanced Interfaces: Collision Detection
Collision detection means determining when two objects, or an object and the camera, bump into each other.
- Collision detection is an active research field. There is no terribly nice solution to the problem.
- So we look at a very special case: how can we determine whether the camera location is inside a simple object (a cube, a sphere, etc.)?
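
The "very special case" reduces to simple containment tests. Here is a hypothetical sketch (not the course's whereami.cpp code) checking whether a point, such as the camera location, lies inside a sphere or an axis-aligned cube.

```cpp
#include <cassert>

struct Point { double x, y, z; };

// Inside a sphere: squared distance to the center is at most radius^2.
// Using squared distances avoids an unnecessary square root.
bool insideSphere(const Point& p, const Point& center, double radius) {
    double dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
    return dx * dx + dy * dy + dz * dz <= radius * radius;
}

// Inside an axis-aligned cube: each coordinate lies within the cube's
// extent along that axis.
bool insideAxisAlignedCube(const Point& p, const Point& minCorner,
                           double side) {
    return p.x >= minCorner.x && p.x <= minCorner.x + side &&
           p.y >= minCorner.y && p.y <= minCorner.y + side &&
           p.z >= minCorner.z && p.z <= minCorner.z + side;
}
```

A flying interface might call tests like these each frame with the camera location recovered via gluUnProject, e.g. to keep the viewer from passing through walls.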

Slide 9: Image Basics: Introduction
Thus far, we have been looking at how to render a geometry-based 3-D scene. We have done little with pixels, despite the fact that these are the building blocks of all images.
Next we cover Chapter 7 in the blue book ("Discrete Techniques"), which talks about pixel-level and image-level operations. Remember: "All graphics is 2-D graphics."
But first, a bit on images and the pipeline.

Slide 10: Image Basics: Terminology
- A raster image is a 2-D array of color values that represents a visible picture. So a raster image is what ends up in the frame buffer. Typically, we display a screen-aligned rectangle, with each image pixel corresponding to one screen pixel. Common graphics file formats (GIF, JPEG) hold raster images.
- We will use the following terms interchangeably: raster image (or just image), pixel rectangle, pixmap.
- A bitmap is a special kind of image in which each pixel is represented by a single bit (off/on). Some people use "bitmap" to refer to general images; we will not.
- A texture is an image that can be painted on a polygon. Such a polygon is said to be textured. The process is known as texture mapping (or just texturing). Pixels in a texture image are often referred to as texels.

Slide 11: Image Basics: Primitives
OpenGL (along with other graphics libraries) provides the following types of primitives: points, polylines, filled polygons, raster images, and bitmaps.
So far, we have dealt mostly with the OpenGL geometry pipeline, which handles the first three. Now we look at the image pipeline, which handles the last two.

Slide 12: Image Basics: More of the Pipeline
[Diagram: top, the geometry pipeline — Vertex Data → Vertex Operations → Rasterization → Fragments → Fragment Operations → Frame Buffer. Bottom, the image pipeline — Pixel Data → Pixel Operations → Rasterization → Fragment Operations → Frame Buffer, with a pixmap at each end.]
- This is a huge oversimplification, as usual.
- Algorithmically, the image pipeline is the simpler. Think: how do you rasterize a pixmap?
- The image pipeline has the same data type at start and end (a pixmap), so it can run backwards.
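
The "how do you rasterize a pixmap?" question has a simple answer, sketched below under simplifying assumptions (no clipping, zoom, or format conversion; the types are mine, not OpenGL's): a screen-aligned image rasterizes by copying each image pixel to the corresponding frame-buffer pixel, with no interpolation at all.

```cpp
#include <cassert>
#include <vector>

// Hypothetical minimal pixmap / frame-buffer model: a width-by-height grid
// of packed color values, stored row by row.
struct Buffer {
    int width, height;
    std::vector<unsigned> pixels;  // one packed color per pixel
    Buffer(int w, int h) : width(w), height(h), pixels(w * h, 0u) {}
    unsigned& at(int x, int y) { return pixels[y * width + x]; }
};

// "Rasterizing" a pixmap: copy the image into the frame buffer at
// (destX, destY). Compare this with scan-converting a polygon -- there is
// no edge walking or interpolation, just a per-pixel copy.
void drawPixmap(Buffer& fb, const Buffer& img, int destX, int destY) {
    for (int y = 0; y < img.height; ++y)
        for (int x = 0; x < img.width; ++x)
            fb.at(destX + x, destY + y) = img.pixels[y * img.width + x];
}
```

Running the pipeline backwards corresponds to copying the other way, from frame buffer to pixmap, which is what glReadPixels does in OpenGL.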

Slide 13: Image Basics: Issues
Geometry is better than images:
- It can be transformed (model/view, etc.).
- It can easily be edited.
- It can be faster when dealing with a small list of vertices.
- It is a more compact representation when dealing with polygonal objects.
Images are better than geometry:
- We can be more confident of a correct appearance.
- They are faster where appropriate.
- They are more appropriate for representing real-world images (photographs, etc.).
Where have we seen issues like these before? Text fonts. Outline/stroke fonts are geometry; bitmap fonts are images. The same advantages and disadvantages extend to more general contexts.

Slide 14: Discrete Techniques: Overview
In the next few class meetings, we will cover:
- How images are stored, used, and operated on.
- Tossing around pixmaps and pixels in OpenGL.
- Texture mapping: painting an image on a polygon.
- Generating interesting textures.

