1 Perception and VR, MONT 104S, Fall 2008. Lecture 20: Computer Graphics and VR


2 Image File Compression
Images on the web are stored as separate files that must be downloaded from the server to the client to be viewed. If the image file is large, it will take a long time to download. Therefore, it is important to keep the file size as small as possible without losing much image quality.
Ways to reduce file size:
1. Reduce the size of the image (in pixels).
2. Reduce the resolution of the image.
3. Reduce the bit depth of the image.
4. Use image file compression (GIF or JPEG).
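The first three strategies can be seen in a quick back-of-the-envelope calculation of uncompressed image size. This is a sketch for illustration; the function name and the example dimensions are not from the slides.

```python
# Uncompressed image size = width * height * (bit depth / 8) bytes.
# Shrinking the pixel dimensions or the bit depth reduces the file
# before any GIF or JPEG compression is even applied.

def raw_size_bytes(width, height, bit_depth):
    """Bytes needed to store an uncompressed image."""
    return width * height * bit_depth // 8

full = raw_size_bytes(800, 600, 24)      # 24-bit color, full size
smaller = raw_size_bytes(400, 300, 24)   # half the dimensions: 1/4 the bytes
shallower = raw_size_bytes(800, 600, 8)  # 8-bit color (256 colors): 1/3 the bytes

print(full, smaller, shallower)  # 1440000 360000 480000
```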

3 GIF Files
The Graphics Interchange Format (GIF) is excellent for compressing images with large areas of uniform color.
It is "lossless," meaning that the original image can be regenerated with no loss of information.
GIF supports transparency in images.
GIF supports animations (animated GIFs).
This format is limited to 256 colors.

4 JPEG Compression
The JPEG (Joint Photographic Experts Group) compression method works well for complex images, such as photographs.
JPEG supports millions of colors (up to a bit depth of 24).
JPEG is "lossy," meaning that the original image cannot be regenerated exactly from the compressed file: some information is lost in the conversion to JPEG.

5 Computer Graphics in VR
In virtual reality applications, the graphics are often computed as the program runs.
The color value for each pixel is stored in computer memory (RAM); values for each pixel on the monitor are stored in the frame buffer.
If we are using 24-bit color, each pixel needs 24 bits. There are 8 bits in a byte, so each pixel needs 3 bytes.
If the screen is 600 x 800 resolution, how many pixels are there? How many bytes are needed to store color values for the entire screen?
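The question on the slide above can be worked out directly:

```python
# Worked answer for a 600 x 800 screen at 24-bit color.
pixels = 600 * 800            # 480,000 pixels on screen
bytes_per_pixel = 24 // 8     # 24 bits = 3 bytes per pixel
frame_buffer_bytes = pixels * bytes_per_pixel

print(pixels, frame_buffer_bytes)  # 480000 1440000 (about 1.4 MB)
```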

6 Creating Graphics for VR
The synthetic camera model:
1) Define the 3D object(s) in space.
2) Specify lighting, shading, and material properties.
3) Specify camera properties (position, orientation, projection system, etc.).
4) Imaging process:
   i) Transformation: put the object in the camera's coordinate system.
   ii) Clipping: eliminate points outside the camera's field of view.
   iii) Projection: convert from 3D to 2D coordinates.
   iv) Rasterization: represent the projected objects as pixels in the frame buffer.

7 Building 3D Models
Objects in the computer graphics world are specified by the 3D positions of corners (called vertices).
All curves are made up of line segments.
All surfaces are made up of polygons (e.g., triangles).
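A minimal sketch of how such a model might be stored: a list of vertex positions plus triangles that index into it. The square built here is purely illustrative.

```python
# A flat unit square represented as 4 vertices and 2 triangles.
# Curved surfaces are approximated the same way, just with many
# more small triangles.

vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (1.0, 1.0, 0.0),  # vertex 2
    (0.0, 1.0, 0.0),  # vertex 3
]

# Each triangle is a triple of indices into the vertex list.
triangles = [(0, 1, 2), (0, 2, 3)]

print(len(vertices), len(triangles))  # 4 2
```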

8 Lighting and Shading
The physics of light is too complex to compute in real time, so simplified models of lighting and shading are used to make objects appear realistically shaded.
The position, color, and type of light sources are specified.
Material properties indicate the color and shininess of the material.
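One of the simplest such models is Lambertian diffuse shading: brightness is proportional to the cosine of the angle between the surface normal and the direction to the light. The slides do not name a specific model, so this is an illustrative sketch.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def diffuse(normal, to_light, material_color, light_intensity=1.0):
    """Shaded color = material color * light intensity * max(0, n . l)."""
    n = normalize(normal)
    l = normalize(to_light)
    factor = max(0.0, dot(n, l)) * light_intensity
    return tuple(c * factor for c in material_color)

# Light directly above an upward-facing surface: full material color.
print(diffuse((0, 1, 0), (0, 1, 0), (0.8, 0.2, 0.2)))  # (0.8, 0.2, 0.2)
# Light behind the surface: no illumination at all.
print(diffuse((0, 1, 0), (0, -1, 0), (0.8, 0.2, 0.2)))  # (0.0, 0.0, 0.0)
```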

9 Specifying Camera and Object Positions
Specifying the camera position: the position of the simulated camera relative to the objects in the scene determines what will appear on the screen. Changing the simulated camera position over time gives the appearance of moving around within the scene.
Specifying the object position: objects can be positioned in the 3D world using transformations. An object can be translated, rotated, or scaled.
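The three transformations can be sketched as small functions on a 3D point (hand-rolled here for clarity; real graphics systems express these as 4x4 matrices):

```python
import math

def translate(p, dx, dy, dz):
    """Move a point by an offset along each axis."""
    x, y, z = p
    return (x + dx, y + dy, z + dz)

def scale(p, s):
    """Uniformly scale a point away from the origin."""
    x, y, z = p
    return (x * s, y * s, z * s)

def rotate_z(p, angle):
    """Rotate a point around the z axis by `angle` radians."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

p = (1.0, 0.0, 0.0)
p = scale(p, 2.0)               # (2, 0, 0)
p = rotate_z(p, math.pi / 2)    # (0, 2, 0), up to floating-point rounding
p = translate(p, 0.0, 0.0, 5.0) # (0, 2, 5)
```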

10 Clipping
The simulated camera in a graphics system cannot see everything; it has a limited field of view.
The region of the synthetic world that the camera can see is specified by the clipping volume.
Objects (or parts of objects) that are outside the clipping volume are not rendered in the image.
[Figure: camera position and clipping volume, with objects inside the volume rendered and objects outside it not rendered.]
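The simplest possible clipping test treats the clipping volume as an axis-aligned box and asks whether a point lies inside it. Real systems clip against a view frustum, but the idea is the same; the bounds below are illustrative.

```python
def inside_clip_volume(point, lo, hi):
    """True if the point is within [lo, hi] on every axis."""
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))

# An example clipping volume: x and y in [-1, 1], depth in [0.1, 100].
lo, hi = (-1.0, -1.0, 0.1), (1.0, 1.0, 100.0)

print(inside_clip_volume((0.5, 0.0, 10.0), lo, hi))  # True  -> rendered
print(inside_clip_volume((2.0, 0.0, 10.0), lo, hi))  # False -> clipped
```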

11 Projection and Rasterization
The projection converts the 3D model of the world to a 2D image. Different types of projection can be used:
Perspective projection: uses the pinhole camera model to project onto the image plane.
Orthographic projection: simply sets the z value to zero. The lines of projection are perpendicular to the image plane.
Rasterization converts the 2D image into a pixelated image stored in the frame buffer. Problems:
How do you draw a straight line on a pixelated monitor?
How do you fill the inside of a polygon? (What's inside?)
How do you keep track of which object is in front of other objects (so you only render the nearest object)?
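The two projections can be sketched for a single point. Perspective (pinhole) divides x and y by depth, so farther objects appear smaller; orthographic simply drops z. The focal distance `d` and sign conventions here are illustrative assumptions, not from the slides.

```python
def perspective(point, d=1.0):
    """Pinhole projection onto an image plane at focal distance d."""
    x, y, z = point
    return (d * x / z, d * y / z)

def orthographic(point):
    """Drop the depth coordinate; projection lines are perpendicular
    to the image plane."""
    x, y, z = point
    return (x, y)

p = (2.0, 1.0, 4.0)
print(perspective(p))   # (0.5, 0.25): divided by depth, so farther = smaller
print(orthographic(p))  # (2.0, 1.0): depth has no effect on size
```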

12 Review: Binary and Hexadecimal Numbers
1. What is the value of the following binary number in decimal? 110101
2. What is the value of the following hexadecimal number in decimal? 1B3
3. Convert the following binary number to hexadecimal and to decimal: 1111011
4. In class: work on Homework 7.
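For checking your hand conversions, Python's built-in base conversion gives the answers directly:

```python
# Worked versions of the review questions above.
print(int("110101", 2))        # binary 110101  -> 53 in decimal
print(int("1B3", 16))          # hex 1B3        -> 435 in decimal
print(int("1111011", 2))       # binary 1111011 -> 123 in decimal
print(hex(int("1111011", 2)))  # -> 0x7b, i.e. hexadecimal 7B
```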
