Graphics Pipeline
Goals
- Understand the difference between inverse-mapping and forward-mapping approaches to computer graphics rendering
- Be familiar with the graphics pipeline:
  - from the transformation perspective
  - from the operation perspective
Approaches to computer graphics rendering
Ray-tracing approach
- Inverse-mapping approach: starts from pixels
- A ray is traced from the camera through each pixel
- Accounts for reflection, refraction, and diffraction, tracing secondary rays recursively
- High-quality graphics, but computationally expensive
- Not for real-time applications

Pipeline approach
- Forward-mapping approach
- Used by OpenGL and DirectX
- State-based approach: input is 2D or 3D data, output is the frame buffer; modify the state to modify the functionality
- For real-time and interactive applications, especially games
Ray-tracing – Inverse mapping
for every pixel, construct a ray from the eye
    for every object in the scene
        intersect the ray with the object
    find the closest intersection with the ray
    compute the normal at the point of intersection
    compute the color for the pixel
    shoot secondary rays
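A minimal sketch of this loop in C++, assuming a toy scene of spheres and simple diffuse shading; the Vec3 and Sphere types and the ASCII output are illustrative, not part of any real API:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, double s){ return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec3 a, Vec3 b)     { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v)         { return v * (1.0 / std::sqrt(dot(v, v))); }

struct Sphere { Vec3 center; double radius; };

// Ray-sphere intersection: smallest positive t along the ray, or -1 on miss.
double intersect(Vec3 origin, Vec3 dir, const Sphere& s) {
    Vec3 oc = origin - s.center;
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - 4.0 * c;      // dir is unit length, so a == 1
    if (disc < 0) return -1.0;
    double t = (-b - std::sqrt(disc)) / 2.0;
    return t > 0 ? t : -1.0;
}

int main() {
    const int W = 64, H = 32;
    std::vector<Sphere> scene = {{{0, 0, -5}, 1.0}};
    Vec3 eye = {0, 0, 0}, light = normalize({1, 1, 1});
    for (int py = 0; py < H; ++py) {
        for (int px = 0; px < W; ++px) {
            // Construct a ray from the eye through this pixel.
            Vec3 dir = normalize({(px - W/2.0)/W, (py - H/2.0)/H, -1.0});
            // Find the closest intersection over all objects.
            double best = 1e30; const Sphere* hit = nullptr;
            for (const Sphere& s : scene) {
                double t = intersect(eye, dir, s);
                if (t > 0 && t < best) { best = t; hit = &s; }
            }
            if (hit) {
                // Normal at the hit point, then a simple diffuse color.
                Vec3 p = eye + dir * best;
                Vec3 n = normalize(p - hit->center);
                double shade = std::max(0.0, dot(n, light));
                // (A full tracer would shoot secondary rays here.)
                std::printf("%c", shade > 0.5 ? '#' : '.');
            } else {
                std::printf(" ");
            }
        }
        std::printf("\n");
    }
}
```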
Pipeline – Forward mapping
Start from the geometric primitives to find the values of the pixels
The general view (Transformations)
3D scene, camera parameters, and light sources
    → Graphics pipeline: Modeling Transformation → Lighting → Viewing Transformation → Projection Transformation → Clipping → Viewport Transformation → Rasterization
    → Framebuffer → Display
Input and output of the graphics pipeline
Input:
- Geometric model: objects and light sources, their geometry and transformations
- Lighting model: description of light and object properties
- Camera model: eye position, viewing volume
- Viewport model: pixel grid onto which the view window is mapped

Output:
- Colors suitable for framebuffer display
Graphics pipeline: what is it?
The nature of the processing steps needed to display a computer graphic, and the order in which they must occur.
- Primitives are processed in a series of stages
- Each stage forwards its result to the next stage
- The pipeline can be drawn and implemented in different ways:
  - some stages may be in hardware, others in software
  - optimizations and additional programmability are available at some stages
- Two ways of viewing the pipeline: the transformation perspective and the operation perspective
Modeling transformation
- 3D models are defined in their own coordinate system (object space)
- Modeling transforms orient the models within a common coordinate frame (world space)
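In fixed-function OpenGL the modeling transform is issued on the GL_MODELVIEW stack. A minimal sketch, assuming a GL context is already set up; drawModel() is a hypothetical helper, and the placement values are arbitrary:

```cpp
#include <GL/gl.h>

// Hypothetical helper that emits the model's vertices in object space.
void drawModel(void);

void drawScene(void) {
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    // Object space -> world space: rotate the model 45 degrees about
    // the y axis, then place it at (2, 0, -1).
    glTranslatef(2.0f, 0.0f, -1.0f);
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
    drawModel();
    glPopMatrix();
}
```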
Lighting (shading)
- Vertices are lit (shaded) according to material properties, surface properties (the normal), and light sources
- Local lighting model (diffuse, ambient, Phong, etc.)
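As a sketch of the local diffuse (Lambert) term such a model evaluates per vertex, assuming unit vectors and a single light; the names diffuse, kd, and id are illustrative, not from any real API:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// n: unit surface normal at the vertex; l: unit direction toward the light;
// kd: diffuse reflectance of the material; id: intensity of the light.
// Ambient and specular (Phong) terms would be added on top of this.
float diffuse(Vec3 n, Vec3 l, float kd, float id) {
    return kd * id * std::max(0.0f, dot(n, l));
}
```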
Viewing transformation
- Maps world space to eye (camera) space
- The viewing position is transformed to the origin, and the viewing direction is oriented along some axis (usually z)
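In the fixed-function API this is typically done with gluLookAt, which builds the world-to-eye matrix from an eye position, a target point, and an up vector. A sketch, assuming a GLU setup; the numeric values are arbitrary:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

void setCamera(void) {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 5.0,   // eye position in world space
              0.0, 0.0, 0.0,   // point the camera looks at
              0.0, 1.0, 0.0);  // up direction
}
```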
Projection transformation (Perspective/Orthographic)
- Specifies the view volume that will ultimately be visible to the camera
- Two clipping planes are used: the near plane and the far plane
- Usually perspective or orthographic
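A sketch of specifying such a view volume with GLU, assuming a perspective camera; the field of view and plane distances are arbitrary:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

void setProjection(int w, int h) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0,                  // vertical field of view (degrees)
                   (double)w / (double)h, // aspect ratio
                   0.1,                   // near clipping plane
                   100.0);                // far clipping plane
    // Orthographic alternative: glOrtho(-1, 1, -1, 1, 0.1, 100.0);
}
```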
Clipping
- The view volume is transformed into a standard cube that extends from -1 to 1, producing normalized device coordinates (NDC)
- Portions of an object outside the NDC cube are removed (clipped)
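For illustration, the standard containment test: before the perspective divide, a clip-space point (x, y, z, w) is inside the view volume iff -w <= x, y, z <= w; dividing by w then lands it in the NDC cube. A minimal sketch (the Clip type is illustrative):

```cpp
// A vertex after the projection matrix, in homogeneous clip space.
struct Clip { float x, y, z, w; };

// True if the point survives clipping; dividing x, y, z by w afterwards
// yields normalized device coordinates in [-1, 1].
bool insideViewVolume(Clip p) {
    return -p.w <= p.x && p.x <= p.w &&
           -p.w <= p.y && p.y <= p.w &&
           -p.w <= p.z && p.z <= p.w;
}
```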
Viewport transformation (to screen space)
Maps NDC to the 3D viewport:
- xy gives the position in the screen window
- z gives the depth of each point
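A sketch of the arithmetic behind this mapping for a viewport set with glViewport(x0, y0, w, h), assuming the default glDepthRange of [0, 1]; the Screen type is illustrative:

```cpp
struct Screen { float x, y, depth; };

// Scale and offset NDC in [-1, 1] into window coordinates, and map
// NDC z into the [0, 1] depth range.
Screen viewportTransform(float xn, float yn, float zn,
                         int x0, int y0, int w, int h) {
    Screen s;
    s.x     = x0 + (xn + 1.0f) * 0.5f * w;
    s.y     = y0 + (yn + 1.0f) * 0.5f * h;
    s.depth = (zn + 1.0f) * 0.5f;
    return s;
}
```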
Rasterization (scan conversion)
- Rasterizes objects into pixels
- Interpolates values as it goes (color, depth, etc.)
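A sketch of the attribute interpolation a rasterizer performs per covered pixel, using barycentric weights; perspective-correct interpolation is omitted, and the Attr type is illustrative:

```cpp
// Per-vertex attributes to be interpolated across a triangle.
struct Attr { float r, g, b, depth; };

// Given a pixel's barycentric coordinates (b0, b1, b2) with
// b0 + b1 + b2 == 1, blend each attribute with the same weights.
Attr interpolate(Attr v0, Attr v1, Attr v2, float b0, float b1, float b2) {
    Attr f;
    f.r     = b0*v0.r     + b1*v1.r     + b2*v2.r;
    f.g     = b0*v0.g     + b1*v1.g     + b2*v2.g;
    f.b     = b0*v0.b     + b1*v1.b     + b2*v2.b;
    f.depth = b0*v0.depth + b1*v1.depth + b2*v2.depth;
    return f;
}
```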
Summary of transformations
glMatrixMode(GL_MODELVIEW);
// glTranslate, glRotate, glScale
// gluLookAt
glMatrixMode(GL_PROJECTION);
// glTranslate, glRotate, glScale
// gluPerspective
// glFrustum
glViewport(0, 0, w, h);
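Putting the summary together, a typical GLUT-style reshape handler might look like the sketch below; the field of view, plane distances, and camera position are arbitrary:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

void reshape(int w, int h) {
    // Viewport transformation: map NDC onto the whole window.
    glViewport(0, 0, w, h);
    // Projection transformation.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (double)w / (double)h, 0.1, 100.0);
    // Viewing transformation at the base of GL_MODELVIEW;
    // modeling transforms are pushed on top of it when drawing.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
}
```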
DirectX transformations
- World transformation: Device.Transform.World = Matrix.RotationZ(…)
  (OpenGL has no separate world transform; it is folded into GL_MODELVIEW)
- View transformation: Device.Transform.View = Matrix.LookAtLH(…)
- Projection transformation: Device.Transform.Projection = Matrix.PerspectiveFovLH(…)
OpenGL pipeline (operations)
OpenGL pipeline: Display List, Vertex Operation, Primitive Assembly, Pixel Transfer
Display list
- A group of OpenGL commands that have been stored (compiled) for later execution. Vertex and pixel data can be stored/cached in a display list. (Why? See the sketch after this list.)

Vertex operation
- Each vertex and its normal are transformed by the GL_MODELVIEW matrix from object coordinates to eye coordinates. When lighting is enabled, the lighting calculation for a vertex is performed using the transformed vertex and normal data, producing a new color for the vertex.

Primitive assembly
- The geometric primitives are transformed by the projection matrix, then clipped against the viewing-volume clipping planes. After that, the viewport transform is applied to map the 3D scene to screen-space coordinates. Lastly, if culling is enabled, the culling test is performed.

Pixel transfer operation
- Unpacked pixels may undergo transfer operations such as scaling, wrapping, and clamping. The transferred data are either stored in texture memory or rasterized directly to fragments.
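One answer to the "why" above: a compiled list can be cached on the driver/GPU side and replayed cheaply every frame. A minimal sketch, assuming a GL context; the triangle data are arbitrary:

```cpp
#include <GL/gl.h>

GLuint sceneList;

void buildList(void) {
    sceneList = glGenLists(1);
    glNewList(sceneList, GL_COMPILE);  // store the commands, don't execute yet
    glBegin(GL_TRIANGLES);
    glNormal3f(0.0f, 0.0f, 1.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();
}

void display(void) {
    glCallList(sceneList);             // replay the cached commands
}
```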
OpenGL pipeline: Texture Memory, Rasterization, Fragment Operation, Feedback
Texture memory
- Texture images are loaded into texture memory to be applied onto geometric objects.

Rasterization
- The conversion of both geometric and pixel data into fragments. Fragments form a rectangular array holding color, depth, line width, point size, and anti-aliasing calculations (GL_POLYGON_SMOOTH). When requested, the interior pixels of a polygon are filled. A fragment corresponds to a pixel in the frame buffer.

Fragment operation
- Fragments are converted to pixels in the frame buffer. The first step in this stage is to generate a texture element (texel) from texture memory and apply it to each fragment. Fog calculations are then performed. When enabled, several fragment tests are performed, in the order: scissor test, alpha test, stencil test, and depth test. Finally, blending, dithering, logical operations, and masking by bitmasks are performed, and the resulting pixel data are stored in the frame buffer. (A sketch follows this list.)

Feedback
- Used to get OpenGL's current state and information (the glGet*() and glIsEnabled() commands exist for exactly that). A rectangular area of pixel data can be read from the frame buffer using glReadPixels(). Fully transformed vertex data can be obtained using the feedback rendering mode and the feedback buffer.
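A sketch of enabling some of the fragment tests and operations named above, plus the glReadPixels readback mentioned under Feedback; the scissor rectangle, alpha threshold, and buffer size are arbitrary:

```cpp
#include <GL/gl.h>

void setupFragmentStages(void) {
    glEnable(GL_SCISSOR_TEST);
    glScissor(0, 0, 256, 256);          // keep fragments inside this rectangle
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.1f);      // discard nearly transparent fragments
    glEnable(GL_DEPTH_TEST);            // keep only the closest fragment
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}

// Feedback-style readback: buf must hold 256 * 256 * 4 bytes.
void readBackPixels(unsigned char* buf) {
    glReadPixels(0, 0, 256, 256, GL_RGBA, GL_UNSIGNED_BYTE, buf);
}
```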
Zoom into the OpenGL pipeline (see the OpenGL blue book)
Summary of operations
Fixed Pipeline → Programmable Pipeline → High-level Shading Language