Game Programming (3D Pipeline Overview)


1 Game Programming (3D Pipeline Overview)
Spring 2016

2 3D Pipeline Overview
3D Game Engine: no matter how powerful the target platform is, you will always need more.
Contents
- Fundamental data types: vertex, color, texture
- Geometry formats
- Graphics pipeline
  - Visibility determination: clipping, culling, occlusion testing
  - Resolution determination (LOD)
  - Transform, lighting
  - Rasterization

3 3D Rendering Pipeline (for direct illumination)
- Transform into the 3D world coordinate system
- Illuminate according to lighting and reflection
- Transform into the 3D camera coordinate system
- Transform into the 2D camera coordinate system
- Clip primitives outside the camera's view
- Transform into the image coordinate system
- Draw pixels (includes texturing, hidden surface removal, ...)

4 Coordinate Systems
- Left-handed coordinate system (typical computer graphics reference system: Blitz3D, DarkBASIC)
- Right-handed coordinate system (conventional Cartesian reference system)
(figure: X, Y, Z axes for each convention)

5 Transformations
Translate, scale, rotate.
Transformations occur about the origin of the coordinate system's axes.

6 Order of Transformations Makes a Difference
Box centered at the origin:
- Rotate about Z by 45 degrees, then translate along X by 1
- Translate along X by 1, then rotate about Z by 45 degrees
The two orders give different results, as the sketch below illustrates.
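A minimal sketch in plain C (the point and helper names are my own illustration) comparing the two orders on a single point of the box:

    #include <stdio.h>
    #include <math.h>

    /* Rotate a 2D point about the origin by `deg` degrees, counter-clockwise. */
    static void rotate_z(double *x, double *y, double deg) {
        const double PI = 3.14159265358979323846;
        double r = deg * PI / 180.0;
        double nx = *x * cos(r) - *y * sin(r);
        double ny = *x * sin(r) + *y * cos(r);
        *x = nx; *y = ny;
    }

    int main(void) {
        /* A corner of the box at (1, 0). */
        double x1 = 1, y1 = 0, x2 = 1, y2 = 0;

        /* Order A: rotate about Z by 45, then translate along X by 1. */
        rotate_z(&x1, &y1, 45.0);
        x1 += 1.0;

        /* Order B: translate along X by 1, then rotate about Z by 45. */
        x2 += 1.0;
        rotate_z(&x2, &y2, 45.0);

        printf("rotate-then-translate: (%.3f, %.3f)\n", x1, y1); /* (1.707, 0.707) */
        printf("translate-then-rotate: (%.3f, %.3f)\n", x2, y2); /* (1.414, 1.414) */
        return 0;
    }

The two printed points differ, which is exactly the effect the slide shows with the rotated boxes.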

7 Hierarchy of Coordinate Systems
Local coordinate systems arranged in a hierarchy, also called a scene graph. Called skeletons in DarkBASIC because the hierarchy is usually used to represent people (arms, legs, body).

8 Transformations

9 Viewing Transformations
(figure: world X, Y, Z axes and the viewing transformation into camera space)

10 Perspective Projection
The camera: parallel projection vs. perspective projection (figure).

11

12 Lighting
- Ambient: basic, even illumination of all objects in a scene
- Directional: all light rays are parallel, in one direction, like the sun
- Point: light rays emanate from a central point in all directions, like a light bulb
- Spot: a point light with a limited cone and a fall-off in intensity, like a flashlight; defined by a cone angle and a penumbra angle (where the light starts to drop off to zero)

13 Diffuse Reflection (Lambertian Lighting Model)
The greater the angle between the normal and the vector from the point to the light source, the less light is reflected. Most light is reflected when the angle is 0 degrees, none is reflected at 90 degrees.
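A minimal sketch of the Lambertian term in C (the vector helpers and the sample normal/light values are my own illustration, not from the slides):

    #include <stdio.h>
    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static Vec3 normalize(Vec3 v) {
        double len = sqrt(dot(v, v));
        Vec3 r = { v.x/len, v.y/len, v.z/len };
        return r;
    }

    /* Lambertian diffuse: reflected light falls off with the cosine of the
       angle between the surface normal N and the direction L toward the
       light, clamped to zero once the angle exceeds 90 degrees. */
    static double diffuse(Vec3 n, Vec3 l, double light_intensity) {
        double d = dot(normalize(n), normalize(l));
        return light_intensity * (d > 0.0 ? d : 0.0);
    }

    int main(void) {
        Vec3 n = { 0, 1, 0 };   /* surface facing straight up          */
        Vec3 l = { 0, 1, 1 };   /* light 45 degrees away from normal   */
        printf("diffuse = %.3f\n", diffuse(n, l, 1.0)); /* about 0.707 */
        return 0;
    }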

14 Specular Reflection (Phong Lighting Model)
Maximum specular reflectance occurs when the viewpoint lies along the path of the perfectly reflected ray (when alpha, the angle between them, is zero). Specular reflectance falls off quickly as alpha increases; the falloff is approximated by cos^n(alpha). The exponent n varies from 1 to several hundred, depending on the material being modelled: n = 1 gives a broad, gentle falloff, higher values simulate a sharp, focused highlight, and for a perfect reflector n would be infinite.
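A minimal sketch of the cos^n(alpha) falloff in C (the reflection formula R = 2(N.L)N - L and the sample directions are my own illustration; all vectors are assumed normalized):

    #include <stdio.h>
    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 scale(Vec3 v, double s) { Vec3 r = { v.x*s, v.y*s, v.z*s }; return r; }
    static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }

    /* Phong specular term: cos(alpha)^n, where alpha is the angle between
       the view direction V and the reflection R of the light direction L
       about the normal N. */
    static double specular(Vec3 n, Vec3 l, Vec3 v, double shininess) {
        Vec3 r = sub(scale(n, 2.0 * dot(n, l)), l);   /* R = 2(N.L)N - L */
        double c = dot(r, v);
        return c > 0.0 ? pow(c, shininess) : 0.0;
    }

    int main(void) {
        Vec3 n = { 0, 1, 0 };
        Vec3 l = { 0, 1, 0 };            /* light straight above the surface */
        Vec3 v = { 0, 0.9806, 0.1961 };  /* viewer ~11 degrees off R         */
        printf("n = 1:   %.3f\n", specular(n, l, v, 1.0));   /* broad falloff  */
        printf("n = 100: %.3f\n", specular(n, l, v, 100.0)); /* sharp highlight */
        return 0;
    }

The same small angle gives about 0.98 for n = 1 but only about 0.14 for n = 100, matching the "large n vs. small n" comparison on the next slide.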

15 Falloff in Phong Shading
(figure: highlight shape for large n vs. small n)

16 Approximating Curved Surfaces with Flat Polygons
Flat Shading – each polygon face has a normal that is used to perform lighting calculations.

17 Texture Maps Used in Tank Game

18 Fundamental Data Types
Vertices
- Stored as XYZ coordinates in a sequential manner
- Most of today's graphics processing units (GPUs) use only floats as their internal format
Simple approach
- Can be used for primitives with triangles that share vertices between their faces
- Ex) Triangle 0, 1st vertex; Triangle 0, 2nd vertex; Triangle 0, 3rd vertex; Triangle 1, 1st vertex; Triangle 1, 2nd vertex; Triangle 1, 3rd vertex; (...)
- Ex) a cube (8 vertices, 6 faces, 12 triangles): 12 (triangles) x 3 (vertices) x 3 (floats) x 4 (bytes) = 432 bytes
- Disadvantage: the same vertices are repeated many times

19 Fundamental Data Types
Indexed Primitives
Two lists:
- Vertex list: the vertices, each stored once
- Index list: the indices of the vertices for each face
Ex) a cube (8 vertices, 12 triangles):
- vertex list: 8 (vertices) x 3 (floats) x 4 (bytes) = 96 bytes
- index list: 12 (triangles) x 3 (vertex indices, unsigned short) x 2 (bytes) = 72 bytes
- total: 96 + 72 = 168 bytes (about 40% of the 432 bytes above)
Advantages
- Sending roughly half the data is roughly twice as fast
- All phases in the pipeline can work with this format
Disadvantage
- Additional burden: a vertex shared by two faces that have different material identifiers or texture coordinates cannot be represented by a single index
A minimal sketch of the cube layout follows below.
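A minimal sketch in C of the indexed layout for a unit cube (the index values are one valid triangulation, chosen only for illustration):

    #include <stdio.h>

    /* 8 unique corner positions of a unit cube. */
    static const float cube_vertices[8][3] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   /* back face corners  */
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1},   /* front face corners */
    };

    /* 12 triangles (2 per face), each as 3 indices into cube_vertices. */
    static const unsigned short cube_indices[12][3] = {
        {0,1,2}, {0,2,3},  /* back   */
        {4,6,5}, {4,7,6},  /* front  */
        {0,4,5}, {0,5,1},  /* bottom */
        {3,2,6}, {3,6,7},  /* top    */
        {0,3,7}, {0,7,4},  /* left   */
        {1,5,6}, {1,6,2},  /* right  */
    };

    int main(void) {
        printf("vertex list: %zu bytes\n", sizeof(cube_vertices)); /* 8*3*4 = 96  */
        printf("index list:  %zu bytes\n", sizeof(cube_indices));  /* 12*3*2 = 72 */
        printf("total:       %zu bytes\n",
               sizeof(cube_vertices) + sizeof(cube_indices));      /* 168 vs 432  */
        return 0;
    }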

20 Fundamental Data Types
Quantization
- A lossy technique: minimize the loss while achieving additional gains
- Downsampling: storing values in lower-precision data types, e.g. coding floats into unsigned shorts or unsigned bytes
- Decompression (= reconstruction)
- T&C ("truncate and center") methods: e.g. truncate the floating-point value to an integer to encode; to decode, decompress by adding 0.5 (rounding to the center of the interval)
A small sketch of such a quantizer follows below.
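A minimal sketch (my own illustration, not the book's exact scheme) of quantizing a float in [0, 1] to an unsigned byte by truncation and reconstructing it at the interval center:

    #include <stdio.h>

    /* Quantize a float in [0, 1] to one of 256 buckets (truncation). */
    static unsigned char quantize(float v) {
        int q = (int)(v * 256.0f);               /* truncate */
        return (unsigned char)(q > 255 ? 255 : q);
    }

    /* Reconstruct at the center of the bucket by adding 0.5. */
    static float reconstruct(unsigned char q) {
        return ((float)q + 0.5f) / 256.0f;
    }

    int main(void) {
        float v = 0.731f;
        unsigned char q = quantize(v);
        printf("%.4f -> %u -> %.4f\n", v, q, reconstruct(q));
        /* Worst-case error is half a bucket: 1/512, about 0.002. */
        return 0;
    }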

21 Fundamental Data Types
Color
- RGB (or RGBA) color space; e.g. in floating point, black is (0,0,0) and white is (1,1,1)
- 24-bit modes: colors coded as bytes are internally supported by most APIs and GPUs
- Some special cases: Hue-Saturation-Brightness, Cyan-Magenta-Yellow, BGR colors (Targa texture format), luminance values

22 Fundamental Data Types
Alpha
- Encodes transparency (a 32-bit RGBA value)
- The lower the value, the less opaque: 0 = invisible (fully transparent), 255 = fully opaque
- Disadvantage: embedding alpha values into texture maps adds a fourth byte per texel, growing a 24-bit map by one third
- To save precious memory, use a regular RGB map and specify alpha per vertex instead

23 Fundamental Data Types
Texture Mapping
- Increases the visual appeal of a scene by simulating the appearance of different materials
- Two issues: which texture map to use for each primitive, and how the texture will wrap around it
- Side effect of shared vertices: a single vertex can need more than one set of texture coordinates
- Multitexturing (= multipass rendering) (chap. 17): layer several textures on the same primitive to simulate a combination of them

24 Fundamental Data Types
How textures are mapped onto triangles
- Each vertex carries (U, V) coordinates that map it into a virtual texture space, usually stored as floats in the range [0, 1]
- Special effects: reflection maps, computed on the fly, create the illusion of reflection by applying the texture
A sketch of the (U, V) to texel lookup follows below.
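A minimal sketch (an assumption about one common convention, not the slide's specific API) of turning a (U, V) pair into a texel address with repeat wrapping:

    #include <stdio.h>
    #include <math.h>

    /* Map (u, v) in texture space to integer texel coordinates in a
       width x height image, wrapping (GL_REPEAT-style) outside [0, 1). */
    static void uv_to_texel(float u, float v, int width, int height,
                            int *tx, int *ty) {
        float uw = u - floorf(u);              /* wrap into [0, 1) */
        float vw = v - floorf(v);
        *tx = (int)(uw * width);
        *ty = (int)(vw * height);
        if (*tx >= width)  *tx = width - 1;    /* guard the u == 1.0 edge */
        if (*ty >= height) *ty = height - 1;
    }

    int main(void) {
        int tx, ty;
        uv_to_texel(0.25f, 1.75f, 256, 256, &tx, &ty);
        printf("texel = (%d, %d)\n", tx, ty);  /* (64, 192) */
        return 0;
    }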

25 Packing Textures
Problem
- The limits on texture width/height make it inefficient to store many textures, for example for long, thin objects
Solution
- Artists pack the textures for many objects into one image
- The texture coordinates for a given object may only index into a small part of the image
- Care must be taken at the boundary between sub-images to achieve correct blending
- Mipmapping is restricted

26 Combining Textures

27 Multitexturing
- Some effects are easier to implement if multiple textures can be applied
- Future lectures: light maps, bump maps, shadows, ...
- Example: key-frame model geometry + decal skin + bump skin + gloss skin = result

28 Bump Mapping
- Texture values perturb surface normals
- A bump map stores heights, from which normals can be derived
- Geometry + bump map = bump-mapped geometry
A sketch of deriving a normal from a height map follows below.
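A minimal sketch (my own illustration of the idea, using central differences over made-up height data) of deriving a per-texel normal from stored heights:

    #include <stdio.h>
    #include <math.h>

    #define W 4
    #define H 4

    /* Tiny height field; values are arbitrary illustration data. */
    static const float height[H][W] = {
        {0.0f, 0.1f, 0.2f, 0.1f},
        {0.1f, 0.3f, 0.4f, 0.2f},
        {0.2f, 0.4f, 0.5f, 0.3f},
        {0.1f, 0.2f, 0.3f, 0.1f},
    };

    /* Derive a normal at (x, y) from the height differences of the
       neighbouring texels (clamped at the borders), then normalize. */
    static void normal_at(int x, int y, float n[3]) {
        int xl = x > 0 ? x - 1 : x, xr = x < W - 1 ? x + 1 : x;
        int yd = y > 0 ? y - 1 : y, yu = y < H - 1 ? y + 1 : y;
        float dx = height[y][xr] - height[y][xl];
        float dy = height[yu][x] - height[yd][x];
        float len = sqrtf(dx * dx + dy * dy + 1.0f);
        n[0] = -dx / len; n[1] = -dy / len; n[2] = 1.0f / len;
    }

    int main(void) {
        float n[3];
        normal_at(1, 1, n);
        printf("normal = (%.3f, %.3f, %.3f)\n", n[0], n[1], n[2]);
        return 0;
    }

The resulting normals are what a dot3 bump map (next slide) would store per texel.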

29 Dot Product (dot3) Bump Mapping
- Store normal vectors in the bump map
- Apply the bump map using the dot3 operator, which computes a per-pixel dot product with the stored normals

30 Light Maps
- Speed up lighting calculations by pre-computing lighting and storing it in maps
- Allows complex illumination models to be used when generating the map (e.g. shadows, radiosity)
- Used in complex rendering algorithms (Radiance), not just games
Issues:
- How is the mapping determined?
- How are the maps generated?
- How are they applied at run-time?

31 Example: Call of Duty

32 Why Shadows? Shadows tell us about the relative locations and motions of objects

33 Shadows in Light Maps
- Static shadows can be incorporated into light maps
- When creating the map, test for shadows by ray-casting to the light source - quite efficient
- Area light sources should cast soft shadows
- Interpolating the texture will give soft shadows, but not good ones, and you lose hard shadows
- Sampling the light will give better results: cast multiple rays to different points on the area light, and average the results
- Should still filter for best results

34 Geometry Formats
- How we deliver the geometry stream to the graphics subsystem: the geometry packing method matters, and an optimal layout can achieve 2x performance
Geometry stream
- Five data types: vertices, normals, texture coordinates, colors, and indices to them (to avoid repetition)
- Ex) 3 floats per vertex, 2 per texture coordinate, 3 per normal, 3 per color: V3f T2f N3f C3f = 132 bytes per triangle
- Ex) pre-illuminated vertices (static lighting): V3f T2f N0 C3f = 96 bytes per triangle
- Ex) bytes: V3b T2b N0 C1b = 18 bytes per triangle
- An indexed mesh usually takes between 40 and 60% of the space used by the original data set

35 A Generic Graphics Pipeline
Four stages:
- Visibility determination: clipping, culling, occlusion testing
- Resolution determination: LOD analysis
- Transform, lighting
- Rasterization

36 Hardware Graphics Pipelines
(figure: the pipeline split between CPU and GPU)

37 GPU Fundamentals: The Graphics Pipeline
A simplified graphics pipeline: the application (on the CPU, holding graphics state) sends 3D vertices to the GPU; the transform stage produces transformed, lit 2D vertices; the rasterizer produces fragments (pre-pixels); the shade stage produces final pixels (color, depth) in video memory (textures), which can also be rendered to texture. Note that pipe widths vary; many caches, FIFOs, and so on are not shown.

38 GPU Fundamentals: The Modern Graphics Pipeline
The modern pipeline replaces the fixed transform stage with a programmable vertex processor and the fixed shade stage with a programmable pixel (fragment) processor; the rest of the flow (application on the CPU, rasterizer, video memory/textures, render-to-texture) is unchanged.

39 GPU Pipeline: Transform
Vertex processors (multiple operate in parallel):
- Transform from "world space" to "image space"
- Compute per-vertex lighting

40 GPU Pipeline: Rasterizer
- Convert the geometric representation (vertices) to an image representation (fragments)
- Fragment = image fragment = pixel + associated data: color, depth, stencil, etc.
- Interpolate per-vertex quantities across pixels

41 GPU Pipeline: Shade
Fragment processors (multiple operate in parallel):
- Compute a color for each pixel
- Optionally read colors from textures (images)

42 Visibility Culling (Clipping & Culling)
- Occlusion culling
- View frustum culling (= clipping)
- Back-face culling

43 Clipping
- The process of eliminating unseen geometry by testing it against a clipping volume, such as the screen's view frustum; if the test fails, discard that geometry
- The clipping test must be faster than drawing the primitives
- With a standard horizontal aperture of 60 degrees, 60/360 = about 17% of the geometry is visible and 83% can be discarded, e.g. in FPS games and driving simulators
Clipping methods
- Triangle clipping
- Object clipping: bounding sphere, bounding box

44 Clipping: Triangle Clipping
- Clip each and every triangle prior to rasterizing it
- A triangle-level test alone will not provide good results
Hardware clipping
- Great performance with no coding
- But it does not use the bus very efficiently: whole triangles are sent through the bus to the graphics card, and since the clipping test runs in the graphics chip, lots of invisible triangles are sent to the card

45 Clipping: Object Clipping
Work at the object level:
- If a whole object is completely invisible, discard it
- If a whole object is completely or partially within the clipping volume, hardware triangle clipping will be used
Bounding volumes (e.g. spheres and boxes)
- Represent the boundary of the object
- False positives: the BV may be visible even though the object is not
- Provide constant-cost clipping: a 1000-triangle object costs the same to test as a far more detailed one

46 Clipping: Bounding Sphere
- Defined by its center (SCx, SCy, SCz) and radius SR
- Given the six clipping planes that form the view volume, each plane is Ax + By + Cz + D = 0 (A, B, C: plane normal; D: determined by the normal and a point in the plane)
- Clipping test: A * SCx + B * SCy + C * SCz + D < -SR returns true if the sphere lies entirely in the half-space opposite the plane normal, as in the sketch below
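A minimal sketch of that test in C (the plane and sphere values are made-up examples):

    #include <stdio.h>

    typedef struct { float a, b, c, d; } Plane;    /* ax + by + cz + d = 0 */
    typedef struct { float cx, cy, cz, r; } Sphere;

    /* Returns 1 if the sphere lies entirely on the side opposite the
       plane normal and can therefore be rejected against this plane. */
    static int sphere_outside_plane(const Sphere *s, const Plane *p) {
        float dist = p->a * s->cx + p->b * s->cy + p->c * s->cz + p->d;
        return dist < -s->r;
    }

    int main(void) {
        Plane left = { 1.0f, 0.0f, 0.0f, 0.0f };  /* plane x = 0, normal +X */
        Sphere s1  = { -5.0f, 0.0f, 0.0f, 2.0f }; /* fully behind the plane */
        Sphere s2  = { -1.0f, 0.0f, 0.0f, 2.0f }; /* straddles the plane    */
        printf("s1 rejected: %d\n", sphere_outside_plane(&s1, &left)); /* 1 */
        printf("s2 rejected: %d\n", sphere_outside_plane(&s2, &left)); /* 0 */
        return 0;
    }

With all six frustum planes oriented so their normals point inward, an object can be rejected as soon as this test succeeds for any one plane.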

47 Clipping: Bounding Sphere (advantages and disadvantages)
Advantages
- Inexpensive: the test costs the same as testing a point
- Rotation invariance
Disadvantages
- Tends not to fit the geometry well, giving lots of false positives, e.g. for a pencil-shaped object

48 Clipping: Bounding Boxes
- Provide a tighter fit, so fewer false positives happen
- More complex than spheres, and no rotational invariance
- Boxes can be either axis-aligned or generic
An axis-aligned bounding box (AABB)
- Parallel to the X, Y, Z axes
- Defined by two points taken over all the object's points: the minimum X, Y, Z values found in any of them, and the maximum X, Y, Z values; a small sketch of building one follows below
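A minimal sketch (with illustrative point data) of computing an AABB from an object's vertices:

    #include <stdio.h>
    #include <float.h>

    typedef struct { float x, y, z; } Point;
    typedef struct { Point min, max; } AABB;

    /* Scan all points once, keeping the smallest and largest coordinate
       seen on each axis. */
    static AABB compute_aabb(const Point *pts, int count) {
        AABB box = { {  FLT_MAX,  FLT_MAX,  FLT_MAX },
                     { -FLT_MAX, -FLT_MAX, -FLT_MAX } };
        for (int i = 0; i < count; ++i) {
            if (pts[i].x < box.min.x) box.min.x = pts[i].x;
            if (pts[i].y < box.min.y) box.min.y = pts[i].y;
            if (pts[i].z < box.min.z) box.min.z = pts[i].z;
            if (pts[i].x > box.max.x) box.max.x = pts[i].x;
            if (pts[i].y > box.max.y) box.max.y = pts[i].y;
            if (pts[i].z > box.max.z) box.max.z = pts[i].z;
        }
        return box;
    }

    int main(void) {
        Point pts[] = { {1, 2, 3}, {-4, 0, 5}, {2, -1, 0} };
        AABB box = compute_aabb(pts, 3);
        printf("min (%.1f %.1f %.1f) max (%.1f %.1f %.1f)\n",
               box.min.x, box.min.y, box.min.z,
               box.max.x, box.max.y, box.max.z);
        return 0;
    }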

49 Culling
- Eliminates geometry depending on its orientation
Well-formed object
- The vertices of each triangle are defined in CCW order
- Eliminates about one half of the processed triangles
(examples: a well-formed house mesh vs. a polygon soup)

50 Culling: Boundary Representation (B-rep)
- A volume enclosed by the object's surface
- No openings or holes that allow us to look inside

51 Culling Test (3D well-formed object)
- Back-facing triangles are culled away: the faces whose normals point away from the viewpoint (figure: eye, front faces, back faces)
- The simplest form of culling; it takes place inside the GPU
- Object culling is far less popular than object clipping: the benefit of culling is about 50%, of clipping about 80%
- If your geometry is prepacked in linear structures, you do not want to reorder it just for culling
A sketch of the back-face test follows below.
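A minimal sketch of the back-face test (the triangle and eye positions are illustrative; CCW winding is assumed, as on the previous slides):

    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        return r;
    }
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* A CCW triangle is back-facing when its normal points away from the
       eye, i.e. the normal and the vector toward the eye oppose each other. */
    static int is_back_facing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 eye) {
        Vec3 normal = cross(sub(v1, v0), sub(v2, v0));
        Vec3 to_eye = sub(eye, v0);
        return dot(normal, to_eye) <= 0.0f;
    }

    int main(void) {
        Vec3 v0 = {0,0,0}, v1 = {1,0,0}, v2 = {0,1,0};   /* normal = +Z */
        Vec3 eye_front = {0, 0,  5};
        Vec3 eye_back  = {0, 0, -5};
        printf("seen from +Z: back-facing? %d\n", is_back_facing(v0, v1, v2, eye_front)); /* 0 */
        printf("seen from -Z: back-facing? %d\n", is_back_facing(v0, v1, v2, eye_back));  /* 1 */
        return 0;
    }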

52 Culling: Object Culling
- Reject back-facing primitives at the object level
- Classify the triangles in an object according to their orientation (at load time or as a preprocess) and group them into clusters
- Partition the space of normal values, e.g. by longitude and latitude, or use a cube as an intermediate primitive

53 Occlusion Testing
After clipping and culling:
- Some redundant triangles can still survive: camera-facing primitives overlap each other
- Painting them all and letting the Z-buffer order them properly causes overdraw
- Occluder: the primitive closest to the viewer
Occlusion prevention policies
- Indoor rendering (chap. 13): Potentially Visible Set (PVS) culling, portal rendering
- Outdoor rendering (chap. 14)

54 Occlusion Testing (continued)
- Draw the geometry front to back; large occluders are painted first
- Paint an object's geometry if its BV would alter the Z-buffer
- Reject the object if its BV would not affect the result (it is fully behind other objects)

55 Resolution Determination
Example: huge mountains and thousands of trees
- Clipping (aperture of 60 degrees) leaves about 1/6 of the total triangles
- Culling leaves about 1/12 (= 1/2 x 1/6)
- Occlusion leaves about 1/15
Geometry
- Terrain: a 20 km x 20 km square terrain patch sampled every meter is roughly a 400-million-triangle map
- Trees: a realistic tree is about 25,000 triangles; one tree every 20 meters gives about 25 billion triangles per frame??

56 Level-of-detail rendering
- Use different levels of detail at different distances from the viewer
- IDEA: it does not make sense to use 10,000 triangles when the object covers 50 pixels on-screen
- Different levels of detail = different numbers of polygons

57 Level-of-detail rendering
- When an object is far away, there is not much visual difference, but rendering is a lot faster
- Criteria to select the LOD: distance to the object, area of the projected BV, rendering budget (i.e., do we have time to draw a more complex LOD?)
- Many different types of LODs; what is shown here are "discrete geometry LODs". Read more in the book about alpha LODs, geomorph LODs, and more sophisticated LOD selection criteria.
- Speedups: usually at least a factor of 2

58 Resolution Determination
Multiresolution
- The human visual system tends to focus on larger, closer objects (especially if they are moving)
Two components
- A resolution-selection heuristic
  - Distance from the object to the viewer: far from perfect (the object may be far away, but huge)
  - Area of the projected polygon: works with perceived size, not distance
- A rendering algorithm that handles the desired resolution
  - A discrete approach (memory intensive, noticeable popping): simply select the best-suited model from a catalog of models of varying quality
  - A continuous method (high CPU hit): derive a model with the right triangle count on the fly
Clipping, culling, and occlusion tests determine what we are seeing; the resolution test determines how we see it. A sketch of a discrete selection heuristic follows below.
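A minimal sketch of the discrete, distance-based selection heuristic (the thresholds, model names, and triangle counts are invented for illustration):

    #include <stdio.h>

    typedef struct { const char *name; int triangles; float max_distance; } LodLevel;

    /* Catalog of pre-built models, from highest to lowest detail. */
    static const LodLevel lods[] = {
        { "high",      25000,  50.0f },
        { "medium",     5000, 150.0f },
        { "low",         800, 400.0f },
        { "billboard",     2, 1e9f   },  /* used beyond every other threshold */
    };

    /* Pick the first level whose distance threshold covers the object. */
    static const LodLevel *select_lod(float distance_to_viewer) {
        int count = (int)(sizeof(lods) / sizeof(lods[0]));
        for (int i = 0; i < count; ++i)
            if (distance_to_viewer <= lods[i].max_distance)
                return &lods[i];
        return &lods[count - 1];
    }

    int main(void) {
        float distances[] = { 10.0f, 120.0f, 900.0f };
        for (int i = 0; i < 3; ++i) {
            const LodLevel *l = select_lod(distances[i]);
            printf("distance %6.1f -> %s (%d triangles)\n",
                   distances[i], l->name, l->triangles);
        }
        return 0;
    }

Switching on projected BV area instead of raw distance follows the same structure, only the compared quantity changes.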

59 Transform and Lighting
Transform stage
- Performs geometric transformations on the incoming data stream (rotation, scaling, translation, ...)
- Handles the projection of the transformed vertices to screen coordinates (3D coordinates to 2D coordinates)
Lighting stage
- Most current APIs and GPUs offer hardware lighting, but only per vertex
- Global illumination must be computed per pixel
- Light mapping (using multitexturing, chap. 17) stores the light information in low-resolution textures and allows per-pixel quality at a reasonable cost

60 Rasterization
- Rasterization (performed fully in hardware): the process by which our geometry is converted to pixel sequences on a monitor
Immediate mode
- Send primitives individually down the bus, one by one
- Easiest to code, but the worst performance (bus fragmentation)

    glBegin(GL_TRIANGLES);
    glColor3f(1, 1, 1);
    glVertex3f(-1, 0, 0);
    glVertex3f(1, 0, 0);
    glVertex3f(0, 1, 0);
    glEnd();

61 Rasterization: Packed Primitives
- Vertex arrays (OpenGL), vertex buffers (DirectX)
- Pack all the data in one or more arrays, e.g. three separate arrays for vertex, color, and texture information
- Use a single call to deliver the whole structure to the hardware
Interleaved arrays
- Interleave the different types of information
- Ex) V[0].x V[0].y V[0].z C[0].r C[0].g C[0].b T[0].u T[0].v (...)
A sketch of the vertex-array path follows below.
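A minimal sketch of the OpenGL 1.x vertex-array path for the same white triangle as the immediate-mode example (assumes a current GL context; error handling omitted):

    #include <GL/gl.h>

    /* Separate arrays: one for positions, one for colors. */
    static const GLfloat positions[] = {
        -1.0f, 0.0f, 0.0f,
         1.0f, 0.0f, 0.0f,
         0.0f, 1.0f, 0.0f,
    };
    static const GLfloat colors[] = {
        1.0f, 1.0f, 1.0f,
        1.0f, 1.0f, 1.0f,
        1.0f, 1.0f, 1.0f,
    };

    void draw_triangle_with_vertex_arrays(void) {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);

        glVertexPointer(3, GL_FLOAT, 0, positions);
        glColorPointer(3, GL_FLOAT, 0, colors);

        /* One call delivers the whole structure to the hardware. */
        glDrawArrays(GL_TRIANGLES, 0, 3);

        glDisableClientState(GL_COLOR_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

An interleaved layout packs position and color for each vertex back to back in a single array and passes a non-zero stride to the same pointer calls.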

