CSE 381 – Advanced Game Programming: Basic 3D Graphics


CSE 381 – Advanced Game Programming: Basic 3D Graphics
[Title slide figure: a nice simple OpenGL triangle]

Today’s Important Terms
aspect ratio, back face culling, camera, double buffering, face, field of view, frame buffer, GPU, index buffer, left-handed coordinate system, model space, projection transform, render, right-handed coordinate system, scene, screen coordinates, surface, transformed coordinates, transforms, untransformed coordinates, vertex buffer, view frustum, world space, world transform

Coordinate Systems
3D graphics is about transforming data from one coordinate system (space) to another:
- Model Space
- World Space
- Screen Space

Drawing Using Screen Coordinates
What is screen space?
- drawing on a flat, 2D space (the monitor)
- draw objects at x, y locations
- (0, 0) is the top-left-hand corner of the screen; (width, height) is the bottom-right

2D Games naturally use screen coordinates
[Figure: x and y axes over a screenshot of the Shift Flash game: http://www.flashninjaclan.com/zzz883.php]

What is a frame buffer?
- A 2D array of colors (what dimensions?)
  - represents the pixels we view on the monitor (screen space)
  - stored on the GPU
  - often represented using a single 1D array (how? see the sketch below)
- You change the frame buffer pixel data, and it ends up on the screen
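A minimal sketch (plain C++, names hypothetical) of how a 2D frame buffer is commonly stored as a single 1D array, indexed in row-major order:

#include <cstdint>
#include <vector>

// A 32-bit RGBA color packed into one integer.
using Color = std::uint32_t;

struct FrameBuffer {
    int width, height;
    std::vector<Color> pixels;  // width * height entries, row after row

    FrameBuffer(int w, int h) : width(w), height(h), pixels(w * h) {}

    // Row-major indexing: all of row 0, then all of row 1, and so on.
    void setPixel(int x, int y, Color c) { pixels[y * width + x] = c; }
    Color getPixel(int x, int y) const   { return pixels[y * width + x]; }
};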

What is double buffering?
- A GPU has 2 (or more) frame buffers. Why?
  - one for filling in
  - one for current display
- So?
  - when we’re done filling one in, we swap them (see the sketch below)
  - this prevents flickering (why would flickering happen?)
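For example, using the GLFW windowing library (one common choice; not specified by these slides), a render loop fills the hidden back buffer, then swaps it with the visible front buffer:

#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return -1;
    // GLFW windows are double-buffered by default.
    GLFWwindow* window = glfwCreateWindow(640, 480, "Double buffering", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);  // draw into the (hidden) back buffer
        // ... draw the scene here ...
        glfwSwapBuffers(window);       // swap: the back buffer becomes visible
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}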

So what else will the GPU do for us?
Has efficient implementations of OpenGL & DirectX functions:
- matrix math
- model transformations for rendering & animating
- texturing
- storing data
- shading
- etc.

But 3D Graphics have 3 Ds (duh)
- A game world is a cube with an origin (0, 0, 0)
- We place objects at locations (x, y, z) inside this world volume:
  - camera(s)
  - models (artwork)
  - terrain
  - particle systems
  - light sources
- We project objects onto the camera’s “screen”

What is world space?
- A coordinate system for placing all game objects
- Every model has coordinates (x, y, z), referring to either:
  - the center of the model’s bounding box, or
  - one of the model’s corners
- A 3D game world is in a box or sphere called its bounding volume

A game world’s bounding volume
- Keeps all objects inside the world
  - when objects move, test to make sure they don’t leave (see the sketch below)
- An additional bounding volume provides art for the out-of-reach background:
  - a sky box or sky sphere
- You’ll make both for HW 2
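A minimal sketch (plain C++, types hypothetical) of the kind of test a box-shaped world bounding volume might use to keep a moving object inside:

struct Vec3 { float x, y, z; };

// Axis-aligned bounding box for the whole game world.
struct WorldBounds {
    Vec3 min, max;

    bool contains(const Vec3& p) const {
        return p.x >= min.x && p.x <= max.x &&
               p.y >= min.y && p.y <= max.y &&
               p.z >= min.z && p.z <= max.z;
    }

    // Clamp a proposed position back inside the world.
    Vec3 clamp(Vec3 p) const {
        p.x = p.x < min.x ? min.x : (p.x > max.x ? max.x : p.x);
        p.y = p.y < min.y ? min.y : (p.y > max.y ? max.y : p.y);
        p.z = p.z < min.z ? min.z : (p.z > max.z ? max.z : p.z);
        return p;
    }
};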

How do we project 3D objects onto a 2D screen?
- Linear algebra
- We’ll let OpenGL & the GPU do this for us
- We could alternatively do this in software (what are the pros & cons?)
- More on this in a minute

Which objects should we project?
- That’s the trick
- Projecting models is computationally expensive
  - even with a fast GPU, detailed models have lots & lots of data
- We don’t want to have to try to project all objects in the game world
- So, when rendering a world, the procedure is:
  1. Calculate which objects should be drawn
  2. Render only those objects
- This will be a big issue all semester

What are models?
- Data describing 3D game assets (artwork); also called meshes
- What data might a model have?
  - vertices
  - triangles
  - adjacencies
  - materials
  - textures
  - bone hierarchy
  - animations
  - mipmaps
  - billboards
  - and more
[Example: http://www.wowmodelviewer.org/]

Models are created using modeling software
- like Maya, Blender, 3ds Max, etc.

Modeling Software uses Model Space
- Each model has its own origin (0, 0, 0)
- So how do they end up in world space?
  - Linear algebra: matrix transformations

What’s a matrix?
- A 2D array of numbers
  - used to position a vertex in 3D space
  - also used to project a vertex in 3D space onto a 2D screen
- NOTE: If you want to be a game programmer you must know matrix math

Identity Matrix:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

What will this do? (see the sketch below)
2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
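A minimal sketch (plain C++, written for this example) of multiplying a 4x4 matrix by a homogeneous vertex (x, y, z, 1). The identity matrix leaves the vertex unchanged; the second matrix above doubles its x coordinate:

#include <cstdio>

// Multiply a 4x4 matrix (stored row-major) by a homogeneous vertex [x, y, z, w].
void transform(const float m[4][4], const float v[4], float out[4]) {
    for (int row = 0; row < 4; ++row) {
        out[row] = 0.0f;
        for (int col = 0; col < 4; ++col)
            out[row] += m[row][col] * v[col];
    }
}

int main() {
    float scaleX2[4][4] = {{2,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1}};
    float vertex[4] = {3.0f, 4.0f, 5.0f, 1.0f};
    float result[4];
    transform(scaleX2, vertex, result);
    std::printf("(%g, %g, %g)\n", result[0], result[1], result[2]);  // prints (6, 4, 5)
    return 0;
}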

Different Transformation Matrices

Scale:
scaleX 0      0      0
0      scaleY 0      0
0      0      scaleZ 0
0      0      0      1

Translate:
1     0     0     0
0     1     0     0
0     0     1     0
moveX moveY moveZ 1

Also rotation matrices, depending on the axis of rotation

Matrices can be combined
- Multiply matrices for combined effects
- The result is a transformation matrix for a given object
- This is used to position all vertices of that object in world space
  - How? By performing matrix math on the x, y, z values of vertices (see the sketch below)
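A sketch of combining transforms using the GLM math library (a common companion to OpenGL, though not named by these slides; all numbers are made up for illustration). The combined matrix scales, then rotates, then translates every vertex it is applied to:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeWorldTransform() {
    glm::mat4 I(1.0f);  // identity
    glm::mat4 T = glm::translate(I, glm::vec3(10.0f, 0.0f, -5.0f));  // move
    glm::mat4 R = glm::rotate(I, glm::radians(45.0f),
                              glm::vec3(0.0f, 1.0f, 0.0f));          // spin about y
    glm::mat4 S = glm::scale(I, glm::vec3(2.0f));                    // double the size
    // Applied right-to-left to a column vector: scale, then rotate, then translate.
    return T * R * S;
}

// Position one model-space vertex in world space.
glm::vec3 toWorld(const glm::mat4& world, const glm::vec3& v) {
    return glm::vec3(world * glm::vec4(v, 1.0f));
}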

Model Vertices Conversions
Note: models are created in tools using their own coordinate system (model space). During rendering of a model, for each vertex:
1. Perform the necessary transformations to convert from model to world coordinates (world transform)
   - translate (move to the x, y, z location in the world), rotate, scale
2. Convert to view coordinates (view transform)
3. Convert to screen coordinates (projection transform)
You’ll learn all about this in CSE 328 (a sketch of the chain follows)
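A sketch of that whole chain using the same GLM library as above (every specific number here is made up for illustration):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec4 modelToClip(const glm::vec3& modelSpaceVertex) {
    // World transform: place the model in the world.
    glm::mat4 world = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -10.0f));

    // View transform: camera at (0, 2, 5), looking at the origin, y is "up".
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 5.0f),
                                 glm::vec3(0.0f),
                                 glm::vec3(0.0f, 1.0f, 0.0f));

    // Projection transform: 45-degree y field of view, 4:3 aspect ratio,
    // near plane at 0.1, far plane at 100.
    glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);

    // Model -> world -> view -> projection.
    return proj * view * world * glm::vec4(modelSpaceVertex, 1.0f);
}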

Scene
- An assembly of objects presented on the screen at once

Rendering
- The process of producing a pixel-by-pixel image of the scene for the screen
- What is this thing? (the figure on the next slide)

View Frustum
- The volume of the game world to be projected onto the screen
[Figure: http://www.resourcecode.de/view.php?id=2059]

What is the camera?
- A point for viewing the objects in world space
  - the camera is placed in world space
  - the camera looks in a specific direction in world space
  - based on where the camera is, where it’s looking, and some other conditions (e.g. z-planes, culling), we can draw the appropriate graphics on the screen
- How do we define the camera?
  - 2 matrices (linear algebra), a.k.a. transforms:
    - view transform
    - projection transform

View Transform
- Defined by a 4 x 4 matrix
- Tells the computer three things:
  - the camera position in world space: (xc, yc, zc)
  - the direction in which the camera is pointed, defined as a 3D vector: (xvc, yvc, zvc)
  - the orientation of the camera: the direction that is “up” for the camera, also defined as a 3D vector

Projection Transform
- Defined by a 4 x 4 matrix
- Tells the computer how the scene should be projected onto the monitor
  - does so by defining the viewing frustum
  - only those elements inside the viewing frustum are rendered to the screen
- Defines 4 things:
  - front clipping plane (z near plane)
  - back clipping plane (z far plane)
  - aspect ratio
  - y field of view
[Figure: http://www.resourcecode.de/view.php?id=2059]

Field of View?
[Figure: the camera position, z near plane, and z far plane along the z axis, with the fov angle between them]
- If you have the y fov, the x fov can be calculated using the aspect ratio (see the sketch below)
- fov is defined using radians
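A small sketch of that calculation (plain C++); the relationship tan(fovX / 2) = aspect * tan(fovY / 2) follows from the frustum geometry:

#include <cmath>

// Given the vertical field of view (radians) and the aspect ratio
// (width / height), compute the horizontal field of view.
double horizontalFov(double fovY, double aspectRatio) {
    return 2.0 * std::atan(std::tan(fovY / 2.0) * aspectRatio);
}

// Example: a 45-degree fovY at 16:9 gives roughly 72.7 degrees of fovX.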

Model’s Vertices
- What is a triangle? 3 connected points (vertices)
  - if we combine 3 vertices, we can easily draw lines to connect them
[Pipeline diagram: vertices -> Transformation Matrix -> View Transform Matrix -> Projection Transform Matrix -> a bunch of vertices on screen]
- Note: this data then feeds the texture mapping process

Models
- face: the area between the lines (sides) of a polygon, typically a triangle or quadrilateral
- surface: a face, or many faces together
[Figure: a wireframe model of a table]

Textures
- Textures are 2D images
- We can wrap the wireframe in a texture

How is texturing done?
- We have triangle data (3 vertices)
- For each triangle, we specify a texture mapping in U,V coordinates
  - (U, V) refers to a pixel region of an image, from (0, 0) to (1, 1)
- After vertices are transformed:
  - texels are mapped onto triangles as specified by a modeler
  - we’re talking about mapping individual texture pixels onto the screen (see the sketch below)
  - a higher-resolution texture takes longer
- Note: GPUs are optimized for all of this
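A minimal sketch (plain C++, names hypothetical) of what a single texture lookup means: a (u, v) pair in [0, 1] x [0, 1] selects one texel from the image:

#include <cstdint>
#include <vector>

struct Texture {
    int width, height;
    std::vector<std::uint32_t> texels;  // row-major RGBA image data

    // Nearest-neighbor sample: map (u, v) in [0, 1] to a texel.
    std::uint32_t sample(float u, float v) const {
        int x = static_cast<int>(u * (width  - 1));
        int y = static_cast<int>(v * (height - 1));
        return texels[y * width + x];
    }
};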

Table & Camera Matrices
[Pipeline diagram: the table data plus the table & camera matrices feed matrix projections and then texturing; likewise, the whole 3D game world plus the game matrices feed the same matrix projection and texturing stages]

For a cube
- How many triangles? 12 (2 per face)
- How many vertices? 8
- Common question: how should we store this data?

Making other shapes using Triangles
- How many vertices, edges, & triangles to make a cube?
  - 8 vertices, 18 edges, 12 triangles
- How about an X-wing fighter?
  - 3099 vertices, 9190 edges, 6076 triangles
[Sample models: http://gts.sourceforge.net/samples.html]

One approach
- Make a data structure called triangle holding three 3D (x, y, z) points
- For each model, store a data structure of triangles
- For a cube, 12 triangles would fill the data structure
  - How many vertices is that? 36
  - Not efficient (see the sketch below)
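A sketch of this naive approach (plain C++, names made up for illustration); every triangle carries its own copies of its three vertices, so a cube’s 8 distinct corners get stored 36 times:

#include <vector>

struct Vertex { float x, y, z; };

struct Triangle {
    Vertex v[3];  // each triangle owns private copies of its vertices
};

struct Model {
    std::vector<Triangle> triangles;
};

// A cube: 12 triangles * 3 vertices each = 36 stored vertices,
// even though a cube only has 8 distinct corners.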

Graphics Data
- While a scene is being rendered, where should the vertex data be stored?
  - storing it in system memory (RAM) requires slow copying to the video card
  - storing it directly in the graphics card is best
  - this is where vertex buffers come in

What is a Vertex Buffer?
- A memory store in the GPU for vertices
- In Direct3D, for example, one can use the VertexBuffer class
- Can store all types of vertices (e.g. transformed, with normals, etc.)
- In OpenGL, we’ll use the glVertex3d method (see the sketch below)
- Note: other platforms also have vertex buffers
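The slides point at immediate-mode glVertex3d; modern OpenGL instead uploads vertex data to GPU memory in a vertex buffer object. A minimal sketch (assuming an OpenGL context and a loader such as GLEW are already set up):

#include <GL/glew.h>

// Upload an array of x, y, z positions into GPU memory.
GLuint createVertexBuffer(const float* vertices, int vertexCount) {
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);               // ask the GPU for a buffer handle
    glBindBuffer(GL_ARRAY_BUFFER, vbo);  // make it the active vertex buffer
    glBufferData(GL_ARRAY_BUFFER,
                 vertexCount * 3 * sizeof(float),
                 vertices,
                 GL_STATIC_DRAW);        // data is set once, drawn many times
    return vbo;
}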

What is an Index Buffer?
- Remember constructing 36 vertices to create a cube using triangles?
  - inefficient use of memory: a real cube only has 8 vertices (one for each cube corner)
  - in a large-scale application, redundant data is costly
- Index buffers are a mechanism for sharing vertex data among primitives
  - a buffer that stores indices into vertex data
- What does that mean?
  - an array (or vector, etc.) that describes shapes (e.g. triangles) using indices of the vertices stored in the vertex buffer
  - groups of 3 indices describe one triangle

A Cube Index Buffer Solution

private short[] indices = {
    0, 1, 2,   // Front Face
    1, 3, 2,   // Front Face
    4, 5, 6,   // Back Face
    6, 5, 7,   // Back Face
    0, 5, 4,   // Top Face
    0, 2, 5,   // Top Face
    1, 6, 7,   // Bottom Face
    1, 7, 3,   // Bottom Face
    0, 6, 1,   // Left Face
    4, 6, 0,   // Left Face
    2, 3, 7,   // Right Face
    5, 2, 7    // Right Face
};
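To use such an index buffer with OpenGL (a sketch, assuming the vertex buffer from the earlier slide is already bound and its attributes set up), the indices go into an element buffer and are drawn with glDrawElements:

#include <GL/glew.h>

// Upload the 36 cube indices and draw the 12 triangles they describe.
void drawIndexedCube(const unsigned short* indices) {
    GLuint ebo = 0;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 36 * sizeof(unsigned short), indices, GL_STATIC_DRAW);

    // Each consecutive group of 3 indices selects one triangle's vertices
    // from the bound vertex buffer.
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, nullptr);
}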

Advantages & Disadvantages of Index Buffers
- What are the advantages?
  - more efficient memory management
- What are the disadvantages?
  - vertices have to share color data
  - vertices have to share normal data, which may result in lighting errors

What is a Depth Buffer?
- A means for storing depth information for rendering
- Used during rasterization to determine how pixels occlude (block) each other; the algorithm:
  1. convert each surface that’s drawn to a set of pixels
  2. for each pixel, compute the distance from the view plane
  3. before each pixel is drawn, compare it with the depth value already stored at that on-screen pixel
  4. if the new pixel is closer to (in front of) what’s in the depth buffer, the new pixel’s color & depth values replace those that are currently there (see the sketch below)
- In Direct3D & OpenGL, this can simply be turned on:
  - depth buffer management is done for you
  - you may choose the format:
    - z-buffers
    - w-buffers, a type of z-buffer that provides more precision, but not as widely supported in hardware
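A minimal software sketch of that per-pixel comparison (plain C++, names hypothetical; a real GPU does this in hardware, and in OpenGL you would just call glEnable(GL_DEPTH_TEST)):

#include <cstdint>
#include <vector>

struct DepthBufferedTarget {
    int width, height;
    std::vector<std::uint32_t> color;  // the frame buffer
    std::vector<float> depth;          // per-pixel distance from the view plane

    DepthBufferedTarget(int w, int h)
        : width(w), height(h), color(w * h),
          depth(w * h, 1.0f) {}        // start every pixel at the far plane

    // Plot a pixel only if it is closer than what is already there.
    void plot(int x, int y, float z, std::uint32_t c) {
        int i = y * width + x;
        if (z < depth[i]) {            // new pixel is in front: keep it
            depth[i] = z;
            color[i] = c;
        }
    }
};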