Underlying Technologies Part Two: Software Mark Green School of Creative Media

Introduction  Software not as easy as hardware: wide range of software techniques, hard to classify like hardware; several components that need to work together, hard to know where to start; wide range of hardware configurations, not as simple as 2D software

Hardware Configurations  In 2D have a standard hardware configuration: input: keyboard and mouse; output: single 2D display  with 3D can have many configurations: HMD, projection, single screen

Hardware Configurations  Want to produce an application once, not once for every possible hardware configuration  software needs to be more adaptable, change based on hardware configuration  complicates the development of support software

Range of Software Techniques  Want our software to be very efficient: reduce latency, high update rates  some applications can be quite large, need to efficiently organize data  all of this complicates VR software, too many things to consider, hard to know where to start

Components  What are the main components of a VR application? 3D Objects: geometry and appearance, but may also want sound and force; Behavior: the objects need to be able to do things, move and react; Interaction: users want to interact with the application, manipulate the objects

3D Objects  Need object geometry, the object’s shape, basis for everything else, called the model  polygons used for geometry, sometimes restricted to triangles  different from animation, where free-form surfaces based on sophisticated math are common  need speed, so restricted to polygons
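
To make the polygon-only representation concrete, here is a minimal sketch of a triangle-mesh model; the names (Vec3, Triangle, Mesh) are illustrative assumptions, not part of any particular system.

```cpp
// Minimal sketch of a triangle-mesh model: shared vertex positions plus
// triangles that index into them. All type names are illustrative only.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Triangle { std::size_t v0, v1, v2; };   // indices into Mesh::vertices

struct Mesh {
    std::vector<Vec3>     vertices;
    std::vector<Triangle> triangles;
};

int main() {
    Mesh m;
    m.vertices  = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };   // one triangle in the z = 0 plane
    m.triangles = { {0, 1, 2} };
    return 0;
}
```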

3D Objects  Where does geometry come from?  Really depends on the application  Could use a text editor to enter all the polygon vertices, some people actually do this!  Could use a program, for example OpenGL, works for small models

3D Objects  Use a 3D modeling or animation program  for non-programmers this is the easiest way, but it takes time to develop modeling skills  also many different programs and file formats  want a modeler that does a good job of polygons, not all modelers are good at this

3D Objects  Another source of objects is scientific and engineering computations  can be easy to convert to polygons, already have position data  other types of data can also be converted into geometry, but this can be more difficult

3D Objects  Also need to consider appearance: colour of the object, how it reflects light, transparency, texture  can be done with modeler, or later in the VR program

Behavior  How should objects behave? What happens when the user hits an object? What happens when an object hits another object? Can objects move around the environment?  Each object could have a range of behaviors, react differently to different events in the environment

Behavior  Behavior is harder than modeling  animation programs can be useful, but not always  animation is quite different: animator is in complete control, knows what’s happening all of the time; in VR the user is in control, can interrupt or mess up any animation

Behavior  Short animations (less than 5 seconds) can be useful, basic motion units  other types of behaviors must be programmed or scripted  more flexible, can respond to the events that occur in the environment  easier to combine, objects can do two things at the same time

Interaction  Users want to interact with the environment  pick up objects and move them around  very different from 2D interaction  much more freedom, more direct interaction  still exploring the design space, not stable like 2D interaction  still working on standard techniques

Application Structure  look at application structure  provides a framework for discussing various software technologies  divide an application into various components, and then look at the components individually

Application Structure  [Diagram of the application structure: Input Devices, Input Processing, Application Processing, Model, Model Traversal, Output Devices]

Application Structure  Model: representation of objects in the environment, geometry and behavior  Traversal: convert the model into graphical, sound, force, etc. output  Input Processing: determine the user’s intentions, act on other parts of the application  Application Processing: non-VR parts of the application

Interaction Loop  Logically the program consists of a loop that samples the user, performs computations and traverses the model  [Diagram: interaction loop cycling through Input Processing, Computation, Model Traversal]
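
A minimal sketch of this loop, assuming placeholder sampleInput, compute and traverseModel functions; it only shows the ordering of the three stages, not a real VR system.

```cpp
// Sketch of the interaction loop: sample input, compute, traverse the model.
// The three stage functions are placeholders for illustration only.
#include <iostream>

void sampleInput()   { std::cout << "input processing\n"; }
void compute()       { std::cout << "computation\n"; }
void traverseModel() { std::cout << "model traversal\n"; }

int main() {
    for (int frame = 0; frame < 3; ++frame) {  // a real loop runs until quit
        sampleInput();    // determine what the user is doing
        compute();        // application processing / behaviors
        traverseModel();  // generate graphics, sound, force output
    }
    return 0;
}
```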

Model  Contains the information required to display the environment: geometry, sound, force; behavior  the graphical part is the most developed, so concentrate on it  try to position sound and force within this model

Geometry  This is what we know the best  need to have a graphical representation of objects in the environment: accurate shape representation, ease of modeling, efficient display, integrates with behavior

Scene Graph  Main technique for structuring the model  based on hierarchical structure, divide the object into parts or components  simplifies the modeling task, work on one part at a time  easy to modify the individual parts  add behaviors, sound, force, etc to the model

Scene Graph  [Diagram: example scene graph, a car node with Body and Wheel child nodes]

Scene Graph  Individual units are called nodes: shapes: polygons, meshes, cubes, etc; transformations: position the nodes in space; material: colour and texture of objects; grouping: collecting nodes together as a single object; sounds; behavior
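
One possible way to code such nodes is sketched below, assuming made-up Node, Transform and Shape classes; a traversal simply walks the hierarchy, as in the car example above.

```cpp
// Sketch of a scene graph: a base Node with children, specialised into
// transformation and shape nodes. Class names are illustrative assumptions.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Node {
    std::vector<std::unique_ptr<Node>> children;
    virtual ~Node() = default;
    virtual void visit(int depth) const {              // traverse the subtree
        for (const auto& c : children) c->visit(depth + 1);
    }
};

struct Transform : Node {            // positions its subtree in space
    float tx = 0, ty = 0, tz = 0;    // translation only, for brevity
    void visit(int depth) const override {
        std::cout << std::string(depth * 2, ' ')
                  << "transform (" << tx << "," << ty << "," << tz << ")\n";
        Node::visit(depth);
    }
};

struct Shape : Node {                // a drawable leaf (mesh, cube, ...)
    std::string name;
    explicit Shape(std::string n) : name(std::move(n)) {}
    void visit(int depth) const override {
        std::cout << std::string(depth * 2, ' ') << "shape: " << name << "\n";
        Node::visit(depth);
    }
};

int main() {
    // Build the "car" example: a body and a wheel, each under a transform.
    Node car;
    auto body = std::make_unique<Transform>();
    body->children.push_back(std::make_unique<Shape>("body"));
    auto wheel = std::make_unique<Transform>();
    wheel->tx = 1.0f;
    wheel->children.push_back(std::make_unique<Shape>("wheel"));
    car.children.push_back(std::move(body));
    car.children.push_back(std::move(wheel));
    car.visit(0);                    // traverse the whole graph
    return 0;
}
```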

Scene Graph  Many different scene graph architectures, will look at one in more detail later  differences: scene graph for the whole VE vs. one per object; types of nodes in the scene graph; ease of modification, static vs. dynamic

Behavior  Harder to deal with than geometry  simple motions aren’t too bad, but much harder to get sophisticated behavior  the general solution now is to write code, okay for programmers  would like to have a higher level approach for non-programmers

Behavior  Problem: want objects to respond to events in the environment  can have some motions that are simple animations, but most of the motions need some knowledge of the environment  example: an object moving through the environment must be aware of other objects so it doesn’t walk through them

Behavior  Some simple motions produced by animating transformation nodes  animation variables used to control transformation parameters, example: rotation or translation  could import animations, use some form of keyframing package to produce the motion
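
A hedged sketch of an animation variable driving a transformation parameter: a rotation angle keyframed from 0 to 90 degrees over two seconds; the animate function is invented for illustration, not a particular package's API.

```cpp
// Sketch: an animation variable driving a transformation parameter.
// A rotation angle is linearly keyframed from 0 to 90 degrees over 2 seconds.
#include <iostream>

// Linear keyframe interpolation between two values over [0, duration].
float animate(float start, float end, float duration, float t) {
    if (t <= 0)        return start;
    if (t >= duration) return end;
    return start + (end - start) * (t / duration);
}

int main() {
    const float duration = 2.0f;                 // seconds
    for (float t = 0.0f; t <= duration; t += 0.5f) {
        float angle = animate(0.0f, 90.0f, duration, t);
        std::cout << "t=" << t << "s  rotation=" << angle << " deg\n";
        // In a scene graph this value would be written into a
        // transformation node each frame.
    }
    return 0;
}
```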

Behavior  Simple motions could be triggered by events in the environment  example: collision detection, if an object is moving through the environment and a collision is detected it changes direction  hard to come up with good trigger conditions, a few obvious ones, but not much after that
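
A toy sketch of such an event-triggered behavior, under the assumption of a one-dimensional environment with two walls: the object reverses direction whenever the collision test fires.

```cpp
// Sketch: an event-triggered behavior. The object moves along x and
// reverses direction whenever it collides with the walls of a box.
#include <iostream>

int main() {
    float x = 0.0f, velocity = 0.3f;
    const float wallMin = -1.0f, wallMax = 1.0f;     // simple environment bounds

    for (int frame = 0; frame < 12; ++frame) {
        x += velocity;
        bool collision = (x < wallMin || x > wallMax);  // trigger condition
        if (collision) {
            velocity = -velocity;                       // behavior: turn around
            x += velocity;                              // step back inside
        }
        std::cout << "frame " << frame << "  x=" << x << "\n";
    }
    return 0;
}
```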

Behavior  Another approach is to use a general motion model  best example of this is physics, try to simulate real physics in the environment  this gives a number of natural motions, and objects respond to the environment  works well in some environments, but has some problems

Behavior  One problem is the complexity of the mathematics, often need to simplify  computations can be a problem, particularly for complex objects  hard to control, need to know the forces and torques that produce the desired motions, can be very hard to determine

Behavior  Some attempts to produce general motion controllers  maybe the eventual solution, but nothing much now  use of scripting languages, can add some program control to the scene graph, but not full programming

Model Traversal  The process of going through the model and generating the information to be displayed  this is part software and part hardware, look through the entire process  hardware parts have implications for how we build models and the graphics techniques used

A Simple Model  A simplified model of the display process, explains hardware performance  [Diagram: display pipeline from Model through Traverse, Geometry, and Pixel stages to the Screen]

Traverse  Traverse the model, determine objects to be drawn, send to graphics hardware  usually combination software/hardware, depends on CPU and bus speed  early systems were hardware, didn’t scale well  many software techniques for culling models

Geometry  Geometrical computations on polygons: transformations and lighting  floating-point intensive  divide polygons into fragments, screen-aligned trapezoids  time proportional to number of polygons and vertices
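
A simplified sketch of the per-vertex work in this stage, assuming a row-major 4x4 matrix and a single directional light; real pipelines add projection, clipping and more elaborate lighting.

```cpp
// Sketch of the per-vertex geometry stage: transform a vertex by a 4x4
// matrix, then compute a simple diffuse (Lambertian) lighting term.
#include <algorithm>
#include <array>
#include <iostream>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major

Vec4 transform(const Mat4& m, const Vec4& v) {
    Vec4 r{0, 0, 0, 0};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

int main() {
    // Translate by (0, 0, -5): move the vertex in front of the camera.
    Mat4 modelView = {{{1, 0, 0, 0},
                       {0, 1, 0, 0},
                       {0, 0, 1, -5},
                       {0, 0, 0, 1}}};
    Vec4 vertex = {1, 0, 0, 1};
    Vec4 out = transform(modelView, vertex);

    // Diffuse lighting: intensity = max(0, N . L) for a unit normal and light direction.
    float nx = 0, ny = 0, nz = 1;        // surface normal (facing the viewer)
    float lx = 0, ly = 0, lz = 1;        // direction towards the light
    float diffuse = std::max(0.0f, nx * lx + ny * ly + nz * lz);

    std::cout << "transformed: (" << out[0] << ", " << out[1] << ", "
              << out[2] << ")  diffuse=" << diffuse << "\n";
    return 0;
}
```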

Pixel  Fill fragments, colour interpolation, texture mapping, transparency, hidden surface  all the per pixel computations, time depends on number of pixels, also colour depth on low end displays  scalable operations, can add more processors for more speed

Design Considerations  Any of the stages could block, depending on the display mix  lots of small polygons cause the traversal and geometry stages to block  large polygons cause the pixel stage to block  can use buffers to reduce blocking  a careful balancing process

Design Considerations  CPU/Image Generator trade-off  cheap boards just do pixel stage, use CPU for everything else: scales with CPU speed; large polygons and texture mapping  moving geometry onto board increases performance, trend in low cost displays

PC Hardware Evolution  Start with CPU doing most of the work  Graphics board: image memory, fill and hidden surface, texture mapping  graphics speed determined by CPU, limited assistance from graphics card

Graphics Card Memory  Memory used for three things: image store, hidden surface (z buffer), texture maps  texture can be stored in main memory with AGP, but this isn’t the most efficient  better to have texture memory on board

Image Memory  Amount depends on image size  double buffer, two copies of image memory: front buffer: image displayed on screen; back buffer: where the next image is constructed  can construct next image while the current image is displayed, better image quality and faster display
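
A toy sketch of the front/back buffer arrangement, using a row of characters in place of real image memory: draw into the back buffer, then swap so it becomes the displayed front buffer.

```cpp
// Sketch of double buffering: draw into the back buffer while the front
// buffer is being displayed, then swap the two.
#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>

int main() {
    const int width = 8;
    std::vector<char> front(width, '.');   // image currently "on screen"
    std::vector<char> back(width, '.');    // image being constructed

    for (int frame = 0; frame < 3; ++frame) {
        // Construct the next image in the back buffer.
        std::fill(back.begin(), back.end(), '.');
        back[frame % width] = '#';

        // Swap: the finished back buffer becomes the displayed front buffer.
        std::swap(front, back);

        // "Display" the front buffer.
        std::cout << "frame " << frame << ": ";
        for (char c : front) std::cout << c;
        std::cout << "\n";
    }
    return 0;
}
```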

Z Buffer  Used for hidden surface removal  z buffer: one value for each pixel, distance from eye to object drawn at that pixel  when drawing a pixel, compare depth of pixel to z buffer  if closer draw pixel and update z buffer  otherwise, ignore the pixel
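
The per-pixel depth test just described, sketched on a tiny made-up framebuffer; the FrameBuffer type and colour ids are assumptions for illustration only.

```cpp
// Sketch of z-buffer hidden surface removal: a pixel is written only if it
// is closer to the eye than whatever was drawn there before.
#include <iostream>
#include <limits>
#include <vector>

struct FrameBuffer {
    int width, height;
    std::vector<float> depth;   // z buffer: one depth value per pixel
    std::vector<int>   color;   // colour id per pixel (stand-in for RGB)

    FrameBuffer(int w, int h)
        : width(w), height(h),
          depth(w * h, std::numeric_limits<float>::infinity()),
          color(w * h, 0) {}

    // Draw one pixel: keep it only if it is nearer than the stored depth.
    void plot(int x, int y, float z, int c) {
        int i = y * width + x;
        if (z < depth[i]) {      // closer than what is already there?
            depth[i] = z;
            color[i] = c;
        }                        // otherwise the pixel is ignored
    }
};

int main() {
    FrameBuffer fb(4, 4);
    fb.plot(1, 1, 5.0f, 1);      // far object drawn first
    fb.plot(1, 1, 2.0f, 2);      // nearer object overwrites it
    fb.plot(1, 1, 9.0f, 3);      // farther object is rejected
    std::cout << "pixel (1,1) colour id = " << fb.color[1 * 4 + 1] << "\n"; // prints 2
    return 0;
}
```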

Graphics Acceleration  Next step: move pixel operations to graphics card  fill and z buffer 3D triangles  add smooth shading and texture mapping  CPU does traversal and geometry processing

Graphics Acceleration  Next step: move geometry processing to graphics card  CPU traverses model, send graphics primitives to display card  all transformations and lighting done on graphics card  less dependence on CPU

Current Trends  Pixel processing (GeForce 2): a program that processes each pixel, controls lighting and other effects  support for multiple textures, etc  Vertex processing (GeForce 3): a program processes each vertex, can change geometry at display time  real-time deformations and IKA

Current Trends  Move to programming all aspects of the graphics card (3DLabs VP series)  Also making programming more sophisticated, closer to CPU  Floating point textures and image memory (ATI and 3DLabs VP series)  Higher dynamic range -> better image quality, better for programming

Input Processing  Users need to interact with the environment  they have a set of input devices that provide position and orientation information  need to translate this into their intentions  Interaction Technique (IT): basic unit of interaction, converts user input into something the application understands

Input Processing  Each IT addresses a particular interaction task, something that the user wants to do  look at interaction tasks first, then talk a little bit about ITs for them  interaction tasks divide into two groups: application independent: required by many different applications; application dependent

Interaction Tasks  Mainly look at application independent interaction tasks  the main ones are: navigation, selection, manipulation, combination

Navigation  Need to get from one part of the environment to another  two types: local and global  with local navigation the destination is within view, move on a continuous path from current location to destination
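
A minimal sketch of local navigation under these assumptions: each frame the viewpoint takes a fixed-size step along the straight path towards a visible destination.

```cpp
// Sketch of local navigation: move the viewpoint along a continuous path
// towards a destination that is within view.
#include <cmath>
#include <iostream>

struct Vec3 { float x, y, z; };

int main() {
    Vec3 position    = {0, 0, 0};
    Vec3 destination = {3, 0, 4};           // 5 units away
    const float speed = 1.0f;               // units per frame

    for (int frame = 0; frame < 8; ++frame) {
        float dx = destination.x - position.x;
        float dy = destination.y - position.y;
        float dz = destination.z - position.z;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (dist <= speed) { position = destination; }    // arrived
        else {                                            // step towards it
            position.x += speed * dx / dist;
            position.y += speed * dy / dist;
            position.z += speed * dz / dist;
        }
        std::cout << "frame " << frame << ": (" << position.x << ", "
                  << position.y << ", " << position.z << ")\n";
    }
    return 0;
}
```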

Navigation  In global navigation the destination is remote, can’t move directly to it  need some way of locating the destination, and then some way of jumping to it  variation: browsing / exploration, don’t have a destination, exploring the environment or looking for particular objects

Selection  The selection task involves selecting something  there are several variations, depending upon what’s being selected: list or command selection, object selection, location selection

Selection  List selection: a pre-defined list of things to select from  example: the commands on a menu  need to present the list, and the user selects one item from the list  object selection: number of objects not pre-defined, created by the user, changes in size as the program runs

Selection  For object selection can’t just present a list of objects to be selected from  location selection: selecting a point in space, may be used as the location of an object, or as part of an object’s shape  can’t see a point in empty space, so this is harder than the previous two

Manipulation  Standard set of object manipulations, change position, size and orientation  grab the object and move it  could also have deformations that change the object’s shape  hard to get general techniques beyond the few standard ones

Combination  Take two or more objects and put them together to form a new object  need to match up the shapes exactly, so they join in the right way  difficult to do unaided, usually need some form of constraint to simplify the process

Application Dependent Tasks  Usually involve the application data  ways of controlling the view of the data  ways of manipulating the data  example: a CAD or animation program will have a different set of manipulations than a network visualization program

Interaction Techniques  Not a well established set of techniques, yet  depend on input devices and style  example: a fixed range device (tracker) sometimes works best with pointing at objects, while a puck or joystick might work better with grabbing  need to try different combinations
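
As one example of a pointing-based technique, the sketch below casts a ray from a tracked hand position and picks the nearest object whose bounding sphere the ray hits; all names and numbers are invented for illustration.

```cpp
// Sketch of a pointing-based selection technique: cast a ray from the
// tracked device and pick the nearest object whose bounding sphere it hits.
#include <cmath>
#include <iostream>
#include <vector>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

struct Object { Vec3 center; float radius; const char* name; };

// Returns true (and the ray parameter t) if the ray origin + t*dir hits the sphere.
bool raySphere(Vec3 origin, Vec3 dir, const Object& o, float& t) {
    Vec3 oc = sub(o.center, origin);
    float proj = dot(oc, dir);                      // closest approach along the ray
    float d2 = dot(oc, oc) - proj * proj;           // squared distance from center to ray
    if (proj < 0 || d2 > o.radius * o.radius) return false;
    t = proj - std::sqrt(o.radius * o.radius - d2);
    return true;
}

int main() {
    Vec3 handPos = {0, 0, 0};
    Vec3 handDir = {0, 0, -1};                      // unit pointing direction
    std::vector<Object> objects = {
        {{0, 0, -5}, 1.0f, "near sphere"},
        {{0, 0, -9}, 1.0f, "far sphere"},
        {{4, 0, -5}, 1.0f, "off to the side"},
    };

    const Object* picked = nullptr;
    float best = 1e30f, t = 0;
    for (const auto& o : objects)
        if (raySphere(handPos, handDir, o, t) && t < best) { best = t; picked = &o; }

    std::cout << "picked: " << (picked ? picked->name : "nothing") << "\n";
    return 0;
}
```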

Interaction Techniques  Some problems encountered: distance: objects too far away to grab; feedback: how do you highlight the object that has been selected? the object to be selected may be hidden by other objects; object density may make selection and manipulation difficult

Application Processing  Not much to say here  some applications have a considerable amount of processing, computation based on user input  don’t want this to affect application latency  need to control resources devoted to computation, use other processors

Making it run right  Now that we have an idea of what’s involved, how do we put it all together?  want to have low system latency, get images on the screen as fast as possible  don’t want to wait for anything  divide the application into components that execute separately

Decoupled Simulation Model  Separate process for application computations, this is easy  separate process for expensive input devices, trackers that need lots of computation or have latency  a separate process for input processing and display  maybe a separate process for the model

Application Structure  [Diagram of the application structure: Input Devices, Input Processing, Application Processing, Model, Model Traversal, Output Devices]

Decoupled Simulation Model  Each process can run at its own rate  display process runs as fast as possible, doesn’t wait for other processes  uses most recent value from input devices and application computation  reduces system latency
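
A small sketch of this idea using two threads in place of separate processes (an assumption to keep the example self-contained): a slower simulation thread publishes its latest value, and the display loop reads it without ever waiting.

```cpp
// Sketch of the decoupled simulation model with two threads: the display
// loop never blocks on the slower simulation; it just reads the latest value.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    std::atomic<int> latest{0};          // most recent simulation result
    std::atomic<bool> running{true};

    // Simulation / input process: updates at its own (slower) rate.
    std::thread sim([&] {
        int value = 0;
        while (running) {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            latest = ++value;            // publish the newest result
        }
    });

    // Display process: runs as fast as it likes, never waits for sim.
    for (int frame = 0; frame < 10; ++frame) {
        std::cout << "frame " << frame << " uses sim value " << latest << "\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    running = false;
    sim.join();
    return 0;
}
```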