
1 CS 450: Computer Graphics INTRODUCTION TO COMPUTER GRAPHICS
Spring 2015 Dr. Michael J. Reale

2 DEFINITION AND APPLICATIONS

3 WHAT IS COMPUTER GRAPHICS?
Computer graphics – generating and/or displaying imagery using computers
- Also touches on related problems, such as 3D model processing
The focus of this course is the underlying mechanics and algorithms of computer graphics
- As opposed to, say, a computer art course, which focuses on using tools to make graphical content

4 SO, WHY DO WE NEED IT? Practically EVERY field/discipline/application needs to use computer graphics in some way: science, art, engineering, business, industry, medicine, government, entertainment, advertising, education, training …and more!

5 APPLICATIONS: GRAPHS, CHARTS, AND DATA VISUALIZATION
Graphs and charts
- One of the earliest applications → plotting data using a printer
Data visualization
- Sciences: show a visual representation of the data → see patterns in it
  - E.g., the flow of fluid (LOx) around a tube can be shown using streamtubes
- Challenges: large data sets; choosing the best way to display the data
- Medicine: 2D images (CT, MRI scans) → 3D volume rendering
  - E.g., volume rendering and image display from the Visible Woman dataset
Images from the VTK website

6 APPLICATIONS: CAD/CADD/CAM
Design and test
- CAD = Computer-Aided Design
- CADD = Computer-Aided Drafting and Design
- Usually rendered in wireframe
Manufacturing
- CAM = Computer-Aided Manufacturing
Used for:
- Designing and simulating vehicles, aircraft, mechanical devices, electronic circuits/devices, …
- Architecture and rendered building designs
ASIDE: Often need special graphics cards to make absolutely SURE the rendered image is correct (e.g., NVIDIA Quadro vs. a garden-variety GeForce)

7 APPLICATIONS: COMPUTER ART / MOVIES
First film to use a completely computer-generated scene: Tron (1982) or Star Trek II: The Wrath of Khan (1982) …(depends on who you talk to)
First completely computer-generated full-length film: Toy Story (1995)

8 APPLICATIONS: VIRTUAL-REALITY ENVIRONMENTS / TRAINING SIMULATIONS
Used for:
- Training/education
- Military applications (e.g., flight simulators)
- Entertainment
Image: "Virtuix Omni Skyrim (cropped)" by Czar, own work, licensed under CC BY-SA 3.0 via Wikimedia Commons

9 APPLICATIONS: GAMES! Unreal Engine 4; CryEngine 3 (Crytek)

10 APPLICATIONS: GAMES!
Unity Engine (Wasteland 2); Source Engine (Portal 2)

11 APPLICATIONS: GAMES! Wolfenstein: Then and Now

12 CHARACTERISTICS OF COMPUTER GRAPHICS
Depending on your application, your focus and goals will differ:
- Real-time vs. non-real-time
- Virtual entities/environments vs. visualization/representation
- Developing tools/algorithms vs. content creation

13 CG CHARACTERISTICS: REAL-TIME vs. NON-REAL-TIME
Real-time rendering
- 15 frames per second is the BARE MINIMUM – you will still see skips, but it looks more or less animated
- 24 fps → video looks smooth (no skips/jumps)
- 24–60 fps is a more common requirement
- Examples: first-person simulations, games, etc.
Non-real-time rendering
- Could take hours for one frame
- Examples: CG in movies, complex physics simulations, data visualization, etc.
Often a trade-off between speed and quality (image quality, accuracy, etc.); see the frame-budget sketch below
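
To make these frame rates concrete, here is a minimal sketch (plain C++; the only inputs are the rates quoted above) that converts a target frame rate into the time budget each frame gets:

```cpp
#include <cstdio>

int main() {
    // Per-frame time budget = 1000 ms / target frame rate.
    const double rates[] = {15.0, 24.0, 60.0};
    for (double fps : rates)
        printf("%5.1f fps -> %6.2f ms per frame\n", fps, 1000.0 / fps);
    return 0;
}
// 15 fps leaves ~66.7 ms of work per frame; 60 fps leaves only ~16.7 ms.
```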

14 CG CHARACTERISTICS: VIRTUAL ENTITIES / ENVIRONMENTS vs. VISUALIZATION / REPRESENTATION
Virtual entities/environments
- Rendering a person, place, or thing
- Often realistic rendering, but it doesn't have to be
- Examples: simulations (of any kind), games, virtual avatars, movies
Visualization/representation
- Rendering data in some meaningful way
- Examples: graphs/charts, data visualization, (to a lesser extent) graphical user interfaces
Both
- Rendering some object/environment, but also highlighting important information
- Examples: CAD/CAM
Images: face using the Crytek engine (http://store.steampowered.com/app/220980/); Remy from Pixar's Ratatouille; Tiny and Big: Grandpa's Leftovers

15 CG CHARACTERISTICS: TOOLS/ALGORITHMS VS. CONTENT CREATION
Developing tools/algorithms
- It's…well…developing tools and algorithms for graphical purposes
- Uses computer-graphics application programming interfaces (CG APIs)
  - Common CG APIs: GL, OpenGL, DirectX, VRML, Java 2D, Java 3D, etc.
  - Interface between the programming language and the hardware
  - Also called "general programming packages" in the Hearn-Baker book
- Example: how do I write code that will render fur realistically?
Content creation
- Using pre-made software to create graphical objects
- Called "special-purpose software packages" in the Hearn-Baker book
- Example: how do I create a realistic-looking dog in a 3D modeling program?

16 THIS COURSE In this course, we'll mostly focus on developing tools/algorithms to render virtual entities/environments in real time

17 VIDEO DISPLAY DEVICES

18 INTRODUCTION
A lot of why computer graphics works the way it does is rooted in the hardware
- Graphics cards, display devices, etc.
In this section, we'll talk about video display devices (and some of the terminology associated with them):
- CRT
- Plasma
- LCD/LED
- 3D

19 CRT: CATHODE-RAY TUBE
Primary video display mechanism for a long time; now mostly replaced by LCD monitors/TVs
- Cathode = an electrode (conductor) where electrons leave the device
- Cathode rays = beam of electrons
Basic idea:
- Electron gun (cathode + control grid) shoots electrons down a vacuum tube
- Magnetic or electric coils focus and deflect the beam of electrons so it hits each location on the screen
- The screen is coated with phosphor → glows when hit by electrons
- The phosphor fades in a short time → keep directing the electron beam over the same screen points
  - Called refresh
Image: CRT diagram by Theresa Knott (en:Image:Cathode ray Tube.PNG), GFDL or CC-BY-SA-3.0, via Wikimedia Commons

20 DISPLAY DEFINITIONS
Refresh rate = frequency at which the picture is redrawn
- Term still used for devices other than CRTs
- Usually expressed in hertz (e.g., 60 Hz)
Persistence = how long the phosphors emit light after being hit by electrons
- Low persistence → need higher refresh rates
- LCD monitors have an analogous concept → response time

21 MORE DISPLAY DEFINITIONS
Pixel = "picture element"; a single point on the screen
Resolution = maximum number of points that can be displayed without overlap
- For CRTs → a more analog device, so the definition is a little more involved
- Now, it usually just means (number of pixels in width) × (number of pixels in height)
Aspect ratio = resolution width / height (although sometimes vice versa)
- E.g., 1920 × 1080 → 1920/1080 = 16:9

22 TYPES OF CRT MONITORS
There were two basic types of CRT monitors:
- Vector displays
- Raster-scan displays

23 CRT: VECTOR DISPLAYS
Also called random-scan, stroke-writing, or calligraphic displays
- The electron beam actually draws points, lines, and curves directly
- The list of things to draw is stored in a display list (also called a refresh display file, vector file, or display program)
  - Long list → just draw as quickly as you can
  - Short list → delay the refresh cycle so you don't burn out the screen!
- Advantages: draws non-aliased lines
- Disadvantages: not very flexible; cannot draw shaded polygons
Mostly abandoned in favor of raster-scan displays

24 CRT: RASTER-SCAN DISPLAYS
The most common type of CRT
- Refresh buffer (or frame buffer) = contains the picture of the screen you want to draw
- The electron gun sweeps across the screen, one row at a time, from top to bottom
  - Each row = scan line
- Advantages: flexible
- Disadvantages: lines, edges, etc. can look jagged (i.e., aliased)
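
To make the frame-buffer idea concrete, here is a toy sketch of a frame buffer and a top-to-bottom, scan-line-by-scan-line sweep over it (an illustration only, not how the hardware is actually built):

```cpp
#include <cstdint>
#include <vector>

struct FrameBuffer {
    int width, height;
    std::vector<uint32_t> pixels;   // one packed RGBA value per pixel
    FrameBuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
    void setPixel(int x, int y, uint32_t c) { pixels[y * width + x] = c; }
};

// "Raster scan": visit every pixel one scan line (row) at a time, top to bottom.
void refresh(const FrameBuffer& fb) {
    for (int y = 0; y < fb.height; ++y)          // each row = one scan line
        for (int x = 0; x < fb.width; ++x) {
            uint32_t c = fb.pixels[y * fb.width + x];
            (void)c;  // here the display would drive the beam/panel with c
        }
}
```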

25 CRT: INTERLACING
Interlacing = first draw the even-numbered scan lines, then the odd-numbered ones
- Effectively doubles the perceived refresh rate
- Also used to save bandwidth in TV transmission

26 CRT: COLOR
Two ways to do color with CRTs:
Beam penetration
- Red and green layers of phosphor
- Slow electrons → only the red layer; fast electrons → only the green layer; medium-speed electrons → both
- Inexpensive, but limited in the number of colors
Shadow mask
- Uses the red-green-blue (RGB) color model
- Three electron guns and three phosphor dots per pixel (one each for red, green, and blue)
- The shadow mask makes sure the 3 guns hit the 3 dots

27 PLASMA DISPLAYS
Fill the region between two glass plates with a mixture of gases (usually including neon)
- Vertical conducting ribbons on one plate; horizontal conducting ribbons on the other
- Firing voltages at an intersecting pair of horizontal and vertical conductors → the gas at the intersection breaks down into a glowing plasma of electrons and ions
- For color → use three subpixels (red, green, and blue)
Advantages: very thin display; pixels are very bright, so the image looks good from any angle
Disadvantages: expensive

28 LCD DISPLAYS
LCD = Liquid Crystal Display
- Liquid crystals maintain a certain structure, but can move around like a liquid
- The structure is twisted, but applying an electrical current straightens it out
Basic idea:
- Two polarizing light filters (one vertical, one horizontal)
- Light passes through the first filter → light polarized in the vertical direction
- "ON" state → no current → crystal twisted → light is reoriented so it passes through the horizontal filter
- "OFF" state → current → crystal straightens out → light does NOT pass through

29 LCD DISPLAYS: WHERE DOES THE LIGHT COME FROM?
Mirror in the back of the display
- Cheap LCD displays (e.g., calculators)
- Just reflects ambient light in the room (or prevents it from reflecting)
Fluorescent light in the center of the display
LED lights
- Can be edge-lit or full-array (i.e., LEDs covering the entire back of the screen)
- Usually what people mean by an "LED monitor" = an LCD display backlit by LEDs

30 LCD DISPLAYS: PASSIVE VS. ACTIVE
Passive-matrix LCDs → use a grid that sends charge to pixels through transparent conductive material
- Simple
- Slow response time
- Imprecise voltage control
  - When activating one pixel, nearby pixels are also partially turned on → makes the image fuzzy
Active-matrix LCDs → use a transistor at each pixel location (thin-film transistor technology)
- Transistors control the voltage at each pixel location → prevent leakage to other pixels
- Controlling the voltage → 256 shades of gray

31 LCD DISPLAYS: COLOR Color → have 3 subpixels per pixel (one red, one green, and one blue)

32 3D DISPLAYS: INTRODUCTION
In real life, we see depth because we have two eyes (binocular vision)
- One eye sees one angle, the other sees another
- The brain fuses the two images together to figure out how far away things are
By "3D" displays, we mean displays that give the illusion of depth by purposely giving each eye a different view

33 3D DISPLAYS: OLDER APPROACHES
Anaglyph 3D
- Red and blue 3D glasses
- Show two different images (one more reddish, the other more blueish)
- Color quality (not surprisingly) is not that great
View-Master toys
- First introduced in 1939
- Photographs taken from two slightly different angles

34 3D DISPLAYS: ACTIVE 3D
Special "shutter" glasses sync up with the monitor/TV
- The TV shows only one image at a time (but REALLY fast)
- Show the left image on the TV → glasses close the right eye
- Show the right image on the TV → glasses close the left eye
Advantages:
- If your monitor/TV has a high enough refresh rate, you're good to go
- You see the full screen resolution
Disadvantages:
- If out of sync, you see flickering
- The image can look darker overall
- The glasses can be cumbersome

35 3D DISPLAYS: PASSIVE 3D
Polarized light glasses
- The TV shows two images at once, using alternating lines of resolution
- Similar to red-blue glasses, but the color looks right
Advantages:
- The glasses are lightweight
- The image is brighter than active 3D
Disadvantages:
- Need a special TV
- You only see HALF the vertical resolution!
Images – left: passive 3D through glasses; middle: passive 3D without glasses; right: active 3D

36 3D DISPLAYS: VR
Virtual reality displays (like the Oculus Rift) basically have two separate screens (one for each eye)
- Advantages: no drop in resolution or brightness
- Disadvantages: heavy headset

37 THE GRAPHICS RENDERING PIPELINE

38 DEFINITIONS
Graphics Rendering Pipeline
- Generates (or renders) a 2D image given a 3D scene
- AKA "the pipeline"
A 3D scene contains:
- A virtual camera – has a position and orientation (which way it's pointing and which way is up)
- 3D objects – stuff to render; each has a position, orientation, and scale
- Light sources – where the light is coming from, what color it is, what kind of light it is, etc.
- Textures/materials – determine how the surfaces of the objects should look
Images: a 3D scene ("we've got this…") and the rendered 2D image ("…and we want this")
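
As a rough sketch of what "a 3D scene" might look like as data, here is one possible layout in C++ (all type and field names here are illustrative, not from any particular API):

```cpp
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

struct Camera {
    Vec3 position;
    Vec3 forward;  // which way it's pointing
    Vec3 up;       // which way is "up"
};

struct Light {
    Vec3 position;
    Vec3 color;
};

struct Object3D {
    Vec3 position, rotation, scale;  // orientation as Euler angles, for simplicity
    std::string meshName;            // the geometry to render
    std::string materialName;        // how the surface should look
};

// The pipeline's input: camera + objects + lights (+ textures/materials).
struct Scene {
    Camera camera;
    std::vector<Object3D> objects;
    std::vector<Light> lights;
};
```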

39 PIPELINE STAGES
The Graphics Rendering Pipeline can be divided into 3 broad stages:
Application
- Determined by (you guessed it) the application you're running
- Runs on the CPU
- Example operations: collision detection, animation, physics, etc.
Geometry
- Computes what will be drawn, how it will be drawn, and where it will be drawn
- Deals with transforms, projections, etc. (we'll talk about these later)
- Typically runs on the GPU (graphics processing unit, i.e., your graphics card)
Rasterizer
- Renders the final image and performs per-pixel computations (if desired)
- Runs completely on the GPU
Each of these stages can itself be a pipeline

40 PIPELINE SPEEDS
Like any pipeline, it only runs as fast as its slowest stage
- The slowest stage (the bottleneck) → determines the rendering speed
Rendering speed is usually expressed in:
- Frames per second (fps)
- Hertz (1/seconds)
Rendering speed → called throughput in other pipeline contexts

41 APPLICATION STAGE
The programmer has complete control over this stage
- Can be parallelized on the CPU (if you have the cores for it)
Whatever else it does, it must send the geometry to be rendered to the Geometry Stage
- Geometry = rendering primitives, like points, lines, triangles, polygons, etc.

42 DEFINING 3D OBJECTS
A 3D object (or 3D model) is defined in terms of geometry, or geometric primitives
- Vertex = point
Most basic primitives:
- Points (1 vertex)
- Lines (2 vertices)
- Triangles (3 vertices)
MOST of the time, an object/model is defined as a triangle mesh

43 DEFINING 3D OBJECTS
Why triangles?
- Simple
- Fit in a single plane
- Can define any polygon in terms of triangles
At minimum, a triangle mesh includes (see the sketch below):
- Vertices
  - Position in (x,y,z) → Cartesian coordinates
- Face definitions
  - For each face, a list of vertex indices (i.e., which vertices go with which triangle)
  - Vertices can be reused
- (Optional) Normals, texture coordinates, color/material information, etc.
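
A minimal sketch of such a mesh in C++ (illustrative names; a real mesh format would add normals, texture coordinates, and so on):

```cpp
#include <vector>

struct Vertex { float x, y, z; };   // position in Cartesian coordinates

struct TriangleMesh {
    std::vector<Vertex> vertices;   // shared pool of vertices
    std::vector<int>    indices;    // 3 indices per triangle, reusing vertices
};

// A square as two triangles: 4 vertices, 6 indices (vertices 0 and 2 reused).
TriangleMesh makeSquare() {
    TriangleMesh m;
    m.vertices = { {-1,-1,0}, {1,-1,0}, {1,1,0}, {-1,1,0} };
    m.indices  = { 0, 1, 2,   0, 2, 3 };
    return m;
}
```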

44 DEFINING 3D OBJECTS
In addition to points, lines, and triangles, other primitives exist, such as:
- Polygons
  - Again, though, you can define any polygon in terms of triangles
- Circles, ellipses, and other curves
  - Often approximated with line segments
- Splines
  - Also often approximated with lines (or triangles, if the splines define a 3D surface)
- Spheres
  - Center point + radius
  - Often approximated with triangles

45 DEFINING 3D OBJECTS
Sometimes certain kinds of 3D shapes are referred to as "primitives" (but ultimately these are often approximated with a triangle mesh):
- Cube
- Sphere
- Torus
- Cylinder
- Cone
- Teapot

46 ASIDE: WAIT, TEAPOT?
The Utah teapot (or Newell teapot)
- In 1975, Martin Newell at the University of Utah needed a 3D model, so he measured a teapot and modeled it by hand
- It has become a very standard model for testing different graphical effects (and a bit of an inside joke)
(Image: original drawing of the teapot)

47 GEOMETRY STAGE
Performs the majority of per-polygon and per-vertex operations
- In days of yore (before graphics accelerators), this stage ran on the CPU
Has 5 sub-stages:
- Model and View Transform
- Vertex Shading
- Projection
- Clipping
- Screen Mapping

48 MODEL COORDINATES
Usually the vertices of a polygon mesh are relative to the model's center point (origin)
- Example: vertices of a 2D square: (-1,1), (1,1), (1,-1), (-1,-1)
- Called modeling or local coordinates
Before this model gets to the screen, it will be transformed into several different spaces or coordinate systems
- When we start, the vertices are in model space (that is, relative to the model itself)

49 GEOMETRY STAGE: MODEL TRANSFORM
Let's say I have a teapot in model coordinates
- I can create an instance (copy) of that model in the 3D world
- Each instance has its own model transform
The model transform takes model coordinates → world coordinates
- The coordinates are now in world space
- The transform may include translation, rotation, scaling, etc.
(Image: different model transforms applied to the same teapot; see the matrix sketch below)
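
A minimal sketch of a model transform as a 4×4 matrix (assuming a row-major matrix acting on column vectors; rotation is omitted to keep it short):

```cpp
#include <cstdio>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };   // row-major, column-vector convention

// | s 0 0 tx |
// | 0 s 0 ty |   maps model coordinates to world coordinates
// | 0 0 s tz |
// | 0 0 0  1 |
Mat4 translateScale(float tx, float ty, float tz, float s) {
    Mat4 M = {{{s,0,0,tx}, {0,s,0,ty}, {0,0,s,tz}, {0,0,0,1}}};
    return M;
}

Vec4 mul(const Mat4& M, Vec4 v) {
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
             M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w };
}

int main() {
    // Two instances of the same model vertex, each with its own model transform.
    Vec4 v = {1, 1, 0, 1};                        // model coordinates
    Vec4 a = mul(translateScale(5, 0, 0, 1), v);  // instance A: translated
    Vec4 b = mul(translateScale(0, 0, 0, 2), v);  // instance B: scaled up
    printf("A in world space: (%g, %g, %g)\n", a.x, a.y, a.z);
    printf("B in world space: (%g, %g, %g)\n", b.x, b.y, b.z);
}
```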

50 RIGHT-HAND RULE
Before we go further with coordinate spaces, we need to talk about which way the x, y, and z axes point relative to each other
- OpenGL (and other systems) use the right-hand rule
- Point your right hand toward +X with the palm facing up toward +Y → your thumb points toward +Z

51 GEOMETRY STAGE: VIEW TRANSFORM
Only things visible to the virtual camera will be rendered
- The camera has a position and orientation
The view transform transforms both the camera and all objects so that:
- The camera sits at the world origin (0,0,0)
- The camera points in the direction of the negative z axis
- The camera's up direction is the positive y axis
- The camera is set up such that the x axis points to the right
NOTE: this is the right-hand-rule setup (OpenGL); DirectX uses the left-hand rule
The coordinates are now in camera space (or eye space)
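
One standard way such a view transform can be built is the "look-at" construction; here is a hand-rolled sketch (our own helper names, not OpenGL calls) that maps a world-space point into the canonical camera space described above:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y,
                                      a.z*b.x - a.x*b.z,
                                      a.x*b.y - a.y*b.x}; }
Vec3  normalize(Vec3 v)     { float l = std::sqrt(dot(v, v));
                              return {v.x/l, v.y/l, v.z/l}; }

// Build a right-handed camera basis and express a world-space point in it.
// After this, the camera sits at the origin, looks down -z, and has +y up,
// exactly the canonical setup described above.
Vec3 worldToCamera(Vec3 p, Vec3 eye, Vec3 target, Vec3 up) {
    Vec3 f = normalize(sub(target, eye));  // forward (will map to -z)
    Vec3 r = normalize(cross(f, up));      // right   (maps to +x)
    Vec3 u = cross(r, f);                  // true up (maps to +y)
    Vec3 d = sub(p, eye);                  // translate camera to origin
    return { dot(d, r), dot(d, u), -dot(d, f) };  // rotate into camera axes
}
```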

52 OUR GEOMETRIC NARRATIVE THUS FAR…
Model coordinates → MODEL TRANSFORM → world coordinates → VIEW TRANSFORM → camera coordinates
Or, put another way:
Model space → MODEL TRANSFORM → world space → VIEW TRANSFORM → camera (eye) space

53 GEOMETRY STAGE: VERTEX SHADING
Lights are defined in the 3D scene
3D objects usually have one or more materials attached to them
- So a metal can model might have a metallic-looking material, for instance
Shading = determining the effect of a light (or lights) on a material
- Vertex shading = shading calculations using vertex information
Vertex shading is programmable!
- I.e., you have a great deal of control over what happens during this stage
- We'll talk about this in more detail later; for now, know that the geometry stage handles this part

54 GEOMETRY STAGE: PROJECTION
View volume – the region inside the camera's view that contains the objects we must render
- For perspective projections, called the view frustum
Ultimately, we need to map 3D coordinates to 2D coordinates (i.e., points on the screen) → points must be projected from three dimensions down to two
Projection – transforms the view volume into a unit cube
- Converts camera coordinates → normalized device coordinates
- Simplifies clipping later
- We still keep the z coordinates for now
- In OpenGL, the unit cube runs from (-1,-1,-1) to (1,1,1)
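
As a sketch, here is the orthographic case: a linear map from an axis-aligned camera-space box (looking down −z, with near/far distances n and f) onto the OpenGL-style unit cube:

```cpp
struct Vec3 { float x, y, z; };

// Map a camera-space point inside the box [l,r] x [b,t] x [-f,-n] (camera
// looking down -z) linearly onto the unit cube [-1,1]^3, i.e., normalized
// device coordinates.  z is kept (remapped, not dropped) for later use.
Vec3 orthoProject(Vec3 p, float l, float r, float b, float t, float n, float f) {
    return {  2 * (p.x - l) / (r - l) - 1,
              2 * (p.y - b) / (t - b) - 1,
             -2 * (p.z + n) / (f - n) - 1 };  // z = -n maps to -1, z = -f to +1
}
```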

55 GEOMETRY STAGE: PROJECTION
The two most commonly used projection methods:
Orthographic (or parallel)
- View volume = rectangular box
- Parallel lines remain parallel
Perspective
- View volume = truncated pyramid with a rectangular base → called the view frustum
- Things look smaller when farther away
(Images: orthographic and perspective view volumes)

56 GEOMETRY STAGE: CLIPPING
What we draw is determined by what's in the view volume:
- Completely inside → draw
- Completely outside → don't draw
- Partially inside → clip against the view volume and only draw the part inside
When clipping, we have to add new vertices to the primitive
- Example: a line is clipped against the view volume, so a new vertex is added where the line intersects the view volume (see the sketch below)
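
Here is a sketch of that "add a new vertex" step for one plane of the unit cube (the x = 1 face); clipping against the full volume just repeats this against each of the six planes:

```cpp
struct Vec3 { float x, y, z; };

// Clip segment a->b against the half-space x <= 1.  If the segment crosses
// the plane x = 1, a new vertex is created at the intersection point by
// linear interpolation; the same parameter t would also interpolate colors,
// texture coordinates, etc.
bool clipAgainstXMax(Vec3& a, Vec3& b) {
    bool aIn = a.x <= 1, bIn = b.x <= 1;
    if (aIn && bIn)   return true;       // completely inside: keep as-is
    if (!aIn && !bIn) return false;      // completely outside: discard
    float t = (1 - a.x) / (b.x - a.x);   // where the segment meets x = 1
    Vec3 hit = { 1,
                 a.y + t * (b.y - a.y),
                 a.z + t * (b.z - a.z) };
    (aIn ? b : a) = hit;                 // replace the outside endpoint
    return true;
}
```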

57 GEOMETRY STAGE: SCREEN MAPPING
We now have our (clipped) primitives in normalized device coordinates (which are still 3D)
Assume we have a window with a minimum corner (x1, y1) and maximum corner (x2, y2)
Screen mapping:
- x and y of the normalized device coordinates → x' and y' screen coordinates (also called device coordinates)
- z coordinates are unchanged
(x', y', z) = window coordinates = screen coordinates + z
The window coordinates are passed to the rasterizer stage
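
The screen-mapping step itself is just a linear remap; a minimal sketch for the window described above:

```cpp
struct Vec3 { float x, y, z; };

// Map normalized device coordinates ([-1,1] in x and y) to window
// coordinates.  z passes through unchanged for the rasterizer's depth test.
Vec3 screenMap(Vec3 ndc, float x1, float y1, float x2, float y2) {
    return { x1 + (ndc.x + 1) * 0.5f * (x2 - x1),
             y1 + (ndc.y + 1) * 0.5f * (y2 - y1),
             ndc.z };
}
```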

58 GEOMETRY STAGE: SCREEN MAPPING
Where is the starting point (origin) in screen coordinates?
- OpenGL → lower-left corner (Cartesian)
- DirectX → sometimes the upper-left corner
Pixel = picture element
- Basically, each discrete location on the screen
Where is the center of a pixel? Given pixel (0,0):
- OpenGL → (0.5, 0.5)
- DirectX → (0.0, 0.0)
(Image: pixel-center convention in OpenGL)

59 RASTERIZER STAGE
We now have transformed and projected vertices, with associated shading data, from the geometry stage
Primary goal → rasterization (or scan conversion)
- Computing and setting colors for the pixels covered by the objects
- Converts 2D vertices + z value + shading info → pixels on the screen
Has four basic stages:
- Triangle setup
- Triangle traversal
- Pixel shading
- Merging
Runs completely on the GPU

60 RASTERIZER STAGE: TRIANGLE SETUP AND TRIANGLE TRAVERSAL
Triangle Setup
- Performs the calculations needed for the next stage
Triangle Traversal (or Scan Conversion)
- Finds which samples/pixels are inside each triangle
- Generates a fragment for each part of a pixel covered by a triangle
- Fragment properties → interpolated from the triangle's vertices (see the sketch below)
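
One common way to implement triangle traversal is with edge functions over the triangle's bounding box; here is a simplified sketch (real rasterizers add fill rules for shared edges and do this massively in parallel):

```cpp
#include <algorithm>
#include <vector>

struct Vec2 { float x, y; };
struct Fragment { int px, py; float w0, w1, w2; };  // barycentric weights

// Signed area of triangle (a,b,c); the sign says which side of edge ab c is on.
float edge(Vec2 a, Vec2 b, Vec2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Emit one fragment per pixel whose center lies inside triangle (v0,v1,v2).
// The weights w0..w2 are what the next stage uses to interpolate vertex data.
std::vector<Fragment> traverse(Vec2 v0, Vec2 v1, Vec2 v2, int w, int h) {
    std::vector<Fragment> frags;
    float area = edge(v0, v1, v2);   // assumes counterclockwise winding (> 0)
    int minX = std::max(0,     (int)std::min({v0.x, v1.x, v2.x}));
    int maxX = std::min(w - 1, (int)std::max({v0.x, v1.x, v2.x}));
    int minY = std::max(0,     (int)std::min({v0.y, v1.y, v2.y}));
    int maxY = std::min(h - 1, (int)std::max({v0.y, v1.y, v2.y}));
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p = {x + 0.5f, y + 0.5f};   // sample at the pixel center
            float w0 = edge(v1, v2, p), w1 = edge(v2, v0, p), w2 = edge(v0, v1, p);
            if (w0 >= 0 && w1 >= 0 && w2 >= 0)   // inside all three edges
                frags.push_back({x, y, w0/area, w1/area, w2/area});
        }
    return frags;
}
```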

61 RASTERIZER STAGE: PIXEL SHADING
Performs per-pixel shading computations using the interpolated shading data from the previous stage
- Example: texture coordinates interpolated across triangles → get the correct texture value for each pixel
Output: one or more colors for each pixel
Programmable!

62 FRAGMENT DEFINITION
Fragment = the data necessary to shade/color a pixel due to a primitive covering (or partially covering) that pixel
- Data can include color, depth, texture coordinates, normal, etc.
- Values are interpolated from the primitive's vertices
- Can have multiple fragments per pixel
The final pixel color is either one of the fragments (i.e., the z-buffer chooses the nearest one) or a combination of fragments (e.g., alpha blending)

63 RASTERIZER STAGE: MERGING
Color buffer → stores the color for each pixel
Merging stage → combines each fragment's color with the color currently stored in the color buffer
Needs to check whether the fragment is visible → e.g., with a Z-buffer:
- Check the z value of the incoming fragment
- If it is closer to the camera than the previous value in the Z-buffer → override the color in the color buffer and update the z value
Advantages:
- O(n), where n = number of primitives
- Simple
- Can draw OPAQUE objects in any order (see the sketch below)
Disadvantages:
- Transparent objects are more complicated
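
A minimal sketch of the Z-buffered merge for opaque fragments (assuming smaller z means nearer to the camera):

```cpp
#include <cstdint>
#include <limits>
#include <vector>

struct Buffers {
    int width;
    std::vector<uint32_t> color;   // color buffer
    std::vector<float>    depth;   // Z-buffer, initialized to "infinitely far"
    Buffers(int w, int h)
        : width(w), color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::max()) {}
};

// Merge one opaque fragment: keep it only if it is nearer than what is
// already stored.  Because of this test, opaque objects may arrive in ANY
// order and the image still comes out right.
void merge(Buffers& b, int x, int y, float z, uint32_t c) {
    int i = y * b.width + x;
    if (z < b.depth[i]) {   // nearer than the previous fragment?
        b.depth[i] = z;     // update the z value
        b.color[i] = c;     // override the color
    }
}
```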

64 RASTERIZER STAGE: MERGING
Frame buffer
- Means all the buffers on the system (though sometimes it refers to just the color buffer + Z-buffer)
To prevent the user from seeing the buffer while it's being updated → use double buffering
- Two buffers: one visible and one invisible
- Draw into the invisible buffer → swap the buffers
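
A sketch of double buffering (the names are illustrative; in a real API the flip happens through a buffer-swap/present call):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

struct DoubleBuffer {
    std::vector<uint32_t> front;   // visible: what the monitor shows
    std::vector<uint32_t> back;    // invisible: what we draw into
    DoubleBuffer(int w, int h) : front(w * h, 0), back(w * h, 0) {}

    void present() { std::swap(front, back); }   // flip once drawing is done
};

// Typical frame loop: draw into the hidden buffer, then swap, so the user
// never sees a half-drawn image.
void frame(DoubleBuffer& db) {
    // ... render the scene into db.back ...
    db.present();
}
```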

65 FIXED-FUNCTION vs. PROGRAMMABLE
Fixed-function pipeline stages
- Elements are set up in hardware in a specific way
- Usually you can only turn things on or off, or change options (defined by the hardware and the graphics API)
Programmable pipeline stages
- Vertex shading and pixel (fragment) shading
- You have direct control over what is done at the given stage

66 OTHER PIPELINES
The outline we described is NOT the only way to do a graphics pipeline
- E.g., ray-tracing renderers → do pretty much EVERYTHING in software, and then just set the pixel colors with the graphics card

67 COMPUTER GRAPHICS API

68 DEFINITIONS
Computer-graphics application programming interfaces (CG APIs)
- Common CG APIs: GL, OpenGL, DirectX, VRML, Java 2D, Java 3D, etc.
- Interface between the programming language and the hardware
Let's briefly go over some CG APIs…

69 GKS AND PHIGS
GKS (Graphical Kernel System) – 1984
- International effort to develop a standard for computer graphics software
- Adopted as the first graphics software standard by ISO (the International Organization for Standardization) and ANSI (the American National Standards Institute)
- Originally 2D → a 3D extension was developed later
PHIGS (Programmer's Hierarchical Interactive Graphics Standard)
- Extension of GKS
- Developed in the 1980s → a standard by 1989
- A 3D standard
- Increased capabilities for hierarchical modeling, color specification, surface rendering, and picture manipulation
- PHIGS+ → added more advanced 3D surface rendering

70 GL AND OPENGL
GL (Graphics Library)
- Developed by Silicon Graphics, Inc. (SGI) for their graphics workstations
- Became the de facto graphics standard
- Fast, real-time rendering
- Proprietary system
OpenGL
- Developed as a hardware-independent version of GL in the 1990s
- A specification
- Was maintained/updated by the OpenGL Architecture Review Board; now maintained by the non-profit Khronos Group
  - Both are consortiums of representatives from many graphics companies and organizations
- Designed for efficient 3D rendering, but also handles 2D (just set z = 0)
- Stable; new features are added as extensions
(Image: SGI O2 workstation)

71 DIRECTX AND DIRECT3D
DirectX
- Developed by Microsoft for Windows 95 in 1996
- Originally called the "Game SDK"
- Actually a combination of different APIs: Direct3D, DirectSound, DirectInput, etc.
- Less stable → adopts new features fairly quickly (for better or for worse)
- Only works on Windows and Xbox

72 WHY ARE WE USING OPENGL?
In this course, we will be using OpenGL because:
- It works with practically every platform/system (Windows, Unix/Linux, Mac, etc.)
- It's arguably easier to learn/understand
It is NOT because DirectX/Direct3D is a bad system

73 REFERENCES
Many of the images in these slides come from the book "Real-Time Rendering" by Akenine-Möller, Haines, and Hoffman (3rd edition), as well as the online supplemental material found on the book's website. Some also come from the book "Computer Graphics with OpenGL" by Hearn, Baker, and Carithers (4th edition).

