Graphics System CSC 2141 Introduction to Computer Graphics.

A Graphics System
- Separate hardware support relieves the CPU of graphics-related tasks

Raster Graphics
- Image produced as a two-dimensional array (the raster) of picture elements (pixels) in the frame buffer
- Pixel: a small area/location in the image

Frame Buffer
- Two-dimensional array of pixel values
- Stores color and other information (e.g. depth in 3D) per pixel
- Not a device, but a chunk of RAM
- In early systems, the frame buffer was part of system memory
- Today, virtually all graphics systems have GPUs (graphics processing units), which may include the frame buffer
- Usually implemented with special types of memory chips that enable fast redisplay of the contents
- A hardware device called the video controller reads the frame buffer and produces the image on the display

Frame Buffer
- Depth: number of bits used to represent each pixel
- How many colors can be represented?
  - 1-bit: 2 colors (black and white)
  - 8-bit: 256 colors (gray scale)
  - 24-bit (True Color, or RGB color): 8 bits each for the red, green, and blue components
    - R, G, and B combine in varying proportions
    - All at full intensity (255, 255, 255): white
    - All off (0, 0, 0): black
- Alternative: color map
  - 8 bits per pixel, used as an index into a 256-entry array; each array entry stores an RGB color
  - Called indexed color, or pseudo-color

Frame Buffer
- Resolution: number of pixels in the frame buffer
- Determines the level of detail in the image

Rasterization / Scan Conversion
- The processor takes specifications of graphical primitives (lines, circles, polygons)
- Converts them to pixel assignments in the frame buffer

Output Devices
- Standard graphics display: raster display
- Two main types:
  1. CRT (cathode-ray tube) displays
  2. Flat-screen technologies

CRT
- Consists of a screen with a phosphor coating
- Each pixel is illuminated for a short time (a few milliseconds) when struck by an electron beam
- The beam intensity can be varied to achieve gray values
- The display has to be refreshed continuously to avoid flicker
- Refresh rate:
  - Older systems: 60 Hz (60 times per second)
  - Modern displays: 85 Hz

CRT (interlaced, noninterlaced)
- Two ways of displaying pixels:
- Noninterlaced: row by row (scan line by scan line), at the refresh rate
- Interlaced: odd rows and even rows refreshed alternately
  - A lower refresh rate suffices: e.g., refreshing 30 times a second looks like 60 times a second

Color CRT
- Three different colored phosphors (red, green, blue) arranged in small groups
- Three electron beams, one per color

Flat-screen Technologies
- LCD (liquid crystal display)
- LED (light-emitting diode)
- Plasma panels

Image Synthesis CSC 2141 Introduction to Computer Graphics

Image Synthesis
- In computer graphics, we form images using a process analogous to how images are formed by optical imaging systems (camera, human visual system)
- We will construct a model of the image formation process in optical systems that we can use to understand CG imaging systems
- Basic model of viewing: a viewer holding up a synthetic camera to a model of the scene we wish to render

Major Elements of Our Model: Objects and Viewer
- Objects: description of the 3D scene, including the positions, geometric structure, color, and surface properties (texture, shininess, transparency) of objects
  - Exist independent of the viewer
- Viewer: description of the location of the viewer and the properties of the synthetic camera (direction, field of view, etc.)

Geometric Models
- How do we describe our 3D scene in a manner that can be processed by graphics programs?
- Mathematically based objects are easy to model:
  - Line: two vertices
  - Polygon: an ordered set of vertices
  - Circle: center and a point on the circle
  - Cube, cylinder, sphere, ...
- ...but natural objects (hair, water, clouds) are hard

Geometric Models
- Simplest: polyhedral models
- Solid objects are described by their 2D boundaries
- Boundaries are constructed from flat elements: points, line segments, and planar polygonal faces
- Faces are the basic rendering element; in OpenGL, a face is just a list of vertices
- Advanced models: curved surfaces such as Bezier patches, NURBS, subdivision surfaces, fractals, etc.
  (example model: 69,451 triangles)

Light and Light Sources
- Locations of light sources determine:
  - the shading of the rendered objects: which areas are dark and which are light
  - the location of the shadows
- We assume point light sources (like the sun): they emit energy from a single location in all directions
- Light is a form of electromagnetic energy
  - Over the visible spectrum, different wavelengths are seen as different colors
- To simplify: geometric optics model
  - Light sources are emitters of energy and have a fixed intensity
  - Light travels in straight lines (light rays)

Color of Light
- We model the color of light as some combination of red, green, and blue color components
- What we see in an object is not its color, but the color of the light that is reflected from that object toward our eye
- If an object reflects only red light but the light sources emit only green light, then we see the object as black!

Light Propagation in the Scene
- Light is emitted from light sources
- It strikes and illuminates objects in the scene
- It interacts with object surfaces depending on surface characteristics:
  - absorbed wholly or partially by some objects
  - reflected from or refracted through surfaces
- Some light rays eventually enter our eyes
- Some leave the scene and contribute nothing to what we see

Lighting Models
- Global illumination models
  - Simulate this complex interaction of light and objects
  - Computationally expensive!
- Local illumination models
  - Adopted by most commercial interactive graphics systems
  - Assume that the light rays illuminating an object come directly from the light sources

Camera Model: The Pinhole Camera
- A simple example of an optical imaging system
- A small hole at the center of one side: only a single ray of light from any point can enter, so the hole acts as the "center of projection"
- The opposite side is the film plane (imaging plane)

The Pinhole Camera: Side View
- The camera points along the positive z-axis
- The center of projection (COP) is at the origin (0, 0, 0)
- The film plane is located at distance d behind the pinhole, at z = -d
- By similar triangles, a point (x, y, z) projects onto the film plane at y_p = -d*y/z (and likewise x_p = -d*x/z)

Field of View (also called angle of view)
- Assume the height of the box is h
- A point will be visible on the image if it lies within a cone centered at the origin with angle theta = 2 tan^-1(h / (2d))
- This angle is called the field of view (for y); a similar formula holds for x

Observation
- The image is inverted in the projection process
- Why? Because the film is behind the pinhole (center of projection)

Synthetic Camera Model
- We move the image plane in front of the camera
- The image of a point is located where the projector through that point passes through the projection (image) plane
- All projectors are rays emanating from the COP

Clipping
- We must consider the limited size of the image
- Recall the field of view in the pinhole camera: not all objects can be imaged onto the film
- In our synthetic camera, we place a clipping window through which the viewer sees the world
- Given the location of the viewer/camera, the location and orientation of the projection plane, and the size of the clipping rectangle, we can determine which objects will appear in the image

World Coordinate System vs. Camera Coordinate System
- The projection transformation above assumed that objects are represented in a camera-centered coordinate system
- But the camera moves around!
- It is therefore necessary to apply a transformation that maps object coordinates from the world coordinate system to the camera coordinate system
- Details later

Camera Specifications
We need to inform the graphics system of:
- Camera location: the location of the center of projection (COP)
- Camera direction: what direction the camera points in
- Camera orientation: what direction is "up" in the final image
- Focal length: the distance from the COP to the projection plane (image plane)
- Image size: clipping window size and location
There is a variety of ways to specify these; OpenGL asks for the field of view and the image aspect ratio (ratio of width to height) rather than focal length and image size.

Application Programmer's Interface (API)
- The interface between an application program and the graphics system can be specified through a set of functions in a graphics library
- The programmer sees only the API and is shielded from the details of both the hardware and software implementations of the graphics library (drivers)
- The functions available through the API should match our image formation model

OpenGL Is an API
- Based on the synthetic-camera model
- We need functions to specify (and we have them):
  - Objects
  - Viewer/camera
  - Light sources
  - Material properties of objects

Object Specification
- Most APIs support a limited set of primitives, including:
  - Points (0D objects)
  - Line segments (1D objects)
  - Polygons (2D objects)
  - Some curves and surfaces: quadrics, parametric polynomials
- All are defined through locations in space, or vertices

Example (OpenGL)
  glBegin(GL_POLYGON);          // type of object
  glVertex3f(0.0, 0.0, 0.0);    // location of vertex
  glVertex3f(0.0, 1.0, 0.0);
  glVertex3f(0.0, 0.0, 1.0);
  glEnd();                      // end of object definition

How is an API implemented?

Physical Approach?
- Ray tracing: follow rays of light from the center of projection until they either are absorbed by objects or go off to infinity
- Can handle global effects: multiple reflections, translucent objects
- Slow

Practical Approach
- Process objects one at a time, in the order they are generated by the application
- Can consider only local illumination
- Pipeline architecture: all vertices go through the pipeline
  Vertices -> Vertex Processor -> Clipper and Primitive Assembler -> Rasterizer -> Fragment Processor -> Pixels

Vertex Processing
- Carries out transformations and computes a color for each vertex
- Each vertex is processed independently

Transformations
- Much of the work in the pipeline is converting object representations from one coordinate system to another:
  - Object coordinates
  - Camera (eye) coordinates
  - Screen coordinates
- Every change of coordinates is equivalent to a matrix transformation
- Eventually, the geometry is transformed by a perspective projection, which can also be represented by a matrix
- Retain 3D information as long as possible in the pipeline; this allows more general projections than the one we just saw

Perspective Projection
  (figure: view plane, projectors, and a center of projection at (0.7, 0.5, -4.0))

Clipping
- The synthetic camera can only see part of the world: the clipping volume
- Objects that are not within this volume are clipped out of the scene
- Before clipping, sets of vertices are assembled into primitives (such as line segments or polygons)
- Output of this step: the set of primitives whose projections can appear in the image

Rasterization
- Each primitive is rasterized: the rasterizer generates the pixels that will finally be used to update the frame buffer
- Output of this step: a set of fragments for each primitive
- Fragment: a potential pixel that carries color, location, and depth information with it

Fragment Processor
- Updates pixels in the frame buffer using the fragments
- Some fragments may occlude others (depth information is used here)
- The color of a fragment may be changed by applying textures, etc.
- Translucent effects may also be generated by blending the colors of fragments