Welcome to CSCI 480, Computer Graphics
Introduction
Instructor: Erin Shaw, TA: TBD
Class web page




1 Welcome to CSCI 480, Computer Graphics
Introduction
- Instructor: Erin Shaw, TA: TBD
- Class web page
Assignment
- Reading: Chapter 1 of Angel textbook
- Project 1: Follow homework icon to project1
- Online registration and survey: TBP

2 What is Computer Graphics?
Computer graphics is the technology for presenting information.
Towards this end, we need to process, transform, and display data, and must take the following into account:
- Origin (where does it come from?)
- Throughput (how much of it can I process?)
- Latency (how long do I have to wait?)
- Presentation (what does it look like?)

3 CSCI 480 is Computer Science!
We will cover...
- Graphics programming algorithms
- Graphics data structures
- Color & human vision
- Graphical interface design & programming
- Geometry & modeling
- ...and by necessity, OpenGL
It is not
- Image/illustration packages - Adobe Photoshop, Illustrator
- CAD packages - AutoCAD
- Rendering packages - RenderMan, Lightscape
- Modeling packages - 3D Studio MAX
- Animation packages - Flash, Digimation

4 CSCI 480 goals
The goal of this semester is many-fold:
- hands-on graphics programming experience with industry tools
- emphasis on 3D and rendering
- mathematical underpinnings
- familiarity with the field
If only we had more time!

5 Who am I?
Non-faculty
- Research scientist at USC's Information Sciences Institute (since 1995)
- Distance education, virtual intelligent tutors, UI design, past research in CG and digital libraries
Graphics background
- MS from the Program of Computer Graphics at Cornell University
- Employee #12 of startup company Edsun Labs; built a new CEG chip for anti-aliasing

6 Lecture
- Graphics applications
- Display architectures and devices
- Images
  - Light
  - Human visual system
  - Camera models
- APIs and OpenGL
- Rendering pipeline

7 Graphics displays
Motivation
- How do we keep the display screen refreshed as more data is displayed at higher resolutions?
Frame buffer (video memory) holds screen data
- Buffer size (width x height) = screen resolution
- Buffer depth = no. of bits per pixel, which determines the no. of colors
  - number of colors = 2^number_of_bits
  - 1 bit per pixel: the bit can be 0 or 1, i.e. black or white
  - 8 bits per pixel gives 256 colors, or intensities

8 Graphics displays
Color systems require at least 3 times as much memory
- Typically an 8-bit buffer for each color channel (RGB)
- True color is at least 24 bits per pixel
- 32-bit systems are RGBA, where A is alpha, the transparency value
- Video memory is highly specialized
  - Video RAM, Dynamic RAM, SDRAM, etc.
  - article on accelerator boards
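The frame buffer arithmetic on these two slides can be sketched directly; the 1024x768 example resolution below is an illustrative assumption, not a figure from the lecture:

```c
/* Frame buffer arithmetic sketch. colors_for_depth implements
   "number of colors = 2^number_of_bits"; buffer_bytes gives the
   memory a full-screen buffer needs at a given bit depth. */
unsigned long colors_for_depth(unsigned bits) {
    return 1UL << bits;   /* 2^bits */
}

unsigned long buffer_bytes(unsigned width, unsigned height, unsigned bits_per_pixel) {
    return (unsigned long)width * height * bits_per_pixel / 8;
}
```

For example, an 8-bit buffer gives 256 intensities, and a 1024x768 RGBA (32-bit) frame buffer needs exactly 3 MiB.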

9 Graphics displays
Rasterization
- is the process of converting geometric entities (lines and curves) to pixel assignments in the frame buffer
- Most monitors and printers are raster-based
- Rasterization can be done in hardware, software, or a combination of the two
- Rasterization is the final step of the rendering pipeline

10 Lecture
- Graphics applications
- Display architectures and devices
- Images
  - Light
  - Human visual system
  - Camera models
- APIs and OpenGL
- Rendering pipeline

11 Display devices
More on display devices:
http://www.magnavox.com:81/electreference/videohandbook/tvset.html

12 Lecture
- Graphics applications
- Display architectures and devices
- Images
  - Light
  - Human visual system
  - Camera models
- APIs and OpenGL
- Rendering pipeline

13 Images
We will follow the book's top-down approach
- Instead of starting with 2D and generalizing, we will jump right into 3D, using OpenGL.
3D CG images are synthetic
- That is, they do not exist physically
- Often this is obvious, sometimes it is not - the latter is the goal of physically-based rendering
- We want to create synthetic images the same way we create traditional images

14 Images
3D objects are first modeled
- Typically with a CAD program that outputs either a list of polygons (vertices) or a scene graph of primitive objects
- They exist independently of a viewer
  - object space versus camera (eye) space

15 Images
3D objects are then rendered
- We account for the viewer at this stage
- We are the cameras in our world (but synthetic cameras are more versatile!)
- Images are 2-dimensional
- Images would be black without light!

16 What is light?
Visible light is electromagnetic radiation
- Specifically, it is the portion of the electromagnetic spectrum that the eye can detect
- Electromagnetic radiation = radiant energy
- Characterized by either
  - wavelength (λ), in nanometers (nm)
  - frequency (f), in Hertz (Hz)

17 Electromagnetic spectrum
[Figure: the electromagnetic spectrum, from cosmic rays, gamma rays, x-rays, and ultraviolet through visible and infrared to radar and radio (short wave, TV, FM) and AC electricity, with axes for frequency (Hz) and wavelength (nm)]
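The two characterizations of light, wavelength and frequency, are linked by c = λf, where c is the speed of light; a small conversion sketch:

```c
/* Convert between wavelength and frequency for electromagnetic
   radiation using c = lambda * f. Wavelengths are in nanometers,
   matching the slide's units. */
#define SPEED_OF_LIGHT 2.998e8   /* meters per second */

double light_frequency_hz(double wavelength_nm) {
    return SPEED_OF_LIGHT / (wavelength_nm * 1e-9);
}

double light_wavelength_nm(double freq_hz) {
    return (SPEED_OF_LIGHT / freq_hz) * 1e9;
}
```

Green light at about 550 nm, for example, has a frequency of roughly 5.45 x 10^14 Hz.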

18 Imaging systems
Three examples
- Human visual system
- Pinhole camera
- Graphics renderer
Our goal is to understand how images are formed: visually, with a camera, and then on a computer

19 Human visual system
1. Light enters through the cornea
2. Iris opens and closes the pupil to adjust the amount of light
3. Lens helps focus the image on the retina
4. Photoreceptors in the retina collect light and convert energy to impulses
Reference: http://members.aol.com/osleye/Main.htm

20 Visual acuity
Visual acuity is defined as
- a measure of the ability of the eye to distinguish detail
Four factors affect visual acuity
- Size, luminance, contrast, time
The amount of light reaching the eye is described by
- brightness (a subjective interpretation)
- luminance (an objective physical quantity)
- radiance (also an objective physical quantity)

21 Radiometry vs photometry
Radiometry
- Physical measurement (all electromagnetic energy)
- Used by optical and radiation engineers
Photometry
- Perceptual measurement (visible light only): how a human observer responds to light
- Used by illumination engineers and perceptual psychologists

22 Radiometric vs photometric

Radiometric quantity         | Photometric quantity
Radiant energy (joule)       | Luminous energy (talbot)
Radiant power (W = J/sec)    | Luminous power (lm = talbot/sec)
Irradiance (W/m²)            | Illuminance (lm/m²)
Radiant exitance (W/m²)      | Luminous exitance (lm/m²)
Radiance (W/sr·m²)           | Luminance (cd/m²)

In computer graphics both are used!
(W = watt, m = meter, sr = steradian, lm = lumen, cd = candela)

23 Photoreceptors
Rods
- Occupy the peripheral retina
- Responsible for detection of movement, shapes, and night-time vision
Cones
- Occupy a small portion of the retina (the macula)
- Responsible for fine detail and color vision
- Three subtypes: red, blue, and green

24 Photoreceptor sensitivity
Red, green & blue cones are sensitive to different frequencies of light (guess which!)
[Figure: sensitivity curve of a single cone for each of red, blue, and green]

25 Photoreceptor sensitivity A great image from the Architectural Science Lab at the Univ. of Western Australia, showing the three characteristic peaks of sensitivity within the red/orange, green and blue frequency bands

26 Human visual system
What to take away from this discussion
- The basic system of image formation
- The basis for the RGB computer color model is the tristimulus theory of vision, based on the sensitivity curves
- There is a lot of processing after an image is formed that we will not (cannot) model

27 Pinhole camera
Simplistic model
- a box with a small hole on one side
- the hole allows only one ray of light to enter
- d is the distance to the image plane; the hole is centered at the origin

28 Pinhole camera
- Align the image plane along the z axis at distance d, giving the plane z = -d
- Find the image point using similar triangles:
  y_p / -d = y / z, or y_p = -yd/z
  x_p / -d = x / z, or x_p = -xd/z
The point (x_p, y_p, -d) is the projection of the point (x, y, z)
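The similar-triangle relations above translate directly into code; a minimal sketch, assuming the point lies in front of the camera (z < 0):

```c
/* Pinhole projection onto the image plane z = -d, using
   x_p = -xd/z and y_p = -yd/z from the slide. */
typedef struct { double x, y; } Point2;

Point2 pinhole_project(double x, double y, double z, double d) {
    Point2 p;
    p.x = -x * d / z;
    p.y = -y * d / z;
    return p;
}
```

With d = 1, for example, the point (2, 4, -2) projects to (1, 2).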

29 Pinhole camera
- The field of view (or view angle), θ, is the angle subtended by the largest object whose projection will fit on the view plane:
  θ = 2 tan⁻¹(h / 2d)
  where h is the height of the view plane

30 Pinhole camera
- The depth of field is the range of distances from the lens over which an object is in focus
- In a pinhole camera, the depth of field is infinite (a perfect lens!), but little light gets through (an exception is sunlight), so images are dark
- If we replace the pinhole with a lens
  - we can capture brighter images
  - we can vary the focal length, d, which in turn changes the field of view (the zoom of a camera), which affects the depth of field

31 Synthetic camera model
The camera system we'll use in CG is analogous to the other imaging systems
- The focal length determines the projection plane (the image plane), e.g. the plane z = -10
- The image is projected point by point onto the projection plane
- The field of view is simulated using a clipping window, or frustum
- Depth of field is more difficult to simulate

32 How shall we model light?
Particle model at large scale
- Geometrical optics
- Radiometry
Wave model at small scale
- Physical optics
- Maxwell's equations

33 APIs
Application programming interfaces (APIs) shield users from implementation details:
Application program -> Graphics library (API) -> Hardware -> Display
OpenGL, PHIGS, Direct3D, VRML, and Java3D are all graphics APIs

34 OpenGL
For a synthetic camera model, OpenGL API functions must allow the user to specify:
- Objects
  - polygons, points, spheres, curves, surfaces
Code that defines a polygon in OpenGL (the object type, then the xyz coordinates of the 3D points):

  glBegin(GL_POLYGON);
  glVertex3f(0.0, 0.0, 0.0);
  glVertex3f(0.0, 1.0, 0.0);
  glVertex3f(0.0, 0.0, 1.0);
  glEnd();

35 OpenGL
Note:
- Both modeling and rendering can be performed in OpenGL
- Often the two are separate and performed by different applications, e.g. AutoCAD (modeler) and Lightscape (renderer)
- Rendering packages typically take as input the output of the popular CAD modelers

36 OpenGL
For a synthetic camera model, OpenGL API functions must allow the user to specify:
- Viewer (camera)
  - position, orientation, focal length (see figure 1.23 on p. 23 in the textbook)
- Light sources
  - location, color, direction, strength
- Material properties
  - color, transparency, reflectivity, smoothness, etc.

37 Pipeline architecture
Pipelines
- Data goes to processing step a, the resulting data is passed to processing step b, and so on
Motivation
- Raster displays have become universal
- In CG, the exact same operations are performed on millions of vertices per scene
Four major steps
- Transform (4), clip (7), project (5), rasterize (7)

38 Rendering pipeline
Modeling coords
- modeling transform ->
World coords (object space)
- visibility determination
- lighting
- viewing transform ->
View coords (eye space)
- clip to hither and yon
- projection transform ->

39 Rendering pipeline
Normalized device coords (clip space)
- clip to left, right, top, bottom
- scale and translate (workstation transform) ->
Device (screen) coords (image space)
- hidden surface removal
- rasterization ->
Display
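The "scale and translate" (workstation transform) step above can be sketched as a mapping from normalized device coordinates in [-1, 1] to integer screen coordinates; the top-left screen origin and the y flip are conventions assumed here, not stated on the slide:

```c
/* Workstation/viewport transform sketch: map NDC in [-1,1] to device
   coordinates for a width x height screen with the origin at top-left. */
typedef struct { int x, y; } Pixel;

Pixel ndc_to_screen(double nx, double ny, int width, int height) {
    Pixel p;
    p.x = (int)((nx + 1.0) * 0.5 * (width  - 1) + 0.5);   /* scale, translate, round */
    p.y = (int)((1.0 - ny) * 0.5 * (height - 1) + 0.5);   /* flip y */
    return p;
}
```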

