Chapter V Vertex Processing

GPU Rendering Pipeline In the GPU, rendering is done in a pipeline architecture, where the output of one stage is taken as the input of the next stage. In the rendering pipeline, a shader is a synonym for a program: we have to provide the pipeline with two programs, the vertex shader and the fragment shader. In contrast, the rasterizer and output merger are hard-wired stages that perform fixed functions. The vertex shader performs various operations on every input vertex stored in the vertex array, and its output is passed to the rasterizer. The rasterizer assembles triangles from the vertices and converts each triangle into fragments. A fragment is defined for each pixel location covered by the triangle on the screen and refers to the set of data used to determine a color.
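
The division of labor can be sketched on the CPU side. The following Python sketch is purely conceptual: every function here is a hypothetical stand-in for a GPU stage, not real GPU code. It only illustrates that the two programmable stages are functions we supply, while the stages in between are fixed.

    # Conceptual sketch of the pipeline's data flow (all names are illustrative).

    def rasterize(verts):
        # Hard-wired stage (stubbed): assembles triangles from the vertices and
        # converts each triangle into per-pixel fragments.
        return []

    def merge_outputs(colors):
        # Hard-wired stage (stubbed): per-fragment operations and blending.
        return colors

    def render(vertex_array, vertex_shader, fragment_shader):
        processed = [vertex_shader(v) for v in vertex_array]  # programmable
        fragments = rasterize(processed)                      # fixed function
        colors = [fragment_shader(f) for f in fragments]      # programmable
        return merge_outputs(colors)                          # fixed function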

GPU Rendering Pipeline (cont’d) Among the operations performed by the vertex shader, the most essential is applying a series of transforms to the vertex: the world, view, and projection transforms presented in this chapter.

World Transform - Revisited Previously, the world transform was applied to the vertex positions stored in the vertex array. What about the vertex normals? If the world transform is written as [L|t], where L is its linear part (rotation and scaling) and t its translation, the normals are affected only by L, not by t. However, if L includes a non-uniform scaling, it cannot be applied to surface normals: as the triangle-normal example shows, a triangle's normal scaled by L is no longer orthogonal to the triangle scaled by L.

World Transform – Revisited (cont’d) Instead, we have to use the inverse transpose of L, i.e., (L^-1)^T, which is simply denoted by L^-T. If L does not contain any non-uniform scaling, n can be transformed by L; in that case, Ln and L^-T n have the same direction even though their magnitudes may differ. Therefore, we can transform n consistently by L^-T, regardless of whether L contains a non-uniform scaling or not. Note that the normal transformed by L^-T is finally normalized.
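
A small numpy check makes the argument concrete. With a non-uniform scale L, the transformed normal Ln loses orthogonality to the transformed triangle, while (L^-1)^T n preserves it. The vectors below are illustrative values chosen so that n is orthogonal to an edge e of the triangle.

    import numpy as np

    # Non-uniform scaling: x is scaled by 2; y and z are unchanged.
    L = np.diag([2.0, 1.0, 1.0])

    e = np.array([1.0, -1.0, 0.0])       # an edge direction lying in the triangle
    n = np.array([1.0,  1.0, 0.0])       # a normal: n . e == 0

    print(np.dot(L @ n, L @ e))          # 3.0 != 0 : Ln is no longer a valid normal
    n_fixed = np.linalg.inv(L).T @ n     # transform by the inverse transpose L^-T
    print(np.dot(n_fixed, L @ e))        # 0.0     : orthogonality is preserved

As noted above, n_fixed would finally be normalized before use.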

View Transform The camera pose is specified in the world space by three parameters: EYE, the camera position; AT, a reference point toward which the camera is aimed; and UP, the view up vector that describes where the top of the camera is pointing. (In most cases, UP is set to the vertical axis, the y-axis, of the world space.) From these, the camera space {u, v, n, EYE} is created, where {u, v, n} is an orthonormal basis.
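
Given EYE, AT, and UP, the orthonormal basis {u, v, n} can be computed with cross products. A minimal numpy sketch, assuming float arrays and following the convention that n points from AT toward EYE:

    import numpy as np

    def camera_basis(eye, at, up):
        """Build the orthonormal camera basis {u, v, n} from EYE, AT, and UP."""
        n = eye - at                    # n: opposite to the viewing direction
        n /= np.linalg.norm(n)
        u = np.cross(up, n)             # u: orthogonal to both UP and n
        u /= np.linalg.norm(u)
        v = np.cross(n, u)              # v: completes the orthonormal basis
        return u, v, n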

View Transform (cont’d) A point is given different coordinates in distinct spaces. If all the world-space objects are redefined in terms of the camera space, in the manner of the teapot's mouth end in the slide's example, it becomes much easier to develop the rendering algorithms. In general, such a redefinition is called a space change. The space change from the world space {e1, e2, e3, O} to the camera space {u, v, n, EYE} is the view transform.

View Transform (cont’d) The space change can be intuitively described as the process of superimposing the camera space, {u, v, n, EYE}, onto the world space, {e1, e2, e3, O}. First, EYE is translated to the origin of the world space. Imagine invisible rods connecting the scene objects and the camera space: the same translation is applied to the scene objects.

View Transform (cont’d) Thanks to the translation, the world space and the camera space now share the origin. We then need a rotation R that transforms {u, v, n} into {e1, e2, e3}; this is called a basis change. Once the world and camera spaces are made identical, the world-space positions of the transformed objects can be taken as their camera-space positions.

View Transform (cont’d) The view matrix Mview is applied to all objects in the world space to transform them into the camera space. The space/basis change plays quite an important role in many algorithms of computer graphics, computer vision, augmented reality, and robotics, and it will frequently reappear later in this class.
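
Putting the two steps together gives Mview = R T, where T translates EYE to the origin and R has u, v, and n as its rows (so that it maps {u, v, n} onto {e1, e2, e3}). A self-contained numpy sketch, equivalent in spirit to OpenGL's gluLookAt:

    import numpy as np

    def view_matrix(eye, at, up):
        """Mview: translate EYE to the origin, then rotate {u,v,n} onto {e1,e2,e3}."""
        n = (eye - at) / np.linalg.norm(eye - at)   # camera basis, as sketched earlier
        u = np.cross(up, n); u /= np.linalg.norm(u)
        v = np.cross(n, u)
        T = np.identity(4)
        T[:3, 3] = -eye                  # translation moving EYE to the origin
        R = np.identity(4)
        R[:3, :3] = np.stack([u, v, n])  # rows u, v, n: the basis-change rotation
        return R @ T

For example, view_matrix(np.array([0., 0., 10.]), np.zeros(3), np.array([0., 1., 0.])) maps the world-space point (0, 0, 10), i.e., EYE itself, to the camera-space origin.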

Right-hand System vs. Left-hand System

Right-hand System vs. Left-hand System (cont’d) An inconsistency is observed if an object defined in the RHS is ported as-is to the LHS together with the same camera parameters: the rendered image is reflected.

Right-hand System vs. Left-hand System (cont’d) In order to avoid the reflected image shown on the previous slide, the z-coordinates of both the objects and the view parameters should be negated. Note that z-negation is conceptually equivalent to inverting the z-axis.
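
In code, the port amounts to negating every z-coordinate, for the objects and the view parameters alike. A minimal numpy illustration (the array contents are made up):

    import numpy as np

    verts = np.array([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0]])   # world-space positions (illustrative)
    verts[:, 2] *= -1.0                   # negate the z-coordinate of every vertex

    eye = np.array([0.0, 0.0, 10.0])      # the view parameters are z-negated too
    at  = np.array([0.0, 0.0,  0.0])
    eye[2] *= -1.0
    at[2]  *= -1.0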

View Frustum From now on, let us denote the basis of the camera space by {x, y, z} instead of {u, v, n}. Recall that, for constructing the view transform, we defined the external parameters of the camera: EYE, AT, and UP. Now let us control the camera's internals, which is analogous to choosing a lens for the camera and controlling zoom-in/zoom-out. The view frustum parameters, fovy (vertical field of view), aspect (aspect ratio), n (near plane), and f (far plane), define a truncated pyramid. The near and far planes run counter to a real-world camera or the human vision system, but they have been introduced for the sake of computational efficiency.

View Frustum (cont’d) In the slide's example, the cylinder and sphere would be discarded by so-called view-frustum culling, whereas the teapot would survive. If a polygon intersects the boundary of the view frustum, it is clipped with respect to the boundary, and only the portion inside the view frustum is processed for display.

Projection Transform In fact, the pyramidal view frustum is not directly used for clipping. Instead, a transform is applied that deforms the view frustum into the axis-aligned 2x2x2-sized cube centered at the origin; it is called the projection transform. The camera-space objects are projection-transformed and then clipped against the cube. The projection-transformed objects are said to be in the clip space, which is just the camera space renamed.
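
Once a vertex has been projection-transformed into homogeneous coordinates (x, y, z, w), membership in the 2x2x2 cube can be tested without dividing by w. A sketch of the test, assuming w > 0 (as for points in front of the camera):

    def inside_cube(x, y, z, w):
        """True if (x/w, y/w, z/w) lies in the 2x2x2 cube centered at the origin.

        The cube test -1 <= x/w, y/w, z/w <= 1 is rewritten against w,
        so no division is needed (assuming w > 0).
        """
        return -w <= x <= w and -w <= y <= w and -w <= z <= w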

Projection Transform (cont’d) The view frustum can be taken as a convergent pencil of projection lines: the lines converge on the origin, where the camera (EYE) is located. All 3D points on a single projection line are mapped onto a single 2D point in the projected image, which brings about the effect of perspective projection, where objects farther away look smaller. The projection transform makes the projection lines parallel to the z-axis, so that viewing is done along this universal projection line. In this sense, the projection transform brings the effect of perspective projection “within a 3D space.”

Projection Transform (cont’d) The projection transform matrix is defined by the view frustum parameters fovy, aspect, n, and f; the derivation of the matrix is presented in the textbook.
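
Although the derivation is left to the textbook, the resulting matrix is the standard perspective projection determined by fovy, aspect, n, and f. A numpy version of this common OpenGL-style form (note the -1 in the last row, which performs the z-negation discussed on the next slide):

    import numpy as np

    def perspective(fovy, aspect, n, f):
        """Perspective projection matrix for the frustum (fovy, aspect, n, f)."""
        d = 1.0 / np.tan(fovy / 2.0)          # cot(fovy / 2), fovy in radians
        return np.array([
            [d / aspect, 0.0,  0.0,                  0.0],
            [0.0,        d,    0.0,                  0.0],
            [0.0,        0.0, -(f + n) / (f - n),   -2.0 * f * n / (f - n)],
            [0.0,        0.0, -1.0,                  0.0],
        ])

Applying this matrix to a homogeneous camera-space point and then dividing by the resulting w maps the view frustum into the 2x2x2 cube.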

Projection Transform (cont’d) The projection-transformed objects will enter the rasterizer. Unlike the vertex shader, the rasterizer is implemented in hardware and assumes that the clip space is left-handed. In order for the vertex shader to be compatible with the hard-wired rasterizer, the objects should be z-negated.

Projection Transform (cont’d) The slide illustrates the projection transform from the camera space (RHS) to the clip space (LHS).