1 UNIT- I 2D PRIMITIVES

2 Line and Curve Drawing Algorithms

3 Line Drawing. A line through (x0, y0) and (xEnd, yEnd) satisfies y = m·x + b, with slope m = (yEnd − y0) / (xEnd − x0) and intercept b = y0 − m·x0.

4 DDA Algorithm. If |m| < 1: x_{k+1} = x_k + 1, y_{k+1} = y_k + m. If |m| > 1: y_{k+1} = y_k + 1, x_{k+1} = x_k + 1/m. (figure: sampling along x for gentle slopes and along y for steep slopes, between (x0, y0) and (xEnd, yEnd))

5 DDA Algorithm

#include <math.h>

inline int round (const float a) { return int (a + 0.5); }

/* setPixel() is assumed to be defined elsewhere (see the midpoint-circle slide). */
void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = x0, y = y0;

    if (fabs (dx) > fabs (dy))
        steps = fabs (dx);      /* |m| < 1  */
    else
        steps = fabs (dy);      /* |m| >= 1 */
    xIncrement = float (dx) / float (steps);
    yIncrement = float (dy) / float (steps);

    setPixel (round (x), round (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel (round (x), round (y));
    }
}

6 Bresenham’s Line Algorithm. (figure: at each step the next pixel is chosen between (x_k+1, y_k) and (x_k+1, y_k+1) by comparing the distances d_l and d_u from the candidate pixels to the true line)

7 Bresenham’s Line Algorithm

#include <math.h>

/* Bresenham line-drawing procedure for |m| < 1.0.
   setPixel() is assumed to be defined elsewhere. */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
    int dx = fabs (xEnd - x0), dy = fabs (yEnd - y0);
    int p = 2 * dy - dx;
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    /* Determine which endpoint to use as start position. */
    if (x0 > xEnd) {
        x = xEnd;
        y = yEnd;
        xEnd = x0;
    }
    else {
        x = x0;
        y = y0;
    }
    setPixel (x, y);

    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;
        else {
            y++;
            p += twoDyMinusDx;
        }
        setPixel (x, y);
    }
}

8 Circle Drawing. Pythagorean theorem: x² + y² = r², or for a circle centred at (xc, yc): (x − xc)² + (y − yc)² = r². Stepping x over (xc − r) ≤ x ≤ (xc + r) gives y = yc ± √(r² − (x − xc)²).

9 Circle Drawing. Stepping in x and solving for y leaves gaps where the curve is steep; there the roles are swapped (step in y, solve for x).

10 Circle Drawing using polar coordinates: x = xc + r·cos θ, y = yc + r·sin θ; change θ with step size 1/r. (figure: point (x, y) at angle θ on a circle of radius r about (xc, yc))

11 Circle Drawing using polar coordinates: x = xc + r·cos θ, y = yc + r·sin θ; change θ with step size 1/r and use symmetry for θ > 45°: each computed point (x, y) also gives the reflected points (y, x), (y, −x), (−x, y), … in the other octants.

12 Midpoint Circle Algorithm. Circle function f(x, y) = x² + y² − r²: f < 0 if (x, y) is inside the circle, f = 0 if (x, y) is on the circle, f > 0 if (x, y) is outside the circle. Use symmetry once x > y. (figure: the midpoint between the candidate pixels (x_k+1, y_k) and (x_k+1, y_k − 1) lies at y_k − 1/2)

13 Midpoint Circle Algorithm

#include <GL/glut.h>

class scrPt {
public:
    GLint x, y;
};

void setPixel (GLint x, GLint y)
{
    glBegin (GL_POINTS);
        glVertex2i (x, y);
    glEnd ( );
}

void circlePlotPoints (scrPt, scrPt);

void circleMidpoint (scrPt circCtr, GLint radius)
{
    scrPt circPt;
    GLint p = 1 - radius;

    circPt.x = 0;
    circPt.y = radius;

    /* Plot the initial point in each circle quadrant. */
    circlePlotPoints (circCtr, circPt);

    /* Calculate next points and plot in each octant. */
    while (circPt.x < circPt.y) {
        circPt.x++;
        if (p < 0)
            p += 2 * circPt.x + 1;
        else {
            circPt.y--;
            p += 2 * (circPt.x - circPt.y) + 1;
        }
        circlePlotPoints (circCtr, circPt);
    }
}

void circlePlotPoints (scrPt circCtr, scrPt circPt)
{
    setPixel (circCtr.x + circPt.x, circCtr.y + circPt.y);
    setPixel (circCtr.x - circPt.x, circCtr.y + circPt.y);
    setPixel (circCtr.x + circPt.x, circCtr.y - circPt.y);
    setPixel (circCtr.x - circPt.x, circCtr.y - circPt.y);
    setPixel (circCtr.x + circPt.y, circCtr.y + circPt.x);
    setPixel (circCtr.x - circPt.y, circCtr.y + circPt.x);
    setPixel (circCtr.x + circPt.y, circCtr.y - circPt.x);
    setPixel (circCtr.x - circPt.y, circCtr.y - circPt.x);
}

14 OpenGL

#include <GL/glut.h>   // (or others, depending on the system in use)

void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);   // Set display-window color to white.
    glMatrixMode (GL_PROJECTION);        // Set projection parameters.
    gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

void lineSegment (void)
{
    glClear (GL_COLOR_BUFFER_BIT);   // Clear display window.
    glColor3f (0.0, 0.0, 1.0);       // Set line-segment color to blue.
    glBegin (GL_LINES);
        glVertex2i (180, 15);        // Specify line-segment geometry.
        glVertex2i (10, 145);
    glEnd ( );
    glFlush ( );                     // Process all OpenGL routines as quickly as possible.
}

void main (int argc, char** argv)
{
    glutInit (&argc, argv);                          // Initialize GLUT.
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);    // Set display mode.
    glutInitWindowPosition (50, 100);                // Set top-left display-window position.
    glutInitWindowSize (400, 300);                   // Set display-window width and height.
    glutCreateWindow ("An Example OpenGL Program");  // Create display window.

    init ( );                        // Execute initialization procedure.
    glutDisplayFunc (lineSegment);   // Send graphics to display window.
    glutMainLoop ( );                // Display everything and wait.
}

15 OpenGL Point Functions glVertex*( ); * : 2, 3, 4 i (integer) s (short) f (float) d (double) Ex: glBegin(GL_POINTS); glVertex2i(50, 100); glEnd(); int p1[ ]={50, 100}; glBegin(GL_POINTS); glVertex2iv(p1); glEnd();

16 OpenGL Line Functions GL_LINES GL_LINE_STRIP GL_LINE_LOOP Ex: glBegin(GL_LINES); glVertex2iv(p1); glVertex2iv(p2); glEnd();

17 OpenGL. The same vertex list glBegin(...); glVertex2iv(p1); glVertex2iv(p2); glVertex2iv(p3); glVertex2iv(p4); glVertex2iv(p5); glEnd(); produces separate segments p1p2 and p3p4 with GL_LINES, the connected polyline p1-p2-p3-p4-p5 with GL_LINE_STRIP, and the same polyline closed back to p1 with GL_LINE_LOOP.

18 Antialiasing Supersampling Count the number of subpixels that overlap the line path. Set the intensity proportional to this count.

19 Antialiasing: Area Sampling. The line is treated as a rectangle; calculate the overlap areas for the pixels and set each pixel's intensity proportional to its overlap area. (figure: example pixels with 80% and 25% coverage)

20 Antialiasing Pixel Sampling Micropositioning Electron beam is shifted 1/2, 1/4, 3/4 of a pixel diameter.

21 Line Intensity differences. Change the line-drawing algorithm: for horizontal and vertical lines use the lowest intensity; for 45° lines use the highest intensity.

22 2D Transformations with Matrices

23 Matrices A matrix is a rectangular array of numbers. A general matrix will be represented by an upper-case italicised letter. The element on the ith row and jth column is denoted by a i,j. Note that we start indexing at 1, whereas C indexes arrays from 0.

24 Matrices – Addition. Given two matrices A and B, if we want to add B to A (that is, form A + B), then if A is (n × m), B must also be (n × m); otherwise A + B is not defined. The addition produces a result C = A + B with elements c_ij = a_ij + b_ij.

25 Matrices – Multiplication. Given two matrices A and B, if we want to multiply B by A (that is, form AB), then if A is (n × m), B must be (m × p), i.e. the number of columns in A must equal the number of rows in B; otherwise AB is not defined. The multiplication produces a result C = AB with elements c_ij = Σ_k a_ik·b_kj. (Basically we multiply the first row of A with the first column of B and put this in the c_11 element of C, and so on.)
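
To make the element formula concrete, here is a minimal sketch in C (not from the slides; the function name and flat row-major storage are assumptions) that multiplies an (n × m) matrix by an (m × p) matrix:

/* Multiply A (n x m) by B (m x p) into C (n x p); row-major flat arrays. */
void matMultiply(const float *A, const float *B, float *C, int n, int m, int p)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < p; j++) {
            float sum = 0.0f;
            for (int k = 0; k < m; k++)
                sum += A[i * m + k] * B[k * p + j];   /* c(i,j) = sum over k of a(i,k)*b(k,j) */
            C[i * p + j] = sum;
        }
}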

26 Matrices – Multiplication (Examples). 2·6 + 6·3 + 7·2 = 44. A (2×2) times a (3×2) matrix is undefined, since 2 ≠ 3. A (2×2) times a (2×4) times a (4×4) matrix is allowed; the result is a 2×4 matrix.

27 Matrices – Basics. Unlike scalar multiplication, matrix multiplication is not commutative: in general AB ≠ BA. Matrix multiplication distributes over addition: A(B + C) = AB + AC. The identity matrix for multiplication is denoted I. The transpose of a matrix A, denoted A^T or A′, is obtained by swapping the rows and columns of A.

28 2D Geometrical Transformations Translate Rotate Scale Shear

29 Translate Points. Recall: we can translate points in the (x, y) plane to new positions by adding translation amounts to the coordinates of the points. Each point P(x, y) moved by dx units parallel to the x axis and dy units parallel to the y axis goes to the new point P′(x′, y′) with x′ = x + dx, y′ = y + dy. In matrix format, if we define the translation matrix T = [dx, dy]^T, then P′ = P + T.

30 Scale Points. Points can be scaled (stretched) by sx along the x axis and by sy along the y axis into new points by the multiplications x′ = sx·x, y′ = sy·y. We specify how much bigger or smaller by means of a "scale factor": to double the size of an object we use a scale factor of 2, to halve the size of an object we use a scale factor of 0.5. If we define S as the diagonal matrix with sx and sy on the diagonal, then P′ = S·P.

31 Rotate Points (cont.). Points can be rotated through an angle θ about the origin: x′ = x·cos θ − y·sin θ, y′ = x·sin θ + y·cos θ, i.e. P′ = R·P.

32 Review… Translate: P′ = P + T. Scale: P′ = SP. Rotate: P′ = RP. Spot the odd one out: translation is added while the others are multiplied. Ideally, all transformations would have the same form (easier to code). Solution: homogeneous coordinates.

33 Homogeneous Coordinates. For given 2D coordinates (x, y) we introduce a third component: [x, y, 1]. In general, a homogeneous coordinate for a 2D point has the form [x, y, W]. Two homogeneous coordinates [x, y, W] and [x′, y′, W′] are said to represent the same point if x = kx′, y = ky′, W = kW′ for some k ≠ 0; e.g. [4, 6, 12] and [2, 3, 6] are the same point (k = 2). Therefore any [x, y, W] can be normalised by dividing each element by W: [x/W, y/W, 1].

34 Homogeneous Transformations. Now redefine the translation using homogeneous coordinates: P′ = T·P, where T(dx, dy) is the 3×3 identity matrix with dx and dy placed in the third column. Similarly, scaling and rotation become P′ = S·P and P′ = R·P, with the 2×2 scale or rotation matrix embedded in the upper-left block of a 3×3 matrix.
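
As a minimal sketch (the struct layout and helper names are illustrative, not part of the slides), a homogeneous 2D translation can be built and applied like this:

typedef struct { float m[3][3]; } Mat3;
typedef struct { float x, y, w; } HPoint2;     /* homogeneous 2D point */

/* 3x3 translation matrix: identity with dx, dy in the third column. */
Mat3 makeTranslation(float dx, float dy)
{
    Mat3 T = {{{1, 0, dx}, {0, 1, dy}, {0, 0, 1}}};
    return T;
}

/* P' = M * P */
HPoint2 transformPoint(Mat3 M, HPoint2 p)
{
    HPoint2 r;
    r.x = M.m[0][0] * p.x + M.m[0][1] * p.y + M.m[0][2] * p.w;
    r.y = M.m[1][0] * p.x + M.m[1][1] * p.y + M.m[1][2] * p.w;
    r.w = M.m[2][0] * p.x + M.m[2][1] * p.y + M.m[2][2] * p.w;
    return r;
}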

35 Composition of 2D Transformations. 1. Additivity of successive translations. We want to translate a point P to P′ by T(dx1, dy1) and then to P′′ by another T(dx2, dy2). Alternatively, we can form the composite T21 = T(dx1, dy1)·T(dx2, dy2) = T(dx1 + dx2, dy1 + dy2) first (translations commute, so the order does not matter) and then apply T21 to P.

36 Examples of Composite 2D Transformations. The point (2, 1) is translated by T(−1, 2) to (1, 3) and then by T(1, −1) to (2, 2); the composite is the single translation T(0, 1).

37 Composition of 2D Transformations (cont.). 2. Multiplicativity of successive scalings: S(sx2, sy2)·S(sx1, sy1) = S(sx1·sx2, sy1·sy2).

38 Composition of 2D Transformations (cont.). 3. Additivity of successive rotations: R(θ2)·R(θ1) = R(θ1 + θ2).

39 Composition of 2D Transformations (cont.). 4. Different types of elementary transformations discussed above can be concatenated as well, e.g. a rotation following a translation gives a single composite matrix M = R(θ)·T(dx, dy).

40 Consider the following two questions: 1) Translate a line segment P1P2, say by −1 units in the x direction and −2 units in the y direction. 2) Rotate a line segment P1P2, say by θ degrees counter-clockwise, about P1. (figure: P1(1, 2) and P2(3, 3) with their transformed images P1′ and P2′)

41 Other Than Point Transformations… Translate lines: translate both endpoints, then join them. Scale or rotate lines: more complex. For example, to rotate an arbitrary line about a point P1, three steps are needed: 1) translate such that P1 is at the origin; 2) rotate; 3) translate such that the point at the origin returns to P1. For the pivot P1(1, 2) this gives the composite T(1, 2)·R(θ)·T(−1, −2).
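
In OpenGL the same three-step composite can be expressed with the matrix stack; the sketch below (illustrative, using the slide's pivot (1, 2)) builds T(1, 2)·R(θ)·T(−1, −2), which is then applied to every vertex drawn afterwards. Because OpenGL post-multiplies the current matrix, the calls appear in composite order but the last call acts on the vertices first.

#include <GL/glut.h>

/* Rotate subsequent 2D geometry by angleDeg about the pivot point (1, 2). */
void rotateAboutPivot(GLfloat angleDeg)
{
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(1.0f, 2.0f, 0.0f);          /* step 3: move the pivot back      */
    glRotatef(angleDeg, 0.0f, 0.0f, 1.0f);   /* step 2: rotate about the z axis  */
    glTranslatef(-1.0f, -2.0f, 0.0f);        /* step 1: move the pivot to origin */
}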

42 Another Example. Scale Translate Rotate Translate

43 Order Matters! As we said, the order of composition of 2D geometrical transformations matters because, in general, matrix multiplication is not commutative. However, it is easy to show that commutativity holds in the following four cases: 1) translation + translation; 2) scaling + scaling; 3) rotation + rotation; 4) scaling (with sx = sy) + rotation. Just to verify case 4: if sx = sy, the two products (scale then rotate, and rotate then scale) give the same matrix, M1 = M2.

44 Rigid-Body vs. Affine Transformations. A transformation matrix of this form, where the upper 2×2 sub-matrix is orthogonal, preserves angles and lengths. Such transforms are called rigid-body transformations, because the body or object being transformed is not distorted in any way. An arbitrary sequence of rotation and translation matrices creates a matrix of this form. The product of an arbitrary sequence of rotation, translation, and scale matrices causes an affine transformation, which has the property of preserving parallelism of lines, but not lengths and angles.

45 Rigid-Body vs. Affine Transformations (cont.). Shear transformations are also affine. (figure: a unit cube under a rigid-body 45° rotation, an affine scale in x but not in y, a shear in the x direction, and a shear in the y direction)

46 2D Output Primitives Points Lines Circles Ellipses Other curves Filling areas Text Patterns Polymarkers

47 Filling area Polygons are considered! 1) Scan-Line Filling (between edges) 2) Interactive Filling (using an interior starting point)

48 1) Scan-Line Filling (scan conversion) Problem: Given the vertices or edges of a polygon, which are the pixels to be included in the area filling?

49 Scan-Line filling, cont’d Main idea: locate the intersections between the scan-lines and the edges of the polygon sort the intersection points on each scan-line on increasing x-coordinates generate frame-buffer positions along the current scan-line between pairwise intersection points

50 Main idea

51 Scan-Line filling, cont’d Problems with intersection points that are vertices: Basic rule: count them as if each vertex is being two points (one to each of the two joining edges in the vertex) Exception: if the two edges joining in the vertex are on opposite sides of the scan-line, then count the vertex only once (require some additional processing)

52 Vertex problem

53 Scan-Line filling, cont’d Time-consuming to locate the intersection points! If an edge is crossed by a scan-line, most probably also the next scan-line will cross it (the use of coherence properties)

54 Scan-Line filling, cont’d. Each edge is well described by an edge record: yMax; x0 (initially the x related to yMin); Δx/Δy (the inverse of the slope); possibly also Δx and Δy. Δx/Δy is used for incremental calculation of the intersection points.

55 Edge Records

56 Scan-Line filling, cont’d. The intersection point (x_n, y_n) between an edge and scan-line y_n follows from the line equation of the edge: y_n = (Δy/Δx)·x_n + b (cp. y = m·x + b). The intersection between the same edge and the next scan-line y_{n+1} is then given by y_{n+1} = (Δy/Δx)·x_{n+1} + b, and also y_{n+1} = y_n + 1 = (Δy/Δx)·x_n + b + 1.

57 Scan-Line filling, cont’d. This gives us x_{n+1} = x_n + Δx/Δy, n = 0, 1, 2, …, i.e. the new value of x on the next scan-line is given by adding the inverse of the slope to the current value of x.

58 Scan-Line filling, cont’d An active list of edge records intersecting with the current scan-line is sorted on increasing x-coordinates The polygon pixels that are written in the frame buffer are those which are calculated to be on the current scan-line between pairwise x-coordinates according to the active list

59 Scan-Line filling, cont’d. When changing from one scan-line to the next, the active edge list is updated: a record with yMax < the next scan-line is removed; in the remaining records, x0 is incremented (by Δx/Δy) and rounded to the nearest integer; an edge with yMin = the next scan-line is included in the list.
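
A minimal sketch of the bookkeeping described above (the struct and field names are illustrative): each active edge keeps its upper scan-line, its current x intersection, and the inverse slope that is added when moving to the next scan-line.

/* One record per polygon edge crossed by the current scan-line. */
typedef struct {
    int   yMax;       /* scan-line at which the edge ends             */
    float x;          /* x of the intersection with current scan-line */
    float invSlope;   /* dx/dy, added to x for each new scan-line     */
} EdgeRecord;

/* Advance every active edge to the next scan-line: x(n+1) = x(n) + dx/dy. */
void advanceActiveEdges(EdgeRecord *active, int count)
{
    for (int i = 0; i < count; i++)
        active[i].x += active[i].invSlope;
}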

60 Scan-Line Filling Example

61 2) Interactive Filling Given the boundaries of a closed surface. By choosing an arbitrary interior point, the complete interior of the surface will be filled with the color of the user’s choice.

62 Interactive Filling, cont’d Definition: An area or a boundary is said to be 4- connected if from an arbitrary point all other pixels within the area or on the boundary can be reached by only moving in horizontal or vertical steps. Furthermore, if it is also allowed to take diagonal steps, the surface or the boundary is said to be 8-connected.

63 4/8-connected

64 Interactive Filling, cont’d. A recursive procedure for filling a 4-connected (or 8-connected) surface can easily be defined. Assume that the surface shall have the same color as the boundary (this can easily be modified!). The first interior position (pixel) is chosen by the user.

65 Interactive Filling, algorithm

void fill (int x, int y, int fillColor)
{
    int interiorColor;

    interiorColor = getPixel (x, y);
    if (interiorColor != fillColor) {
        setPixel (x, y, fillColor);
        fill (x + 1, y, fillColor);   /* recurse into the four  */
        fill (x, y + 1, fillColor);   /* 4-connected neighbours */
        fill (x - 1, y, fillColor);
        fill (x, y - 1, fillColor);
    }
}

66 Inside-Outside test When is a point an interior point? Odd-Even Rule Draw conceptually a line from a specified point to a distant point outside the coordinate space; count the number of polygon edges that are crossed if odd => interior if even => exterior Note! The vertices!

67 Text Representation: * bitmapped (raster) + fast - more storage - less good for styles/sizes * outlined (lines and curves) + less storage + good for styles/sizes - slower

68 Other output primitives * pattern (to fill an area) normally, an n x m rectangular color pixel array with a specified reference point * polymarker (marker symbol) a character representing a point * (polyline) a connected sequence of line segments

69 Attributes Influence the way a primitive is displayed Two main ways of introducing attributes: 1) added to primitive’s parameter list e.g. setPixel(x, y, color) 2) a list of current attributes (to be updated when changed) e.g setColor(color); setPixel(x, y);

70 Attributes for lines Lines (and curves) are normally infinitely thin type dashed, dotted, solid, dot-dashed, … pixel mask, e.g. 11100110011 width problem with the joins; line caps are used to adjust shape pen/brush shape color (intensity)

71 Lines with width Line caps Joins

72 Attributes for area fill fill style hollow, solid, pattern, hatch fill, … color pattern tiling

73 Tiling Tiling = filling surfaces (polygons) with a rectangular pattern

74 Attributes for characters/strings style font (typeface) color size (width/height) orientation path spacing alignment

75 Text attributes

76 Text attributes, cont’d

79 Color as attribute Each color has a numerical value, or intensity, based on some color model. A color model typically consists of three primary colors, in the case of displays Red, Green and Blue (RGB) For each primary color an intensity can be given, either 0-255 (integer) or 0-1 (float) yielding the final color 256 different levels of each primary color means 3x8=24 bits of information to store

80 Color representations Two different ways of storing a color value: 1) a direct color value storage/pixel 2) indirectly via a color look-up table index/pixel (typically 256 or 512 different colors in the table)

81 Color Look-up Table

82 Antialiasing Aliasing ≈ the fact that exact points are approximated by fixed pixel positions Antialiasing = a technique that compensates for this (more than one intensity level/pixel is required)

83 Antialiasing, a method A polygon will be studied (as an example). Area sampling (prefiltering): a pixel that is only partly included in the exact polygon, will be given an intensity that is proportional to the extent of the pixel area that is covered by the true polygon

84 Area sampling. P = polygon intensity, B = background intensity, f = the extent of the pixel area covered by the true polygon. Pixel intensity = P·f + B·(1 − f). Note! It is time-consuming to calculate f.
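
The blending formula maps directly to code; a minimal sketch (the function name is illustrative):

/* Blend polygon and background intensities by the coverage fraction f (0..1). */
float blendedIntensity(float P, float B, float f)
{
    return P * f + B * (1.0f - f);
}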

85 Topics Clipping Cohen-Sutherland Line Clipping Algorithm

86 Clipping. Why clipping? Not everything defined in world coordinates is inside the world window. Where does clipping take place? OpenGL does it for you, BUT, as a CS major, you should know how it is done. (figure: the place of clipping in the viewing pipeline)

87 Line Clipping int clipSegment(p1, p2, window) Input parameters: p1, p2, window p1, p2: 2D endpoints that define a line window: aligned rectangle Returned value: 1, if part of the line is inside the window 0, otherwise Output parameters: p1, p2 p1 and/or p2’s value might be changed so that both p1 and p2 are inside the window

88 Line Clipping Example. (figure/table: a window with points P1–P4 and line segments AB, BC, CD, DE, EA, listing the returned value and the clipped output endpoints for each segment)

89 Cohen-Sutherland Line Clipping Algorithm. Trivial accept and trivial reject: if both endpoints are within the window, trivially accept; if both endpoints are outside the same boundary of the window, trivially reject. Otherwise, clip against each edge in turn, throwing away the "clipped off" part of the line each time. How can we do it efficiently (elegantly)?

90 Cohen-Sutherland Line Clipping Algorithm. Examples: which lines can be trivially accepted or trivially rejected? (figure: a window and lines L1–L6)

91 Cohen-Sutherland Line Clipping Algorithm Use “region outcode”

92 Cohen-Sutherland Line Clipping Algorithm. outcode[1] ← (x < Window.left); outcode[2] ← (y > Window.top); outcode[3] ← (x > Window.right); outcode[4] ← (y < Window.bottom).
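
A minimal sketch of the outcode computation (the bit encoding and names are assumptions, not a fixed API):

/* One bit per window boundary; an outcode of 0 means "inside" (FFFF). */
enum { CODE_LEFT = 1, CODE_TOP = 2, CODE_RIGHT = 4, CODE_BOTTOM = 8 };

int computeOutcode(float x, float y,
                   float left, float top, float right, float bottom)
{
    int code = 0;
    if (x < left)   code |= CODE_LEFT;
    if (y > top)    code |= CODE_TOP;
    if (x > right)  code |= CODE_RIGHT;
    if (y < bottom) code |= CODE_BOTTOM;
    return code;
}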

93 Cohen-Sutherland Line Clipping Algorithm. Both outcodes are FFFF: trivial accept. Logical AND of the two outcodes ≠ FFFF: trivial reject. Logical AND of the two outcodes = FFFF: can’t tell; clip against each edge in turn, throwing away the "clipped off" part of the line each time.
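
With the outcode sketch above, the two trivial tests become one-liners (again illustrative):

/* All flags false in both outcodes: the segment is entirely inside. */
int trivialAccept(int code1, int code2) { return (code1 | code2) == 0; }

/* A common true flag: both endpoints are outside the same boundary. */
int trivialReject(int code1, int code2) { return (code1 & code2) != 0; }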

94 Cohen-Sutherland Line Clipping Algorithm. Examples: what are the outcodes, and which lines are trivially accepted or rejected? (figure: a window and lines L1–L6)

95 Cohen-Sutherland Line Clipping Algorithm

int clipSegment (Point2& p1, Point2& p2, RealRect W)
    do
        if (trivial accept) return 1;
        else if (trivial reject) return 0;
        else
            if (p1 is inside) swap (p1, p2);
            if (p1 is to the left) chop against the left;
            else if (p1 is to the right) chop against the right;
            else if (p1 is below) chop against the bottom;
            else if (p1 is above) chop against the top;
    while (1);

96 Cohen-Sutherland Line Clipping Algorithm A segment that requires 4 clips

97 Cohen-Sutherland Line Clipping Algorithm. How do we chop against each boundary? Given P1 (outside) and P2, what is (A.x, A.y), the point where the segment crosses the boundary?

98 Cohen-Sutherland Line Clipping Algorithm. Let dx = p1.x − p2.x, dy = p1.y − p2.y, A.x = w.r, d = p1.y − A.y, e = p1.x − w.r. Then d/dy = e/dx, so p1.y − A.y = (dy/dx)(p1.x − w.r), giving A.y = p1.y − (dy/dx)(p1.x − w.r) = p1.y + (dy/dx)(w.r − p1.x). As A is the new P1: p1.y += (dy/dx)(w.r − p1.x); p1.x = w.r. Q: Will we have a divide-by-zero problem? (No: we only chop against the right edge when p1 lies to its right while p2 does not, so dx ≠ 0.)
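
A minimal sketch of the chop against the right boundary w.r derived above (the Point2 struct and names are illustrative):

typedef struct { float x, y; } Point2;

/* Move p1 (known to lie right of the boundary) onto the line x = right,
   keeping it on the segment through p1 and p2. */
void chopAgainstRight(Point2 *p1, const Point2 *p2, float right)
{
    float dx = p1->x - p2->x;   /* nonzero: p1 is right of the boundary, p2 is not */
    float dy = p1->y - p2->y;
    p1->y += (dy / dx) * (right - p1->x);
    p1->x  = right;
}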

99 UNIT-II THREE- DIMENSIONAL CONCEPTS

100 3D VIEWING

101 3D Viewing-contents Viewing pipeline Viewing coordinates Projections View volumes and general projection transformations clipping

102 3D Viewing World coordinate system(where the objects are modeled and defined) Viewing coordinate system(viewing objects with respect to another user defined coordinate system) Scene coordinate system(a viewing coordinate system chosen to be at the centre of a scene) Object coordinate system(a coordinate system specific to an object.)

103 3D viewing Simple camera analogy is adopted

104 3D viewing-pipeline

105 3D viewing Defining the viewing coordinate system and specifying the view plane

106 3D viewing First pick up a world coordinate position called the view reference point. This is the origin of the VC system Pick up the +ve direction for the Z v axis and the orientation of the view plane by specifying the view plane normal vector ‘N’. Choose a world coordinate position and this point establishes the direction for N relative to either the world or VC origin. The view plane normal vector is the directed line segment. steps to establish a Viewing coordinate system or view reference coordinate system and the view plane

107 3D viewing steps to establish a Viewing coordinate system or view reference coordinate system and the view plane Some packages allow us to choose a look at point relative to the view reference point. Or set up a Left handed viewing system and take the N and the +ve Z v axis from the viewing origin to the look- at point.

108 3D viewing. Steps to establish a viewing (view reference) coordinate system and the view plane: we now choose the view-up vector V, which can be specified as a twist angle θ about the z_v axis. Using N and V, the third axis U can be determined. Generally, graphics packages allow users to choose the position of the view plane along the z_v axis by specifying the view-plane distance from the viewing origin. The view plane is always parallel to the x_v y_v plane.

109 3D viewing To obtain a series of views of a scene we can keep the view reference point fixed and change the direction of N or we can fix N direction and move the view reference point around the scene.

110 3D viewing. Transformation from the world to the viewing coordinate system, M_wc,vc = R_z·R_y·R_x·T: (a) invert the viewing z axis; (b) translate the viewing origin to the world origin; (c) rotate about the world x axis to bring the viewing z axis into the xz plane of the world system; (d) rotate about the world y axis to align the two z axes; (e) rotate about the world z axis to align the two viewing systems.

111 What Are Projections? Picture Plane Objects in World Space Our 3-D scenes are all specified in 3-D world coordinates To display these we need to generate a 2-D image - project objects onto a picture plane

112 Converting From 3-D To 2-D Projection is just one part of the process of converting from 3-D world coordinates to a 2-D image Clip against view volume Project onto projection plane Transform to 2-D device coordinates 3-D world coordinate output primitives 2-D device coordinates

113 Types Of Projections There are two broad classes of projection: Parallel: Typically used for architectural and engineering drawings Perspective: Realistic looking and used in computer graphics Perspective Projection Parallel Projection

114 Taxonomy Of Projections

115 Types Of Projections There are two broad classes of projection: Parallel: preserves relative proportions of objects accurate views of various sides of an object can be obtained does not give realistic representations of the appearance of a 3D objective. Perspective: produce realistic views but does not preserve relative proportions projections of distant objects are smaller than the projections of objects of the same size that are closer to the projection plane.

116 Parallel Projections Some examples of parallel projections Orthographic Projection(axonometric) Orthographic oblique

117 Parallel Projections Some examples of parallel projections Isometric projection for a cube The projection plane is aligned so that it intersects each coordinate axes in which the object is defined (principal axes) at the same distance from the origin. All the principal axes are foreshortened equally.

118 Parallel Projections. The transformation equations for an orthographic parallel projection are simple: any point (x, y, z) in viewing coordinates is transformed to projection coordinates as x_p = x, y_p = y.

119 Parallel Projections. Transformation equations for oblique projections: a point (x, y, z) projects to x_p = x + z·(L1·cos φ), y_p = y + z·(L1·sin φ), where φ is the angle of the projection direction in the view plane and L1 is the length of the projection line for a point at unit distance from the view plane.

120 Parallel Projections. An orthographic projection is obtained when L1 = 0. In fact, the effect of the projection matrix is to shear planes of constant z and project them onto the view plane. Two common oblique parallel projections are the cavalier and the cabinet projection.

121 Parallel Projections. Two common oblique parallel projections: Cavalier projection: all lines perpendicular to the projection plane are projected with no change in length. Cabinet projection: lines perpendicular to the viewing surface are projected at one-half their length, which makes cabinet projections look more realistic than cavalier ones.

122 Perspective Projections visual effect is similar to human visual system... has 'perspective foreshortening‘ size of object varies inversely with distance from the center of projection. angles only remain intact for faces parallel to projection plane.

123 Perspective Projections. The projection line from a point to the projection reference point can be written parametrically, where u varies from 0 to 1.

124 Perspective Projections

125 If the view plane is the uv plane itself, then z_vp = 0 and the projection coordinates become x_p = x·(z_prp/(z_prp − z)) = x·(1/(1 − z/z_prp)), y_p = y·(z_prp/(z_prp − z)) = y·(1/(1 − z/z_prp)). If the PRP is selected at the viewing-coordinate origin, then z_prp = 0 and the projection coordinates become x_p = x·(z_vp/z), y_p = y·(z_vp/z).
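
A minimal sketch of these projection equations in their general form, with the projection reference point at z_prp on the viewing z axis and the view plane at z_vp (names are illustrative):

/* Project a viewing-coordinate point (x, y, z) onto the view plane z = zvp,
   using a projection reference point at (0, 0, zprp). */
void perspectiveProject(float x, float y, float z,
                        float zprp, float zvp,
                        float *xp, float *yp)
{
    float s = (zprp - zvp) / (zprp - z);   /* with zvp = 0 this is zprp/(zprp - z); */
    *xp = x * s;                           /* with zprp = 0 it is zvp/z             */
    *yp = y * s;
}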

126 Perspective Projections There are a number of different kinds of perspective views The most common are one-point and two point perspectives One-point perspective projection Two-point perspective projection Coordinate description

127 Perspective Projections. Parallel lines that are parallel to the view plane are projected as parallel lines. The point at which a set of projected parallel lines appears to converge is called a vanishing point. If a set of lines is parallel to one of the three principal axes, the vanishing point is called a principal vanishing point. There are at most 3 such points, corresponding to the number of axes cut by the projection plane.

128 View volume

129 Perspective projection Parallel projection The size of the view volume depends on the size of the window but the shape depends on the type of projection to be used. Both near and far planes must be on the same side of the reference point.

130 View volume Often the view plane is positioned at the view reference point or on the front clipping plane while generating parallel projection. Perspective effects depend on the positioning of the projection reference point relative to the view plane

131 View volume - PHIGS. (figure: the view plane together with the front and back clipping planes, positioned along the direction of propagation, for several relative orderings)

132 View volume In an animation sequence, we can place the projection reference point at the viewing coordinate origin and put the view plane in front of the scene. We set the field of view by adjusting the size of the window relative to the distance of the view plane from the PRP. We move through the scene by moving the viewing reference frame and the PRP will move with the view reference point.

133 General parallel projection transformation. (figure: the parallel-projection view volume with its window, near and far planes, in (a) the original orientation and (b) after shearing, where the direction of projection becomes perpendicular to the view plane)

134 Shearing (parallel). Let V_p = (a, b, c) be the projection vector in viewing coordinates. The shear transformation can be expressed as V'_p = M_parallel·V_p, where M_parallel is a z-axis shear matrix. For an orthographic parallel projection, M_parallel becomes the identity matrix, since a1 = b1 = 0.

135 General perspective projection transformation. (figure: the perspective view volume with its window, near and far planes and center of projection, in (a) the original orientation and (b) after the transformation that regularizes the clipping (view) volume)

136 Perspective. Steps: 1. Shear the view volume so that the centerline of the frustum is perpendicular to the view plane. 2. Scale the view volume with a scaling factor that depends on 1/z. The shear operation aligns a general perspective view volume with the projection window; the transformation involves a combination of a z-axis shear and a translation: M_perspective = M_scale·M_shear.

137 Clipping View volume clipping boundaries are planes whose orientations depend on the type of projection, the projection window and the position of the projection reference point The process of finding the intersection of a line with one of the view volume boundaries is simplified if we convert the view volume before clipping to a rectangular parallelepiped. i.e we first perform the projection transformation which converts coordinate values in the view volume to orthographic parallel coordinates. Oblique projection view volumes are converted to a rectangular parallelepiped by the shearing operation and perspective view volumes are converted with a combination of shear and scale transformations.

138 Clipping - normalized view volumes. The normalized view volume is the region defined by the planes x = 0, x = 1, y = 0, y = 1, z = 0, z = 1.

139 Clipping-normalized view volumes There are several advantages to clipping against the unit cube 1.The normalized view volume provides a standard shape for representing any sized view volume. 2.Clipping procedures are simplified and standardized with unit clipping planes or the viewport planes. 3.Depth cueing and visible-surface determination are simplified, since Z-axis always points towards the viewer. Unit cube 3D viewport Mapping positions within a rectangular view volume to a three-dimensional rectangular viewport is accomplished with a combination of scaling and translation.

140 Clipping - normalized view volumes. Mapping positions within a rectangular view volume (unit cube) to a three-dimensional rectangular viewport is accomplished with a combination of scaling and translation, given by the matrix

    Dx  0   0   Kx
    0   Dy  0   Ky
    0   0   Dz  Kz
    0   0   0   1

where Dx = (xv_max − xv_min)/(xw_max − xw_min) and Kx = xv_min − xw_min·Dx, Dy = (yv_max − yv_min)/(yw_max − yw_min) and Ky = yv_min − yw_min·Dy, Dz = (zv_max − zv_min)/(zw_max − zw_min) and Kz = zv_min − zw_min·Dz.
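
A minimal sketch for one coordinate of this mapping (the function name is illustrative); the y and z components follow the same pattern:

/* Map x from the window range [xwMin, xwMax] to the viewport range [xvMin, xvMax]. */
float mapX(float x, float xwMin, float xwMax, float xvMin, float xvMax)
{
    float Dx = (xvMax - xvMin) / (xwMax - xwMin);   /* scale  */
    float Kx = xvMin - xwMin * Dx;                  /* offset */
    return Dx * x + Kx;
}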

141 Viewport clipping. For a line endpoint at position (x, y, z), we assign the bit positions in the region code from right to left as: Bit 1 = 1 if x < xv_min (left); Bit 2 = 1 if x > xv_max (right); Bit 3 = 1 if y < yv_min (below); Bit 4 = 1 if y > yv_max (above); Bit 5 = 1 if z < zv_min (front); Bit 6 = 1 if z > zv_max (back).

142 Viewport clipping. For a line segment with endpoints P1(x1, y1, z1) and P2(x2, y2, z2), the parametric equations are x = x1 + (x2 − x1)u, y = y1 + (y2 − y1)u, z = z1 + (z2 − z1)u, with 0 ≤ u ≤ 1.

143 Hardware implementations. World-coordinate object descriptions pass through transformation operations, clipping operations, and conversion to device coordinates.

144 3D Transformations. 2D coordinates (x, y) are extended to 3D coordinates (x, y, z); a right-handed coordinate system is used.

145 3D Transformations (cont.). 1. Translation in 3D is a simple extension from that in 2D: x′ = x + dx, y′ = y + dy, z′ = z + dz, i.e. a 4×4 homogeneous matrix with dx, dy, dz in the fourth column. 2. Scaling is similarly extended: x′ = sx·x, y′ = sy·y, z′ = sz·z, i.e. a 4×4 matrix with sx, sy, sz on the diagonal.

146 3D Transformations (cont.). 3. The 2D rotation introduced previously is just a 3D rotation about the z axis: x′ = x·cos θ − y·sin θ, y′ = x·sin θ + y·cos θ, z′ = z. Similarly, for rotation about the x axis: y′ = y·cos θ − z·sin θ, z′ = y·sin θ + z·cos θ, x′ = x; and about the y axis: z′ = z·cos θ − x·sin θ, x′ = z·sin θ + x·cos θ, y′ = y.

147 Composition of 3D Rotations In 3D transformations, the order of a sequence of rotations matters!

148 More Rotations We have shown how to rotate about one of the principle axes, i.e. the axes constituting the coordinate system. There are more we can do, for example, to perform a rotation about an arbitrary axis: X Y Z P 2 (x 2, y 2, z 2 ) P 1 (x 1, y 1, z 1 ) We want to rotate an object about an axis in space passing through (x 1, y 1, z 1 ) and (x 2, y 2, z 2 ).

149 Rotating About An Arbitrary Axis. 1) Translate the object by (−x1, −y1, −z1): T(−x1, −y1, −z1). 2) Rotate the axis about x so that it lies in the xz plane: R_x(α). 3) Rotate the axis about y so that it lies on z: R_y(β). 4) Rotate the object about z by θ: R_z(θ).

150 Rotating About An Arbitrary Axis (cont.). After all the effort, don’t forget to undo the rotations and the translation! Therefore, the composite matrix that performs the required task of rotating an object about an arbitrary axis is M = T(x1, y1, z1)·R_x(−α)·R_y(−β)·R_z(θ)·R_y(β)·R_x(α)·T(−x1, −y1, −z1). θ is given, but what about α and β? The angle between the z axis and the projection of P1P2 on the yz plane is α.
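
If OpenGL is available, the whole composite can be delegated to glRotatef, which already rotates about an arbitrary direction through the origin; a minimal sketch (illustrative, not the slides' matrix derivation):

#include <GL/glut.h>

/* Rotate subsequent geometry by angleDeg about the axis through
   (x1, y1, z1) and (x2, y2, z2). */
void rotateAboutAxis(GLfloat angleDeg,
                     GLfloat x1, GLfloat y1, GLfloat z1,
                     GLfloat x2, GLfloat y2, GLfloat z2)
{
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(x1, y1, z1);                         /* undo the translation           */
    glRotatef(angleDeg, x2 - x1, y2 - y1, z2 - z1);   /* rotate about the axis direction */
    glTranslatef(-x1, -y1, -z1);                      /* bring P1 to the origin          */
}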

151 Composite 3D Transformations

152 Example of Composite 3D Transformations Try to transform the line segments P 1 P 2 and P 1 P 3 from their start position in (a) to their ending position in (b). The first solution is to compose the primitive transformations T, R x, R y, and R z. This approach is easier to illustrate and does offer help on building an understanding. The 2 nd, more abstract approach is to use the properties of special orthogonal matrices. y x z y x z P1P1 P2P2 P3P3 P1P1 P2P2 P3P3 (a) (b)

153 Composition of 3D Transformations. Breaking a difficult problem into simpler sub-problems: 1. Translate P1 to the origin. 2. Rotate about the y axis such that P1P2 lies in the (y, z) plane. 3. Rotate about the x axis such that P1P2 lies on the z axis. 4. Rotate about the z axis such that P1P3 lies in the (y, z) plane.

154 Composition of 3D Transformations. (figure: steps 1 and 2 of the composition, with the rotation angle about y determined from the projected length D1)

155 Composition of 3D Transformations. (figure: steps 3 and 4, with the rotation angles determined from the projected lengths D2 and D3.) Finally, we have the composite matrix, the product of the four steps.

156 Vector Rotation. Rotate the vector u: the unit vector along the x axis is [1, 0]^T. After rotating about the origin by θ, the resulting vector is [cos θ, sin θ]^T.

157 Vector Rotation (cont.). Similarly, the unit vector along the y axis is [0, 1]^T; after rotating about the origin by θ, the resulting vector is [−sin θ, cos θ]^T. The above results state that if we rotate a vector originally pointing in the direction of the x (or y) axis toward a new direction u (or v), the rotation matrix R can simply be written as [u | v], without the need for any explicit knowledge of θ, the actual rotation angle.

158 Vector Rotation (cont.). The reversed operation of the above rotation is to rotate a vector that is not originally pointing in the x (or y) direction onto the positive x or y axis. The rotation matrix in this case is R(−θ) = R^(-1)(θ) = R^T(θ), where T denotes the transpose.

159 Example. What is the rotation matrix that rotates the positive x axis to the direction of u, where u is the normalised direction of (2, 3), i.e. u = [2, 3]^T/√13? From the previous slide, R = [u | v], with first column [2, 3]^T/√13 and second column [−3, 2]^T/√13. If, on the other hand, one wants the vector u to be rotated to the direction of the positive x axis, the rotation matrix should be the transpose of this matrix.

160 Rotation Matrices. A rotation matrix is orthonormal: each row is a unit vector, and the rows are mutually perpendicular (their dot product is zero). The row vectors are rotated by R(θ) to lie on the positive x and y axes, respectively, while the two column vectors are those into which vectors along the positive x and y axes are rotated. For orthonormal matrices we have R^(-1) = R^T.

161 Cross Product. The cross product (vector product) of two vectors v1 and v2 is another vector: v1 × v2 = (v1y·v2z − v1z·v2y, v1z·v2x − v1x·v2z, v1x·v2y − v1y·v2x). The cross product of two vectors is orthogonal to both, and the right-hand rule dictates its direction.
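
A minimal sketch of the cross product in C (type and function names are illustrative):

typedef struct { float x, y, z; } Vec3;

/* v1 x v2: a vector orthogonal to both operands (right-hand rule). */
Vec3 cross(Vec3 a, Vec3 b)
{
    Vec3 r;
    r.x = a.y * b.z - a.z * b.y;
    r.y = a.z * b.x - a.x * b.z;
    r.z = a.x * b.y - a.y * b.x;
    return r;
}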

162 Extension to 3D Cases. The above examples can be extended to 3D. In 2D we only need to know u, which will be rotated to the direction of the positive x axis. In 3D, however, we need to know more than one vector: in the figure, two vectors u1 and u2 are given, with v = u1 × u2. If after rotation u1 is aligned to the positive z axis, that only gives us the third column of the rotation matrix. What about the other two columns?

163 3D Rotation. In many 3D cases, only one vector will be aligned to one of the coordinate axes, and the others are often not explicitly given. Consider the example: vector P1P2 will be rotated to the positive z direction, so the corresponding column of the rotation matrix is the normalised P1P2. But what about the other two columns? After all, P1P3 is not perpendicular to P1P2. We can find a second direction by taking the cross product of P1P2 and P1P3: since P1P2 × P1P3 is perpendicular to both P1P2 and P1P3, it will be aligned with the direction of the positive x axis.

164 3D Rotation (cont.). The third direction is decided by the cross product of the other two directions, which is P1P2 × (P1P2 × P1P3). The rotation matrix is therefore built from these three mutually perpendicular, normalised directions u, v, and w.

165 Yaw, Pitch, and Roll. Imagine three lines running through an airplane and intersecting at right angles at the airplane’s centre of gravity. Roll: rotation around the front-to-back axis. Pitch: rotation around the side-to-side axis. Yaw: rotation around the vertical axis.

166 An Example of the Airplane Consider the following example. An airplane is oriented such that its nose is pointing in the positive z direction, its right wing is pointing in the positive x direction, its cockpit is pointing in the positive y direction. We want to transform the airplane so that it heads in the direction given by the vector DOF (direction of flight), is centre at P, and is not banked.

167 Solution to the Airplane Example. First we rotate the positive z_p direction into the direction of DOF, which gives us the third column of the rotation matrix: DOF / |DOF|. The x_p axis must be transformed into a horizontal vector perpendicular to DOF, that is, in the direction of y × DOF. The y_p direction is then given by z_p × x_p = DOF × (y × DOF).

168 Inverses of (2D and) 3D Transformations. 1. Translation: T^(-1)(dx, dy, dz) = T(−dx, −dy, −dz). 2. Scaling: S^(-1)(sx, sy, sz) = S(1/sx, 1/sy, 1/sz). 3. Rotation: R^(-1)(θ) = R(−θ) = R^T(θ). 4. Shear: the inverse of a shear is the shear with the negated shear factor.

169 UNIT-III GRAPHICS PROGRAMMING

170 Color Models

171 Color models,cont’d Different meanings of color: painting wavelength of visible light human eye perception

172 Physical properties of light. Visible light is part of the electromagnetic radiation (380–750 nm). 1 nm (nanometer) = 10^-9 m (= 10^-7 cm); 1 Å (angstrom) = 0.1 nm. Radiation can be expressed in wavelength (λ) or frequency (f), with c = λ·f, where c = 3·10^10 cm/sec.

173 Physical properties of light White light consists of a spectrum of all visible colors

174 Physical properties of light All kinds of light can be described by the energy of each wavelength The distribution showing the relation between energy and wavelength (or frequency) is called energy spectrum

175 Physical properties of light. This distribution may indicate: 1) a dominant wavelength (or frequency), which is the color of the light (hue), cp. E_D; 2) brightness (luminance), the intensity of the light (value), cp. the area A; 3) purity (saturation), cp. E_D − E_W.

176 Physical properties of light Energy spectrum for a light source with a dominant frequency near the red color

177 Material properties The color of an object depends on the so called spectral curves for transparency and reflection of the material The spectral curves describe how light of different wavelengths are refracted and reflected (cp. the material coefficients introduced in the illumination models)

178 Properties of reflected light Incident white light upon an object is for some wavelengths absorbed, for others reflected E.g. if all light is absorbed => black If all wavelengths but one are absorbed => the one color is observed as the color of the object by the reflection

179 Color definitions Complementary colors - two colors combine to produce white light Primary colors - (two or) three colors used for describing other colors Two main principles for mixing colors: additive mixing subtractive mixing

180 Additive mixing pure colors are put close to each other => a mix on the retina of the human eye (cp. RGB) overlapping gives yellow, cyan, magenta and white the typical technique on color displays

181 Subtractive mixing color pigments are mixed directly in some liquid, e.g. ink each color in the mixture absorbs its specific part of the incident light the color of the mixture is determined by subtraction of colored light, e.g. yellow absorbs blue => only red and green, i.e. yellow, will reach the eye (yellow because of addition)

182 Subtractive mixing,cont’d primary colors: cyan, magenta and yellow, i.e. CMY the typical technique in printers/plotters connection between additive and subtractive primary colors (cp. the color models RGB and CMY)

183 Additive/subtractive mixing

184 Human color seeing. The retina of the human eye consists of cones (7–8 million) and rods (100–120 million), which are connected by nerve fibres to the brain.

185 Human color seeing, cont’d. Theory: the cones contain various light-absorbing materials. The light sensitivity of the cones and rods varies with the wavelength, and between persons. The "sum" of the energy spectrum of the light, the reflection spectrum of the object, and the response spectrum of the eye decides the color perception for a person.

186 Overview of color models. The human eye can perceive about 382,000(!) different colors. Some kind of classification system is necessary; all of them use three coordinates as a basis: 1) CIE standard; 2) RGB color model; 3) CMY color model (also CMYK); 4) HSV color model; 5) HLS color model.

187 CIE standard Commission Internationale de L’Eclairage (1931) not a computer model each color = a weighted sum of three imaginary primary colors

188 RGB model all colors are generated from the three primaries various colors are obtained by changing the amount of each primary additive mixing (r,g,b), 0≤r,g,b≤1

189 RGB model,cont’d the RGB cube 1 bit/primary => 8 colors, 8 bits/primary => 16M colors

190 CMY model cyan, magenta and yellow are comple- mentary colors of red,green and blue, respectively subtractive mixing the typical printer technique

191 CMY model, cont’d. Almost the same cube as with RGB, only black and white trade places (the CMY origin is white). The various colors are obtained by reducing light: e.g. if red is absorbed, green and blue are added, i.e. cyan.

192 RGB vs CMY. If the intensities are represented as 0 ≤ r, g, b ≤ 1 and 0 ≤ c, m, y ≤ 1 (coordinates 0–255 can also be used), then the relation between RGB and CMY can be described as [c, m, y] = [1, 1, 1] − [r, g, b], i.e. c = 1 − r, m = 1 − g, y = 1 − b.
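
The relation is trivial to code; a minimal sketch (function names are illustrative), with all components in [0, 1]:

/* RGB -> CMY:  c = 1 - r,  m = 1 - g,  y = 1 - b. */
void rgbToCmy(float r, float g, float b, float *c, float *m, float *y)
{
    *c = 1.0f - r;
    *m = 1.0f - g;
    *y = 1.0f - b;
}

/* CMY -> RGB is the same subtraction the other way round. */
void cmyToRgb(float c, float m, float y, float *r, float *g, float *b)
{
    *r = 1.0f - c;
    *g = 1.0f - m;
    *b = 1.0f - y;
}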

193 CMYK model For printing and graphics art industry, CMY is not enough; a fourth primary, K which stands for black, is added. Conversions between RGB and CMYK are possible, although they require some extra processing.

194 HSV model HSV stands for Hue-Saturation-Value described by a hexcone derived from the RGB cube

195 HSV model, cont’d. Hue (0–360°): "the color", cp. the dominant wavelength (128 distinguishable levels). Saturation (0–1): "the amount of white" (130 levels). Value (0–1): "the amount of black" (23 levels).

196 HSV model,cont’d The numbers given after each ”primary” are estimates of how many levels a human being is capable to distinguish between, which (in theory) gives the total number of color nuances: 128*130*23 = 382720 In Computer Graphics, usually enough with: 128*8*15 = 16384

197 HLS model Another model similar to HSV L stands for Lightness

198 Color models Some more facts about colors: The distance between two colors in the color cube is not a measure of how far apart the colors are perceptionally! Humans are more sensitive to shifts in blue (and green?) than, for instance, in yellow

199 COMPUTER ANIMATIONS

200 Computer Animations Any time sequence of visual changes in a scene. Size, color, transparency, shape, surface texture, rotation, translation, scaling, lighting effects, morphing, changing camera parameters(position, orientation, and focal length), particle animation. Design of animation sequences: Storyboard layout Object definitions Key-frame specifications generation of in-between frames

201 Computer Animations. Frame-by-frame animation: each frame is separately generated. Object definition: objects are defined in terms of basic shapes, such as polygons or splines; in addition, the associated movements for each object are specified along with the shape. Storyboard: an outline of the action. Keyframe: a detailed drawing of the scene at a particular instant.

202 Computer Animations. In-betweens: intermediate frames (3 to 5 in-betweens for each pair of key frames). Motions can be generated using 2D or 3D transformations. Object parameters are stored in a database, and rendering algorithms are applied at the end. Raster animations use raster operations; e.g. we can animate objects along 2D motion paths using color-table transformations: we predefine the object at successive positions along the motion path and set the successive blocks of pixel values to color-table entries.

203 Computer Animations Computer animation languages: A typical task in animation specification is Scene description – includes position of objects and light sources, defining the photometric parameters and setting the camera parameters. Action specification – this involves layout of motion paths for the objects and camera. We need viewing and perspective transformations, geometric transformations, visible surface detection, surface rendering, kinematics etc., Keyframe systems – designed simply to generate the in- betweens from the user specified key frames.

204 Computer Animations. Computer animation languages: typical tasks in animation specification include the following. Parameterized systems: allow object-motion characteristics to be specified as part of the object definitions; the adjustable parameters control such object characteristics as degrees of freedom, motion limitations, and allowable shape changes. Scripting systems: allow object specifications and animation sequences to be defined with a user-input script; from the script, a library of various objects and motions can be constructed.

205 Computer Animations. Interpolation techniques: linear.

206 Computer Animations Interpolation techniques Non-linear

207 Key frame systems. Morphing: transformation of object shapes from one form to another is called morphing. Given two key frames for an object transformation, we first adjust the object specification in one of the frames so that the number of polygon edges (or vertices) is the same for the two frames. Let L_k and L_k+1 denote the number of line segments in two consecutive key frames K and K+1, and define L_max = max(L_k, L_k+1), L_min = min(L_k, L_k+1), N_e = L_max mod L_min, N_s = int(L_max / L_min). (figure: key frames K and K+1 with edges 1, 2 and 1′, 2′, 3′)

208 Computer Animations. Steps: 1. Divide N_e edges of the key frame with fewer edges (keyframe min) into N_s + 1 sections. 2. Divide the remaining lines of keyframe min into N_s sections.

209 Key frame systems. Morphing: if we instead equalize the vertex count, a similar analysis follows. Let V_k and V_k+1 denote the number of vertices in the two key frames K and K+1, and define V_max = max(V_k, V_k+1), V_min = min(V_k, V_k+1), N_ls = (V_max − 1) mod (V_min − 1), N_p = int((V_max − 1) / (V_min − 1)). Steps: 1. Add N_p points to N_ls line sections of keyframe min. 2. Add N_p − 1 points to the remaining edges of keyframe min.

210 Simulating accelerations. Curve-fitting techniques are often used to specify the animation paths between key frames. To simulate accelerations we can adjust the time spacing for the in-betweens. For constant speed we use equal-interval time spacing: suppose we want n in-betweens for key frames at times t1 and t2. The time interval between the key frames is divided into n + 1 subintervals, yielding an in-between spacing of Δt = (t2 − t1)/(n + 1), and the time for the j-th in-between is tB_j = t1 + j·Δt, for j = 1, 2, …, n.

211 Simulating accelerations. To model increases or decreases in speed we use trigonometric functions. To model increasing speed, we want the time spacing between frames to increase so that greater changes in position occur as the object moves faster. We can obtain an increasing interval size with the function 1 − cos θ, 0 < θ < π/2. For n in-betweens, the time for the j-th in-between is tB_j = t1 + Δt·(1 − cos(jπ/2(n + 1))), j = 1, 2, …, n, where Δt is the time difference between the two key frames. For example, j = 1 gives tB_1 = t1 + Δt·(1 − cos(π/2(n + 1))), and j = 2 gives tB_2 = t1 + Δt·(1 − cos(2π/2(n + 1))).

212 Simulating decelerations. To model decreasing speed, we want the time spacing between frames to decrease. We can obtain a decreasing interval size with the function sin θ, 0 < θ < π/2. For n in-betweens, the time for the j-th in-between is tB_j = t1 + Δt·sin(jπ/2(n + 1)), j = 1, 2, …, n.

213 Simulating both accelerations and decelerations. A combination of increasing and decreasing speed can be modeled with the function ½(1 − cos θ), 0 < θ < π. The time for the j-th in-between is then tB_j = t1 + Δt·(1 − cos(jπ/(n + 1)))/2, j = 1, 2, …, n.
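
A minimal sketch that evaluates the in-between times for the four spacing functions above (the function name and the mode flag are assumptions); t1 and t2 are the key-frame times, n the number of in-betweens, j = 1..n:

#include <math.h>

#define ANIM_PI 3.14159265358979f

float inbetweenTime(float t1, float t2, int n, int j, char mode)
{
    float dt = t2 - t1;                          /* key-frame interval */
    float a  = ANIM_PI * j / (2.0f * (n + 1));
    switch (mode) {
        case 'a': return t1 + dt * (1.0f - cosf(a));                  /* speed up  */
        case 'd': return t1 + dt * sinf(a);                           /* slow down */
        case 'b': return t1 + dt * 0.5f
                            * (1.0f - cosf(ANIM_PI * j / (n + 1)));   /* both      */
        default:  return t1 + dt * j / (n + 1);                       /* constant  */
    }
}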

214 Motion specifications. Direct motion specification: here we explicitly give the rotation angles and translation vectors, and the geometric transformation matrices are applied to transform coordinate positions. A bouncing ball, for instance, can be approximated by a damped sine curve y(x) = A·|sin(ωx + θ0)|·e^(−kx), where A is the initial amplitude, ω is the angular frequency, θ0 is the phase angle, and k is the damping constant.
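
The damped sine path can be evaluated directly; a minimal sketch (the function name is illustrative):

#include <math.h>

/* y(x) = A * |sin(w*x + theta0)| * e^(-k*x) */
float bounceY(float x, float A, float w, float theta0, float k)
{
    return A * fabsf(sinf(w * x + theta0)) * expf(-k * x);
}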

215 Motion specifications. Goal-directed systems: we specify the motions that are to take place in general terms that abstractly describe the actions; the system determines the specific motion parameters, given the goals of the animation.

216 Motion specifications. Kinematics: a kinematic specification of a motion can also be given by simply describing the motion path, which is often done using splines. In inverse kinematics we specify the initial and final positions of objects at specified times, and the motion parameters are computed by the system.

217 Motion specifications. Dynamics: specification of the forces that produce the velocities and accelerations. Descriptions of object behavior under the influence of forces are generally referred to as physically based modeling (rigid-body systems and non-rigid systems such as cloth or plastic); example forces are magnetic, gravitational, and frictional. We can also use inverse dynamics to obtain the forces, given the initial and final positions of objects and the type of motion.

218 Ideally suited for: Large volumes of objects – wind effects, liquids, Cloth animation/draping Underlying mechanisms are usually: Particle systems Mass-spring systems Physics based animations

219 Computer Animations Physics based animations

220 Computer Animations Physics based animations

221 Computer Animations Some more animation techniques…………. Anticipation and Staging

222 Computer Animations Some more animation techniques…………. Secondary Motion

223 Computer Animations Some more animation techniques…………. Motion Capture

224 Computer Graphics using OpenGL Initial Steps in Drawing Figures

225 Using Open-GL Files:.h,.lib,.dll The entire folder gl is placed in the Include directory of Visual C++ The individual lib files are placed in the lib directory of Visual C++ The individual dll files are placed in C:\Windows\System32

226 Using Open-GL (2) Includes: (if used) Include in order given. If you use capital letters for any file or directory, use them in your include statement also.

227 Using Open-GL (3) Changing project settings: Visual C++ 6.0 Project menu, Settings entry In Object/library modules move to the end of the line and add glui32.lib glut32.lib glu32.lib opengl32.lib (separated by spaces from last entry and each other) In Project Options, scroll down to end of box and add same set of.lib files Close Project menu and save workspace

228 Using Open-GL (3) Changing Project Settings: Visual C++.NET 2003 Project, Properties, Linker, Command Line In the white space at the bottom, add glui32.lib glut32.lib glu32.lib opengl32.lib Close Project menu and save your solution

229 Getting Started Making Pictures Graphics display: Entire screen (a); windows system (b); [both have usual screen coordinates, with y-axis down]; windows system [inverted coordinates] (c)

230 Basic System Drawing Commands setPixel(x, y, color) Pixel at location (x, y) gets color specified by color Other names: putPixel(), SetPixel(), or drawPoint() line(x1, y1, x2, y2) Draws a line between (x1, y1) and (x2, y2) Other names: drawLine() or Line().

231 Alternative Basic Drawing current position (cp), specifies where the system is drawing now. moveTo(x,y) moves the pen invisibly to the location (x, y) and then updates the current position to this position. lineTo(x,y) draws a straight line from the current position to (x, y) and then updates the cp to (x, y).

232 Example: A Square moveTo(4, 4); //move to starting corner lineTo(-2, 4); lineTo(-2, -2); lineTo(4, -2); lineTo(4, 4); //close the square

233 Device Independent Graphics and OpenGL Allows same graphics program to be run on many different machine types with nearly identical output..dll files must be with program OpenGL is an API: it controls whatever hardware you are using, and you use its functions instead of controlling the hardware directly. OpenGL is open source (free).

234 Event-driven Programs Respond to events, such as mouse click or move, key press, or window reshape or resize. System manages event queue. Programmer provides “call-back” functions to handle each event. Call-back functions must be registered with OpenGL to let it know which function handles which event. Registering function does *not* call it!

235 Skeleton Event-driven Program

// include OpenGL libraries
void main()
{
    glutDisplayFunc(myDisplay);       // register the redraw function
    glutReshapeFunc(myReshape);       // register the reshape function
    glutMouseFunc(myMouse);           // register the mouse action function
    glutMotionFunc(myMotionFunc);     // register the mouse motion function
    glutKeyboardFunc(myKeyboard);     // register the keyboard action function
    …perhaps initialize other things…
    glutMainLoop();                   // enter the unending main loop
}
…all of the callback functions are defined here

236 Callback Functions glutDisplayFunc(myDisplay); (Re)draws screen when window opened or another window moved off it. glutReshapeFunc(myReshape); Reports new window width and height for reshaped window. (Moving a window does not produce a reshape event.) glutIdleFunc(myIdle); when nothing else is going on, simply redraws display using void myIdle() {glutPostRedisplay();}

237 Callback Functions (2) glutMouseFunc(myMouse); Handles mouse button presses. Knows mouse location and nature of button (up or down and which button). glutMotionFunc(myMotionFunc); Handles case when the mouse is moved with one or more mouse buttons pressed.

238 Callback Functions (3) glutPassiveMotionFunc(myPassiveMotionFunc) Handles the case where the mouse moves over the window with no buttons pressed. glutKeyboardFunc(myKeyboardFunc); Handles key presses. Knows which key was pressed and the mouse location. glutMainLoop() Runs forever waiting for an event. When one occurs, it is handled by the appropriate callback function.

239 Libraries to Include GL, whose commands begin with gl; GLUT, the GL Utility Toolkit, which opens windows, develops menus, and manages events; GLU, the GL Utility Library, which provides high-level routines to handle complex mathematical and drawing operations; GLUI, the User Interface Library, which is completely integrated with the GLUT library (the GLUT functions must be available for GLUI to operate properly) and provides sophisticated controls and menus to OpenGL applications.

240 A GL Program to Open a Window
// appropriate #includes go here – see Appendix 1
void main(int argc, char** argv)
{
   glutInit(&argc, argv);                           // initialize the toolkit
   glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);     // set the display mode
   glutInitWindowSize(640, 480);                    // set window size
   glutInitWindowPosition(100, 150);                // set window upper left corner position on screen
   glutCreateWindow("my first attempt");            // open the screen window (Title: my first attempt)
   // continued next slide

241 Part 2 of Window Program
   // register the callback functions
   glutDisplayFunc(myDisplay);
   glutReshapeFunc(myReshape);
   glutMouseFunc(myMouse);
   glutKeyboardFunc(myKeyboard);
   myInit();                       // additional initializations as necessary
   glutMainLoop();                 // go into a perpetual loop
}
Terminate the program by closing the window(s) it is using.

242 What the Code Does glutInit (&argc, argv) initializes Open-GL Toolkit glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB) allocates a single display buffer and uses colors to draw glutInitWindowSize (640, 480) makes the window 640 pixels wide by 480 pixels high

243 What the Code Does (2) glutInitWindowPosition (100, 150) puts upper left window corner at position 100 pixels from left edge and 150 pixels down from top edge glutCreateWindow (“my first attempt”) opens and displays the window with the title “my first attempt” Remaining functions register callbacks

244 What the Code Does (3) The call-back functions you write are registered, and then the program enters an endless loop, waiting for events to occur. When an event occurs, GL calls the relevant handler function.

245 Effect of Program

246 Drawing Dots in OpenGL We start with a coordinate system based on the window just created: 0 to 639 in x and 0 to 479 in y. OpenGL drawing is based on vertices (corners). To draw an object in OpenGL, you pass it a list of vertices. The list starts with glBegin(arg); and ends with glEnd(); The argument arg determines what is drawn. glEnd() sends the drawing data down the OpenGL pipeline.

247 Example glBegin (GL_POINTS); glVertex2i (100, 50); glVertex2i (100, 130); glVertex2i (150, 130); glEnd(); GL_POINTS is a constant built into OpenGL (others include GL_LINES, GL_POLYGON, …). Complete code to draw the 3 dots is in Fig. 2.11.

248 Display for Dots

249 What Code Does: GL Function Construction

250 Example of Construction glVertex2i (…) takes integer values glVertex2d (…) takes floating point values OpenGL has its own data types to make graphics device-independent Use these types instead of standard ones

251 Open-GL Data Types
suffix   data type                C/C++ type                      OpenGL type name
b        8-bit integer            signed char                     GLbyte
s        16-bit integer           short                           GLshort
i        32-bit integer           int or long                     GLint, GLsizei
f        32-bit float             float                           GLfloat, GLclampf
d        64-bit float             double                          GLdouble, GLclampd
ub       8-bit unsigned number    unsigned char                   GLubyte, GLboolean
us       16-bit unsigned number   unsigned short                  GLushort
ui       32-bit unsigned number   unsigned int or unsigned long   GLuint, GLenum, GLbitfield

252 Setting Drawing Colors in GL
glColor3f(red, green, blue);   // set drawing color
glColor3f(1.0, 0.0, 0.0);      // red
glColor3f(0.0, 1.0, 0.0);      // green
glColor3f(0.0, 0.0, 1.0);      // blue
glColor3f(0.0, 0.0, 0.0);      // black
glColor3f(1.0, 1.0, 1.0);      // bright white
glColor3f(1.0, 1.0, 0.0);      // bright yellow
glColor3f(1.0, 0.0, 1.0);      // magenta

253 Setting Background Color in GL glClearColor (red, green, blue, alpha); Sets background color. Alpha is degree of transparency; use 0.0 for now. glClear(GL_COLOR_BUFFER_BIT); clears window to background color

254 Setting Up a Coordinate System
void myInit(void)
{
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   gluOrtho2D(0, 640.0, 0, 480.0);
}
// sets up a world-coordinate system for the window running from (0, 0) to (640, 480)

255 Drawing Lines
glBegin (GL_LINES);        // draws one line
   glVertex2i (40, 100);   // between 2 vertices
   glVertex2i (202, 96);
glEnd ();
glFlush();
If more than two vertices are specified between glBegin(GL_LINES) and glEnd(), they are taken in pairs, and a separate line is drawn between each pair.

256 Line Attributes Color, thickness, stippling. glColor3f() sets color. glLineWidth(4.0) sets thickness. The default thickness is 1.0. a). thin lines b). thick lines c). stippled lines
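A minimal sketch of how these attributes might be set with standard OpenGL 1.x calls (the function and its endpoints are illustrative, not from the book's figures): glLineWidth() sets the thickness, and glLineStipple() with GL_LINE_STIPPLE enabled produces stippled (dashed) lines.

#include <GL/glut.h>   // header path may differ on your system

void drawStyledLines(void)
{
   glLineWidth(4.0);                  // thick solid line
   glBegin(GL_LINES);
      glVertex2i(20, 20);
      glVertex2i(200, 20);
   glEnd();

   glEnable(GL_LINE_STIPPLE);         // turn on stippling
   glLineStipple(1, 0x00FF);          // repeat factor 1; pattern: 8 bits on, 8 bits off
   glLineWidth(1.0);
   glBegin(GL_LINES);                 // stippled (dashed) line
      glVertex2i(20, 60);
      glVertex2i(200, 60);
   glEnd();
   glDisable(GL_LINE_STIPPLE);
   glFlush();
}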

257 Setting Line Parameters Polylines and Polygons: lists of vertices. Polygons are closed (right); polylines need not be closed (left).

258 Polyline/Polygon Drawing
glBegin (GL_LINE_STRIP);   // use GL_LINE_LOOP to close the polyline (make it a polygon)
   // glVertex2i() calls go here
glEnd ();
glFlush ();
A GL_LINE_LOOP cannot be filled with color.

259 Examples Drawing line graphs: connect each pair of (x, f(x)) values Must scale and shift
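A minimal sketch of the scale-and-shift step, assuming the window coordinate system from the earlier myInit() (gluOrtho2D(0, 640, 0, 480)); the function f, the domain, and the range are illustrative choices, not values from the book.

#include <GL/glut.h>   // header path may differ on your system
#include <math.h>

double f(double x) { return sin(x); }             // example function to plot

void plotFunction(void)
{
   double xLow = 0.0,  xHigh = 10.0;              // domain to plot
   double yLow = -1.0, yHigh = 1.0;               // range of f over that domain
   int    nPoints = 200;

   glBegin(GL_LINE_STRIP);                        // connect successive (x, f(x)) points
   for (int i = 0; i <= nPoints; i++) {
      double x  = xLow + (xHigh - xLow) * i / nPoints;
      double sx = 640.0 * (x - xLow)    / (xHigh - xLow);   // scale and shift x to the window
      double sy = 480.0 * (f(x) - yLow) / (yHigh - yLow);   // scale and shift y to the window
      glVertex2d(sx, sy);
   }
   glEnd();
   glFlush();
}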

260 Examples (2) Drawing a polyline from vertices in a file. File format: number of polylines; number of vertices in the first polyline; coordinates of the vertices, x y, one pair per line; repeat the last two items for each remaining polyline. The file for the dinosaur is available from the Web site. Code to draw polylines/polygons is in Fig. 2.24.
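The book's version is in Fig. 2.24; what follows is only a rough sketch of reading the file format described above and drawing each polyline (error handling omitted, function name hypothetical).

#include <GL/glut.h>   // header path may differ on your system
#include <stdio.h>

void drawPolyLineFile(const char* fileName)
{
   FILE* file = fopen(fileName, "r");
   int   numPolys, numVerts;
   float x, y;

   if (file == NULL) return;
   fscanf(file, "%d", &numPolys);                 // number of polylines
   for (int p = 0; p < numPolys; p++) {
      fscanf(file, "%d", &numVerts);              // vertices in this polyline
      glBegin(GL_LINE_STRIP);
      for (int v = 0; v < numVerts; v++) {
         fscanf(file, "%f %f", &x, &y);           // one x y pair per line
         glVertex2f(x, y);
      }
      glEnd();
   }
   fclose(file);
   glFlush();
}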

261 Examples (3)

262 Examples (4) Parameterizing Drawings: allows making them different sizes and aspect ratios Code for a parameterized house is in Fig. 2.27.

263 Examples (5)

264 Examples (6) Polyline Drawing Code to set up an array of vertices is in Fig. 2.29. Code to draw the polyline is in Fig. 2.30.

265 Relative Line Drawing Requires keeping track of the current position (CP) on the screen. moveTo(x, y); sets CP to (x, y). lineTo(x, y); draws a line from CP to (x, y), and then updates CP to (x, y). Code is in Fig. 2.31. Caution! CP is a global variable, and therefore vulnerable to tampering from instructions at other points in your program.
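The book's version is in Fig. 2.31; the sketch below shows one plausible way to implement the pair on top of OpenGL, with CP as the global current position mentioned above.

#include <GL/glut.h>   // header path may differ on your system

struct Point2 { GLfloat x, y; };
Point2 CP;                                        // global current position

void moveTo(GLfloat x, GLfloat y)
{
   CP.x = x;  CP.y = y;                           // just update CP; nothing is drawn
}

void lineTo(GLfloat x, GLfloat y)
{
   glBegin(GL_LINES);                             // draw from CP to (x, y)
      glVertex2f(CP.x, CP.y);
      glVertex2f(x, y);
   glEnd();
   glFlush();
   CP.x = x;  CP.y = y;                           // update CP
}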

266 Drawing Aligned Rectangles glRecti (GLint x1, GLint y1, GLint x2, GLint y2); // opposite corners; filled with the current color; later rectangles are drawn on top of previous ones

267 Aspect Ratio of Aligned Rectangles Aspect ratio = width/height

268 Filling Polygons with Color Polygons must be convex: any line from one boundary to another lies inside the polygon; below, only D, E, F are convex

269 Filling Polygons with Color (2)
glBegin (GL_POLYGON);
   // glVertex2f (…); calls go here
glEnd ();
The polygon is filled with the current drawing color.

270 Other Graphics Primitives GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN GL_QUADS, GL_QUAD_STRIP

271 Simple User Interaction with Mouse and Keyboard Register functions: glutMouseFunc (myMouse); glutKeyboardFunc (myKeyboard); Write the function(s) NOTE that any drawing you do when you use these functions must be done IN the mouse or keyboard function (or in a function called from within mouse or keyboard callback functions).

272 Example Mouse Function void myMouse(int button, int state, int x, int y); Button is one of GLUT_LEFT_BUTTON, GLUT_MIDDLE_BUTTON, or GLUT_RIGHT_BUTTON. State is GLUT_UP or GLUT_DOWN. X and y are mouse position at the time of the event.

273 Example Mouse Function (2) The x value is the number of pixels from the left of the window. The y value is the number of pixels down from the top of the window. In order to see the effects of some activity of the mouse or keyboard, the mouse or keyboard handler must call either myDisplay() or glutPostRedisplay(). Code for an example myMouse() is in Fig. 2.40.
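The book's myMouse() is in Fig. 2.40; the sketch below only illustrates the ideas above: it draws a dot where the left button goes down and forces a redraw on a right click. screenHeight is assumed to be a global holding the window height.

#include <GL/glut.h>   // header path may differ on your system

extern GLint screenHeight;                        // assumed global window height

void myMouse(int button, int state, int x, int y)
{
   if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
      GLint yFlipped = screenHeight - y;          // flip y to OpenGL coordinates
      glBegin(GL_POINTS);
         glVertex2i(x, yFlipped);
      glEnd();
      glFlush();
   }
   else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
      glutPostRedisplay();                        // ask GLUT to redraw the display
}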

274 Polyline Control with Mouse Example use:

275 Code for Mouse-controlled Polyline

276 Using Mouse Motion Functions glutMotionFunc(myMovedMouse); // moved with button held down glutPassiveMotionFunc(myMovedMouse ); // moved with buttons up myMovedMouse(int x, int y); x and y are the position of the mouse when the event occurred. Code for drawing rubber rectangles using these functions is in Fig. 2.41.

277 Example Keyboard Function
void myKeyboard(unsigned char theKey, int mouseX, int mouseY)
{
   GLint x = mouseX;
   GLint y = screenHeight - mouseY;   // flip y value
   switch (theKey)
   {
      case 'p':
         drawDot(x, y);               // draw dot at mouse position
         break;
      case 'E':
         exit(-1);                    // terminate the program
      default:
         break;                       // do nothing
   }
}

278 Example Keyboard Function (2) Parameters to the function will always be (unsigned char key, int mouseX, int mouseY). The y coordinate needs to be flipped by subtracting it from screenHeight. Body is a switch with cases to handle active keys (key value is ASCII code). Remember to end each case with a break!

279 Using Menus Both GLUT and GLUI make menus available. GLUT menus are simple, and GLUI menus are more powerful. We will build a single menu that will allow the user to change the color of a triangle, which is undulating back and forth as the application proceeds.

280 GLUT Menu Callback Function int glutCreateMenu(myMenu); // returns the menu ID. void myMenu(int num); // handles choice num. void glutAddMenuEntry(char* name, int value); // value is used in the myMenu switch to handle the choice. void glutAttachMenu(int button); // button is one of GLUT_RIGHT_BUTTON, GLUT_MIDDLE_BUTTON, or GLUT_LEFT_BUTTON; usually GLUT_RIGHT_BUTTON.
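The complete application is in Fig. 2.44; the fragment below is just a minimal sketch of wiring these calls together for the color-changing menu described above (the globals and entry values are illustrative).

#include <GL/glut.h>   // header path may differ on your system

GLfloat red = 1.0, green = 0.0, blue = 0.0;       // color used by the display function

void myMenu(int value)                            // menu callback: value identifies the entry
{
   if (value == 1) { red = 1.0; green = 0.0; blue = 0.0; }   // red
   if (value == 2) { red = 0.0; green = 1.0; blue = 0.0; }   // green
   if (value == 3) { red = 0.0; green = 0.0; blue = 1.0; }   // blue
   glutPostRedisplay();                           // redraw with the new color
}

void makeMenu(void)
{
   glutCreateMenu(myMenu);                        // create the menu; returns the menu ID
   glutAddMenuEntry("Red",   1);
   glutAddMenuEntry("Green", 2);
   glutAddMenuEntry("Blue",  3);
   glutAttachMenu(GLUT_RIGHT_BUTTON);             // pop the menu up on a right click
}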

281 GLUT subMenus Create a subMenu first, using menu commands, then add it to main menu. A submenu pops up when a main menu item is selected. glutAddSubMenu (char* name, int menuID); // menuID is the value returned by glutCreateMenu when the submenu was created Complete code for a GLUT Menu application is in Fig. 2.44. (No submenus are used.)

282 GLUI Interfaces and Menus

283 GLUI Interfaces An example program illustrating how to use GLUI interface options is available on book web site. Most of the work has been done for you; you may cut and paste from the example programs in the GLUI distribution.

284 UNIT-IV RENDERING

285 Polygon shading model Flat shading - compute lighting once and assign the color to the whole (mesh) polygon

286 Flat shading Only one vertex normal and one material property are used to compute the color for the whole polygon. Benefit: fast to compute. Used when: the polygon is small enough, the light source is far away (why?), or the eye is very far away (why?).

287 Mach Band Effect Flat shading suffers from the "Mach band effect": human eyes accentuate the intensity discontinuity at polygon boundaries. [Figure: side view of a polygonal surface and the perceived intensity]

288 Smooth shading Fixes the Mach band effect by removing the edge discontinuity: compute lighting for more points on each face. [Figure: flat shading vs. smooth shading]

289 Two popular methods: Gouraud shading (used by OpenGL) Phong shading (better specular highlight, not in OpenGL)

290 Gouraud Shading The smooth shading algorithm used in OpenGL glShadeModel(GL_SMOOTH) Lighting is calculated for each of the polygon vertices Colors are interpolated for interior pixels

291 Gouraud Shading Per-vertex lighting calculation; a normal is needed for each vertex. The per-vertex normal can be computed by averaging the adjacent face normals, e.g. n = (n1 + n2 + n3 + n4) / 4.0.
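A minimal sketch of the averaging step above (types and names are illustrative): sum the normals of the faces that share the vertex and normalize the result, which gives the same direction as dividing by the face count.

#include <math.h>

struct Vec3 { float x, y, z; };

Vec3 vertexNormal(const Vec3 faceNormals[], int numFaces)
{
   Vec3 n = { 0.0f, 0.0f, 0.0f };
   for (int i = 0; i < numFaces; i++) {           // sum the adjacent face normals
      n.x += faceNormals[i].x;
      n.y += faceNormals[i].y;
      n.z += faceNormals[i].z;
   }
   float len = (float)sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
   if (len > 0.0f) {                              // normalize to unit length
      n.x /= len;  n.y /= len;  n.z /= len;
   }
   return n;
}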

292 Gouraud Shading Compute vertex illumination (color) before the projection transformation. Shade interior pixels by color interpolation (normals are not needed): Ca = lerp(C1, C2), Cb = lerp(C1, C3), then lerp(Ca, Cb) along each scanline (lerp = linear interpolation).

293 Gouraud Shading Linear interpolation: use the y distances along the triangle edges to interpolate the colors of the two scanline endpoints, and the x distances along the scanline to interpolate the interior pixel colors. For a pixel x between endpoints v1 and v2, at distance a from v1 and distance b from v2, the interpolated value is x = (b*v1 + a*v2) / (a + b): each endpoint is weighted by the pixel's distance to the opposite endpoint, so the nearer endpoint dominates.
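A minimal sketch of lerp written with a parameter t in [0, 1] (the a/b weights above correspond to t = a/(a + b)); the helper names are illustrative.

float lerp(float v1, float v2, float t)           // t = 0 gives v1, t = 1 gives v2
{
   return (1.0f - t) * v1 + t * v2;
}

// Interpolate an RGB color component by component the same way.
void lerpColor(const float c1[3], const float c2[3], float t, float out[3])
{
   for (int i = 0; i < 3; i++)
      out[i] = lerp(c1[i], c2[i], t);
}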

294 Gouraud Shading Problem Lighting in the polygon interior can be inaccurate

295 Phong Shading Instead of interpolation, we calculate lighting for each pixel inside the polygon (per pixel lighting) Need normals for all the pixels – not provided by user Phong shading algorithm interpolates the normals and compute lighting during rasterization (need to map the normal back to world or eye space though)

296 Phong Shading Normal interpolation: na = lerp(n1, n2), nb = lerp(n1, n3), then lerp(na, nb) along the scanline. Slow – not supported by OpenGL and most graphics hardware.
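A minimal sketch of the per-pixel step above (the Vec3 struct is the same illustrative one used in the Gouraud sketch): interpolate two normals and renormalize before evaluating the lighting model.

#include <math.h>

struct Vec3 { float x, y, z; };                   // same struct as in the Gouraud sketch

Vec3 lerpNormal(Vec3 n1, Vec3 n2, float t)        // t in [0, 1] along the edge or scanline
{
   Vec3 n = { (1.0f - t) * n1.x + t * n2.x,
              (1.0f - t) * n1.y + t * n2.y,
              (1.0f - t) * n1.z + t * n2.z };
   float len = (float)sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
   if (len > 0.0f) {                              // renormalize before lighting
      n.x /= len;  n.y /= len;  n.z /= len;
   }
   return n;
}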

297 UNIT-V FRACTALS

298 Fractals Fractals are geometric objects. Many real-world objects like ferns are shaped like fractals. Fractals are formed by iterations. Fractals are self-similar. In computer graphics, we use fractal functions to create complex objects.

299 Koch Fractals (Snowflakes) [Figure: iterations 0 through 3, and the generator, with segment lengths 1 and 1/3]

300 Fractal Tree Iteration 1 Iteration 2 Iteration 3 Iteration 4 Iteration 5 Generator

301 Fractal Fern Generator Iteration 0 Iteration 1 Iteration 2 Iteration 3

302 Add Some Randomness The fractals we have produced so far seem very regular and "artificial". To create some realism and variability, simply vary the angles slightly, based on a random number generator. For example, you can curve some of the ferns to one side, or vary the lengths of the branches and the branching factor.

303 Terrain (Random Mid-point Displacement) Given the heights of two end-points a and b, generate a height at the mid-point. Suppose the height is in the y direction, so the height at a is y(a) and the height at b is y(b). Then the height at the mid-point will be y_mid = (y(a) + y(b))/2 + r, where r is the random offset. The random offset is generated as r = s * r_g * |b - a|, where s is a user-selected "roughness" factor and r_g is a Gaussian random variable with mean 0 and variance 1.
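A minimal sketch of one midpoint step using the formulas above; it assumes the Gaussian() helper defined on the next slide, and the parameter names are illustrative.

#include <math.h>

void Gaussian(float &y1, float &y2);              // defined on the next slide

// Heights ya and yb at x-positions xa and xb; s is the roughness factor.
float midpointHeight(float xa, float ya, float xb, float yb, float s)
{
   float g1, g2;
   Gaussian(g1, g2);                              // g1 is N(0, 1); g2 is unused here
   float r = s * g1 * (float)fabs(xb - xa);       // random offset scaled by segment length
   return 0.5f * (ya + yb) + r;                   // average of the endpoints plus the offset
}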

304 How to generate a random number with a Gaussian (normal) probability distribution
// given random numbers x1 and x2 with equal distribution from -1 to 1,
// generate numbers y1 and y2 with normal distribution centered at 0.0
// and with standard deviation 1.0.
void Gaussian(float &y1, float &y2)
{
   float x1, x2, w;
   do {
      x1 = 2.0 * 0.001 * (float)(rand() % 1000) - 1.0;
      x2 = 2.0 * 0.001 * (float)(rand() % 1000) - 1.0;
      w = x1 * x1 + x2 * x2;
   } while (w >= 1.0);
   w = sqrt((-2.0 * log(w)) / w);
   y1 = x1 * w;
   y2 = x2 * w;
}

305 Procedural Terrain Example

306 Building a more realistic terrain Notice that in the real world, valleys and mountains have different shapes. If we have the same terrain-generation algorithm for both mountains and valleys, it will result in unrealistic, alien-looking landscapes. Therefore, use different parameters for valleys and mountains. Also, can manually create ridges, cliffs, and other geographical features, and then use fractals to create detail roughness.

307 Fractals Infinite detail at every point; self-similarity between the parts and the overall features of the object. Zoom in on a Euclidean shape and you see more detail at first, but the shape eventually smooths out. Zoom in on a fractal and you keep seeing more detail; it never smooths out. Used to model terrain, clouds, water, trees, plants, feathers, fur, and patterns. General equation: P1 = F(P0), P2 = F(P1), P3 = F(P2), … so P3 = F(F(F(P0))).

308 Self-similar fractals Parts are scaled-down versions of the entire object; you can use the same scaling factor for all subparts or different scaling factors for different subparts. Statistically self-similar fractals apply random variation to the subparts: trees, shrubs, and other vegetation.

309 Fractal types Statistically self-affine: random variations with Sx <> Sy <> Sz (terrain, water, clouds). Invariant fractal sets (nonlinear transformations): self-squaring fractals such as the Julia-Fatou set and the Mandelbrot set (squaring function in complex space), and self-inverse fractals (inversion procedures).

310 Julia-Fatou and Mandelbrot x => x^2 + c, where x = a + bi is a complex number with modulus sqrt(a^2 + b^2). If the modulus is < 1, squaring makes it go toward 0; if the modulus is > 1, squaring makes it fall toward infinity; if the modulus = 1, some points fall to zero, some fall to infinity, and some do neither. The boundary between the numbers that fall to zero and those that fall to infinity is the Julia-Fatou set. [Figure from Foley/van Dam, Computer Graphics: Principles and Practice, 2nd edition]

311 Julia-Fatou and Mandelbrot (cont'd) The shape of the Julia-Fatou set depends on c. To get the Mandelbrot set (the set of non-diverging points): Correct method: compute the Julia sets for all possible c, and color a point black when its set is connected and white when it is not connected. Approximate method: for each value of c, start with the complex number 0 = 0 + 0i, apply x => x^2 + c a finite number of times (say 1000), and if after the iterations it is outside a disk defined by modulus > 100, color the point c white; otherwise color it black. [Foley/van Dam, Computer Graphics: Principles and Practice, 2nd edition]
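A minimal sketch of the approximate method above (the iteration count and escape radius follow the slide; the function name is illustrative): iterate x => x^2 + c starting from 0 and test whether the modulus ever exceeds 100.

int inMandelbrotSet(float cr, float ci)           // c = cr + ci*i
{
   float zr = 0.0f, zi = 0.0f;                    // start with 0 + 0i
   for (int k = 0; k < 1000; k++) {               // a finite number of iterations
      float newZr = zr * zr - zi * zi + cr;       // real part of z*z + c
      float newZi = 2.0f * zr * zi + ci;          // imaginary part of z*z + c
      zr = newZr;
      zi = newZi;
      if (zr * zr + zi * zi > 100.0f * 100.0f)    // modulus > 100: diverging
         return 0;                                // color this c white
   }
   return 1;                                      // never escaped: color this c black
}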

312 Constructing a deterministic self-similar fractal Initiator: a given geometric shape. Generator: a pattern that replaces subparts of the initiator. [Figure: Koch curve – initiator, generator, first iteration]

313 Fractal dimension D = fractal dimension: the amount of variation in the structure, a measure of the roughness or fragmentation of the object. Small D: less jagged; large D: more jagged. For self-similar objects, n * s^D = 1 (some books write this as n * s^(-D) = 1, depending on the convention for s), where s is the scaling factor and n is the number of subparts in the subdivision, giving D = ln(n)/ln(1/s). [D = ln(n)/ln(s) in the convention where s is the number of segments rather than how much the main segment was reduced; i.e., for a line divided into 3 segments, instead of saying each piece is 1/3, say there are 3 segments; note that 1/(1/3) = 3.] If there are different scaling factors, the dimension satisfies s_1^D + s_2^D + … + s_n^D = 1 (the sum over k = 1..n of s_k^D equals 1).

314 Figuring out scaling factors I prefer n * s^(-D) = 1, i.e., D = ln(n)/ln(s): the dimension is a ratio based on (new size)/(old size). Divide a line into n identical segments: n = s. Divide the edges of a square into s identical segments: n = s^2 small squares. Divide a cube the same way: n = s^3 small cubes. Koch's snowflake: after division there are 4 new segments (n = 4) for every old segment, and each old segment is divided into 3 (s = 3), so the fractal dimension is D = ln 4 / ln 3 = 1.262. For your reference, the book's method: n = 4 (number of new segments), s = 1/3 (segments reduced by 1/3), D = ln 4 / ln(1/(1/3)).

315 Sierpinski gasket Fractal Dimension Divide each side by 2: this makes 4 triangles, of which we keep 3, therefore n = 3 (3 new triangles from 1 old triangle) and s = 2 (2 new segments from one old segment). Fractal dimension D = ln(3)/ln(2) = 1.585.

316 Cube Fractal Dimension Apply fractal algorithm Divide each side by 3 Now push out the middle face of each cube Now push out the center of the cube What is the fractal dimension? Well we have 20 cubes, where we used to have 1 n=20 We have divided each side by 3 s=3 Fractal dimension ln(20)/ln(3) = 2.727 Image from Angel book

317 Language-Based Models of generating images Typical alphabet: {A, B, [, ]}. Rules: A => AA and B => A[B]AA[B]. Starting basis: B. The generated words represent sequences of segments in a graph structure; bracketed substrings are branches. Interesting, but I want a tree. Successive words: B, then A[B]AA[B], then AA[A[B]AA[B]]AAAA[A[B]AA[B]], and so on. [Figure: the corresponding branching structures]
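A minimal sketch of the rewriting process above for the rules A => AA and B => A[B]AA[B]; the function names are illustrative.

#include <string>

std::string rewrite(const std::string& word)      // apply the rules once to every symbol
{
   std::string out;
   for (int i = 0; i < (int)word.size(); i++) {
      char c = word[i];
      if      (c == 'A') out += "AA";
      else if (c == 'B') out += "A[B]AA[B]";
      else               out += c;                // brackets are copied unchanged
   }
   return out;
}

std::string generate(int iterations)              // expand the starting basis B
{
   std::string word = "B";
   for (int i = 0; i < iterations; i++)
      word = rewrite(word);
   return word;
}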

318 Language-Based Models of generating images (cont'd) Modify the alphabet to {A, B, [, ], (, )}. Rules: A => AA and B => A[B]AA(B), where [] is a left branch and () is a right branch. Starting basis: B. The generated words again represent sequences of segments in a graph structure, with bracketed substrings as branches. Successive words: B, then A[B]AA(B), then AA[A[B]AA(B)]AAAA(A[B]AA(B)), and so on. [Figure: the corresponding branching structures]

319 Language-based models have no inherent geometry A grammar-based model requires both a grammar and a geometric interpretation; generating an object from the word is a separate process. Examples: draw the branches of the tree at upward angles; draw the segments of the tree with successively smaller lengths (the more it branches, the smaller the last branch is); draw flowers or leaves at the terminal nodes. [Figure: branching structure]

320 Grammar and Geometry Change the branch size according to the depth in the graph. [Figure from Foley/van Dam, Computer Graphics: Principles and Practice, 2nd edition]

321 Particle Systems A system is defined by a collection of particles that evolve over time. Particles have fluid-like properties: flowing, billowing, spattering, expanding, imploding, exploding. A basic particle can be any shape: sphere, box, ellipsoid, etc. Apply probabilistic rules to particles: generate new particles and change attributes according to age (What color is the particle when detected? What shape? How transparent over time?). Particles die (disappear from the system). Movement follows deterministic or stochastic laws of motion, e.g. kinematic forces such as gravity.
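A minimal sketch of one particle with a few of the attributes listed above, updated by a simple deterministic law of motion (gravity) and an age-based death rule; all names and constants are illustrative, not from any particular system.

struct Particle {
   float x, y, z;                                 // position
   float vx, vy, vz;                              // velocity
   float age, lifetime;                           // seconds alive / lifespan in seconds
   int   alive;
};

void updateParticle(Particle* p, float dt)
{
   if (!p->alive) return;
   p->vy -= 9.8f * dt;                            // gravity acts on the velocity
   p->x  += p->vx * dt;                           // kinematic position update
   p->y  += p->vy * dt;
   p->z  += p->vz * dt;
   p->age += dt;
   if (p->age > p->lifetime)                      // the particle dies
      p->alive = 0;
}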

322 Particle Systems modeling Model fire, fog, smoke, fireworks, trees, grass, waterfalls, water spray. Grass: model clumps by setting up trajectory paths for particles. Waterfall: particles fall from a fixed elevation and are deflected by obstacles as they splash to the ground; e.g. a drop falls, hits a rock, and finishes in the pool, or a drop falls, goes to the bottom of the pool, and floats back up.

323 Physically based modeling Non-rigid objects: rope, cloth, a soft rubber ball, jello. Describe the behavior in terms of external and internal forces. Approximate the object with a network of point nodes connected by flexible connections, for example springs with spring constant k (a homogeneous object has all k's equal). Hooke's law: F_s = -k * x, where x is the displacement and F_s is the restoring force of the spring. Could also model with putty (which does not spring back), or with elastic material (minimize the strain energy).
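A minimal sketch of Hooke's law for a single spring between two nodes in such a network (types and names are illustrative): the force on node p points toward node q when the spring is stretched and away from it when compressed.

#include <math.h>

struct Vec2 { float x, y; };

// Force on node p from a spring to node q with rest length L and constant k.
Vec2 springForce(Vec2 p, Vec2 q, float L, float k)
{
   Vec2  d   = { q.x - p.x, q.y - p.y };
   float len = (float)sqrt(d.x * d.x + d.y * d.y);
   float x   = len - L;                           // displacement from the rest length
   Vec2  f   = { 0.0f, 0.0f };
   if (len > 0.0f) {
      f.x = k * x * d.x / len;                    // F_s = -k x directed along the spring:
      f.y = k * x * d.y / len;                    // stretched springs pull p toward q
   }
   return f;
}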

324 “Turtle Graphics” The turtle can: F = move forward one unit (drawing); L = turn left; R = turn right. Stipulate the turtle's heading and the angle of the turns. Equilateral triangle: e.g. angle = 120, word FRFRFR. What if we change the angle to 60 degrees and use the rule F => FLFRRFLF with basis F? We get the Koch curve (snowflake). Example taken from the Angel book.
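A minimal sketch of a turtle interpreter for F/L/R strings such as FRFRFR or the expanded Koch word, drawn with an OpenGL line strip (names and parameters are illustrative; the bracketed branches on the next slide would additionally need a stack of saved positions and headings).

#include <GL/glut.h>   // header path may differ on your system
#include <math.h>

void turtleDraw(const char* word, float x, float y,
                float headingDegrees, float turnDegrees, float step)
{
   float dir  = headingDegrees * 3.14159265f / 180.0f;   // current heading in radians
   float turn = turnDegrees    * 3.14159265f / 180.0f;

   glBegin(GL_LINE_STRIP);
   glVertex2f(x, y);
   for (const char* p = word; *p != '\0'; p++) {
      if (*p == 'F') {                            // move forward one step, drawing
         x += step * (float)cos(dir);
         y += step * (float)sin(dir);
         glVertex2f(x, y);
      }
      else if (*p == 'L') dir += turn;            // turn left
      else if (*p == 'R') dir -= turn;            // turn right
   }
   glEnd();
   glFlush();
}

// Example: an equilateral triangle with turn angle 120 degrees.
// turtleDraw("FRFRFR", 100.0f, 100.0f, 0.0f, 120.0f, 50.0f);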

325 Using turtle graphics for trees Use push ([) and pop (]) for side branches. Rule: F => F[RF]F[LF]F, angle = 27. (Spaces below are only for readability.) First iteration: F[RF]F[LF]F. Second iteration: F[RF]F[LF]F [RF[RF]F[LF]F] F[RF]F[LF]F [LF[RF]F[LF]F] F[RF]F[LF]F.

