
1 EEL 5771-001 Introduction to Computer Graphics
THREE DIMENSIONAL VIEWING By Metilda Susanna Vinod U

2 Outline
3-D Viewing Concepts
3-D Viewing Projections
3-D Viewing Pipeline
3-D Viewing Parameters
From World to Viewing Transformation
Clipping
3-D Viewing Effects

3 3-D Representation Introduction
The term 'image quality' is often used to measure the performance of an imaging system. However, research has shown that image quality may not be the most appropriate term to capture the evaluative processes associated with experiencing 3D images. The added value of depth in 3D images is clearly recognized when viewers judge the image quality of unimpaired 3D images against their 2D counterparts.

4 3-D Representation Definition
A three-dimensional model displays a picture or item in a form that appears to be physically present with a designated structure. Essentially, it allows items that appear flat to the human eye to be displayed in a form that represents multiple dimensions: width, depth, and height.

5 3-D Viewing Concepts of 3-D Viewing
To represent 3D objects on a 2D screen, a combination of many techniques is required. First, the projection has to be defined; then wire-frame representation, depth cueing, correct visibility, shading, illumination models, shadows, reflections, transparency, textures, surface details, and stereo images are generated.

6 3-D Viewing Concepts Definitions Projection:
A projection transforms a point from a higher-dimensional space to a lower-dimensional space. In 3D, projection means mapping a 3D point onto a 2D projection plane, also called the image plane. There are two basic projection types: parallel (orthographic or oblique) and perspective.

7 3-D Viewing Concepts Parallel Projection Orthographic Projection

8 Parallel Projection Orthographic Projection

9 Parallel Projection Orthographic Projection

10 ORTHOGRAPHIC PROJECTION
NORMALIZATION

11 VIEWING VOLUME FOR ORTHOGONAL PROJECTION
Given the specification of the viewing window, we can set up a viewing volume. Only objects inside the viewing volume appear in the display; the rest are clipped. To limit the otherwise infinite viewing volume, we define two additional planes, the near plane and the far plane; only objects in the bounded viewing volume appear. The near and far planes are parallel to the view plane and are specified by znear and zfar. The bounded viewing volume is:
– For orthographic projection: a rectangular parallelepiped
– For oblique projection: an oblique parallelepiped
– For perspective projection: a frustum

12 VIEWING VOLUME FOR ORTHOGONAL PROJECTION
Note: in all definitions, near and far are distances (always positive). Near = 50 means that the near plane intersects the z-axis at z = -50.
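A minimal sketch of the normalization step from slides 10-12, assuming OpenGL-style conventions; the function name and the column-major layout are illustrative, and the effect is equivalent to glOrtho(l, r, b, t, n, f):

    /* Build the orthographic normalization matrix.  As noted above,
       n and f are positive distances: the near plane sits at z = -n,
       the far plane at z = -f.  The matrix maps the rectangular
       parallelepiped view volume onto the canonical cube [-1, 1]^3.
       Column-major 4x4, as OpenGL expects. */
    void ortho_matrix(double m[16],
                      double l, double r, double b, double t,
                      double n, double f)
    {
        for (int i = 0; i < 16; i++) m[i] = 0.0;
        m[0]  =  2.0 / (r - l);        /* scale x into [-1, 1] */
        m[5]  =  2.0 / (t - b);        /* scale y into [-1, 1] */
        m[10] = -2.0 / (f - n);        /* scale and flip z     */
        m[12] = -(r + l) / (r - l);    /* center x             */
        m[13] = -(t + b) / (t - b);    /* center y             */
        m[14] = -(f + n) / (f - n);    /* center z             */
        m[15] =  1.0;
    }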

13 3-D Viewing Concepts Parallel Projection: Oblique Projection

14 NORMALIZATION FOR OBLIQUE PARALLEL PROJECTION
An oblique projection can be characterized by the angle the projectors make with the view plane (VP). Top and side views of a projector and the view plane z = 0 characterize the degree of obliqueness; from the top view, the angle can be computed from the projector's horizontal offset per unit of depth.

15 NORMALIZATION FOR OBLIQUE PARALLEL PROJECTION
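The slide's matrix did not survive extraction. Below is a sketch of the standard shear formulation of an oblique parallel projection onto the plane z = 0 (Hearn & Baker style); treating alpha as the projector angle from the previous slide and phi as the in-plane direction angle is an assumption:

    #include <math.h>

    /* Shear matrix H(alpha, phi) for an oblique parallel projection.
       alpha: angle between the projectors and the view plane;
       phi: direction of the projected projector in the view plane;
       both in radians.  After this shear, a plain orthographic
       projection completes the oblique projection.  Column-major. */
    void oblique_shear(double m[16], double alpha, double phi)
    {
        double L1 = 1.0 / tan(alpha);  /* cot(alpha): 1 for cavalier,
                                          0.5 for cabinet */
        for (int i = 0; i < 16; i++)   /* start from the identity */
            m[i] = (i % 5 == 0) ? 1.0 : 0.0;
        m[8] = L1 * cos(phi);   /* x' = x + z * cot(alpha) * cos(phi) */
        m[9] = L1 * sin(phi);   /* y' = y + z * cot(alpha) * sin(phi) */
    }

With alpha = 45 degrees this yields the cavalier projection of the next slide; with alpha = arctan(2) it yields the cabinet projection.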

16 CAVALIER AND CABINET PROJECTIONS
Cavalier: the angle between the projectors and the projection plane is 45 degrees, so faces perpendicular to the projection plane are projected at full scale. Cabinet: the angle between the projectors and the projection plane is arctan(2) ≈ 63.4 degrees, so perpendicular faces are projected at 50% scale.

17 Properties of Parallel Projection
Projection directions are parallel. The result does not look real, but parallel lines are preserved.

18 Properties of Parallel Projection
Ratios of distances are preserved; angles, in general, are NOT preserved.

19 Properties of Parallel Projection
Often used in CAD and architectural drawings, where the images can be used for measurement. There is no foreshortening.

20 3-D Viewing Concepts Projection
Perspective Projection: perspective projection exhibits foreshortening.

21 Perspective Projection
Images are mapped onto the image plane in different ways. The angle of view determines the mapping from the scene onto the image plane:

22 Perspective Projection
Not everything will be displayed: the view volume of a perspective projection is a frustum.

23 Perspective Projection
Derivation

24 Perspective Projection
Matrix
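The derivation and matrix from slides 23-24 did not survive extraction. The following is a sketch of the textbook pinhole model, under the assumed convention that the center of projection is at the origin and the projection plane is at z = d:

    /* By similar triangles, a point (x, y, z) projects to
       x_p = d*x/z, y_p = d*y/z on the plane z = d.  In homogeneous
       form this is a multiplication by
           [1 0 0   0]
           [0 1 0   0]
           [0 0 1   0]
           [0 0 1/d 0]
       followed by division by w.  Points with z = 0 (the eye plane)
       must be clipped beforehand to avoid dividing by zero. */
    typedef struct { double x, y, z, w; } Vec4;

    Vec4 perspective_project(Vec4 p, double d)
    {
        Vec4 q = { p.x, p.y, p.z, p.z / d };   /* matrix multiply */
        q.x /= q.w;                            /* homogeneous     */
        q.y /= q.w;                            /* divide          */
        q.z /= q.w;                            /* q.z becomes d   */
        q.w = 1.0;
        return q;
    }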

25 ONE- TWO- AND THREE-POINT PERSPECTIVE
One-Point Perspective: lines representing horizontal surfaces that are not straight-on to the viewer disappear to one point on an imaginary horizon. Start by drawing the horizon with one vanishing point somewhere along its length. All other lines are drawn vertically on the page.

26 ONE- TWO- AND THREE-POINT PERSPECTIVE
Two-Point Perspective: start by drawing the horizon with two vanishing points, VP1 and VP2. Horizontal lines all disappear at these vanishing points, whilst all upright lines are drawn vertically on the page. The roof of the building seen here is a special case and can be drawn within a box frame: imagine the roof area as being in a rectangle.

27 ONE- TWO- AND THREE-POINT PERSPECTIVE
Three-Point Perspective: this differs in that there are now three vanishing points to draw in. Notice how some of them may be off the area of the page, and that as they get closer together the shape they contain becomes much more exaggerated.

28 VIEWING VOLUMES Definition for general Perspective Projection
Definition for a Standard Perspective Projection

29 Normalization for Perspective Projections
Again, we look for a transformation that distorts the vertices in such a way that a simple canonical projection can be used: the perspective-normalization transformation. The perspective-normalization matrix (Nper) converts the frustum view volume into the canonical orthogonal view volume:
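As a concrete illustration, here is the frustum-to-canonical matrix in the style of glFrustum; the exact arrangement of Nper on the slide may differ, so treat this as a sketch under OpenGL conventions (near plane z = -n, far plane z = -f, window [l, r] x [b, t] on the near plane):

    /* Perspective-normalization matrix: after multiplying by this
       matrix and performing the homogeneous divide, the frustum has
       been mapped to the canonical view volume, so clipping and
       projection become trivial.  Column-major 4x4. */
    void frustum_matrix(double m[16],
                        double l, double r, double b, double t,
                        double n, double f)
    {
        for (int i = 0; i < 16; i++) m[i] = 0.0;
        m[0]  = 2.0 * n / (r - l);
        m[5]  = 2.0 * n / (t - b);
        m[8]  =  (r + l) / (r - l);
        m[9]  =  (t + b) / (t - b);
        m[10] = -(f + n) / (f - n);
        m[11] = -1.0;                  /* copies -z into w, so the
                                          divide does the perspective */
        m[14] = -2.0 * f * n / (f - n);
    }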

30 Properties of Perspective Projection
Lines are mapped to lines, but parallel lines may not remain parallel: instead, they may meet at a vanishing point. Ratios are not preserved. Perspective projection has foreshortening effects, so it looks real; however, distances cannot be measured directly from the image, as they can in parallel projection.

31 3-D Viewing Concepts Definitions Wire-Frame-Representation:
Only the edges of polygons are drawn; hidden edges may or may not be drawn. Depth Cueing: edges or parts which are nearer to the viewer are displayed with more intensity (brighter, broadened, more saturated); edges which are farther away from the viewer are displayed with less intensity (darker, thinner, grayed out).

32 Definitions Correct Visibility:
Surface elements (edges, polygons) that are occluded by other surface elements are not drawn, so only visible areas are shown. Shading: depending on the angle of view or the angle of incident light, surfaces are colored brighter or darker. Illumination Models: physical simulation of lighting conditions, their propagation, and their influence on the appearance of surfaces. Stereo Images: a separate image is created and presented (with various techniques) for each eye to generate a 3D impression.

33 Definitions Shadows: areas which have no line of sight to the light source are displayed darker. Reflections, Transparency: reflecting objects show mirror images, and through transparent objects the background can be seen. Textures: patterns are painted on surfaces to give the objects a more complex, much more realistic look. Surface Details: small geometric structures on surfaces (like orange peel, bark, cobblestones, tire profiles) are simulated using rendering tricks.

34 Examples of 3-D Viewing Concepts
Chemdoodle: Chemdoodle lets you see chemical structures in 3D.

35 Examples of 3-D Viewing Concepts
The Biodigital Human: lets you see the skeleton and the blood system.

36 3-D Viewing Concepts Wire-frame Definition
Wireframing is one of the methods used in geometric modelling systems. A wireframe model represents the shape of a solid object with its characteristic lines and points. Wireframe modelling has both pros and cons: on the pro side, the user gives simple input to create a shape, which is useful in developing systems; on the con side, a wireframe model includes no information about inside and outside boundary surfaces.

37 WIRE FRAME AMBIGUITIES
The first true 3D computer model created on CAD systems in the late 1970s was the 3D wireframe model. Computer generated 3D wireframe models contain information about the locations of all the corners and edges in space coordinates. The 3D wireframe models can be viewed from any direction as needed and are in general reasonably good representations of 3D design. But because surface definition is not part of a wireframe model, all wireframe images have the inherent problem of ambiguity. For example, in the figure displayed below, which corner is in front, corner A or corner B? The ambiguity problem becomes much more serious with complex designs that have many edges and corners.

38 WIRE FRAME AMBIGUITIES
The main advantage of using a 3D wireframe modeler to create 3D models is its simplicity. The computer hardware requirements for wireframe modelers are typically much lower than the requirements for surface and solid modelers. A 3D wireframe model, also known as a stick-figure model or a skeleton model, contains only information about the locations of all the corners and edges of the design in space coordinates. You should also realize that, in some cases, it could be quite difficult to locate some of the corner locations while creating a 3D wireframe model. Note that 3D wireframe modelers are usually used in conjunction with surfacing modelers.

39 EXAMPLE FOR WIRE FRAME MODEL
The wireframe model is perhaps the oldest way of representing solids. A wireframe model consists of two tables, the vertex table and the edge table. Each entry of the vertex table records a vertex and its coordinate values, while each entry of the edge table has two components giving the two incident vertices of that edge. A wireframe model does not have face information. For example, to represent a cube defined by eight vertices and 12 edges, one needs the following tables
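The tables themselves are not reproduced above; a minimal sketch for the cube just described, under an assumed numbering (vertex 0 at the origin, bottom face listed first):

    /* Wireframe cube: a vertex table of coordinates and an edge table
       of index pairs into it.  Note the absence of a face table; this
       missing surface information is what makes wireframes ambiguous. */
    typedef struct { double x, y, z; } Vertex;
    typedef struct { int v0, v1; }    Edge;

    static const Vertex vertex_table[8] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   /* bottom face, z = 0 */
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}    /* top face,    z = 1 */
    };

    static const Edge edge_table[12] = {
        {0,1}, {1,2}, {2,3}, {3,0},           /* bottom square  */
        {4,5}, {5,6}, {6,7}, {7,4},           /* top square     */
        {0,4}, {1,5}, {2,6}, {3,7}            /* vertical edges */
    };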

40 EXAMPLE FOR WIRE FRAME MODEL
The corresponding figure shows all eight vertices and twelve edges with their numbers.

41 EXAMPLE FOR WIRE FRAME AMBIGUITIES
While wireframes use the simplest data structures, they are ambiguous. The following well-known example consists of 16 vertices and 32 edges. We know it represents a solid, and each of the quadrilaterals (some of them squares) defines a face of the solid. The inner cube represents a hole, but we cannot tell the direction of the opening of the cube: as shown in the figure, there are three possibilities for this opening. While two of them can be obtained by rotating the remaining one, in general we will not be so lucky, because the outer boundary may be a box of different side lengths, or because the model may be part of a bigger one that does not allow free interpretation.

42 3-D Viewing Concepts Setup How to setup AutoCAD
To understand how 3D software is set up, let us consider AutoCAD as an example. Starting up AutoCAD 2012: select the AutoCAD 2012 option on the Program menu, or select the AutoCAD 2012 icon on the Desktop. Once the program is loaded into memory, the AutoCAD 2012 drawing screen will appear.

43 Setup Using the Startup Options
In AutoCAD 2012, we can use the Startup dialog box to establish different types of drawing settings. The Startup dialog box is activated through the STARTUP system variable, which can be set to either 0 or 1. 1 displays the Create New Drawing dialog box; 0 displays the Select Template dialog box (the default).

44 Setup In the command prompt area, enter the system variable name: STARTUP [ENTER]. Then enter 1 as the new value for the STARTUP system variable.

45 Setup To show the effect of the Startup option, exit AutoCAD by clicking on the Close icon, then restart AutoCAD by selecting the AutoCAD 2012 option through the Start menu.

46 Setup The Startup dialog box appears on the screen with different options to assist the creation of drawings. Move the cursor over the four icons and notice the four options available: (1) Open a Drawing, (2) Start from Scratch, (3) Use a Template, and (4) Use a Setup Wizard.

47 Setup In the Startup dialog box, select the Start from Scratch option as shown in the figure. Choose Imperial to use the Standard English units setting. Click OK to accept the setting.

48 DEPTH CUEING Definition
Depth cueing models atmospheric attenuation by rendering distant objects at a lower intensity than near ones, giving them the appearance of depth. One can adjust the fog that PyMol overlays on objects in the viewer window; this fog helps emphasize what is in the foreground and background of the image with respect to the camera. In 3-Button Viewing mouse mode: Rear clipping plane - hold down the Shift key and the right mouse button and drag the mouse to the right to add more fog to the background, or drag to the left to remove fog from the background. Front clipping plane - hold down the Shift key and the right mouse button and drag the mouse toward you; you should see the clipping plane come into view. Adjust the front and rear clipping planes to focus on the area of the molecule you wish to display.

49 DEPTH CUEING Example Syntax
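The slide's example syntax is not reproduced here. As a stand-in, a minimal legacy-OpenGL sketch of linear depth cueing via fog, analogous in effect to PyMol's fog; the distance values are illustrative:

    #include <GL/gl.h>

    /* Fragments between fog start and fog end are blended toward the
       fog color, so distant geometry fades out.  Distances are in eye
       space. */
    void enable_depth_cue(void)
    {
        GLfloat fog_color[4] = { 0.0f, 0.0f, 0.0f, 1.0f };  /* black */
        glEnable(GL_FOG);
        glFogi(GL_FOG_MODE, GL_LINEAR);  /* linear intensity falloff */
        glFogfv(GL_FOG_COLOR, fog_color);
        glFogf(GL_FOG_START, 10.0f);     /* fully bright before this */
        glFogf(GL_FOG_END,   50.0f);     /* fully fogged after this  */
    }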

50 3D-Viewing-Pipeline DEFINITION
The viewing pipeline in three dimensions is almost the same as the 2D viewing pipeline: only an additional projection step follows the definition of the viewing direction and orientation (i.e., of the camera), namely the reduction of the 3D data onto a projection plane.

51 3D-Viewing-Pipeline This projection step can be arbitrarily complex, depending on which 3D-viewing concepts should be used.

52 3D-Viewing-Pipeline
Object space: the coordinate space where each component is defined.
World space: all components are put together into the same 3D scene via affine transformations (camera and lighting are defined in this space).
Eye space: the camera is at the origin and the view direction coincides with the z-axis; the hither and yon planes are perpendicular to the z-axis.
Clipping space: clipping is done here; every point is in homogeneous coordinates, i.e., represented as (x, y, z, w).
3D image space (canonical view volume): a parallelepiped defined by (-1:1, -1:1, 0:1); objects in this space are distorted.
Screen space: the x and y coordinates are screen pixel coordinates.

53 3D-Viewing-Pipeline Spaces
Object Space and World Space; Eye Space; Clip Space; Image Space

54 MODELING TRANSFORMATION TO VIEWPORT
Modeling Transformations: transform from the local coordinate system to the 3D world coordinate system; build complex models by positioning simple components. Viewing Transformations: place the virtual camera in the world; transform from world coordinates to eye coordinates. Animation: vary transformations over time to create motion.

55 MODELING TRANSFORMATION TO VIEWPORT

56 MODELING TRANSFORMATION TO VIEWPORT

57 3-D Viewing Parameters Modelling Transformation
Model coordinates to World coordinates

58 3D Viewing-Coordinate Parameters
Viewing-Coordinates: similar to photography, there are certain degrees of freedom when specifying the camera: the camera position in space, the viewing direction from this position, the orientation of the camera (view-up vector), and the size of the display window (corresponding to the focal length of a photo camera).

59 Viewing-Coordinates With these parameters the camera coordinate system (viewing coordinates) is defined. Usually the xy-plane of this viewing-coordinate system is orthogonal to the main viewing direction, and the viewing direction is along the negative z-axis. Based on the camera position, the usual way to define the viewing-coordinate system is:
Choose a camera position (also called eye point, or view point).
Choose a viewing direction; this fixes the z-direction of the viewing coordinates.
Choose a direction "upwards". From this, the x-axis and y-axis can be calculated: the image plane is orthogonal to the viewing direction, and the parallel projection of the view-up vector onto this image plane defines the y-axis of the viewing coordinates.
Calculate the x-axis as the vector (cross) product of the z- and y-axes.
The distance of the image plane from the eye point defines the viewing angle, and thereby the size of the scene to be displayed.

60 Viewing-Coordinates In animations, the camera definition is often calculated automatically according to certain conditions, e.g. when the camera moves around an object or in flight simulators, so that the desired effects can be achieved in an uncomplicated way. To convert world coordinates to viewing coordinates, a series of simple transformations is needed: mainly a translation of the coordinate origins onto each other, followed by 3 rotations, such that the coordinate axes also coincide (two rotations for the first axis, one for the second axis; the third axis is then already correct). Of course, all these transformations can be merged by multiplication into one matrix, which looks about like this:
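The matrix itself was lost in extraction; written in terms of the unit axes u, v, n and the view point P0 defined above, the standard composite of the rotation R and the translation T is:

    M_{WC \to VC} = R \cdot T =
    \begin{pmatrix}
      u_x & u_y & u_z & -\mathbf{u} \cdot P_0 \\
      v_x & v_y & v_z & -\mathbf{v} \cdot P_0 \\
      n_x & n_y & n_z & -\mathbf{n} \cdot P_0 \\
      0   & 0   & 0   & 1
    \end{pmatrix}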

61 HOMOGENEOUS COORDINATES
More generally, we consider the <x, y, z> position vector to be merely a special case of the four-component <x, y, z, w> form. This type of four-component position vector is called a homogeneous position. When we express a position as an <x, y, z> quantity, we assume that there is an implicit 1 for its w component. Mathematically, the w value is the value by which you would divide the x, y, and z components to obtain the conventional 3D (nonhomogeneous) position. Converting between nonhomogeneous and homogeneous positions:
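In symbols, the two conversions are:

    (x, y, z) \mapsto (x, y, z, 1), \qquad
    (x, y, z, w) \mapsto \left( \frac{x}{w}, \frac{y}{w}, \frac{z}{w} \right), \quad w \neq 0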

62 World coordinates to Viewing coordinates: Viewing transformations

63 Viewing Parameters How do we define the viewing coordinate system (or view reference coordinate system)? Position of the viewer: P0 (view point or viewing position). Orientation of the viewer: the view-up vector V and the viewing direction N (the view-plane normal). N and V should be orthogonal; if they are not, V can be adjusted to be orthogonal to N.

64 Transformation Between Coordinate Systems
Given the objects in world coordinates, find the transformation matrix that transforms them into the viewing coordinate system. Let n, v, u be the unit vectors defining the viewing coordinate system. The world coordinate system can be aligned with the viewing coordinate system in two steps: (1) translate P0 to the origin of the world coordinate system, (2) rotate to align the axes.

65 Step 1 - Translation: move the view reference point to the origin.
Step 2 - Rotation: after we find n, v and u, we use them to define the rotation matrix that aligns the axes.

66 The transformation matrix from world coordinates to the viewing reference frame is:
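The matrix did not survive extraction; below is a sketch that builds it directly from P0, Pref and V in the manner of gluLookAt. The helper names are illustrative, and the matrix is stored row-major for readability:

    #include <math.h>

    static void normalize3(double a[3]) {
        double len = sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
        a[0] /= len; a[1] /= len; a[2] /= len;
    }
    static void cross3(const double a[3], const double b[3], double r[3]) {
        r[0] = a[1]*b[2] - a[2]*b[1];
        r[1] = a[2]*b[0] - a[0]*b[2];
        r[2] = a[0]*b[1] - a[1]*b[0];
    }

    /* n = normalize(P0 - Pref), u = normalize(V x n), v = n x u,
       then compose the rotation with the translation of P0 to the
       origin: the last column holds -R * P0. */
    void world_to_viewing(double m[4][4], const double p0[3],
                          const double pref[3], const double up[3])
    {
        double n[3] = { p0[0]-pref[0], p0[1]-pref[1], p0[2]-pref[2] };
        double u[3], v[3];
        normalize3(n);
        cross3(up, n, u); normalize3(u);
        cross3(n, u, v);                 /* already unit length */
        for (int i = 0; i < 3; i++) {
            m[0][i] = u[i]; m[1][i] = v[i]; m[2][i] = n[i]; m[3][i] = 0.0;
        }
        m[0][3] = -(u[0]*p0[0] + u[1]*p0[1] + u[2]*p0[2]);
        m[1][3] = -(v[0]*p0[0] + v[1]*p0[1] + v[2]*p0[2]);
        m[2][3] = -(n[0]*p0[0] + n[1]*p0[1] + n[2]*p0[2]);
        m[3][3] = 1.0;
    }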

67 CLIPPING Clipping planes are used in 3D computer graphics to prevent the renderer from calculating surfaces at an extreme distance from the viewer. The plane is perpendicular to the camera's view direction, sits a set distance away (the threshold), and covers the entire viewport. Used in real-time rendering, clipping planes can help preserve processing for objects within clear sight.

68 CLIPPING The use of clipping planes can detract from the realism of a scene, as the viewer may notice that everything at the threshold is not rendered correctly or seems to (dis)appear spontaneously. Adding fog, a variably transparent region of color or texture just before the clipping plane, can help soften the transition between what should be in plain sight and opaque, and what should be beyond notice and fully transparent (and therefore need not be rendered). Clipping means finding the parts of the objects that lie inside the viewing volume. Algorithms from 2D clipping can easily be extended to 3D and used to clip objects against the faces of the normalized view volume.

69 CLIPPING The viewing parameters (camera position, view-up vector, view-plane normal) are specified as part of the modeling transformations: a matrix is formed and concatenated with the current modelview matrix. So, to set up the camera parameters: glMatrixMode(GL_MODELVIEW); gluLookAt(x0, y0, z0, xref, yref, zref, Vx, Vy, Vz); Here N = P0 - Pref, and the viewing direction is along the -zview axis.

70 CLIPPING If we do not invoke the gluLookAt function, the default camera parameters are P0 = (0,0,0), Pref = (0,0,-1), and V = (0,1,0). Additional clipping planes: in OpenGL we may specify new clipping planes and enable clipping of objects with respect to these planes: glClipPlane(id, planeparameters); glEnable(id); // to enable clipping glDisable(id); // to disable clipping. Here id is one of GL_CLIP_PLANE0, GL_CLIP_PLANE1, ..., and the plane parameters are the four constants A, B, C, D in the plane equation Ax+By+Cz+D=0 (any object that satisfies Ax+By+Cz+D<0 is clipped).
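A minimal usage sketch; the plane coefficients are illustrative:

    #include <GL/gl.h>

    /* Keep the half-space -x + 2 >= 0 (i.e. x <= 2); anything with
       -x + 2 < 0 is clipped away, matching the rule above. */
    void clip_right_of_x2(void)
    {
        GLdouble plane_eq[4] = { -1.0, 0.0, 0.0, 2.0 };  /* A, B, C, D */
        glClipPlane(GL_CLIP_PLANE0, plane_eq);
        glEnable(GL_CLIP_PLANE0);
        /* ... draw the scene ... */
        glDisable(GL_CLIP_PLANE0);
    }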

71 CLIPPING ALGORITHMS Many clipping algorithms are, in pure form, independent of the number of dimensions, but for simplicity we will look at the 2D versions. General criteria for a good clipping algorithm:
– Quickly identify lines (or polygon edges) that need not be drawn at all; these can be skipped and discarded.
– Quickly identify lines (or polygon edges) that are completely contained within the drawing bounds; these can be drawn without modification.
– For the lines that are neither (i.e. they cross display boundaries), calculate their intersections as quickly as possible; the fewer of these we do, the better.

72 CLIPPING ALGORITHMS Cohen-Sutherland Clipping
Translate boundary crossings into bit fields; for example, in 2D this is 4 bits, one each to represent being in/out of the top, bottom, right, and left boundaries (0 = in bounds, 1 = out of bounds). Cohen-Sutherland algorithm:
For every vertex (x, y), initially assign the code 0000.
If x > R then set bit 1, else if x < L then set bit 0.
If y > T then set bit 3, else if y < B then set bit 2.
Note how when x = R or x = L, or y = T or y = B, the corresponding bit is still zero; equality counts as in bounds.
Now, for every line segment (x1, y1) to (x2, y2):
– If both endpoint codes are 0000, draw the line; we're done (accept).
– If code1 & code2 (bitwise AND) is nonzero, skip the line; we're done (reject).
– Otherwise, calculate the intersection with a boundary, re-encode, and reconsider the new pair of endpoints.

73 COHEN-SUTHERLAND CLIPPING
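The slide's figure is not reproduced. As a sketch, the outcode computation and trivial tests described above look like this in C; passing the window bounds explicitly and the bit names are assumptions:

    enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };  /* bits 0..3 */

    int outcode(double x, double y,
                double L, double R, double B, double T)
    {
        int code = 0;                    /* 0000 means inside        */
        if      (x < L) code |= LEFT;    /* equality stays in bounds */
        else if (x > R) code |= RIGHT;
        if      (y < B) code |= BOTTOM;
        else if (y > T) code |= TOP;
        return code;
    }

    /* For a segment with endpoint codes code1 and code2:
       (code1 | code2) == 0  -> trivial accept (both inside);
       (code1 & code2) != 0  -> trivial reject (both outside the
                                same boundary);
       otherwise clip against a crossed boundary, re-encode the new
       endpoint, and test again. */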

74 LIANG-BARSKY CLIPPING
Liang-Barsky asks: for which values of the parameter u does a line segment enter or exit the bounds? There can be at most two of each; we care about the maximum entry value and the minimum exit value.
– For each line segment and each boundary, check the value of u at the intersection of the segment's line with that boundary.
– If u < 0 on entry and u > 1 on exit: accept.
– If u > 1 on entry or u < 0 on exit: reject.
– If u on entry > u on exit: reject.
– Otherwise, clip and try again; note that we don't need to perform an extra calculation, because the new point can be derived from u.

75 LIANG-BARSKY CLIPPING

76 LIANG-BARSKY ALGORITHM

77 LIANG-BARSKY CODE
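The slide's code did not survive extraction; the following is a sketch of the standard Liang-Barsky routine matching the description on slide 74:

    #include <stdbool.h>

    /* For one boundary: p is the component of the direction pointing
       out of that boundary, q the signed distance from the start
       point to it; u1/u2 track the maximum entry and minimum exit
       parameters.  Returns false on trivial rejection. */
    static bool clip_test(double p, double q, double *u1, double *u2)
    {
        if (p < 0.0) {                   /* segment enters here       */
            double r = q / p;
            if (r > *u2) return false;   /* entry after exit: reject  */
            if (r > *u1) *u1 = r;
        } else if (p > 0.0) {            /* segment exits here        */
            double r = q / p;
            if (r < *u1) return false;   /* exit before entry: reject */
            if (r < *u2) *u2 = r;
        } else if (q < 0.0) {
            return false;                /* parallel and outside      */
        }
        return true;
    }

    /* Clip (x1,y1)-(x2,y2) to [xmin,xmax] x [ymin,ymax] in place;
       returns false if nothing of the segment is visible. */
    bool liang_barsky(double *x1, double *y1, double *x2, double *y2,
                      double xmin, double xmax, double ymin, double ymax)
    {
        double dx = *x2 - *x1, dy = *y2 - *y1, u1 = 0.0, u2 = 1.0;
        if (clip_test(-dx, *x1 - xmin, &u1, &u2) &&
            clip_test( dx, xmax - *x1, &u1, &u2) &&
            clip_test(-dy, *y1 - ymin, &u1, &u2) &&
            clip_test( dy, ymax - *y1, &u1, &u2)) {
            if (u2 < 1.0) { *x2 = *x1 + u2 * dx; *y2 = *y1 + u2 * dy; }
            if (u1 > 0.0) { *x1 = *x1 + u1 * dx; *y1 = *y1 + u1 * dy; }
            return true;
        }
        return false;
    }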

78 CLIPPING READY TRANSFORMATION IN PARALLEL AND PERSPECTIVE PROJECTION
Clipping is responsible for eliminating those parts of the scene which do not project onto the window rectangle because they are outside the viewing volume. It consists of depth clipping (against the front and back planes) and clipping at the side faces of the volume. For perspective projection, depth clipping is also necessary to solve the wrap-around problem: it eliminates the objects in the plane parallel to the window and incident to the eye, which would be mapped onto the ideal plane by the perspective transformation. For parallel projection, depth clipping can be accomplished in any coordinate system before the projection, where the depth information is still available. The selection of the coordinate system in which the clipping is done may depend on efficiency considerations, or more precisely:
1. The geometry of the clipping region has to be simple in the selected coordinate system, in order to minimize the number of necessary operations.
2. The transformation to the selected coordinate system from the world coordinate system, and from the selected coordinate system to pixel space, should involve the minimum number of operations.

79 CLIPPING READY TRANSFORMATION IN PARALLEL AND PERSPECTIVE PROJECTION
Considering the first requirement, for parallel projection the brick-shaped canonical view volume of the normalized eye coordinate system and the screen coordinate system are the best; but, unlike the screen coordinate system, the normalized eye coordinate system requires a new transformation after clipping to get to pixel space, so the screen coordinate system ranks as the better option. Similarly, for perspective projection, the pyramid-shaped canonical view volume of the normalized eye and the homogeneous coordinate systems require the simplest clipping calculations, but the latter does not require an extra transformation before the homogeneous division. For side-face clipping, the screen coordinate system needs the fewest operations, but separating the depth and side-face clipping phases might be disadvantageous for specific hardware realizations. In the next section, the most general case, clipping in homogeneous coordinates, will be discussed. The algorithms for other 3D coordinate systems can be derived from this general case by assuming the homogeneous coordinate h to be constant.

80 CLIPPING READY TRANSFORMATION IN PARALLEL AND PERSPECTIVE PROJECTION

81 Homogeneous Coordinates
In general, the homogeneous coordinate system is defined so that both homogeneous vectors and matrices are scalable: multiplying either by a non-zero scalar does not change the point or transformation it represents.

82 USE OF HOMOGENEOUS COORDINATES
Homogeneous coordinates are a system of coordinates used in projective geometry, as Cartesian coordinates are used in Euclidean geometry. They have the advantage that the coordinates of points, including points at infinity, can be represented using finite coordinates. Formulas involving homogeneous coordinates are often simpler and more symmetric than their Cartesian counterparts. Homogeneous coordinates have a range of applications, including computer graphics and 3D computer vision, where they allow affine transformations and, in general, projective transformations to be easily represented by a matrix. If the homogeneous coordinates of a point are multiplied by a non-zero scalar, then the resulting coordinates represent the same point. Since homogeneous coordinates are also given to points at infinity, the number of coordinates required to allow this extension is one more than the dimension of the projective space being considered. For example, two homogeneous coordinates are required to specify a point on the projective line, and three homogeneous coordinates are required to specify a point in the projective plane.

83 LINE AND PLANE CLIPPING INTERSECTIONS

84 LINE AND PLANE CLIPPING INTERSECTIONS
Line and Plane clipping illustration
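The illustrations are not reproduced; as a sketch of the underlying calculation, here is the intersection of a segment with a single clipping plane n.X + d = 0 (the names are assumptions):

    #include <stdbool.h>

    /* Intersect the segment P(t) = P0 + t*(P1 - P0), t in [0,1],
       with the plane n.X + d = 0, as when clipping an edge against
       one face of the view volume.  Writes the hit point and returns
       false if the segment misses the plane. */
    bool segment_plane_intersect(const double p0[3], const double p1[3],
                                 const double n[3], double d,
                                 double hit[3])
    {
        double dir[3] = { p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2] };
        double denom = n[0]*dir[0] + n[1]*dir[1] + n[2]*dir[2];
        if (denom == 0.0) return false;           /* parallel        */
        double t = -(n[0]*p0[0] + n[1]*p0[1] + n[2]*p0[2] + d) / denom;
        if (t < 0.0 || t > 1.0) return false;     /* outside segment */
        for (int i = 0; i < 3; i++) hit[i] = p0[i] + t * dir[i];
        return true;
    }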

85 3-D VIEWING EFFECTS: WIDE ANGLE VIEW, PANNING, WALK AROUND

86 WIDE ANGLE VIEW In photography and cinematography, a wide-angle lens is a lens whose focal length is substantially smaller than the focal length of a normal lens for a given film plane. This type of lens allows more of the scene to be included in the photograph, which is useful in architectural, interior and landscape photography, where the photographer may not be able to move farther from the scene to photograph it. A wide-angle lens is also one that projects a substantially larger image circle than would be typical for a standard-design lens of the same focal length; this large image circle enables either large tilt and shift movements with a view camera, or a wide field of view. By convention, in still photography the normal lens for a particular format has a focal length approximately equal to the length of the diagonal of the image frame or digital photo sensor; in cinematography, a lens of roughly twice the diagonal is considered "normal".

87 EXAMPLES OF WIDE ANGLE VIEW

88 PANNING In photography, panning refers to the rotation in a horizontal plane of a still or video camera. Panning a camera results in a motion similar to that of someone shaking their head from side to side, of an aircraft turning to a different heading, or of an opening door if the door stays facing one way. Filmmaking and professional video cameras pan by turning horizontally on a vertical axis, but the effect may be enhanced by adding other techniques, such as rails to move the whole camera platform. Slow panning is also combined with zooming in or out on a single subject, leaving the subject in the same portion of the frame, to emphasize or de-emphasize the subject respectively. In still photography, the panning technique is used to suggest fast motion and to bring out the subject from the other elements in the frame. In photographs it is usually recognizable by a foreground subject in action appearing still (e.g. a runner frozen in mid-stride) while the background is streaked and/or skewed in the apparently opposite direction of the subject's travel, similar to speed lines; it is often used in sports photography, primarily of racing. In video display technology, panning refers to the horizontal scrolling of an image that is wider than the display. For 3D modeling in computer graphics, panning means moving parallel to the current view plane: the camera moves perpendicular to the direction it is pointed, and this direction does not change, as in the sketch below.
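A minimal sketch of panning in this 3D-modeling sense, assuming a camera stored as a position plus orthonormal axes u (right), v (up) and n (view-plane normal):

    typedef struct {
        double eye[3];   /* camera position          */
        double u[3];     /* right axis (unit)        */
        double v[3];     /* up axis (unit)           */
        double n[3];     /* view-plane normal (unit) */
    } Camera;

    /* Translate the camera parallel to the current view plane; the
       axes are untouched, so the camera still points the same way. */
    void pan(Camera *cam, double dx, double dy)
    {
        for (int i = 0; i < 3; i++)
            cam->eye[i] += dx * cam->u[i] + dy * cam->v[i];
    }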

89 EXAMPLES OF PANNING

90 WALK AROUND In order to understand the walk-around view, let us first understand the fisheye. Fisheye: if the illustration above reminds you of a fish eye, that is no coincidence; the difference, of course, is that in this case there is a stupid-looking person inside of it pointing all over the place. My scheme could be described as a fisheye projection, something that is often associated with a large field of view. I keep the field of view fairly small, however, so the effect is not so distorted. Is fisheye a realistic characterization? That question makes my brain hurt. Or rather, my retina, because I think it has to do with how light hits the concave surface of the retina, and how the brain then deals with that and re-maps it to a sense of space. At any rate, a spherical surface for plotting points seems natural to me, at least in terms of process. Now start walking and looking at things: walk around the room (or around the African savannah, wherever you happen to find yourself) with your arm thrust in the "front" direction, index finger extended. As you walk around, turning occasionally, notice that your arm rotates along with you, effectively pointing to things in the environment exactly in front, where your eyes are gazing. You may not want to do this in public, as people would think that you are either a 3D computer graphics engineer or a nut-case.

91 EXAMPLE FOR WALK AROUND
For architects that do renovations, our tech for creating Walk-Arounds is crazy-exciting. That's because the camera also captures three-dimensional data at the same time that it creates Walk-Around virtual tours.
