1 Radiosity, Surface detail, Textures Santosh Reddy Gopu U-34723973

2 Overview
Basic Definitions
Radiosity
Radiosity Equation & Form Factor Computation
Progressive Refinement
Environment Mapping
Surface Detail
Texture Mapping
Bump Mapping
Non-photorealistic Rendering
Rendering Techniques
Rotoscoping

3 Intensity Intensity is definable only for a point source. An accurate intensity measurement is one made at a very large distance, and consequently with a very small signal at the detector. Intensity is an important concept in photometry and, to a much lesser extent, it has some application in radiometry. It is a property of a source, not of a detector.

4 Intensity Radiometry is the set of techniques for measuring electromagnetic radiation, including visible light. Radiometric techniques in optics characterize the distribution of the radiation’s power in space. Photometry is the science of the measurement of light in terms of its perceived brightness to the human eye. In modern photometry, the radiant power at each wavelength is weighted by a luminosity function that models human brightness sensitivity.

5 Luminous Intensity In photometry, luminous intensity is a measure of the wavelength-weighted power emitted by a light source in a particular direction per unit solid angle, based on the luminosity function. The human eye can only see light in the visible spectrum and has different sensitivities to light of different wavelengths within the spectrum. The curve which measures the response of the human eye to light is a defined standard known as the luminosity function.

6 Radiant Intensity In radiometry, radiant intensity is the radiant flux emitted, reflected, transmitted or received per unit solid angle; it is a directional quantity. It is the ratio of radiant power leaving a source to an element of solid angle propagated in the given direction. Radiant flux is the radiant energy emitted, reflected, transmitted or received per unit time.

7 Radiosity In 3D computer graphics, radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms, typical radiosity methods account only for paths which leave a light source and are reflected diffusely some number of times before reaching the eye. Radiosity is a global illumination algorithm: illumination arriving at a surface comes not just directly from the light sources, but also from other surfaces reflecting light.

8 Radiosity Calculating the overall light propagation within a scene (global illumination, for short) is a very difficult problem. With a standard ray tracing algorithm, this is a very time-consuming task, since a huge number of rays have to be shot. For this reason, the radiosity method was invented. The main idea of the method is to store illumination values on the surfaces of the objects as the light is propagated, starting at the light sources.


12 Diffuse Interreflection A surface that is a "diffuse reflector" of light energy reflects any incident light energy in all directions, with intensity dependent only on the angle between the surface's normal and the incoming light vector (Lambert's law).

13 Diffuse Interreflection The reflected light energy often is colored, to some small extent, by the color of the surface from which it was reflected. This reflection of light energy in an environment produces a phenomenon known as "color bleeding," where a brightly colored surface's color will "bleed" onto adjacent surfaces.

14 Diffuse Interreflection (radiosity method) Direct Illumination

15 The Radiosity Equation The "radiosity equation" describes the amount of energy which can be emitted from a surface as the sum of the energy inherent in the surface (a light source, for example) and the energy which strikes the surface after being emitted from some other surface. The energy which leaves a surface (surface "j") and strikes another surface (surface "i") is attenuated by two factors: the "form factor" between surfaces "i" and "j", which accounts for the physical relationship between the two surfaces; and the reflectivity of surface "i", which will absorb a certain percentage of the light energy which strikes the surface.

16 The Radiosity Equation PATCHES Radiosity systems break surfaces into smaller areas called patches: whole surfaces are too big to carry a single radiosity value, and points are just too small.

17 The Radiosity Equation $B_i = E_i + \rho_i \sum_j B_j F_{j \to i} (A_j / A_i)$ B_i: the radiosity of patch i, the rate at which light leaves the patch, in units of energy per unit area per unit time. This is what we are computing. Note that this may be a spectrum (color): do all of this for R, G, and B. E_i: the emissivity of the patch. This is the light generated by the patch, in the same units as B_i; it will be zero for anything that is not a light source.

18 The Radiosity Equation ρ_i: the reflectivity of the surface. This is equivalent to k_dr in the Hall model. B_j: the radiosity of some other patch j. F_{j→i}: the form factor, the fraction of energy leaving patch j that arrives at patch i. It depends on shape, relative angles, intervening patches, etc. A_i and A_j are the areas of patches i and j.

19 What's that A_j/A_i ratio? For patch j of area A_j, the amount of energy leaving per unit area is B_j. Since the area can be of any size, we are interested in the total amount of energy: this is B_j A_j. But B_i is also energy per unit area, so we divide the total energy reaching patch i from patch j by A_i. Hence the ratio A_j/A_i.

20 Simplification The unit-area energy leaving patch j is scaled by $F_{j \to i} A_j$. Likewise, the unit-area energy leaving patch i is scaled by $F_{i \to j} A_i$. These are symmetrical paths, so $F_{i \to j} A_i = F_{j \to i} A_j$, and therefore $F_{j \to i} A_j / A_i = F_{i \to j}$. We can plug that into the above equation and ELIMINATE the areas: $B_i = E_i + \rho_i \sum_j B_j F_{i \to j}$

21 Now, solving the radiosity equation: we have one massive system of simultaneous equations, one equation per patch. Note that the problem is O(N²). A variety of "short-cuts" exist. We can solve using Gauss-Seidel iteration, a numerical method for solving systems of simultaneous linear equations.
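
A minimal sketch of such a Gauss-Seidel solve, assuming the emissivities, reflectivities, and form factors have already been computed and stored as NumPy arrays (all names here are illustrative, not from the slides):

```python
import numpy as np

def gauss_seidel_radiosity(E, rho, F, iterations=50):
    """Solve B_i = E_i + rho_i * sum_j F_ij * B_j by Gauss-Seidel iteration.

    E, rho: length-N arrays (one colour channel, per patch).
    F: N x N form-factor matrix with F[i, j] = F_ij
    (F[i, i] = 0 for planar or convex patches).
    """
    B = E.copy()                      # initial guess: emitted light only
    for _ in range(iterations):
        for i in range(len(B)):
            # Gauss-Seidel: reuse the freshest B values already updated
            # in this sweep rather than keeping a separate old copy
            B[i] = E[i] + rho[i] * np.dot(F[i], B)
    return B
```

In practice one would iterate until the largest per-patch change drops below a tolerance rather than running a fixed number of sweeps.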

22 Solving.

23 Won’t the image look patchy? No... We average the colors for patches to make vertex colors and interpolate

24 Form Factor Computation

25 Form factor geometry (surfaces i and j): dA_i and dA_j are differential areas of surfaces i and j; r is the vector from dA_i to dA_j; θ_i is the angle between the normal N_i and r; θ_j is the angle between the normal N_j and r. Between differential areas, the form factor equals: $dF_{dA_i \to dA_j} = \frac{\cos\theta_i \cos\theta_j}{\pi r^2} \, dA_j$

26 Between differential areas, the form factor equals $dF_{dA_i \to dA_j} = \frac{\cos\theta_i \cos\theta_j}{\pi r^2} \, dA_j$. The overall form factor between i and j is found by integrating over both surfaces: $F_{i \to j} = \frac{1}{A_i} \int_{A_i} \int_{A_j} \frac{\cos\theta_i \cos\theta_j}{\pi r^2} \, dA_j \, dA_i$

27 Next Step: Learn ways of computing form factors Recall the radiosity equation: $B_i = E_i + \rho_i \sum_j B_j F_{ij}$ The F_ij are the form factors. Form factors are independent of the radiosities (they depend only on scene geometry).

28 Form Factors in (More) Detail $F_{i \to j} = \frac{1}{A_i} \int_{A_i} \int_{A_j} V_{ij} \, \frac{\cos\theta_i \cos\theta_j}{\pi r^2} \, dA_j \, dA_i$ where $V_{ij}$ is the visibility (0 or 1)

29 We have two integrals to compute: one over surface i and one over surface j.

30 The Nusselt Analog Direct integration of the basic form factor equation is difficult even for simple surfaces! Nusselt developed a geometric analog which allows simple and accurate calculation of the form factor between a surface and a point on a second surface.

31 The Nusselt Analog The "Nusselt analog" involves placing a hemispherical projection body, with unit radius, at a point on a surface. The second surface is spherically projected onto the projection body, then cylindrically projected onto the base of the hemisphere. The form factor is, then, the area projected on the base of the hemisphere divided by the area of the base of the hemisphere.

32 Numerical Integration: The Nusselt Analog (figure: differential area dA_i with patch A_j projected onto the unit hemisphere)

33 The Nusselt Analog
1. Project A_j along its normal: $A_j \cos\theta_j$
2. Project the result onto the sphere: $A_j \cos\theta_j / r^2$
3. Project the result onto the unit circle: $A_j \cos\theta_j \cos\theta_i / r^2$
4. Divide by the unit circle's area: $A_j \cos\theta_j \cos\theta_i / (\pi r^2)$
5. Integrate over all points on A_j: $F_{dA_i \to A_j} = \int_{A_j} \frac{\cos\theta_i \cos\theta_j}{\pi r^2} \, dA_j$

34 Hemi-cube Approximation

35 Problems with Hemi-Cube It is a crude sampling method: more ideal would be a uniform subdivision of the hemisphere around a patch center, but that is more computationally intensive (it has been tried). It leads to aliasing, depending on the size of the 'pixels'. It gives form factors (and hence radiosities) at patch centers, which is not ideal for smooth shading.

36 Use Ray Casting

37 Advantages of Ray Casting Can easily obtain the form factors and radiosities at the vertices (smooth shading). Can vary the sampling scheme. Visibility is naturally taken into account. See notes for the derivation and nature of the approximation.
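
A minimal sketch of a Monte Carlo form-factor estimate by ray casting, assuming hypothetical helpers: each patch can return uniformly sampled surface points with normals, and occluded(p, q) tests whether the segment between two points is blocked:

```python
import numpy as np

def estimate_form_factor(patch_i, patch_j, occluded, samples=64):
    """Monte Carlo estimate of F_ij (fraction of energy leaving i
    that arrives at j), sampling point pairs on both patches."""
    total = 0.0
    for _ in range(samples):
        p_i, n_i = patch_i.sample_point()   # point + unit normal
        p_j, n_j = patch_j.sample_point()
        r = p_j - p_i
        dist2 = np.dot(r, r)
        r_dir = r / np.sqrt(dist2)
        cos_i = np.dot(n_i, r_dir)          # cos(theta_i)
        cos_j = np.dot(n_j, -r_dir)         # cos(theta_j)
        if cos_i <= 0.0 or cos_j <= 0.0 or occluded(p_i, p_j):
            continue                        # back-facing or blocked: V_ij = 0
        total += cos_i * cos_j / (np.pi * dist2)
    # average the integrand, then multiply by the area of the j domain;
    # uniform sampling over patch i cancels the leading 1/A_i
    return (total / samples) * patch_j.area
```

Sampling at patch vertices instead of random interior points gives the per-vertex form factors that the slide mentions for smooth shading.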

38 Radiosity Matrix The "full matrix" radiosity solution calculates the form factors between each pair of surfaces in the environment, then forms a series of simultaneous linear equations. This matrix equation is solved for the "B" values, which can be used as the final intensity (or color) value of each surface.

39 Radiosity - Summary The basic radiosity method is:
1) decompose the scene into patches A_i
2) compute the form factors F_{i-j} using hemicubes
3) solve the radiosity system of equations to get B_i for each patch i (for R, G, B)
4) render by feeding the B_i values into a Gouraud interpolation renderer
Note: 1) and 2) depend only on geometry; 3) depends on 1) and 2) plus the reflection and emission values; 4) depends on 1), 2), 3) and the view.

40 Radiosity Matrix This method produces a complete solution, at the substantial cost of first calculating form factors between each pair of surfaces and then the solution of the matrix equation. Each of these steps can be quite expensive if the number of surfaces is large: complex environments typically have upwards of ten thousand surfaces, and environments with one million surfaces are not uncommon. This leads to substantial costs not only in computation time but in storage.

41 Solve [F][B] = [E]
Direct methods, O(n³): Gaussian elimination (Goral, Torrance, Greenberg, Battaile, 1984)
Iterative methods, O(n²): energy conservation implies the matrix is diagonally dominant, so iteration converges
Gauss-Seidel, Jacobi: gathering (Nishita, Nakamae, 1985; Cohen, Greenberg, 1985)
Southwell: shooting (Cohen, Chen, Wallace, Greenberg, 1988)

42 Progressive Refinement The aim is to generate a sequence of improving images. The basic algorithm works against this: there is a large initial start-up cost in generating ALL the form factors (which need to be stored), and during one iteration of Gauss-Seidel we generate B_1, B_2, etc., so it takes a long time before we calculate B_N, even if B_N is significant, say a light source.

43 Gathering In a sense, the light leaving patch i is determined by gathering in the light from the rest of the environment

44 Gathering Gathering light through a hemi-cube allows one patch radiosity to be updated.


47 Shooting Shooting light through a single hemi-cube allows the whole environment's radiosity values to be updated simultaneously. For all j: $B_j \leftarrow B_j + \rho_j \, \Delta B_i \, F_{i \to j} \, A_i / A_j$, where $\Delta B_i$ is the unshot radiosity of patch i.
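
A minimal sketch of the shooting (progressive refinement) loop under the same array assumptions as the Gauss-Seidel sketch above; the convergence threshold is illustrative:

```python
import numpy as np

def shoot_radiosity(E, rho, areas, F, steps=500, eps=1e-6):
    """Progressive refinement: repeatedly pick the patch with the most
    unshot energy and distribute it to every patch in one pass."""
    B = E.copy()                 # current radiosity estimate
    unshot = E.copy()            # radiosity not yet distributed
    for _ in range(steps):
        i = int(np.argmax(unshot * areas))      # most unshot energy
        if unshot[i] * areas[i] < eps:
            break                               # effectively converged
        dB, unshot[i] = unshot[i], 0.0          # shoot all of i's energy
        for j in range(len(B)):
            # radiosity patch j gains from i's shot, per unit area of j
            delta = rho[j] * dB * F[i, j] * areas[i] / areas[j]
            B[j] += delta
            unshot[j] += delta
        # after every shot, B already forms a usable (improving) image
    return B
```

Because the light sources start with all the unshot energy, they are shot first, which is exactly why this ordering produces useful images early.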


50 Applications The major advantage of the radiosity method over ray tracing is that it generates a view-independent solution, since it precalculates the radiative transfer between the surfaces of an environment. It offers a powerful capability that the ray tracing technique cannot: interactive walkthroughs of the environment. The method works best for man-made environments such as architectural designs, interiors of offices, factories, and lighting designs. It can be a useful tool for lighting designers and theatrical lighting designers, since it accurately simulates global illumination.

51 Examples

53 Limitations Typical radiosity methods only account for light paths of the form LD*E, i.e. paths which start at a light source (L), make multiple diffuse (D) bounces, and reach the eye (E). They are generally not used to solve the complete rendering equation. Basic radiosity also has trouble resolving sudden changes in visibility (e.g. hard shadow edges), because the coarse, regular discretization into piecewise-constant elements corresponds to a low-pass box filter in the spatial domain.

54 Environment Mapping In Computer Graphics, environment mapping is an efficient image-based lighting technique for approximating the appearance of a reflective surface by means of a pre-computed texture image. The texture is used to store the image of the distant environment surrounding the rendered object. Simulates the result of ray tracing without going through the expensive ray tracing computation. First proposed by Blinn and Newell in 1976.

55 Approximations made
The map should contain a view of the world with the point of interest on the object as the eye
We can't store a separate map for each point, so one map is used with the eye at the center of the object
This introduces distortions in the reflection, but the eye doesn't notice
Distortions are minimized for a small object in a large room
The object will not reflect itself
The mapping can be computed at each pixel, or only at the vertices

56 Environment maps The environment map may take one of several forms: cubic mapping, spherical mapping (two variants), or parabolic mapping. The form describes the shape of the surface on which the map "resides", and determines how the map is generated and how it is indexed.

57 Cubic Mapping The map resides on the surfaces of a cube around the object; typically, the faces of the cube are aligned with the coordinate axes. To generate the map: for each face of the cube, render the world from the center of the object with the cube face as the image plane (rendering can be arbitrarily complex, since it's off-line); or take 6 photos of a real environment with a camera in the object's position. Actually, take many more photos from different places the object might be, and warp them to approximate the map for all intermediate points.

58 Example

59 Indexing Cube Mapping The texture is a set of six 2D images representing the faces of a cube. These targets must be consistent, complete, and have equal width and height. The texture coordinate (s,t,r) can be generated with (1) a reflection map or (2) a normal map. The (s,t,r) texture coordinates are treated as a direction vector emanating from the center of the cube. The interpolated per-fragment (s,t,r) selects one of the six cube-face 2D images based on the largest-magnitude coordinate (the major axis). A new 2D (s,t) is calculated by dividing the two other coordinates (the minor-axis values) by the major-axis value. The new (s,t) is then used to look up into the selected 2D texture image face of the cube map.
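
A minimal sketch of that face-selection and (s,t) derivation, with an illustrative face-naming convention (the exact per-face orientation of the 2D coordinates varies between APIs):

```python
def index_cube_map(s, t, r):
    """Select a cube face from direction (s, t, r) and derive 2D
    coordinates on it; returns (face, u, v) with u, v in [-1, 1]."""
    ax, ay, az = abs(s), abs(t), abs(r)
    if ax >= ay and ax >= az:          # s is the major axis
        face = "+X" if s > 0 else "-X"
        u, v = t / ax, r / ax          # divide minor axes by the major one
    elif ay >= az:                     # t is the major axis
        face = "+Y" if t > 0 else "-Y"
        u, v = s / ay, r / ay
    else:                              # r is the major axis
        face = "+Z" if r > 0 else "-Z"
        u, v = s / az, t / az
    return face, u, v
```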

60 Two TexGen modes (figure: the reflection vector r and the normal n, corresponding to the reflection-map and normal-map modes)

61 Applications of Cube Maps Sky illumination; dynamic cube-map reflections; per-pixel shading; stable specular highlights (a better alternative to massive over-tessellation, though limited to distant specular lights).

62 Sphere Mapping A sphere map is basically a photograph of a reflective sphere in an environment. It is the original environment mapping technique, proposed by Blinn and Newell, and uses lines of longitude and latitude to map parametric variables to texture coordinates. OpenGL supports sphere mapping. It requires a circular texture map, equivalent to an image taken with a fisheye lens.

63 Example

64 Sphere Mapping To generate the map: take a map point (s,t), cast a ray onto a sphere in the -Z direction, and record what is reflected. This is equivalent to photographing a reflective sphere with an orthographic camera (long lens, big distance), which again makes the method suitable for film special effects.

65 Indexing Sphere Maps Given the reflection vector r, the texture coordinates are $m = 2\sqrt{r_x^2 + r_y^2 + (r_z + 1)^2}$, $s = r_x/m + 1/2$, $t = r_y/m + 1/2$. Implemented in hardware. Problems: highly non-uniform sampling, and a highly non-linear mapping.
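
A minimal sketch of that lookup, using the standard sphere-map formula (the one behind OpenGL's GL_SPHERE_MAP texgen):

```python
import math

def sphere_map_coords(rx, ry, rz):
    """Map a unit reflection vector to sphere-map coordinates (s, t).

    Degenerate when r = (0, 0, -1), the single point 'behind' the sphere.
    """
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return rx / m + 0.5, ry / m + 0.5
```

The division by m is what makes the mapping non-linear: texels near the map's rim cover enormous solid angles, which is the non-uniform sampling problem noted above.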

67 Parabolic Mapping Assume the map resides on a parabolic surface: two surfaces, facing each other. It improves on sphere maps: texture coordinate generation is a near-linear process, sampling is more uniform, and the result is more view-independent. However, it requires multiple passes to implement, so it is not generally used.

68 Surface Detail Surface detail incorporates fine detail into the scene. Modeling such detail with polygons is impractical; instead, we map an image onto the surface. There are two basic methods for adding surface detail to rendered surfaces: texture mapping and bump mapping.

69 Texture Mapping Texture mapping is the process of mapping an image onto a polygon in order to increase the detail of the rendering. This allows us to get fine-scale details without resorting to rendering tons of polygons. The image that gets mapped onto the polygon is called a texture map or texture, and is usually a regular color image.

70 Texture Map and Coordinates We define our texture map as existing in texture space, which is a normal 2D space. The lower left corner of the image is the coordinate (0,0) and the upper right of the image is the coordinate (1,1). To render a textured polygon, we must start by assigning a texture coordinate to each vertex. A texture coordinate is a 2D point [t_x, t_y] in texture space that is the coordinate of the image that will get mapped to a particular vertex.

71 Texture Interpolation The actual texture mapping computations take place at the scan conversion and pixel rendering stages of the graphics pipeline. During scan conversion, as we are looping through the pixels of a polygon, we must interpolate the (t_x, t_y) texture coordinates in a similar way to how we interpolate the RGB color and z depth values. As with all other interpolated values, we must precompute the slopes of each coordinate as they vary across the image pixels in x and y. Once we have the interpolated texture coordinate, we look up that pixel in the texture map and use it to color the pixel.
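
A minimal sketch of that inner loop for one scanline, assuming simple screen-space (affine) interpolation and a nearest-texel lookup; real pipelines add perspective correction, which this omits:

```python
def textured_scanline(framebuffer, texture, y, x0, x1, uv0, uv1):
    """Fill pixels x0..x1 of row y, interpolating (t_x, t_y) across
    the span and point-sampling the texture.

    uv0, uv1: texture coordinates at the span ends, each in [0, 1].
    texture / framebuffer: height x width x channels arrays.
    """
    th, tw = texture.shape[0], texture.shape[1]
    span = max(x1 - x0, 1)
    for x in range(x0, x1 + 1):
        a = (x - x0) / span                      # interpolation parameter
        tx = (1 - a) * uv0[0] + a * uv1[0]       # interpolated t_x
        ty = (1 - a) * uv0[1] + a * uv1[1]       # interpolated t_y
        i = min(int(ty * th), th - 1)            # nearest texel row
        j = min(int(tx * tw), tw - 1)            # nearest texel column
        framebuffer[y, x] = texture[i, j]
```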

72 Types of Texture Mapping

73 Drawbacks

74 Polygon Mesh Texture Mapping Also called two-part texture mapping; introduced by Bier and Sloan in 1986. The texture is mapped onto an intermediate surface before being mapped onto the object. It works as follows. Step 1: map from two-dimensional texture space to a simple three-dimensional intermediate surface, e.g. a cylinder or sphere: T(u,v) → T'(x_i, y_i, z_i). This is known as S-mapping.

75 Aliasing A digital image is a sampled signal; if the signal is not band-limited, aliasing will occur. In texture mapping, at large distances textures get smaller, which results in higher spatial frequencies on the screen, which in turn increases the risk of aliasing. Aliasing can be reduced by two methods: filtering and mip-mapping.

76 Anti-aliasing There are two problems that need to be addressed: magnification and minification.

77 Magnification What happens when we get too close to the textured surface, so that we can see the individual texels up close? This is called magnification. We can define various magnification behaviors, such as point sampling, bilinear sampling, and bicubic sampling.

78 Magnification
With point sampling, each rendered pixel just samples the texture at a single texel, nearest to the texture coordinate. This causes the individual texels to appear as solid colored rectangles.
With bilinear sampling, each rendered pixel performs a bilinear blend between the nearest 4 texel centers. This causes the texture to appear smoother when viewed up close; however, when viewed too close, the bilinear nature of the blending can become noticeable.
Bicubic sampling is an enhancement to bilinear sampling that actually samples a small 4x4 grid of texels and performs a smoother bicubic blend. This may improve image quality for up-close situations, but adds some memory access costs.
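
A minimal sketch of the bilinear case, assuming a texture stored as a height x width x channels NumPy array, coordinates in [0, 1], and clamping at the edges:

```python
import numpy as np

def bilinear_sample(texture, tx, ty):
    """Blend the 4 texels nearest to (tx, ty)."""
    th, tw = texture.shape[0], texture.shape[1]
    # texel-space position, offset so texel centers sit at integers
    x = tx * tw - 0.5
    y = ty * th - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0                     # fractional blend weights
    x0, x1 = np.clip(x0, 0, tw - 1), np.clip(x0 + 1, 0, tw - 1)
    y0, y1 = np.clip(y0, 0, th - 1), np.clip(y0 + 1, 0, th - 1)
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bottom
```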

79 Minification
Minification is the opposite of magnification and refers to how we handle the texture when we view it from far away. Ideally, we would see textures from such a distance that each texel maps to roughly one pixel; if this were the case, we would never have to worry about magnification and minification. However, this is not a realistic case, as we usually have cameras moving throughout an environment, constantly getting nearer and farther from different objects.
When we view a texture from too close, a single texel maps to many pixels; there are several popular magnification rules to address this. When we are far away, many texels may map to a single pixel; minification addresses this issue. Also, when we view flat surfaces from a more edge-on point of view, we get situations where texels map to pixels in a very stretched way. This case is often handled in a similar way to minification, but there are other alternatives.

80 Minification Just like magnification, we can use a simple point-sampling rule for minification. However, this can lead to a common texture problem known as shimmering or buzzing, which is a form of aliasing. Consider a detailed texture map with lots of different colors that is only being sampled at a single point per pixel. If some region of this texture maps to a single pixel, the sample point could end up anywhere in that region. If the region has large color variations, the pixel may change color significantly even if the triangle only moves a minute amount. The result is a flickering/shimmering/buzzing effect. To fix this problem, we must use some sort of minification technique.

81 Minification Ideally, we would look at all of the texels that fall within a single pixel and blend them somehow to get our final color This would be expensive, mainly due to memory access cost, and would get worse the farther we are from the texture A variety of minification techniques have been proposed over the years (and new ones still show up) One of the most popular methods is known as mipmapping

82 Mipmapping
Mipmapping was first published in 1983, although the technique had been in use for a couple of years at the time. It is a reasonable compromise in terms of performance and quality, and is the method of choice for most graphics hardware.
In addition to storing the texture image itself, several mipmaps are precomputed and stored. Each mipmap is a scaled-down version of the original image, computed with a medium- to high-quality scaling algorithm. Usually, each mipmap is half the resolution of the previous image in both x and y. For example, if we have a 512x512 texture, we would store up to 9 mipmaps from 256x256, 128x128, 64x64, down to 1x1. Altogether, this adds 1/3 extra memory per texture (1/4 + 1/16 + 1/64 + ... = 1/3).
Usually, textures have resolutions in powers of 2. If they don't, the first mipmap (the highest res) is usually the original resolution rounded down to the nearest power of 2 instead of being half the original res; this causes a slight memory penalty. Non-square textures are no problem, especially if they have power-of-2 resolutions in both x and y. For example, a 16x4 texture would have mipmaps of 8x2, 4x1, 2x1, and 1x1.

83 Mipmapping When rendering a mipmapped texture, we still compute an interpolated (t_x, t_y) per pixel, as before. Instead of just using this to do a point-sampled (or bilinear...) lookup of the texture, we first determine the appropriate mipmap to use. Once we have the right mipmap, we can either point sample or bilinear sample it. If we want to get fancy, we can also blend between the 2 nearest mipmaps instead of just picking the nearest one: we perform a bilinear sample of both mips and then do a linear blend of those. This is called trilinear mipmapping and is the preferred method. It requires 8 texels to be sampled per pixel.
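
A minimal sketch of trilinear lookup built on a bilinear sampler like the one above; computing the level of detail from screen-space derivatives is simplified here to a given lod value:

```python
import numpy as np

def trilinear_sample(mip_chain, tx, ty, lod):
    """Blend bilinear samples from the two mip levels bracketing lod.

    mip_chain: list of images; mip_chain[0] is full resolution and each
    later entry halves the resolution. lod is assumed precomputed from
    the texel-to-pixel footprint (roughly log2 of texels per pixel).
    """
    lod = float(np.clip(lod, 0.0, len(mip_chain) - 1))
    lo = int(np.floor(lod))
    hi = min(lo + 1, len(mip_chain) - 1)
    f = lod - lo                                   # blend between levels
    a = bilinear_sample(mip_chain[lo], tx, ty)     # 4 texels
    b = bilinear_sample(mip_chain[hi], tx, ty)     # 4 texels (8 total)
    return (1 - f) * a + f * b
```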

84 Mipmapping Limitations Mipmapping is a reasonable compromise in terms of performance and quality. It requires no more than 8 samples per pixel, so its performance is predictable and parallel-friendly. However, it tends to have visual quality problems in certain situations. Overall, it tends to blur things a little too much, especially as the textured surfaces appear more edge-on to the camera. The main reason for its problems is that the mipmaps themselves are generated with a uniform-aspect filter. This means that if a roughly 10x10 region of texels maps to a single pixel, we should be fine, as we would blend between the 16x16 and 8x8 mipmaps. However, if a roughly 10x3 region maps to a single pixel, the mip selector will generally choose the worse (larger) of the two dimensions to be safe, and so will still blend between the 16x16 and 8x8 mipmaps, over-blurring along the short dimension.

85 Applications Molecular Graphics: Isocontouring of the hydrophobic potential on the solvent-accessible surface of Gramicidin A Solvent-accessible surface of ethanol, color coded against the electrostatic potential, using the traditional color coding (left) and using texture mapping (right).

86 Crash Simulations

87 Wire Frame Mapping

88 Bump Mapping Bump mapping is a technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normals during lighting calculations. The result is an apparently bumpy surface. Introduced by James Blinn in 1978. Bump mapping is much faster and consumes fewer resources for the same level of detail compared to displacement mapping. The primary limitation of bump mapping is that it perturbs only the surface normals, without changing the underlying surface itself.

89 Bump Mapping

91 Methods The method uses a height map to simulate surface displacement, yielding the modified normal. The steps, performed before the lighting calculation for each pixel on the object's surface, are as follows:
1. Look up the height in the heightmap that corresponds to the position on the surface.
2. Calculate the surface normal of the heightmap.
3. Combine the surface normal from step 2 with the true surface normal so that the combined normal points in a new direction.
4. Calculate the interaction of the new bumpy surface with lights in the scene, e.g. using the Phong reflection model.
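
A minimal sketch of those four steps for a single shading sample, assuming the heightmap slope is taken by finite differences and a Lambertian (diffuse-only) term stands in for the full Phong model; bump_scale is an illustrative parameter:

```python
import numpy as np

def bump_shade(height, u, v, normal, tangent, bitangent, light_dir,
               bump_scale=1.0):
    """Perturb `normal` with heightmap gradients, then shade diffusely.

    height: 2D array; (u, v) in [0, 1]; normal, tangent, bitangent are
    unit vectors forming the local surface frame; light_dir is unit.
    """
    h, w = height.shape
    x = min(int(u * (w - 1)), w - 2)
    y = min(int(v * (h - 1)), h - 2)
    dhdu = height[y, x + 1] - height[y, x]   # steps 1-2: heightmap lookup
    dhdv = height[y + 1, x] - height[y, x]   # and its finite-difference slope
    # step 3: tilt the true normal by the heightmap slope
    n = normal - bump_scale * (dhdu * tangent + dhdv * bitangent)
    n /= np.linalg.norm(n)
    # step 4: light the perturbed normal (diffuse term only here)
    return max(float(np.dot(n, light_dir)), 0.0)
```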

92 Examples

93 Non-photorealistic rendering Non-photorealistic rendering (NPR) is an area of computer graphics that focuses on enabling a wide variety of expressive styles for digital art. In contrast to traditional computer graphics, which has focused on photorealism, NPR is inspired by artistic styles such as painting, drawing, technical illustration and animated cartoons. NPR has appeared in movies and video games in the form of toon shading, as well as in scientific visualization, architectural illustration and experimental animation. The term was coined by David Salesin and Georges Winkenbach in a 1994 paper.

94 Non-photorealistic rendering The broad range of techniques that can be applied as part of non-photorealistic rendering includes pen-and-ink illustration, cel shading, painterly rendering, and technical illustration. Some of the applications are explanation, illustration, and storytelling.

95 Stipple Rendering Stippling is a style of pen-and-ink illustration, which uses very few lines; instead, dots define and shade objects. Stipple placement should be locally irregular, though bunching and sparseness should be deliberate in order to avoid unintended patterns or tonal variations. These are surprisingly difficult results to achieve by hand. Computer stippling simulation systems need to determine placement and in some cases the size of the stipples. The common struggle between speed and image quality arises.

96 Stipple Rendering There are two methods to address this. The first can take 20 minutes to produce high-quality stipplings of grayscale images. This method initially distributes a set of stipple points. Then a Voronoi diagram partitions the space into cells, each containing one stipple. The center of mass of each cell is found, and each stipple is moved to the center of mass of its cell; the resulting arrangement of stipples is the stipple drawing. The grayscale tone of the underlying region is used to weight the centers of mass, so that the stipples become tightly packed in darker areas and sparser in light areas.
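
A minimal sketch of that relaxation (essentially Lloyd's algorithm with darkness weighting), assuming SciPy is available and approximating the Voronoi partition by assigning each pixel to its nearest stipple:

```python
import numpy as np
from scipy.spatial import cKDTree

def stipple(gray, n_points=2000, iterations=30, seed=0):
    """Place stipples on a grayscale image (values in [0, 1], 0 = black).

    Returns an (n_points, 2) array of (x, y) stipple positions."""
    rng = np.random.default_rng(seed)
    h, w = gray.shape
    weights = (1.0 - gray).ravel() + 1e-9      # darker pixels weigh more
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    points = rng.uniform([0, 0], [w, h], size=(n_points, 2))
    for _ in range(iterations):
        # nearest-stipple assignment approximates the Voronoi cells
        owner = cKDTree(points).query(pixels)[1]
        wsum = np.bincount(owner, weights=weights, minlength=n_points)
        cx = np.bincount(owner, weights=weights * pixels[:, 0],
                         minlength=n_points)
        cy = np.bincount(owner, weights=weights * pixels[:, 1],
                         minlength=n_points)
        keep = wsum > 0                         # skip empty cells
        points[keep] = np.column_stack([cx, cy])[keep] / wsum[keep, None]
    return points
```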

97 Example

98 Stipple Rendering The second method can stipple animations. This method pieces together patches of stipple samples. The samples vary tone by varying stipple density and are selected to match the values of the underlying grayscale image. This method achieves interactive rates, but the quality is lower and frame-to-frame incoherence causes shimmering.

99 Point Stippling The simplest rendering style is point stippling, which does not require any directional information. We simply place OpenGL points or scanned stipple textures at the positions obtained.

100 Hatching Hatching is an artistic technique used to create tonal or shading effects by drawing (or painting or scribing) closely spaced parallel lines. It is also used in monochromatic heraldic representations to indicate what the tincture of a "full-colour" emblazon would be. Hatching is especially important in essentially linear media, such as drawing, and in many forms of printmaking, such as engraving, etching and woodcut. In Western art, hatching originated in the Middle Ages, and developed further into cross-hatching, especially in the old master prints of the fifteenth century.

101 Technique The main concept is that the quantity, thickness and spacing of the lines affect the brightness of the overall image and emphasize forms, creating the illusion of volume. Hatching lines should always follow (i.e. wrap around) the form. By increasing quantity, thickness and closeness, a darker area will result. An area of shading next to another area whose lines go in a different direction is often used to create contrast. Line work can be used to represent colors, typically by using the same type of hatch to represent particular tones. For example, red might be made up of lightly spaced lines, whereas green could be made of two layers of perpendicular dense lines, resulting in a realistic image.

102 Hatching

103 Cross Hatching Crosshatching is an extension of hatching, which is the use of fine parallel lines drawn closely together, to create the illusion of shade or texture in a drawing. Crosshatching is the drawing of two layers of hatching at right angles to create a mesh-like pattern. Multiple layers in varying directions can be used to create textures. Crosshatching is often used to create tonal effects, by varying the spacing of lines or by adding additional layers of lines. Crosshatching is used in pencil drawing, but is particularly useful with pen and ink drawing, to create the impression of areas of tone, since the pen can only create a solid black line.

104 Cross Hatching

105 Pen and Ink Techniques This was one of the first NPR techniques, and it has become quite famous. The basic idea is to render objects and images as if they were drawn by an artist using just pen and ink.

106 Pen and Ink Techniques The different elements that are involved in pen-and-ink illustration are:
Strokes: curved lines of varying thickness and density
Texture: character conveyed by a collection of strokes
Tone: perceived gray level across an image or segment
Outline: boundary lines that disambiguate structure

107 Pen and Ink Techniques Generally, strokes are generated by moving along a straight path. Moreover, each stroke can gain expressiveness through small irregularities added to its path and pressure. The figure depicts some of the stroke textures; they vary based on tone, resolution and orientation.

108 Pen and Ink Techniques The tone basically depends on the diffuse light. To achieve a better effect, we make use of a technique called "indication": large areas of consistent texture are textured only sparingly. By doing this, the image is made more interesting, as fewer strokes need to be used.

109 Pen and Ink Techniques Another issue in pen-and-ink illustration is rescaling. Enlarging or shrinking images can result in undesired changes in tone and sharpness. This issue is addressed by assigning a hierarchical structure to textures, so the amount of detail drawn varies with the size of the image. Lower-priority elements of a texture are not drawn when the image is small, so the strokes do not become crowded and appear only dark instead of defining the object.

110 Pen and Ink Techniques

111 Painterly Rendering Another technique often used to produce images for graphic design and illustration employs brush strokes to simulate a hand-painted effect. This approach is very useful because it allows reproducing various artistic styles and gives emphasis to what is important in a given photograph. Existing algorithms try to address the various challenges of this approach; the first of these is physical simulation.

112 Painterly Rendering

113 Another challenge is to provide artists with user-friendly tools that take care of the mechanical details of image creation while providing the user parametric styles to produce various effects. This is useful especially for animation, where the animator can select the style to be applied and the algorithms take care of the tedious work.

114 Painterly Rendering Basically, this technique works in a way similar to how artists paint, namely starting with rough strokes and going back over the painting with smaller strokes to add details. Each layer is painted with a different stroke size, which helps to emphasize the details that the artist wants highlighted. Each layer starts from a blurred version of the original photograph, and brush strokes are applied to areas that are as large as the brush size. With this method, importance is given to the areas of the image that contain the most detail, since these are where the smaller brush strokes are applied.

115 Painterly Rendering

116 Rendering with mosaics Images (and panoramas) are 2D; the Lumigraph is 4D. What happened to 3D? A 3D Lumigraph subset: concentric mosaics.

117 3D Lumigraph One row of the (s,t) plane, i.e., hold t constant.

118 3D Lumigraph One row of the (s,t) plane, i.e., hold t constant; thus (s,u,v), a "row of images".

119 Concentric Mosaics

120 A set of manifold mosaics constructed from slit images taken by cameras rotating on concentric circles

121 Sample Images

122 Rendering a Novel view

123 Construction of Concentric Mosaics Synthetic scenes: uniform sampling in the angular direction; square-root sampling in the radial direction.

124 Construction of Concentric Mosaics: Real Scenes

125 Construction of Concentric Mosaics Problems with a single camera: limited horizontal FOV, and non-uniform horizontal spatial resolution. The video sequence can be compressed with VQ and entropy encoding (25X); the compressed stream gives 20 fps on a PII 300.

126 Results

128 Rotoscoping Rotoscoping is an animation technique in which animators trace over footage, frame by frame, for use in live-action and animated films. Originally, recorded live-action film images were projected onto a frosted glass panel and re-drawn by an animator. This projection equipment is called a rotoscope. Although this device was eventually replaced by computers, the process is still referred to as rotoscoping. In the visual effects industry, the term rotoscoping refers to the technique of manually creating a matte for an element on a live-action plate so it may be composited over another background.

129 Rotoscoping Animation of live-action film: images are projected onto a screen and redrawn, reanimating the footage frame by frame.

130 Rotoscoping 1915: invented by Max Fleischer. 1920s: used by Fleischer Studios. The goal was a more fluid animation style, created in response to the mechanical look of earlier animation techniques. Used for the first cartoons of Koko the Clown, Popeye, and Betty Boop. Allowed multiple artists to work on the same characters in Snow White.

131 Technique Rotoscope output can have slight deviations from the true line that differ from frame to frame, which when animated cause the line to shake unnaturally, or "boil". Avoiding boiling requires considerable skill in the person performing the tracing, though causing the "boil" intentionally is a stylistic technique sometimes used to emphasize the surreal quality of rotoscoping.

132 Applications Rotoscoping (often abbreviated as "roto") has often been used as a tool for visual effects in live-action movies. By tracing an object, a silhouette (called a matte) is created that can be used to extract that object from a scene for use on a different background. While blue and green screen techniques have made the process of layering subjects in scenes easier, rotoscoping still plays a large role in the production of visual effects imagery. Rotoscoping in the digital domain is often aided by motion tracking and onion-skinning software. Rotoscoping is often used in the preparation of garbage mattes for other matte-pulling processes.

133 Applications Rotoscoping has also been used to allow a special visual effect (such as a glow, for example) to be guided by the matte or rotoscoped line. One classic use of traditional rotoscoping was in the original three Star Wars films. It was used to create the glowing lightsaber effect, by creating a matte based on sticks held by the actors. To achieve this, editors traced a line over each frame with the prop, then enlarged each line and added the glow.

134 References
https://en.wikipedia.org/wiki/Luminous_intensity
https://en.wikipedia.org/wiki/Radiant_intensity
http://photonics.intec.ugent.be/education/ivpv/res_handbook/v2ch24.pdf
https://en.wikipedia.org/wiki/Radiosity_(computer_graphics)
http://www0.cs.ucl.ac.uk/staff/ucacajs/book_tmp/CGVE/slides/radiosity_all.ppt
http://orca.st.usm.edu/~jchen/courses/graphics/lectures/Radiosity.pdf
http://www.cs.bath.ac.uk/~pjw/NOTES/75-ACG/ch5-radiosity.pdf
https://www.cse.msu.edu/~cse872/lecture11.ppt
https://en.wikipedia.org/wiki/Reflection_mapping
http://web.cs.wpi.edu/~emmanuel/courses/cs543/f13/slides/lecture09_p1.pdf

135 References
http://research.cs.wisc.edu/graphics/Courses/638-f2001/lectures/cs638-4.ppt
http://140.129.20.249/~jmchen/cg/slides/handout/environmap.ppt
http://graphics.ucsd.edu/courses/cse167_f05/CSE167_08.ppt
http://cs.lamar.edu/faculty/osborne/5335/11-04/Texture3.ppt
http://computer-graphics.se/TSBK07-files/pdf/PDF09/5.pdf
https://en.wikipedia.org/wiki/Bump_mapping
http://www.csc.ncsu.edu/faculty/healey/download/cstr.04.pdf
https://en.wikipedia.org/wiki/Hatching
http://drawsketch.about.com/od/drawingglossary/g/crosshatching.htm
http://web.eecs.utk.edu/~huangj/CS594F08/intro-ibr_yang.ppt
http://www-inst.eecs.berkeley.edu/~cs294-7/fa07/lectures/rotoscoping.ppt

136 Thank you!

