Radiosity, Surface Detail, Textures


Radiosity, Surface Detail, Textures
EEL 5771-001 Introduction to Computer Graphics
Saravanan Manoharan

Overview:
Radiosity - intensity, radiant flux, form factor and the form-factor equation, the radiosity equation, progressive refinement
Surface detail - texture mapping, bump mapping, environment mapping, wireframe and polygon models
Non-photorealistic rendering - hatching and cross-hatching, pen-and-ink techniques, artistic style rendering, mosaic rendering

RADIOSITY: A rendering method that simulates light reflecting off one surface and onto another. It renders diffuse light and shadows more accurately than classical ray tracing. Radiosity produces the soft shadows from multiple reflections and light sources that exist in the real world. It is a global illumination algorithm: the illumination arriving at a surface comes not just directly from the light sources, but also from other surfaces reflecting light.

[Figures: a scene rendered without radiosity, and the same scene rendered with radiosity]

Radiosity is the energy leaving a patch surface per discrete time interval, and is the combination of emitted and reflected energy:

    B_i dA_i = E_i dA_i + R_i ∫_j B_j F_ji dA_j

B_i is the radiosity of patch i. E_i is the emitted energy. R_i is the reflectivity of the patch, giving the reflected energy when multiplied by the incident energy (the energy which arrives from other patches). The integral runs over all other patches j in the rendered environment, summing the contributions B_j F_ji dA_j to determine the energy leaving each patch j that arrives at patch i. F_ji is a constant form factor describing the geometric relation between patch i and each patch j.

Intensity:
- RGB components weighted equally
- The eye is sensitive to ratios of intensity rather than absolute intensity levels
- Most displays output light intensity L = k N^γ, where N is the number of electrons in the beam and γ is usually 2.2-2.5
- Light received from a point source falls off with the square of the distance from the source

SOLID ANGLE: A solid angle is a 3D angular volume that is defined analogously to the definition of a plane angle in two dimensions.
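To make the display relation L = k N^γ concrete, here is a minimal Python sketch; k, γ and the drive levels are illustrative assumptions, not values from the slides:

```python
# Minimal sketch of the display transfer curve L = k * N**gamma (gamma ~ 2.2).
# The constant k and the normalized drive levels are invented for illustration.
GAMMA = 2.2
K = 1.0

def displayed_intensity(drive_level: float) -> float:
    """Light output for a normalized drive level in [0, 1]."""
    return K * drive_level ** GAMMA

for n in (0.25, 0.5, 0.75, 1.0):
    print(f"drive {n:.2f} -> intensity {displayed_intensity(n):.3f}")
# Equal steps in drive level give unequal steps in displayed intensity, which is
# why intensity ratios (not absolute levels) matter perceptually.
```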

Intensity, I, is the flux per unit solid angle: the amount of flux from a point source contained in a small angular volume. Steradian (sr): the solid angle subtended at the center of a sphere by an area on its surface numerically equal to the square of the radius. When the solid angle is large enough that the angle with the surface normal is not constant over the entire solid angle, the total projected solid angle must be computed by integrating the incremental projected solid angles.
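A small worked sketch of the incremental solid angle, assuming the patch is far enough away that the approximation dω ≈ A cos θ / r² applies; the numbers are made up for illustration:

```python
import math

# Back-of-the-envelope sketch: the solid angle (in steradians) subtended by a
# small flat patch of area A at distance r, tilted by theta away from the line
# of sight, is approximately A * cos(theta) / r**2. Values below are invented.
def approx_solid_angle(area: float, distance: float, theta_rad: float) -> float:
    return area * math.cos(theta_rad) / distance ** 2

omega = approx_solid_angle(area=0.01, distance=2.0, theta_rad=math.radians(30))
print(f"approx. solid angle: {omega:.5f} sr")   # a tiny fraction of the full 4*pi sphere
```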

Radiant Flux: Light “flows” through space, and so radiant power is more commonly referred to as the “time rate of flow of radiant energy,” or radiant flux. Φ = dQ/dt where Q is radiant energy and t is time. In terms of a photographic light meter measuring visible light, the instantaneous magnitude of the electric current is directly proportional to the radiant flux.

Form Factor: In radiosity, the surfaces of the scene to be rendered are each divided up into one or more smaller surfaces (patches). A form factor is computed for each pair of patches; it is a coefficient describing how well the patches can see each other. Patches that are far away from each other, or oriented at oblique angles relative to one another, will have smaller view factors. The form factor in the above equation is defined to be the fraction of energy that leaves surface i and lands on surface j

Radiosity equation: Radiosity B is the energy per unit area leaving the patch surface per discrete time interval, and is the combination of emitted and reflected energy:

    B(x) dA = E(x) dA + ρ(x) dA ∫_S B(x') (cos θ_x cos θ_x' / (π r²)) Vis(x, x') dA'

B(x) dA is the total energy leaving a small area dA around a point x. E(x) dA is the emitted energy. ρ(x) is the reflectivity of the point, giving reflected energy per unit area when multiplied by the incident energy per unit area (the total energy which arrives from other patches). S denotes that the integration variable x' runs over all the surfaces in the scene, r is the distance between x and x', θ_x and θ_x' are the angles to the surface normals at the two points, and Vis(x, x') is the visibility between them (cf. the form factor equation below). If the surfaces are approximated by a finite number of planar patches, each of which is taken to have a constant radiosity B_i and reflectivity ρ_i, the above equation gives the discrete radiosity equation:

    B_i = E_i + ρ_i Σ_j F_ij B_j
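As a rough illustration of solving the discrete radiosity equation, here is a Gauss-Seidel-style relaxation sketch in Python; the three-patch emission, reflectivity and form-factor values are invented toy numbers, not a real scene:

```python
import numpy as np

# Minimal Gauss-Seidel sketch of the discrete radiosity equation
#     B_i = E_i + rho_i * sum_j F_ij * B_j
# The 3-patch values below are invented purely to show the iteration;
# a real scene would compute the form factors F from geometry.
E   = np.array([1.0, 0.0, 0.0])          # only patch 0 emits
rho = np.array([0.0, 0.7, 0.5])          # per-patch reflectivity
F   = np.array([[0.0, 0.4, 0.3],
                [0.4, 0.0, 0.2],
                [0.3, 0.2, 0.0]])        # F[i][j]: fraction of energy leaving i that reaches j

B = E.copy()
for _ in range(50):                      # relax until the radiosities settle
    for i in range(len(B)):
        B[i] = E[i] + rho[i] * np.dot(F[i], B)

print("patch radiosities:", np.round(B, 4))
```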

Form Factor Equation:

    F_ij = (1 / A_i) ∫_{A_i} ∫_{A_j} (cos θ cos θ' / (π r²)) v(x, y) dA_j dA_i

θ and θ' are the polar angles between the surface normals and the ray joining x and y. The visibility function v(x, y) = 0 if the ray from x to y is occluded, and v(x, y) = 1 otherwise. r is the distance between x and y. Form factors obey the reciprocity relation A_i F_ij = A_j F_ji, and dividing the radiosity equation by the patch area A gives the per-unit-area form used above.
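A hedged Monte Carlo sketch of this form-factor integral for a toy configuration (two parallel unit squares one unit apart, assumed fully visible to each other, so v = 1); the geometry and sample count are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of F_ij for two parallel, directly opposed unit squares
# one unit apart. Points are sampled uniformly on each patch and the integrand
# cos(theta_i) * cos(theta_j) / (pi * r^2) is averaged.
N = 200_000
xi = np.column_stack([rng.random(N), rng.random(N), np.zeros(N)])   # samples on patch i (z = 0)
xj = np.column_stack([rng.random(N), rng.random(N), np.ones(N)])    # samples on patch j (z = 1)
ni = np.array([0.0, 0.0, 1.0])                                      # normal of patch i
nj = np.array([0.0, 0.0, -1.0])                                     # normal of patch j (faces i)

d = xj - xi
r2 = np.sum(d * d, axis=1)
r = np.sqrt(r2)
cos_i = np.clip((d @ ni) / r, 0.0, None)        # angle at patch i
cos_j = np.clip(-(d @ nj) / r, 0.0, None)       # angle at patch j

A_j = 1.0                                       # area of patch j
F_ij = A_j * np.mean(cos_i * cos_j / (np.pi * r2))
print(f"estimated F_ij ~ {F_ij:.4f}")           # roughly 0.2 for this configuration
```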

Progressive Refinement: Instead of computing all form factors and then solving a complete set of linear radiosity equations, the progressive refinement method computes and distributes the light energy one patch per refinement iteration step. Progressive refinement is also used in rendering to quickly reveal the coarse structure of an image and gradually add detail over time: the first pixel is rendered as a single rectangle occupying the entire work area, the second through fourth each occupy a quarter of the work area, and sufficient progression refines the image until the rendered rectangles correspond to the target resolution. Advantage: the main advantage of the technique, from the user's point of view, is quick visual feedback about what the frame will look like before it is completely finished. This is useful when an artist is tweaking the lighting or the materials of the scene.

Example: progressive refinement of an image up to the screen resolution. How it works: each element in the scene maintains two energy values, an accumulated energy value and a residual (or "unshot") energy. We choose one of the elements in the scene as the "shooter". For each visible receiving element, we calculate the amount of energy transferred from the shooter to the receiver, as described by the corresponding form factor and the shooter's residual energy. This energy is multiplied by the receiver's reflectance, the fraction of incoming energy that is reflected back out again, usually represented by an RGB color.
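A compact sketch of this shooting loop, reusing the same invented three-patch values as the earlier radiosity example; it is a simplification (no visibility computation, fixed iteration count), not a production algorithm:

```python
import numpy as np

# Sketch of progressive-refinement ("shooting") radiosity: each patch keeps an
# accumulated radiosity and an unshot residual; each iteration the brightest
# shooter distributes its residual to the others via the form factors.
# Emission, reflectivity, areas and form factors are invented toy values.
E    = np.array([1.0, 0.0, 0.0])
rho  = np.array([0.0, 0.7, 0.5])
A    = np.array([1.0, 1.0, 1.0])
F    = np.array([[0.0, 0.4, 0.3],
                 [0.4, 0.0, 0.2],
                 [0.3, 0.2, 0.0]])       # F[i][j]

B      = E.copy()                        # accumulated radiosity
unshot = E.copy()                        # residual energy still to be shot

for _ in range(20):
    i = int(np.argmax(unshot * A))       # pick the patch with the most unshot energy
    for j in range(len(B)):
        if j == i:
            continue
        # reciprocity: F[j][i] = F[i][j] * A[i] / A[j]
        dB = rho[j] * unshot[i] * F[i, j] * A[i] / A[j]
        B[j]      += dB
        unshot[j] += dB
    unshot[i] = 0.0                      # patch i has shot all of its residual energy

print("radiosities after shooting:", np.round(B, 4))
```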

Radiosity applications: The radiosity method offers a capability that the ray tracing technique does not: interactive walkthroughs of the environment, because the view-independent solution can be reused as the camera moves. Since it deals with diffuse reflectors in enclosures, the method works best for man-made environments such as architectural designs, interiors of offices, factories, and lighting designs. It is a useful tool for lighting designers and theatrical lighting designers since it accurately simulates global illumination.

Typical features of a radiosity application for architecture would include:
• Translation from modeler data formats
• Access to material libraries
• Access to lighting libraries
• Positioning of lights
• Assignment of material properties
• Positioning of texture maps

Environment Mapping: Approximating the appearance of a reflective surface by means of a precomputed texture image. The texture is used to store the image of the distant environment surrounding the rendered object. Types: sphere mapping, cube mapping, HEALPix mapping.

Steps for environment mapping:
1. Create a 2D environment map
2. For each pixel on a reflective object, compute the surface normal
3. Compute the reflection vector from the eye position and the surface normal
4. Use the reflection vector to compute an index into the environment texture
5. Use the corresponding texel to color the pixel
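The per-pixel steps above can be sketched in a few lines of Python using the classic sphere-map parameterization; the view direction, normal and parameterization details are illustrative assumptions:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def reflect(view, normal):
    """Reflect the (unit) view direction about the (unit) surface normal: r = v - 2(v.n)n."""
    d = sum(vc * nc for vc, nc in zip(view, normal))
    return tuple(vc - 2.0 * d * nc for vc, nc in zip(view, normal))

def sphere_map_coords(r):
    """Map a reflection vector to (s, t) in [0, 1]^2 using the classic sphere-map formula."""
    rx, ry, rz = r
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return rx / m + 0.5, ry / m + 0.5

# Illustrative values: an eye ray along -z hitting a surface whose normal tilts toward +x/+y.
view = (0.0, 0.0, -1.0)
normal = normalize((0.3, 0.2, 1.0))
r = reflect(view, normal)
s, t = sphere_map_coords(r)
print("reflection vector:", tuple(round(c, 3) for c in r))
print("sphere-map texture coords:", (round(s, 3), round(t, 3)))
```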

Sphere mapping: Represents the sphere of incident illumination as though it were seen in the reflection of a reflective sphere through an orthographic camera. The map is captured with a fisheye lens or by prerendering a scene; for a synthetic scene it can be generated using ray tracing.

Cube mapping: uses the six faces of a cube as the map shape. The environment is projected onto the sides of a cube and stored as six square textures, or unfolded into six regions of a single texture. The cube map is generated by rendering the scene six times from a single viewpoint, with the views defined by a 90 degree view frustum for each cube face.
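A sketch of the per-direction lookup that cube mapping implies: pick the face from the dominant axis of the direction vector, then derive in-face coordinates. The face orientations below follow the common +X/-X/+Y/-Y/+Z/-Z layout, but exact conventions vary between APIs:

```python
# Illustrative cube-map lookup: the axis with the largest absolute component
# picks the face; the two remaining components, divided by it, give the
# position inside that face, remapped from [-1, 1] to [0, 1].
def cube_map_lookup(direction):
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                 # X face dominates
        face = "+X" if x > 0 else "-X"
        u, v, ma = (-z if x > 0 else z), -y, ax
    elif ay >= az:                            # Y face dominates
        face = "+Y" if y > 0 else "-Y"
        u, v, ma = x, (z if y > 0 else -z), ay
    else:                                     # Z face dominates
        face = "+Z" if z > 0 else "-Z"
        u, v, ma = (x if z > 0 else -x), -y, az
    return face, (u / ma + 1.0) / 2.0, (v / ma + 1.0) / 2.0

print(cube_map_lookup((0.2, 0.9, -0.3)))      # this direction lands on the +Y face
```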

HEALPix mapping: HEALPix environment mapping is similar to the other polyhedron mappings, but can be hierarchical, thus providing a unified framework for generating polyhedra that better approximate the sphere. This allows lower distortion at the cost of increased computation. [Figure: HEALPix mapping with a grid example]

We can display an object on a monitor screen in three different computer-model forms: wireframe model, surface model, and solid model.
Wireframe model: A wireframe model consists of points and curves only, and looks as if it is made up of a bunch of wires. This is the simplest CAD model of an object.
Advantages: 1. Ease of creation, and low hardware and software requirements. 2. Low data storage requirement.
Disadvantage: Very confusing to visualize. For example, a blind hole in a box may look like a solid cylinder, as shown in the figure.

Polygonal modeling:
Polygon mesh: a collection of vertices, edges and polygons where each edge is shared by at most two polygons.
• vertex: a point with coordinates (x, y, z)
• edge: a line segment that joins two vertices
• polygon: a closed sequence of edges
Different types of representation can be used at the same time in the same application:
• explicit vertex lists
• pointers to a list of vertices
• pointers to a list of edges
Criteria to evaluate the different representations: time, space, and the topological information available.
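A minimal Python sketch of such a shared-edge mesh representation; the two-triangle square is invented for illustration:

```python
# Indexed polygon-mesh sketch: an explicit vertex list, faces that point into
# it, and an edge table recording which faces share each edge (at most two,
# per the definition above). Geometry: a unit square split into two triangles.
from collections import defaultdict

vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]    # (x, y, z) positions
faces = [(0, 1, 2), (0, 2, 3)]                              # indices into `vertices`

edges = defaultdict(list)                                   # edge -> faces that use it
for f_idx, face in enumerate(faces):
    for a, b in zip(face, face[1:] + face[:1]):             # walk the closed edge loop
        edges[tuple(sorted((a, b)))].append(f_idx)

for edge, shared_by in sorted(edges.items()):
    print(f"edge {edge} shared by faces {shared_by}")       # interior edge (0, 2) is shared by both
```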

Surface detailing: In computer graphics, the fine surface detail on an object is generated using textures. Three aspects of texture are generally considered:
1. Texture mapping: the addition of a separately specified pattern to a smooth surface. After the pattern is added, the surface remains smooth. Also known as patterns or colour detail.
2. Bump mapping: the addition of apparent roughness to the surface. This is obtained with a perturbation function applied to the surface normals rather than to the geometry itself. Also known as roughness.
3. Simulating environments: for example, shadows and lighting using textures.

Texture mapping: Texture mapping is the process of mapping an image onto a triangle in order to increase the detail of the rendering. This allows us to get fine-scale detail without resorting to rendering tons of tiny triangles. The image that gets mapped onto the triangle is called a texture map or texture, and is usually a regular color image. We define the texture map as existing in texture space, a normal 2D space: the lower left corner of the image is the coordinate (0,0) and the upper right is the coordinate (1,1). The actual texture map might be 512 x 256 pixels, for example, with 24-bit color stored per pixel.

Texture Coordinates: To render a textured triangle, we must start by assigning a texture coordinate to each vertex. A texture coordinate is a 2D point [t_x, t_y] in texture space that is the coordinate of the image that will get mapped to a particular vertex.

Mapping functions: s = f(u, v), t = g(u, v). The mapping functions are frequently assumed to be linear:
    s = A u + B
    t = C v + D
where the constants A, B, C and D are obtained from the relationship between known points in the two systems.
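A short sketch combining this linear mapping with the nearest-texel (point-sampling) lookup mentioned on the anti-aliasing slide; the constants and the tiny checkerboard texture are illustrative:

```python
# Linear (u, v) -> (s, t) mapping followed by a nearest-texel lookup.
# The constants A..D and the 4x4 checkerboard "texture" are invented values.
TEX = [[(255, 255, 255) if (i + j) % 2 == 0 else (0, 0, 0) for j in range(4)]
       for i in range(4)]                       # tiny 4x4 RGB texture

A, B, C, D = 1.0, 0.0, 1.0, 0.0                 # s = A*u + B, t = C*v + D

def sample_nearest(u, v):
    s, t = A * u + B, C * v + D                 # surface parameters -> texture space [0,1]^2
    h, w = len(TEX), len(TEX[0])
    col = min(int(s * w), w - 1)                # snap to the nearest texel (point sampling)
    row = min(int(t * h), h - 1)
    return TEX[row][col]

print(sample_nearest(0.10, 0.10))               # near the lower-left texel
print(sample_nearest(0.60, 0.30))
```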

[Figures: an image without texture, and the same image with texture applied]

Anti-aliasing: Aliasing is a persistent problem in texture mapping. Frequency-domain analysis shows that silhouette edges and perspective can produce high-frequency patterns in image space. Point sampling a texture pattern (mapping the center of an image pixel into texture space and using the nearest texture pixel) can generate lots of aliasing problems; there is a wide literature on different filters to circumvent the problem.
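One of the simplest such filters, bilinear interpolation of the four surrounding texels, can be sketched as follows; the tiny grayscale texture is illustrative, and real renderers combine this with mip-mapping and other filters:

```python
# Bilinear texture filter sketch: instead of the single nearest texel, blend
# the four surrounding texels by their fractional distances.
TEX = [[0, 255, 0, 255],
       [255, 0, 255, 0],
       [0, 255, 0, 255],
       [255, 0, 255, 0]]                         # tiny grayscale "texture"

def sample_bilinear(s, t):
    h, w = len(TEX), len(TEX[0])
    x, y = s * (w - 1), t * (h - 1)              # continuous texel coordinates
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0                      # fractional offsets
    top    = TEX[y0][x0] * (1 - fx) + TEX[y0][x1] * fx
    bottom = TEX[y1][x0] * (1 - fx) + TEX[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

print(sample_bilinear(0.5, 0.5))                 # averages the neighborhood instead of snapping
```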

Bump Mapping: A sphere without bump mapping (left). A bump map to be applied to the sphere (middle). The sphere with the bump map applied (right) appears to have a mottled surface resembling an orange. Bump maps achieve this effect by changing how an illuminated surface reacts to light without actually modifying the size or shape of the surface.

Bump mapping: Adding a colour texture pattern to a smooth surface still produces a smooth-looking surface, so using a rough-textured pattern alone to add the appearance of roughness to a surface is not a good idea. Rough-textured surfaces have a small random component in the surface normal, and hence in the light reflection direction. Blinn developed a method for perturbing the surface normal instead. At any point of the surface S(u, v), the partial derivatives are S_u and S_v, and the surface normal n is given by the cross-product:
    n = S_u x S_v
The normal is then perturbed using the partial derivatives b_u and b_v of the bump function b(u, v).
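A hedged sketch of normal perturbation from a height (bump) map, for the simple case of a flat base patch where S_u = (1,0,0), S_v = (0,1,0) and n = (0,0,1); the 5x5 bump values are invented:

```python
import numpy as np

# Finite differences of the bump function give b_u and b_v, which tilt the
# unperturbed normal. For a flat patch, the perturbed normal can be written
# (-b_u, -b_v, 1) and then renormalized. The bump map is a toy example.
bump = np.array([[0.0, 0.0, 0.0, 0.0, 0.0],
                 [0.0, 0.2, 0.4, 0.2, 0.0],
                 [0.0, 0.4, 0.8, 0.4, 0.0],
                 [0.0, 0.2, 0.4, 0.2, 0.0],
                 [0.0, 0.0, 0.0, 0.0, 0.0]])

b_u = np.gradient(bump, axis=1)          # partial derivative of the bump map in u
b_v = np.gradient(bump, axis=0)          # partial derivative of the bump map in v

n = np.dstack([-b_u, -b_v, np.ones_like(bump)])
n /= np.linalg.norm(n, axis=2, keepdims=True)    # renormalize the perturbed normals

print("perturbed normal on the bump's left flank:", np.round(n[2, 1], 3))
```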

[Figures: further bump mapping examples]

Non-photorealistic rendering: Non-photorealistic rendering (NPR) does not try to reproduce the real object; it focuses on graphic art, personality and style of expression. NPR has recently become a hot research topic in computer graphics. It produces schematic representations of scenes that include only the relevant geometric information, clarifying shapes and focusing attention.

Algorithms in NPR:
• Silhouette and hidden-line algorithms
• Image-space silhouette algorithms
• Object-space silhouette algorithms
• Stroke and paint models
• Interactive painting and drawing
• Automatic painting algorithms
• Automatic pen-and-ink algorithms

Applications:
• Architectural designs
• Medical textbooks
• Paintings
• Film production
• Animation production

Hatching: Drawing surfaces with hatching strokes simultaneously conveys material, tone, and form: hatching can convey lighting, suggest material properties, and reveal shape. Hatching generally refers to groups of strokes with spatially coherent direction and quality. The density of the strokes controls tone for shading, while their character and aggregate arrangement suggest surface texture. Several non-photorealistic rendering (NPR) methods address hatching in still images rendered from 3D scenes; however, these methods lack temporal coherence and are thus unsuitable for animation.

Halftone: Halftone is the reprographic technique that simulates continuous-tone imagery through the use of dots, varying either in size or in spacing, thus generating a gradient-like effect. "Halftone" can also refer specifically to the image that is produced by this process.
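A minimal ordered-dither sketch of the idea, using a 2x2 Bayer threshold matrix over a made-up grayscale ramp; real halftoning uses larger screens and more careful tone mapping:

```python
# Ordered-dither halftone sketch: a 2x2 Bayer threshold matrix is tiled over a
# grayscale image, and each pixel is turned on or off by comparing its tone
# with the local threshold, so dot density follows tone. The 8x8 diagonal
# gray ramp is an invented input.
BAYER = [[0.25, 0.75],
         [1.00, 0.50]]                    # thresholds in (0, 1]

W = H = 8
image = [[(x + y) / (W + H - 2) for x in range(W)] for y in range(H)]   # 0 (dark) .. 1 (light)

for y in range(H):
    row = "".join("#" if image[y][x] >= BAYER[y % 2][x % 2] else "." for x in range(W))
    print(row)                            # more "#" marks where the tone is lighter
```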

Halftone dot pattern: Producing halftone dots of different sizes requires not only an appropriate halftone screen but also keeping the screen at a certain distance from a light-sensitive plate; only then will the final dots have different sizes related to the amount of light penetrating each opening of the screen. The halftone process was used for books, newspapers, and magazines, as well as many different forms of printed and visual advertising and product decoration. [Figure: halftone pattern in a QR code]

Cross hatching: A way of adding cartoon-like detail and shading to a scene. It details a scene by adding lines at right angles to each other, creating a mesh-like appearance. In OpenGL, a rather simple shader can create this kind of effect: all we need is a fragment shader that handles the cross-hatching, the 2D coordinates of the fragment being rendered, and a luminance value to help determine whether or not the current fragment lies on a particular hatch line.
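A sketch of that per-fragment decision, written in Python rather than GLSL so it runs standalone; the spacing, thresholds and the luminance ramp are illustrative assumptions:

```python
# Cross-hatching decision sketch: the fragment's 2D position says which
# candidate hatch line it falls on, and its luminance decides how many hatch
# directions are actually drawn. Darker fragments receive more crossing sets.
SPACING = 6        # pixels between hatch lines
THICK   = 1        # line thickness in pixels

def hatch(x, y, luminance):
    """Return True if this pixel should be inked."""
    dark = False
    if luminance < 0.8:                                   # first diagonal set
        dark |= (x + y) % SPACING < THICK
    if luminance < 0.6:                                   # crossing diagonal set
        dark |= (x - y) % SPACING < THICK
    if luminance < 0.4:                                   # horizontal set
        dark |= y % SPACING < THICK
    if luminance < 0.2:                                   # vertical set
        dark |= x % SPACING < THICK
    return dark

for y in range(16):
    print("".join("#" if hatch(x, y, luminance=x / 32) else " " for x in range(32)))
# Hatching gets denser toward the left, where the assumed luminance falls off.
```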

NPR techniques:
• Pen-and-ink illustration. Techniques: cross-hatching, outlines, line art, etc.
• Painterly rendering. Styles: impressionist, expressionist, pointillist, etc.
• Cartoons. Effects: cartoon shading, distortion, etc.
• Technical illustrations. Characteristics: matte shading, edge lines, etc.
• Scientific visualization. Methods: splatting, hedgehogs, etc.

Pen and ink techniques:
• Strokes: curved lines of varying thickness and density of placement
• Texture: character conveyed by a collection of strokes, e.g. crisp and clean vs. rough and sketchy
• Tone: perceived gray level across the image
• Outline: boundary lines which disambiguate structure information

Characteristics of pen-and-ink illustration:
• shows not only region borders (possibly combined with sparse lines)
• also shows shading
• also shows surface shape
• also illustrates material
• supports abstraction and emphasis
Advantages:
• does not require halftoning; images are already black-and-white and thus good for printing
• mark placement and stylization allow abstraction and emphasis
• precise depiction, good for illustration applications
• high-quality representation: vector or 1-bit pixel images

Artistic style rendering: Rendering in visual art and technical drawing means the process of formulating, adding color to, shading, and texturing an image. The alternative to rendering an image is capturing one, as in photography or image scanning. Rendered and captured images can be mixed and edited; in either case the visual information is interpreted by the artist and displayed according to the chosen art medium and level of abstraction, as in abstract art.

Mosaic rendering: Constrain the camera motion to planar concentric circles, and create concentric mosaics by composing slit images taken at different locations along each circle.

[Figure: construction of a concentric mosaic]

Rotoscoping: Rotoscoping is a technique in which images are copied from a moving video into an animation. The animator draws the motion and shape of the object by referring to the video rather than imagining it. With the help of rotoscoping one can animate complex scenes that would be hard to visualize otherwise. The disadvantage is that one has to hunt for the exact video one wants to animate.

Advantages:
• Rotoscoping makes it easier to animate really difficult, complex actions.
• Rotoscoping motivates students to learn more complex software applications.
• The process of rotoscoping teaches students about the reality of animating as a very time-intensive activity.
Things to consider:
• Style: What type of drawing will you do? Experimenting in advance is encouraged to develop a style that you can consistently replicate throughout the animation.
• Color: You will be re-drawing the action, but it does not have to be the same color. What happens when the hue is changed?
• Time: This can be a very time-consuming process, so do not pick a drawing style that will take you 30 minutes per frame to complete.
• Audio: How does the role of audio in your animation affect the style and pace?
• Continuity or discontinuity in drawing style.
