Efficient Complex Shadows from Environment Maps
Aner Ben-Artzi – Columbia University, Ravi Ramamoorthi – Columbia University, Maneesh Agrawala – Microsoft Research


Introduction – When adding cast shadows to scenes illuminated by complex lighting such as high dynamic-range environment maps, most of the calculation is spent determining visibility between the many light-source directions and every scene point visible to the camera. We propose a method that selectively samples the visibility function. By leveraging the coherence inherent in the visibility of typical scenes, we predict over 90% of the visibility calculations from information gathered in previous calculations; the difference from the full calculation is negligible. Visibility coherence has been exploited in works such as [Guo 1998], [Hart et al. 1999], and [Agrawala et al. 2000], but only for point or area light sources, and those methods do not scale to environment maps. We show how to turn the complexity of sampled environment maps to our advantage.

Shadows are important – Both images are illuminated by a sampled environment map (400 directional light sources). The image on the left is rendered without cast shadows; the image on the right is rendered accurately with cast shadows using our method. The rendering time to add shadows is reduced from 120 sec (with the full shadow-ray casting method) to 19 sec.

"Many samples make for 'light' work." – Environment maps can be represented as a set of directional lights. Recent work, such as [Agarwal et al. 2003], has shown that high dynamic-range environment maps can be faithfully approximated with several hundred directional lights. Here we see the environment map for Grace Cathedral (left) sampled into 82 lights. Each light represents a principal direction and is surrounded by the directions nearest to it, as defined by the Voronoi diagram (right).
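Sampling an environment map into a set of directional lights can be sketched as follows. This is a minimal luminance-weighted sampler in Python; the paper relies on the structured importance sampling of [Agarwal et al. 2003], so the function name, the equirectangular layout, and the per-light intensity normalization here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sample_env_lights(env, n_lights, rng=None):
    """Approximate an equirectangular environment map `env` (H x W x 3)
    with n_lights directional lights, drawn with probability proportional
    to luminance times solid angle. A simple stand-in for the structured
    importance sampling of [Agarwal et al. 2003]."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = env.shape
    lum = env @ np.array([0.2126, 0.7152, 0.0722])   # per-texel luminance
    theta = (np.arange(h) + 0.5) / h * np.pi         # polar angle per row
    weights = lum * np.sin(theta)[:, None]           # account for solid angle
    p = weights.ravel() / weights.sum()
    idx = rng.choice(h * w, size=n_lights, replace=False, p=p)
    rows, cols = np.unravel_index(idx, (h, w))
    th = (rows + 0.5) / h * np.pi
    ph = (cols + 0.5) / w * 2 * np.pi
    dirs = np.stack([np.sin(th) * np.cos(ph),        # unit light directions
                     np.sin(th) * np.sin(ph),
                     np.cos(th)], axis=1)
    # illustrative normalization: each light carries its share of map power
    intensity = env[rows, cols] / (p[idx, None] * n_lights)
    return dirs, intensity
```

In the paper each resulting direction becomes the site of a Voronoi cell on the sphere, and every other direction is associated with its nearest light.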
Because of the density and abundance of lights in this sampling, we can use visibility information from a light's neighbors to predict its own visibility. Visibility for x1 is calculated by tracing shadow rays and finding blocker points a and b. Visibility for x2 is predicted by warping blockers a and b to their positions in the light-space relative to x2.

Leveraging Coherence in Visibility by Reusing Calculations – Visibility for most scenes exhibits coherence in both the angular and spatial domains. By reusing calculations from nearby pixels (spatial) and combining knowledge of nearby lights (angular), we can accurately predict over 90% of the shadow rays in a ray tracer without actually casting them. Here we see how the visibility for x2 is predicted by reusing blocker points a and b, discovered earlier when the visibility for x1 was computed. The visibility for x2 will in turn be used in subsequent predictions at other locations.

Results – The bunny and plant scenes illuminated by a 200-light sampling of their environment maps (grid, no warp, with uncertainty flooding, compared to fully shadow-traced). Even in the low-coherence visibility of the plant scene, our method performs accurately, though less efficiently.

Scanline vs. Grid-based pixel-order evaluation – Grey pixels have already been evaluated. Blue pixels are being predicted based on visibility information contained in the grey pixels indicated by the arrows. Scanline (left): traditional scanline evaluation uses visibility information from the three pixels above and to the left of the current pixel. Grid-based (right): first the image is fully evaluated at regular intervals, producing a coarse grid. Next, the center (shown in blue) of every 4 pixels is evaluated based on its 4 surrounding neighbors (shown in grey); all four are equidistant from their respective finer-grid-level pixel. Again, a pixel is evaluated at the center of 4 previously evaluated pixels.
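The blocker-warping step described above amounts to re-projecting a stored 3D blocker point into the angular domain of the new receiver, then assigning it to the Voronoi cell of the nearest sampled light. A minimal sketch (the helper names are hypothetical, and it assumes blockers are stored as 3D hit points):

```python
import numpy as np

def warp_blocker(blocker_pt, x_new):
    """Re-project a previously discovered blocker point into the
    light-space of a new receiver x_new: the unit direction from x_new
    toward the blocker is the light direction it is predicted to occlude."""
    d = np.asarray(blocker_pt, float) - np.asarray(x_new, float)
    return d / np.linalg.norm(d)

def nearest_light(direction, light_dirs):
    """Assign a warped blocker direction to the Voronoi cell of the
    closest sampled light (maximum dot product on the sphere)."""
    return int(np.argmax(light_dirs @ direction))
```

For a blocker at (0, 0, 5) found from x1 = (0, 0, 0), the warped direction at x2 = (1, 0, 0) tilts slightly away from +z; under the no-warp assumption that blockers are distant, this step is skipped and the original angular direction is reused unchanged.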
At the end of this step, the image has been regularly sampled at a finer spacing. This becomes the new coarse level, and the process repeats.

Warping vs. No Warping – When we reuse the blockers from previously evaluated pixels, we have a choice of warping them to the frame of reference of the current pixel, or assuming that the blockers are distant and leaving their relative angular direction unchanged. This gives 4 ways to reuse blockers discovered in previously evaluated pixels.

Fixing Prediction Errors – A naïve approach to predicting the visibility of lights for a particular pixel is to let the visibility information from nearby pixels determine the visibility at the current pixel. Without some method for marking low-confidence predictions and verifying them, the results contain errors, as in the left image.

0. The visibility for all of the lights has been determined at 3 pixels (not shown). We will now use that information to predict the visibility for lights at a new pixel.
1. For simplicity, we do not consider warping. Each light in our current pixel (represented by its hexagonal cell in the Voronoi diagram) receives a blocker (black dot) every time the corresponding light is blocked in one of the previously evaluated pixels. If warping were used, blockers would be warped onto the appropriate light instead of always being added to the same light.
2A. Light cells that contain any blocker(s) are predicted as blocked. Others are predicted as visible.
2B. Light cells that contain 3 blockers (since we are using 3 previous pixels) are predicted as blocked. Those that contain no blockers are predicted as visible. Light cells containing 1 or 2 blockers are marked as uncertain and shadow-traced (blue center).
3A. Light cells whose neighbors' visibility differs are shadow-traced (blue center).
3B. The neighboring light cells of all uncertain cells are shadow-traced (blue center).
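Steps 1–2B above can be sketched as a small voting function over the visibility already computed at nearby pixels. This is an illustrative reconstruction of the no-warp prediction rules, not the authors' implementation:

```python
import numpy as np

def predict_visibility(prev_vis, mode='consensus'):
    """Predict per-light visibility at a new pixel from visibility already
    computed at several nearby pixels (no-warp case: a blocked light simply
    deposits a blocker into the same light cell).

    prev_vis: (n_prev, n_lights) boolean array, True = light visible.
    Returns (predicted, uncertain): the predicted visibility per light, and
    a mask of low-confidence lights that must be verified by shadow rays."""
    blockers = (~prev_vis).sum(axis=0)      # votes for "blocked" per light
    n_prev = prev_vis.shape[0]
    if mode == 'any':                       # rule 2A: any blocker => blocked
        predicted = blockers == 0
        uncertain = np.zeros(prev_vis.shape[1], bool)
    else:                                   # rule 2B: demand full consensus
        predicted = blockers == 0           # visible only on full agreement
        uncertain = (blockers > 0) & (blockers < n_prev)
    return predicted, uncertain
```

With 3 previous pixels, a light blocked in 1 or 2 of them lands in the `uncertain` mask under the consensus rule, matching step 2B; the `'any'` mode reproduces the cheaper rule 2A with no uncertainty marking.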
Boundary Flooding vs. Uncertainty Flooding – The pipeline runs: predictions, then predictions with uncertainty tracing, then flood and shadow-trace, along either Path A (boundary flooding) or Path B (uncertainty flooding). Boundary flooding considers any light on the boundary between blocked and visible to be a low-confidence prediction. Uncertainty flooding considers any light for which there is no consensus to be a low-confidence prediction. In both methods, the neighbors of low-confidence lights are shadow-traced. If a trace reveals that a prediction is wrong, the shadow-tracing floods to the neighbors of that light, until all shadow-traces return the same visibility as the prediction. [Agrawala et al. 2000] use this for boundary flooding.

Trying All Possibilities – We examined all possible combinations of the components of our algorithm: grid vs. scanline evaluation (G/S), warping vs. no warping (W/N), and uncertainty vs. boundary flooding (U/B), compared against a fully shadow-traced reference. We see that scanline produces errors with uncertainty flooding whether or not warping is used. This is because uncertainty flooding works best when it assimilates visibility information from a variety of viewpoints, not just up and to the left. Even with boundary flooding, scanline requires warping; otherwise, visibility information is erroneously propagated too far along the scanline before it is corrected. Grid-based evaluation actually works better without warping. We recommend GNU (grid, no warping, uncertainty flooding) as the best combination.

Performance of Low-Error Combinations – After eliminating SNB, SNU, and SWU based on the images above, we compared the efficiency of the remaining algorithms. The entries in the table indicate the percentage of shadow rays with N · L > 0 that were traced by each method. For each scene an environment map was chosen and then sampled as approximately 50, 100, 200, and 400 lights. Notice that the performance of boundary flooding depends on the number of lights.
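The flooding step described under Boundary Flooding vs. Uncertainty Flooding can be sketched as a breadth-first traversal over the Voronoi neighbor graph: trace the seeds and their neighbors, and whenever a trace contradicts the prediction, spread to that light's neighbors. The helper names and the `trace` oracle are illustrative assumptions:

```python
from collections import deque

def flood_correct(predicted, seeds, neighbors, trace):
    """Verify low-confidence predictions and flood corrections outward.
    predicted: dict light -> predicted visibility (corrected in place).
    seeds: low-confidence lights (boundary lights for boundary flooding,
           disagreement lights for uncertainty flooding).
    neighbors: dict light -> adjacent lights in the Voronoi diagram.
    trace: oracle trace(light) -> true visibility (one shadow ray)."""
    queue = deque()
    for s in seeds:                  # seeds and their neighbors get traced
        queue.extend(neighbors[s])
        queue.append(s)
    traced = set()
    while queue:
        light = queue.popleft()
        if light in traced:
            continue
        traced.add(light)
        truth = trace(light)
        if truth != predicted[light]:        # wrong prediction: flood
            predicted[light] = truth
            queue.extend(n for n in neighbors[light] if n not in traced)
    return predicted, traced
```

The flood terminates exactly as the text describes: once every traced light agrees with its (possibly corrected) prediction, no new neighbors are enqueued.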
Timings – The actual timings for the full rendering, shown here, are related to, but not equivalent to, the reduction in work as measured in shadow-rays traced. Each scene was lit by a 200-light sampling and timed on an Intel Xeon 3.06GHz computer running Windows XP.

Conclusion – Visibility coherence can be efficiently exploited to make accurate predictions. Such predictions must be tempered by error-correction techniques. Rather than using perceptual errors, low-confidence predictions can be found by leveraging the way in which they were generated. Specifically, we have shown that when predictions are based on a group of predictors, error correction must be employed whenever the predictors disagree. This is manifested in our framework as grid-based evaluation with uncertainty flooding. For typical scenes, over 90% of shadow-rays can be accurately predicted with negligible errors. The errors that do exist are coherent, leading to smooth animations in our tests. Our techniques are simple to implement and can be used directly to render realistic images, or to speed up the precomputation in PRT methods. Higher speedups than presented in this paper may be achieved if some tolerance for errors exists. We plan to explore a controlled process for the tradeoff between predictions and errors in future work. More broadly, we want to explore coherence for sampling other high-dimensional functions such as the BRDF, and more generally in global illumination, especially animations.

References:
Agarwal, S., Ramamoorthi, R., Belongie, S., and Jensen, H. W. Structured Importance Sampling of Environment Maps. In Proceedings of SIGGRAPH 2003, pages 605–612.
Agrawala, M., Ramamoorthi, R., Heirich, A., and Moll, L. Efficient Image-Based Methods for Rendering Soft Shadows. In Proceedings of SIGGRAPH 2000, pages 375–384.
Ben-Artzi, A., Ramamoorthi, R., and Agrawala, M. Efficient Shadows from Sampled Environment Maps. Columbia University Tech Report CUCS.
Guo, B. Progressive Radiance Evaluation Using Directional Coherence Maps. In Proceedings of SIGGRAPH 1998.
Hart, D., Dutré, P., and Greenberg, D. Direct Illumination with Lazy Visibility Evaluation. In Proceedings of SIGGRAPH 1999.

Note: All shadows have been enhanced to make them more visible. Actual renderings would contain softer shadows. Animations can be found online; more details appear in [Ben-Artzi et al. 2004].