
Modeling Anisotropic Surface Reflectance with Example-Based Microfacet Synthesis Jiaping Wang1, Shuang Zhao2, Xin Tong1, John Snyder3, Baining Guo1 Microsoft Research Asia1 Shanghai Jiao Tong University2 Microsoft Research3 Good morning. I am going to talk about a novel technique for modeling real-world surface reflectance using only a single camera view of the sample.

Surface Reflectance satin metal wood The real world contains many complex materials that we'd like to capture in our synthetic images. Spatial variation and anisotropy are important effects that are particularly challenging.

Anisotropic Surface Reflectance The back side of this metal watch is a typical example of anisotropic reflectance. isotropic anisotropic

Our Goal modeling spatially-varying anisotropic reflectance We have developed a method to model surface reflectance from real samples, including spatial variation and anisotropy. It handles a wide range of materials.

Surface Reflectance in CG 4D BRDF ρ(o,i) Bidirectional Reflectance Distribution Function: how much light is reflected with respect to the outgoing (o) and incoming (i) directions. Reflectance is described by the BRDF: a 4D function that defines how much light is reflected with respect to the incoming and outgoing directions.
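Not shown in the transcript, and added here for readers unfamiliar with the notation: the standard radiometric definition behind ρ(o,i) is the ratio of differential outgoing radiance to differential incident irradiance,

```latex
\rho(\mathbf{o},\mathbf{i})
  = \frac{\mathrm{d}L_o(\mathbf{o})}{\mathrm{d}E_i(\mathbf{i})}
  = \frac{\mathrm{d}L_o(\mathbf{o})}{L_i(\mathbf{i})\,\cos\theta_i\,\mathrm{d}\omega_i}
```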

Surface Reflectance in CG 4D BRDF ρ(o,i) Bidirectional Reflectance Distribution Function how much light reflected wrt in/out directions 6D Spatially-Varying BRDF: SVBRDF ρ(x,o,i) BRDF at each surface point x The SVBRDF extends the BRDF to include spatial variation in reflectance. The 6D SVBRDF is simply a BRDF defined at each surface point x.

Related Work I parametric BRDF models compact representation easy acquisition and fitting lack realistic details BRDF models have been investigated since the beginnings of computer graphics. Many parametric models have been proposed and are widely used for acquisition due to their simplicity and compactness. However, these models are not general enough to reproduce the realistic details of real-world surfaces. As shown on the right, the fine details are missing in the rendering result of the parametric model. ground truth parametric model [Ward 92]

Related Work II tabulated SVBRDF realistic large data set difficult to capture lengthy process expensive hardware image registration Recently, image-based approaches have become popular. Brute-force acquisition samples all combinations of viewing and lighting directions. This captures the material accurately but requires a very large computation cost and acquisition time, and produces a huge 6D dataset for a single material. light dome [Gu et al 06]

Related Work II tabulated SVBRDF realistic large data set difficult to capture lengthy process expensive hardware image registration Capturing from many camera views also requires a special, expensive device. Image registration across views leads to tricky calibration and makes it difficult to obtain high-resolution SVBRDFs. light dome [Gu et al 06]

Microfacet BRDF Model surface modeled by tiny mirror facets [Cook & Torrance 82] Our approach uses a general microfacet model to reduce the problem's dimensionality and make capturing easier. In the microfacet model, the surface microstructure is modeled as a large number of tiny mirror facets with different orientations.

Microfacet BRDF Model surface modeled by tiny mirror facets normal distribution shadow term fresnel term [Cook & Torrance 82] The general BRDF formula shown here was introduced by Cook and Torrance in 1982. It is the foundation of most parametric BRDF models. The formula includes the Fresnel term F, the shadow term S, and the normal distribution function D. The shadow and Fresnel terms are low frequency and have less effect on surface appearance, while the normal distribution function can be arbitrary and high frequency.
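The formula itself does not survive in the transcript. In its usual Cook–Torrance form, using the slide's naming (Fresnel term F, shadow term S, normal distribution D, surface normal n), it reads:

```latex
\rho(\mathbf{o},\mathbf{i})
  = \frac{F(\mathbf{o},\mathbf{h})\,S(\mathbf{o},\mathbf{i})\,D(\mathbf{h})}
         {4\,(\mathbf{n}\cdot\mathbf{o})(\mathbf{n}\cdot\mathbf{i})},
\qquad
\mathbf{h} = \frac{\mathbf{i}+\mathbf{o}}{\lVert\mathbf{i}+\mathbf{o}\rVert}
```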

Microfacet BRDF Model based on the Normal Distribution Function (NDF) NDF D is a 2D function of the half-way vector h and dominates surface appearance The normal distribution function (NDF) describes the distribution of microfacet orientations. The BRDF is then proportional to the NDF evaluated at the half-way vector h. Note that the NDF is a 2D spherical function while the BRDF is 4D.
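As a concrete illustration (mine, not from the slides), the half-way vector at which the NDF is evaluated is simply the normalized sum of the two unit directions:

```python
import numpy as np

def half_vector(i, o):
    """Half-way vector between unit incoming and outgoing directions.

    The microfacet BRDF is proportional to the NDF D evaluated at h:
    only facets oriented along h can mirror-reflect i into o.
    """
    h = np.asarray(i, float) + np.asarray(o, float)
    return h / np.linalg.norm(h)
```

For example, with i = o = n the half vector is the surface normal itself, so off-normal highlights come entirely from tilted facets.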

Challenge: Partial Domains samples from a single viewing direction cover only a sub-region h ∈ Ω of the NDF How to obtain the full NDF? The challenge is that single-view capture determines the NDF only over a partial region. Our work focuses on how to complete the NDF at each surface point. partial NDF complete NDF partial region

Solution: Exploit Spatial Redundancy find surface points with similar but differently rotated NDFs Our key observation is that many surface points share the same basic NDF, rotated by different angles. Thus different portions of the same NDF are revealed at other surface points. material sample partial NDF at each surface point

Example-Based Microfacet Synthesis partial NDFs from other surface points Align Based on this observation, we propose a simple algorithm to complete the NDFs that is similar in spirit to texture synthesis. For each surface point, we align and search for the best matches among the other surface points. The matched partial NDFs are then merged to obtain the completed NDF. We call this method example-based microfacet synthesis. + + = partial NDF to complete rotated partial NDFs completed NDF
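A 1D toy version of the align-and-merge step may make this concrete (my sketch: partial NDFs reduced to circular arrays over azimuth bins, with NaN marking unobserved bins; the real method operates on spherical functions):

```python
import numpy as np

def best_rotation(target, candidate, n_bins):
    """Find the circular shift of `candidate` that best matches `target`
    on the bins where both partial NDFs are observed (non-NaN)."""
    best_r, best_err = 0, np.inf
    for r in range(n_bins):
        shifted = np.roll(candidate, r)
        overlap = ~np.isnan(target) & ~np.isnan(shifted)
        if not overlap.any():
            continue
        err = np.mean((target[overlap] - shifted[overlap]) ** 2)
        if err < best_err:
            best_r, best_err = r, err
    return best_r, best_err

def merge(target, candidate, r):
    """Fill unobserved bins of `target` from the aligned candidate."""
    shifted = np.roll(candidate, r)
    out = target.copy()
    missing = np.isnan(out) & ~np.isnan(shifted)
    out[missing] = shifted[missing]
    return out
```

Repeating best_rotation/merge with further candidates grows the observed region until the NDF is complete, mirroring the synthesis loop described in the talk.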

Comparison ground truth our model isotropic Ward model anisotropic Ward model Here is a comparison of ground truth, our method, and two parametric models. Our method matches the ground truth, while the results of the parametric models look quite different.

Overall Pipeline BRDF Slice Capture Partial NDF Recovery Microfacet Synthesis The modeling procedure consists of three steps. First, a flat sample is captured with our device to produce a BRDF slice at each surface point. Then the partial NDFs are recovered. Finally, the NDF at each surface point is completed via microfacet synthesis.

Overall Pipeline BRDF Slice Capture Partial NDF Recovery Microfacet Synthesis To capture the BRDF slices from a single view, we adapt the capturing device introduced by Gardner et al at SIGGRAPH 2003.

Device Setup Camera-LED system, based on [Gardner et al 03] Here is the device for capturing the reflectance data in single view. We use an LED array for lighting and a digital camera for capturing images. The LED array is driven by a stepping motor which is controlled by the computer.

Capturing Process During capturing, the LED array is moved step by step and each LED is powered one by one to illuminate the material sample from all directions. In each lighting direction, one image is captured.

Overall Pipeline BRDF Slice Capture Partial NDF Recovery Microfacet Synthesis After the data is captured, partial NDFs are recovered from the fixed-view BRDF slices at each surface point.

NDF Recovery invert the microfacet BRDF model unknowns: NDF and shadow term; measured: BRDF With the measured BRDF slice, the microfacet BRDF model can be inverted to solve for the unknown terms: the NDF and the shadow term. Given the measured BRDF and the shadow term, the NDF can be determined; in turn, the shadow term can be derived from the NDF [Ashikhmin et al 00].
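As a sketch of this inversion (assuming the standard Cook–Torrance factorization with Fresnel term F and shadow term S; the actual recovery also handles normalization), rearranging the microfacet BRDF for D gives:

```latex
\rho(\mathbf{o},\mathbf{i})
  = \frac{F\,S(\mathbf{o},\mathbf{i})\,D(\mathbf{h})}
         {4(\mathbf{n}\cdot\mathbf{o})(\mathbf{n}\cdot\mathbf{i})}
\;\Longrightarrow\;
D(\mathbf{h})
  = \frac{4(\mathbf{n}\cdot\mathbf{o})(\mathbf{n}\cdot\mathbf{i})\,
          \rho(\mathbf{o},\mathbf{i})}{F\,S(\mathbf{o},\mathbf{i})}
```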

NDF Recovery (con't) iterative approach [Ngan et al 05]: 1. solve for the NDF given the shadow term; 2. solve for the shadow term given the NDF works for complete 4D BRDF data [Ashikhmin et al 00] The existing method recovers the NDF by iteratively solving for one term while fixing the other at each step. This method works well on complete 4D BRDF data.

Partial NDF Recovery biased result on incomplete BRDF data ground truth [Ngan et al. 05] But we have incomplete data from only a single view. Applying the iterative technique leads to a biased result for both the NDF and shadow term. NDF shadow term NDF shadow term

Partial NDF Recovery (con’t) minimize the bias isotropically constrain shadow term in each iteration To minimize this bias, we constrain the derived shadow term to be isotropic in each step. This constraint doesn’t make the NDF isotropic, but makes the fitting process more accurate. before constraint after constraint
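The isotropic constraint can be illustrated (my sketch, not the paper's code) by azimuthally averaging a shadow term tabulated over elevation × azimuth bins after each iteration:

```python
import numpy as np

def isotropize(S):
    """Project a shadow term tabulated as S[theta_bin, phi_bin] onto the
    isotropic functions: replace each elevation row by its azimuthal mean.

    Applied after each fitting iteration, this keeps the single-view bias
    from leaking into the shadow term while leaving the NDF free to be
    anisotropic.
    """
    return np.broadcast_to(S.mean(axis=1, keepdims=True), S.shape).copy()
```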

Recovered Partial NDF [Ngan et al. 05] ground truth our result After introducing the constraint, our result matches the ground truth very well.

Overall Pipeline BRDF Slice Capture Partial NDF Recovery Microfacet Synthesis In the last step, the partial NDF recovered at each surface point is completed by microfacet synthesis.

Microfacet Synthesis merged partial NDFs completed NDF partial NDF Microfacet synthesis completes the partial NDF by searching for the best-matched NDFs at other surface points. The partial region of the NDF then grows by merging in the best match, and the process repeats until the NDF is complete. Searching for the best match is the bottleneck of the total synthesis time, as it involves massive numbers of comparisons and rotations of spherical functions. partial NDF to complete

Microfacet Synthesis (con't) straightforward implementation: for the N NDFs (one per surface point), match against the (N−1) NDFs at other points, in M rotation angles for alignment number of rotations/comparisons: N²·M ≈ 5×10¹¹ (N ≈ 640k, M ≈ 1k) A naïve implementation of the synthesis is very slow: the number of operations taken by brute-force searching is the number of surface points squared times the number of alignment angles. In our experiments, this amounts to hundreds of billions of spherical-function operations (rotations and comparisons).

Synthesis Acceleration straightforward implementation: N²·M ≈ 5×10¹¹ rotations/comparisons (N ≈ 640k) Clustering [Matusik et al 03]: complete representative NDFs only (1% of the full set) In fact, many NDFs are similar and need not be synthesized again and again. We introduce clustering to find a small set of representative NDFs for completion and searching. This provides a speed-up of ten thousand times. N'²·M ≈ 5×10⁷ (N' ≈ 6.4k)
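A minimal stand-in for the clustering step (plain k-means over vectorized NDFs; the slide cites [Matusik et al 03], whose exact procedure may differ):

```python
import numpy as np

def representative_ndfs(ndfs, k, iters=10):
    """Cluster N vectorized NDFs into k representatives so that the
    expensive complete-and-search step runs on k << N functions."""
    ndfs = np.asarray(ndfs, float)
    # deterministic init: k evenly spaced samples
    centers = ndfs[np.linspace(0, len(ndfs) - 1, k).astype(int)].copy()
    labels = np.zeros(len(ndfs), int)
    for _ in range(iters):
        # assign each NDF to its nearest representative (squared L2)
        d = ((ndfs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # move each representative to the mean of its members
        for j in range(k):
            members = ndfs[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels
```

With k at roughly 1% of N, the quadratic search cost drops by about 10⁴, matching the speed-up quoted on the slide.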

Synthesis Acceleration straightforward implementation: N²·M ≈ 5×10¹¹ (N ≈ 640k) Clustering [Matusik et al 03]: complete representative NDFs only (1% of the full set), N'²·M ≈ 5×10⁷ (N' ≈ 6.4k) Search Pruning: precompute all rotated candidates, prune via hierarchical searching, N'·log(N'·M) ≈ 5×10⁵ We also introduce search pruning: the rotated candidates are precomputed and organized in a hierarchical structure for quick search. This provides another speed-up of a hundred times and eliminates rotation operations during the search. For more details, please refer to our paper.

Performance Summary 5–10 hours for BRDF slice acquisition in HDR (1 hour in LDR) 2–4 hours for image processing 2–3 hours for partial NDF recovery 2–4 hours for accelerated microfacet synthesis We implemented our modeling system on a PC with an Intel Core 2 Quad 2.13GHz CPU and 4GB of memory. It takes five to ten hours to capture a material sample and another five to ten hours for data processing.

Model Validation full SVBRDF dataset [Lawrence et al. 06] data from one view for modeling data from other views for validation For validation, we tested our method on a fully measured SVBRDF dataset. We extract the BRDF slices from one view as modeling input and leave the data from the other views for validation.

Validation Result Here are rendering results from novel views. Our method matches well for materials with rich spatial variation and anisotropy.

Limitations visual modeling, not physical accuracy single-bounce microfacet model retro-reflection not handled assumes spatial redundancy of rotated NDFs easy fix by rotating the sample Our approach has some limitations. Though its output is visually plausible, it may not be physically accurate. The microfacet model we use handles single-bounce reflection only; multiple-bounce effects such as retro-reflection are ignored. We also assume that rotated NDFs exist at different surface points; if that isn't the case, an easy fix is to rotate the sample.

Rendering Result: Satin Here are some rendering results of materials we've captured with our method. This example is a pillow decorated with yellow satin, which exhibits strong anisotropy due to the consistent fiber orientation. Here is a different satin example; the fine details in the needlework are well captured by our method.

Rendering Result: Wood Wood exhibits anisotropic reflectance because the dried cells create regular microstructure with consistent orientations. It can also be handled with our approach.

Rendering Result: Brushed Metal Here is an example of brushed metal mapped onto a dish model. Fan-shaped highlights are a typical feature of anisotropic metal. Fine details of the brushed tracks are accurately preserved by our method.

Conclusions model surface reflectance via microfacet synthesis general and compact representation high resolution (spatial & angular), realistic results easier acquisition: single-view capture, cheap device, shorter capturing time In conclusion, we have described a novel method to model surface reflectance based on microfacet synthesis, with data captured from a single camera view. The captured material model is high resolution over both the spatial and angular dimensions and provides realistic rendering results. Our method also simplifies acquisition since it is based on single-view capture, permitting a much cheaper device and shorter acquisition times.

Future Work performance optimization of capturing and data processing extension to non-flat objects extension to multiple light bounces We are also interested in several directions for future work based on this method.

Acknowledgements Le Ma for the electronics of the LED array Qiang Dai for the capturing device setup Steve Lin and Dong Xu for valuable discussions Paul Debevec for HDR imagery The anonymous reviewers for their helpful suggestions and comments Finally, I'd like to thank the many people who helped with this work.

Thank you! And thanks for your attention.