

1
**Modeling Anisotropic Surface Reflectance with Example-Based Microfacet Synthesis**

Jiaping Wang1, Shuang Zhao2, Xin Tong1, John Snyder3, Baining Guo1. Microsoft Research Asia1, Shanghai Jiao Tong University2, Microsoft Research3.

Good morning. I am going to talk about a novel technique for modeling real-world surface reflectance using only a single camera view of the sample.

2
**Surface Reflectance satin metal wood**

The real world contains many complex materials that we’d like to capture in our synthetic images. Spatial variation and anisotropy are important effects that are particularly challenging. satin metal wood

3
**Anisotropic Surface Reflectance**

The back side of this metal watch is a typical example of anisotropic reflectance. isotropic anisotropic

4
**Our Goal modeling spatially-varying anisotropic reflectance**

We have developed a method to model surface reflectance from real samples including spatial variation and anisotropy. It handles a wide range of materials. modeling spatially-varying anisotropic reflectance

5
**Surface Reflectance in CG**

4D BRDF ρ(o,i) Bidirectional Reflectance Distribution Function how much light reflected wrt in/out directions o i Reflectance is described by the BRDF: a 4D function that defines how much light is reflected with respect to the incoming and outgoing directions.

6
**Surface Reflectance in CG**

4D BRDF ρ(o,i) Bidirectional Reflectance Distribution Function how much light reflected wrt in/out directions 6D Spatially-Varying BRDF: SVBRDF ρ(x,o,i) BRDF at each surface point x The SVBRDF extends the BRDF to include spatial variation in reflectance. The 6D SVBRDF is simply a BRDF defined at each surface point x.

7
**Related Work I parametric BRDF models compact representation**

easy acquisition and fitting lack realistic details ground truth BRDF models have been investigated since the beginning of computer graphics. Many parametric models have been proposed and are widely used for acquisition due to their simplicity and compactness. However, these models are not general enough to reproduce the realistic details of real-world surfaces. As shown on the right, the fine details are missing in the parametric model's rendering. parametric model [Ward 92]

8
**Related Work II tabulated SVBRDF realistic large data set**

difficult to capture lengthy process expensive hardware image registration Recently, image-based approaches have become popular. Brute-force acquisition of the BRDF samples all combinations of viewing and lighting directions. This captures the material accurately but requires heavy computation and a lengthy acquisition, and produces a huge 6D dataset for a single material. light dome [Gu et al 06]

9
**Related Work II tabulated SVBRDF realistic large data set**

difficult to capture lengthy process expensive hardware image registration Capturing from many camera views also requires a special, expensive device. Image registration leads to tricky calibration and makes it difficult to obtain high-resolution SVBRDFs. light dome [Gu et al 06]

10
**Microfacet BRDF Model surface modeled by tiny mirror facets**

[Cook & Torrance 82] Our approach makes use of a general microfacet model to reduce the problem's dimensionality and make capturing easier. In the microfacet model, the surface microstructure is modeled as a large number of tiny mirror facets with different orientations.

11
**Microfacet BRDF Model surface modeled by tiny mirror facets**

normal distribution shadow term fresnel term [Cook & Torrance 82] The general BRDF formula shown here was introduced by Cook and Torrance in 1982. It is the foundation of most parametric BRDF models. The formula includes the Fresnel term F, the shadow term S, and the normal distribution function D. The shadow and Fresnel terms are low frequency and have less effect on surface appearance, while the normal distribution function can be arbitrary and high frequency.
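As a concrete sketch, the Cook-Torrance form ρ(o,i) = F(o·h)·S(o,i)·D(h) / (4 (n·o)(n·i)) can be evaluated directly. The Gaussian NDF, unit shadow term, and Schlick Fresnel below are stand-ins for illustration, not the measured terms the paper recovers:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def microfacet_brdf(o, i, D, S, F, n=np.array([0.0, 0.0, 1.0])):
    """Cook-Torrance form: rho(o,i) = F(o.h) * S(o,i) * D(h) / (4 (n.o)(n.i))."""
    h = normalize(o + i)  # halfway vector between view and light
    return F(np.dot(o, h)) * S(o, i) * D(h) / (4.0 * np.dot(n, o) * np.dot(n, i))

# Stand-in terms (assumptions for illustration, not the paper's recovered data):
D = lambda h: np.exp(-(np.arccos(np.clip(h[2], -1, 1)) / 0.2) ** 2)  # Gaussian NDF
S = lambda o, i: 1.0                                                 # no shadowing
F = lambda c: 0.04 + 0.96 * (1.0 - c) ** 5                           # Schlick Fresnel

o = normalize(np.array([0.3, 0.0, 1.0]))
i = normalize(np.array([-0.3, 0.0, 1.0]))
print(microfacet_brdf(o, i, D, S, F))  # here h is exactly the surface normal
```

With this view/light pair the halfway vector coincides with the normal, so the BRDF sits at the peak of the stand-in NDF.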

12
**Microfacet BRDF Model based on Normal Distribution Function (NDF)**

NDF D is 2D function of the half-way vector h dominates surface appearance The normal distribution function, NDF, describes the distribution of the microfacet orientations. The BRDF is then proportional to the NDF evaluated at the halfway vector h. Note that the NDF is a 2D spherical function while the BRDF is 4D.

13
**Challenge: Partial Domains**

samples from a single viewing direction cover only a sub-region Ωh of the NDF How to obtain the full NDF? The challenge is that a single-view capture determines the NDF only over a partial region. Our work focuses on how to complete the NDF at each surface point. partial NDF complete NDF partial region
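The coverage gap can be checked numerically: with the view direction fixed and lights swept over the hemisphere, the reachable halfway vectors h = (o+i)/|o+i| cover only part of the hemisphere of normals. This is a sketch; the 60-degree view angle is an arbitrary assumption, not the paper's rig geometry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed view direction o at 60 degrees off the surface normal (assumption).
theta_o = np.radians(60)
o = np.array([np.sin(theta_o), 0.0, np.cos(theta_o)])

# Uniformly sample light directions i over the upper hemisphere.
v = rng.normal(size=(100_000, 3))
i = v / np.linalg.norm(v, axis=1, keepdims=True)
i[:, 2] = np.abs(i[:, 2])

# Halfway vectors reachable from this single view.
h = o + i
h /= np.linalg.norm(h, axis=1, keepdims=True)

# h can lie at most (theta_o + 90)/2 = 75 degrees from o, so a whole band
# of the hemisphere of facet normals is never observed from this view.
max_angle = np.degrees(np.arccos(np.clip(h @ o, -1.0, 1.0)).max())
print(round(max_angle, 1))
```

The observed maximum stays below the 75-degree bound, confirming that a single view leaves an unobserved region of the NDF.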

14
**Solution: Exploit Spatial Redundancy**

find surface points with similar but differently rotated NDFs Our key observation is that many surface points share the same basic NDF, rotated by different angles. Thus different portions of the same NDF are revealed at other surface points. material sample partial NDF at each surface point

15
**Example-Based Microfacet Synthesis**

partial NDFs from other surface points Align Based on this observation, we propose a simple algorithm to complete the NDFs that is similar in spirit to texture synthesis. For each surface point, we align and search for the best match among other surface points. The matched partial NDFs are then merged to obtain the completed NDF. We call this method example-based microfacet synthesis. + + = partial NDF to complete rotated partial NDFs completed NDF
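A toy 1-D analogue of the align-search-merge idea, with NDFs reduced to circular histograms and rotation to a cyclic shift (an illustration of the principle only, not the paper's spherical implementation):

```python
import numpy as np

def best_rotation(partial, example):
    """Find the cyclic shift of `example` that best matches `partial` on its observed bins."""
    obs = ~np.isnan(partial)
    errs = [np.sum((partial[obs] - np.roll(example, r)[obs]) ** 2)
            for r in range(len(example))]
    return int(np.argmin(errs))

def merge(partial, example, r):
    """Fill the unobserved bins of `partial` from the aligned example."""
    out = partial.copy()
    out[np.isnan(out)] = np.roll(example, r)[np.isnan(out)]
    return out

# A shared "true" NDF, seen partially at point A and rotated at point B.
true_ndf = np.exp(-((np.arange(36) - 18.0) / 4.0) ** 2)
partial = true_ndf.copy()
partial[:18] = np.nan                 # point A: only half the bins observed
example = np.roll(true_ndf, 5)        # point B: same NDF, rotated by 5 bins

r = best_rotation(partial, example)   # align
completed = merge(partial, example, r)  # merge to complete the NDF
```

Because the example is an exactly rotated copy, the recovered shift undoes the rotation and the completed histogram reproduces the full underlying NDF.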

16
**Comparison ground truth our model isotropic Ward model**

Here is a comparison of the ground truth, our method, and two parametric models. Our method matches the ground truth, while the results of the parametric models look quite different. isotropic Ward model anisotropic Ward model

17
**Overall Pipeline BRDF Slice Capture Partial NDF Recovery**

Microfacet Synthesis The modeling procedure consists of three steps. First, a flat sample is captured with our device to produce a BRDF slice at each surface point. Then the partial NDFs are recovered. Finally, the NDF at each surface point is completed via microfacet synthesis.

18
**Overall Pipeline BRDF Slice Capture Partial NDF Recovery**

Microfacet Synthesis To capture the BRDF slices from a single view, we adapt the capturing device introduced by Gardner et al. at SIGGRAPH 2003.

19
**Device Setup Camera-LED system, based on [Gardner et al 03]**

Here is the device for capturing the reflectance data from a single view. We use an LED array for lighting and a digital camera for capturing images. The LED array is driven by a stepping motor which is controlled by the computer.

20
Capturing Process During capturing, the LED array is moved step by step and each LED is powered one by one to illuminate the material sample from all directions. For each lighting direction, one image is captured.

21
**Overall Pipeline BRDF Slice Capture Partial NDF Recovery**

Microfacet Synthesis After the data is captured, partial NDFs are recovered from the fixed-view BRDF slices at each surface point.

22
**NDF Recovery invert the microfacet BRDF model , Unknown Unknown**

NDF Shadow Term Measured BRDF With the measured BRDF slice, the microfacet BRDF model can be inverted to solve for the unknown terms: the NDF and the shadow term. Given the measured BRDF and the shadow term, the NDF can be determined; conversely, the shadow term can be derived from the NDF. [Ashikhmin et al 00]

23
**NDF Recovery (con’t) iterative approach [Ngan et al 05] , 1. 2. **

solve for NDF, then shadow term works for complete 4D BRDF data [Ngan et al 05] The existing method recovers the NDF by iteratively solving for one term while holding the other fixed. This method works well on complete 4D BRDF data. [Ashikhmin et al 00]
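The alternation can be sketched with a toy product model in which the measurements are a pointwise product of an NDF vector and a shadow vector, and the shadow term is derived from the NDF by a fixed operator. The `derive_shadow` surrogate below is an assumption for illustration, not the true microfacet shadowing integral of [Ashikhmin et al 00]:

```python
import numpy as np

n = 64
# Toy "true" NDF samples along one slice (a Gaussian lobe plus a floor).
d_true = np.exp(-((np.arange(n) - 20.0) / 6.0) ** 2) + 0.01

def derive_shadow(d):
    """Surrogate operator deriving S from D (illustrative assumption only)."""
    return 1.0 / (1.0 + 0.2 * np.cumsum(d) / d.sum())

s_true = derive_shadow(d_true)
m = d_true * s_true              # the "measured" BRDF slice

# Alternate the two steps of the iterative recovery:
s = np.ones(n)                   # initial guess: no shadowing
for _ in range(60):
    d = m / s                    # 1. solve for the NDF given the shadow term
    s = derive_shadow(d)         # 2. re-derive the shadow term from the NDF
```

With this mildly coupled surrogate the iteration contracts to the consistent (NDF, shadow) pair, mirroring how the alternation behaves on complete data.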

24
**Partial NDF Recovery biased result on incomplete BRDF data**

ground truth [Ngan et al. 05] But we have incomplete data from only a single view. Applying the iterative technique leads to biased results for both the NDF and the shadow term. NDF shadow term NDF shadow term

25
**Partial NDF Recovery (con’t)**

minimize the bias isotropically constrain the shadow term in each iteration To minimize this bias, we constrain the derived shadow term to be isotropic in each step. This constraint doesn't force the NDF to be isotropic, but it makes the fitting process more accurate. before constraint after constraint
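A minimal sketch of the constraint, assuming the shadow term is tabulated on a (theta, phi) grid: projecting onto isotropic (phi-independent) functions is just an azimuthal average, which removes the anisotropic component while preserving the theta dependence:

```python
import numpy as np

# Toy shadow term on a (theta, phi) grid with an anisotropic perturbation
# (the cos(2*phi) term stands in for single-view bias; an assumption).
theta = np.linspace(0, np.pi / 2, 16)
phi = np.linspace(0, 2 * np.pi, 32, endpoint=False)
S = np.cos(theta)[:, None] * (1.0 + 0.2 * np.cos(2 * phi)[None, :])

# Isotropic constraint: replace each theta-row by its azimuthal mean.
# The mean of cos(2*phi) over a full period is zero, so only the
# isotropic cos(theta) profile survives.
S_iso = S.mean(axis=1, keepdims=True) * np.ones_like(S)
```

After the projection every row of `S_iso` is constant in phi, i.e. the shadow term is exactly isotropic.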

26
**Recovered Partial NDF [Ngan et al. 05] ground truth our result**

After introducing the constraint, our result matches the ground truth very well. ground truth our result

27
**Overall Pipeline Capture BRDF slice Partial NDF Recovery**

Microfacet Synthesis In the last step, the partial NDFs recovered at each surface point are completed by microfacet synthesis.

28
**Microfacet Synthesis Merged partial NDFs completed NDF partial NDF**

Microfacet synthesis completes the partial NDF by searching for the best-matching NDF at other surface points. The partial region of the NDF is then grown by merging in the best match. The process is repeated to progressively grow the partial region until it is complete. Searching for the best match is the bottleneck of the synthesis, as it involves massive numbers of comparisons and rotations of spherical functions. partial NDF to complete

29
**Microfacet Synthesis (con’t)**

straightforward implementation: For each of the N NDFs at the surface points Match against the (N-1) NDFs at other points In M rotation angles for alignment number of rotations/comparisons: N^2·M ≈ 5×10^14 (N ≈ 640k, M ≈ 1k) A naïve implementation of the synthesis is very slow. The number of operations taken by brute-force searching is the number of surface points squared times the number of alignment angles. In our experiments, this amounts to hundreds of trillions of spherical-function operations, including rotations and comparisons.

30
**Synthesis Acceleration**

a straightforward implementation: for each of the N NDFs at the surface points, match against the (N-1) NDFs at other points, in M rotation angles for alignment; number of spherical-function rotations and comparisons: N^2·M ≈ 5×10^14 (N ≈ 640k) Clustering [Matusik et al 03] complete representative NDFs only (1% of the full set) In fact, many NDFs are similar and need not be synthesized again and again. We introduce clustering to find a small set of representative NDFs for completion and searching. This provides a speed-up of ten thousand times. N'^2·M ≈ 5×10^10 (N' ≈ 6.4k)

31
**Synthesis Acceleration**

a straightforward implementation: for each of the N NDFs at the surface points, match against the (N-1) NDFs at other points, in M rotation angles for alignment; number of spherical-function rotations and comparisons: N^2·M ≈ 5×10^14 (N ≈ 640k) Clustering [Matusik et al 03] complete representative NDFs only (1% of the full set) Search Pruning precompute all rotated candidates prune via hierarchical searching We also introduce search pruning by precomputing the rotated candidates and organizing them in a hierarchical structure for quick search. This provides another speed-up of a hundred times and eliminates rotation operations during the search. For more details, please refer to our paper. N'^2·M ≈ 5×10^10 (N' ≈ 6.4k) N'·log(N'·M) ≈ 5×10^5
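The quoted counts can be reproduced from the slide's numbers. The exponents are reconstructed from N ≈ 640k, M ≈ 1k, N' ≈ 6.4k, and the log base in the pruned-search cost is an assumption:

```python
import math

# Slide parameters: surface points, rotation angles, cluster representatives.
N, M, Np = 640_000, 1_000, 6_400

brute = N * N * M                  # all pairs, all rotations: ~4e14 operations
clustered = Np * Np * M            # representatives only: ~4e10 operations
pruned = Np * math.log2(Np * M)    # hierarchical search, no rotations: ~1e5

print(f"{brute:.1e} {clustered:.1e} {pruned:.1e}")
```

Clustering shrinks the pair count by exactly (N/N')^2 = 10,000x, and hierarchical pruning removes several more orders of magnitude on top of that.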

32
**Performance Summary 5-10 hours for BRDF slice acquisition in HDR**

1 hour for acquisition in LDR 2-4 hours for image processing 2-3 hours for partial NDF recovery 2-4 hours for accelerated microfacet synthesis We implemented our modeling system on the PC described here. It takes five to ten hours to capture a material sample and another five to ten hours for data processing. On a PC with an Intel Core 2 Quad 2.13GHz CPU and 4GB memory

33
**Model Validation full SVBRDF dataset [Lawrence et al. 06]**

data from one view for modeling data from other views for validation For validation, we tested our method on the fully measured SVBRDF dataset. We extract BRDF slices from one view as the modeling input and reserve the data from the other views for validation.

34
**Validation Result**

Here are rendering results from novel views. Our method matches well for materials with rich spatial variation and anisotropy.

35
**Limitations visual modeling, not physical accuracy**

single-bounce microfacet model retro-reflection not handled spatial redundancy of rotated NDFs easy fix by rotating the sample Our approach has some limitations. Though its output is visually plausible, it may not be physically accurate. The microfacet model we use handles single-bounce reflections only; multiple-bounce effects such as retro-reflection are ignored. We also assume rotated NDFs exist at different surface points. When that isn't the case, an easy fix is to rotate the sample.

36
**Rendering Result: Satin**

Here are some rendering results of materials we've captured with our method. This example is a pillow decorated with yellow satin. Satin exhibits strong anisotropy due to its consistent fiber orientation. Here is a different satin example: the fine details of the needlework are well captured by our method.

37
**Rendering Result: Wood**

Wood exhibits anisotropic reflectance because the dried cells create regular microstructure with consistent orientations. It can also be handled with our approach.

38
**Rendering Result: Brushed Metal**

Here is an example of brushed metal mapped onto a dish model. Fan-shaped highlights are a typical feature of anisotropic metal. Fine details of the brushed tracks are accurately preserved by our method.

39
**Conclusions model surface reflectance via microfacet synthesis**

general and compact representation high resolution (spatial & angular), realistic result easier acquisition: single-view capture cheap device shorter capturing time In conclusion, we have described a novel method to model surface reflectance based on microfacet synthesis with data captured from a single camera view. The captured material model is high resolution over both the spatial and angular dimensions, and provides realistic rendering results. Our method also simplifies acquisition since it is based on single-view capture. This permits a much cheaper device and shorter acquisition times.

40
**Future Work performance optimization extension to non-flat objects**

capturing and data processing extension to non-flat objects extension to multiple light bounces We are also interested in several directions for future work based on this method.

41
**Acknowledgements Le Ma for electronics of the LED array**

Qiang Dai for capturing device setup Steve Lin, Dong Xu for valuable discussions Paul Debevec for HDR imagery Anonymous reviewers for their helpful suggestions and comments Finally, I'd like to thank the many people who helped with this work.

42
Thank you! And thanks for your attention.
