
1
**Understanding the role of phase function in translucent appearance**

Ioannis Gkioulekas (Harvard), Bei Xiao (MIT), Shuang Zhao (Cornell), Edward Adelson (MIT), Todd Zickler (Harvard), Kavita Bala (Cornell). Hi, I’m Yannis Gkioulekas, and I’m going to talk to you about the role of phase function in translucent appearance. This is joint work with Bei Xiao, Shuang Zhao, Ted Adelson, Todd Zickler, and Kavita Bala.

2
**Translucency is everywhere**

food skin jewelry architecture We deal with translucency in many aspects of our everyday life. Our food and skin are translucent; and so are objects as diverse as jewelry and even buildings.

3
**Subsurface scattering**

Slide diagram: incident direction, outgoing direction, isotropic phase function; extinction coefficient σt(λ), absorption coefficient σa(λ), phase function p(λ); radiative transfer equation. Translucent appearance is caused by scattering of light inside material volumes. This process is described by the radiative transfer equation, and is controlled by three material-dependent parameters. Let’s take a closer look. As light travels through a medium, the extinction coefficient controls the distance before a volume event occurs. Then, at each volume event, light may get absorbed with some probability determined by the absorption coefficient. Otherwise, light is scattered, meaning that it travels in a different direction, as determined by the phase function. In general these parameters are functions of wavelength, but we will be ignoring this dependence, dealing only with “grayscale” materials. We will be focusing on the phase function in this talk. The phase function is a probability distribution on the sphere of directions, and is often assumed to be spherically symmetric. We will also use this assumption, and therefore use polar plots to represent phase functions: for light incoming from the left, the plot shows the probability that it will get scattered in each outgoing direction. We show for reference an isotropic phase function, meaning one that scatters equally in all directions. Chandrasekhar 1960
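As an illustrative sketch of how these three parameters act in a Monte Carlo random walk (not code from the talk; the phase-function direction update is left abstract), one volume event can be simulated like this:

```python
import math
import random

def sample_volume_event(sigma_t, sigma_a, rng):
    """One volume event in a homogeneous medium (illustrative sketch).

    Returns (free_flight_distance, absorbed): the distance is exponentially
    distributed with rate sigma_t, and absorption happens with probability
    sigma_a / sigma_t; otherwise the photon scatters into a new direction
    drawn from the phase function (not modeled here).
    """
    distance = -math.log(1.0 - rng.random()) / sigma_t
    absorbed = rng.random() < sigma_a / sigma_t
    return distance, absorbed

# Sanity check: the mean free path should be about 1 / sigma_t,
# and the absorbed fraction about sigma_a / sigma_t.
rng = random.Random(0)
events = [sample_volume_event(2.0, 0.5, rng) for _ in range(100_000)]
mean_free_path = sum(d for d, _ in events) / len(events)
absorbed_frac = sum(a for _, a in events) / len(events)
```

With σt = 2 and σa = 0.5, the empirical mean free path comes out near 0.5 and the absorbed fraction near 0.25, matching the rates described above.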

4
**Phase function is important**

thick parts (diffusion), thin parts. The effect of the phase function on the final appearance of an object depends strongly on its geometry. Consider for example these two renderings of a lucy scene, produced with two different phase functions. In thick object parts, the directionality of scattering is damped out, and the difference between the two images is negligible. However, in thin object parts, such as the wings, we observe differences that help disambiguate between the two phase functions. As most real objects have both thick and thin parts, we argue that the phase function can be critical for translucent appearance.

5
**Common phase functions**

Henyey-Greenstein (HG) lobes: a single-parameter family with g ∈ (−1, 1), where g = μ₁, the average cosine: μ₁ = ⟨cos θ⟩ = ∫₋₁¹ p(cos θ) cos θ d(cos θ). The most commonly used phase function in computer graphics is the Henyey-Greenstein family. These are parabola-like lobes, controlled by a single parameter g. For different values of this parameter, we can have backward-scattering, isotropic, and forward-scattering lobes. Thinking of the phase function as a spherical distribution, the parameter g is equal to the first moment μ₁, also commonly referred to as the average cosine. Henyey and Greenstein 1941
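The relation g = μ₁ can be checked numerically with the standard HG density over cos θ (a small verification sketch, not code from the talk):

```python
import numpy as np

def hg_pdf(mu, g):
    """Henyey-Greenstein density over mu = cos(theta), in the standard form
    normalized so its integral over mu in [-1, 1] equals 1."""
    return 0.5 * (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids depending on a specific NumPy API)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

g = 0.7
mu = np.linspace(-1.0, 1.0, 200_001)
p = hg_pdf(mu, g)
normalization = trapezoid(p, mu)        # should be close to 1
average_cosine = trapezoid(p * mu, mu)  # should be close to g = 0.7
```

Integrating p(cos θ) gives 1, and integrating p(cos θ) cos θ recovers g, confirming the first-moment interpretation of the HG parameter.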

6
**What can we represent with HG?**

marble, white jade, microcrystalline wax. What materials can we create with HG phase functions? We know that we can use them to render marble. However, there are other materials that HG cannot reproduce, such as white jade and wax. Jensen 2001

7
**Henyey-Greenstein is not enough**

soap, microcrystalline wax; photo, HG, setup. Here is a simple demonstration of the limitations of HG phase functions. Consider these two materials, soap and wax, which, though similar, have subtle differences in their appearance that make it possible to tell them apart. To highlight their different scattering behavior, we use a setup as shown: we illuminate thick material slabs with a laser pointer from above at an angle, and capture pictures with a camera. Take a look at the photographs from this setup for the two materials. We have coarsely quantized the photographs to a small number of gray levels to emphasize the shapes of the iso-brightness contours. The two materials produce clearly very different scattering patterns: soap has a forward ellipsoidal pattern, whereas the pattern of wax is more complex. If we use only HG phase functions to characterize these materials, this is the best qualitative match we can produce. We can only create patterns similar to those of soap, and cannot produce the more complex pattern of wax.

8
**Goals: an expanded phase function space, and its role in translucent appearance**

With this as motivation, I will present our work as answering two related questions. First, how can we usefully expand the space of phase function shapes, to be able to represent more materials. And second, since changes in phase function shape lead to non-trivial changes in the final image, how can we understand the relationship between the two?

9
**Expanded phase function space**

Henyey-Greenstein (HG) lobes and von Mises-Fisher (vMF) lobes, each a single-parameter family: g = μ₁ for HG, and κ = 2μ₁/(1 − μ₂) for vMF, where μ₁ = ⟨cos θ⟩ = ∫₋₁¹ p(cos θ) cos θ d(cos θ) is the average cosine and μ₂ = ∫₋₁¹ p(cos θ) cos² θ d(cos θ) is the second moment. We can expand the space of phase functions in multiple ways. Here, we have selected, in addition to HG, to also consider lobes from another single-parameter family, the von Mises-Fisher (vMF) family. The parameter κ of this family depends, in addition to the first moment, on the second moment μ₂ of the phase function, which is a measure of the spread of the distribution of directions. These moments will be important later in our talk. If we compare a vMF and an HG lobe of the same average cosine, we observe that vMF lobes have larger variance and scatter more light sideways.
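The κ = 2μ₁/(1 − μ₂) relation can be verified numerically, assuming the vMF lobe has density proportional to exp(κ cos θ) over the sphere (a sketch; the authors' exact normalization conventions may differ):

```python
import numpy as np

def vmf_pdf(mu, kappa):
    """von Mises-Fisher density over mu = cos(theta), normalized on [-1, 1]:
    p(mu) = kappa * exp(kappa * mu) / (2 * sinh(kappa))."""
    return kappa * np.exp(kappa * mu) / (2.0 * np.sinh(kappa))

def trapezoid(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

kappa = 3.0
mu = np.linspace(-1.0, 1.0, 200_001)
p = vmf_pdf(mu, kappa)
mu1 = trapezoid(p * mu, mu)       # first moment (average cosine)
mu2 = trapezoid(p * mu * mu, mu)  # second moment
kappa_from_moments = 2.0 * mu1 / (1.0 - mu2)  # should recover kappa
```

For κ = 3, the numerically computed moments give 2μ₁/(1 − μ₂) back to within integration error, confirming the moment relation quoted on the slide.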

10
**Expanded phase function space**

soap microcrystalline wax photo HG vMF setup This subtle difference in shape can, however, be very important. Going back to our simple demonstration, we can see that using a vMF phase function, we can better reproduce the complex scattering patterns of wax.

11
**Expanded phase function space**

Henyey-Greenstein (HG) lobes and von Mises-Fisher (vMF) lobes, each a single-parameter family: g = μ₁ for HG, κ = 2μ₁/(1 − μ₂) for vMF. Linear mixtures: HG + HG, HG + vMF, vMF + vMF. In addition to single HG and vMF lobes, we also consider all of their possible linear combinations: HG + HG, HG + vMF, and vMF + vMF. We use this as our expanded phase function space, which is now much larger compared to HG alone.

12
**Redundant phase function space**

Yet, our expanded space is also redundant. Consider for instance these two very different phase functions. Using them to render the same scene produces two visually indistinguishable images. There is a lot of redundancy. So, we would like to find a parameterization of phase function shape that is predictive of its perceptual effect on the final rendered image.

13
**Related work**

Fleming and Bülthoff 2005, Motoyoshi 2010: many perceptual cues, but no study of the phase function. Pellacini et al. 2000, Wills et al. 2009: gloss perception, much smaller space. Ngan et al. 2006: gloss perception, navigation of appearance space. Let me step back to describe some of the work that inspired us. Recently, there has been work in the visual perception community that identifies important cues for translucency but does not study phase functions. We draw inspiration from BRDF studies, and in particular from Pellacini et al. and Wills et al., who both use psychophysical experiments to find low-dimensional embeddings of gloss parameters. However, they study only gloss perception, and deal with much smaller spaces of tens of BRDFs. Ngan et al. also studied gloss perception, using image-driven metrics; in fact, as we will see later, we use their proposed metric to study our space of phase functions.

14
**Our approach: 1. computational processing, 2. psychophysical validation, 3. analysis of results**

1. Computational processing: image-driven analysis. 2. Psychophysical validation: tractable experiment. 3. Analysis of results: visualization, perceptual parameterization. We combine ideas from these two lines of work, and adopt a dual computational-psychophysical approach. As a first step, we use computation and image-driven metrics to process a large set of images. Then, we use a much smaller set of representative images to run a psychophysical experiment, as validation for the results of the computational analysis. Finally, we analyze the results of the computational stage. We begin now by describing the first stage.

15
**Scene design: side-lighting, thin parts and fine details**

Slide labels: thin parts and fine details (mostly low-order scattering); thick body and base (mostly high-order scattering). We first design a scene that captures a reasonable subset of features that have been reported in the past as important for the perception of translucency. Doing so requires selecting a geometry and lighting configuration. We experimented with multiple geometry choices, as described in more detail in the paper, and selected the lucy shape, which includes both thin and thick parts. We also experimented with different lighting conditions, and chose to side-light this shape, resulting in the scene shown here, which has regions where either low-order (left) or high-order (right) scattering dominates the appearance. We also performed experiments for nine other scenes, including one with backlighting, but we will not show these here; please see the paper for details.

16
**Expanded phase function space**

Sample 750+ phase functions from the expanded space (HG lobes, vMF lobes, linear mixtures HG + HG, HG + vMF) → 750+ HDR images (3000 machine hours). Then we use our expanded space to sample a set of more than 750 phase functions that we use for all our experiments. Using this set and the scene we designed, we render one image for each of the phase functions in our set, for a total of more than 750 linear HDR images.

17
**Psychophysics: paired-comparison experiments**

To analyze this set of images, we can use psychophysics. A well-known methodology involves paired-comparison experiments: human subjects are shown triplets of images, a query and two candidates, and are asked to select the candidate image that is most similar to the query.

18
**Psychophysics: 750 images = 200 million comparisons**

Directly applying this methodology to our problem is not possible. The reason is that we are dealing with a much larger space: 750 images correspond to two hundred million judgments, making running such a psychophysical experiment intractable.

19
**Image-driven analysis**

d(I₁, I₂) = ‖ I₁^(1/3) − I₂^(1/3) ‖₂. Instead, we take a different approach, motivated by previous work on opaque BRDFs. For two images rendered with different phase functions, we can use crude computational metrics based on image differences to roughly approximate how a human subject would compare them. We experimented with many such metrics, as described in the paper, and selected the one proposed by Ngan et al., which is the L2-norm difference between the cube roots of linear HDR pixel values.
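A minimal version of this metric, assuming the images are arrays of linear HDR values (any overall scale factor in the original is omitted):

```python
import numpy as np

def cube_root_l2_distance(img_a, img_b):
    """L2 norm of the difference of cube-rooted linear HDR pixel values,
    in the spirit of the metric of Ngan et al. 2006."""
    a = np.cbrt(np.asarray(img_a, dtype=float))
    b = np.cbrt(np.asarray(img_b, dtype=float))
    return float(np.linalg.norm(a - b))

# Example: two one-pixel "images" with linear values 8 and 1 have
# cube roots 2 and 1, so the distance is 1.
d = cube_root_l2_distance([8.0], [1.0])
```

The cube root compresses the HDR range before the L2 comparison, so very bright pixels do not dominate the distance the way they would in a plain L2 metric.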

20
**Computational processing**

750 HDR images → multidimensional scaling → two-dimensional embedding (two-dimensional appearance space). We then use the selected image metric to process the full set of 750 images. Specifically, we perform multidimensional scaling on the set to find a low-dimensional Euclidean embedding. It turns out that the first two principal directions capture 99% of the variance in the dataset, and therefore the embedding is two-dimensional. We visualize it here on the right: each of the 750 dots corresponds to one image in our set, and different dots correspond to images rendered with different phase functions. In this embedding, the Euclidean distance between two points is approximately equal to the distance between the corresponding images under our computational image metric. In this sense, our expanded space of phase functions, as described by our chosen image metric, produces a two-dimensional translucent appearance space.
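Multidimensional scaling from a matrix of pairwise image distances can be sketched as classical MDS (the paper may use a different MDS variant; this is a standard textbook construction):

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical MDS: given an n x n matrix of pairwise distances D,
    return an n x dims Euclidean embedding."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    B = -0.5 * J @ (D ** 2) @ J          # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)     # ascending eigenvalues
    order = np.argsort(evals)[::-1]      # largest (principal) first
    evals, evecs = evals[order], evecs[:, order]
    return evecs[:, :dims] * np.sqrt(np.maximum(evals[:dims], 0.0))

# Distances between three points in the plane are exactly recovered
# by a two-dimensional embedding.
pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, dims=2)
D_embedded = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

The fraction of variance captured by the first two dimensions corresponds to the share of the positive eigenvalue mass in the top two eigenvalues, which is how a "99% in 2D" claim can be checked.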

21
**Our approach 1. Computational processing 2. Psychophysical validation**

We then proceed to the psychophysical validation stage.

22
**Psychophysical validation**

Clustering in the two-dimensional appearance space → 40 representative images. To check the perceptual validity of the appearance space we derived, we first use the same computational image metric to cluster the full image set and select a small number of exemplar images. The 40 exemplars are representative of the appearance variation in the full set.
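One simple way to pick a small set of exemplars from a pairwise-distance matrix is greedy farthest-point selection (a stand-in sketch for this step; the paper's exact clustering procedure may differ):

```python
import numpy as np

def pick_exemplars(D, k, start=0):
    """Greedily select k row indices from an n x n distance matrix so that
    each new exemplar is the point farthest from all exemplars chosen so far."""
    chosen = [start]
    while len(chosen) < k:
        nearest = D[:, chosen].min(axis=1)  # distance to closest exemplar
        chosen.append(int(nearest.argmax()))
    return chosen

# Four points on a line at 0, 1, 10, 11: the two exemplars land in
# different clusters, covering the spread of the set.
x = np.array([0.0, 1.0, 10.0, 11.0])
D = np.abs(x[:, None] - x[None, :])
exemplars = pick_exemplars(D, 2)
```

Any selection scheme that spreads exemplars out under the image metric serves the same purpose here: a small stimulus set that still spans the appearance variation of the full set.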

23
**Psychophysical validation**

750 phase functions = 200 million comparisons; 40 phase functions = 30,000 comparisons. Using the 40 exemplars as stimuli, we can now run a much smaller, tractable psychophysical experiment.

24
**Psychophysical validation**

Perceptual embedding (non-metric MDS on psychophysical data) ≈ computational embedding (MDS using image metrics). This experiment results in the following two-dimensional embedding for the 40 images. Comparing the perceptual and computational embeddings, we observe that they are remarkably similar: the order of the points corresponding to the exemplar images is almost exactly consistent in the two embeddings. Therefore, the results of the computational analysis reasonably match those of the psychophysical experiments. This has two implications. First, we can use the computational embedding as a cheap but perceptually valid proxy for psychophysics. And second, we can now generalize our analysis to all 750 images, and not just the 40 exemplars that we used for the validation experiment.

26
**Our approach: 1. computational processing, 2. psychophysical validation, 3. analysis of results**

Let’s see then what the results of the computational analysis tell us about translucency.

27
**What we know so far: a two-dimensional translucent appearance space**

Perceptual; consistent across variations of material, shape, and illumination. See the paper for: images, 9 more computational embeddings, 2 more psychophysical experiments including backlighting, analysis and statistics. We know so far that our expanded space of phase functions spans an appearance space that is two-dimensional and perceptually valid. Our space is also consistent across some material, shape, and illumination variations; for this last point, see the paper, where we performed a large number of additional computational experiments on 5000 images, as well as additional psychophysical experiments, including one using backlighting.

28
**Moving around the space**

It is useful to take a look at how images change as we move around the embedding.

29
**Moving around the space**

We can choose to move vertically. In this case, we observe that, going from top to bottom, images become more diffused, a change that is similar to increasing the mean free path. moving vertically more diffused appearance

31
**Moving around the space**

If we move horizontally from left to right, we see that images have more glass-like appearance, with increased surface detail. moving horizontally more glass-like appearance

33
**Moving around the space**

We are not constrained to just vertical and horizontal movements. We can move anywhere in the two dimensional space to produce any trade-off between diffused and sharp appearance. we can move anywhere

34
**What can we render with…**

single forward lobes; forward + isotropic mixtures; forward + backward mixtures. What phase functions can we use to reach different points of the appearance space? It turns out that single forward lobes, like HG, can only reach a one-dimensional slice of the embedding, located at the left-most part of the space. Using mixtures of forward lobes with isotropic phase functions does not help much: the part of the space we can reach is essentially the same. It is necessary to use a mixture of forward and backward lobes to produce the full, two-dimensional space.

36
**What can we render with…**

marble ≠ white jade; best approximation with HG + isotropic; white jade with vMF + vMF. This implies that, for a material like marble, which has a very diffused appearance, we can stay at the left part of the space and use a single HG phase function. On the other hand, to render a material such as white jade, which is part diffused, part glassy with sharp details, we need to move to the right. In other words, to render white jade it is necessary to use a composite phase function from our expanded space. Trying to approximate white jade with only HG results in an image that lacks this distinctive mixed appearance.

37
**Editing the phase function**

Move horizontally (more glass-like): 1/√(1 − μ₂). Move vertically (more diffused): μ₁². One can also ask the reverse question: how does the phase function change as we move in the embedding? We observe that horizontal movements from left to right correspond to an increase in the variance of the phase function. Vertical movements from top to bottom correspond to an increase of the average cosine, that is, going from isotropic to forward scattering. In fact, we performed a correlation analysis between the two embedding coordinates and many functionals of moments of the phase function. We found that we can parameterize the dimensions of the appearance space in a perceptually uniform manner as functions of moments of the phase function: the horizontal dimension is uniformly parameterized by 1/√(1 − μ₂), a function of the second moment, and the vertical dimension by μ₁², the square of the average cosine.

38
**Perceptual parameterization**

HG: g = μ₁; g = 0.4, 0.8; move vertically in g. A uniform parameterization allows us to interpolate directly in phase function space, in such a way that the appearance of the resulting images is also interpolated in a perceptually uniform manner. As an example, recall that HG phase functions have a single parameter g equal to the average cosine. For two extreme values of this parameter, using an HG lobe with g equal to their average produces an image that is not a good visual midpoint between the two extremes.

39
**Perceptual parameterization**

HG: g²; 0.32, 0.64; move vertically in g². If, on the other hand, we use g², we see that this parameterization is much more uniform in appearance.
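The reparameterization can be used directly for interpolation. The sketch below takes the perceptual midpoint of two HG lobes by averaging in g² rather than in g (a small illustration under the talk's reported parameterization, not the authors' code):

```python
import math

def interpolate_hg_g(g_start, g_end, t):
    """Interpolate the HG parameter linearly in g^2, the parameterization
    the talk reports as perceptually uniform along the vertical axis."""
    g_squared = (1.0 - t) * g_start ** 2 + t * g_end ** 2
    return math.sqrt(g_squared)

# Midpoint of g = 0.4 and g = 0.8: g^2 = (0.16 + 0.64) / 2 = 0.40,
# so g is about 0.632 rather than the naive average of 0.6.
g_mid = interpolate_hg_g(0.4, 0.8, 0.5)
```

The naive average g = 0.6 sits below the perceptual midpoint; interpolating in g² shifts the midpoint toward the more forward-scattering lobe, matching the uniform-appearance sequence shown on the slide.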

40
**Perceptual parameterization**

Move vertically: HG with g = μ₁ (0.4, 0.8) versus HG with g² (0.32, 0.64). We flip between the two for demonstration. We see that a simple reparameterization of the HG family in terms of g² is perceptually uniform.

41
**Discussion: handling other parameters of appearance (σt, σa, color)**

Need to (further) scale up the methodology; more general or data-driven phase function models (see our SIGGRAPH Asia 2013 paper!); use in translucency editing and design user interfaces. Our study has some limitations, which also provide directions for future research. We only studied the effect of the phase function, but other parameters are also important for translucent appearance, namely the extinction and absorption coefficients and their spectral dependence. To handle these additional parameters, we would need to further scale up our methodology. Second, we considered one of many possible ways to expand the space of phase functions. It would be interesting to also consider other parametric families, or even phase functions measured from real materials; for that, you can look at our forthcoming SIGGRAPH Asia 2013 paper, where we measure the phase functions of all these materials. And finally, it would be interesting to test the utility of the perceptually uniform axes we derived in user interfaces for translucent material editing and design.

42
**Three take-home messages**

white jade, marble; HG is not enough → expanded space; computation + psychophysics → large-scale perceptual studies; 2D appearance space → uniform parameterization. To conclude, we can summarize our contributions in three points. We have shown that HG is not enough, and proposed an expanded space of phase functions that can represent more translucent materials. We introduced a dual methodology combining computation and psychophysics that can be used to run large-scale perceptual studies. And we used this methodology to study our expanded space of phase functions, deriving a two-dimensional translucent appearance space that can be parameterized uniformly using moments of the phase function.

43
**Acknowledgements**

Wenzel Jakob; Bonhams; funding: NSF, NIH, Amazon; dataset of images. We thank Wenzel Jakob for providing the Mitsuba renderer, Bonhams for the white jade photos, and of course our funding agencies. Don’t forget to check the project website, where we provide our full dataset of more than 5000 rendered images. Thank you for your attention.

44
**Computational embeddings**

5000+ more HDR images material variation shape variation lighting variation

45
Scene design

46
**Psychophysical validation**

≈ perceptual embedding computational embedding (non-metric MDS on psych. data) (MDS using image metrics)

47
**Computational metrics**

cube root, L2-norm, L1-norm

48
**Perceptual image metrics**

material variation shape variation lighting variation

49
**Embedding stability: original, perturbation 1, perturbation 2**

50
**Phase function distance metric**

Distance metric: d_w(p₁, p₂) = ∫₀^π ∫₀^π w(θ₁, θ₂) | p₁(θ₁) − p₂(θ₂) | dθ₁ dθ₂ (Davis et al. 2007). Sample 750+ phase functions → MDS.

51
**Non-metric MDS: learning from relative comparisons**

min_{K ⪰ 0} λ‖K‖_* + (1/S) Σ_{s=1}^S L( d_K(i_s, k_s) − d_K(i_s, j_s) + b ), learned from relative comparisons of the form d(query, left) > d(query, right). Non-metric MDS. Wills et al. 2009

Similar presentations

© 2017 SlidePlayer.com Inc.

All rights reserved.

Ads by Google