Presentation on theme: "Understanding the role of phase function in translucent appearance"— Presentation transcript:
1. Understanding the role of phase function in translucent appearance. Ioannis Gkioulekas (Harvard), Bei Xiao (MIT), Shuang Zhao (Cornell), Edward Adelson (MIT), Todd Zickler (Harvard), Kavita Bala (Cornell). Hi, I'm Yannis Gkioulekas, and I'm going to talk to you about the role of phase function in translucent appearance. This is joint work with Bei Xiao, Shuang Zhao, Ted Adelson, Todd Zickler, and Kavita Bala.
2. Translucency is everywhere: food, skin, jewelry, architecture. We deal with translucency in many aspects of our everyday life. Our food and skin are translucent; and so are objects as diverse as jewelry and even buildings.
3. Subsurface scattering: radiative transfer equation; incident direction; outgoing direction; extinction coefficient σt(λ); absorption coefficient σa(λ); phase function p(λ); isotropic reference lobe. Translucent appearance is caused by scattering of light inside material volumes. This process is described by the radiative transfer equation, and is controlled by three material-dependent parameters. Let's take a closer look. As light travels through a medium, the extinction coefficient controls the distance before a volume event occurs. Then, at each volume event, light may get absorbed with some probability determined by the absorption coefficient. Otherwise, light is scattered, meaning that it travels in a different direction, as determined by the phase function. In general these parameters are functions of wavelength, but we will be ignoring this dependence, dealing only with "grayscale" materials. We will be focusing on the phase function in this talk. The phase function is a probability distribution on the sphere of directions, and is often assumed to be spherically symmetric. We will also use this assumption, and therefore use polar plots to represent phase functions: for light incoming from the left, the plot shows the probability that it will get scattered in each outgoing direction. We show for reference an isotropic phase function, meaning one that scatters equally in all directions. (Chandrasekhar 1960)
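To make the event-sampling process above concrete, here is a minimal sketch of a single volume interaction in a homogeneous medium. This is my own illustrative code, not the paper's; the function name and interface are assumptions.

```python
import math
import random

def trace_volume_event(sigma_t, sigma_a, rng):
    """One volume interaction (illustrative sketch, not the paper's code).

    Returns (distance, event), where event is 'absorb' or 'scatter'.
    """
    # The extinction coefficient sigma_t controls the free-flight distance
    # before a volume event occurs (exponentially distributed).
    distance = -math.log(1.0 - rng.random()) / sigma_t
    # At the event, light is absorbed with probability sigma_a / sigma_t;
    # otherwise it scatters into a new direction drawn from the phase function.
    event = "absorb" if rng.random() < sigma_a / sigma_t else "scatter"
    return distance, event
```

In a full volume path tracer, the "scatter" branch would draw the new direction from the phase function p and continue the random walk.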
4. Phase function is important: thick parts (diffusion); thin parts. The effect of the phase function on the final appearance of an object depends strongly on its geometry. Consider for example these two renderings of a Lucy scene, produced with two different phase functions. In thick object parts, the directionality of scattering is damped out, and the difference between the two images is small to negligible. However, in thin object parts, such as the wings, we observe differences that help disambiguate between the two phase functions. As most real objects have both thick and thin parts, we argue that the phase function can be critical for translucent appearance.
5. Common phase functions: Henyey-Greenstein (HG) lobes, a single-parameter family with g ∈ [−1, 1] and g = μ1, the average cosine, where μ1 = ⟨cos θ⟩ = ∫₋₁¹ p(cos θ) cos θ d(cos θ). The most commonly used phase function in computer graphics is the Henyey-Greenstein family. These are parabola-like lobes, controlled by a single parameter g. For different values of this parameter, we can have backward-scattering, isotropic, and forward-scattering lobes. Thinking of the phase function as a spherical distribution, the parameter g is equal to the first moment μ1, also commonly referred to as the average cosine. (Henyey and Greenstein 1941)
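As a sanity check on the definitions above, the following sketch evaluates the HG lobe as a density over cos θ and verifies numerically that its average cosine equals g. The function names are mine; this is illustrative code, not from the talk.

```python
import numpy as np

def hg_pdf(cos_theta, g):
    """Henyey-Greenstein lobe as a density over cos(theta),
    normalized to integrate to 1 on [-1, 1]."""
    return 0.5 * (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5

def trapezoid(values, dx):
    """Trapezoid rule on uniformly spaced samples."""
    return dx * (0.5 * (values[0] + values[-1]) + values[1:-1].sum())

g = 0.6
mu = np.linspace(-1.0, 1.0, 20001)
dx = mu[1] - mu[0]
p = hg_pdf(mu, g)
mu1 = trapezoid(mu * p, dx)   # average cosine; should equal g
```

The same quadrature helper can be reused to compute any moment of a phase function expressed as a density over cos θ.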
6. What can we represent with HG? Marble; white jade; microcrystalline wax. What materials can we create with HG phase functions? We know that we can use it to render marble. However, there are other materials which HG cannot reproduce, such as white jade and wax. (Jensen 2001)
7. Henyey-Greenstein is not enough: soap; microcrystalline wax; photo; HG; setup. Here is a simple demonstration of the limitations of HG phase functions. Consider these two materials, soap and wax, which, though similar, have subtle differences in their appearance that make it possible to tell them apart. To highlight their different scattering behavior, we use the setup shown: we illuminate thick material slabs with a laser pointer from above at an angle, and capture pictures with a camera. Take a look at the photographs from this setup for the two materials. We have coarsely quantized the photographs to a small number of gray levels to emphasize the shapes of the iso-brightness contours. The two materials produce clearly very different scattering patterns: soap has a forward ellipsoidal pattern, whereas the pattern of wax is more complex. If we use only HG phase functions to characterize these materials, this is the best qualitative match we can produce: we can only create patterns similar to those of soap, and cannot reproduce the more complex pattern of wax.
8. Goals: an expanded phase function space; its role in translucent appearance. With this as motivation, I will present our work as answering two related questions. First, how can we usefully expand the space of phase function shapes, to be able to represent more materials? And second, since changes in phase function shape lead to non-trivial changes in the final image, how can we understand the relationship between the two?
9. Expanded phase function space: Henyey-Greenstein (HG) lobes and von Mises-Fisher (vMF) lobes, each a single-parameter family, with g = μ1 for HG and κ = 2μ1 / (1 − μ2) for vMF, where μ1 = ⟨cos θ⟩ = ∫₋₁¹ p(cos θ) cos θ d(cos θ) is the average cosine and μ2 = ∫₋₁¹ p(cos θ) cos²θ d(cos θ) is the second moment. We can expand the space of phase functions in multiple ways. Here, we have chosen, in addition to HG, to also consider lobes from another single-parameter family, the von Mises-Fisher (vMF) family. The parameter κ of this family depends not only on the first moment, but also on the second moment μ2 of the phase function, which is a measure of the spread of the distribution of directions. These moments will be important later in our talk. If we compare a vMF and an HG lobe of the same average cosine, we observe that vMF lobes have larger variance and scatter more light sideways.
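The vMF relation above can be checked numerically. The sketch below (function names mine) treats the spherically symmetric vMF lobe as the density f(μ) = κ e^(κμ) / (2 sinh κ) over μ = cos θ, and recovers κ from the first two moments via the slide's formula:

```python
import numpy as np

def vmf_pdf(cos_theta, kappa):
    """von Mises-Fisher lobe as a density over cos(theta):
    f(mu) = kappa * exp(kappa * mu) / (2 * sinh(kappa))."""
    return kappa * np.exp(kappa * cos_theta) / (2.0 * np.sinh(kappa))

def moments(pdf_values, mu, dx):
    """First and second moments of a density over cos(theta) in [-1, 1]."""
    trap = lambda f: dx * (0.5 * (f[0] + f[-1]) + f[1:-1].sum())
    return trap(mu * pdf_values), trap(mu * mu * pdf_values)

kappa = 4.0
mu_grid = np.linspace(-1.0, 1.0, 20001)
dx = mu_grid[1] - mu_grid[0]
mu1, mu2 = moments(vmf_pdf(mu_grid, kappa), mu_grid, dx)
kappa_recovered = 2.0 * mu1 / (1.0 - mu2)   # the slide's relation
```

For this density the relation is in fact exact: integrating μ² f(μ) by parts gives μ2 = 1 − 2μ1/κ, which rearranges to κ = 2μ1 / (1 − μ2).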
10. Expanded phase function space: soap; microcrystalline wax; photo; HG; vMF; setup. This subtle difference in shape can, however, be very important. Going back to our simple demonstration, we can see that using a vMF phase function, we can better reproduce the complex scattering pattern of wax.
11. Expanded phase function space: Henyey-Greenstein (HG) lobes (g = μ1) and von Mises-Fisher (vMF) lobes (κ = 2μ1 / (1 − μ2)), plus linear mixtures: HG + HG, HG + vMF, vMF + vMF. In addition to single HG and vMF lobes, we also consider all of their possible linear combinations: HG + HG, HG + vMF, and vMF + vMF. We use this as our expanded phase function space, which is now much larger compared to HG alone.
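One useful property of these linear mixtures, checked below as my own quick verification rather than a claim from the talk, is that their moments are the same convex combination of the component moments; this is what lets mixtures reach combinations of μ1 and μ2 that no single lobe provides:

```python
import numpy as np

def hg_pdf(cos_theta, g):
    """Henyey-Greenstein lobe as a density over cos(theta)."""
    return 0.5 * (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5

def first_moment(pdf_values, mu, dx):
    """Average cosine of a density over cos(theta), by trapezoid rule."""
    return dx * (0.5 * (mu[0] * pdf_values[0] + mu[-1] * pdf_values[-1])
                 + (mu[1:-1] * pdf_values[1:-1]).sum())

mu = np.linspace(-1.0, 1.0, 20001)
dx = mu[1] - mu[0]
w = 0.6
# A forward lobe (g = 0.6) mixed with a backward lobe (g = -0.3).
mixture = w * hg_pdf(mu, 0.6) + (1.0 - w) * hg_pdf(mu, -0.3)
mu1_mix = first_moment(mixture, mu, dx)   # should be w*0.6 + (1-w)*(-0.3)
```

The g values and weight here are arbitrary illustrations, not parameters from the paper's sampled set.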
12. Redundant phase function space. Yet, our expanded space is also redundant. Consider for instance these two very different phase functions. Using them to render the same scene produces two visually indistinguishable images. There is a lot of redundancy. So, we would like to find a parameterization of phase function shape that is predictive of its perceptual effect on the final rendered image.
13. Related work: Fleming and Bülthoff 2005, Motoyoshi 2010 (many perceptual cues; do not study phase function); Pellacini et al. 2000, Wills et al. 2009 (gloss perception; much smaller space); Ngan et al. 2006 (gloss perception; navigation of appearance space). Let me step back to describe some of the work that inspired us. Recently, there has been work in the visual perception community that identifies important cues for translucency, but does not study phase functions. We draw inspiration from BRDF studies, and in particular from Pellacini et al. and Wills et al., who both use psychophysical experiments to find low-dimensional embeddings of gloss parameters. However, they study only gloss perception, and deal with much smaller spaces, specifically tens of BRDFs. Ngan et al. also studied gloss perception, using image-driven metrics. In fact, as we will see later, we use their proposed metric to study our space of phase functions.
14. Our approach: 1. Computational processing (image-driven analysis); 2. Psychophysical validation (tractable experiment); 3. Analysis of results (visualization, perceptual parameterization). We combine ideas from these two lines of work, and adopt a dual computational-psychophysical approach. At a first step, we use computation and image-driven metrics to process a large set of images. Then, we use a much smaller set of representative images to run a psychophysical experiment, as validation for the results of the computational analysis. Finally, we analyze the results of the computational stage. We begin now by describing the first stage.
15. Scene design: side-lighting; thin parts and fine details (mostly low-order scattering); thick body and base (mostly high-order scattering). We first design a scene that captures a reasonable subset of features that have been reported in the past as important for the perception of translucency. Doing so requires selecting a geometry and lighting configuration. We have experimented with multiple geometry choices, as described in more detail in the paper, and we have selected the Lucy shape, which includes both thin and thick parts. We also experimented with different lighting conditions, and chose to side-light this shape, resulting in the scene shown here, which has regions where either low-order (left) or high-order (right) scattering is dominant in appearance. We also performed experiments for nine other scenes, including one with backlighting, but we will not show these here; please see the paper for details.
16. Expanded phase function space: sample 750+ phase functions; 750+ HDR images; 3000 machine hours; Henyey-Greenstein (HG) lobes, von Mises-Fisher (vMF) lobes, linear mixtures: HG + HG, HG + vMF. Then we use our expanded space to sample a set of more than 750 phase functions that we use for all our experiments. Using this set and the scene we designed, we render one image for each of the phase functions in our set, for a total of more than 750 linear HDR images.
17. Psychophysics: paired-comparison experiments ("Hmm, left"). To analyze this set of images, we can use psychophysics. A well-known methodology involves paired-comparison experiments: human subjects are shown triplets of images, a query and two candidates, and are asked to select the candidate image that is most similar to the query.
18. Psychophysics: 750 images = 200 million comparisons. Directly applying this methodology to our problem is not possible. The reason is that we are dealing with a much larger space: 750 images correspond to two hundred million judgments, making such a psychophysical experiment intractable.
19. Image-driven analysis: d(I, J) ≈ ‖ I^(1/3) − J^(1/3) ‖₂. Instead, we take a different approach, motivated by previous work on opaque BRDFs. For two images rendered with different phase functions, we can use crude computational metrics based on image differences to roughly approximate how a human subject would compare them. We have experimented with many such metrics, as we describe in the paper, and selected the one proposed by Ngan et al., which is equal to the L2-norm of the difference between the cube roots of linear HDR pixel values.
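A minimal implementation of this metric might look like the following; the function name is my own, and this is a sketch of the formula as described, not code from the paper:

```python
import numpy as np

def ngan_distance(img_a, img_b):
    """Image-driven metric from Ngan et al. as used in the talk:
    L2 norm of the difference between cube roots of linear HDR pixels."""
    return float(np.linalg.norm(np.cbrt(img_a) - np.cbrt(img_b)))
```

The cube root compresses the HDR range before the pixelwise comparison, roughly mimicking a perceptual response curve.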
20. Computational processing: 750 HDR images; image metric ‖ I^(1/3) − J^(1/3) ‖₂; multidimensional scaling; two-dimensional embedding; two-dimensional appearance space. We then use the selected image metric to process the full set of 750 images. Specifically, we perform multidimensional scaling on the set, to find a low-dimensional Euclidean embedding. It turns out that the first two principal directions capture 99% of the variance in the dataset, and therefore the embedding is two-dimensional. We visualize it here to the right: each of the 750 dots corresponds to one image in our set, and different dots correspond to images rendered with different phase functions. On this embedding, the Euclidean distance between two points is approximately equal to the distance between the corresponding images under our computational image metric. In this sense, our expanded space of phase functions, as described by our chosen image metric, produces a two-dimensional translucent appearance space.
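The embedding step can be sketched with classical (Torgerson) MDS. The talk does not specify which MDS variant was used for the computational embedding, so treat this as one plausible implementation under that assumption:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed points in `dim` dimensions so that their Euclidean distances
    approximate the given pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:dim]         # keep the largest `dim`
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

For the talk's data, D would hold the pairwise image-metric distances between the renderings; the statement that two directions capture 99% of the variance corresponds to the top two eigenvalues of B dominating the spectrum.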
21. Our approach: 1. Computational processing (image-driven analysis); 2. Psychophysical validation (tractable experiment); 3. Analysis of results (visualization, perceptual parameterization). We then proceed to the psychophysical validation stage.
22. Psychophysical validation: clustering under the image metric; 40 representative images; two-dimensional appearance space. To check the perceptual validity of the appearance space we derived, we first use the same computational image metric to cluster the full image set and select a small number of exemplar images. The 40 exemplars are representative of the appearance variation in the full set.
23. Psychophysical validation: 750 phase functions = 200 million comparisons; 40 phase functions = 30,000 comparisons. Using the 40 exemplars as stimuli, we can now run a much smaller, tractable psychophysical experiment.
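For reference, here is one plausible way the slide's counts arise, assuming every image serves as the query against every unordered pair of the remaining candidates; the talk does not spell out the formula, so this reconstruction is mine:

```python
from math import comb

def n_comparisons(n):
    """Each of n images is a query compared against every unordered pair
    of the remaining n - 1 candidate images."""
    return n * comb(n - 1, 2)
```

Under this counting, 750 images give about 210 million judgments (the slide's "200 million"), while 40 exemplars give 29,640 (the slide's "30,000").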
24. Psychophysical validation: perceptual embedding (non-metric MDS on psychophysical data) ≈ computational embedding (MDS using image metrics); use the computational embedding as a proxy for psychophysics; generalize to all 750 images. This experiment results in the following two-dimensional embedding for the 40 images. By comparing the perceptual and computational embeddings, we observe that they are remarkably similar. Namely, the ordering of the points corresponding to the exemplar images is almost exactly consistent in the two embeddings. Therefore, the results of the computational analysis reasonably match those of the psychophysical experiments. This has two implications. First, that we can use the computational analysis as a cheap but perceptually valid proxy for psychophysics. And second, that we can now generalize our analysis to all 753 images, and not just the 40 exemplars that we have used for the validation experiment.
26. Our approach: 1. Computational processing (image-driven analysis); 2. Psychophysical validation (tractable experiment); 3. Analysis of results (visualization, perceptual parameterization). Let's see then what the results of the computational analysis tell us about translucency.
27. What we know so far: the translucent appearance space is two-dimensional and perceptual; it is consistent across variations of material, shape, and illumination; see paper for images, 9 more computational embeddings, 2 more psychophysical experiments including backlighting, analysis and statistics. We know so far that our expanded space of phase functions spans an appearance space that is two-dimensional and perceptually valid. Our space is also consistent across some material, shape, and illumination variations; for this last point, you can look at our paper, where we performed a large number of additional computational experiments on 5000 images, as well as an additional psychophysical experiment using backlighting.
28. Moving around the space. It is useful to take a look at how images change as we move around the embedding.
29. Moving around the space: moving vertically, more diffused appearance. We can choose to move vertically. In this case, we observe that, going from top to bottom, images become more diffused, a change that is similar to increasing the mean free path.
31. Moving around the space: moving horizontally, more glass-like appearance. If we move horizontally from left to right, we see that images take on a more glass-like appearance, with increased surface detail.
33. Moving around the space: we can move anywhere. We are not constrained to just vertical and horizontal movements. We can move anywhere in the two-dimensional space to produce any trade-off between diffused and sharp appearance.
34. What can we render with… single forward lobes; forward + isotropic mixtures; forward + backward mixtures. What phase functions can we use to reach different points of the appearance space? It turns out that single forward lobes, like HG, can only reach a one-dimensional slice of the embedding, located at the left-most part of the space. Using mixtures of forward lobes with isotropic phase functions does not help much: the part of the space we can reach is pretty much the same. It is necessary to use a mixture of forward and backward lobes to produce the full, two-dimensional space.
36. What can we render with… marble ≠ white jade; best approximation with HG + isotropic; with vMF + vMF. This implies that, for a material like marble, which has a very diffused appearance, we can stay at the left part and use a single HG phase function. On the other hand, to render a material such as white jade, which is part diffused, part glassy with sharp details, we need to move to the right. In other words, to render white jade it is necessary to use a composite phase function from our expanded space. Trying to approximate white jade with only HG results in an image that lacks this distinctive mixed appearance.
37. Editing the phase function: moving horizontally, more glass-like, parameterized by 1/√(1 − μ2); moving vertically, more diffused, parameterized by μ1². One can also ask the reverse question: how does the phase function change as we move on the embedding? We observe that horizontal movements from left to right correspond to an increase in the variance of the phase function. Vertical movements from top to bottom correspond to an increase of the average cosine, that is, going from isotropic to forward scattering. In fact, we performed a correlation analysis between the two embedding coordinates and many functionals of moments of the phase function. We found that we can parameterize the dimensions of the appearance space in a perceptually uniform manner, as functions of moments of the phase function. The horizontal dimension is uniformly parameterized by 1/√(1 − μ2), the inverse square root of a function of the second moment. The vertical dimension is uniformly parameterized by μ1², the square of the average cosine.
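Reading the slide's formulas literally, the mapping from a phase function to appearance-space coordinates could be sketched as follows. The functional forms are the paper's reported result as I read them from the slide; the function names and quadrature are mine:

```python
import numpy as np

def hg_pdf(cos_theta, g):
    """Henyey-Greenstein lobe as a density over cos(theta)."""
    return 0.5 * (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5

def perceptual_coords(pdf_values, mu, dx):
    """Map a phase function to the talk's appearance coordinates:
    horizontal ~ 1 / sqrt(1 - mu2), vertical ~ mu1 ** 2."""
    trap = lambda f: dx * (0.5 * (f[0] + f[-1]) + f[1:-1].sum())
    mu1 = trap(mu * pdf_values)
    mu2 = trap(mu * mu * pdf_values)
    return 1.0 / np.sqrt(1.0 - mu2), mu1 ** 2

mu = np.linspace(-1.0, 1.0, 20001)
dx = mu[1] - mu[0]
x, y = perceptual_coords(hg_pdf(mu, 0.5), mu, dx)
```

For an HG lobe the second moment has the closed form μ2 = (1 + 2g²)/3, which gives a quick correctness check.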
38. Perceptual parameterization: HG, g = μ1; move vertically; g = 0.4 and 0.8. Uniform parameterization allows us to interpolate directly in phase function space, in such a way that the appearance of the resulting images is also perceptually uniformly interpolated. As an example of this, recall that HG phase functions have a single parameter g equal to the average cosine. For two extreme values of this parameter, using an HG of g equal to their average produces an image that is not a good visual midpoint between the two extremes.
39. Perceptual parameterization: HG, g²; move vertically; g² = 0.32 and 0.64. If, on the other hand, we use g squared, we see that this parameterization is much more uniform in appearance.
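A perceptually uniform HG interpolation would then step uniformly in g² and map back to g. A minimal sketch, with my own naming:

```python
import math

def hg_interp(g_start, g_end, t):
    """Interpolate the HG parameter uniformly in g**2, following the
    talk's reparameterization, with t in [0, 1]."""
    return math.sqrt((1.0 - t) * g_start ** 2 + t * g_end ** 2)
```

Between g = 0.4 and g = 0.8, for example, this places the visual midpoint at sqrt((0.16 + 0.64) / 2) ≈ 0.632 rather than at the naive average 0.6.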
40. Perceptual parameterization: move vertically; HG, g = μ1 (g = 0.4, 0.8) versus HG, g² (g² = 0.32, 0.64). We flip between the two for demonstration. We see then that a simple reparameterization of the HG family in terms of g squared is perceptually uniform.
41. Discussion: handling other parameters of appearance (σt, σa, color) requires (further) scaling up the methodology; more general or data-driven phase function models (see our SIGGRAPH Asia 2013 paper!); use in translucency editing and design user interfaces. Our study has some limitations, which also provide directions for future research. We only studied the effect of the phase function, but other parameters are also important for translucent appearance, namely the extinction and absorption coefficients, and their spectral dependence. To handle these additional parameters, we would need to further scale up our methodology. Second, we considered one of many possible ways to expand the space of phase functions. It would be interesting to also consider other parametric families, or even phase functions measured from real materials. For that, you can look at our forthcoming SIGGRAPH Asia 2013 paper, where we measure the phase functions of such materials. And finally, it would be interesting to test the utility of the perceptually uniform axes we derived in user interfaces for translucent material editing and design.
42. Three take-home messages: HG is not enough (expanded space; white jade, marble); computation + psychophysics (large-scale perceptual studies); 2D appearance space (uniform parameterization). To conclude, we can summarize our contributions in three points. We have shown that HG is not enough, and proposed an expanded space of phase functions that can represent more translucent materials. We introduced a dual methodology combining computation and psychophysics that can be used to run large-scale perceptual studies. We used this methodology to study our expanded space of phase functions, and derived a two-dimensional translucent appearance space, which can be parameterized uniformly using moments of the phase function.
43. Acknowledgements: Wenzel Jakob; Bonhams (white jade, marble); funding: NSF, NIH, Amazon; dataset of images. We thank Wenzel Jakob for providing the Mitsuba renderer, Bonhams for the white jade photos, and of course our funding agencies. Don't forget to check the project website, where we provide our full dataset of more than 5000 rendered images. Thank you for your attention.
44. Computational embeddings: 5000+ more HDR images; material variation; shape variation; lighting variation.