Understanding the role of phase function in translucent appearance

Presentation transcript:

Understanding the role of phase function in translucent appearance. Ioannis Gkioulekas1, Bei Xiao2, Shuang Zhao3, Edward Adelson2, Todd Zickler1, Kavita Bala3. 1Harvard, 2MIT, 3Cornell. Hi, I'm Yannis Gkioulekas, and I'm going to talk to you about the role of phase function in translucent appearance. This is joint work with Bei Xiao, Shuang Zhao, Ted Adelson, Todd Zickler, and Kavita Bala.

Translucency is everywhere: food, skin, jewelry, architecture. We deal with translucency in many aspects of our everyday life. Our food and skin are translucent, and so are objects as diverse as jewelry and even buildings.

Subsurface scattering: radiative transfer equation; extinction coefficient σt(λ); absorption coefficient σa(λ); phase function p(λ); incident and outgoing directions; isotropic reference lobe. Translucent appearance is caused by scattering of light inside material volumes. This process is described by the radiative transfer equation, and is controlled by three material-dependent parameters. Let's take a closer look. As light travels through a medium, the extinction coefficient controls the distance it covers before a volume event occurs. At each volume event, light may be absorbed, with a probability determined by the absorption coefficient; otherwise, it is scattered, meaning that it continues in a different direction, as determined by the phase function. In general these parameters are functions of wavelength, but we ignore this dependence and deal only with "grayscale" materials. In this talk we focus on the phase function. The phase function is a probability distribution on the sphere of directions, and is often assumed to be spherically symmetric. We adopt this assumption as well, and therefore use polar plots to represent phase functions: for light incoming from the left, the plot shows the probability of scattering into each outgoing direction. For reference, we show an isotropic phase function, one that scatters equally in all directions. Chandrasekhar 1960
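
To make the three parameters concrete, here is a minimal sketch of one volume event in a homogeneous grayscale medium. This is our illustration, not code from the talk; the names (sigma_t, sigma_a, free_flight_distance) are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma_t = 10.0                    # extinction coefficient: events per unit length
sigma_a = 1.0                     # absorption coefficient
albedo = 1.0 - sigma_a / sigma_t  # probability that an event scatters, not absorbs

def free_flight_distance():
    # Distance to the next volume event is exponential with rate sigma_t.
    return rng.exponential(1.0 / sigma_t)

def volume_event():
    # At each event: absorb with probability sigma_a / sigma_t; otherwise
    # scatter into a new direction drawn from the phase function p.
    if rng.random() >= albedo:
        return "absorbed"
    return "scattered"
```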

Phase function is important: thick parts (diffusion) vs. thin parts. The effect of the phase function on the final appearance of an object depends strongly on its geometry. Consider, for example, these two renderings of a Lucy scene, produced with two different phase functions. In thick object parts, the directionality of scattering events is dampened out, and the difference between the two images is negligible. However, in thin object parts, such as the wings, we observe differences that help disambiguate between the two phase functions. As most real objects have both thick and thin parts, we argue that the phase function can be critical for translucent appearance.

Common phase functions: Henyey-Greenstein (HG) lobes, a single-parameter family with $g \in [-1, 1]$ and $g = \mu_1$, where the average cosine is $\mu_1 = \langle \cos\theta \rangle = \int_{-1}^{1} p(\cos\theta)\,\cos\theta \, d(\cos\theta)$. The most commonly used phase function in computer graphics is the Henyey-Greenstein family. These are parabola-like lobes controlled by a single parameter g. For different values of this parameter, we can have backward-scattering, isotropic, and forward-scattering lobes. Thinking of the phase function as a spherical distribution, the parameter g is equal to the first moment $\mu_1$, also commonly referred to as the average cosine. Henyey and Greenstein 1941
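
As a concrete illustration (our own numeric check, not from the talk), the HG density over $\cos\theta$ can be evaluated directly, and its normalization and first moment verified numerically:

```python
import numpy as np

def hg_pdf(mu, g):
    """Henyey-Greenstein density over mu = cos(theta) on [-1, 1]."""
    return 0.5 * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

g = 0.7
mu = np.linspace(-1.0, 1.0, 200001)
p = hg_pdf(mu, g)
print(np.trapz(p, mu))       # ~1.0: the lobe is a proper distribution
print(np.trapz(p * mu, mu))  # ~0.7: the average cosine mu_1 equals g
```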

What can we represent with HG? Marble, white jade, microcrystalline wax. What materials can we create with HG phase functions? We know that we can use HG to render marble. However, there are other materials that HG cannot reproduce, such as white jade and wax. Jensen 2001

Henyey-Greenstein is not enough: soap, microcrystalline wax; photo, HG; setup. Here is a simple demonstration of the limitations of HG phase functions. Consider these two materials, soap and wax, which, though similar, have subtle differences in appearance that make it possible to tell them apart. To highlight their different scattering behavior, we use the setup shown: we illuminate thick material slabs with a laser pointer from above at an angle, and capture pictures with a camera. Take a look at the photographs from this setup for the two materials. We have coarsely quantized the photographs to a small number of gray levels to emphasize the shapes of the iso-brightness contours. The two materials produce clearly different scattering patterns: soap has a forward ellipsoidal pattern, whereas the pattern of wax is more complex. If we use only HG phase functions to characterize these materials, this is the best qualitative match we can produce: we can only create patterns similar to those of soap, and cannot reproduce the more complex pattern of wax.

Goals: an expanded phase function space; its role in translucent appearance. With this as motivation, I will present our work as answering two related questions. First, how can we usefully expand the space of phase function shapes, to be able to represent more materials? And second, since changes in phase function shape lead to non-trivial changes in the final image, how can we understand the relationship between the two?

Expanded phase function space: Henyey-Greenstein (HG) lobes and von Mises-Fisher (vMF) lobes, each a single-parameter family, with $g = \mu_1$ for HG and $\kappa = 2\mu_1 / (1 - \mu_2)$ for vMF, where the average cosine is $\mu_1 = \langle \cos\theta \rangle = \int_{-1}^{1} p(\cos\theta)\,\cos\theta \, d(\cos\theta)$ and the second moment is $\mu_2 = \int_{-1}^{1} p(\cos\theta)\,\cos^2\theta \, d(\cos\theta)$. We can expand the space of phase functions in multiple ways. Here, in addition to HG, we have chosen to also consider lobes from another single-parameter family, the von Mises-Fisher (vMF) family. The parameter κ of this family depends not only on the first moment, but also on the second moment $\mu_2$ of the phase function, which is a measure of the spread of the distribution of directions. These moments will be important later in the talk. If we compare a vMF and an HG lobe of the same average cosine, we observe that the vMF lobe has larger variance and scatters more light sideways.
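
To see the κ relation concretely, here is a small numeric check (our illustration; vmf_pdf is our helper name) that recovers κ from the first two moments of a vMF lobe:

```python
import numpy as np

def vmf_pdf(mu, kappa):
    """von Mises-Fisher density over mu = cos(theta) on [-1, 1]:
    kappa * exp(kappa * mu) / (2 * sinh(kappa)), in a stable form."""
    return kappa * np.exp(kappa * (mu - 1.0)) / (1.0 - np.exp(-2.0 * kappa))

kappa = 5.0
mu = np.linspace(-1.0, 1.0, 200001)
p = vmf_pdf(mu, kappa)
mu1 = np.trapz(p * mu, mu)      # first moment (average cosine)
mu2 = np.trapz(p * mu**2, mu)   # second moment (spread of directions)
print(2.0 * mu1 / (1.0 - mu2))  # ~5.0: the slide's relation kappa = 2*mu1/(1-mu2)
```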

Expanded phase function space: soap, microcrystalline wax; photo, HG, vMF; setup. This subtle difference in shape can, however, be very important. Going back to our simple demonstration, we see that with a vMF phase function we can better reproduce the complex scattering pattern of wax.

Expanded phase function space: Henyey-Greenstein (HG) lobes and von Mises-Fisher (vMF) lobes, single-parameter families with $g = \mu_1$ and $\kappa = 2\mu_1/(1 - \mu_2)$; linear mixtures: HG + HG, HG + vMF, vMF + vMF. In addition to single HG and vMF lobes, we also consider all of their possible linear combinations: HG + HG, HG + vMF, and vMF + vMF. We use this as our expanded phase function space, which is much larger than HG alone.
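
A minimal sketch of how such a two-lobe mixture can be evaluated and sampled (our illustration; the weight and g values are arbitrary, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

def hg_pdf(mu, g):
    return 0.5 * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

def mixture_pdf(mu, w=0.7, g_fwd=0.8, g_bwd=-0.3):
    # Convex combination of a forward and a backward HG lobe.
    return w * hg_pdf(mu, g_fwd) + (1.0 - w) * hg_pdf(mu, g_bwd)

def sample_mixture_lobe(w=0.7):
    # To draw a scattered direction, first pick a lobe with probability w,
    # then sample that lobe's own distribution.
    return "forward" if rng.random() < w else "backward"
```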

Redundant phase function space. Yet, our expanded space is also redundant. Consider, for instance, two very different phase functions: using them to render the same scene produces two visually indistinguishable images. There is a lot of redundancy. So, we would like to find a parameterization of phase function shape that is predictive of its perceptual effect on the final rendered image.

Related work: Fleming and Bülthoff 2005, Motoyoshi 2010 (many perceptual cues; do not study phase function); Pellacini et al. 2000, Wills et al. 2009 (gloss perception; much smaller space); Ngan et al. 2006 (gloss perception; navigation of appearance space). Let me step back to describe some of the work that inspired us. Recently, there has been work in the visual perception community that identifies important cues for translucency, but does not study phase functions. We draw inspiration from BRDF studies, in particular Pellacini et al. and Wills et al., who both use psychophysical experiments to find low-dimensional embeddings of gloss parameters. However, they study only gloss perception, and deal with much smaller spaces, of tens of BRDFs. Ngan et al. also studied gloss perception, using image-driven metrics; in fact, as we will see later, we use their proposed metric to study our space of phase functions.

Our approach: 1. Computational processing (image-driven analysis); 2. Psychophysical validation (tractable experiment); 3. Analysis of results (visualization, perceptual parameterization). We combine ideas from these two lines of work and adopt a dual computational-psychophysical approach. As a first step, we use computation and image-driven metrics to process a large set of images. Then, we use a much smaller set of representative images to run a psychophysical experiment, as validation of the results of the computational analysis. Finally, we analyze the results of the computational stage. We begin by describing the first stage.

Scene design: side-lighting; thin parts and fine details (mostly low-order scattering); thick body and base (mostly high-order scattering). We first design a scene that captures a reasonable subset of the features that have been reported in the past as important for the perception of translucency. Doing so requires selecting a geometry and a lighting configuration. We experimented with multiple geometry choices, described in more detail in the paper, and selected the Lucy shape, which includes both thin and thick parts. We also experimented with different lighting conditions, and chose to side-light this shape, resulting in the scene shown here, which has regions where either low-order (left) or high-order (right) scattering dominates appearance. We also performed experiments for nine other scenes, including one with backlighting, but we will not show these here; please see the paper for details.

Expanded phase function space: sample 750+ phase functions → 750+ HDR images (3000 machine hours). We use our expanded space to sample a set of more than 750 phase functions, which we use for all our experiments. Using this set and the scene we designed, we render one image for each phase function in the set, for a total of more than 750 linear HDR images.

Psychophysics: paired-comparison experiments ("Hmm, left"). To analyze this set of images, we can use psychophysics. A well-known methodology involves paired-comparison experiments: human subjects are shown triplets of images, a query and two candidates, and are asked to select the candidate image that is most similar to the query.

Psychophysics: 750 images = 200 million comparisons. Directly applying this methodology to our problem is not possible, because we are dealing with a much larger space: 750 images correspond to two hundred million judgments, making such a psychophysical experiment intractable to run.

Image-driven analysis: $d(I_1, I_2) \approx \| I_1^{1/3} - I_2^{1/3} \|_2$. Instead, we take a different approach, motivated by previous work on opaque BRDFs. For two images rendered with different phase functions, we can use crude computational metrics based on image differences to roughly approximate how a human subject would compare them. We experimented with many such metrics, as described in the paper, and selected the one proposed by Ngan et al., which is the L2-norm difference between the cube roots of linear HDR pixel values.
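
In code, the selected metric is tiny (a sketch, assuming images are float arrays of linear HDR values with identical shape):

```python
import numpy as np

def image_distance(img_a, img_b):
    """Ngan et al.-style metric: L2 distance between cube roots of
    linear HDR pixel values."""
    diff = np.cbrt(img_a) - np.cbrt(img_b)
    return np.linalg.norm(diff.ravel())
```

The cube root acts as a crude perceptual compression of luminance, similar in spirit to the cube-root nonlinearity in CIELAB lightness.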

Computational processing: image metric $\| I_1^{1/3} - I_2^{1/3} \|_2$ → multidimensional scaling → two-dimensional embedding of the 750 HDR images, a two-dimensional appearance space. We then use the selected image metric to process the full set of 750 images. Specifically, we perform multidimensional scaling on the set to find a low-dimensional Euclidean embedding. It turns out that the first two principal directions capture 99% of the variance in the dataset, and therefore the embedding is two-dimensional. We visualize it here on the right: each of the 750 dots corresponds to one image in our set, and different dots correspond to images rendered with different phase functions. On this embedding, the Euclidean distance between two points is approximately equal to the distance between the corresponding images under our computational image metric. In this sense, our expanded space of phase functions, as described by our chosen image metric, produces a two-dimensional translucent appearance space.
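
For reference, classical multidimensional scaling from a pairwise distance matrix can be sketched in a few lines (our illustration; the paper's exact MDS variant may differ):

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed n points in `dims` dimensions so that Euclidean distances
    approximate the given n-by-n distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1][:dims]    # keep the largest eigenvalues
    return evecs[:, order] * np.sqrt(np.maximum(evals[order], 0.0))
```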

Our approach: 1. Computational processing (image-driven analysis); 2. Psychophysical validation (tractable experiment); 3. Analysis of results (visualization, perceptual parameterization). We then proceed to the psychophysical validation stage.

Psychophysical validation: image metric $\| I_1^{1/3} - I_2^{1/3} \|_2$ → clustering → 40 representative images from the two-dimensional appearance space. To check the perceptual validity of the appearance space we derived, we first use the same computational image metric to cluster the full image set and select a small number of exemplar images. The 40 exemplars are representative of the appearance variation in the full set.
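
One simple way to pick exemplars from a pairwise distance matrix (our sketch, not necessarily the paper's clustering procedure) is greedy farthest-point selection, which spreads exemplars across the space:

```python
import numpy as np

def pick_exemplars(D, k=40):
    """Greedily select k well-spread exemplar indices from an n-by-n
    distance matrix D."""
    chosen = [0]
    while len(chosen) < k:
        d_to_set = D[:, chosen].min(axis=1)   # distance to nearest exemplar
        chosen.append(int(np.argmax(d_to_set)))
    return chosen
```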

Psychophysical validation: 750 phase functions = 200 million comparisons; 40 phase functions = 30,000 comparisons. Using the 40 exemplars as stimuli, we can now run a much smaller, tractable psychophysical experiment.
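
The counts follow from simple combinatorics (our arithmetic; each query image is compared against every unordered pair of the remaining candidates):

```latex
\[
  N\binom{N-1}{2}\Big|_{N=750} = 750\cdot\frac{749\cdot 748}{2} \approx 2.1\times 10^{8},
  \qquad
  N\binom{N-1}{2}\Big|_{N=40} = 40\cdot\frac{39\cdot 38}{2} = 29{,}640 \approx 3\times 10^{4}.
\]
```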

Psychophysical validation: use the computational embedding as a proxy for psychophysics; generalize to all 750 images. Perceptual embedding (non-metric MDS on psychophysical data) ≈ computational embedding (MDS using image metrics). This experiment results in the following two-dimensional embedding for the 40 images. Comparing the perceptual and computational embeddings, we observe that they are remarkably similar: the order of the points corresponding to the exemplar images is almost exactly consistent in the two embeddings. Therefore, the results of the computational analysis reasonably match those of the psychophysical experiments. This has two implications. First, we can use the computational analysis as a cheap but perceptually valid proxy for psychophysics. Second, we can now generalize our analysis to all 750 images, not just the 40 exemplars used in the validation experiment.

Our approach: 1. Computational processing (image-driven analysis); 2. Psychophysical validation (tractable experiment); 3. Analysis of results (visualization, perceptual parameterization). Let's see what the results of the computational analysis tell us about translucency.

What we know so far: the translucent appearance space is two-dimensional, perceptually valid, and consistent across variations of material, shape, and illumination. See the paper for: 5000+ images, 9 more computational embeddings, 2 more psychophysical experiments (including backlighting), and full analysis and statistics. So far we know that our expanded space of phase functions spans an appearance space that is two-dimensional and perceptually valid. The space is also consistent across some material, shape, and illumination variations; for this last point, see our paper, where we performed a large number of additional computational experiments on over 5000 images, as well as an additional psychophysical experiment using backlighting.

Moving around the space. It is useful to take a look at how images change as we move around the embedding.

Moving around the space. We can choose to move vertically. In this case we observe that, going from top to bottom, images become more diffused, a change similar to increasing the mean free path. Moving vertically: more diffused appearance.

Moving around the space. If we move horizontally from left to right, images take on a more glass-like appearance, with increased surface detail. Moving horizontally: more glass-like appearance.

Moving around the space. We are not constrained to just vertical and horizontal movements: we can move anywhere in the two-dimensional space to produce any trade-off between diffused and sharp appearance.

What can we render with… single forward lobes; forward + isotropic mixtures; forward + backward mixtures. What phase functions can we use to reach different points of the appearance space? It turns out that single forward lobes, like HG, can only reach a one-dimensional slice of the embedding, located at the left-most part of the space. Using mixtures of forward lobes with isotropic phase functions does not help much: the part of the space we can reach stays pretty much the same. It is necessary to use a mixture of forward and backward lobes to produce the full, two-dimensional space.

What can we render with… marble ≠ white jade; best approximation with HG + isotropic; white jade with vMF + vMF. This implies that for a material like marble, which has a very diffused appearance, we can stay at the left part of the space and use a single HG phase function. On the other hand, to render a material such as white jade, which is part diffused, part glassy with sharp details, we need to move to the right. In other words, rendering white jade requires a composite phase function from our expanded space. Trying to approximate white jade with only HG results in an image that lacks this distinctive mixed appearance.

Editing the phase function: moving horizontally (more glass-like) tracks $1/\sqrt{1-\mu_2}$; moving vertically (more diffused) tracks $\mu_1^2$. One can also ask the reverse question: how does the phase function change as we move on the embedding? We observe that horizontal movements from left to right correspond to an increase in the variance of the phase function, and vertical movements from top to bottom correspond to an increase of the average cosine, that is, going from isotropic to forward scattering. In fact, we performed a correlation analysis between the two embedding coordinates and many functionals of moments of the phase function, and found that we can parameterize the dimensions of the appearance space in a perceptually uniform manner, as functions of these moments: the horizontal dimension is uniformly parameterized by $1/\sqrt{1-\mu_2}$, an inverse square-root function of the second moment, and the vertical dimension by the squared average cosine $\mu_1^2$.
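
A sketch of this mapping from phase function moments to embedding coordinates (our illustration; the orientation of the axes follows the talk's description):

```python
import numpy as np

def perceptual_coords(mu1, mu2):
    """Map phase-function moments (mu1, mu2) to approximately perceptually
    uniform appearance coordinates, per the parameterization above."""
    x = 1.0 / np.sqrt(1.0 - mu2)  # horizontal axis: larger -> more glass-like
    y = mu1 ** 2                  # vertical axis: larger -> more diffused
    return x, y
```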

Perceptual parameterization: HG, $g = \mu_1$; extremes 0.4 and 0.8; move vertically in g. A uniform parameterization allows us to interpolate directly in phase function space, such that the appearance of the resulting images is also perceptually uniformly interpolated. As an example, recall that HG phase functions have a single parameter g equal to the average cosine. For two extreme values of this parameter, using an HG lobe with g equal to their average produces an image that is not a good visual midpoint between the two extremes.

Perceptual parameterization: HG, $g^2$; values 0.32 and 0.64; move vertically in $g^2$. If, on the other hand, we interpolate in $g^2$, we see that this parameterization is much more uniform in appearance.

Perceptual parameterization: move vertically; HG with $g = \mu_1$ (0.4, 0.8) versus HG with $g^2$ (0.32, 0.64). We flip between the two for demonstration. We see that a simple reparameterization of the HG family in terms of $g^2$ is perceptually uniform.
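
A sketch of the resulting interpolation rule (our illustration, assuming nonnegative g): interpolate in $g^2$, then map back.

```python
import numpy as np

def hg_midpoint(g_a, g_b):
    """Visual midpoint of two HG lobes under the g^2 parameterization."""
    return np.sqrt(0.5 * (g_a**2 + g_b**2))

print(hg_midpoint(0.4, 0.8))  # ~0.63, rather than the naive average 0.6
```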

Discussion: handling other parameters of appearance (σt, σa, color) requires (further) scaling up the methodology; more general or data-driven phase function models (see our SIGGRAPH Asia 2013 paper!); use in translucency editing and design user interfaces. Our study has some limitations, which also provide directions for future research. We only studied the effect of the phase function, but other parameters are also important for translucent appearance, namely the extinction and absorption coefficients and their spectral dependence; handling these additional parameters would require further scaling up our methodology. Second, we considered only one of many possible ways to expand the space of phase functions; it would be interesting to also consider other parametric families, or even phase functions measured from real materials. For that, see our forthcoming SIGGRAPH Asia 2013 paper, where we measure the phase functions of such materials. Finally, it would be interesting to test the utility of the perceptually uniform axes we derived in user interfaces for translucent material editing and design.

Three take-home messages: HG is not enough (expanded space; white jade, marble); computation + psychophysics (large-scale perceptual studies); 2D appearance space (uniform parameterization). To conclude, we can summarize our contributions in three points. We have shown that HG is not enough and proposed an expanded space of phase functions that can represent more translucent materials. We introduced a dual methodology combining computation and psychophysics that can be used to run large-scale perceptual studies. And we used this methodology to study our expanded space of phase functions, deriving a two-dimensional translucent appearance space that can be parameterized uniformly using moments of the phase function.

Acknowledgements: Wenzel Jakob; Bonhams; funding from NSF, NIH, and Amazon; dataset of 5000+ images. We thank Wenzel Jakob for providing the Mitsuba renderer, Bonhams for the white jade photos, and of course our funding agencies. Don't forget to check the project website, where we provide our full dataset of more than 5000 rendered images. Thank you for your attention. http://tinyurl.com/s2013-translucency

Computational embeddings: 5000+ more HDR images; material variation, shape variation, lighting variation.

Scene design

Psychophysical validation: perceptual embedding (non-metric MDS on psychophysical data) ≈ computational embedding (MDS using image metrics).

Computational metrics: cube root, L2-norm, L1-norm.

Perceptual image metrics: material variation, shape variation, lighting variation.

Embedding stability: original, perturbation 1, perturbation 2.

Distance metric: $d_w(p_1, p_2) = \int_0^\pi \int_0^\pi w(\theta_1, \theta_2)\,\big(p_1(\theta_1) - p_2(\theta_2)\big)^2 \, d\theta_1 \, d\theta_2$. Sample 750+ phase functions → MDS. Davis et al. 2007

Non-metric MDS: learning from relative comparisons ($d > d$ judgments; "Hmm, left"): $\min_{K \succeq 0} \; \lambda \|K\|_* + \frac{1}{S} \sum_{s=1}^{S} L\big( d_K(i_s, k_s) - d_K(i_s, j_s) + b \big)$. Wills et al. 2009