Monte Carlo Techniques Basic Concepts

Monte Carlo Techniques Basic Concepts CIS782 Advanced Computer Graphics Raghu Machiraju © Machiraju/Möller

Reading Chapters 13, 14, and 15 of "Physically Based Rendering" by Pharr & Humphreys; Chapter 7 of "Principles of Digital Image Synthesis" by A. Glassner. Please add as you see fit. © Machiraju/Möller

Las Vegas vs. Monte Carlo Two kinds of randomized algorithms: Las Vegas algorithms always give the right answer (but use elements of randomness on the way); Monte Carlo algorithms give the right answer on average. They are useful for integrals that are very difficult to evaluate and have no closed form. © Machiraju/Möller

Monte Carlo Integration Pick a set of evaluation points. Simplifies evaluation of difficult integrals. Accuracy grows as O(N^(-1/2)), i.e. to halve the error we need 4 times as many samples. Artifacts manifest themselves as noise. Research goal: minimize error while minimizing the number of necessary rays. © Machiraju/Möller

Basic Concepts X, Y - random variables, continuous or discrete. Apply a function f to get Y from X: Y = f(X). Example - a die: set of events X_i = {1, 2, 3, 4, 5, 6}, f is the rolling of the die, probability p_i = 1/6. A continuous, uniformly distributed random variable ξ can be used to select the roll of a die. © Machiraju/Möller

Basic Concepts Cumulative distribution function (CDF) P(x) of a random variable X: P(x) = Pr{X ≤ x}. Dice example: P(2) = 1/3, P(4) = 2/3, P(6) = 1. © Machiraju/Möller

Continuous Variable Example - lighting: probability of sampling illumination based on power Φ_i: p_i = Φ_i / Σ_j Φ_j. Canonical uniform random variable ξ takes on all values in [0,1) with equal probability. Easy to create in software (pseudo-random number generator), and we can create general random distributions by starting with ξ (importance sampling). © Machiraju/Möller

Probability Density Function Relative probability of a random variable taking on a particular value. Derivative of the CDF: p(x) = dP(x)/dx. Non-negative and integrates to one: ∫ p(x) dx = 1. Uniform random variable on [a,b]: p(x) = 1/(b-a). © Machiraju/Möller

Cond. Probability, Independence We know that the outcome is in A; what is the probability that it is also in B? Pr(B|A) = Pr(AB)/Pr(A). Independence - knowing A does not help: Pr(B|A) = Pr(B), hence Pr(AB) = Pr(A)Pr(B). [figure: Venn diagram of events A, B, and AB within the event space] © Machiraju/Möller

Expected Value Average value of the function f over some distribution of values p(x) over its domain D: E_p[f(x)] = ∫_D f(x) p(x) dx. Example - cos over [0, π] with p uniform: the positive and negative halves cancel, so the expected value is 0. © Machiraju/Möller

Variance Expected deviation of f from its expected value: V[f(x)] = E[(f(x) - E[f(x)])²]. Fundamental concept for quantifying the error in Monte Carlo (MC) methods. © Machiraju/Möller

Properties E[a·f(x)] = a·E[f(x)], E[Σ_i f(X_i)] = Σ_i E[f(X_i)], V[a·f(x)] = a²·V[f(x)]. Hence we can write: V[f(x)] = E[f(x)²] - E[f(x)]². For independent random variables: V[Σ_i f(X_i)] = Σ_i V[f(X_i)]. © Machiraju/Möller

Uniform MC Estimator All there is to it, really :) Assume we want to compute the integral of f(x) over [a,b]. Assuming uniformly distributed random variables X_i in [a,b] (i.e. p(x) = 1/(b-a)), our MC estimator is F_N = (b-a)/N · Σ_{i=1..N} f(X_i). Makes sense: the average of the f(X_i) values times the width (b-a) is an estimate of the area under the curve. © Machiraju/Möller
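
A minimal C++ sketch of this estimator; the integrand, interval, and sample count below are illustrative choices, not from the slides:

```cpp
#include <cstdio>
#include <random>

// Uniform Monte Carlo estimate of the integral of f over [a,b]:
// F_N = (b-a)/N * sum_i f(X_i), with X_i uniform in [a,b].
double uniformMC(double (*f)(double), double a, double b, int N) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(a, b);
    double sum = 0.0;
    for (int i = 0; i < N; ++i)
        sum += f(u(rng));
    return (b - a) * sum / N;
}

int main() {
    // Example: integral of x^2 over [0,1] is 1/3.
    auto f = [](double x) { return x * x; };
    std::printf("estimate = %f\n", uniformMC(f, 0.0, 1.0, 100000));
}
```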

Simple Integration [figure: simple quadrature of f over [0,1], summing f at regularly spaced sample points] © Machiraju/Möller

Trapezoidal Rule [figure: trapezoidal-rule quadrature of f over [0,1]] © Machiraju/Möller

Uniform MC Estimator The expected value of F_N is the correct integral (with random variable Y = f(X) and p(x) = 1/(b-a)): E[F_N] = (b-a)/N · Σ_i E[f(X_i)] = (b-a) ∫ f(x) p(x) dx = ∫_a^b f(x) dx. © Machiraju/Möller

General MC Estimator Can relax the condition to a general PDF p(x), which is important for efficient evaluation of the integral (random variable Y = f(X)/p(X)): F_N = (1/N) Σ_{i=1..N} f(X_i)/p(X_i). And hence: E[F_N] = (1/N) Σ_i ∫ (f(x)/p(x)) p(x) dx = ∫ f(x) dx. © Machiraju/Möller

Confidence Interval We know we should expect the correct result, but how likely are we to see it? Strong law of large numbers (assuming the Y_i are independent and identically distributed): Pr{ lim_{N→∞} (1/N) Σ_{i=1..N} Y_i = E[Y] } = 1. © Machiraju/Möller

Confidence Interval Rate of convergence: the Chebyshev inequality (which follows from the Markov inequality): Pr{ |F - E[F]| ≥ sqrt(V[F]/δ) } ≤ δ. So, given a desired confidence 1-δ, we can come up with a confidence interval for the value of F. © Machiraju/Möller

MC Estimator How good is it? What's our error? Our (root-mean-square) error is captured by the variance. Variance of F_N in terms of the variance of F: V[F_N] = V[(1/N) Σ_i Y_i] = (1/N²) Σ_i V[Y_i] = V[F]/N. © Machiraju/Möller

MC Estimator Hence our overall error falls as σ[F_N] = σ[F]/sqrt(N), i.e. V[F] measures the square of the RMS error! This result is independent of the dimension of the integral; the deviation of the estimator from its expected value shrinks as O(N^(-1/2)). © Machiraju/Möller

Distribution of the Average Central limit theorem: for large N the average F_N is approximately normally distributed around the true value I = E[f(X)] (the well-known bell curve): Pr{ F_N - I ≤ t·σ/sqrt(N) } → (1/sqrt(2π)) ∫_{-∞}^t e^(-x²/2) dx. [figure: bell curves tightening around E[f(x)] for N = 10, 40, 160] © Machiraju/Möller

Distribution of the Average This can be re-arranged as Pr{ |F_N - I| ≥ t·σ/sqrt(N) } = sqrt(2/π) ∫_t^∞ e^(-x²/2) dx. Hence for t = 3 we can conclude Pr{ |F_N - I| ≥ 3σ/sqrt(N) } ≈ 0.003, i.e. pretty much all results are within three standard deviations (a probabilistic error bound with 0.997 confidence). [figure: bell curves for N = 10, 40, 160 around E[f(x)]] © Machiraju/Möller

Choosing Samples How do we sample random variables? Assume we can sample a uniform distribution; how do we sample general distributions? Three approaches: inversion, rejection, transformation. © Machiraju/Möller

Inversion Method Idea - we want the events to be distributed according to the y-axis (the CDF values), not the x-axis; a uniform distribution there is easy! [figure: PDF and corresponding CDF over [0,1], for both a discrete and a continuous distribution] © Machiraju/Möller

Inversion Method Compute the CDF P(x) (make sure it is normalized!), compute the inverse P⁻¹(y), obtain a uniformly distributed random number ξ, and compute X_i = P⁻¹(ξ). Discrete case as example: stack the probabilities on top of each other from left to right; for each event, use the accumulated probability to its left plus its own probability. [figure: PDF, CDF, and inverted CDF P⁻¹] © Machiraju/Möller
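
A small C++ sketch of the discrete case, assuming a normalized PMF; the linear CDF walk mirrors the "stack probabilities left to right" picture:

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Discrete inversion: accumulate the probabilities into a running CDF,
// then map a uniform xi in [0,1) to the first event whose accumulated
// probability exceeds xi.
int sampleDiscrete(const std::vector<double>& p, double xi) {
    double cdf = 0.0;
    for (size_t i = 0; i < p.size(); ++i) {
        cdf += p[i];              // accumulated probability up to event i
        if (xi < cdf) return (int)i;
    }
    return (int)p.size() - 1;     // guard against rounding when xi ~ 1
}

int main() {
    std::vector<double> p = {0.1, 0.4, 0.2, 0.3};  // normalized PMF
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int counts[4] = {0, 0, 0, 0};
    for (int i = 0; i < 100000; ++i) counts[sampleDiscrete(p, u(rng))]++;
    for (int i = 0; i < 4; ++i) std::printf("event %d: %d\n", i, counts[i]);
}
```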

Example - Power Distribution Used in BSDFs: p(x) ∝ xⁿ. Make sure it is normalized: p(x) = (n+1)·xⁿ on [0,1]. Compute the CDF: P(x) = x^(n+1). Invert the CDF: P⁻¹(ξ) = ξ^(1/(n+1)). Now we can choose a uniform ξ to get a power distribution! © Machiraju/Möller

Example - Exponential Distrib. E.g. Blinn's Fresnel term: p(x) = c·e^(-ax) on [0,∞). Make sure it is normalized: c = a. Compute the CDF: P(x) = 1 - e^(-ax). Invert the CDF: P⁻¹(ξ) = -ln(1-ξ)/a. Now we can choose a uniform ξ to get an exponential distribution! The simpler X = -ln(ξ)/a works too, because 1-ξ is itself uniform, so it gives the same distribution. This extends to arbitrary functions by piecewise approximation. © Machiraju/Möller
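
Sketches of both inversions in C++; samplePower and sampleExponential are hypothetical helper names for illustration:

```cpp
#include <cmath>

// Power distribution p(x) = (n+1) x^n on [0,1]:
// CDF P(x) = x^(n+1), inverse X = xi^(1/(n+1)).
double samplePower(double n, double xi) {
    return std::pow(xi, 1.0 / (n + 1.0));
}

// Exponential distribution p(x) = a e^(-ax) on [0,inf):
// CDF P(x) = 1 - e^(-ax), inverse X = -ln(1-xi)/a.
// Since 1-xi is itself uniform, -ln(xi)/a has the same distribution.
double sampleExponential(double a, double xi) {
    return -std::log(1.0 - xi) / a;
}
```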

Rejection Method Sometimes we cannot integrate p(x), or we can't invert P(x) (e.g. we don't have the function description). Find a q(x) we can sample such that p(x) < c·q(x); c·q(x) is an upper bound on p(x). Dart throwing: choose a pair of random variables (X, ξ) and test whether ξ < p(X)/(c·q(X)). We're just trying to get a distribution of points that falls under the curve p(x) (not to integrate p(x)). © Machiraju/Möller

Rejection Method Essentially we pick a point (X, ξ·c·q(X)): generate X, generate a point above X but below c·q(X), and keep it if it lies beneath p(X), otherwise toss it. Not all points do, which makes the method expensive. Example - sampling a circle via its bounding square: π/4 ≈ 78.5% good samples; a sphere via its bounding cube: π/6 ≈ 52.3% good samples. It gets worse in higher dimensions, which throw out ever more points. [figure: accepted and rejected points scattered around p(x) on [0,1]] © Machiraju/Möller
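
A sketch of dart throwing in C++, here with q uniform on [0,1] (q(x) = 1) and c an upper bound on p; the test density and bound are illustrative:

```cpp
#include <cstdio>
#include <random>

// Rejection sampling: draw X from q, accept with probability p(X)/(c q(X)).
double sampleRejection(double (*p)(double), double c, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (;;) {
        double x  = u(rng);         // candidate from q (uniform here)
        double xi = u(rng);         // height test
        if (xi < p(x) / c)          // point (x, xi*c) lies under p?
            return x;               // accept
        // otherwise reject and try again (this is the expensive part)
    }
}

int main() {
    // Example: p(x) = 2x on [0,1], bounded by c = 2.
    auto p = [](double x) { return 2.0 * x; };
    std::mt19937 rng(7);
    double sum = 0.0;
    for (int i = 0; i < 100000; ++i) sum += sampleRejection(p, 2.0, rng);
    std::printf("mean = %f (expect 2/3)\n", sum / 100000.0);
}
```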

Transforming between Distrib. The inversion method transforms a uniform random distribution into a general distribution. More generally, transform a general X (PDF p_x(x)) into a general Y (PDF p_y(y)). Case 1: Y = y(X), where y(x) must be one-to-one, i.e. monotonic; hence the CDFs agree: P_y(y(x)) = P_x(x). © Machiraju/Möller

Transforming between Distrib. Hence we have for the PDFs: p_y(y) = (dy/dx)⁻¹ · p_x(x); the derivative appears because the transformation stretches or compresses the axis, and for a monotonic y(x), (dy/dx)⁻¹ is just dx/dy. Example: p_x(x) = 2x, Y = sin X gives p_y(y) = p_x(x)/cos x = 2·arcsin(y)/sqrt(1-y²). © Machiraju/Möller

Transforming between Distrib. y(x) is usually not given. However, if we require the transformed variable to have a given target CDF, we can use a generalization of the inversion method: setting P_y(y) = P_x(x) and solving gives y(x) = P_y⁻¹(P_x(x)), which makes the CDFs agree by construction. © Machiraju/Möller

Multiple Dimensions Easily generalized using the Jacobian of Y = T(X): p_y(T(x)) = p_x(x)/|J_T(x)|. Example - polar coordinates: here T maps (r, θ) to (x, y) = (r cos θ, r sin θ), with |J_T| = r, hence p(x, y) = p(r, θ)/r. © Machiraju/Möller

Multiple Dimensions Spherical coordinates: p(x, y, z) = p(r, θ, φ)/(r² sin θ). Now looking at spherical directions: we want the solid angle to be uniformly distributed. With dω = sin θ dθ dφ, the density in terms of φ and θ is p(θ, φ) = p(ω) sin θ. © Machiraju/Möller

Multidimensional Sampling Separable case - independently sample X from p_x and Y from p_y: p(x, y) = p_x(x)·p_y(y). Often this is not possible - compute the marginal density function p(x) first: p(x) = ∫ p(x, y) dy. Then compute the conditional density function (p of y given x): p(y|x) = p(x, y)/p(x). Use 1D sampling with p(x) and p(y|x). © Machiraju/Möller

Sampling of Hemisphere Uniformly, i.e. p(ω) = c: normalizing over the hemisphere gives c = 1/(2π), so p(θ, φ) = sin θ/(2π). Sampling θ first: p(θ) = ∫_0^{2π} p(θ, φ) dφ = sin θ. Now sampling in φ: p(φ|θ) = p(θ, φ)/p(θ) = 1/(2π). © Machiraju/Möller

Sampling of Hemisphere Now we use the inversion technique in order to sample the PDFs: P(θ) = 1 - cos θ, P(φ|θ) = φ/(2π). Inverting these: θ = arccos(1 - ξ₁), φ = 2π ξ₂. © Machiraju/Möller

Sampling of Hemisphere Converting these to Cartesian coordinates: x = sin θ cos φ, y = sin θ sin φ, z = cos θ. A similar derivation yields the full sphere. © Machiraju/Möller
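
The two inversions plus the Cartesian conversion in one C++ sketch (using ξ₁ directly for cos θ, since 1 - ξ₁ is also uniform):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

// Uniform hemisphere sampling, p(w) = 1/(2 pi):
// cos(theta) = xi1 (same distribution as 1 - xi1), phi = 2 pi xi2.
Vec3 uniformSampleHemisphere(double xi1, double xi2) {
    const double PI = 3.14159265358979323846;
    double cosTheta = xi1;
    double sinTheta = std::sqrt(std::max(0.0, 1.0 - cosTheta * cosTheta));
    double phi = 2.0 * PI * xi2;
    return { sinTheta * std::cos(phi),   // x = sin(theta) cos(phi)
             sinTheta * std::sin(phi),   // y = sin(theta) sin(phi)
             cosTheta };                 // z = cos(theta), the "up" axis
}
```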

Sampling a Disk Uniformly: p(x, y) = 1/π, so p(r, θ) = r/π. Sampling r first: p(r) = ∫_0^{2π} p(r, θ) dθ = 2r. Then sampling in θ: p(θ|r) = 1/(2π). Inverting the CDFs: r = sqrt(ξ₁), θ = 2π ξ₂. © Machiraju/Möller

Sampling a Disk The given method distorts the size and shape of compartments (strata) on the disk; Shirley's concentric mapping is a better method. © Machiraju/Möller
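
A sketch of the concentric mapping (after Shirley & Chiu; this compact two-branch form follows the variant used in newer editions of "Physically Based Rendering"):

```cpp
#include <cmath>

// Shirley's concentric mapping: map the unit square to the unit disk so
// that concentric squares go to concentric circles, which keeps strata
// compact instead of distorting them.
void concentricSampleDisk(double xi1, double xi2, double* dx, double* dy) {
    const double PI = 3.14159265358979323846;
    double sx = 2.0 * xi1 - 1.0;       // map [0,1]^2 to [-1,1]^2
    double sy = 2.0 * xi2 - 1.0;
    if (sx == 0.0 && sy == 0.0) { *dx = *dy = 0.0; return; }
    double r, theta;
    if (std::fabs(sx) > std::fabs(sy)) {   // left/right wedges
        r = sx;
        theta = (PI / 4.0) * (sy / sx);
    } else {                               // top/bottom wedges
        r = sy;
        theta = (PI / 2.0) - (PI / 4.0) * (sx / sy);
    }
    *dx = r * std::cos(theta);
    *dy = r * std::sin(theta);
}
```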

Cosine Weighted Hemisphere Our scattering equations are cosine-weighted!! Hence we would like a sampling distribution that reflects that: cosine-distributed, p(ω) = c·cos θ, where normalizing over the hemisphere gives c = 1/π. © Machiraju/Möller

Cosine Weighted Hemisphere We could use marginal and conditional densities, but use Malley's method instead: uniformly generate points on the unit disk, then generate directions by projecting the points on the disk up to the hemisphere above it. [figure: disk area dA projecting to solid angle dω, with dω ∝ dA/cos θ on the hemisphere] © Machiraju/Möller

Cosine Weighted Hemisphere Why does this work? Unit disk: p(r, φ) = r/π. Map to the hemisphere: r = sin θ. The Jacobian of this mapping (r, φ) → (sin θ, φ) is cos θ. Hence: p(θ, φ) = cos θ sin θ/π, i.e. p(ω) = cos θ/π, as desired. © Machiraju/Möller
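
A sketch of Malley's method in C++; here the disk point comes from the simple r = sqrt(ξ₁) mapping, though the concentric mapping above works equally well:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

// Malley's method: sample the unit disk uniformly, then project up to
// the hemisphere; the resulting directions have p(w) = cos(theta)/pi.
Vec3 cosineSampleHemisphere(double xi1, double xi2) {
    const double PI = 3.14159265358979323846;
    // uniform disk sample via r = sqrt(xi1), phi = 2 pi xi2
    double r = std::sqrt(xi1);
    double phi = 2.0 * PI * xi2;
    double x = r * std::cos(phi);
    double y = r * std::sin(phi);
    // project onto the hemisphere: z = sqrt(1 - x^2 - y^2) = cos(theta)
    double z = std::sqrt(std::max(0.0, 1.0 - x * x - y * y));
    return { x, y, z };
}
```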

Performance Measure Key issue of graphics algorithms: the time-accuracy tradeoff! Efficiency measure of Monte Carlo: ε = 1/(V·T), with V the variance and T the rendering time. An algorithm is better if it achieves lower variance in the same time, or is faster for the same variance. Variance reduction techniques wanted! © Machiraju/Möller

Russian Roulette Don’t evaluate integral if the value is small (doesn’t add much!) Example - lighting integral Using N sample direction and a distribution of p(wi) Avoid evaluations where fr is small or q close to 90 degrees © Machiraju/Möller

Russian Roulette We cannot just leave these samples out. With some probability q we replace the sample with a constant c; with probability (1-q) we actually do the normal evaluation, but weight the result accordingly: F' = (F - qc)/(1-q). The expected value works out fine: E[F'] = (1-q)·(E[F] - qc)/(1-q) + q·c = E[F]. © Machiraju/Möller

Russian Roulette It increases variance but improves speed dramatically. Don't pick q too high, though!! © Machiraju/Möller
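
A sketch of the reweighting in C++, where evalF stands in for the expensive evaluation (both it and the constant c are placeholders):

```cpp
#include <random>

// Russian roulette: with probability q replace the sample by a constant c
// (often c = 0); otherwise evaluate and reweight so that the expectation
// is unchanged: E[F'] = q*c + (1-q) * (E[F] - q*c)/(1-q) = E[F].
double russianRoulette(double (*evalF)(), double q, double c,
                       std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (u(rng) < q)
        return c;                          // skip the expensive evaluation
    return (evalF() - q * c) / (1.0 - q);  // reweighted full evaluation
}
```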

Stratified Sampling - Revisited The domain Λ consists of a number of strata Λ_i; take n_i samples in each stratum. General MC estimator per stratum: F_i = (1/n_i) Σ_j f(X_{i,j})/p(X_{i,j}). With v_i the volume of stratum i, the combined estimator is F = Σ_i v_i F_i, with expected value E[F_i] = μ_i, the mean of f over stratum i. Variance for one stratum with n_i samples: V[F_i] = σ_i²/n_i, where σ_i² is the variance of f within stratum i (hence the subscript on σ). © Machiraju/Möller

Stratified Sampling - Revisited Overall estimator / variance: V[F] = Σ_i v_i² σ_i²/n_i. Assuming the number of samples is proportional to the volume of each stratum, n_i = v_i·N: V[F] = (1/N) Σ_i v_i σ_i². Compared to no strata (Q is the mean of f over the whole domain Λ): V[F_N] = (1/N) [Σ_i v_i σ_i² + Σ_i v_i (μ_i - Q)²]. Look at Veach '97 for the derivation of the last equation. Since the second sum is non-negative, stratification can only decrease the variance. © Machiraju/Möller

Stratified Sampling - Revisited Stratified sampling never increases variance. The variance is smallest when f stays close to its mean within each stratum, i.e. pick strata so they reflect local behaviour rather than global behaviour (make them compact). [figure: two stratifications compared - compact strata are better] © Machiraju/Möller
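
A 1D jittered-sampling sketch in C++ (one sample per stratum, i.e. n_i = 1 and v_i = 1/N; the integrand is illustrative):

```cpp
#include <cstdio>
#include <random>

// Stratified 1D estimator: split [0,1] into N strata of width 1/N and
// draw one jittered sample per stratum.
double stratifiedMC(double (*f)(double), int N) {
    std::mt19937 rng(3);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double x = (i + u(rng)) / N;   // jittered sample within stratum i
        sum += f(x);
    }
    return sum / N;
}

int main() {
    auto f = [](double x) { return x * x; };  // integral over [0,1] is 1/3
    std::printf("stratified estimate = %f\n", stratifiedMC(f, 1024));
}
```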

Stratified Sampling - Revisited Improved glossy highlights. [figure: random sampling vs. stratified sampling; the stratified image on the right shows cleaner edges on the specular reflection in the floor] © Machiraju/Möller

Stratified Sampling - Revisited Stratification suffers from the curse of dimensionality (the number of strata grows exponentially with dimension). Alternative - Latin hypercubes: better variance than uniform random, but worse variance than full stratification, because only the one-dimensional projections are stratified, not the full domain. © Machiraju/Möller

Quasi Monte Carlo Doesn't use 'real' random numbers; they are replaced by low-discrepancy sequences. Works well for many techniques, including importance sampling, but doesn't work as well for Russian roulette and rejection sampling, since their accept/reject decisions rely on samples being independent and unpredictable. Better convergence rate than regular MC. © Machiraju/Möller

Bias The bias of an estimator is b = E[F] - ∫ f(x) dx; if b is zero - unbiased, otherwise biased. Example - pixel filtering: the unbiased MC estimator, which assumes choosing image-plane samples uniformly with distribution p, averages w(x_i, y_i)·L(x_i, y_i); biased (regular) filtering instead normalizes by the sum of the filter weights: Σ_i w(x_i, y_i) L(x_i, y_i) / Σ_i w(x_i, y_i). © Machiraju/Möller

Bias Typically the biased estimator has much lower variance, i.e. the biased estimator is preferred. Essentially we are trading bias for variance. © Machiraju/Möller

Importance Sampling MC We can improve our "chances" by sampling areas that we expect to have a great influence; this is called "importance sampling": find a (known) function p that comes close to the function we want to integrate, then evaluate F_N = (1/N) Σ_i f(X_i)/p(X_i) with the X_i drawn from p. © Machiraju/Möller

Importance Sampling MC Crude MC draws uniform samples: F_N = (1/N) Σ_i f(X_i). For importance sampling, we actually "probe" a new function f/p, i.e. we compute our new estimates as F_N = (1/N) Σ_i f(X_i)/p(X_i). © Machiraju/Möller

Importance Sampling MC For which p does this make any sense? Well, p should be close to f. If p = f, every ratio f/p would be 1, but p must integrate to one. Hence, if we choose p = f/m with m = ∫ f(x) dx (i.e. p is the normalized distribution function of f), then we'd get f(X_i)/p(X_i) = m for every sample - each estimate equals the exact integral. © Machiraju/Möller

Optimal Probability Density The variance V[f(x)/p(x)] should be small. Optimal: f(x)/p(x) is constant and the variance is 0, which means p(x) ∝ f(x) with ∫ p(x) dx = 1, i.e. p(x) = f(x) / ∫ f(x) dx. The optimal selection is impossible, since it needs the very integral we are after. In practice: make p large where f is large. © Machiraju/Möller
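
A C++ sketch using the optimal density for f(x) = x² on [0,1]: p(x) = 3x² is f normalized by m = 1/3, so every sample of f/p equals the integral exactly:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const int N = 16;
    std::mt19937 rng(5);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double xi = 1.0 - u(rng);      // in (0,1], avoids p(x) = 0 at x = 0
        double x = std::cbrt(xi);      // inversion: P(x) = x^3, X = xi^(1/3)
        double f = x * x;
        double p = 3.0 * x * x;        // optimal density p = f / (1/3)
        sum += f / p;                  // every sample equals 1/3 exactly
    }
    std::printf("estimate = %f (exact 1/3)\n", sum / N);
}
```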

Are These Optimal? [figure: candidate sampling densities compared against the integrand] © Machiraju/Möller

Importance Sampling MC Since we are drawing random samples distributed according to p, and are actually evaluating f/p in our experiments, the variance of these experiments is V[F_N] = (1/N) ( ∫ (f(x)²/p(x)) dx - I² ), where I = ∫ f(x) dx is the integral we are estimating. This improves the error behavior (just plug in p = f/m). © Machiraju/Möller

Multiple Importance Sampling We have an importance strategy for f and one for g, but how do we sample the product f·g? E.g. for the direct-lighting integrand f_r·L_i·cos θ_i: should we sample according to f_r or according to L_i? Either one alone isn't good - use Multiple Importance Sampling (MIS). © Machiraju/Möller

Multiple Importance Sampling [figure: the same scene rendered by importance sampling f vs. importance sampling L; each strategy fails where the other succeeds] © Machiraju/Möller

Multiple Importance Sampling In order to evaluate ∫ f(x) g(x) dx, pick n_f samples according to p_f and n_g samples according to p_g, and use the new MC estimator: (1/n_f) Σ_i f(X_i)g(X_i)w_f(X_i)/p_f(X_i) + (1/n_g) Σ_j f(Y_j)g(Y_j)w_g(Y_j)/p_g(Y_j). Balance heuristic: w_s(x) = n_s p_s(x) / Σ_k n_k p_k(x); power heuristic: w_s(x) = (n_s p_s(x))^β / Σ_k (n_k p_k(x))^β. β = 2 is a good choice. © Machiraju/Möller
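
Small sketches of both weighting functions in C++, written for two strategies with arbitrary sample counts:

```cpp
// Balance heuristic: w_f(x) = n_f p_f(x) / (n_f p_f(x) + n_g p_g(x)).
double balanceHeuristic(int nf, double pf, int ng, double pg) {
    return (nf * pf) / (nf * pf + ng * pg);
}

// Power heuristic with beta = 2 (Veach's recommended exponent); it
// sharpens the weights where one strategy is clearly better.
double powerHeuristic(int nf, double pf, int ng, double pg) {
    double f = nf * pf, g = ng * pg;
    return (f * f) / (f * f + g * g);
}
```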

MC for global illumination We know the basics of MC. How do we apply them to global illumination? How to apply them to BxDFs? How to apply them to light sources? © Machiraju/Möller

MC for GI - general case General problem - evaluate the scattering integral ∫ f_r(x, ω_i, ω_o) L_i(x, ω_i) cos θ_i dω_i. We don't know much about f_r and L_i, hence use cosine-weighted sampling of the hemisphere in order to find an ω_i (use Malley's method), and make sure that ω_o and ω_i lie in the same hemisphere. © Machiraju/Möller

MC for GI - microfacet BRDFs Typically based on a microfacet distribution (the Fresnel and geometry terms are not statistical measures). Example - Blinn: D(ω_h) ∝ cosⁿ θ_h. We know how to sample such a spherical power distribution: θ_h = arccos(ξ₁^(1/(n+1))), φ_h = 2π ξ₂. This sampling is over ω_h; we need a distribution over ω_i. © Machiraju/Möller

MC for GI - microfacet BRDFs This sampling is over ω_h; we need a distribution over ω_i. Since ω_i is the reflection of ω_o about ω_h, we have θ_i = 2θ_h and φ_i = φ_h (measuring angles about ω_o), which yields dω_i = 4 cos θ_h dω_h and hence p(ω_i) = p(ω_h) / (4 (ω_o · ω_h)). © Machiraju/Möller

MC for GI - microfacet BRDFs Isotropic microfacet model (Blinn): p(ω_i) = ((n+1) cosⁿ θ_h) / (2π · 4 (ω_o · ω_h)). © Machiraju/Möller

MC for GI - microfacet BRDFs Anisotropic model (after Ashikhmin and Shirley), sampled per quarter disk: if e_x = e_y, we recover Blinn's model. © Machiraju/Möller

MC for GI - Specular A delta function needs special treatment: the BRDF contains δ(ω - ω_r). Since p is also a delta function, the delta terms cancel in f/p and the estimator simplifies to a single deterministic evaluation of the reflected direction. © Machiraju/Möller

MC for GI - Multiple BxDF's Sum up the distribution densities: p(ω) = (1/N) Σ_i p_i(ω). Use three uniform samples - the first one determines according to which BxDF to distribute the spherical direction; the other two sample that BxDF. © Machiraju/Möller

Light Sources We need to evaluate two kinds of sampling. S_p: the cone of directions from a point p to the light (for evaluating the rendering equation for direct illumination), i.e. choosing ω_i. S_r: generating random rays from the light source (for bidirectional path tracing or photon mapping). © Machiraju/Möller

Point Lights The source is a point with uniform power in all directions, giving hard shadows. S_p: delta-light source - treat similarly to a specular BxDF. S_r: sampling of a uniform sphere. © Machiraju/Möller

Spot Lights Like a point light, but emits light only into a cone of directions. S_p: like a point light, i.e. a delta function. S_r: sampling of a cone. © Machiraju/Möller

Projection Lights Like a spot light, but with a texture in front of it. S_p: like a spot light, i.e. a delta function. S_r: like a spot light, i.e. sampling of a cone. © Machiraju/Möller

Goniophotometric Lights Like a point light (hard shadows), but with non-uniform power across directions, given by a distribution map. S_p: like a point light - delta-light source, treat similarly to a specular BxDF. S_r: like a point light, i.e. sampling of a uniform sphere. © Machiraju/Möller

Directional Lights Infinite light source, i.e. only one distinct light direction; hard shadows. S_p: like a point light - a delta function. S_r: create a virtual disk of the size of the scene and sample the disk uniformly (e.g. Shirley). © Machiraju/Möller

Area Lights Defined by a shape; soft shadows. S_p: distribution over the solid angle subtended by the shape, p(ω) = r²/(A cos θ_o), where θ_o is the angle between ω_i and the (light) shape normal, A is the area of the shape, and r the distance to it. S_r: sampling over the area of the shape; the sampling distribution depends on the area of the shape, p = 1/A. © Machiraju/Möller

Area Lights If v(p, p') determines the mutual visibility of p and p' (1 if visible, 0 otherwise), the direct lighting integral over the shape becomes L(p, ω_o) = ∫_A f_r(p, ω_i, ω_o) L_e(p', -ω_i) v(p, p') (cos θ_i cos θ_o / |p - p'|²) dA. Hence: sample points p' on the shape and weight by this geometry term. © Machiraju/Möller

Spherical Lights A special area shape: not all of the sphere is visible from outside of the sphere, so only sample the area that is visible from p. S_p: distribution over solid angle - use cone sampling, with the cone's half angle given by cos θ_max = sqrt(1 - (r/|c - p|)²) for a sphere of radius r centered at c. S_r: simply sample a uniform sphere. [figure: point p and the cone of directions subtending the sphere of radius r at center c] © Machiraju/Möller
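
A C++ sketch of uniform cone sampling about the z axis; uniformSampleCone is a hypothetical helper, and the caller is responsible for orienting the cone toward the light:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

// Uniform sampling of the cone of directions around the z axis with half
// angle thetaMax; for a spherical light of radius r seen from p,
// cos(thetaMax) = sqrt(1 - (r/|c-p|)^2).
Vec3 uniformSampleCone(double xi1, double xi2, double cosThetaMax) {
    const double PI = 3.14159265358979323846;
    // interpolate cos(theta) uniformly between 1 and cos(thetaMax),
    // which is uniform in solid angle
    double cosTheta = 1.0 - xi1 * (1.0 - cosThetaMax);
    double sinTheta = std::sqrt(std::max(0.0, 1.0 - cosTheta * cosTheta));
    double phi = 2.0 * PI * xi2;
    return { sinTheta * std::cos(phi),
             sinTheta * std::sin(phi),
             cosTheta };
}
```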

Infinite Area Lights Typically an environment light (spherical) that encloses the whole scene. S_p: if the surface normal is given - cosine-weighted sampling; otherwise - a uniform spherical distribution. S_r: uniformly sample the sphere at two points p1 and p2; the direction p1 - p2 is the uniformly distributed ray. © Machiraju/Möller

Infinite Area Lights Behaves like an area light plus a directional light. [figure: renderings under morning, midday, and sunset skylight environment maps] © Machiraju/Möller