
1 Computational Methods in Physics PHYS 3437
Dr Rob Thacker, Dept of Astronomy & Physics (MM-301C), thacker@ap.smu.ca

2 Today's Lecture
Introduction to Monte Carlo methods
Background
Integration techniques

3 Introduction
Monte Carlo refers to the use of random numbers to model random events, which may in turn model a mathematical or physical problem
Typically, MC methods require many millions of random numbers
Of course, computers cannot actually generate truly random numbers
However, we can make the period of repetition absolutely enormous
Such pseudo-random number generators are based on arithmetic that truncates numbers to a fixed number of significant digits
See Numerical Recipes, p 266-280 (2nd edition, FORTRAN)
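
As a toy illustration of how such generators work (a minimal sketch only, not the generator used in the course; the constants are the widely quoted Knuth/Numerical Recipes "quick and dirty" values), a linear congruential generator keeps only the low-order bits of each multiply-and-add:

```python
# Minimal linear congruential generator (LCG) sketch in Python.
# Constants a, c, m are the widely quoted "quick and dirty" values;
# serious work should use a well-tested library generator instead.
class LCG:
    def __init__(self, seed=12345):
        self.state = seed
        self.a, self.c, self.m = 1664525, 1013904223, 2**32

    def uniform(self):
        # Keeping only the low 32 bits truncates the product; this is
        # what makes the sequence deterministic but pseudo-random,
        # with a period of at most m = 2^32.
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m   # random deviate in [0, 1)

rng = LCG(seed=42)
print([round(rng.uniform(), 4) for _ in range(5)])
```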

4 "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." – John von Neumann

5 History of numerical Monte Carlo methods
Another contribution to numerical methods related to research at Los Alamos
Late 1940s: scientists want to follow the paths of neutrons through various sub-atomic collision events
Ulam & von Neumann suggest using random sampling to estimate this process
100 events can be calculated in 5 hours on ENIAC
The method is given the name Monte Carlo by Nicholas Metropolis
Explosion of inappropriate use in the 1950s gave the technique a bad name
Subsequent research illuminated when the method was appropriate

6 Terminology
Random deviate – a number from a distribution chosen uniformly on [0,1]
Normal deviate – a number chosen randomly on (-∞, ∞) weighted by a Gaussian

7 Background to MC integration
Suppose we have a definite integral I = ∫_a^b f(x) dx
Given a good set of N sample points {x_i} we can estimate the integral as I ≈ [(b-a)/N] Σ_{i=1}^N f(x_i)
[Figure: sample points on [a,b], e.g. x_3, x_9; each sample point yields an element of the integral of width (b-a)/N and height f(x_i)]

8 What MC integration really does
While the previous explanation is a reasonable interpretation of the way MC integration works, the most popular explanation is that MC sampling measures the average height of f, so that I = (b-a)⟨f⟩
[Figure: on [a,b], heights given by random samples of f(x), then averaged]

9 Mathematical Applications
Let's formalize this just a little bit…
Since by the mean value theorem ∫_a^b f(x) dx = (b-a)⟨f⟩
We can approximate the integral by calculating (b-a)⟨f⟩, and we can calculate ⟨f⟩ by averaging many values of f(x): ⟨f⟩ ≈ (1/N) Σ_{i=1}^N f(x_i)
where x_i ∈ [a,b] and the values are chosen randomly
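
A minimal sketch of this estimator in Python (NumPy assumed; the function and variable names are illustrative):

```python
import numpy as np

def mc_integrate(f, a, b, n, seed=0):
    """Estimate the integral of f on [a, b] as (b - a) * <f>."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n)         # N random deviates x_i in [a, b]
    return (b - a) * np.mean(f(x))   # (b - a) times the average of f(x_i)

# Example: integral of e^x on [0, 1]; exact answer is e - 1 = 1.71828...
print(mc_integrate(np.exp, 0.0, 1.0, n=1000))
```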

10 Example
Consider evaluating I = ∫_0^1 e^x dx = e - 1
Let's take N=1000, then evaluate f(x)=e^x with x ∈ [0,1] at 1000 random points
For this set of points define I_1 = (b-a)⟨f⟩_{N,1} = ⟨f⟩_{N,1} since b-a = 1
Next choose 1000 different x ∈ [0,1] and create a new estimate I_2 = ⟨f⟩_{N,2}
Next choose another 1000 different x ∈ [0,1] and create a new estimate I_3 = ⟨f⟩_{N,3}

11 Distribution of the estimates
We can carry on doing this, say 10,000 times, at which point we'll have 10,000 values estimating the integral, and the distribution of these values will be a normal distribution
The distribution of all of the I_N integrals constrains the errors we would expect on a single I_N estimate
This is the Central Limit Theorem: for any given I_N estimate, the sum of the random variables within it will converge toward a normal distribution
Specifically, the standard deviation σ_N will be the estimate of the error in a single I_N estimate
The mean, x_0, will approach e - 1
[Figure: Gaussian distribution of the estimates, centred on x_0, with marks at x_0 - σ_N and x_0 + σ_N]

12 Calculating σ_N
The formula for the variance of the N samples is σ^2 = ⟨f^2⟩ - ⟨f⟩^2
If there is no deviation in the data then the RHS is zero
Given some deviation, as N→∞ the RHS will settle to some constant value > 0 (in this case ⟨f^2⟩ - ⟨f⟩^2 = (e^2-1)/2 - (e-1)^2 ≈ 0.2420359…)
Thus we can write σ_N ≈ σ/√N = √(⟨f^2⟩ - ⟨f⟩^2)/√N
A rough measure of how good a random number generator is: how well does a histogram of the 10,000 estimates fit a Gaussian?
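
A sketch (NumPy assumed) that repeats the slide 10 experiment many times and checks the measured spread of the estimates against σ/√N:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 1000, 10_000

# 10,000 independent estimates of the integral of e^x on [0,1],
# each built from N = 1000 uniform deviates.
samples = rng.uniform(0.0, 1.0, (trials, N))
estimates = np.exp(samples).mean(axis=1)   # one I_N per row (b - a = 1)

sigma = np.sqrt((np.e**2 - 1) / 2 - (np.e - 1)**2)   # sqrt(<f^2> - <f>^2) ~ 0.492
print("mean of estimates :", estimates.mean())        # -> e - 1 = 1.71828...
print("measured sigma_N  :", estimates.std())
print("theory sigma/rt(N):", sigma / np.sqrt(N))      # ~ 0.0156
```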

13 [Figure: mc.ps, histogram of the 10,000 integral estimates against the fitted Gaussian]
1000 samples per I_N integration; the standard deviation is σ/√1000 ≈ 0.492/√1000 ≈ 0.016
Increasing the number of integral estimates makes the distribution closer and closer to the infinite limit.

14 Resulting statistics
For data that fits a Gaussian, the theory of probability distribution functions asserts that
68.3% of the data (about 2 in 3) will fall within ±σ_N of the mean
95.4% of the data (19/20) will fall within ±2σ_N of the mean
99.7% of the data will fall within ±3σ_N, etc…
Interpretation of poll data: "These results will be accurate to ±4%, 19 times out of 20"
The ±4% corresponds to ±2σ_N
Since σ_N ∝ 1/√N, this highlights one of the difficulties with random sampling: to improve the result by a factor of 2 we must increase N by a factor of 4!
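
Working the poll example through (assuming a yes/no proportion near 50%, which maximizes the variance): ±4% at 19 times out of 20 means 2σ_N = 0.04, so σ_N = 0.02; with σ_N ≈ √(p(1-p)/N) ≈ 0.5/√N this gives N ≈ (0.5/0.02)^2 = 625 respondents.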

15 Why would we use this method to evaluate integrals?
For 1D it doesn't make a lot of sense
Taking h ~ 1/N, the composite trapezoid rule error ~ h^2 ~ N^(-2)
Double N, get a result 4 times better
In 2D, we can use an extension of the trapezoid rule based on squares
Taking h ~ 1/N^(1/2), the error ~ h^2 ~ N^(-1)
In 3D we get h ~ 1/N^(1/3), so error ~ h^2 ~ N^(-2/3)
In 4D we get h ~ 1/N^(1/4), so error ~ h^2 ~ N^(-1/2)

16 MC beneficial in 4 or more dimensions
Monte Carlo methods always have σ_N ~ N^(-1/2) regardless of the dimension
Comparing to the 4D convergence behaviour we see that MC integration becomes practical at this point
It wouldn't make any sense for 3D though
For anything higher than 4D (e.g. 6D, 9D, which are possible!) MC methods tend to be the only way of doing these calculations
MC methods also have the useful property of being comparatively immune to singularities, provided that
the random generator doesn't hit the singularity
the integral does indeed exist!

17 Importance sampling
In reality many integrals have functions that vary rapidly in one part of the number line and more slowly in others
To capture this behaviour with MC methods requires that we introduce some way of putting our points where we need them the most
We really want to introduce a new function into the problem, one that allows us to put the samples in the right places

18 General outline
Suppose we have two similar functions g(x) & f(x), and g(x) is easy to integrate; then
I = ∫_a^b f(x) dx = ∫_a^b [f(x)/g(x)] g(x) dx = ∫_{y(a)}^{y(b)} [f(x(y))/g(x(y))] dy
where y(x) = ∫_a^x g(x') dx', so that dy = g(x) dx

19 General outline cont.
The integral we have derived has some nice properties:
Because g(x) ~ f(x) (i.e. g(x) is a reasonable approximation of f(x) that is easy to integrate), the integrand f/g should be approximately 1
and the integrand shouldn't vary much!
It should be possible to calculate a good approximation with a fairly small number of samples
Thus by applying the change of variables and mapping our sample points we get a better answer with fewer samples

20 Example
Let's look at integrating f(x) = e^x again on [0,1]
MC random samples are 0.23, 0.69, 0.51, 0.93
Our integral estimate is then I ≈ (1/4)(e^0.23 + e^0.69 + e^0.51 + e^0.93) ≈ 1.863
(compare the exact value e - 1 ≈ 1.718)

21 [Figure-only slide: no text extracted]

22 Apply importance sampling
We first need to decide on our g(x) function; as a guess let us take g(x) = 1 + x
Well, it isn't really a guess – we know this is the first two terms of the Taylor expansion of e^x!
y(x) is thus given by y(x) = ∫_0^x (1 + t) dt = x + x^2/2
For the end points we get y(0) = 0, y(1) = 3/2
Rearrange y(x) to give x(y): x(y) = √(1 + 2y) - 1

23 Set up integral & evaluate samples
The integral to evaluate is now I = ∫_0^{3/2} [e^{x(y)}/(1 + x(y))] dy
We must now choose y's on the interval [0, 3/2] (these are the four deviates from slide 20, rescaled onto [0, 3/2]):

y      e^{x(y)}/(1 + x(y))
0.345  1.038
1.035  1.211
0.765  1.135
1.395  1.324

The values are close to 1 because g(x) ~ f(x)

24 Evaluate
For the new integral we have I ≈ (3/2)(1/4)(1.038 + 1.211 + 1.135 + 1.324) ≈ 1.766, much closer to e - 1 ≈ 1.718 than the raw estimate 1.863
Clearly this technique of ensuring the integrand doesn't vary too much is extremely powerful
Importance sampling is particularly important in multidimensional integrals and can add 1 or 2 significant figures of accuracy for a minimal amount of effort
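
A sketch of the whole slide 22-24 procedure in Python (NumPy assumed; names are illustrative): draw y uniformly on [0, 3/2], invert y(x), and average f/g:

```python
import numpy as np

def importance_sampled_exp(n, seed=2):
    """Estimate the integral of e^x on [0,1] using g(x) = 1 + x."""
    rng = np.random.default_rng(seed)
    y = rng.uniform(0.0, 1.5, n)        # y(x) = x + x^2/2 maps [0,1] -> [0,3/2]
    x = np.sqrt(1.0 + 2.0 * y) - 1.0    # invert: x(y) = sqrt(1 + 2y) - 1
    ratio = np.exp(x) / (1.0 + x)       # integrand f/g, close to 1 everywhere
    return 1.5 * ratio.mean()           # (3/2 - 0) times the average

print(importance_sampled_exp(4))        # already near e - 1 with 4 samples
print(importance_sampled_exp(1000))
```

Because f/g only varies between about 1 and e/2 ≈ 1.36 on the interval, the variance of this estimator is far smaller than with uniform sampling of e^x.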

25 Rejection technique
Thus far we've looked in detail at the effect of changing sample points on the overall estimate of the integral
An alternative approach may be necessary when you cannot easily sample the desired region, which we'll call W
Particularly important in multi-dimensional integrals when you can calculate the integral for a simple boundary but not a complex one
We define a larger region V that includes W
Note you must also be able to calculate the size of V easily
The sample function is then redefined to be zero outside the volume, but have its normal value within it

26 Rejection technique diagram
[Figure: region W we want to calculate, under the curve f(x), enclosed by a simple region V]
Area of W = size of region V multiplied by the fraction of points falling below f(x) within V
Algorithm: just count the total number of points calculated & the number in W!
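
A minimal sketch of the counting algorithm (here W is a quarter-circle inside the unit square V, so the answer should approach π/4; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
total = 100_000

# V is the unit square [0,1]^2; W is the quarter disc x^2 + y^2 <= 1.
x = rng.uniform(0.0, 1.0, total)
y = rng.uniform(0.0, 1.0, total)
hits = np.count_nonzero(x**2 + y**2 <= 1.0)   # points falling inside W

area_V = 1.0
area_W = area_V * hits / total   # size of V times the fraction landing in W
print(area_W, "vs pi/4 =", np.pi / 4)
```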

27 Better selection of points: sub-random sequences
Choosing N points using a uniform deviate produces an error that converges as N^(-0.5)
If we could choose points better we could make convergence faster
For example, using a Cartesian grid of points leads to a method that converges as N^(-1)
Think of Cartesian points as avoiding one another and thus sampling a given region more completely
However, we don't know a priori how fine the grid should be
We want to avoid short range correlations – points shouldn't be too close to one another
A better solution is to choose points that attempt to maximally avoid one another

28 A list of sub-random sequences
Many to choose from!
Tore-SQRT sequences
Van der Corput & Halton sequences
Faure sequence
Generalized Faure sequence
Nets & (t,s)-sequences
Sobol sequence
Niederreiter sequence
We'll look very briefly at Halton & Sobol sequences, both of which are covered in detail in Numerical Recipes

29 Halton's sequence
Suppose in 1d we obtain the jth number in the sequence, denoted H_j, via:
(1) write j as a number in base b, where b is prime
e.g. 17 in base 3 is 122
(2) reverse the digits and place a radix point in front
e.g. 0.221 in base 3
It should be clear why this works: adding an additional digit makes the mesh of numbers progressively finer
For a sequence of points in n dimensions (x_i^1, …, x_i^n) we would typically use the first n primes to generate separate sequences for each of the x_i^j components
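
A sketch of the digit-reversal construction in Python (the function name is illustrative):

```python
def halton(j, base):
    """Return the j-th Halton number H_j in the given (prime) base."""
    h, scale = 0.0, 1.0 / base
    while j > 0:
        j, digit = divmod(j, base)   # peel off base-b digits, least significant first
        h += digit * scale           # reversed digit lands just after the radix point
        scale /= base
    return h

# 17 in base 3 is 122; reversed behind a radix point, 0.221 (base 3) = 25/27
print(halton(17, 3))                 # -> 0.9259...

# n-dimensional points pair sequences built from the first n primes
points_2d = [(halton(j, 2), halton(j, 3)) for j in range(1, 9)]
```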

30 2d Halton sequence example
[Figure: pairs of points constructed from base 3 & 5 Halton sequences]

31 Sobol (1967) sequence
A useful method, described in Numerical Recipes as providing close to an N^(-1) convergence rate
Algorithms are also available at www.netlib.org
[Figure: from Numerical Recipes]
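
Rather than hand-coding the Sobol direction numbers, one option (an assumption here: SciPy ≥ 1.7, which ships a quasi-Monte Carlo module) is to use the library implementation:

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=False)   # 2-D Sobol sequence
points = sampler.random_base2(m=6)         # 2^6 = 64 points in [0, 1)^2
print(points[:4])
```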

32 Summary
MC methods are a useful way of numerically integrating systems that are not tractable by other methods
The key part of MC methods is the N^(-0.5) convergence rate
Numerical integration techniques can be greatly improved using importance sampling
If you cannot write down a function easily then the rejection technique can often be employed

33 Next Lecture
More on MC methods – simulating random walks

