Presentation transcript: "Simulation Techniques" (D. Ceperley, May 2001)

1 Simulation Techniques: Outline. Lecture 1: Introduction: statistical mechanics, ergodicity, statistical errors. Lecture 2: Molecular Dynamics: integrators, boundary conditions in space and time, long-range potentials and neighbor tables, thermostats and constraints, single-body and two-body correlation functions, order parameters, dynamical properties. Lecture 3: Monte Carlo Methods: MC vs. MD, Markov chain MC, transition rules. NB: all material is at a beginner's level. Interruptions are welcome!

2 Introduction to Simulation: Lecture 1. Why do simulations? Some history. Review of statistical mechanics. Newton's equations and ergodicity. The FPU "experiment". How to get something from simulations: statistical errors.

3 Why do simulations? Simulations are the only general method for "solving" many-body problems; other methods involve approximations and expert judgment. Experiment is limited and expensive; simulations can complement experiment. Simulations are easy even for complex systems, and they scale up with computer power.

4 Two Simulation Modes. A. Given the phenomena, invent a model to mimic the problem: the semi-empirical approach. But one cannot reliably extrapolate such a model away from the empirical data. B. Maxwell, Boltzmann and Schrödinger gave us the model; all we must do is numerically solve the mathematical problem and determine the properties (first-principles or ab initio methods). This is what we will talk about.

5 "The general theory of quantum mechanics is now almost complete. The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble." Dirac, 1929

6 Challenges of Simulation. Physical and mathematical underpinnings: what approximations come in? Computer time is limited, so we can treat only a few particles for short periods of time. (Space-time is 4-d, so Moore's law implies lengths and times will double only every 6 years for an O(N) method.) Systems with many particles and long time scales are problematic. The Hamiltonian is unknown until we solve the quantum many-body problem! How do we estimate errors, both statistical and systematic? How do we manage ever more complex codes?

7 Short history of molecular simulations:
–Metropolis, Rosenbluth, Teller (1953): Monte Carlo simulation of hard disks.
–Fermi, Pasta, Ulam (1954): "experiment" on ergodicity.
–Alder & Wainwright (1958): liquid-solid transition in hard spheres; "long time tails" (1970).
–Vineyard (1960): radiation damage using MD.
–Rahman (1964): liquid argon; water (1971).
–Verlet (1967): correlation functions, ...
–Andersen, Rahman, Parrinello (1980): constant-pressure MD.
–Nose, Hoover (1983): constant-temperature thermostats.
–Car, Parrinello (1985): ab initio MD.

8 Molecular Dynamics (MD). Pick the particles, masses and potential. Initialize positions and momenta (boundary conditions in space and time). Solve F = ma to determine r(t), v(t) (Newton, 1667-87). Compute properties along the trajectory. Estimate errors. Try to use the simulation to answer physical questions.

9 What are the forces? Crucial, since V(q) determines the quality of the result. In this lecture we will use semi-empirical potentials: the potential is constructed on theoretical grounds but using some experimental data. Common examples are Lennard-Jones, Coulomb, and embedded-atom potentials; they are only good for simple materials. We will not discuss these potentials for reasons of time. The ab initio philosophy is that potentials are to be determined directly from quantum mechanics as needed, but computer power is not yet adequate in general. A powerful approach is to use simulations at one level to determine parameters at the next level.

10 Statistical Ensembles. Classical phase space is 6N variables (p_i, q_i) and a Hamiltonian function H(q,p,t). We may know a few constants of motion such as energy, number of particles, volume, ... Ergodic hypothesis: each state consistent with our knowledge is equally "likely"; this is the microcanonical ensemble. It implies that the average value does not depend on the initial conditions. A system in contact with a heat bath at temperature T will be distributed according to the canonical ensemble, exp(-H(q,p)/k_B T)/Z; the momentum integrals can be performed analytically. Are systems in nature really ergodic? Not always!

11 Ergodicity: the Fermi-Pasta-Ulam "experiment" (1954). A 1-D anharmonic chain: V = Σ_i [ (q_{i+1}-q_i)^2 + α (q_{i+1}-q_i)^3 ]. The system was started with all of the energy in the lowest mode. Equipartition would imply that the energy should flow into the other modes. Instead, systems at low temperature never come into equilibrium: the energy sloshes back and forth between various modes forever. At higher temperature, many-dimensional systems do become ergodic. An area of non-linear dynamics is devoted to these questions.

12 "Let us say here that the results of our computations were, from the beginning, surprising us. Instead of a continuous flow of energy from the first mode to the higher modes, all of the problems show an entirely different behavior. … Instead of a gradual increase of all the higher modes, the energy is exchanged, essentially, among only a certain few. It is, therefore, very hard to observe the rate of "thermalization" or mixing in our problem, and this was the initial purpose of the calculation." Fermi, Pasta, Ulam (1954)

13 This is equivalent to exponential divergence of trajectories, i.e. sensitivity to initial conditions. (This is a blessing for numerical work. Why?) What we mean by ergodic is that after some interval of time any state of the system is possible. Example: shuffle a card deck 10 times; any of the 52! arrangements can occur with equal frequency. Aside from these mathematical questions, there is always the practical question of convergence. How do you judge whether your results have converged? There is no sure way. Why? Only "experimental" tests:
–Occasionally do very long runs.
–Use different starting conditions.
–Shake up the system.
–Compare to experiment.

14 Statistical ensembles:
–(E, V, N): microcanonical, constant volume
–(T, V, N): canonical, constant volume
–(T, P, N): constant pressure
–(T, V, μ): grand canonical
Which is best? It depends on:
–the question you are asking
–the simulation method: MC or MD (MC is better for phase transitions)
–your code.
There has been a lot of work in the last two decades on various ensembles.

15 Definition of a Simulation. What is a simulation? An internal state "S" and a rule for changing the state, S_{n+1} = T(S_n); we repeat the iteration many times. Simulations can be:
–Deterministic (e.g. Newton's equations = MD)
–Stochastic (Monte Carlo, Brownian motion, ...)
Typically they are ergodic: there is a correlation time such that, for times much longer than it, all non-conserved properties are close to their average values. This correlation time is used to set:
–the warm-up period;
–the spacing of independent samples for computing errors.

16 Problems with estimating errors. Any good simulation quotes systematic and statistical errors for anything important. Central limit theorem: after "enough" averaging, any statistical quantity approaches a normal distribution. One standard deviation means that 2/3 of the time the correct answer is within σ of the estimate. The problem in simulations is that the data are correlated in time: it takes a "correlation" time to be "ergodic". We must throw away the initial transient and block successive parts of the run to estimate the mean value and its error; the error and the mean are determined simultaneously from the data. We need at least ~20 independent data points.
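
In standard notation (a sketch of the usual result, not copied from the slide): if a run yields N correlated samples A_1, ..., A_N with variance v and normalized autocorrelation function C(t), then

\[
\kappa = 1 + 2\sum_{t>0} C(t), \qquad
\mathrm{error}(\bar A) \approx \sqrt{\frac{\kappa\, v}{N}} ,
\]

so the correlation time \kappa is the factor by which correlations inflate the naive error, and N/\kappa is the effective number of independent points.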

17 Estimating Errors:
–Trace of A(t).
–Equilibration time.
–Histogram of the values of A (P(A)).
–Mean of A (a).
–Variance of A (v).
–Estimate of the mean: Σ A(t)/N.
–Estimate of the variance.
–Autocorrelation of A (C(t)).
–Correlation time (κ).
–The (estimated) error of the (estimated) mean (σ).
–Efficiency [= 1/(CPU time * error^2)].
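
As an illustration of the blocking idea, here is a minimal reblocking sketch in the style of the Fortran used later in these slides (my own code, not from the lecture); the array a holds the trace A(t) after the equilibration period has been removed, and successive pairs are averaged until the error estimate reaches a plateau:

      program reblock
      implicit none
      integer, parameter :: nmax = 4096
      real(8) :: a(nmax), mean, var, err
      integer :: n, i, nb
      n = nmax
      call random_number(a(1:n))         ! placeholder data; use the real trace A(t)
      nb = n
      do while (nb >= 2)
         mean = sum(a(1:nb))/nb
         var  = sum((a(1:nb)-mean)**2)/(nb-1)
         err  = sqrt(var/nb)             ! error estimate at this block size
         print '(i8,2f12.6)', nb, mean, err
         ! average successive pairs to form blocks twice as long
         do i = 1, nb/2
            a(i) = 0.5d0*(a(2*i-1) + a(2*i))
         end do
         nb = nb/2
      end do
      end program reblock

For genuinely correlated data the printed error grows with block size and then levels off; the plateau value is the error to quote.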

18 Data Spork: an interactive code to perform statistical analysis of data (Shumway, Draeger, Ceperley).

19 Statistical thinking is slippery.
"Shouldn't the energy settle down to a constant?" –NO. It fluctuates forever; it is the overall mean which converges.
"My procedure is too complicated to compute errors." –NO. Run your whole code 10 times and compute the mean and variance from the different runs.
"The cumulative energy has converged." –BEWARE. Even pathological cases have smooth cumulative energy curves.
"Data set A differs from B by 2 error bars, therefore it must be different." –This is normal in about 1 out of 10 cases.

20 Molecular Dynamics: Lecture 2. What to choose in an integrator. The Verlet algorithm. Boundary conditions in space and time. Neighbor tables. Long-ranged interactions. Thermostats. Constraints.

21 Criteria for an integrator: simplicity (how long does it take to write and debug?); efficiency (how fast does it advance a given system?); stability (what is the long-term energy conservation?); reliability (can it handle a variety of temperatures, densities, potentials?). Compare statistical errors, which for fixed CPU time go as h^{-1/2}, with time-step errors, which go as h^p with p = 2, 3, ...

22 Characteristics of simulations. Potentials are highly non-linear, with discontinuous higher derivatives either at the origin or at the box edge. Small changes in accuracy lead to totally different trajectories (the mixing or ergodic property). We need only low accuracy because the potentials are not very realistic, and universality saves us: a badly integrated system is probably similar to our original system. (This is not true in the field of non-linear dynamics, or in studying the solar system.) CPU time is totally dominated by the calculation of forces; memory limits are not too important. Energy conservation is important and is roughly equivalent to time-reversal invariance: allow about 0.01 kT fluctuation in the total energy.

23 The Verlet Algorithm. The nearly universal choice for an integrator is the Verlet (leapfrog) algorithm:
r(t+h) = r(t) + v(t) h + (1/2) a(t) h^2 + b(t) h^3 + O(h^4)   (Taylor expand)
r(t-h) = r(t) - v(t) h + (1/2) a(t) h^2 - b(t) h^3 + O(h^4)   (reverse time)
r(t+h) = 2 r(t) - r(t-h) + a(t) h^2 + O(h^4)   (add)
v(t) = (r(t+h) - r(t-h))/(2h) + O(h^2)   (estimate velocities)
Time-reversal invariance is built in, so the energy does not drift.

24 How to set the time step. Adjust it to get energy conservation to about 1% of the kinetic energy. Even if the errors are large, you are close to the exact trajectory of a nearby physical system with a different potential; since we don't really know the potential surface that accurately, this is satisfactory. The leapfrog algorithm has a problem with round-off error, so use the equivalent velocity Verlet instead:
r(t+h) = r(t) + h [ v(t) + (h/2) a(t) ]
v(t+h/2) = v(t) + (h/2) a(t)
v(t+h) = v(t+h/2) + (h/2) a(t+h)
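
A minimal velocity-Verlet sketch (my own illustration, not from the lecture), for a 1-D harmonic oscillator with unit mass and spring constant; the loop is exactly the three updates above, with one force evaluation per step, and the energy drift stays bounded:

      program vverlet
      implicit none
      real(8) :: r, v, a, anew, h, e0, e
      integer :: istep, nstep
      h = 0.01d0
      nstep = 10000
      r = 1.0d0                      ! initial position
      v = 0.0d0                      ! initial velocity
      a = -r                         ! force/mass for V(r) = r^2/2
      e0 = 0.5d0*(v*v + r*r)         ! initial total energy
      do istep = 1, nstep
         r = r + h*(v + 0.5d0*h*a)       ! r(t+h)
         anew = -r                       ! a(t+h) from the new position
         v = v + 0.5d0*h*(a + anew)      ! v(t+h), combining the two half kicks
         a = anew
      end do
      e = 0.5d0*(v*v + r*r)
      print *, 'relative energy drift:', (e - e0)/e0
      end program vverlet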

25 Higher-order methods? Higher order does not mean better; the problem is that the potentials are not analytic. (Figures: systematic-error study from Berendsen 1986, and statistical error for fixed CPU time.)

26 Spatial Boundary Conditions. Important because spatial scales are limited. What can we choose? No boundaries, e.g. a droplet or a protein in vacuum: if the droplet has 1 million atoms and the surface layer is 5 atoms thick, about 25% of the atoms are on the surface. Periodic boundaries: the most popular choice because there are no surfaces (see the next slide), but there can still be problems. Simulations on a sphere. External potentials. Mixed boundaries (e.g. infinite in z, periodic in x and y).

27 Periodic Boundary Conditions. Is the system periodic, or just an infinite array of images? How do you calculate distances in a periodic system?

28 Periodic distances. Minimum image convention: take the closest distance, |r|_M = min_n |r + nL|. The potential is cut off so that V(r) = 0 for r > L/2, since the force needs to be continuous. How about the derivative? Image potential: V_I = Σ_n v(r_i - r_j + nL). For long-range potentials this leads to the Ewald image potential; you need a background and a convergence method (more later). (Figure: the periodic coordinate x, with ticks at -L, -L/2, 0, L/2, L.)
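
A common way to code the minimum-image separation for a cubic box of side L is the nearest-integer idiom below (a sketch, written as a drop-in function; ell is the box side, and the coordinates may be in any consistent unit):

      ! minimum-image squared separation of two particles in a cubic box of side ell
      function min_image_dist2(ri, rj, ell, ndim) result(r2)
      implicit none
      integer, intent(in) :: ndim
      real(8), intent(in) :: ri(ndim), rj(ndim), ell
      real(8) :: r2, dx
      integer :: k
      r2 = 0.0d0
      do k = 1, ndim
         dx = ri(k) - rj(k)
         dx = dx - ell*nint(dx/ell)   ! shift by a whole number of box lengths
         r2 = r2 + dx*dx
      end do
      end function min_image_dist2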

29 Complexity of Force Calculations. Complexity is defined as how a computer algorithm scales with the number of degrees of freedom (particles). The number of terms in a pair potential is N(N-1)/2, i.e. O(N^2). For a short-range potential you can use neighbor tables to reduce it to O(N):
–a (Verlet) neighbor list, for systems that move slowly;
–a bin-sort list (map the system onto a mesh and find neighbors from the mesh table).
Long-range potentials with Ewald sums are O(N^{3/2}), but fast multipole algorithms are O(N) for very large N.

30 May 2001D. Ceperley Simulation Methods30 ! Loop over all pairs of atoms. do i=2,natoms do j=1,i-1 !Compute distance between i and j. r2 = 0 do k=1,ndim dx(k) = r(k,i) - r(k,j) !Periodic boundary conditions. if(dx(k).gt. ell2(k)) dx(k) = dx(k)-ell(k) if(dx(k).lt.-ell2(k)) dx(k) = dx(k)+ell(k) r2 = r2 + dx(k)*dx(k) enddo !Only compute for pairs inside radial cutoff. if(r2.lt.rcut2) then r2i=sigma2/r2 r6i=r2i*r2i*r2i !Shifted Lennard-Jones potential. pot = pot+eps4*ri6*(ri6-1)- potcut !Radial force. rforce = eps24*r6i*r2i*(2*r6i-1) do k = 1, ndim force(k,i)=force(k,i) + rforce*dx(k) force(k,j)=force(k,j) - rforce*dx(k) enddo endif enddo LJ Force Computation

31 Constant Temperature MD. The problem in MD is how to control the temperature (a boundary condition in time). How do we start the system? Sample velocities from a Gaussian (Maxwell-Boltzmann) distribution. If we start from a perfect lattice, then as the system becomes disordered it will suck up kinetic energy and cool down (and vice versa when starting from a gas). QUENCH method: run for a while, compute the kinetic energy, then rescale the momenta to the correct temperature; repeat as needed. The Nose-Hoover thermostat controls the temperature automatically by coupling the microcanonical system to a heat bath. These methods have non-physical dynamics, since they do not respect the locality of interactions; such effects are O(1/N).

32 Quench method. Run for a while, compute the kinetic energy, then rescale the momenta to the correct temperature; repeat as needed. The temperature control is at best O(1/N).
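
A sketch of the rescaling step (my own illustration, in reduced units with k_B = 1 and unit masses; v is the (ndim, natoms) velocity array and temp the target temperature):

      subroutine quench(v, ndim, natoms, temp)
      implicit none
      integer, intent(in) :: ndim, natoms
      real(8), intent(inout) :: v(ndim, natoms)
      real(8), intent(in) :: temp
      real(8) :: ke, tnow
      ! instantaneous kinetic energy and temperature (kT/2 per momentum)
      ke = 0.5d0*sum(v*v)
      tnow = 2.0d0*ke/(ndim*natoms)
      ! rescale all velocities to the target temperature
      v = v*sqrt(temp/tnow)
      end subroutine quench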

33 Nose-Hoover thermostat: MD in the canonical distribution (T, V, N). Introduce a friction force ζ(t) coupling the system to a reservoir at temperature T (figure: SYSTEM coupled to a T reservoir). The dynamics of the friction coefficient is chosen to give the canonical ensemble; the feedback drives the kinetic energy toward the target temperature. Q = "heat-bath mass"; large Q means weak coupling.
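
The equations of motion appeared as an image on the slide; in one common form (notation mine, for N particles in 3-d) the Nose-Hoover dynamics reads

\[
\dot{\mathbf r}_i = \frac{\mathbf p_i}{m_i}, \qquad
\dot{\mathbf p}_i = \mathbf F_i - \zeta\, \mathbf p_i, \qquad
\dot\zeta = \frac{1}{Q}\left( \sum_i \frac{p_i^2}{m_i} - 3 N k_B T \right),
\]

so the friction coefficient ζ grows when the instantaneous kinetic energy exceeds its canonical value and shrinks when it falls below: this is the feedback that keeps the kinetic energy at the target temperature.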

34 Effect of thermostat: the system temperature fluctuates, but how quickly? (Plots for Q = 1 and Q = 100.) Example CLAMPS input:
DIMENSION 3
TYPE argon 256 48.
POTENTIAL argon argon 1 1. 1. 2.5
DENSITY 1.05
TEMPERATURE 1.15
TABLE_LENGTH 10000
LATTICE 4 4 4 4
SEED 10
WRITE_SCALARS 25
NOSE 100.
RUN MD 2200.05

35 Thermostats are needed in non-equilibrium situations where there might be a flux of energy into or out of the system. The Nose-Hoover thermostat is time-reversible, deterministic and goes to the canonical distribution, but how natural is it?
–Interactions are non-local: they propagate instantaneously.
–Interaction with a single heat-bath variable: the dynamics can be strange. Be careful to adjust the "mass" Q.
References: 1. S. Nose, J. Chem. Phys. 81, 511 (1984); Mol. Phys. 52, 255 (1984). 2. W. Hoover, Phys. Rev. A 31, 1695 (1985).

36 Constant Pressure. To generalize MD, follow a procedure similar to the thermostat for constant pressure: the size of the box is coupled to the internal (virial) pressure, giving the (T, P, N) ensemble. The unit-cell shape can also change.
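
The virial-pressure expression the box couples to appeared as a formula on the slide; the standard form for pair forces (notation mine) is

\[
P V = N k_B T \;+\; \frac{1}{3}\left\langle \sum_{i<j} \mathbf r_{ij}\cdot \mathbf F_{ij} \right\rangle ,
\]

i.e. the ideal-gas term plus one third of the average virial of the interparticle forces.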

37 Parrinello-Rahman simulation: 500 KCl ions at 300 K. First P = 0, then P = 44kB. The system spontaneously changes from the rocksalt to the CsCl structure.

38 Such methods can "automatically" find new crystal structures. A nice feature is that the boundaries are flexible, but one is not guaranteed to get out of a local minimum, so one can get the wrong answer; careful free-energy calculations are needed to establish the stable structure. All such methods have non-physical dynamics, since they do not respect the locality of interactions; the non-physical effects are O(1/N). References: 1. H. C. Andersen, J. Chem. Phys. 72, 2384 (1980). 2. M. Parrinello and A. Rahman, J. Appl. Phys. 52, 7158 (1981).

39 Constraints in MD. Some problems require constraints:
–fixed bond lengths (e.g. the water molecule), to get rid of uninteresting high-frequency motion;
–in LDA-MD we need to keep wavefunctions orthogonal.
Putting constraints in "by hand" is difficult (for example, working in polar coordinates). WARNING: constraints change the ensemble. The SHAKE algorithm lets the dynamics move forward without the constraint, then forces it back to satisfy the constraint. Let the constraint equation be σ(R) = 0; a bond constraint is σ(R) = |r_i - r_j|^2 - a^2.

40 Add a Lagrange multiplier λ(t). The equation of motion is m a(t) = F - λ(t) ∇σ(R,t). Using the leapfrog method: R(t+h) = 2R(t) - R(t-h) + h^2 F(t)/m - h^2 λ(t) ∇σ(R,t). What should we use for λ(t)? Choose it by Newton's method so that σ(R(t+h)) = 0, and iterate until convergence. With many constraints, cycle through the constraint equations.
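
The formula for the Newton step did not survive the transcript; to first order (a sketch in my notation, with R'(t+h) the unconstrained leapfrog update) the condition σ(R(t+h)) = 0 gives

\[
\sigma\!\left(R'(t+h)\right) - h^2 \lambda\, \nabla\sigma\!\left(R'(t+h)\right)\cdot\nabla\sigma\!\left(R(t)\right) \approx 0
\quad\Longrightarrow\quad
\lambda \approx \frac{\sigma\!\left(R'(t+h)\right)}{h^2\, \nabla\sigma\!\left(R'(t+h)\right)\cdot\nabla\sigma\!\left(R(t)\right)} ,
\]

and the correction is applied, λ re-estimated, and the procedure repeated (cycling over all constraints) until every σ vanishes to within tolerance.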

41 Brownian dynamics. Put the system in contact with a heat bath (we will discuss how to do this later). This leads to discontinuous velocities, which is not necessarily a bad thing, but it requires some physical insight into how the bath interacts with the system in question. For example, it is appropriate for a large molecule (a protein or colloid) in contact with a solvent. Other heat baths in nature are provided by phonons, photons, ...

42 Monitoring the simulation. Static properties: pressure, specific heat, etc. Density. Pair correlation in real space and Fourier space. Order parameters: how to tell a liquid from a solid.

43 Thermodynamic properties. Internal energy = kinetic energy + potential energy. The kinetic energy is kT/2 per momentum. The specific heat is given by the mean-squared fluctuation of the energy. The pressure can be computed from the virial theorem, and from it the compressibility, bulk modulus and sound speed. But we have problems with the basic quantities entropy and free energy, since they are not expressible as simple averages over the Boltzmann distribution; we will discuss this later.
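
In formulas (standard canonical-ensemble estimators, notation mine, for N particles in 3-d; the pressure follows from the virial expression given earlier):

\[
\langle KE \rangle = \tfrac{3}{2} N k_B T, \qquad
C_V = \frac{\langle E^2\rangle - \langle E\rangle^2}{k_B T^2} ,
\]

with the caveat that in a strictly microcanonical MD run the fluctuation formula for C_V must be modified.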

44 (Figure-only slide.)

45 Microscopic Density. ρ(r) = Σ_i δ(r - r_i), or you can take its Fourier transform, ρ_k = Σ_i exp(i k·r_i). (This is a good way to smooth the density.) A solid has broken symmetry (an order parameter): the density is not constant. At a liquid-gas transition the density is also inhomogeneous. In periodic boundary conditions the k-vectors lie on a grid, k = (2π/L)(n_x, n_y, n_z), so long-wavelength modes are absent. In a solid, Lindemann's ratio gives a rough idea of melting: u^2 = <(r_i - R_i)^2>/d^2, with R_i the lattice site and d the neighbor spacing.

46 Order parameters. A system has certain symmetries, e.g. translational invariance. At high temperature one expects the system to have those same symmetries at the microscopic scale (e.g. a gas). BUT as the system cools, those symmetries are broken (the gas condenses). At a liquid-gas transition the density is no longer uniform: droplets form, and the density is the order parameter. At a liquid-solid transition, both rotational and translational symmetry are broken. The best way to monitor the transition is to look at the dynamics of the order parameter.

47 (Figure-only slide.)

48 Electron density during an exchange in a 2-d Wigner crystal (quantum). (Figure.)

49 Snapshots of densities: liquid, crystal or glass? The blue spots are defects. (Figure.)

50 Density distribution within a helium droplet. During the addition of a molecule, it travels from the surface to the interior. Red is high helium density, blue is low. (Figure.)

51 Pair Correlation Function, g(r). The primary quantity in a liquid is the probability distribution of pairs of particles: given a particle at the origin, what is the density of surrounding particles? g(r) = (2Ω/N^2) <Σ_{i<j} δ(r - r_ij)>, with Ω the volume. It is a density-density correlation and is related to thermodynamic properties.
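
A histogram sketch of the g(r) estimator (my own code, not from the lecture; it assumes a 3-d cubic box of side ell, positions r(3,natoms), and reuses the minimum-image idiom from the force loop; the count in each radial bin is divided by the ideal-gas expectation):

      subroutine pair_correlation(r, natoms, ell, nbins, g)
      implicit none
      integer, intent(in) :: natoms, nbins
      real(8), intent(in) :: r(3, natoms), ell
      real(8), intent(out) :: g(nbins)
      real(8), parameter :: pi = 3.141592653589793d0
      real(8) :: dx, r2, dr, dist, rho, shell
      integer :: i, j, k, ibin
      dr = 0.5d0*ell/nbins                  ! histogram out to r = L/2
      g = 0.0d0
      do i = 2, natoms
         do j = 1, i-1
            r2 = 0.0d0
            do k = 1, 3
               dx = r(k,i) - r(k,j)
               dx = dx - ell*nint(dx/ell)   ! minimum image
               r2 = r2 + dx*dx
            end do
            dist = sqrt(r2)
            if (dist < 0.5d0*ell) then
               ibin = int(dist/dr) + 1
               g(ibin) = g(ibin) + 2.0d0    ! count the pair for both particles
            end if
         end do
      end do
      rho = natoms/ell**3
      do ibin = 1, nbins
         ! ideal-gas count in the spherical shell [(ibin-1)*dr, ibin*dr)
         shell = 4.0d0*pi*((ibin*dr)**3 - ((ibin-1)*dr)**3)/3.0d0
         g(ibin) = g(ibin)/(natoms*rho*shell)
      end do
      end subroutine pair_correlation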

52 g(r) in liquid and solid helium (figure). The first peak is at the inter-particle spacing (the shell around each particle); g(r) goes out only to r < L/2 in periodic boundary conditions.

53 (The static) Structure Factor S(k). The Fourier transform of the pair correlation function is the structure factor: S(k) = <ρ_k ρ_-k>/N (1), or equivalently S(k) = 1 + ρ ∫ dr exp(ik·r) (g(r)-1) (2); the problem with (2) is that it requires extending g(r) to infinity. This is what is measured in neutron and X-ray scattering experiments, and it can provide a direct test of the assumed potential. It is used to see the state of a system: liquid, solid, glass, gas? (Much better than g(r).) The order parameter in a solid is ρ_G.
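
A direct sketch of estimator (1) for the k-vectors allowed by a cubic periodic box (my own illustration, assuming 3-d positions r(3,natoms) and box side ell; kint holds the integer triples (n_x, n_y, n_z), and the result is |ρ_k|^2/N for one configuration, to be averaged over the run):

      subroutine structure_factor(r, natoms, ell, nvec, kint, sk)
      implicit none
      integer, intent(in) :: natoms, nvec
      integer, intent(in) :: kint(3, nvec)
      real(8), intent(in) :: r(3, natoms), ell
      real(8), intent(out) :: sk(nvec)
      real(8), parameter :: twopi = 6.283185307179586d0
      real(8) :: kr, rho_re, rho_im, kvec(3)
      integer :: iv, i
      do iv = 1, nvec
         kvec = twopi*kint(:, iv)/ell          ! k = (2 pi / L)(nx, ny, nz)
         rho_re = 0.0d0
         rho_im = 0.0d0
         do i = 1, natoms
            kr = dot_product(kvec, r(:, i))
            rho_re = rho_re + cos(kr)          ! Re rho_k
            rho_im = rho_im + sin(kr)          ! Im rho_k
         end do
         sk(iv) = (rho_re**2 + rho_im**2)/natoms   ! |rho_k|^2 / N for this configuration
      end do
      end subroutine structure_factor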

54 In a perfect lattice S(k) will be non-zero only on the reciprocal lattice vectors G: S(G) = N. At non-zero temperature (or for a quantum system) this is reduced by the Debye-Waller factor: S(G) = 1 + (N-1) exp(-G^2 u^2/3). To tell a liquid from a crystal we look at how S(G) scales as the system is enlarged: in a solid, S(k) has peaks that scale with the number of atoms. The compressibility is given by the k→0 limit, S(0) = ρ k_B T κ_T; we can use this to detect the liquid-gas transition, since the compressibility diverges as k approaches 0 at the transition (the order parameter is the density).

55 (Figure: S(k) for a crystal and for a liquid.)

56 Here is a snapshot of a binary mixture. What correlation function would be important?

58 Dynamical Properties. Fluctuation-dissipation theorem (formula shown on the slide): here A e^{-iωt} is a perturbation and χ(ω) e^{-iωt} is the response of B. We calculate the average on the left-hand side in equilibrium (no external perturbation): the fluctuations we see in equilibrium are equivalent to how a non-equilibrium system approaches equilibrium (Onsager regression hypothesis). The density-density response function is S(k,ω); it can be measured by scattering and is sensitive to collective motions.

59 Diffusion Constant. The diffusion constant is defined by Fick's law and controls how systems mix. Microscopically we calculate it from the trajectory, as given below. Alder and Wainwright discovered long-time tails in the velocity autocorrelation function, so that the diffusion constant does not exist in 2-D.
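
The microscopic expressions appeared as equations on the slide; the standard ones (Einstein and Green-Kubo forms, in d dimensions, notation mine) are

\[
D = \lim_{t\to\infty} \frac{\langle |\mathbf r_i(t) - \mathbf r_i(0)|^2 \rangle}{2 d\, t}
  = \frac{1}{d}\int_0^\infty dt\, \langle \mathbf v_i(0)\cdot \mathbf v_i(t) \rangle ,
\]

and the ~t^{-d/2} long-time tail of the velocity autocorrelation function is what makes the time integral diverge in 2-D.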

60 Mixture simulation with CLAMPS. (Figures: the initial condition and a later time.)

61 Transport Coefficients, each associated with the autocorrelation of a corresponding current:
–Diffusion: particle flux.
–Viscosity: stress tensor.
–Heat transport: energy current.
–Electrical conductivity: electrical current.
These can also be evaluated with non-equilibrium simulations, using thermostats for control: impose a shear flow, or impose a heat flow.

62 CLAMPS input file (system setup, output intervals, then execution of 300 dynamics steps):
TYPE argon 864 48.
POTENTIAL argon argon 1 1. 1. 2.5
DENSITY 0.35
TEMPERATURE 1.418
LATTICE
SEED 10
WRITE_SCALARS 10
WRITE_COORD 20
RUN MD 300.032

63 Monte Carlo: Lecture 3. What is Monte Carlo? Theory of Markov processes. Detailed balance and transition rules. Grand canonical ensemble and polymer reptation. Can Monte Carlo tell us about real dynamics? Why Monte Carlo instead of MD?

64 Monte Carlo. Invented in Los Alamos in 1944 and named after the famous casino. It is a way of doing integrals by using random numbers, with P(R) the sampling function. Convergence is guaranteed by the central limit theorem. If P(R) ∝ f(R), the variance goes to 0. It is the faster way of doing integrals in dimensions > 4.
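
The integral formula on the slide did not survive the transcript; the standard importance-sampling estimate it refers to is

\[
I = \int dR\, f(R) = \int dR\, P(R)\,\frac{f(R)}{P(R)}
  \approx \frac{1}{M}\sum_{i=1}^{M} \frac{f(R_i)}{P(R_i)}, \qquad R_i \sim P(R),
\]

with a statistical error that falls as M^{-1/2} independent of dimension, which is why it wins over deterministic quadrature above roughly four dimensions.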

65 Theory of Markov processes. A Markov process is a random walk through phase space: s_1 → s_2 → s_3 → s_4 → ... If it is ergodic, it converges to a unique stationary distribution (a single eigenvalue equal to 1). Detailed balance: π(s) P(s→s') = π(s') P(s'→s), i.e. the rates from s to s' and back balance. We achieve detailed balance by rejecting moves; the acceptance probability is given below.
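
The acceptance formula was an image on the slide; splitting the move into a trial probability T and an acceptance step, the generalized Metropolis choice that enforces detailed balance is

\[
A(s \to s') = \min\left[\,1,\; \frac{\pi(s')\, T(s' \to s)}{\pi(s)\, T(s \to s')}\,\right],
\]

which reduces to the "classic" rule on the next slide when T is symmetric and π is the Boltzmann distribution.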

66 "Classic" Metropolis method. The Metropolis-Rosenbluth-Teller (1953) method: move from s to s' with probability T(s→s') = constant; accept the move with probability a(s→s') = min[ 1, exp( -(E(s')-E(s))/k_B T ) ]; repeat many times. Given ergodicity, the distribution of s will converge to the canonical distribution: π(s) = exp(-E(s)/k_B T)/Z.

67 Monte Carlo code (the slide's pseudocode, with its callout labels folded in as comments):
call initstate(sold)                 ! initialize the state
vold = action(sold)
LOOP {
   call sample(sold, snew, pnew, 1)  ! sample snew; pnew = trial probability
   vnew = action(snew)               ! trial action
   call sample(snew, sold, pold, 0)  ! find the prob. of going backward
   a = exp(-vnew+vold)*pold/pnew     ! acceptance prob.
   if (a .gt. sprng()) {             ! accept the move
      sold = snew
      vold = vnew
      naccept = naccept + 1
   }
   call averages(sold)               ! collect statistics
}

68 How to sample: s_new = s_old + Δ (sprng() - 0.5), a uniform distribution in a cube of side Δ. It is more efficient to move one particle at a time, because only the energy of that particle comes in, so the step size and the acceptance ratio will both be larger.

69 Monte Carlo Rules. Optimize the efficiency, 1/(error^2 × CPU time), with respect to any sampling parameters. Measure the acceptance ratio and set it to roughly 1/2 by varying the "step size". Accepted and rejected states count the same! MC is exact: no time-step error and, in principle, no ergodic problems, but there are no dynamics.

70 Optimizing the moves. Any transition rule is allowed as long as you can go anywhere in phase space with a finite number of moves (ergodicity). Try to find a T(s→s') ≈ π(s'); if you can, the acceptance ratio will be 1. We can use the forces to push the walk in the right direction: Taylor expand about the current point, V(r) ≈ V(r_0) - F(r_0)·(r - r_0), and set T(s→s') ∝ exp[ -β( V(r_0) - F(r_0)·(r - r_0) ) ]. This leads to force-bias Monte Carlo, related to Brownian motion (the Smoluchowski equation).

71 Brownian Dynamics. Consider a big molecule in a solvent. In the high-viscosity limit the "master equation" is the Smoluchowski equation (shown on the slide). Enforce detailed balance by rejections (the hybrid method). It is also the equation for diffusion quantum Monte Carlo!

72 Other Ensembles in MC. Monte Carlo can handle discrete and continuous variables at the same time, which allows us to treat other ensembles. For example, consider the (μ, V, T) ensemble: the number of particles is variable, and we make moves that change the number of particles by adding or subtracting one particle, accepting or rejecting the move.
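
For concreteness, the standard grand-canonical acceptance probabilities (not from the slide; Λ is the thermal de Broglie wavelength and ΔU the change in potential energy of the trial move) are

\[
A_{\mathrm{insert}} = \min\left[1,\ \frac{V}{\Lambda^3 (N+1)}\, e^{\,\beta\mu - \beta\Delta U}\right], \qquad
A_{\mathrm{delete}} = \min\left[1,\ \frac{\Lambda^3 N}{V}\, e^{\,-\beta\mu - \beta\Delta U}\right].
\]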

73 Free Energy Computation. Free energy (and the entropy) is more difficult, since simulations give only ensemble averages, i.e. ratios with respect to the Boltzmann distribution, not the partition function itself. Special methods are needed, e.g. thermodynamic integration: find the shortest path in the (P, T) plane to a state where the free energy is known (figure: integration path).
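
A sketch of the thermodynamic-integration identity this refers to (standard form, notation mine): for a Hamiltonian H(λ) that interpolates between a reference system (λ = 0) and the system of interest (λ = 1),

\[
F(1) - F(0) = \int_0^1 d\lambda \, \left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda} ,
\]

so the free-energy difference is built up from ordinary ensemble averages computed at a sequence of λ values (or, along a (P, T) path, by integrating measured derivatives such as the energy and the volume).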

74 Polymer Simulations. Polymers move very slowly because of entanglement. A good algorithm is "reptation": cut off one end and grow it onto the other end. The acceptance probability involves the change in potential. Simple moves go quickly through polymer configuration space. Completely unphysical dynamics, or is it? This may be how entangled polymers actually move.

75 Continuum of Methods:
–Path-integral Monte Carlo
–Ab initio molecular dynamics
–Semi-empirical molecular dynamics
–Langevin equation
–Brownian dynamics
–Metropolis Monte Carlo
–Kinetic Monte Carlo
The general procedure is to average out the fast degrees of freedom.

76 Monte Carlo Dynamics. MC was introduced as a way of sampling configuration space, so MC dynamics is purely fictitious; Brownian dynamics and reptation are model dynamics. If we satisfy certain conservation laws in the dynamics (e.g. locality of moves), then the dynamics is a possible model with "local thermodynamic equilibrium". For the Ising model we speak of Kawasaki dynamics, which respects the locality of spin. MC dynamics is useful because it is so much faster than MD, but it neglects long-time correlations which can arise from hydrodynamic effects.

77 Comparison between MC and MD: which is better? MD can compute dynamics. MC has kinetics under user control, but the dynamics is not necessarily physical; MC dynamics is useful for studying long-term diffusive processes. MC is simpler: no forces, no time-step errors, and a direct simulation of the canonical ensemble. In MD you can only work on making the CPU time per unit of physical time smaller; in MC you can invent better transition rules, and ergodicity is less of a problem. MD is sometimes very effective in highly constrained systems. MC is more general: it can handle discrete degrees of freedom (e.g. spin models, quantum systems), the grand canonical ensemble, ... So you need both! The best is to have both in the same code, so you can use MC to warm up the dynamics.

78 "An intelligent being who, at a given moment, knows all the forces that cause nature to move and the positions of the objects that it is made from, if also it is powerful enough to analyze this data, would have described in the same formula the movements of the largest bodies of the universe and those of the lightest atoms. Although scientific research steadily approaches the abilities of this intelligent being, complete prediction will always remain infinitely far away." Laplace, 1820. Although detailed predictions may be impossible, is it possible to exactly calculate the properties of materials?

