Monte Carlo Simulation of Canonical Distribution

The idea is to generate states i, j, … by a stochastic process such that the probability π(i) of state i is given by the appropriate distribution (canonical, grand canonical, etc.). States are generated and the desired quantity A_i (energy, magnetization, …) is calculated for each state:

⟨A⟩ = lim_{τ→∞} (1/τ) Σ_{i=1}^{τ} A_i

For the canonical distribution,

π(i) = exp(-βE_i) / Z,  where  Z = Σ_i exp(-βE_i)

How do we do this using a computer? Consider a system of N classical spins which can be up or down. The total number of microstates is M = 2^N, and the energy of a configuration is

H = -J Σ_{i<j} s_i s_j

We could generate configurations at random, calculate E(i), and weight each contribution by exp(-βE(i)):

⟨E⟩ = Σ_i E(i) exp(-βE(i)) / Σ_i exp(-βE(i))

This is very inefficient, since M = 2^N is exponentially large: we can never generate all states, and if configurations are drawn with equal probability, most of them make only a small contribution. We want to use importance sampling!
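For a small system the brute-force Boltzmann-weighted average above can be written down directly. The following is a minimal sketch (the function name and the choice of a 1D open chain are my own, not from the slides); the exponential cost in N is exactly why this approach fails for large systems.

```python
import itertools
import math

def exact_average_energy(N, J=1.0, beta=1.0):
    """Canonical average <E> for a 1D Ising chain of N spins (open ends),
    computed by summing over all 2**N microstates with Boltzmann weights."""
    Z = 0.0
    E_sum = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = -J * sum(spins[k] * spins[k + 1] for k in range(N - 1))
        w = math.exp(-beta * E)   # Boltzmann weight of this microstate
        Z += w
        E_sum += E * w
    return E_sum / Z
```

Already for N = 20 this visits about 10^6 states; for N = 100 it would need 2^100 ≈ 10^30 evaluations, which is hopeless.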

Importance sampling

⟨A⟩ = Σ_i [A(i)/π(i)] exp(-βE(i)) π(i) / Σ_i [1/π(i)] exp(-βE(i)) π(i)

If we generate the microstates with probability

π(i) = exp(-βE(i)) / Σ_j exp(-βE(j)),

then the weights cancel, and the average over the n sampled states is simply

⟨A⟩ = (1/n) Σ_i A(i)

How do we obtain π(i)?

Markov process

Suppose the system is in state i. The next state j is selected with a transition probability P(j ← i) that does not depend on the previous history of the system. This process produces states with a unique steady-state probability distribution (after a transient). The steady-state probability π(j) is an eigenvector, with eigenvalue 1, of the transition matrix:

π(j) = Σ_i P(j ← i) π(i)

Consider the following example. A student changes rooms at regular intervals, using any of the doors leaving the current room with equal probability. (Rooms 1, 2 and 3 are mutually connected; room 4 opens only into room 3.) What are the transition probabilities? What fraction of the time will the student spend in each room in the steady state?

E.g. if the student is in room 2, then P(3 ← 2) = P(1 ← 2) = 1/2. Similarly, P(1 ← 3) = P(2 ← 3) = P(4 ← 3) = 1/3.

Hence P(j  i) = 0 1/2 1/3 0 1/2 0 1/3 0 1/2 1/ /3 0 Eigenvalues are 1, -1/2, -1/4  1/2 (11/12) 1/2 Eigenvector of largest eigenvalue is ( 1/4, 1/4, 3/8, 1/8) Hence after a long time we reach a steady state with  (1)= 1/4  (2)= 1/4  (3)=3/8  (4)=1/8 Note:   (i) = 1 (normalization) P(j  i)  (i) = P(i  j)  (j) (detailed balance)

Ising Model

Suppose the system is in state i. Pick a site α at random and consider flipping its spin, s_α → -s_α. The final state can be the same (i) or a different one (j). After n steps,

π(f) = lim_{n→∞} Σ P(f ← i_{n-1}) P(i_{n-1} ← i_{n-2}) … P(i_1 ← i)

approaches a limiting distribution independent of the initial state i. We require π(f) to be normalized and to satisfy

π(m)/π(j) = exp[-β(E(m) - E(j))]  for all pairs m, j

Normalization means Σ_j P(j ← m) = 1, and

P(j ← m) π(m) = P(m ← j) π(j)  ("detailed balance")

Hence

π(m) = Σ_j P(j ← m) π(m) = Σ_j P(m ← j) π(j),

so π(m) is a stationary probability distribution.

Metropolis Algorithm

0) Establish an initial microstate.
1) Pick a site α at random.
2) Compute the energy change ΔE = E(new) - E(old) that flipping the spin would cause.
3) If ΔE ≤ 0, accept the flip and proceed to step 6.
4) If ΔE > 0, compute w = e^(-βΔE).
5) Generate a random number r uniformly in [0,1]; if r ≤ w, accept the new state, otherwise keep the old one.
6) Determine the value of A(i) for the current state.
7) Repeat steps 1) to 6).
8) Calculate ⟨A⟩ and ⟨A²⟩ - ⟨A⟩².
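The steps above can be sketched for the 2D Ising model as follows. This is a minimal illustration, not the code behind the slides: the function name and parameters are my own, measurements are taken once per sweep rather than per flip, and the second half of the run is crudely treated as equilibrated.

```python
import math
import random

def metropolis_ising(L=8, T=2.0, J=1.0, sweeps=400, seed=1):
    """Metropolis sampling of the 2D Ising model on an L x L square lattice
    with periodic boundary conditions. Returns per-spin estimates of <E>
    and <|M|>, measured once per sweep over the second half of the run."""
    rng = random.Random(seed)
    beta = 1.0 / T
    N = L * L
    s = [[1] * L for _ in range(L)]     # start from the all-up ground state

    def dE(i, j):
        # Energy change for flipping s[i][j]; modular indices implement PBC.
        nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        return 2.0 * J * s[i][j] * nn

    E_acc = M_acc = 0.0
    nsamp = 0
    for sweep in range(sweeps):
        for _ in range(N):              # one sweep = N attempted flips
            i, j = rng.randrange(L), rng.randrange(L)
            d = dE(i, j)
            if d <= 0 or rng.random() <= math.exp(-beta * d):
                s[i][j] = -s[i][j]      # accept the flip
        if sweep >= sweeps // 2:        # crude equilibration cut
            E = -J * sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
                         for i in range(L) for j in range(L))
            E_acc += E / N
            M_acc += abs(sum(map(sum, s))) / N
            nsamp += 1
    return E_acc / nsamp, M_acc / nsamp
```

Well below T_c the magnetization per spin stays near 1; well above it, |M|/N fluctuates around a small value.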

Periodic boundary conditions: each spin on an edge of the lattice interacts with the spin on the opposite edge, so every site has the same number of neighbours and surface effects are eliminated.

Specific heat and magnetic susceptibility

C_v = (⟨E²⟩ - ⟨E⟩²) / (kT²)

χ = (⟨M²⟩ - ⟨M⟩²) / (kT)

e.g. the Ising model, s_i = ±1, on a square lattice of N = L² sites. In the limit L → ∞, the exact results are known.
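The fluctuation formulas translate directly into estimators over the sampled time series of E and M. A sketch (the function name is my own):

```python
import statistics

def response_functions(E_samples, M_samples, T, k=1.0):
    """Fluctuation estimators from Monte Carlo time series:
       C_v = (<E^2> - <E>^2) / (k T^2)
       chi = (<M^2> - <M>^2) / (k T)
    """
    var_E = statistics.fmean(e * e for e in E_samples) - statistics.fmean(E_samples) ** 2
    var_M = statistics.fmean(m * m for m in M_samples) - statistics.fmean(M_samples) ** 2
    return var_E / (k * T * T), var_M / (k * T)
```

Note that these are variances of the sampled quantities, so a single run at temperature T yields both response functions with no extra simulation cost.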

In the limit L → ∞ the system undergoes a phase transition. The exact critical temperature (Onsager) is T_c = 2J / [k ln(1 + √2)] ≈ 2.269 J/k. The specific heat diverges logarithmically, C_v ~ ln|T - T_c|, and the susceptibility diverges as χ ~ |T - T_c|^(-γ) with γ = 7/4.

Monte Carlo Simulation of the Ising Model

This is an example of an order-disorder transition: F = E - TS, i.e. energy (order) competing with entropy (disorder). In d = 1, the ground state at T = 0 has all spins aligned parallel. The low-energy excitations are domain walls; creating one costs

ΔE = 2J,  ΔS = k ln(N)

(the wall can sit at any of ~N positions), so

ΔF = 2J - kT ln(N),

which is negative for sufficiently large N at any T > 0. The ordered phase is therefore unstable at finite temperature towards the formation of defects (domain walls): there is no phase transition in d = 1.
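The domain-wall argument is a one-line formula, sketched here numerically (function name and parameter values are my own, chosen only to illustrate where ΔF changes sign):

```python
import math

def domain_wall_delta_F(N, J=1.0, kT=0.1):
    """Free-energy cost of a single domain wall in a 1D chain of N spins:
    Delta F = 2J - kT ln N  (Delta E = 2J, Delta S = k ln N)."""
    return 2.0 * J - kT * math.log(N)
```

ΔF turns negative once N > exp(2J/kT); at kT = 0.1 J that happens around N ≈ 5 × 10^8, so however low the temperature, a long enough chain always disorders.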

d = 2

On the square lattice, the ground state again has all spins aligned parallel. The low-energy excitations are compact clusters (domains) of overturned spins. For a cluster whose perimeter has length r (the slide sketches r = 8),

ΔE = 2Jr,  ΔS ≈ k ln(3^r) = k r ln 3

(at each step the perimeter can continue in roughly 3 directions). Hence

ΔF ≈ [2J - kT ln 3] r,  r >> 1.

At low T, ΔF is positive, so large clusters are suppressed and the order survives; ΔF vanishes at a finite temperature, kT ≈ 2J / ln 3 ≈ 1.8 J, the same order of magnitude as the exact T_c. This Peierls-type argument shows that the 2D Ising model does order at low temperature.