The Monte Carlo Method / Markov Chains / The Metropolis Algorithm, from sections 2.9-2.11 of “Adaptive Cooperative Systems”, summarized by Jinsan Yang.

2.9 The Monte Carlo Method

Definition
- The Monte Carlo method is a device for studying stochastic models.
- It was introduced by Ulam and von Neumann in 1947 for solving integral and differential equations.
- It is often easier to solve an equation numerically by finding an associated stochastic process and computing the resulting statistics than to solve the equation directly.
- It is the use of random sampling for the solution of a deterministic mathematical problem (see the sketch below).
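As a minimal illustration of this idea (my own example, not from the book), the sketch below estimates the deterministic integral of exp(-x^2) on [0, 1] by averaging the integrand at uniform random points; the function and sample count are arbitrary choices.

```python
import math
import random

def mc_integral(f, n_samples=100_000, seed=0):
    """Estimate the integral of f over [0, 1] by averaging f at uniform random points."""
    rng = random.Random(seed)
    total = sum(f(rng.random()) for _ in range(n_samples))
    return total / n_samples

# Example: integral of exp(-x^2) on [0, 1]; the exact value is about 0.7468.
print(mc_integral(lambda x: math.exp(-x * x)))
```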

Study of Reaction Processes
- The Monte Carlo method was applied to the oxygen-carbon reactions.
- Emission probabilities for six particle species were computed and compared with a uniformly distributed random number to determine which particle type would be emitted.
- A candidate energy was accepted if the probability of emitting a particle with the proposed energy exceeded another random number; otherwise it was rejected, and the process was repeated until an energy was selected (see the sketch below).
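A minimal sketch of this two-step scheme, with made-up species probabilities and an arbitrary energy acceptance curve purely for illustration:

```python
import math
import random

rng = random.Random(1)

# Hypothetical emission probabilities for six particle species (they must sum to 1).
species = ["n", "p", "d", "t", "He3", "alpha"]
probs = [0.30, 0.25, 0.15, 0.10, 0.10, 0.10]

def pick_species():
    """Compare one uniform random number with the cumulative emission probabilities."""
    r, cum = rng.random(), 0.0
    for s, p in zip(species, probs):
        cum += p
        if r < cum:
            return s
    return species[-1]

def pick_energy(accept_prob, e_max=10.0):
    """Rejection step: propose candidate energies until the emission probability
    for the proposed energy exceeds another uniform random number."""
    while True:
        e = rng.uniform(0.0, e_max)
        if accept_prob(e) > rng.random():
            return e

# Illustrative (made-up) acceptance probability that decays with energy.
print(pick_species(), pick_energy(lambda e: math.exp(-e / 3.0)))
```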

Problems in Statistical Mechanics
- The Monte Carlo method was extended to statistical mechanics by Metropolis et al. in 1953 to determine properties of substances composed of many interacting elements.
- In a direct calculation, every possible configuration of the N-molecule system would have to be included.
- In the Monte Carlo method, the summation is instead carried out by choosing configurations at random and weighting each by the appropriate Boltzmann factor (see the sketch below).
- In the approach of Metropolis et al., a molecule was selected and given a small displacement determined by uniformly drawn random numbers. The move was accepted if it lowered the energy, or, when the energy increased, if the corresponding Boltzmann factor exceeded a random number; the procedure was then repeated.
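A minimal sketch of the random-configuration, Boltzmann-weighted estimate (not yet importance sampling), for a toy system of N independent two-state elements with a made-up energy function:

```python
import math
import random

rng = random.Random(2)
N, kT = 8, 1.0

def energy(config):
    # Toy energy: the number of "up" elements (illustrative only).
    return float(sum(config))

def boltzmann_weighted_average(n_samples=50_000):
    """Estimate <E> by drawing configurations uniformly at random and
    weighting each by its Boltzmann factor exp(-E/kT)."""
    num = den = 0.0
    for _ in range(n_samples):
        config = [rng.randint(0, 1) for _ in range(N)]
        w = math.exp(-energy(config) / kT)
        num += w * energy(config)
        den += w
    return num / den

print(boltzmann_weighted_average())
```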

2.10 Markov Chains

Definitions
- In an array of N elements, each element can be in an up or down state; to each unique configuration of the array corresponds a value of a random variable such as the energy or magnetization of the system.
- The collection of configuration points for which $X = x_i$ is referred to as the state $X = x_i$ (see the sketch below).
- These definitions can be extended to two or more random variables.
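As a small illustration (my own example), the sketch below enumerates all configurations of a 3-element up/down array and groups them into states by their magnetization:

```python
from collections import defaultdict
from itertools import product

# All configurations of a 3-element array; each element is +1 (up) or -1 (down).
states = defaultdict(list)
for config in product([+1, -1], repeat=3):
    magnetization = sum(config)           # the random variable X for this configuration
    states[magnetization].append(config)  # the state X = x_i collects these configurations

for m, configs in sorted(states.items()):
    print(f"X = {m:+d}: {configs}")
```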

- Transition probabilities: $p_{ij} = P(X_{n+1} = x_j \mid X_n = x_i)$, with $p_{ij} \ge 0$ and $\sum_j p_{ij} = 1$.
- Markov chain: a stochastic process in which the probability of the next state depends only on the current state, $P(X_{n+1} = x_j \mid X_n = x_i, X_{n-1}, \ldots, X_0) = P(X_{n+1} = x_j \mid X_n = x_i)$ (see the sketch below).
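A minimal sketch of simulating such a chain, assuming an arbitrary 3-state transition matrix chosen only for illustration:

```python
import random

rng = random.Random(3)

# Hypothetical 3-state transition matrix; each row sums to 1.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def step(i):
    """Move from state i to the next state according to row i of P."""
    r, cum = rng.random(), 0.0
    for j, p in enumerate(P[i]):
        cum += p
        if r < cum:
            return j
    return len(P[i]) - 1

state, path = 0, [0]
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)
```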

Convergence to stationary distributions
- Irreducible MC: there is no closed collection of states other than the collection of all states (every state can be reached from every other state).
- A probability distribution $\{w_i\}$ is stationary if $w_j = \sum_i w_i p_{ij}$ for all $j$.
- Convergence theorem for an irreducible and aperiodic MC: the n-step transition probabilities approach a unique stationary distribution whenever a stationary distribution exists, so the influence of the initial state disappears (see the sketch below); for some chains no stationary distribution exists.
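A minimal numerical check of the convergence theorem, reusing the hypothetical 3-state matrix from the previous sketch: repeated multiplication makes all rows of $P^n$ approach the same stationary distribution.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The same hypothetical transition matrix as in the previous sketch.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

Pn = P
for _ in range(50):          # P^51: far enough for the rows to agree
    Pn = matmul(Pn, P)

for row in Pn:               # every row is (approximately) the stationary distribution w
    print([round(x, 4) for x in row])
```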

Convergence to stationary distributions
- Detailed balance relation: $w_i p_{ij} = w_j p_{ji}$ for every pair of states $i, j$.
- Under this condition the probability distribution $\{w_i\}$ is stationary (see the check below).
- Goals of Metropolis et al.:
  - Avoid explicit evaluation of the partition function.
  - Achieve efficient MC sampling by picking configurations according to their importance.
  - To achieve these, the transition probabilities are defined through the detailed balance relation, with the Gibbs distribution as the stationary distribution.
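A short check of why detailed balance implies stationarity (a standard argument, added here as a reminder):

\[
\sum_i w_i p_{ij} = \sum_i w_j p_{ji} = w_j \sum_i p_{ji} = w_j ,
\]

so a distribution satisfying detailed balance is left unchanged by the chain.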

2.11 The Metropolis Algorithm
- Start from a Markov chain whose a priori transition probabilities $p^{0}_{ij}$ satisfy three conditions: $p^{0}_{ij} \ge 0$, $\sum_j p^{0}_{ij} = 1$, and symmetry $p^{0}_{ij} = p^{0}_{ji}$.
- Define the transition probabilities as $p_{ij} = p^{0}_{ij}\,\min\!\bigl(1,\, e^{-(E_j - E_i)/k_B T}\bigr)$ for $i \ne j$, and $p_{ii} = 1 - \sum_{j \ne i} p_{ij}$.
- Gibbs distribution: $w_i = e^{-E_i/k_B T}/Z$ with $Z = \sum_i e^{-E_i/k_B T}$ (see the sketch below).
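A small numerical sketch (my own, with made-up energies and a uniform symmetric proposal) that builds this transition matrix and confirms that the Gibbs distribution is left unchanged by it:

```python
import math

kT = 1.0
E = [0.0, 0.5, 1.5, 2.0]                # made-up energies of four states
n = len(E)
w = [math.exp(-e / kT) for e in E]
Z = sum(w)
w = [x / Z for x in w]                  # Gibbs distribution

p0 = 1.0 / (n - 1)                      # symmetric a priori proposal: uniform over the other states
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            P[i][j] = p0 * min(1.0, math.exp(-(E[j] - E[i]) / kT))
    P[i][i] = 1.0 - sum(P[i])           # remaining probability: stay in state i

# Check stationarity: (w P)_j should equal w_j for every state j.
wP = [sum(w[i] * P[i][j] for i in range(n)) for j in range(n)]
print([round(x, 6) for x in w])
print([round(x, 6) for x in wP])
```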

- The transition probabilities defined in this way obey the detailed balance relation with the Gibbs distribution (see the check below).
- The limiting stationary distribution reached by the n-step transition probabilities is unique, and therefore the stationary distribution generated by the Metropolis algorithm must be the Gibbs distribution.
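For completeness, the standard detailed-balance check for the Metropolis rule above, assuming without loss of generality $E_j \ge E_i$:

\[
w_i p_{ij} = \frac{e^{-E_i/k_B T}}{Z}\, p^{0}_{ij}\, e^{-(E_j - E_i)/k_B T}
           = \frac{e^{-E_j/k_B T}}{Z}\, p^{0}_{ji}
           = w_j p_{ji},
\]

using the symmetry $p^{0}_{ij} = p^{0}_{ji}$ and the fact that the downhill move from $j$ to $i$ is accepted with probability one.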

Metropolis sampling procedure
- At time t the state X takes the value $x_t$; select a trial state $Y_t$ at random from the state space, in a manner that satisfies the symmetry condition $p^{0}_{ij} = p^{0}_{ji}$.
- Compute the energy difference $\Delta E = E(Y_t) - E(x_t)$.
- If the energy difference is negative, the transition is allowed: $x_{t+1} = Y_t$.
- If the energy difference is positive, select a random number r between 0 and 1 and allow the transition if $r < e^{-\Delta E / k_B T}$; otherwise the system stays in its current state (see the sketch below).
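A minimal end-to-end sketch of this procedure (my own illustration, sampling a single system over a few made-up discrete energy levels); the estimated average energy can be compared with the exact Gibbs average:

```python
import math
import random

rng = random.Random(4)
kT = 1.0
E = [0.0, 0.5, 1.5, 2.0]                # made-up energy levels, as in the earlier sketch

def metropolis_step(i):
    """One Metropolis update: symmetric uniform proposal, then accept/reject."""
    j = rng.randrange(len(E))           # propose any state uniformly (symmetric)
    dE = E[j] - E[i]
    if dE < 0 or rng.random() < math.exp(-dE / kT):
        return j                        # allow the transition
    return i                            # otherwise stay in the current state

state, samples = 0, []
for t in range(200_000):
    state = metropolis_step(state)
    if t > 10_000:                      # discard an initial equilibration period
        samples.append(E[state])

Z = sum(math.exp(-e / kT) for e in E)
exact = sum(e * math.exp(-e / kT) for e in E) / Z
print(f"MC estimate of <E>: {sum(samples) / len(samples):.4f}, exact: {exact:.4f}")
```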