9. Convergence and Monte Carlo Errors

Measuring Convergence to Equilibrium. The variation distance between two probability distributions $P_1$ and $P_2$ is $\|P_1 - P_2\| = \max_A |P_1(A) - P_2(A)| = \tfrac{1}{2} \sum_i |P_1(i) - P_2(i)|$, where $A$ is a set of states and $i$ is a single state.
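As a small illustration, here is a minimal Python sketch of the second form of the distance; the 3-state chain and its transition matrix below are my own made-up example, not from the lecture:

```python
import numpy as np

def variation_distance(p1, p2):
    """Total variation distance: (1/2) * sum_i |P1(i) - P2(i)|."""
    return 0.5 * np.abs(np.asarray(p1) - np.asarray(p2)).sum()

# The distribution after n steps of a Markov chain (row vector times W^n)
# approaches the stationary distribution, and the distance shrinks with n.
p0 = np.array([1.0, 0.0, 0.0])            # start in state 0
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])        # made-up transition matrix
target = np.array([1/3, 1/3, 1/3])        # its stationary distribution
pn = p0.copy()
for n in range(5):
    print(n, variation_distance(pn, target))
    pn = pn @ W
```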

Eigenvalue Problem. Consider the matrix $S$ defined by $S_{ij} = p_i^{1/2}\, W(i \to j)\, p_j^{-1/2}$. If $W$ satisfies detailed balance, $p_i W(i \to j) = p_j W(j \to i)$, then $S$ is real and symmetric, and the eigenvalues of $S$ satisfy $|\lambda_n| \le 1$. One of the eigenvalues must be $\lambda_0 = 1$, with eigenvector $u_{j0} = p_j^{1/2}$.

Spectrum Decomposition. Since $S$ is real and symmetric, we have $U^T S U = \Lambda$, or $S = U \Lambda U^T$, where $\Lambda$ is a diagonal matrix with diagonal elements $\lambda_k$ and $U$ is an orthogonal matrix, $U U^T = I$. Writing $P = \mathrm{diag}(p_1, p_2, \ldots)$, the transition matrix can be expressed in terms of $U$, $P$, and $\Lambda$ as $W = P^{-1/2} U \Lambda U^T P^{1/2}$.

Evolution in terms of eigen-states. $P_n = P_0 W^n = P_0 (P^{-1/2} U \Lambda U^T P^{1/2})(P^{-1/2} U \Lambda U^T P^{1/2}) \cdots = P_0 P^{-1/2} U \Lambda^n U^T P^{1/2}$, since the inner factors $P^{1/2} P^{-1/2}$ cancel and $U^T U = I$. In component form, this means $P_n(j) = \sum_i P_0(i)\, p_i^{-1/2}\, p_j^{1/2} \sum_k \lambda_k^n\, u_{ik} u_{jk}$.

Discussion. In the limit $n \to \infty$, only the $k = 0$ term survives: $P_n(j) \approx \sum_i P_0(i)\, p_i^{-1/2}\, p_j^{1/2}\, u_{i0} u_{j0} = p_j$, using $u_{i0} = p_i^{1/2}$ and $\sum_i P_0(i) = 1$. The leading correction to this limit comes from the second largest eigenvalue $\lambda_1$: $P_n(j) \approx p_j + a\, \lambda_1^n = p_j + a\, e^{-n/\tau}$.

Exponential Correlation Time. We define $\tau$ through the second largest eigenvalue: $\tau = -1/\log \lambda_1$. This number characterizes the theoretical rate of convergence of the Markov chain to equilibrium.
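To make this concrete, here is a minimal Python sketch that builds $S$ from a transition matrix, checks its symmetry, and extracts $\lambda_1$ and $\tau$; the 3-state $W$ and $p$ are made-up numbers chosen to satisfy detailed balance, not an example from the lecture:

```python
import numpy as np

# Made-up 3-state transition matrix W (rows sum to 1) with stationary
# distribution p; the entries are chosen to satisfy detailed balance.
p = np.array([0.5, 0.3, 0.2])
W = np.array([[0.80, 0.12, 0.08],
              [0.20, 0.70, 0.10],
              [0.20, 0.15, 0.65]])

# Detailed balance check: p_i W_ij == p_j W_ji
assert np.allclose(p[:, None] * W, (p[:, None] * W).T)

# Symmetrized matrix S_ij = p_i^{1/2} W_ij p_j^{-1/2}
S = np.sqrt(p)[:, None] * W / np.sqrt(p)[None, :]
assert np.allclose(S, S.T)

# Eigenvalues of S (same as those of W), sorted in descending order
lam = np.sort(np.linalg.eigvalsh(S))[::-1]
print("eigenvalues:", lam)          # lam[0] is exactly 1

# Exponential correlation time tau = -1/log(lambda_1)
tau = -1.0 / np.log(lam[1])
print("tau =", tau)
```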

Measuring Error. Let $Q_t$ be some quantity of interest at time step $t$; the sample average over a run of length $N$ is $Q_N = (1/N) \sum_{t=1}^{N} Q_t$. We treat $Q_N$ as a random variable. By the central limit theorem, for large $N$ the distribution of $Q_N$ is approximately normal, with mean $\langle Q_N \rangle = \langle Q \rangle$ and variance $\sigma_N^2 = \langle Q_N^2 \rangle - \langle Q_N \rangle^2$. Here $\langle \cdots \rangle$ stands for the average over the exact distribution.

Confidence Interval. The chance that the actual mean $\langle Q \rangle$ lies in the interval $[Q_N - \sigma_N,\, Q_N + \sigma_N]$ is about 68 percent. However, $\sigma_N$ cannot be computed exactly from a single MC run of length $N$; it must be estimated.

Estimating Variance. The calculation of $\mathrm{var}(Q) = \langle Q^2 \rangle - \langle Q \rangle^2$ and of the integrated correlation time $\tau_{\mathrm{int}}$ can be done in a single run of length $N$, and together they determine $\sigma_N$.

Error Formula. The above derivation gives the famous error estimate in Monte Carlo: $\sigma_N^2 \approx \mathrm{var}(Q)\, \tau_{\mathrm{int}} / N$, where $\mathrm{var}(Q) = \langle Q^2 \rangle - \langle Q \rangle^2$ can be estimated by the sample variance of $Q_t$.
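A minimal Python sketch of this estimate (the function name mc_error, the autocorrelation cutoff, and the synthetic AR(1) test series are my own choices, not from the lecture):

```python
import numpy as np

def mc_error(q, t_max=None):
    """Error of the mean of a correlated series q, using
    sigma_N^2 = var(q) * tau_int / N with tau_int = 1 + 2*sum_{t>=1} f(t)."""
    q = np.asarray(q, dtype=float)
    n = len(q)
    if t_max is None:
        t_max = n // 100            # crude cutoff for the sum over f(t)
    dq = q - q.mean()
    var = dq @ dq / n               # sample variance of q
    tau_int = 1.0
    for t in range(1, t_max):
        f_t = (dq[:-t] @ dq[t:]) / ((n - t) * var)   # normalized f(t)
        tau_int += 2.0 * f_t
    return np.sqrt(var * tau_int / n), tau_int

# Synthetic correlated data: AR(1) process q_t = a*q_{t-1} + noise,
# for which f(t) = a^t, so tau_int should come out near (1+a)/(1-a) = 19.
rng = np.random.default_rng(0)
a, n = 0.9, 100_000
q = np.empty(n)
q[0] = 0.0
for t in range(1, n):
    q[t] = a * q[t - 1] + rng.standard_normal()
err, tau = mc_error(q)
print(f"error of mean = {err:.4f}, tau_int = {tau:.2f}")
```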

Time-Dependent Correlation Function and Integrated Correlation Time. We define the normalized time-dependent correlation function $f(t) = \big( \langle Q_s Q_{s+t} \rangle - \langle Q \rangle^2 \big) / \big( \langle Q^2 \rangle - \langle Q \rangle^2 \big)$, and the integrated correlation time $\tau_{\mathrm{int}} = 1 + 2 \sum_{t=1}^{\infty} f(t)$, so that $\sigma_N^2 \approx \mathrm{var}(Q)\, \tau_{\mathrm{int}} / N$ for large $N$.

Circular Buffer for Calculating f(t). [Figure: a ring of $M$ slots holding $Q_t$ at the current time $t$, $Q_{t-1}$ at the previous time $t-1$, and so on back to the earliest time $t-(M-1)$.] We store the values $Q_s$ of the previous $M-1$ times together with the current value $Q_t$, so that at each step the products $Q_{t-k} Q_t$ for $k = 0, \ldots, M-1$ can be accumulated.
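A possible Python sketch of this bookkeeping (the class and variable names are mine; the lecture only describes the idea):

```python
import numpy as np

class CorrelationAccumulator:
    """Accumulate <Q_s Q_{s+t}> for lags t = 0..M-1 via a circular buffer."""
    def __init__(self, M):
        self.M = M
        self.buf = np.zeros(M)      # circular buffer of recent Q values
        self.head = 0               # slot where the next Q_t is written
        self.count = 0              # samples seen so far
        self.prod = np.zeros(M)     # running sums of Q_{t-k} * Q_t
        self.npair = np.zeros(M)    # number of pairs accumulated per lag
        self.sum_q = 0.0

    def add(self, q):
        self.buf[self.head] = q
        self.count += 1
        self.sum_q += q
        for k in range(min(self.count, self.M)):
            # Q_{t-k} sits k slots behind the head, wrapping around.
            self.prod[k] += self.buf[(self.head - k) % self.M] * q
            self.npair[k] += 1
        self.head = (self.head + 1) % self.M

    def f(self):
        """Normalized f(t) for t = 0..M-1 (call after at least M samples)."""
        mean = self.sum_q / self.count
        c = self.prod / self.npair - mean**2
        return c / c[0]
```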

An Example of f(t). [Figure: time-dependent correlation function for the 3D Ising model at $T_c$ on a $16^3$ lattice, under Swendsen-Wang dynamics. From J.-S. Wang, Physica A 164 (1990) 240.]

Efficient Method for Computing $\tau_{\mathrm{int}}$. We compute $\tau_{\mathrm{int}}$ from the formula $\tau_{\mathrm{int}} = N \sigma_N^2 / \mathrm{var}(Q)$ for small values of $N$, and then extrapolate $N \to \infty$. From J.-S. Wang, O. Kozan and R. H. Swendsen, Phys Rev E 66 (2002).
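One way to realize this in Python is to chop a long run into non-overlapping segments of length $N$ and use the variance of the segment means as $\sigma_N^2$; this is a sketch under my own naming and conventions, not the authors' code:

```python
import numpy as np

def tau_int_of_N(q, N):
    """Estimate tau_int(N) = N * sigma_N^2 / var(q) by splitting the
    series into non-overlapping segments of length N."""
    q = np.asarray(q, dtype=float)
    nseg = len(q) // N
    means = q[:nseg * N].reshape(nseg, N).mean(axis=1)
    sigma2_N = means.var()          # variance of the segment means
    return N * sigma2_N / q.var()

# tau_int(N) grows with N and plateaus at tau_int; the plateau (or an
# extrapolation N -> infinity) is the estimate. For example, with the
# AR(1) series q from the earlier sketch:
#   for N in (10, 100, 1000, 10000):
#       print(N, tau_int_of_N(q, N))
```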

Exponential and Integrated Correlation Times. For the slowest mode, $f(t) = \lambda_1^t$, so $\tau_{\mathrm{int}} = 1 + 2 \sum_{t=1}^{\infty} \lambda_1^t = (1 + \lambda_1)/(1 - \lambda_1)$, where $\lambda_1 < 1$ is the second largest eigenvalue of the $W$ matrix. This result says that the exponential correlation time $\tau = -1/\log \lambda_1$ is related to the largest integrated correlation time: as $\lambda_1 \to 1$, $\tau \approx 1/(1-\lambda_1)$ and $\tau_{\mathrm{int}} \approx 2/(1-\lambda_1)$, so $\tau_{\mathrm{int}} \approx 2\tau$.

Critical Slowing Down. [Figure: correlation time $\tau$ versus temperature $T$, diverging near $T_c$.] The correlation time becomes large near $T_c$. For a finite system of linear size $L$, $\tau(T_c) \propto L^z$, with dynamical critical exponent $z \approx 2$ for local moves.

Relaxation towards Equilibrium. [Figure: schematic curves of the total magnetization $m$ as a function of time $t$, for $T < T_c$, $T = T_c$, and $T > T_c$.] At $T_c$ relaxation is slow, described by a power law: $m \propto t^{-\beta/(z\nu)}$.

Jackknife Method. Let $n$ be the number of independent samples, let $c$ be some estimate computed using all $n$ samples, and let $c_i$ be the same estimate computed using $n-1$ samples, with the $i$-th sample removed. Then the jackknife error estimate is $\sigma^2 = \frac{n-1}{n} \sum_{i=1}^{n} (c_i - \bar c)^2$, where $\bar c = \frac{1}{n} \sum_{i=1}^{n} c_i$.
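A minimal Python sketch of the jackknife for a generic estimator; the particular estimator used below (a ratio of means, where naive error propagation is awkward) is my own illustrative choice:

```python
import numpy as np

def jackknife_error(samples, estimator):
    """Jackknife error of estimator(samples) over n independent samples."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    # Leave-one-out estimates c_i
    c_i = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    c_bar = c_i.mean()
    return np.sqrt((n - 1) / n * np.sum((c_i - c_bar) ** 2))

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=0.5, size=200)
ratio = lambda s: np.mean(s**2) / np.mean(s)**2   # nonlinear estimator
print("estimate =", ratio(x), "+/-", jackknife_error(x, ratio))
```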