Lecture 15: Sampling

Approximate Inference
Motivation comes from EM, where the E-step requires inference that is often intractable. Approximate techniques include the Laplace approximation (a Gaussian fitted around the mode), mean field (a factorized ansatz), and loopy belief propagation; all of these are biased but efficient. Sampling is asymptotically unbiased but typically slow.
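A minimal sketch of the Monte Carlo principle underlying the "asymptotically unbiased but slow" claim, assuming NumPy; the Gaussian target and f(x) = x^2 are illustrative choices, not from the slides. The sample average of f is an unbiased estimate of E[f(x)] whose error shrinks like 1/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(f, n):
    # Estimate E[f(X)] under p(x) = N(0, 1) by averaging over samples.
    x = rng.standard_normal(n)
    return f(x).mean()

# True value of E[X^2] is 1; the estimate converges as n grows.
for n in (10, 1000, 100000):
    print(n, mc_estimate(lambda x: x ** 2, n))
```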

Sampling
Ancestral sampling: top-down sampling from a Bayes net, drawing each variable from its conditional given its already-sampled parents. Sampling by transformation: find a transformation f(.) such that x = f(y), where y is simple to sample from. Rejection sampling: find an envelope f(x) >= p(x), sample from the distribution proportional to f(x), and accept each sample with probability p(x)/f(x). There are many more methods that we don't have time for. Most methods, however, break down in high dimensions: the curse of dimensionality.
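A minimal rejection-sampling sketch, assuming NumPy; the unnormalized target x(1-x) on [0, 1] and the constant envelope are hypothetical choices for illustration. The envelope here is f(x) = M over [0, 1] with M an upper bound on p(x), so each uniform proposal is kept with probability p(x)/f(x):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_unnorm(x):
    # Unnormalized target on [0, 1]: a Beta(2, 2)-shaped bump.
    return x * (1.0 - x)

# Constant envelope f(x) = M must dominate p(x); max of x(1-x) is 0.25.
M = 0.25

def rejection_sample(n):
    samples = []
    while len(samples) < n:
        x = rng.uniform(0.0, 1.0)              # propose from the envelope
        if rng.uniform(0.0, M) < p_unnorm(x):  # accept w.p. p(x)/f(x)
            samples.append(x)
    return np.array(samples)

xs = rejection_sample(10000)
print(xs.mean())  # near 0.5 for this symmetric target
```

The acceptance rate is the ratio of the areas under p and f, which is exactly what collapses in high dimensions: a tractable envelope becomes exponentially loose.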

MCMC & Gibbs
We give up on all samples being independent: each new sample is drawn given the previous one according to a proposal distribution S(x'|x), and the new sample x' is accepted only with a certain probability A(x, x'). The transition kernel T(x, x') = S(x'|x) × A(x, x') is chosen such that the samples eventually come from the desired distribution p(x): p(x) should be the invariant distribution of the kernel T, which is unique if the chain is ergodic (aperiodic and irreducible). Detailed balance, p(x) T(x, x') = p(x') T(x', x), is a sufficient condition that is easier to satisfy.
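A small numeric check of these claims, assuming NumPy; the three-state target and uniform proposal are illustrative, not from the slides. With the Metropolis acceptance A(x, x') = min(1, p(x')/p(x)) (the symmetric-proposal case discussed below), the resulting kernel satisfies detailed balance, and p is therefore invariant:

```python
import numpy as np

p = np.array([0.1, 0.3, 0.6])    # target distribution on 3 states
S = np.full((3, 3), 1.0 / 3.0)   # symmetric (uniform) proposal

A = np.minimum(1.0, p[None, :] / p[:, None])   # A(x, x') = min(1, p(x')/p(x))
T = S * A                                      # T(x, x') = S(x'|x) * A(x, x')
T[np.diag_indices(3)] += 1.0 - T.sum(axis=1)   # rejected moves stay put

# Detailed balance: p(x) T(x, x') == p(x') T(x', x) for all pairs.
assert np.allclose(p[:, None] * T, (p[:, None] * T).T)
# Invariance follows: one step of T leaves p unchanged.
assert np.allclose(p @ T, p)
print(p @ T)
```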

MH & Gibbs
The Metropolis-Hastings Markov Chain Monte Carlo sampler always accepts if x' is more probable, and otherwise accepts with probability p(x')/p(x) (in the case of a symmetric proposal S). Gibbs sampling is an MH method that always accepts and samples from the conditional distributions: for X, Y ~ P(X, Y), iterate X ~ P(X|Y) and Y ~ P(Y|X). MCMC has problems when the random variables are highly dependent and when there are multiple modes; in both cases it takes very long to explore all of the space that has significant probability. This is called slow mixing.
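A minimal Gibbs-sampling sketch, assuming NumPy; the bivariate Gaussian target with correlation rho is an illustrative choice. Both conditionals are Gaussian, so every step is an exact conditional draw and is always accepted; pushing rho toward 1 makes the variables highly dependent and exhibits the slow mixing described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_bivariate_gaussian(rho, n_steps):
    # Target: (X, Y) ~ N(0, [[1, rho], [rho, 1]]).
    # Conditional: X | Y = y ~ N(rho * y, 1 - rho^2), and symmetrically.
    sd = np.sqrt(1.0 - rho ** 2)
    x, y = 0.0, 0.0
    samples = np.empty((n_steps, 2))
    for t in range(n_steps):
        x = rng.normal(rho * y, sd)   # draw X ~ P(X | Y)
        y = rng.normal(rho * x, sd)   # draw Y ~ P(Y | X)
        samples[t] = x, y
    return samples

# With high correlation the conditionals are narrow, so successive
# samples barely move: the chain mixes slowly along the diagonal.
for rho in (0.1, 0.99):
    s = gibbs_bivariate_gaussian(rho, 5000)
    print(rho, np.corrcoef(s.T)[0, 1])
```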