Convergence Study of Message Passing in Arbitrary Continuous Bayesian Networks
Wei Sun and KC Chang
George Mason University
SPIE, Orlando, March 2008

Slide 2: Outline
 Bayesian Network & Probabilistic Inference
 Message Passing Algorithm Review
 Unscented Message Passing for Arbitrary Continuous Bayesian Networks
 Numerical Experiments and Convergence Study
 Summary

Slide 3: Bayesian Network and Its Inference Problems
A Bayesian network (BN) is a useful probabilistic model in statistics, artificial intelligence, and machine learning:
 Conditional independence
 Efficient modeling with visualization, modularity, causal logic, etc.
 The joint probability distribution is represented by the product of conditional probability distributions (CPDs), as shown below.
BN inference is NP-hard in general:
 Tractable exact inference algorithms exist only for special classes of BNs
 Approximate inference is generally feasible: simulation, model simplification, loopy belief propagation, etc.
However, how good is the performance of these approximate methods?
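For reference, the factorization mentioned in the last bullet is the standard BN decomposition of the joint distribution over variables X_1, ..., X_n with parent sets Pa(X_i):

    P(X_1, \dots, X_n) = \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Pa}(X_i)\bigr)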

Slide 4: Inference for Arbitrary Bayesian Networks
When continuous random variables are involved, their distributions can be non-Gaussian and their dependence relationships can be nonlinear. It is well known that in general there is NO EXACT SOLUTION in these cases (exact inference may be feasible for some special cases with exponential distributions).
An approximate inference method, loopy belief propagation, is a good candidate for handling continuous variables.
KEY ISSUES: the representation and manipulation of continuous messages.
We propose a continuous version of the loopy propagation algorithm and investigate its convergence performance. The unscented transformation plays an important role in our algorithm, so it is called "Unscented Message Passing".

Slide 5: Pearl's Message Passing in BNs
In the message passing algorithm, each node maintains a lambda message and a pi message for itself. It also sends a lambda message to each of its parents and a pi message to each of its children. After a finite number of iterations of message passing, every node obtains its belief.
For a polytree, message passing returns the exact belief; for networks with loops, it is called loopy propagation, which can still give a good approximation of the posterior distribution.
J. Pearl. "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference." Morgan Kaufmann, San Mateo, 1988.
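For reference, the standard discrete-case forms of Pearl's updates for a node X with parents U = (U_1, ..., U_m) and children Y_1, ..., Y_c (the textbook equations, not reproduced from this presentation) are:

    \lambda(x) = \prod_{j=1}^{c} \lambda_{Y_j}(x), \qquad
    \pi(x) = \sum_{u} P(x \mid u) \prod_{k=1}^{m} \pi_X(u_k), \qquad
    BEL(x) = \alpha \, \lambda(x) \, \pi(x)

where \alpha is a normalizing constant and \lambda_{Y_j}(x), \pi_X(u_k) denote the messages received from child Y_j and parent U_k, respectively.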

Slide 6: Message Passing in Arbitrary Continuous BNs
Each message is represented by its MEAN and VARIANCE, regardless of the underlying distribution.
Message propagation between continuous variables is equivalent to fusing multiple estimates and estimating the functional transformation of distributions (see the sketch below).
The unscented transformation uses a deterministic sampling scheme to obtain good estimates of the first two moments of a continuous random variable subjected to a nonlinear functional transformation.
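A minimal sketch of the "fusing multiple estimates" step, assuming independent mean/variance estimates of the same quantity are combined by inverse-variance (precision) weighting; the function name and interface are illustrative, not taken from the paper:

    import numpy as np

    def fuse_estimates(means, variances):
        """Combine independent (mean, variance) estimates of the same scalar
        quantity by inverse-variance weighting."""
        means = np.asarray(means, dtype=float)
        precisions = 1.0 / np.asarray(variances, dtype=float)
        fused_var = 1.0 / precisions.sum()
        fused_mean = fused_var * (precisions * means).sum()
        return fused_mean, fused_var

    # Example: two noisy estimates of the same hidden quantity.
    m, v = fuse_estimates([1.0, 3.0], [0.5, 2.0])   # -> mean 1.4, variance 0.4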

Slide 7: Unscented Transformation (UT)
The unscented transformation is a deterministic sampling method:
 It approximates the first two moments of a continuous random variable transformed via an arbitrary nonlinear function.
 UT is based on the principle that it is easier to approximate a probability distribution than an arbitrary nonlinear function.
 Deterministic sample points (sigma points) are chosen and propagated through the nonlinear function (see the sketch below).
S.J. Julier, J.K. Uhlmann. "A General Method for Approximating Nonlinear Transformations of Probability Distributions." Tech. Report, RRG, Univ. of Oxford, 1996.
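A minimal sketch of the unscented transformation for a random vector with given mean and covariance pushed through a nonlinear function g, following the basic Julier-Uhlmann sigma-point scheme with scaling parameter kappa; all names are illustrative:

    import numpy as np

    def unscented_transform(mean, cov, g, kappa=0.0):
        """Estimate the mean and covariance of y = g(x) for x with the given
        first two moments, using 2n+1 deterministically chosen sigma points."""
        mean = np.atleast_1d(np.asarray(mean, dtype=float))
        cov = np.atleast_2d(np.asarray(cov, dtype=float))
        n = mean.size
        # Sigma points: the mean, plus/minus the columns of a square root of (n + kappa) * cov.
        sqrt_cov = np.linalg.cholesky((n + kappa) * cov)
        sigma_pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
        # Weights: W0 = kappa / (n + kappa), all others 1 / (2 (n + kappa)).
        weights = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
        weights[0] = kappa / (n + kappa)
        # Propagate each sigma point through the nonlinear function and re-estimate moments.
        ys = np.array([np.atleast_1d(g(pt)) for pt in sigma_pts])
        y_mean = weights @ ys
        diffs = ys - y_mean
        y_cov = (weights[:, None] * diffs).T @ diffs
        return y_mean, y_cov

    # Usage: compare with a Monte Carlo estimate for a scalar nonlinearity.
    g = lambda x: np.sin(x[0]) + 0.1 * x[0] ** 2
    ut_mean, ut_cov = unscented_transform([0.5], [[0.2]], g, kappa=2.0)
    samples = np.random.default_rng(0).normal(0.5, np.sqrt(0.2), 5000)
    mc = np.sin(samples) + 0.1 * samples ** 2
    print(ut_mean[0], ut_cov[0, 0], mc.mean(), mc.var())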

Slide 8: Unscented Transformation Example
A cloud of 5000 samples drawn from a Gaussian prior is propagated through a highly nonlinear function, and the true posterior sample mean and covariance are calculated; these serve as ground truth for comparing the two approaches, the EKF and the UT.

Slide 9: Unscented Message Passing (UMP-BN) for Arbitrary Continuous BNs
Conventional Pearl's equations are generalized into derived equations that handle continuous variables; the standard continuous-state forms they extend are sketched below.
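As a hedged sketch (the derived equations themselves appear only as figures in the original slides), the standard continuous-state analogues of Pearl's updates replace the sums with integrals, e.g. for a node X with parents U = (U_1, ..., U_m) and children Y_j:

    \lambda(x) = \prod_{j} \lambda_{Y_j}(x), \qquad
    \pi(x) = \int p(x \mid u) \prod_{k=1}^{m} \pi_X(u_k) \, du, \qquad
    BEL(x) \propto \lambda(x) \, \pi(x)

In UMP-BN these integrals are not evaluated in closed form; as the earlier slides describe, each message is summarized by its first two moments, which are approximated using the unscented transformation.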

Slide 10: UMP-BN Algorithm

Slide 11: UMP-BN Numerical Experiments
Linear Gaussian:
 Randomly generated CPDs, linear relationships
Nonlinear Gaussian:
 Purposely specified nonlinear relationships
 No exact benchmark; brute-force likelihood weighting (20-million sample size) is used to provide an approximate ground truth (see the sketch below)
Convergence study:
 Does message passing converge or not?
 How many iterations does it take?
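A minimal sketch of the brute-force likelihood-weighting benchmark mentioned above, for a toy two-node network X -> Y with a nonlinear Gaussian CPD; the model, function names, and parameters are illustrative only, not the experimental models from the paper:

    import numpy as np

    def likelihood_weighting(y_obs, n_samples=20_000_000, seed=0):
        """Estimate the posterior mean and variance of X given Y = y_obs in the
        toy model X ~ N(0, 1), Y | X ~ N(sin(X), 0.5**2), by sampling the
        non-evidence node X from its prior and weighting by the evidence likelihood."""
        rng = np.random.default_rng(seed)
        x = rng.normal(0.0, 1.0, size=n_samples)
        w = np.exp(-0.5 * ((y_obs - np.sin(x)) / 0.5) ** 2)  # proportional to p(y_obs | x)
        w /= w.sum()
        post_mean = np.sum(w * x)
        post_var = np.sum(w * (x - post_mean) ** 2)
        return post_mean, post_var

    # With a large enough sample size this serves as an approximate ground truth
    # against which the UMP-BN posterior moments can be compared.
    mean, var = likelihood_weighting(y_obs=0.8, n_samples=200_000)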

Slide 12: UMP-BN Experimental Models

Slide 13: Numerical Results - 1

Slide 14: Numerical Results - 2

Slide 15: Numerical Results - 3

Slide 16: Convergence
Message passing converges in all of the numerical experiments.
Linear Gaussian:
 Incinerator network: 9.8 iterations on average
 Alarm network: 15.5 iterations on average
Nonlinear Gaussian:
 Incinerator network: 10 iterations with the specified nonlinear functions

Slide 17: Summary and Future Work
Unscented Message Passing (UMP) provides a good alternative belief propagation algorithm for arbitrary continuous Bayesian networks.
In our limited simulation cases, UMP always converges, and it converges within a small number of iterations.
Theoretically, the complexity of loopy-propagation-based algorithms depends on the size of the loops and the so-called induced width of the network.
Further sampling based on the UMP results could efficiently provide estimates of the underlying distributions.