CSCE 582: Computation of the Most Probable Explanation in Bayesian Networks Using Bucket Elimination
Hareesh Lingareddy, University of South Carolina

Bucket Elimination
- An algorithmic framework that generalizes dynamic programming to accommodate algorithms for many complex problem-solving and reasoning activities.
- Uses "buckets" to mimic the algebraic manipulations involved in each of these problems, resulting in an easily expressible algorithmic formulation.

Bucket Elimination Algorithm
- Partition the functions of the network into "buckets", working backwards relative to the given node order.
- In the bucket of variable X, place all functions that mention X but do not mention any variable with a higher index.
- Process the buckets backwards relative to the node order.
- Each function computed during elimination is placed in the bucket of the highest-indexed variable in its scope.
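To make the partitioning step concrete, here is a minimal Python sketch (the function and variable names are illustrative, not from the slides) that places each function into the bucket of its highest-ordered variable:

```python
# A minimal sketch of the partitioning step (names are illustrative, not from the slides).
# Each function is represented only by its scope: the set of variables it mentions.
# ordering[i] is the i-th variable in the elimination ordering.

def partition_into_buckets(ordering, scopes):
    """Place every function into the bucket of its highest-ordered variable."""
    index = {var: i for i, var in enumerate(ordering)}
    buckets = {var: [] for var in ordering}
    for scope in scopes:
        highest = max(scope, key=lambda v: index[v])  # highest-indexed variable in the scope
        buckets[highest].append(scope)
    return buckets

# Toy chain network A -> B -> C with CPT scopes {A}, {A, B}, {B, C}.
print(partition_into_buckets(["A", "B", "C"], [{"A"}, {"A", "B"}, {"B", "C"}]))
# {'A': [{'A'}], 'B': [{'A', 'B'}], 'C': [{'B', 'C'}]}
```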

Algorithms Using Bucket Elimination
- Belief Assessment
- Most Probable Explanation (MPE)
- Maximum A Posteriori Hypothesis (MAP)
- Maximum Expected Utility (MEU)

Belief Assessment: Definition
- Given a set of evidence, compute the posterior probability of all the variables.
- The belief assessment task of X_k = x_k is to find
  bel(x_k) = P(x_k | e) = α · Σ_{x_1,…,x_{k-1},x_{k+1},…,x_n} Π_i P(x_i | x_pa(i), e),
  where α is a normalizing constant.
- In the Visit to Asia example, the belief assessment problem answers questions like: What is the probability that a person has tuberculosis, given that he/she has dyspnoea and has visited Asia recently?

Belief Assessment Overview
In reverse node ordering:
- Create the bucket function by multiplying all functions (given as tables) containing the current node.
- Perform variable elimination by summation over the current node.
- Place the newly created function table into the appropriate bucket.
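As an illustration of the summation-based elimination step, here is a small self-contained Python sketch (the factor representation and all names are my own, not from the slides) that multiplies the tables in a bucket and sums out the bucket's variable, shown on a toy two-node network A -> B:

```python
import itertools

# A factor is a pair (vars, table): vars is a tuple of variable names and table maps
# value tuples (ordered as in vars) to probabilities. Variables are binary for brevity.

def multiply(f, g):
    """Pointwise product of two factors over the union of their scopes."""
    fv, ft = f
    gv, gt = g
    vars_ = tuple(dict.fromkeys(fv + gv))  # union of scopes, preserving order
    table = {}
    for vals in itertools.product([0, 1], repeat=len(vars_)):
        asg = dict(zip(vars_, vals))
        table[vals] = ft[tuple(asg[v] for v in fv)] * gt[tuple(asg[v] for v in gv)]
    return vars_, table

def sum_out(f, var):
    """Eliminate `var` from factor f by summation."""
    fv, ft = f
    keep = tuple(v for v in fv if v != var)
    table = {}
    for vals, p in ft.items():
        key = tuple(v for v, name in zip(vals, fv) if name != var)
        table[key] = table.get(key, 0.0) + p
    return keep, table

# Bucket of A for the query P(B): multiply P(A) and P(B|A), then sum out A.
p_a  = (("A",),     {(0,): 0.6, (1,): 0.4})
p_ba = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
print(sum_out(multiply(p_a, p_ba), "A"))   # ≈ (('B',), {(0,): 0.62, (1,): 0.38})
```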

Most Probable Explanation (MPE): Definition
- Given evidence, find the maximum-probability assignment to the remaining variables.
- The MPE task is to find an assignment x^o = (x^o_1, …, x^o_n) such that
  P(x^o) = max_x Π_i P(x_i | x_pa(i), e).

Differences from Belief Assessment
- Replace sums with max.
- Keep track of the maximizing value at each stage.
- A "forward step" determines the maximizing assignment tuple.

Elimination Algorithm for Most Probable Explanation (backward part)
[Worked example over the eight variables of the Visit to Asia network; the variable symbols were lost in transcription.]
- Each of the eight buckets holds the CPTs P(· | ·) that mention its variable; the evidence variable is instantiated to "no".
- Processing a bucket maximizes its variable out of the product of the functions it contains, generating a new function H(·) that is placed in the bucket of the highest remaining variable in its scope.
- Finding the MPE = max over all variables of the product of all CPTs; in general, bucket n computes H_n(u) = max_{x_n} Π_{C ∈ F_n} C(x_n | x_pa), where F_n is the set of functions in bucket n.
- The last bucket processed yields the MPE probability.

Elimination Algorithm for Most Probable Explanation (forward part)
[Worked example continued; the variable symbols were lost in transcription.]
- Variables are assigned in the node ordering: each variable is set to the arg max of the product of the original CPTs and the H functions in its bucket, evaluated at the values already chosen for earlier variables.
- The evidence variable keeps its observed value ("no").
- Return the complete maximizing assignment tuple.

MPE Overview
In reverse node ordering:
- Create the bucket function by multiplying all functions (given as tables) containing the current node.
- Perform variable elimination by maximization over the current node, recording the maximizing state function.
- Place the newly created function table into the appropriate bucket.
In forward node ordering:
- Recover the maximizing assignment using the recorded maximizing state functions; the maximum probability itself is produced by the last bucket of the backward pass.
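To show how the maximization step and the forward step fit together, here is a minimal self-contained Python sketch on a toy two-node chain A -> B (all numbers and names are illustrative, not from the slides):

```python
# Backward pass: eliminate B by maximization, recording argmax_B for each value of A.
# Forward pass: pick the best A, then read off the recorded best value of B.

p_a  = {0: 0.6, 1: 0.4}                                      # P(A)
p_ba = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}  # P(B | A), keyed by (A, B)

# Bucket of B: H(A) = max_B P(B | A), together with the maximizing state of B.
h_a, argmax_b = {}, {}
for a in (0, 1):
    best_b = max((0, 1), key=lambda b: p_ba[(a, b)])
    argmax_b[a] = best_b
    h_a[a] = p_ba[(a, best_b)]

# Bucket of A: max_A P(A) * H(A) is the MPE probability.
best_a = max((0, 1), key=lambda a: p_a[a] * h_a[a])
mpe_prob = p_a[best_a] * h_a[best_a]

# Forward step: recover the full maximizing assignment.
assignment = {"A": best_a, "B": argmax_b[best_a]}
print(mpe_prob, assignment)   # ≈ 0.54, {'A': 0, 'B': 0}
```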

Maximum A Posteriori Hypothesis (MAP): Definition
- Given evidence, find an assignment to a subset of "hypothesis" variables that maximizes their probability.
- Given a set of hypothesis variables A = {A_1, …, A_k}, the MAP task is to find an assignment a^o = (a^o_1, …, a^o_k) such that
  P(a^o) = max_{a_1,…,a_k} Σ_{x_{k+1},…,x_n} Π_i P(x_i | x_pa(i), e).
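As a contrast with MPE, here is a minimal Python sketch on the same toy A -> B network (illustrative numbers, not from the slides) in which B is the hypothesis variable: the non-hypothesis variable is summed out first and the hypothesis variable is maximized last, which is why the hypothesis variables must come first in the ordering so that their buckets are processed at the end.

```python
# MAP on the toy chain A -> B with hypothesis variable B (illustrative numbers).
# Non-hypothesis variables are summed out first; hypothesis variables are maximized last,
# so they must appear first in the ordering (their buckets are processed at the end).

p_a  = {0: 0.6, 1: 0.4}                                      # P(A)
p_ba = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}  # P(B | A), keyed by (A, B)

# Bucket of A (processed first): sum out A, producing a function of B.
h_b = {b: sum(p_a[a] * p_ba[(a, b)] for a in (0, 1)) for b in (0, 1)}

# Bucket of B (processed last): maximize over the hypothesis variable.
map_b = max((0, 1), key=lambda b: h_b[b])
print(map_b, h_b[map_b])   # ≈ 0, 0.62
```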