

Slide 1: Why Making Bayesian Networks Bayesian Makes Sense
Dawn E. Holmes
Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106, USA

Slide 2: Subjective Probability
- Rational degrees of belief.
- Keynes's consensual rational degrees of belief.

Slide 3: What Is a Bayesian Network?
- A directed acyclic graph:
  - Nodes are variables (discrete or continuous).
  - Arcs indicate dependence between variables.
- Conditional probabilities (local distributions).
- Missing arcs imply conditional independence.
- Independencies + local distributions => specification of a joint distribution.
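The last point can be sketched concretely: the local distributions, combined via the network's independencies, determine every joint probability. A minimal sketch, using a hypothetical rain/sprinkler/wet-grass network with made-up probabilities:

```python
# Hypothetical Bayesian network: Rain -> WetGrass <- Sprinkler.
# All numbers are illustrative, not from the presentation.

P_rain = {True: 0.2, False: 0.8}            # P(Rain)
P_sprinkler = {True: 0.1, False: 0.9}       # P(Sprinkler), no arc from Rain
P_wet = {                                   # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    """P(Rain, Sprinkler, WetGrass) via the chain-rule factorisation
    implied by the missing arc between Rain and Sprinkler."""
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1.0 - p_w)

# The factorisation specifies a proper joint: the eight states sum to 1.
total = sum(joint(r, s, w)
            for r in (True, False)
            for s in (True, False)
            for w in (True, False))
```

The three small tables are the only inputs; every one of the eight joint probabilities falls out of the factorisation.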

Slide 4: Bayesian Networks
- In classical Bayesian network theory a prior distribution must be specified.
- When information is missing, we are able to find a minimally prejudiced prior distribution using MaxEnt.

Slide 5: A Simple Bayesian Network

Slide 6: Priors for Bayesian Networks
- Using frequentist probabilities results in a rigid network.
- The results obtained using Bayesian networks are only as good as their prior distribution.
- The maximum entropy formalism.

Slide 7: Maximum Entropy and the Principle of Insufficient Reason
- The principle of maximum entropy is a generalization of the principle of insufficient reason.
- It is capable of determining a probability distribution for any combination of partial knowledge and partial ignorance.
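The generalization can be illustrated numerically. With no constraints beyond normalization, MaxEnt recovers insufficient reason (the uniform distribution); adding partial knowledge, such as a known mean, tilts the distribution exponentially, with the Lagrange multiplier chosen so the constraint holds. A minimal sketch for a die, with a made-up mean constraint (the function name and setup are illustrative, not from the paper):

```python
import math

def maxent_die(mean=None, n=6):
    """MaxEnt distribution over outcomes 1..n.

    With no constraint, maximum entropy reduces to the principle of
    insufficient reason: the uniform distribution. With a mean
    constraint, the solution is p_k proportional to exp(-lam * k),
    where lam is the Lagrange multiplier enforcing the constraint.
    """
    if mean is None:                       # total ignorance -> uniform
        return [1.0 / n] * n

    def tilted_mean(lam):
        w = [math.exp(-lam * k) for k in range(1, n + 1)]
        z = sum(w)
        return sum(k * wk for k, wk in zip(range(1, n + 1), w)) / z

    # tilted_mean is decreasing in lam; solve by bisection.
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if tilted_mean(mid) > mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(-lam * k) for k in range(1, n + 1)]
    z = sum(w)
    return [wk / z for wk in w]
```

Calling `maxent_die()` returns the uniform distribution, while `maxent_die(mean=4.5)` returns an exponentially tilted distribution whose mean is 4.5: partial knowledge and partial ignorance handled by the same principle.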

Slide 8: Two Results
- An iterative algorithm for updating probabilities in a multivalued multiway tree is given.
- A Lagrange multiplier technique is used to find the probability of an arbitrary state in a Bayesian tree using only MaxEnt.
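The slide does not spell out the iterative algorithm, but a standard scheme in the same spirit is iterative proportional fitting: repeatedly rescale a joint table to match given marginal constraints, which converges to the minimally prejudiced (MaxEnt-style) distribution consistent with them. A hedged sketch, not the paper's algorithm, with made-up numbers:

```python
# Iterative proportional fitting (IPF) for a 2-D joint table: a
# standard iterative update, shown here only as an analogue of the
# kind of probability-updating scheme the slide refers to.

def ipf(joint, row_marg, col_marg, iters=100):
    """Rescale `joint` so its row/column sums match the targets."""
    p = [row[:] for row in joint]
    for _ in range(iters):
        # Scale each row to its target marginal.
        for i, target in enumerate(row_marg):
            s = sum(p[i])
            if s > 0:
                p[i] = [x * target / s for x in p[i]]
        # Scale each column to its target marginal.
        for j, target in enumerate(col_marg):
            s = sum(row[j] for row in p)
            if s > 0:
                for row in p:
                    row[j] *= target / s
    return p

# Start from ignorance (uniform) and impose illustrative marginals.
start = [[0.25, 0.25], [0.25, 0.25]]
fitted = ipf(start, row_marg=[0.7, 0.3], col_marg=[0.6, 0.4])
```

Starting from the uniform table keeps the result as non-committal as the constraints allow, which is exactly the minimally-prejudiced-prior idea of the earlier slides.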

Slide 9: Fragment of State Table

Slide 10: A Simple Bayesian Network

Slide 11: A Simple BN with Maximum Entropy

Slide 12: Maximum Entropy in Bayesian Networks
- Maximum entropy provides a technique for eliciting knowledge from incomplete information.
- We use the maximum entropy formalism to optimally estimate the prior distribution of a Bayesian network.
- All and only the information provided by expert knowledge is used.

Slide 13: What Use Are Subjective Bayesian Prior Distributions?
- Why determine the prior distribution for a Bayesian network using maximum entropy?
- Any problem involving probabilities can be represented by a Bayesian network.

Slide 14: Independence
- Proofs must not use techniques outside of MaxEnt.
- Proofs have already been given elsewhere.

Slide 15: Thank You!
