Bayesian Networks Chapter 14 Section 1, 2, 4

Bayesian networks
A simple, graphical notation for conditional independence assertions and hence for compact specification of full joint distributions.
Syntax:
- a set of nodes, one per variable
- a directed, acyclic graph (link ≈ "directly influences"); if there is a link from X to Y, X is said to be a parent of Y
- a conditional distribution for each node given its parents: P(Xi | Parents(Xi))
In the simplest case, the conditional distribution is represented as a conditional probability table (CPT) giving the distribution over Xi for each combination of parent values.
The topology of the network specifies the conditional independence relationships that hold in the domain. The intuitive meaning of an arrow is typically that X has a direct influence on Y, which suggests that causes should be parents of effects. Once the topology is laid out, we need only specify a conditional probability distribution for each variable given its parents. This suffices to specify the full joint distribution for all the variables.
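To make the syntax concrete, here is a minimal Python sketch of a network as a set of nodes, each with a parent list and a CPT. The dictionary layout and the helper name prob are illustrative choices, not from the slides; the CPT values are the Cavity/Toothache/PCatch numbers that appear later in these notes.

```python
# Illustrative encoding: each node stores its parents and a CPT keyed by a
# tuple of parent values, giving P(node = true | parent values).
network = {
    "Cavity":    {"parents": [],         "cpt": {(): 0.2}},
    "Toothache": {"parents": ["Cavity"], "cpt": {(True,): 0.6, (False,): 0.1}},
    "PCatch":    {"parents": ["Cavity"], "cpt": {(True,): 0.9, (False,): 0.02}},
}

def prob(net, var, value, assignment):
    """Return P(var = value | parents(var)) under the given full assignment."""
    key = tuple(assignment[p] for p in net[var]["parents"])
    p_true = net[var]["cpt"][key]
    return p_true if value else 1.0 - p_true
```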

Example
Topology of the network encodes conditional independence assertions:
- Weather is independent of the other variables
- Toothache and Catch are conditionally independent given Cavity
Formally, the conditional independence of Toothache and Catch given Cavity is indicated by the absence of a link between Toothache and Catch. Cavity is a direct cause of Toothache and Catch, and no direct causal relationship between Toothache and Catch exists.

Example
I'm at work; neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes the alarm is set off by minor earthquakes. Is there a burglar?
Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls
Network topology reflects "causal" knowledge:
- A burglar can set the alarm off
- An earthquake can set the alarm off
- The alarm can cause Mary to call
- The alarm can cause John to call
John usually calls when the alarm rings, but sometimes confuses the telephone ringing with the alarm. Mary sometimes misses the alarm because she listens to very loud music.

Example contd.
First consider the topology of the network. Notice that there are no nodes corresponding to Mary's listening to music or to the telephone ringing and confusing John. These factors are summarized in the uncertainty associated with the links from Alarm to JohnCalls and MaryCalls. This is a form of laziness: the probabilities summarize an infinite set of circumstances in which the alarm might fail to go off or John or Mary might fail to call to report it.
Conditional distributions: each distribution is shown as a conditional probability table (CPT). Each row in a CPT contains the conditional probability of each node value for a conditioning case, i.e., a possible combination of values for the parent nodes. Each row must sum to 1. (With Boolean variables, usually only the probability of the true value is shown, since the probability of false is just one minus that.) A node with no parents has only one row, representing the prior probabilities of each possible value of the variable.

Compactness
A CPT for a Boolean variable Xi with k Boolean parents has 2^k rows, one for each combination of parent values. Each row requires one number p for Xi = true (the number for Xi = false is just 1 - p). If each variable has no more than k parents, the complete network requires O(n · 2^k) numbers, i.e., it grows linearly with n, vs. O(2^n) for the full joint distribution. For the burglary net, 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 - 1 = 31).
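As a quick arithmetic check of these counts, a few lines of Python (the parent counts are read off the burglary network described later in these notes):

```python
# Each Boolean node with k Boolean parents needs 2**k numbers.
parents_per_node = {"Burglary": 0, "Earthquake": 0, "Alarm": 2,
                    "JohnCalls": 1, "MaryCalls": 1}
bn_numbers = sum(2 ** k for k in parents_per_node.values())   # 1 + 1 + 4 + 2 + 2
joint_numbers = 2 ** len(parents_per_node) - 1                # 2**5 - 1
print(bn_numbers, joint_numbers)                              # 10 31
```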

Semantics
The full joint distribution is defined as the product of the local conditional distributions:
P(X1, …, Xn) = ∏i=1..n P(Xi | Parents(Xi))
Thus each entry in the joint distribution is represented by the product of the appropriate elements of the conditional probability tables in the Bayesian network. For example, for the burglary network introduced below, P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = 0.90 × 0.70 × 0.001 × 0.999 × 0.998 ≈ 0.000628.
There are two ways of understanding the semantics of a Bayes network: (1) see the network as a representation of the joint probability distribution; (2) see it as an encoding of a collection of conditional independence statements.

Back to the dentist example ...
We now represent the world of the dentist D using three propositions: Cavity, Toothache, and PCatch. D's belief state consists of 2^3 = 8 states, each with some probability: {cavity ∧ toothache ∧ pcatch, ¬cavity ∧ toothache ∧ pcatch, cavity ∧ ¬toothache ∧ pcatch, ...}

The belief state is defined by the full joint probability of the propositions:

                toothache              ¬toothache
            pcatch   ¬pcatch        pcatch   ¬pcatch
  cavity    0.108    0.012          0.072    0.008
  ¬cavity   0.016    0.064          0.144    0.576

Probabilistic Inference
Using the joint table above: P(cavity ∨ toothache) = 0.108 + 0.012 + ... = 0.28

Probabilistic Inference
Using the joint table above: P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2

Probabilistic Inference
Marginalization: P(c) = ∑t ∑pc P(c ∧ t ∧ pc), using the conventions that c = cavity or ¬cavity, ∑t is the sum over t ∈ {toothache, ¬toothache}, and ∑pc is the sum over pc ∈ {pcatch, ¬pcatch}.
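A small sketch of this marginalization in Python; the joint dictionary is just a transcription of the table above, keyed by (cavity, toothache, pcatch).

```python
from itertools import product

# Full joint distribution from the table above, keyed by (cavity, toothache, pcatch).
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

# Marginalization: P(cavity) = sum over t and pc of P(cavity, t, pc).
p_cavity = sum(joint[(True, t, pc)] for t, pc in product([True, False], repeat=2))
print(round(p_cavity, 3))   # 0.2
```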

Conditional Probability
P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)
P(A|B) is the posterior probability of A given B.

Using the joint table above:
P(cavity | toothache) = P(cavity ∧ toothache) / P(toothache) = (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064) = 0.6
Interpretation: after observing Toothache, the patient is no longer an "average" one, and the prior probability of Cavity is no longer valid.
P(cavity | toothache) is calculated by keeping the ratios of the probabilities of the 4 cases unchanged, and normalizing their sum to 1.

Using the joint table above:
P(cavity | toothache) = P(cavity ∧ toothache) / P(toothache) = (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064) = 0.6
P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache) = (0.016 + 0.064) / (0.108 + 0.012 + 0.016 + 0.064) = 0.4
More compactly, with α the normalization constant:
P(C | toothache) = α P(C ∧ toothache) = α ∑pc P(C ∧ toothache ∧ pc) = α [(0.108, 0.016) + (0.012, 0.064)] = α (0.12, 0.08) = (0.6, 0.4)
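The same conditioning-by-normalization step, sketched in Python (this reuses the joint dictionary from the earlier sketch):

```python
# P(C | toothache) = alpha * P(C, toothache): sum out pcatch, then normalize.
unnormalized = {c: sum(joint[(c, True, pc)] for pc in [True, False])
                for c in [True, False]}
alpha = 1.0 / sum(unnormalized.values())
posterior = {c: round(alpha * p, 3) for c, p in unnormalized.items()}
print(posterior)   # {True: 0.6, False: 0.4}
```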

Conditional Probability
P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)
P(A ∧ B ∧ C) = P(A|B,C) P(B ∧ C) = P(A|B,C) P(B|C) P(C)
P(Cavity) = ∑t ∑pc P(Cavity ∧ t ∧ pc) = ∑t ∑pc P(Cavity | t, pc) P(t ∧ pc)
P(c) = ∑t ∑pc P(c ∧ t ∧ pc) = ∑t ∑pc P(c | t, pc) P(t ∧ pc)

Independence
Two random variables A and B are independent if P(A ∧ B) = P(A) P(B), hence if P(A|B) = P(A).
Two random variables A and B are independent given C if P(A ∧ B | C) = P(A|C) P(B|C), hence if P(A|B,C) = P(A|C).

Issues
If a state is described by n propositions, then a belief state contains 2^n states (possibly, some have probability 0).
Modeling difficulty: many numbers must be entered in the first place.
Computational issue: memory size and time.

In the joint table above, toothache and pcatch are independent given cavity (or ¬cavity), but this relation is hidden in the numbers! [Verify this]
Bayesian networks explicitly represent independence among propositions to reduce the number of probabilities defining a belief state.
Verification: they are independent if P(toothache ∧ pcatch | cavity) = P(toothache | cavity) · P(pcatch | cavity). From the table, P(toothache ∧ pcatch ∧ cavity) / P(cavity) = 0.108 / 0.2 = 0.54, and P(toothache | cavity) · P(pcatch | cavity) = 0.6 × 0.9 = 0.54.
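The [Verify this] step can also be checked numerically against the joint dictionary from the earlier sketch:

```python
# Check: P(toothache, pcatch | cavity) == P(toothache | cavity) * P(pcatch | cavity)?
p_c   = sum(v for (c, t, pc), v in joint.items() if c)                 # P(cavity) = 0.2
lhs   = joint[(True, True, True)] / p_c                                # 0.108 / 0.2
p_t_c = sum(v for (c, t, pc), v in joint.items() if c and t) / p_c     # = 0.6
p_p_c = sum(v for (c, t, pc), v in joint.items() if c and pc) / p_c    # = 0.9
print(round(lhs, 3), round(p_t_c * p_p_c, 3))   # 0.54 0.54
```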

Bayesian Network
Notice that Cavity is the "cause" of both Toothache and PCatch, and represent these causal links explicitly.
Give the prior probability distribution of Cavity and the conditional probability tables of Toothache and PCatch:

  P(cavity) = 0.2
  P(toothache | cavity) = 0.6     P(toothache | ¬cavity) = 0.1
  P(pcatch | cavity) = 0.9        P(pcatch | ¬cavity) = 0.02

You can get these conditional probabilities from the full joint distribution table: P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2, and P(toothache | cavity) = P(toothache ∧ cavity) / P(cavity) = 0.12 / 0.2 = 0.6.
5 probabilities, instead of 7.

A More Complex BN
A directed acyclic graph with nodes Burglary, Earthquake, Alarm, JohnCalls, and MaryCalls; causes (Burglary, Earthquake) appear above their effects. Intuitive meaning of an arc from X to Y: "X has a direct influence on Y".

A More Complex BN

  P(B) = 0.001        P(E) = 0.002

  P(A | B, E):  B=T, E=T: 0.95    B=T, E=F: 0.94    B=F, E=T: 0.29    B=F, E=F: 0.001
  P(J | A):     A=T: 0.90    A=F: 0.05
  P(M | A):     A=T: 0.70    A=F: 0.01

Size of the CPT for a node with k parents: 2^k.
10 probabilities, instead of 31.
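For later sketches, the burglary network's CPTs can be written in the same dictionary format used earlier (values copied from the tables above):

```python
burglary_net = {
    "Burglary":   {"parents": [], "cpt": {(): 0.001}},
    "Earthquake": {"parents": [], "cpt": {(): 0.002}},
    "Alarm":      {"parents": ["Burglary", "Earthquake"],
                   "cpt": {(True, True): 0.95, (True, False): 0.94,
                           (False, True): 0.29, (False, False): 0.001}},
    "JohnCalls":  {"parents": ["Alarm"], "cpt": {(True,): 0.90, (False,): 0.05}},
    "MaryCalls":  {"parents": ["Alarm"], "cpt": {(True,): 0.70, (False,): 0.01}},
}
```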

What does the BN encode?
Each of the beliefs JohnCalls and MaryCalls is independent of Burglary and Earthquake given Alarm or ¬Alarm. For example, John does not observe any burglaries directly.

What does the BN encode?
A node is independent of its non-descendants given its parents. The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm. For instance, the reasons why John and Mary may not call if there is an alarm are unrelated.

Conditional Independence of non-descendants
A node X is conditionally independent of its non-descendants (the nodes Z_ij in the figure) given its parents (the nodes U_i shown in the gray area).

Markov Blanket
A node X is conditionally independent of all other nodes in the network given its parents, children, and children's parents (its Markov blanket). This tells us that we only need to know a limited amount of information in order to do calculations involving X.
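A small sketch of reading the Markov blanket off the network structure, using the burglary_net dictionary above (the function name is illustrative):

```python
def markov_blanket(net, x):
    """Parents, children, and children's other parents of node x."""
    blanket = set(net[x]["parents"])
    for node, info in net.items():
        if x in info["parents"]:
            blanket.add(node)                                     # a child of x
            blanket.update(p for p in info["parents"] if p != x)  # child's other parents
    return blanket

print(markov_blanket(burglary_net, "Alarm"))
# {'Burglary', 'Earthquake', 'JohnCalls', 'MaryCalls'} (set order may vary)
```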

Locally Structured World
A world is locally structured (or sparse) if each of its components interacts directly with relatively few other components. In a sparse world, the CPTs are small and the BN contains many fewer probabilities than the full joint distribution. If the number of entries in each CPT is bounded, i.e., O(1), then the number of probabilities in a BN is linear in n (the number of propositions), instead of 2^n for the joint distribution.

But does a BN represent a belief state? In other words, can we compute the full joint distribution of the propositions from it?

Calculation of Joint Probability
Using the CPTs of the burglary network above: P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = ??

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
= P(J ∧ M | A, ¬B, ¬E) · P(A ∧ ¬B ∧ ¬E)
= P(J | A, ¬B, ¬E) · P(M | A, ¬B, ¬E) · P(A ∧ ¬B ∧ ¬E)   (J and M are independent given A)
P(J | A, ¬B, ¬E) = P(J | A)   (J and ¬B ∧ ¬E are independent given A)
P(M | A, ¬B, ¬E) = P(M | A)
P(A ∧ ¬B ∧ ¬E) = P(A | ¬B, ¬E) · P(¬B | ¬E) · P(¬E) = P(A | ¬B, ¬E) · P(¬B) · P(¬E)   (¬B and ¬E are independent)
Therefore P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J | A) P(M | A) P(A | ¬B, ¬E) P(¬B) P(¬E)

Calculation of Joint Probability
Using the CPTs of the burglary network above:
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J | A) P(M | A) P(A | ¬B, ¬E) P(¬B) P(¬E) = 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.000628
In general, P(x1 ∧ x2 ∧ … ∧ xn) = ∏i=1..n P(xi | parents(Xi)), which is exactly the full joint distribution table.
Since a BN defines the full joint distribution of a set of propositions, it represents a belief state.
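The product formula can be applied mechanically to the burglary_net dictionary encoded earlier; a sketch (the helper name joint_prob is illustrative):

```python
def joint_prob(net, assignment):
    """P(x1, ..., xn) = product over i of P(xi | parents(Xi)) for a full assignment."""
    result = 1.0
    for var, value in assignment.items():
        key = tuple(assignment[p] for p in net[var]["parents"])
        p_true = net[var]["cpt"][key]
        result *= p_true if value else 1.0 - p_true
    return result

event = {"Burglary": False, "Earthquake": False, "Alarm": True,
         "JohnCalls": True, "MaryCalls": True}
print(round(joint_prob(burglary_net, event), 6))   # ~ 0.000628
```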

Querying the BN
The BN gives P(t | c). What about P(c | t)?
P(cavity | t) = P(cavity ∧ t) / P(t) = P(t | cavity) P(cavity) / P(t)   [Bayes' rule]
P(c | t) = α P(t | c) P(c)
Querying a BN is just applying Bayes' rule on a larger scale.
For the two-node network on this slide (Cavity → Toothache): P(C) = 0.1, P(t | c) = 0.4, P(t | ¬c) = 0.01111.
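The same query worked through in a couple of lines, using the numbers from the two-node network on this slide:

```python
# P(c | t) = alpha * P(t | c) * P(c), with P(c) = 0.1, P(t|c) = 0.4, P(t|~c) = 0.01111.
p_c = 0.1
p_t_given_c = {True: 0.4, False: 0.01111}
unnorm = {c: p_t_given_c[c] * (p_c if c else 1.0 - p_c) for c in [True, False]}
alpha = 1.0 / sum(unnorm.values())
print({c: round(alpha * v, 3) for c, v in unnorm.items()})   # {True: 0.8, False: 0.2}
```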

Exact Inference in Bayesian Networks
Let's generalize that last example a little. Suppose we are given that JohnCalls and MaryCalls are both true; what is the probability distribution for Burglary?
P(Burglary | JohnCalls = true, MaryCalls = true)
Look back at using the full joint distribution for this purpose: summing over the hidden variables.

Inference by enumeration (example in the textbook) – figure 14.8
P(X | e) = α P(X, e) = α ∑y P(X, e, y)
P(B | j, m) = α P(B, j, m) = α ∑e ∑a P(B, e, a, j, m)
P(b | j, m) = α ∑e ∑a P(b) P(e) P(a | b, e) P(j | a) P(m | a)
P(b | j, m) = α P(b) ∑e P(e) ∑a P(a | b, e) P(j | a) P(m | a)
P(B | j, m) = α <0.00059224, 0.0014919>
P(B | j, m) ≈ <0.284, 0.716>
Go back and look at what would happen if you used this method for the previous example (toothache).
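A sketch of the enumeration algorithm itself, in the style of the textbook's ENUMERATION-ASK / ENUMERATE-ALL, assuming the burglary_net dictionary from the earlier sketch and a topological variable ordering:

```python
def enumerate_all(variables, evidence, net):
    """Sum of products of CPT entries over all unassigned variables."""
    if not variables:
        return 1.0
    first, rest = variables[0], variables[1:]

    def cpt(value, assignment):
        key = tuple(assignment[p] for p in net[first]["parents"])
        p_true = net[first]["cpt"][key]
        return p_true if value else 1.0 - p_true

    if first in evidence:
        return cpt(evidence[first], evidence) * enumerate_all(rest, evidence, net)
    return sum(cpt(v, {**evidence, first: v}) *
               enumerate_all(rest, {**evidence, first: v}, net)
               for v in [True, False])

def enumeration_ask(X, evidence, net, order):
    """Posterior P(X | evidence) by enumeration, then normalization."""
    unnorm = {v: enumerate_all(order, {**evidence, X: v}, net) for v in [True, False]}
    alpha = 1.0 / sum(unnorm.values())
    return {v: alpha * p for v, p in unnorm.items()}

order = ["Burglary", "Earthquake", "Alarm", "JohnCalls", "MaryCalls"]
print(enumeration_ask("Burglary", {"JohnCalls": True, "MaryCalls": True},
                      burglary_net, order))
# ~ {True: 0.284, False: 0.716}
```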

Enumeration-Tree Calculation

Inference by enumeration (another way of looking at it) – figure 14.8
P(X | e) = α P(X, e) = α ∑y P(X, e, y)
P(B | j, m) = α P(B, j, m) = α ∑e ∑a P(B, e, a, j, m)
P(B | j, m) = α [P(B, e, a, j, m) + P(B, e, ¬a, j, m) + P(B, ¬e, a, j, m) + P(B, ¬e, ¬a, j, m)]
P(B | j, m) = α <0.00059224, 0.0014919>
P(B | j, m) ≈ <0.284, 0.716>
Go back and look at what would happen if you used this method for the previous example (toothache).

Constructing Bayesian networks
How do we construct a Bayesian network in such a way that the resulting joint distribution is a good representation of a given domain?
1. Choose an ordering of variables X1, …, Xn such that root causes come first in the order, then the variables they influence, and so forth.
2. For i = 1 to n, select parents from X1, …, Xi-1 such that P(Xi | Parents(Xi)) = P(Xi | X1, …, Xi-1).
Note: the parents of a node are all of the nodes that influence it. In this way, each node is conditionally independent of its predecessors in the ordering, given its parents.
P(X1, …, Xn) = ∏i=1..n P(Xi | X1, …, Xi-1)   (chain rule)
             = ∏i=1..n P(Xi | Parents(Xi))   (by construction)

Example – How important is the ordering?
Suppose we choose the ordering M, J, A, B, E.
P(J | M) = P(J)?
Adding MaryCalls: no parents. Adding JohnCalls: if Mary calls, that probably means the alarm has gone off, which of course would make it more likely that John calls. Thus, JohnCalls needs to have MaryCalls as a parent.

Example
Suppose we choose the ordering M, J, A, B, E.
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)?
Adding Alarm: clearly, if both call, it is more likely that the alarm has gone off than if just one or neither calls. So Alarm needs both MaryCalls and JohnCalls as parents; it is not conditionally independent of them.

Example
Suppose we choose the ordering M, J, A, B, E.
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? P(B | A, J, M) = P(B)?
Adding Burglary: if we know the alarm state, then the call from John or Mary might give us information about our phone ringing or Mary's music, but not about burglary. So we only need Alarm as the parent.

Example
Suppose we choose the ordering M, J, A, B, E.
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? Yes
P(B | A, J, M) = P(B)? No
P(E | B, A, J, M) = P(E | A)? P(E | B, A, J, M) = P(E | A, B)?
Adding Earthquake: if the alarm is on, it is more likely that there has been an earthquake. But if we know that there has been a burglary, then that explains the alarm, and the probability of an earthquake would be only slightly above normal. Hence, we need both Alarm and Burglary as parents.

Example
Suppose we choose the ordering M, J, A, B, E.
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? Yes
P(B | A, J, M) = P(B)? No
P(E | B, A, J, M) = P(E | A)? No
P(E | B, A, J, M) = P(E | A, B)? Yes

Example contd.
Deciding conditional independence is hard in noncausal directions. (Causal models and conditional independence seem hardwired for humans!)
The network is less compact: 1 + 2 + 4 + 2 + 4 = 13 numbers needed.

Summary
Bayesian networks provide a natural representation for (causally induced) conditional independence.
Topology + CPTs = compact representation of the joint distribution.
Generally easy for domain experts to construct.