Probabilistic Reasoning [Ch. 14]
Bayes Networks – Part 1
◦ Syntax
◦ Semantics
◦ Parameterized distributions
Inference – Part 2
◦ Exact inference by enumeration
◦ Exact inference by variable elimination
◦ Approximate inference by stochastic simulation
◦ Approximate inference by Markov chain Monte Carlo

Bayesian networks
A simple, graphical notation for conditional independence assertions and hence for compact specification of full joint distributions.
Syntax:
◦ a set of nodes, one per variable
◦ a directed, acyclic graph (link ≈ "directly influences")
◦ a conditional distribution for each node given its parents: P(X_i | Parents(X_i))
In the simplest case, the conditional distribution is represented as a conditional probability table (CPT) giving the distribution over X_i for each combination of parent values.
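To make the syntax concrete, here is a minimal Python sketch (not from the slides) of one possible representation: each node stores its parent list and a CPT mapping parent-value tuples to P(X = true). The Rain/Traffic variables and all numbers are made up for illustration.

```python
# A Bayes net as plain Python data: for each Boolean variable, its parent list
# and a CPT mapping parent-value tuples to P(variable = True | parents).
# All numbers here are hypothetical.
bayes_net = {
    "Rain":    {"parents": [],       "cpt": {(): 0.1}},
    "Traffic": {"parents": ["Rain"], "cpt": {(True,): 0.8, (False,): 0.2}},
}

def p_true(net, var, assignment):
    """P(var = True | values of var's parents taken from assignment)."""
    parent_values = tuple(assignment[p] for p in net[var]["parents"])
    return net[var]["cpt"][parent_values]

print(p_true(bayes_net, "Traffic", {"Rain": True}))   # 0.8
```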

Bayes’ Nets
A Bayes’ net is an efficient encoding of a probabilistic model of a domain.
Questions we can ask:
◦ Inference: given a fixed BN, what is P(X | e)?
◦ Representation: given a BN graph, what kinds of distributions can it encode?
◦ Modeling: what BN is most appropriate for a given domain?

Bayes’ Net Semantics
Let’s formalize the semantics of a Bayes’ net:
◦ A set of nodes, one per variable X
◦ A directed, acyclic graph
◦ A conditional distribution for each node: a collection of distributions over X, one for each combination of parents’ values, stored as a CPT (conditional probability table); a description of a noisy “causal” process
A Bayes net = Topology (graph) + Local Conditional Probabilities

Topology Limits Distributions
Given some graph topology G, only certain joint distributions can be encoded.
The graph structure guarantees certain (conditional) independences (there might be more independence than the graph shows).
Adding arcs increases the set of distributions that can be encoded, but has several costs.
A fully connected graph (full conditioning) can encode any distribution.

Independence in a BN
Important question about a BN:
◦ Are two nodes independent given certain evidence?
◦ If yes, we can prove it using algebra (tedious in general)
◦ If no, we can prove it with a counterexample
Example: X → Y → Z
Question: are X and Z necessarily independent? Answer: no. Example: low pressure causes rain, which causes traffic. X can influence Z, and Z can influence X (via Y).
Addendum: they could still be independent. How?

Causal Chains
This configuration is a “causal chain”: X → Y → Z
◦ X: Low pressure
◦ Y: Rain
◦ Z: Traffic
Is X independent of Z given Y? Yes!
Evidence along the chain “blocks” the influence.
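As a quick numeric sanity check of this claim (a sketch with made-up CPT values, not part of the original slides), we can build the joint for a chain X → Y → Z and confirm that P(Z | X, Y) does not depend on X:

```python
# Hypothetical CPTs for the chain X -> Y -> Z (all values are made up).
p_x = {True: 0.3, False: 0.7}
p_y_given_x = {True: 0.9, False: 0.2}   # P(Y = True | X = x)
p_z_given_y = {True: 0.8, False: 0.1}   # P(Z = True | Y = y)

def joint(x, y, z):
    """Joint probability P(x, y, z) as the product of the chain's CPTs."""
    py = p_y_given_x[x] if y else 1 - p_y_given_x[x]
    pz = p_z_given_y[y] if z else 1 - p_z_given_y[y]
    return p_x[x] * py * pz

def p_z_given(y, x):
    """P(Z = True | X = x, Y = y), computed from the joint by normalization."""
    num = joint(x, y, True)
    den = joint(x, y, True) + joint(x, y, False)
    return num / den

# Once Y is fixed, X carries no further information about Z:
print(p_z_given(y=True, x=True), p_z_given(y=True, x=False))   # both 0.8
```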

Common Cause
Another basic configuration: two effects of the same cause, X ← Y → Z
◦ Y: Project due
◦ X: Newsgroup busy
◦ Z: Lab full
Are X and Z independent? Not in general.
Are X and Z independent given Y? Yes!
Observing the cause blocks influence between the effects.

Common Effect
Last configuration: two causes of one effect (a v-structure), X → Y ← Z
◦ X: Raining
◦ Z: Ballgame
◦ Y: Traffic
Are X and Z independent? Yes: the ballgame and the rain cause traffic, but they are not correlated. (Proving that they must be independent takes a little algebra; try it!)
Are X and Z independent given Y? No: seeing traffic puts the rain and the ballgame in competition as explanations.
This is backwards from the other two cases: observing an effect activates influence between its possible causes.
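The "explaining away" behaviour can also be checked numerically. The sketch below (hypothetical CPT numbers, not from the slides) builds the joint for Rain → Traffic ← Ballgame and shows that observing traffic raises the probability of rain, while additionally learning that there was a ballgame lowers it again:

```python
from itertools import product

# Hypothetical numbers for the v-structure R -> T <- B.
p_r = 0.3                      # P(Rain)
p_b = 0.2                      # P(Ballgame)
p_t = {                        # P(Traffic = True | Rain, Ballgame)
    (True, True): 0.95, (True, False): 0.8,
    (False, True): 0.7, (False, False): 0.1,
}

def joint(r, b, t):
    """Product of the three local distributions."""
    pt = p_t[(r, b)] if t else 1 - p_t[(r, b)]
    return (p_r if r else 1 - p_r) * (p_b if b else 1 - p_b) * pt

def prob_rain(evidence):
    """P(Rain = True | evidence), where evidence is a dict over {'B', 'T'}."""
    num = den = 0.0
    for r, b, t in product([True, False], repeat=3):
        if any(evidence.get(k) is not None and v != evidence[k]
               for k, v in (("B", b), ("T", t))):
            continue                      # assignment inconsistent with evidence
        p = joint(r, b, t)
        den += p
        if r:
            num += p
    return num / den

print(prob_rain({"T": True}))              # ~0.62: traffic raises belief in rain
print(prob_rain({"T": True, "B": True}))   # ~0.37: the ballgame explains the traffic away
```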

The General Case
Any complex example can be analyzed using these three canonical cases.
General question: in a given BN, are two variables independent (given evidence)?
Solution: analyze the graph.

Reachability
Recipe: shade the evidence nodes.
Attempt 1: if two nodes are connected by an undirected path not blocked by a shaded node, then they are not guaranteed to be independent.
This almost works, but not quite.
◦ Where does it break?
◦ Answer: a v-structure (the one at T in the example graph) doesn’t count as a link in a path unless it is “active”.

Reachability (D-Separation)
Question: are X and Y conditionally independent given evidence variables {Z}?
◦ Yes, if X and Y are “separated” by Z
◦ Look for active paths from X to Y
◦ No active paths = independence!
A path is active if each triple along it is active:
◦ Causal chain X → B → Y where B is unobserved (either direction)
◦ Common cause X ← B → Y where B is unobserved
◦ Common effect (aka v-structure) X → B ← Y where B or one of its descendants is observed
All it takes to block a path is a single inactive segment.
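These three rules translate almost directly into code. A minimal sketch (assuming the graph is given as a {child: [parents]} dictionary; the function and variable names are mine, not the slides’):

```python
def descendants(node, parents):
    """All descendants of node in a DAG given as {child: [parents]}."""
    children = {v: [c for c, ps in parents.items() if v in ps] for v in parents}
    out, stack = set(), [node]
    while stack:
        for c in children[stack.pop()]:
            if c not in out:
                out.add(c)
                stack.append(c)
    return out

def triple_active(a, b, c, parents, evidence):
    """Is the consecutive triple a - b - c on a path active given the evidence set?"""
    if b in parents[a] and b in parents[c]:          # common cause: a <- b -> c
        return b not in evidence
    if a in parents[b] and c in parents[b]:          # common effect: a -> b <- c
        return b in evidence or bool(descendants(b, parents) & evidence)
    return b not in evidence                         # causal chain (either direction)

def path_active(path, parents, evidence):
    """A path is active iff every consecutive triple on it is active."""
    return all(triple_active(path[i], path[i + 1], path[i + 2], parents, evidence)
               for i in range(len(path) - 2))

# The rain / traffic / ballgame v-structure from the previous slides:
parents = {"R": [], "B": [], "T": ["R", "B"]}
print(path_active(["R", "T", "B"], parents, evidence=set()))   # False: v-structure inactive
print(path_active(["R", "T", "B"], parents, evidence={"T"}))   # True: observing T activates it
```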

Example
(The original slides work through d-separation examples on a small graph with nodes R, B, T, T', and D, marking which triples are active or inactive under different evidence sets; the graph figures are not reproduced here.)

Example
Variables:
◦ R: Raining
◦ T: Traffic
◦ D: Roof drips
◦ S: I’m sad
Questions: for various evidence sets, which pairs of these variables are conditionally independent? (The graph and the worked answers appear as figures in the original slides.)

Causality?
When Bayes’ nets reflect the true causal patterns:
◦ Often simpler (nodes have fewer parents)
◦ Often easier to think about
◦ Often easier to elicit from experts
BNs need not actually be causal:
◦ Sometimes no causal net exists over the domain
◦ E.g., consider the variables Traffic and Drips
◦ We end up with arrows that reflect correlation, not causation
What do the arrows really mean?
◦ Topology may happen to encode causal structure
◦ Topology is only guaranteed to encode conditional independence

Changing Bayes’ Net Structure
The same joint distribution can be encoded in many different Bayes’ nets; the causal structure tends to be the simplest.
Analysis question: given some edges, what other edges do you need to add?
◦ One answer: fully connect the graph
◦ Better answer: don’t make any false conditional independence assumptions

Example
Topology of the network encodes conditional independence assertions:
◦ Weather is independent of the other variables
◦ Toothache and Catch are conditionally independent, given Cavity

Example
I'm at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes the alarm is set off by minor earthquakes. Is there a burglar?
Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls
Network topology reflects “causal” knowledge:
◦ A burglar can set the alarm off
◦ An earthquake can set the alarm off
◦ The alarm can cause Mary to call
◦ The alarm can cause John to call

Example contd. Probabilities derived from prior observations

Compactness
A CPT for Boolean X_i with k Boolean parents has 2^k rows, one for each combination of parent values.
Each row requires one number p for X_i = true (the number for X_i = false is just 1 - p).
If each variable has no more than k parents, the complete network requires O(n · 2^k) numbers, i.e., it grows linearly with n, vs. O(2^n) for the full joint distribution.
For the burglary net: 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 - 1 = 31).
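As a quick check of this arithmetic (a small sketch, not from the slides), the counts follow directly from the number of parents of each node:

```python
# Number of parents for each node in the burglary network.
num_parents = {"Burglary": 0, "Earthquake": 0, "Alarm": 2, "JohnCalls": 1, "MaryCalls": 1}

cpt_numbers = sum(2 ** k for k in num_parents.values())   # one number per CPT row
full_joint = 2 ** len(num_parents) - 1                    # independent entries in the full joint

print(cpt_numbers, full_joint)   # 10 31
```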

Global semantics
“Global” semantics defines the full joint distribution as the product of the local conditional distributions:
P(x_1, ..., x_n) = Π_{i=1}^{n} P(x_i | parents(X_i))
e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e)
= P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998
≈ 0.00063
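The product is easy to evaluate in code. The sketch below hard-codes the textbook's CPT values for the burglary network (taken as given here) and computes the probability above:

```python
# CPTs for the burglary network, stored as P(variable = True | parent values).
P_B = 0.001
P_E = 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(A | B, E)
P_J = {True: 0.90, False: 0.05}                       # P(J | A)
P_M = {True: 0.70, False: 0.01}                       # P(M | A)

def joint(b, e, a, j, m):
    """Global semantics: the product of the local conditional probabilities."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return pb * pe * pa * pj * pm

print(joint(b=False, e=False, a=True, j=True, m=True))   # ~0.000628
```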

Local semantics
Each node is conditionally independent of its nondescendants given its parents.
Theorem: local semantics ⇔ global semantics

Markov blanket Each node is conditionally independent of all others given its Markov blanket: parents + children + children's parents
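Reading the Markov blanket off the graph takes only a few lines of code. A sketch, reusing the {child: [parents]} representation assumed earlier:

```python
def markov_blanket(node, parents):
    """Parents, children, and the children's other parents of a node."""
    children = [c for c, ps in parents.items() if node in ps]
    blanket = set(parents[node]) | set(children)
    for c in children:
        blanket |= set(parents[c])     # children's other parents
    blanket.discard(node)
    return blanket

# Burglary network from the earlier slides.
parents = {"Burglary": [], "Earthquake": [], "Alarm": ["Burglary", "Earthquake"],
           "JohnCalls": ["Alarm"], "MaryCalls": ["Alarm"]}
print(markov_blanket("Burglary", parents))   # {'Alarm', 'Earthquake'}
print(markov_blanket("Alarm", parents))      # all four other variables
```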

Constructing Bayesian networks
Need a method such that a series of locally testable assertions of conditional independence guarantees the required global semantics.
1. Choose an ordering of variables X_1, ..., X_n
2. For i = 1 to n:
   add X_i to the network
   select parents from X_1, ..., X_{i-1} such that P(X_i | Parents(X_i)) = P(X_i | X_1, ..., X_{i-1})
This choice of parents guarantees the global semantics:
P(X_1, ..., X_n) = Π_{i=1}^{n} P(X_i | X_1, ..., X_{i-1})   (chain rule)
                 = Π_{i=1}^{n} P(X_i | Parents(X_i))        (by construction)

Example: Problem formulation
I'm at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes the alarm is set off by minor earthquakes. Is there a burglar?
Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls
Network topology reflects “causal” knowledge:
◦ A burglar can set the alarm off
◦ An earthquake can set the alarm off
◦ The alarm can cause Mary to call
◦ The alarm can cause John to call

Example
Suppose we choose the ordering M, J, A, B, E.
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? Yes
P(B | A, J, M) = P(B)? No
P(E | B, A, J, M) = P(E | A)? No
P(E | B, A, J, M) = P(E | A, B)? Yes

Example contd.
Deciding conditional independence is hard in non-causal directions. (Causal models and conditional independence seem hardwired for humans!)
Assessing conditional probabilities is hard in non-causal directions.
The network is less compact: 1 + 2 + 4 + 2 + 4 = 13 numbers needed.

Example: Car diagnosis Initial evidence: car won't start Testable variables (green), “broken, so fix it” variables (orange) Hidden variables (gray) ensure sparse structure, reduce parameters

Example: Car insurance

Compact conditional distributions
A CPT grows exponentially with the number of parents, and becomes infinite with a continuous-valued parent or child.
Solution: canonical distributions that are defined compactly.
Deterministic nodes are the simplest case: X = f(Parents(X)) for some function f.
E.g., Boolean functions: NorthAmerican ⇔ Canadian ∨ US ∨ Mexican
E.g., numerical relationships among continuous variables: ∂Level/∂t = inflow + precipitation - outflow - evaporation

Compact conditional distributions contd.
Noisy-OR distributions model multiple noninteracting causes:
1) Parents U_1 ... U_k include all causes (can add a leak node)
2) Independent failure probability q_i for each cause alone
⇒ P(X | U_1 ... U_j, ¬U_{j+1} ... ¬U_k) = 1 - Π_{i=1}^{j} q_i
Number of parameters is linear in the number of parents.

Cold  Flu  Malaria | P(Fever) | P(¬Fever)
F     F    F       | 0.0      | 1.0
F     F    T       | 0.9      | 0.1
F     T    F       | 0.8      | 0.2
F     T    T       | 0.98     | 0.02 = 0.2 × 0.1
T     F    F       | 0.4      | 0.6
T     F    T       | 0.94     | 0.06 = 0.6 × 0.1
T     T    F       | 0.88     | 0.12 = 0.6 × 0.2
T     T    T       | 0.988    | 0.012 = 0.6 × 0.2 × 0.1
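The fever table above follows mechanically from the noisy-OR formula. A short sketch, assuming the per-cause inhibition probabilities implied by the table (q_Cold = 0.6, q_Flu = 0.2, q_Malaria = 0.1):

```python
from itertools import product

# Inhibition probability q_i = P(no fever | only cause i present).
q = {"Cold": 0.6, "Flu": 0.2, "Malaria": 0.1}

def p_fever(present):
    """Noisy-OR: fever fails only if every present cause is independently inhibited."""
    p_no_fever = 1.0
    for cause in present:
        p_no_fever *= q[cause]
    return 1 - p_no_fever

for combo in product([False, True], repeat=3):
    present = [c for c, on in zip(["Cold", "Flu", "Malaria"], combo) if on]
    print(f"{combo}: P(Fever) = {p_fever(present):.3f}")
# e.g. all three present: 1 - 0.6 * 0.2 * 0.1 = 0.988, matching the table's last row
```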