Artificial Intelligence Bayes’ Nets: Independence Instructors: David Suter and Qince Li Course Harbin Institute of Technology [Many slides adapted from those created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. Some others from colleagues at Adelaide University.]

Probability Recap  Conditional probability: P(x \mid y) = \frac{P(x,y)}{P(y)}  Product rule: P(x,y) = P(x \mid y)\,P(y)  Chain rule: P(x_1,\dots,x_n) = \prod_{i=1}^{n} P(x_i \mid x_1,\dots,x_{i-1})  X, Y independent if and only if: \forall x,y:\; P(x,y) = P(x)\,P(y)  X and Y are conditionally independent given Z if and only if: \forall x,y,z:\; P(x,y \mid z) = P(x \mid z)\,P(y \mid z)

Bayes’ Nets  A Bayes’ net is an efficient encoding of a probabilistic model of a domain  Questions we can ask:  Inference: given a fixed BN, what is P(X | e)?  Representation: given a BN graph, what kinds of distributions can it encode?  Modeling: what BN is most appropriate for a given domain?

Bayes’ Net Semantics  A directed, acyclic graph, one node per random variable  A conditional probability table (CPT) for each node  A collection of distributions over X, one for each combination of parents’ values  Bayes’ nets implicitly encode joint distributions  As a product of local conditional distributions  To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together: P(x_1,\dots,x_n) = \prod_{i=1}^{n} P(x_i \mid \text{parents}(X_i))

Example: Alarm Network  Structure: B → A ← E, A → J, A → M

B    P(B)           E    P(E)
+b   0.001          +e   0.002
-b   0.999          -e   0.998

B    E    A    P(A|B,E)
+b   +e   +a   0.95
+b   +e   -a   0.05
+b   -e   +a   0.94
+b   -e   -a   0.06
-b   +e   +a   0.29
-b   +e   -a   0.71
-b   -e   +a   0.001
-b   -e   -a   0.999

A    J    P(J|A)        A    M    P(M|A)
+a   +j   0.9           +a   +m   0.7
+a   -j   0.1           +a   -m   0.3
-a   +j   0.05          -a   +m   0.01
-a   -j   0.95          -a   -m   0.99
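To make the semantics concrete, here is a minimal Python sketch (mine, not from the slides) that encodes the CPTs above as dictionaries and scores one full assignment by multiplying each variable’s conditional given its parents:

# The alarm network's CPTs, and the BN joint as a product of local conditionals.
P_B = {'+b': 0.001, '-b': 0.999}
P_E = {'+e': 0.002, '-e': 0.998}
P_A = {('+b', '+e'): {'+a': 0.95, '-a': 0.05},
       ('+b', '-e'): {'+a': 0.94, '-a': 0.06},
       ('-b', '+e'): {'+a': 0.29, '-a': 0.71},
       ('-b', '-e'): {'+a': 0.001, '-a': 0.999}}
P_J = {'+a': {'+j': 0.9, '-j': 0.1}, '-a': {'+j': 0.05, '-j': 0.95}}
P_M = {'+a': {'+m': 0.7, '-m': 0.3}, '-a': {'+m': 0.01, '-m': 0.99}}

def joint(b, e, a, j, m):
    """P(b, e, a, j, m) = P(b) P(e) P(a|b,e) P(j|a) P(m|a)."""
    return P_B[b] * P_E[e] * P_A[(b, e)][a] * P_J[a][j] * P_M[a][m]

# e.g. P(+b, -e, +a, +j, +m) = 0.001 * 0.998 * 0.94 * 0.9 * 0.7
print(joint('+b', '-e', '+a', '+j', '+m'))  # ≈ 0.000591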

Size of a Bayes’ Net  How big is a joint distribution over N Boolean variables? 2^N  How big is an N-node net if nodes have up to k parents? O(N · 2^(k+1))  Both give you the power to calculate P(X_1, …, X_N)  BNs: Huge space savings!  Also easier to elicit local CPTs  Also faster to answer queries (coming)
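A quick worked comparison (illustrative numbers, not from the slides):

\[
\underbrace{2^{20} \approx 1{,}000{,}000 \text{ entries}}_{\text{full joint, } N = 20 \text{ Booleans}}
\qquad \text{vs.} \qquad
\underbrace{20 \cdot 2^{3+1} = 320 \text{ CPT rows}}_{\text{BN, } N = 20,\; k = 3 \text{ parents per node}}
\]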

Bayes’ Nets  Representation  Conditional Independences  Probabilistic Inference  Learning Bayes’ Nets from Data

Conditional Independence  X and Y are independent if: \forall x,y:\; P(x,y) = P(x)\,P(y)  X and Y are conditionally independent given Z if: \forall x,y,z:\; P(x,y \mid z) = P(x \mid z)\,P(y \mid z)  (Conditional) independence is a property of a distribution  Example: Traffic and Umbrella are conditionally independent given Rain

Independence in a BN  Important question about a BN:  Are two nodes independent given certain evidence?  If yes, can prove using algebra (tedious in general)  If no, can prove with a counterexample  Example (structure: X → Y → Z):  Question: are X and Z necessarily independent?  Answer: no. Example: low pressure causes rain, which causes traffic.  X can influence Z, Z can influence X (via Y)  Addendum: they could be independent: how?

D-separation: Outline

 Study independence properties for triples  Analyze complex cases in terms of member triples  D-separation: a condition / algorithm for answering such queries

Causal Chains  This configuration is a “causal chain”: X → Y → Z (X: Low pressure, Y: Rain, Z: Traffic)  Guaranteed X independent of Z? No!  One example set of CPTs for which X is not independent of Z is sufficient to show this independence is not guaranteed.  Example:  Low pressure causes rain causes traffic; high pressure causes no rain causes no traffic  In numbers: P(+y | +x) = 1, P(-y | -x) = 1, P(+z | +y) = 1, P(-z | -y) = 1
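This can be checked numerically. Below is a small sketch (my own code, using the slide’s deterministic CPTs plus an assumed uniform prior on X, which the slide leaves unspecified): it builds the joint for the chain and confirms P(z | x) varies with x, so X and Z are dependent.

# Chain X -> Y -> Z with the slide's deterministic CPTs.
P_X = {'+x': 0.5, '-x': 0.5}   # assumed prior; the slide does not give one
P_Y = {'+x': {'+y': 1.0, '-y': 0.0}, '-x': {'+y': 0.0, '-y': 1.0}}
P_Z = {'+y': {'+z': 1.0, '-z': 0.0}, '-y': {'+z': 0.0, '-z': 1.0}}

# Joint over the chain: P(x, y, z) = P(x) P(y|x) P(z|y)
joint = {(x, y, z): P_X[x] * P_Y[x][y] * P_Z[y][z]
         for x in P_X for y in ('+y', '-y') for z in ('+z', '-z')}

def p_z_given_x(z, x):
    num = sum(p for (x_, y_, z_), p in joint.items() if x_ == x and z_ == z)
    den = sum(p for (x_, _, _), p in joint.items() if x_ == x)
    return num / den

print(p_z_given_x('+z', '+x'))  # 1.0
print(p_z_given_x('+z', '-x'))  # 0.0 -> depends on x, so X and Z are dependent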

Causal Chains  This configuration is a “causal chain”: X → Y → Z (X: Low pressure, Y: Rain, Z: Traffic)  Guaranteed X independent of Z given Y? Yes!  Evidence along the chain “blocks” the influence
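The algebra behind that “Yes!” is one line, using the chain factorization from the recap:

\[
P(z \mid x, y) \;=\; \frac{P(x, y, z)}{P(x, y)} \;=\; \frac{P(x)\,P(y \mid x)\,P(z \mid y)}{P(x)\,P(y \mid x)} \;=\; P(z \mid y),
\]

so once Y is observed, X carries no further information about Z.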

Common Cause  This configuration is a “common cause”: X ← Y → Z (Y: Project due, X: Forums busy, Z: Lab full)  Guaranteed X independent of Z? No!  One example set of CPTs for which X is not independent of Z is sufficient to show this independence is not guaranteed.  Example:  Project due causes both forums busy and lab full  In numbers: P(+x | +y) = 1, P(-x | -y) = 1, P(+z | +y) = 1, P(-z | -y) = 1

Common Cause  This configuration is a “common cause”: X ← Y → Z (Y: Project due, X: Forums busy, Z: Lab full)  Guaranteed X and Z independent given Y? Yes!  Observing the cause blocks influence between effects.
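Again the proof is a one-liner from the factorization P(x, y, z) = P(y) P(x|y) P(z|y):

\[
P(x, z \mid y) \;=\; \frac{P(x, y, z)}{P(y)} \;=\; \frac{P(y)\,P(x \mid y)\,P(z \mid y)}{P(y)} \;=\; P(x \mid y)\,P(z \mid y).
\]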

Common Effect  Last configuration: two causes of one effect (v-structure): X → Z ← Y (X: Raining, Y: Ballgame, Z: Traffic)  Are X and Y independent?  Yes: the ballgame and the rain cause traffic, but they are not correlated  Still need to prove they must be independent (try it!)  Are X and Y independent given Z?  No: seeing traffic puts the rain and the ballgame in competition as explanation.  This is backwards from the other cases  Observing an effect activates influence between possible causes.
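Here is a small numeric check of “explaining away” (my own illustrative CPTs, not from the slides): with independent causes and traffic occurring exactly when it rains or there is a ballgame, X and Y are independent a priori, but become dependent once +z is observed.

# V-structure X -> Z <- Y with assumed CPTs for this sketch.
P_X = {'+x': 0.1, '-x': 0.9}          # rain
P_Y = {'+y': 0.2, '-y': 0.8}          # ballgame
def p_z(z, x, y):                     # deterministic OR effect
    traffic = 1.0 if ('+' in x or '+' in y) else 0.0
    return traffic if z == '+z' else 1.0 - traffic

joint = {(x, y, z): P_X[x] * P_Y[y] * p_z(z, x, y)
         for x in P_X for y in P_Y for z in ('+z', '-z')}

def p_x_given_z(x, z):
    num = sum(p for (x_, y_, z_), p in joint.items() if x_ == x and z_ == z)
    den = sum(p for (x_, y_, z_), p in joint.items() if z_ == z)
    return num / den

def p_x_given_zy(x, z, y):
    num = sum(p for k, p in joint.items() if k == (x, y, z))
    den = sum(p for (x_, y_, z_), p in joint.items() if y_ == y and z_ == z)
    return num / den

print(P_X['+x'])                       # 0.1    prior belief in rain
print(p_x_given_z('+x', '+z'))         # ≈ 0.357: traffic raises belief in rain
print(p_x_given_zy('+x', '+z', '+y'))  # 0.1:   the ballgame explains the traffic away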

Independence – larger than triples  The fundamental “design” decision in representing a model as a Bayes Net/Graphical Model is that the variable of a node is conditionally independent of its non-descendants given its parents. This is exactly the step from the general factorization P(x_1,\dots,x_n) = \prod_i P(x_i \mid x_1,\dots,x_{i-1}) to the special factorization P(x_1,\dots,x_n) = \prod_i P(x_i \mid \text{parents}(X_i)). To “see this”, work your way up from the bottom of the network.

Independence – larger than triples  A variable of a node is conditionally independent of all other variables (of other nodes), given its parents, its children, and its children’s parents. That is, given its Markov blanket (see AIMA 3rd ed., Section 14.2.2).
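As a concrete illustration (my own helper, not from the slides), a node’s Markov blanket can be read straight off a parent map:

# Minimal sketch: compute a node's Markov blanket from a map node -> parents.
def markov_blanket(node, parents):
    children = [c for c, ps in parents.items() if node in ps]
    blanket = set()
    blanket.update(parents[node])              # its parents
    blanket.update(children)                   # its children
    for c in children:                         # its children's other parents
        blanket.update(p for p in parents[c] if p != node)
    return blanket

# Alarm network from earlier: B -> A <- E, A -> J, A -> M
parents = {'B': [], 'E': [], 'A': ['B', 'E'], 'J': ['A'], 'M': ['A']}
print(markov_blanket('A', parents))  # {'B', 'E', 'J', 'M'}
print(markov_blanket('B', parents))  # {'A', 'E'}  (E is a co-parent via A)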

Independence – larger than triples  [Figure-only slide.]

The General Case  D-separation is more general; the following may be of use in some problems.

The General Case  General question: in a given BN, are two variables independent (given evidence)?  Solution: analyze the graph  Any complex example can be broken into repetitions of the three canonical cases

Reachability  Recipe: shade evidence nodes, look for paths in the resulting graph  Attempt 1: if two nodes are connected by an undirected path not blocked by a shaded node, they are not guaranteed independent; if no such path exists, they are conditionally independent  Almost works, but not quite  Where does it break?  Answer: the v-structure at T doesn’t count as a link in a path unless “active” [Figure: example network over R, T, B, D, L]

Active / Inactive Paths  Question: Are X and Y conditionally independent given evidence variables {Z}?  Yes, if X and Y are “d-separated” by Z  Consider all (undirected) paths from X to Y  No active paths = independence!  A path is active if each triple is active:  Causal chain A → B → C where B is unobserved (either direction)  Common cause A ← B → C where B is unobserved  Common effect (aka v-structure) A → B ← C where B or one of its descendants is observed  All it takes to block a path is a single inactive segment
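The triple rules translate directly into code. Below is a sketch of d-separation (my own implementation, not the course’s): it enumerates simple undirected paths and checks every internal triple against the three rules above.

def descendants(node, parents):
    """All descendants of `node` in the DAG, given a map node -> list of parents."""
    kids = {c for c, ps in parents.items() if node in ps}
    out = set(kids)
    for k in kids:
        out |= descendants(k, parents)
    return out

def d_separated(x, y, evidence, parents):
    """True iff x and y are d-separated by the node set `evidence`."""
    adj = {n: set(ps) for n, ps in parents.items()}   # undirected adjacency
    for n, ps in parents.items():
        for p in ps:
            adj[p].add(n)
    def active(a, b, c):
        if a in parents[b] and c in parents[b]:       # common effect a -> b <- c
            return b in evidence or bool(descendants(b, parents) & evidence)
        return b not in evidence                      # chain or common cause
    def paths(node, visited):                         # simple undirected paths
        if node == y:
            yield visited
            return
        for nxt in adj[node]:
            if nxt not in visited:
                yield from paths(nxt, visited + [nxt])
    for path in paths(x, [x]):
        if all(active(path[i - 1], path[i], path[i + 1])
               for i in range(1, len(path) - 1)):
            return False                              # an active path exists
    return True                                       # all paths blocked

# Alarm network: B -> A <- E, A -> J, A -> M
parents = {'B': [], 'E': [], 'A': ['B', 'E'], 'J': ['A'], 'M': ['A']}
print(d_separated('B', 'E', set(), parents))   # True: v-structure at A is inactive
print(d_separated('B', 'E', {'A'}, parents))   # False: observing A activates it
print(d_separated('J', 'M', {'A'}, parents))   # True: the common cause A is observed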

D-Separation  Query: X_i ⊥ X_j | {Z_k}?  Check all (undirected!) paths between X_i and X_j  If one or more active, then independence not guaranteed  Otherwise (i.e. if all paths are inactive), then independence is guaranteed

Example  [Figure: a network over R, T, B, and T′; the d-separation query shown on the slide is answered “Yes”.]

Example  [Figure: a network over R, T, B, D, L, and T′; the d-separation query shown on the slide is answered “Yes”.]

Example  Variables:  R: Raining  T: Traffic  D: Roof drips  S: I’m sad  Questions: [Figure: d-separation queries on this network; the answer shown is “Yes”.]

Bayes Nets Representation Summary  Bayes nets compactly encode joint distributions  Guaranteed independencies of distributions can be deduced from BN graph structure  D-separation gives precise conditional independence guarantees from graph alone  A Bayes’ net’s joint distribution may have further (conditional) independence that is not detectable until you inspect its specific distribution

Bayes’ Nets  Representation  Conditional Independences  Probabilistic Inference  Enumeration (exact, exponential complexity)  Variable elimination (exact, worst-case exponential complexity, often better)  Probabilistic inference is NP-complete  Sampling (approximate)  Learning Bayes’ Nets from Data