Bayesian Networks CS 271: Fall 2007 Instructor: Padhraic Smyth

Topic 11: Bayesian Networks 2 CS 271, Fall 2007: Professor Padhraic Smyth Logistics Remaining lectures –Bayesian networks (today) –2 on machine learning –No lecture next Tuesday Dec 4th (out of town) Homeworks –#5 (Bayesian networks) is due Thursday –#6 (machine learning) will be out end of next week, due end of the following week Extra-credit projects –If you have not heard from me, go ahead and start working on it (I have only emailed people who needed to revise their proposals) Final exam –2 weeks from Thursday –In class, closed-book, cumulative but with emphasis on material from logic onwards

Topic 11: Bayesian Networks 3 CS 271, Fall 2007: Professor Padhraic Smyth Today’s Lecture Definition of Bayesian networks –Representing a joint distribution by a graph –Can yield an efficient factored representation for a joint distribution Inference in Bayesian networks –Inference = answering queries such as P(Q | e) –Intractable in general (scales exponentially with num variables) –But can be tractable for certain classes of Bayesian networks –Efficient algorithms leverage the structure of the graph Other aspects of Bayesian networks –Real-valued variables –Other types of queries –Special cases: naïve Bayes classifiers, hidden Markov models Reading: 14.1 to 14.4 (inclusive) – rest of chapter 14 is optional

Topic 11: Bayesian Networks 4 CS 271, Fall 2007: Professor Padhraic Smyth Computing with Probabilities: Law of Total Probability Law of Total Probability (aka “summing out” or marginalization) P(a) = Σ_b P(a, b) = Σ_b P(a | b) P(b), where B is any random variable Why is this useful? Given a joint distribution (e.g., P(a,b,c,d)) we can obtain any “marginal” probability (e.g., P(b)) by summing out the other variables, e.g., P(b) = Σ_a Σ_c Σ_d P(a, b, c, d) Less obvious: we can also compute any conditional probability of interest given a joint distribution, e.g., P(c | b) = Σ_a Σ_d P(a, c, d | b) = 1/P(b) Σ_a Σ_d P(a, c, d, b), where 1/P(b) is just a normalization constant Thus, the joint distribution contains the information we need to compute any probability of interest.
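A minimal sketch of these two operations in Python. The joint distribution P(a, b, c, d), the variable names, and the random numbers below are all made up for illustration; the point is that marginals and conditionals come out of the joint purely by summing and normalizing, exactly as in the formulas above.

```python
import random
from itertools import product

# A made-up joint distribution P(a, b, c, d) over binary variables,
# stored as a table keyed by value tuples.
random.seed(0)
vals = [True, False]
weights = {k: random.random() for k in product(vals, repeat=4)}
Z = sum(weights.values())
P = {k: w / Z for k, w in weights.items()}          # a proper joint: entries sum to 1

# Marginalization ("summing out"):  P(b) = sum over a, c, d of P(a, b, c, d)
def P_b(b):
    return sum(P[(a, b, c, d)] for a, c, d in product(vals, repeat=3))

# Conditioning:  P(c | b) = (1 / P(b)) * sum over a, d of P(a, b, c, d)
def P_c_given_b(c, b):
    return sum(P[(a, b, c, d)] for a, d in product(vals, repeat=2)) / P_b(b)

print(P_b(True) + P_b(False))                              # -> 1.0
print(P_c_given_b(True, True) + P_c_given_b(False, True))  # -> 1.0
```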

Topic 11: Bayesian Networks 5 CS 271, Fall 2007: Professor Padhraic Smyth Computing with Probabilities: The Chain Rule or Factoring We can always write P(a, b, c, …, z) = P(a | b, c, …, z) P(b, c, …, z) (by definition of joint probability) Repeatedly applying this idea, we can write P(a, b, c, …, z) = P(a | b, c, …, z) P(b | c, …, z) P(c | …, z) … P(z) This factorization holds for any ordering of the variables This is the chain rule for probabilities

Topic 11: Bayesian Networks 6 CS 271, Fall 2007: Professor Padhraic Smyth Conditional Independence 2 random variables A and B are conditionally independent given C iff P(a, b | c) = P(a | c) P(b | c) for all values a, b, c More intuitive (equivalent) conditional formulation –A and B are conditionally independent given C iff P(a | b, c) = P(a | c) OR P(b | a, c) = P(b | c), for all values a, b, c –Intuitive interpretation: P(a | b, c) = P(a | c) tells us that learning about b, given that we already know c, provides no change in our probability for a, i.e., b contains no information about a beyond what c provides Can generalize to more than 2 random variables –E.g., K different symptom variables X1, X2, …, XK, and C = disease –P(X1, X2, …, XK | C) = Π_i P(Xi | C) –Also known as the naïve Bayes assumption
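A quick numerical check of the definition, as a sketch with made-up numbers: build a joint that is conditionally independent by construction, P(a, b, c) = P(c) P(a | c) P(b | c), and verify that the equivalent formulation P(a | b, c) = P(a | c) holds for every assignment.

```python
import itertools
import random

random.seed(1)
vals = range(2)

def rand_dist(k):
    w = [random.random() for _ in range(k)]
    return [x / sum(w) for x in w]

# Joint built so that A and B are conditionally independent given C:
# P(a, b, c) = P(c) P(a|c) P(b|c)
P_c = rand_dist(2)
P_a_c = {c: rand_dist(2) for c in vals}
P_b_c = {c: rand_dist(2) for c in vals}
joint = {(a, b, c): P_c[c] * P_a_c[c][a] * P_b_c[c][b]
         for a, b, c in itertools.product(vals, repeat=3)}

# Check P(a | b, c) = P(a | c) for all values a, b, c
for a, b, c in itertools.product(vals, repeat=3):
    p_abc = joint[(a, b, c)]
    p_bc = sum(joint[(x, b, c)] for x in vals)
    p_ac = sum(joint[(a, y, c)] for y in vals)
    p_cv = sum(joint[(x, y, c)] for x, y in itertools.product(vals, repeat=2))
    assert abs(p_abc / p_bc - p_ac / p_cv) < 1e-12
print("P(a | b, c) = P(a | c) holds for every a, b, c")
```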

“…probability theory is more fundamentally concerned with the structure of reasoning and causation than with numbers.” Glenn Shafer and Judea Pearl Introduction to Readings in Uncertain Reasoning, Morgan Kaufmann, 1990

Topic 11: Bayesian Networks 8 CS 271, Fall 2007: Professor Padhraic Smyth Bayesian Networks A Bayesian network specifies a joint distribution in a structured form Represent dependence/independence via a directed graph –Nodes = random variables –Edges = direct dependence Structure of the graph => conditional independence relations Requires that the graph is acyclic (no directed cycles) 2 components to a Bayesian network –The graph structure (conditional independence assumptions) –The numerical probabilities (for each variable given its parents) In general, p(X1, X2, …, XN) = Π_i p(Xi | parents(Xi)) – the left-hand side is the full joint distribution, the right-hand side is the graph-structured approximation
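A minimal sketch of the two components in code, using a hypothetical three-node network A → C ← B with made-up CPT numbers: the graph is just a parent list per node, the numerical probabilities are one distribution per parent configuration, and the joint probability of a full assignment is the product Π_i p(Xi | parents(Xi)).

```python
from itertools import product

# Component 1: the graph, as a parent list per node (A -> C <- B)
parents = {"A": [], "B": [], "C": ["A", "B"]}

# Component 2: the numbers, one distribution per parent configuration (made up)
cpts = {
    "A": {(): {True: 0.3, False: 0.7}},
    "B": {(): {True: 0.6, False: 0.4}},
    "C": {(True, True): {True: 0.9, False: 0.1},
          (True, False): {True: 0.5, False: 0.5},
          (False, True): {True: 0.4, False: 0.6},
          (False, False): {True: 0.05, False: 0.95}},
}

def joint_prob(assignment):
    """p(X_1,...,X_N) = prod_i p(X_i | parents(X_i)) for one full assignment."""
    p = 1.0
    for var in parents:
        key = tuple(assignment[u] for u in parents[var])
        p *= cpts[var][key][assignment[var]]
    return p

# Sanity check: the factored probabilities sum to 1 over all assignments
total = sum(joint_prob(dict(zip(parents, vals)))
            for vals in product([True, False], repeat=len(parents)))
print(total)   # 1.0 (up to floating point)
```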

Topic 11: Bayesian Networks 9 CS 271, Fall 2007: Professor Padhraic Smyth Example of a simple Bayesian network A → C ← B Probability model has simple factored form: p(A,B,C) = p(C|A,B) p(A) p(B) Directed edges => direct dependence Absence of an edge => conditional independence Also known as belief networks, graphical models, causal networks Other formulations, e.g., undirected graphical models

Topic 11: Bayesian Networks 10 CS 271, Fall 2007: Professor Padhraic Smyth Examples of 3-way Bayesian Networks A, B, C with no edges – Marginal Independence: p(A,B,C) = p(A) p(B) p(C)

Topic 11: Bayesian Networks 11 CS 271, Fall 2007: Professor Padhraic Smyth Examples of 3-way Bayesian Networks B ← A → C Conditionally independent effects: p(A,B,C) = p(B|A) p(C|A) p(A) B and C are conditionally independent given A e.g., A is a disease, and we model B and C as conditionally independent symptoms given A

Topic 11: Bayesian Networks 12 CS 271, Fall 2007: Professor Padhraic Smyth Examples of 3-way Bayesian Networks A → C ← B Independent Causes: p(A,B,C) = p(C|A,B) p(A) p(B) “Explaining away” effect: Given C, observing A makes B less likely e.g., earthquake/burglary/alarm example A and B are (marginally) independent but become dependent once C is known
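The explaining-away effect can be seen numerically. The sketch below uses the A → C ← B structure with hypothetical numbers (think A = burglary, B = earthquake, C = alarm; all values are made up for illustration): observing C alone raises the probability of B far above its prior, but additionally observing A drops it back down, because A already accounts for the alarm.

```python
# Hypothetical numbers for the A -> C <- B structure (illustrative only)
P_A = 0.01                                             # P(A = true)
P_B = 0.02                                             # P(B = true)
P_C = {(True, True): 0.95, (True, False): 0.90,
       (False, True): 0.30, (False, False): 0.001}     # P(C = true | A, B)

def p(a, b, c):
    pa = P_A if a else 1 - P_A
    pb = P_B if b else 1 - P_B
    pc = P_C[(a, b)] if c else 1 - P_C[(a, b)]
    return pa * pb * pc

# P(B | C): observing C alone makes B much more likely than its prior 0.02
p_b_given_c = (sum(p(a, True, True) for a in (True, False)) /
               sum(p(a, b, True) for a in (True, False) for b in (True, False)))

# P(B | C, A): additionally observing A "explains away" C, so B drops again
p_b_given_ca = p(True, True, True) / sum(p(True, b, True) for b in (True, False))

print(p_b_given_c, p_b_given_ca)   # roughly 0.39 vs 0.02
```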

Topic 11: Bayesian Networks 13 CS 271, Fall 2007: Professor Padhraic Smyth Examples of 3-way Bayesian Networks A → B → C Markov dependence: p(A,B,C) = p(C|B) p(B|A) p(A)

Topic 11: Bayesian Networks 14 CS 271, Fall 2007: Professor Padhraic Smyth Example Consider the following 5 binary variables: –B = a burglary occurs at your house –E = an earthquake occurs at your house –A = the alarm goes off –J = John calls to report the alarm –M = Mary calls to report the alarm –What is P(B | M, J)? (for example) –We can use the full joint distribution to answer this question Requires 2^5 = 32 probabilities Can we use prior domain knowledge to come up with a Bayesian network that requires fewer probabilities?

Topic 11: Bayesian Networks 15 CS 271, Fall 2007: Professor Padhraic Smyth Constructing a Bayesian Network: Step 1 Order the variables in terms of causality (may be a partial order) e.g., {E, B} -> {A} -> {J, M} P(J, M, A, E, B) = P(J, M | A, E, B) P(A| E, B) P(E, B) ~ P(J, M | A) P(A| E, B) P(E) P(B) ~ P(J | A) P(M | A) P(A| E, B) P(E) P(B) These CI assumptions are reflected in the graph structure of the Bayesian network

Topic 11: Bayesian Networks 16 CS 271, Fall 2007: Professor Padhraic Smyth The Resulting Bayesian Network

Topic 11: Bayesian Networks 17 CS 271, Fall 2007: Professor Padhraic Smyth Constructing this Bayesian Network: Step 2 P(J, M, A, E, B) = P(J | A) P(M | A) P(A | E, B) P(E) P(B) There are 3 conditional probability tables (CPTs) to be determined: P(J | A), P(M | A), P(A | E, B) –Requiring 2 + 2 + 4 = 8 probabilities And 2 marginal probabilities P(E), P(B) -> 2 more probabilities Where do these probabilities come from? –Expert knowledge –From data (relative frequency estimates) –Or a combination of both - see discussion in Sections 20.1 and 20.2 (optional)
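As a concrete sketch, here is one possible set of CPTs for this network in code. The numbers are in the style of the textbook alarm example but should be read as illustrative rather than authoritative; the point is the count: 8 conditional probabilities plus 2 priors, i.e., 10 numbers instead of 32.

```python
# Illustrative CPT values for the burglary network (not authoritative numbers)
P_B = 0.001                                            # P(B = true): 1 number
P_E = 0.002                                            # P(E = true): 1 number
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}     # P(A = true | B, E): 4 numbers
P_J = {True: 0.90, False: 0.05}                        # P(J = true | A): 2 numbers
P_M = {True: 0.70, False: 0.01}                        # P(M = true | A): 2 numbers

n_params = 2 + len(P_A) + len(P_J) + len(P_M)
print(n_params)   # 10 numbers, versus 2**5 = 32 entries in the full joint table
```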

Topic 11: Bayesian Networks 18 CS 271, Fall 2007: Professor Padhraic Smyth The Bayesian network

Topic 11: Bayesian Networks 19 CS 271, Fall 2007: Professor Padhraic Smyth Number of Probabilities in Bayesian Networks Consider n binary variables Unconstrained joint distribution requires O(2^n) probabilities If we have a Bayesian network, with a maximum of k parents for any node, then we need O(n 2^k) probabilities Example –Full unconstrained joint distribution n = 30: need about 10^9 probabilities for full joint distribution –Bayesian network n = 30, k = 4: need 480 probabilities
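The arithmetic behind these two numbers, as a trivial check:

```python
n, k = 30, 4
print(2 ** n)       # 1,073,741,824 -- about 10**9 entries for the full joint
print(n * 2 ** k)   # 480 probabilities when every node has at most k = 4 parents
```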

Topic 11: Bayesian Networks 20 CS 271, Fall 2007: Professor Padhraic Smyth The Bayesian Network from a different Variable Ordering

Topic 11: Bayesian Networks 21 CS 271, Fall 2007: Professor Padhraic Smyth The Bayesian Network from a different Variable Ordering

Topic 11: Bayesian Networks 22 CS 271, Fall 2007: Professor Padhraic Smyth Given a graph, can we “read off” conditional independencies? A node is conditionally independent of all other nodes in the network given its Markov blanket: its parents, its children, and its children’s other parents (shown in gray on the slide)

Topic 11: Bayesian Networks 23 CS 271, Fall 2007: Professor Padhraic Smyth Inference (Reasoning) in Bayesian Networks Consider answering a query in a Bayesian network –Q = set of query variables –e = evidence (set of instantiated variable-value pairs) –Inference = computation of conditional distribution P(Q | e) Examples –P(burglary | alarm) –P(earthquake | JCalls, MCalls) –P(JCalls, MCalls | burglary, earthquake) Can we use the structure of the Bayesian network to answer such queries efficiently? Answer = yes –Generally speaking, the sparser the graph, the cheaper the inference

Topic 11: Bayesian Networks 24 CS 271, Fall 2007: Professor Padhraic Smyth Example: Tree-Structured Bayesian Network Nodes D, A, B, C, F, E, G with edges D → B, D → E, B → A, B → C, E → F, E → G p(a, b, c, d, e, f, g) is modeled as p(a|b) p(c|b) p(f|e) p(g|e) p(b|d) p(e|d) p(d)

Topic 11: Bayesian Networks 25 CS 271, Fall 2007: Professor Padhraic Smyth Example D A B c F E g (lowercase c and g indicate observed values) Say we want to compute p(a | c, g)

Topic 11: Bayesian Networks 26 CS 271, Fall 2007: Professor Padhraic Smyth Example Direct calculation: p(a|c,g) = Σ_{b,d,e,f} p(a,b,d,e,f | c,g) Complexity of the sum is O(m^4)

Topic 11: Bayesian Networks 27 CS 271, Fall 2007: Professor Padhraic Smyth Example Reordering the sums: Σ_b p(a|b) Σ_d p(b|d,c) Σ_e p(d|e) Σ_f p(e,f|g)

Topic 11: Bayesian Networks 28 CS 271, Fall 2007: Professor Padhraic Smyth Example Reordering: Σ_b p(a|b) Σ_d p(b|d,c) Σ_e p(d|e) Σ_f p(e,f|g); the innermost sum over f gives p(e|g)

Topic 11: Bayesian Networks 29 CS 271, Fall 2007: Professor Padhraic Smyth Example Reordering: Σ_b p(a|b) Σ_d p(b|d,c) Σ_e p(d|e) p(e|g); the sum over e gives p(d|g)

Topic 11: Bayesian Networks 30 CS 271, Fall 2007: Professor Padhraic Smyth Example Reordering: Σ_b p(a|b) Σ_d p(b|d,c) p(d|g); the sum over d gives p(b|c,g)

Topic 11: Bayesian Networks 31 CS 271, Fall 2007: Professor Padhraic Smyth Example Reordering: Σ_b p(a|b) p(b|c,g) = p(a|c,g) Complexity is O(m), compared to O(m^4)
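The sketch below reproduces this example numerically for a hypothetical arity m = 3 with randomly generated CPTs. It computes p(a | c, g) both by the brute-force O(m^4) sum and by pushing the sums inward one variable at a time; the intermediate quantities are kept as unnormalized factors here (rather than the conditionals p(e|g), p(d|g), p(b|c,g) shown on the slides) and normalized once at the end, so the two answers agree.

```python
import itertools
import random

random.seed(0)
m = 3   # hypothetical arity: every variable takes values 0 .. m-1

def random_cpt(n_parents):
    """Map each parent-value tuple to a normalized distribution over m values."""
    table = {}
    for cfg in itertools.product(range(m), repeat=n_parents):
        w = [random.random() for _ in range(m)]
        table[cfg] = [x / sum(w) for x in w]
    return table

# CPTs for the tree  D -> {B, E},  B -> {A, C},  E -> {F, G}
p_d   = random_cpt(0)[()]     # p(d)
p_b_d = random_cpt(1)         # p(b | d)
p_e_d = random_cpt(1)         # p(e | d)
p_a_b = random_cpt(1)         # p(a | b)
p_c_b = random_cpt(1)         # p(c | b)
p_f_e = random_cpt(1)         # p(f | e)
p_g_e = random_cpt(1)         # p(g | e)

c_obs, g_obs = 1, 2           # made-up evidence values for c and g

# Brute force: sum the full joint over b, d, e, f -- O(m^4) terms per value of a
brute = []
for a in range(m):
    total = 0.0
    for b, d, e, f in itertools.product(range(m), repeat=4):
        total += (p_d[d] * p_b_d[(d,)][b] * p_e_d[(d,)][e] *
                  p_a_b[(b,)][a] * p_c_b[(b,)][c_obs] *
                  p_f_e[(e,)][f] * p_g_e[(e,)][g_obs])
    brute.append(total)
brute = [x / sum(brute) for x in brute]

# Sums pushed inward: eliminate f, then e, then d, then b
sum_f = {e: sum(p_f_e[(e,)]) for e in range(m)}                      # = 1 for all e
msg_e = {d: sum(p_e_d[(d,)][e] * p_g_e[(e,)][g_obs] * sum_f[e]
                for e in range(m)) for d in range(m)}
msg_d = {b: sum(p_d[d] * p_b_d[(d,)][b] * msg_e[d]
                for d in range(m)) for b in range(m)}
elim = [sum(p_a_b[(b,)][a] * p_c_b[(b,)][c_obs] * msg_d[b]
            for b in range(m)) for a in range(m)]
elim = [x / sum(elim) for x in elim]

print(brute)
print(elim)   # matches the brute-force answer
```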

Topic 11: Bayesian Networks 32 CS 271, Fall 2007: Professor Padhraic Smyth General Strategy for inference Want to compute P(q | e) Step 1: P(q | e) = P(q,e)/P(e) = α P(q,e), since P(e) is constant wrt Q (α is a normalization constant) Step 2: P(q,e) = Σ_{a..z} P(q, e, a, b, …, z), by the law of total probability Step 3: Σ_{a..z} P(q, e, a, b, …, z) = Σ_{a..z} Π_i P(variable i | parents of variable i) (using Bayesian network factoring) Step 4: Distribute summations across product terms for efficient computation
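A hedged sketch of Steps 1-4 for the burglary network, answering P(B | JohnCalls = true, MaryCalls = true) by summing the factored joint over the hidden variables A and E and then normalizing. The CPT values repeat the illustrative numbers used in the earlier sketch and should not be read as authoritative.

```python
import itertools

# Illustrative CPTs for the burglary network (not authoritative numbers)
P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}     # P(A = true | B, E)
P_J = {True: 0.90, False: 0.05}                        # P(J = true | A)
P_M = {True: 0.70, False: 0.01}                        # P(M = true | A)

def joint(b, e, a, j, m):
    """P(b, e, a, j, m) from the factored form P(J|A) P(M|A) P(A|B,E) P(E) P(B)."""
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return P_B[b] * P_E[e] * pa * pj * pm

def query_burglary(j=True, m=True):
    """P(B | j, m): Steps 2-3 sum out A and E, Step 1 normalizes over B."""
    unnorm = {b: sum(joint(b, e, a, j, m)
                     for e, a in itertools.product((True, False), repeat=2))
              for b in (True, False)}
    z = sum(unnorm.values())
    return {b: v / z for b, v in unnorm.items()}

print(query_burglary())   # with these numbers, P(B = true | J, M) is about 0.28
```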

Topic 11: Bayesian Networks 33 CS 271, Fall 2007: Professor Padhraic Smyth Inference Examples Examples worked on whiteboard

Topic 11: Bayesian Networks 34 CS 271, Fall 2007: Professor Padhraic Smyth Complexity of Bayesian Network inference Assume the network is a polytree –Only a single (undirected) path between any 2 nodes Complexity scales as O(n m^(K+1)) n = number of variables m = arity of variables K = maximum number of parents for any node –Compare to O(m^(n-1)) for brute-force method Network is not a polytree? –Can cluster variables to render the new graph a tree –Very similar to the tree-decomposition methods used for CSPs –Complexity is O(n m^(W+1)), where W = number of variables in the largest cluster

Topic 11: Bayesian Networks 35 CS 271, Fall 2007: Professor Padhraic Smyth Real-valued Variables Can Bayesian networks handle real-valued variables? –If we can assume variables are Gaussian, then the inference and theory for Bayesian networks are well-developed, e.g., conditionals of a joint Gaussian are still Gaussian, etc. In inference we replace sums with integrals –For other density functions it depends… Can often include a univariate variable at the “edge” of a graph, e.g., a Poisson conditioned on day of week –But for many variables little is known beyond their univariate properties, e.g., what would be the joint distribution of a Poisson and a Gaussian? (it is not defined in general) –Common approaches in practice Put real-valued variables at “leaf nodes” (so nothing is conditioned on them) Assume real-valued variables are Gaussian or discrete Discretize real-valued variables

Topic 11: Bayesian Networks 36 CS 271, Fall 2007: Professor Padhraic Smyth Other aspects of Bayesian Network Inference The problem of finding an optimal (for inference) ordering and/or clustering of variables for an arbitrary graph is NP-hard –Various heuristics are used in practice –Efficient algorithms and software now exist for working with large Bayesian networks, e.g., work in Professor Rina Dechter’s group Other types of queries? –E.g., finding the most likely values of a variable given evidence –arg max_Q P(Q | e) = “most probable explanation” or maximum a posteriori query –Can also leverage the graph structure in the same manner as for inference – essentially replaces the “sum” operator with “max”

Topic 11: Bayesian Networks 37 CS 271, Fall 2007: Professor Padhraic Smyth Naïve Bayes Model Class node C with children Y1, Y2, Y3, …, Yn P(C | Y1, …, Yn) ∝ Π_i P(Yi | C) P(C) Features Y are conditionally independent given the class variable C Widely used in machine learning, e.g., spam classification: Y’s = counts of words in emails Conditional probabilities P(Yi | C) can easily be estimated from labeled data
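A minimal sketch of this model applied to the spam example, with a made-up four-email training set: the counts estimate P(C) and P(Yi | C), and classification picks the class maximizing P(C) Π_i P(Yi | C). Add-one smoothing is used so unseen words do not zero out a class.

```python
import math
from collections import Counter, defaultdict

# Made-up "training set" of (word list, label) pairs, purely for illustration
train = [
    ("win money now".split(), "spam"),
    ("cheap money win".split(), "spam"),
    ("meeting agenda attached".split(), "ham"),
    ("lunch meeting today".split(), "ham"),
]

class_counts = Counter(label for _, label in train)          # estimates P(C)
word_counts = defaultdict(Counter)                           # estimates P(Y_i | C)
for words, label in train:
    word_counts[label].update(words)
vocab = {w for words, _ in train for w in words}

def predict(words):
    """Return argmax_C of log P(C) + sum_i log P(word_i | C), with add-one smoothing."""
    scores = {}
    for c in class_counts:
        score = math.log(class_counts[c] / len(train))
        total = sum(word_counts[c].values())
        for w in words:
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(predict("win cheap money".split()))   # -> "spam"
```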

Topic 11: Bayesian Networks 38 CS 271, Fall 2007: Professor Padhraic Smyth Hidden Markov Model (HMM) Hidden states S1 → S2 → S3 → … → Sn, with an observation Yt attached to each St (S’s hidden, Y’s observed) Two key assumptions: 1. hidden state sequence is Markov 2. observation Yt is CI of all other variables given St Widely used in speech recognition, protein sequence models Since this is a Bayesian network polytree, inference is linear in n
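A small sketch of the linear-time filtering (forward) computation this structure allows, using a hypothetical two-state weather/umbrella-style HMM with made-up numbers. Each step folds in one transition and one observation, so the cost grows linearly with the length of the sequence.

```python
# Hypothetical two-state HMM (all numbers are made up for illustration)
states = ["rain", "sun"]
init = {"rain": 0.5, "sun": 0.5}                       # P(S_1)
trans = {"rain": {"rain": 0.7, "sun": 0.3},            # P(S_t | S_{t-1})
         "sun":  {"rain": 0.3, "sun": 0.7}}
emit = {"rain": {"umbrella": 0.9, "none": 0.1},        # P(Y_t | S_t)
        "sun":  {"umbrella": 0.2, "none": 0.8}}

def forward(observations):
    """Return P(S_t | y_1..y_t) for each t; cost is linear in len(observations)."""
    alpha = {s: init[s] * emit[s][observations[0]] for s in states}
    filtered = [{s: alpha[s] / sum(alpha.values()) for s in states}]
    for y in observations[1:]:
        alpha = {s: emit[s][y] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
        z = sum(alpha.values())
        filtered.append({s: alpha[s] / z for s in states})
    return filtered

print(forward(["umbrella", "umbrella", "none"]))
```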

Topic 11: Bayesian Networks 39 CS 271, Fall 2007: Professor Padhraic Smyth Summary Bayesian networks represent a joint distribution using a graph The graph encodes a set of conditional independence assumptions Answering queries (or inference or reasoning) in a Bayesian network amounts to efficient computation of appropriate conditional probabilities Probabilistic inference is intractable in the general case –But can be carried out in linear time for certain classes of Bayesian networks

Backup Slides (can be ignored)

Topic 11: Bayesian Networks 41 CS 271, Fall 2007: Professor Padhraic Smyth Junction Tree D A B, E C F G Good news: can perform MP algorithm on this tree Bad news: complexity is now O(K^2)

Topic 11: Bayesian Networks 42 CS 271, Fall 2007: Professor Padhraic Smyth A More General Algorithm Message Passing (MP) Algorithm –Pearl, 1988; Lauritzen and Spiegelhalter, 1988 –Declare 1 node (any node) to be a root –Schedule two phases of message-passing nodes pass messages up to the root messages are distributed back to the leaves –In time O(N), we can compute P(….)

Topic 11: Bayesian Networks 43 CS 271, Fall 2007: Professor Padhraic Smyth Sketch of the MP algorithm in action

Topic 11: Bayesian Networks 44 CS 271, Fall 2007: Professor Padhraic Smyth Sketch of the MP algorithm in action 1

Topic 11: Bayesian Networks 45 CS 271, Fall 2007: Professor Padhraic Smyth Sketch of the MP algorithm in action 1 2

Topic 11: Bayesian Networks 46 CS 271, Fall 2007: Professor Padhraic Smyth Sketch of the MP algorithm in action 1 2 3

Topic 11: Bayesian Networks 47 CS 271, Fall 2007: Professor Padhraic Smyth Sketch of the MP algorithm in action

Topic 11: Bayesian Networks 48 CS 271, Fall 2007: Professor Padhraic Smyth Graphs with “loops” D A B C F E G Network is not a polytree

Topic 11: Bayesian Networks 49 CS 271, Fall 2007: Professor Padhraic Smyth Graphs with “loops” D A B C F E G General approach: “cluster” variables together to convert graph to a polytree

Topic 11: Bayesian Networks 50 CS 271, Fall 2007: Professor Padhraic Smyth Junction Tree D A B, E C F G

Topic 11: Bayesian Networks 51 CS 271, Fall 2007: Professor Padhraic Smyth Junction Tree D A B, E C F G Good news: can perform MP algorithm on this tree Bad news: complexity is now O(K^2)