Describing Data The canonical descriptive strategy is to describe the data in terms of their underlying distribution. As usual, we have a p-dimensional data matrix with variables X 1, …, X p. The joint distribution is P(X 1, …, X p ). The joint gives us complete information about the variables: given the joint distribution, we can answer any question about the relationships among any subset of variables –Are X 2 and X 5 independent? –Generating approximate answers to queries for large databases (selectivity estimation): given a query (conditions that observations must satisfy), estimate the fraction of rows that satisfy this condition (the selectivity of the query). These estimates are needed during query optimization. If we have a good approximation for the joint distribution of the data, we can use it to efficiently compute approximate selectivities.

Graphical Models In the next 3-4 lectures, we will be studying graphical models e.g. Bayesian networks, Bayes nets, Belief nets, Markov networks, etc. We will study: –representation –reasoning –learning Materials based on upcoming book by Nir Friedman and Daphne Koller. Slides courtesy of Nir Friedman.

Probability Distributions Let X 1,…,X p be random variables Let P be a joint distribution over X 1,…,X p If the variables are binary, then we need O(2 p ) parameters to describe P Can we do better? Key idea: use properties of independence

Independent Random Variables Two variables X and Y are independent if –P(X = x|Y = y) = P(X = x) for all values x,y –That is, learning the value of Y does not change the prediction of X If X and Y are independent then –P(X,Y) = P(X|Y)P(Y) = P(X)P(Y) In general, if X 1,…,X p are independent, then –P(X 1,…,X p )= P(X 1 )...P(X p ) –Requires only O(p) parameters
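The parameter saving can be made concrete in code. A minimal sketch (all probability numbers below are hypothetical, chosen only for illustration): under full independence, the joint over three binary variables is just the product of three marginals, so we store p numbers instead of 2 p − 1.

```python
import itertools

# Marginals for three independent binary variables (hypothetical numbers).
marginals = {"X1": {0: 0.7, 1: 0.3},
             "X2": {0: 0.4, 1: 0.6},
             "X3": {0: 0.9, 1: 0.1}}

def joint(assignment):
    """P(x1, x2, x3) under full independence: product of marginals."""
    p = 1.0
    for var, value in assignment.items():
        p *= marginals[var][value]
    return p

# A full joint table over p binary variables needs 2**p - 1 free parameters;
# under independence we store only p numbers (one per variable).
total = sum(joint(dict(zip(marginals, vals)))
            for vals in itertools.product([0, 1], repeat=3))
```

Summing the product over all 2³ assignments recovers 1, confirming the factorized table is a valid joint distribution.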

Conditional Independence Unfortunately, most random variables of interest are not independent of each other A more suitable notion is that of conditional independence Two variables X and Y are conditionally independent given Z if –P(X = x|Y = y,Z=z) = P(X = x|Z=z) for all values x,y,z –That is, learning the value of Y does not change the prediction of X once we know the value of Z –notation: I ( X, Y | Z )

Example: Naïve Bayesian Model A common model in early diagnosis: –Symptoms are conditionally independent given the disease (or fault) Thus, if –X 1,…,X p denote the symptoms exhibited by the patient (headache, high fever, etc.) and –H denotes the hypothesis about the patient's health then P(X 1,…,X p,H) = P(H)P(X 1 |H)…P(X p |H) This naïve Bayesian model allows a compact representation –It does, however, embody strong independence assumptions
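The factorization above makes posterior computation a short exercise. A sketch with made-up symptom names and probabilities (none of these numbers come from the slides): score each hypothesis by P(H) Π i P(X i |H), then normalize.

```python
# Naïve Bayes posterior for a toy diagnosis problem (hypothetical numbers).
# P(H, X1..Xp) = P(H) * prod_i P(Xi | H); symptoms are binary.
p_h = {"healthy": 0.9, "sick": 0.1}
p_sym_given_h = {  # P(symptom = 1 | H) for each symptom
    "headache": {"healthy": 0.1, "sick": 0.7},
    "fever":    {"healthy": 0.05, "sick": 0.8},
}

def posterior(symptoms):
    """P(H | symptoms) via Bayes rule on the naïve Bayes factorization."""
    scores = {}
    for h, prior in p_h.items():
        score = prior
        for sym, value in symptoms.items():
            p1 = p_sym_given_h[sym][h]
            score *= p1 if value == 1 else (1.0 - p1)
        scores[h] = score
    z = sum(scores.values())                 # P(evidence)
    return {h: s / z for h, s in scores.items()}

post = posterior({"headache": 1, "fever": 1})
```

With both symptoms present, the "sick" hypothesis dominates despite its low prior, because each symptom contributes an independent likelihood ratio.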

Example: Family Trees A node represents an individual's genotype (e.g., Homer, Marge, Bart, Lisa, Maggie) Noisy stochastic process: pedigree Modeling assumption: ancestors can affect descendants' genotype only by passing genetic material through intermediate generations

Markov Assumption We now make this independence assumption more precise for directed acyclic graphs (DAGs) Each random variable X is independent of its non-descendents, given its parents Pa(X) Formally, I (X, NonDesc(X) | Pa(X)) (figure: a node X with a parent, an ancestor, a descendent, and non-descendents Y 1, Y 2 )

Markov Assumption Example In this example: –I ( E, B ) –I ( B, {E, R} ) –I ( R, {A, B, C} | E ) –I ( A, R | B,E ) –I ( C, {B, E, R} | A) (network: Burglary → Alarm ← Earthquake, Earthquake → Radio, Alarm → Call)

I-Maps A DAG G is an I-Map of a distribution P if all the Markov assumptions implied by G are satisfied by P (Assuming G and P both use the same set of random variables) Examples: (figure: DAGs over X and Y)

Factorization Given that G is an I-Map of P, can we simplify the representation of P? Example: Since I(X,Y), we have that P(X|Y) = P(X) Applying the chain rule, P(X,Y) = P(X|Y) P(Y) = P(X) P(Y) Thus, we have a simpler representation of P(X,Y) (figure: the empty graph over X and Y)

Factorization Theorem Thm: if G is an I-Map of P, then P(X 1,…,X p ) = Π i P(X i | Pa(X i )) Proof: wlog. X 1,…,X p is an ordering consistent with G By the chain rule: P(X 1,…,X p ) = Π i P(X i | X 1,…,X i-1 ) Since G is an I-Map, I (X i, NonDesc(X i )| Pa(X i )) Because the ordering is consistent with G, {X 1,…,X i-1 } ⊆ NonDesc(X i ) ∪ Pa(X i ) Hence, we conclude P(X i | X 1,…,X i-1 ) = P(X i | Pa(X i ) )

Factorization Example P(C,A,R,E,B) = P(B)P(E|B)P(R|E,B)P(A|R,B,E)P(C|A,R,B,E) Earthquake Radio Burglary Alarm Call versus P(C,A,R,E,B) = P(B) P(E) P(R|E) P(A|B,E) P(C|A)
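The saving from the second factorization can be demonstrated numerically. A sketch of the Burglary/Earthquake network as code (the CPD numbers are placeholders I made up, not values from the slides): the joint is the product P(B) P(E) P(R|E) P(A|B,E) P(C|A), using 1 + 1 + 2 + 4 + 2 = 10 parameters instead of 2⁵ − 1 = 31.

```python
# Joint probability for the Burglary/Earthquake network via its factorization
# P(B,E,R,A,C) = P(B) P(E) P(R|E) P(A|B,E) P(C|A).
# All CPD numbers are hypothetical placeholders.
P_B = {1: 0.01, 0: 0.99}
P_E = {1: 0.02, 0: 0.98}
P_R_given_E = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.01, 0: 0.99}}   # P_R_given_E[e][r]
P_A_given_BE = {(1, 1): 0.95, (1, 0): 0.94, (0, 1): 0.29, (0, 0): 0.001}
P_C_given_A = {1: 0.7, 0: 0.05}                               # P(C = 1 | a)

def joint(b, e, r, a, c):
    """P(b, e, r, a, c) as the product of the five local CPDs."""
    p_a = P_A_given_BE[(b, e)]
    p_c = P_C_given_A[a]
    return (P_B[b] * P_E[e] * P_R_given_E[e][r]
            * (p_a if a == 1 else 1 - p_a)
            * (p_c if c == 1 else 1 - p_c))

# Sanity check: the factorized joint sums to 1 over all 32 assignments.
total = sum(joint(b, e, r, a, c)
            for b in (0, 1) for e in (0, 1) for r in (0, 1)
            for a in (0, 1) for c in (0, 1))
```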

Consequences We can write P in terms of “local” conditional probabilities If G is sparse – that is, |Pa(X i )| < k – then: each conditional probability can be specified compactly –e.g. for binary variables, these require O(2 k ) params the representation of P is compact –linear in the number of variables

Pause…Summary We defined the following concepts The Markov Independencies of a DAG G –I (X i, NonDesc(X i ) | Pa i ) G is an I-Map of a distribution P –If P satisfies the Markov independencies implied by G We proved the factorization theorem: if G is an I-Map of P, then P(X 1,…,X p ) = Π i P(X i | Pa i )

Conditional Independencies Let Markov(G) be the set of Markov Independencies implied by G The factorization theorem shows: G is an I-Map of P ⇒ P(X 1,…,X p ) = Π i P(X i | Pa i ) We can also show the opposite: Thm: P(X 1,…,X p ) = Π i P(X i | Pa i ) ⇒ G is an I-Map of P

Proof (Outline) Example: X Y Z

Implied Independencies Does a graph G imply additional independencies as a consequence of Markov(G)? We can define a logic of independence statements Some axioms: –I( X ; Y | Z ) ⇒ I( Y; X | Z ) (symmetry) –I( X ; Y 1, Y 2 | Z ) ⇒ I( X; Y 1 | Z ) (decomposition)

d-separation A procedure d-sep(X; Y | Z, G) that, given a DAG G and sets X, Y, and Z, returns either yes or no Goal: d-sep(X; Y | Z, G) = yes iff I(X;Y|Z) follows from Markov(G)

Paths Intuition: dependency must “flow” along paths in the graph A path is a sequence of neighboring variables Examples: R ← E → A ← B C ← A ← E → R Earthquake Radio Burglary Alarm Call

Paths We want to know when a path is –active -- creates dependency between end nodes –blocked -- cannot create dependency between end nodes We want to classify situations in which paths are active.

Path Blockage Three cases: –Common cause (R ← E → A): blocked when E is in the evidence; active otherwise

Path Blockage Three cases: –Common cause –Intermediate cause (E → A → C): blocked when A is in the evidence; active otherwise

Path Blockage Three cases: –Common cause –Intermediate cause –Common effect (E → A ← B): active when A or one of its descendents is in the evidence; blocked otherwise

Path Blockage -- General Case A path is active, given evidence Z, if –whenever the path contains a v-structure A → B ← C, B or one of its descendents is in Z, and –no other node on the path is in Z A path is blocked, given evidence Z, if it is not active.

Example –d-sep(R,B)?

Example –d-sep(R,B) = yes –d-sep(R,B|A)?

Example –d-sep(R,B) = yes –d-sep(R,B|A) = no –d-sep(R,B|E,A)?

d-Separation X is d-separated from Y, given Z, if all paths from a node in X to a node in Y are blocked, given Z. Checking d-separation can be done efficiently (linear time in number of edges) –Bottom-up phase: Mark all nodes whose descendents are in Z –X to Y phase: Traverse (BFS) all edges on paths from X to Y and check if they are blocked
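For a small graph, the blocking rules can also be checked by brute force: enumerate every undirected path and test each intermediate node against the three cases. This sketch is exponential in the number of paths, unlike the linear-time procedure described above, but it makes the rules concrete on the Burglary example (single nodes rather than sets, for simplicity).

```python
# Edges of the Burglary/Earthquake example DAG (child -> set of parents).
parents = {"A": {"B", "E"}, "R": {"E"}, "C": {"A"}, "B": set(), "E": set()}
nodes = set(parents)
children = {n: {c for c in nodes if n in parents[c]} for n in nodes}

def descendants(v):
    seen, stack = set(), [v]
    while stack:
        for c in children[stack.pop()]:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def d_separated(x, y, z):
    """True iff every undirected path from x to y is blocked given evidence z."""
    def paths(cur, goal, visited):
        if cur == goal:
            yield [cur]
            return
        for nxt in parents[cur] | children[cur]:
            if nxt not in visited:
                for rest in paths(nxt, goal, visited | {nxt}):
                    yield [cur] + rest
    for path in paths(x, y, {x}):
        active = True
        for i in range(1, len(path) - 1):
            a, v, b = path[i - 1], path[i], path[i + 1]
            collider = a in parents[v] and b in parents[v]
            if collider:
                if v not in z and not (descendants(v) & z):
                    active = False   # v-structure with no evidence at or below it
                    break
            elif v in z:
                active = False       # chain or fork blocked by an observed node
                break
        if active:
            return False             # found an active path: not d-separated
    return True
```

On this network the sketch reproduces the example slides: d-sep(R,B) = yes, d-sep(R,B|A) = no (observing the common effect activates the path), and d-sep(R,B|E,A) = yes.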

Soundness Thm: If –G is an I-Map of P –d-sep( X; Y | Z, G ) = yes then –P satisfies I( X; Y | Z ) Informally, Any independence reported by d-separation is satisfied by underlying distribution

Completeness Thm: If d-sep( X; Y | Z, G ) = no then there is a distribution P such that –G is an I-Map of P –P does not satisfy I( X; Y | Z ) Informally, Any independence not reported by d-separation might be violated by the underlying distribution We cannot determine this by examining the graph structure alone

I-Maps revisited The fact that G is an I-Map of P might not be that useful For example, complete DAGs –A DAG G is complete if we cannot add an arc without creating a cycle These DAGs do not imply any independencies Thus, they are I-Maps of any distribution (figure: two complete DAGs over X 1, X 2, X 3, X 4 )

Minimal I-Maps A DAG G is a minimal I-Map of P if G is an I-Map of P, and if G’ ⊂ G, then G’ is not an I-Map of P That is, removing any arc from G introduces (conditional) independencies that do not hold in P

Minimal I-Map Example If the graph shown is a minimal I-Map, then removing any single arc yields a graph that is not an I-Map (figure: a minimal I-Map over X 1,…,X 4 and four graphs, each with one arc removed)

Constructing minimal I-Maps The factorization theorem suggests an algorithm Fix an ordering X 1,…,X n For each i, –select Pa i to be a minimal subset of {X 1,…,X i-1 }, such that I(X i ; {X 1,…,X i-1 } - Pa i | Pa i ) Clearly, the resulting graph is a minimal I-Map.

Non-uniqueness of minimal I-Map Unfortunately, there may be several minimal I-Maps for the same distribution –Applying the I-Map construction procedure with different orders can lead to different structures (figure: the original I-Map over E, B, A, C, R and the minimal I-Map obtained with the order C, R, A, E, B)

Choosing Ordering & Causality The choice of order can have a drastic impact on the complexity of the minimal I-Map Heuristic argument: construct the I-Map using a causal ordering among the variables Justification? –It is often reasonable to assume that graphs of causal influence should satisfy the Markov properties.

P-Maps A DAG G is a P-Map (perfect map) of a distribution P if –I(X; Y | Z) if and only if d-sep(X; Y |Z, G) = yes Notes: A P-Map captures all the independencies in the distribution P-Maps are unique, up to DAG equivalence

P-Maps Unfortunately, some distributions do not have a P-Map Example: (figure: a minimal I-Map over A, B, C) This is not a P-Map since I(A;C) holds in the distribution but d-sep(A;C) = no

Bayesian Networks A Bayesian network specifies a probability distribution via two components: –A DAG G –A collection of conditional probability distributions P(X i |Pa i ) The joint distribution P is defined by the factorization P(X 1,…,X n ) = Π i P(X i | Pa i ) Additional requirement: G is a minimal I-Map of P

Summary We explored DAGs as a representation of conditional independencies: –Markov independencies of a DAG –Tight correspondence between Markov(G) and the factorization defined by G –d-separation, a sound & complete procedure for computing the consequences of the independencies –Notion of minimal I-Map –P-Maps This theory is the basis for defining Bayesian networks

Markov Networks We now briefly consider an alternative representation of conditional independencies Let U be an undirected graph Let N i be the set of neighbors of X i Define Markov(U) to be the set of independencies I( X i ; {X 1,…,X n } - N i - {X i } | N i ) U is an I-Map of P if P satisfies Markov(U)

Example This graph implies that I(A; C | B, D ) I(B; D | A, C ) Note: this example does not have a directed P-Map (figure: the 4-cycle A — B — C — D)

Markov Network Factorization Thm: if P is strictly positive, that is P(x 1, …, x n ) > 0 for all assignments, then U is an I-Map of P if and only if there is a factorization P(x 1, …, x n ) = (1/Z) Π j f j (C j ) where C 1, …, C k are the maximal cliques in U and Z is a normalizing constant Alternative form: P(x 1, …, x n ) ∝ exp( Σ j log f j (C j ) )

Bayesian Networks to Markov Networks We’ve seen that Pa i separates X i from its non-descendents What separates X i from the rest of the nodes? Markov Blanket: the minimal set Mb i such that I(X i ; {X 1,…,X n } - Mb i - {X i } | Mb i ) To construct the Markov blanket we need to consider all paths from X i to other nodes

Markov Blanket (cont) Three types of Paths: “Upward” paths –Blocked by parents X

Markov Blanket (cont) Three types of Paths: “Upward” paths –Blocked by parents “Downward” paths –Blocked by children X

Markov Blanket (cont) Three types of Paths: “Upward” paths –Blocked by parents “Downward” paths –Blocked by children “Sideway” paths –Blocked by “spouses” X

Markov Blanket (cont) We define the Markov Blanket for a DAG G Mb i consists of –Pa i –X i ’s children –Parents of X i ’s children (excluding X i ) Easy to see: If X j is in Mb i then X i is in Mb j

Moralized Graphs Given a DAG G, we define the moralized graph of G to be an undirected graph U such that –if X → Y in G, then X -- Y in U –if X → Y ← Z in G, then X -- Z in U –no other edges are in U In other words: X -- Y in U if X is in Y’s Markov blanket If G is an I-Map of P, then U is also an I-Map of P
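The two rules above translate directly into a short procedure. A sketch on the Burglary example (the `parents` dictionary is the same hypothetical encoding used earlier): keep every parent-child edge, "marry" every pair of co-parents, and drop directions.

```python
from itertools import combinations

# Moralize a DAG: keep each parent-child edge, marry co-parents, drop directions.
# `parents` maps each node to the set of its parents.
def moralize(parents):
    undirected = set()
    for child, pars in parents.items():
        for p in pars:                         # rule 1: X -> Y gives X -- Y
            undirected.add(frozenset((p, child)))
        for p, q in combinations(sorted(pars), 2):
            undirected.add(frozenset((p, q)))  # rule 2: marry co-parents
    return undirected

# Burglary/Earthquake example: moralization adds the B -- E edge,
# since B and E are co-parents of A.
edges = moralize({"A": {"B", "E"}, "R": {"E"}, "C": {"A"}, "B": set(), "E": set()})
```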

Markov Networks vs. Bayesian Networks The transformation to a moral graph loses information about independencies It is easy to show that undirected graphs satisfy: –I( X ; Y | Z ) ⇒ I( X ; Y | Z, Z’ ) Adding more evidence does not create dependencies Thus, Markov networks cannot model “explaining away”

Example I( E ; B ) is not satisfied by the moralized graph (figure: the Burglary network and its moralized graph, which adds the edge E -- B)

Relationship between Directed & Undirected Models (figure: chain graphs subsume both directed graphs and undirected graphs)

CPDs So far, we focused on how to represent independencies using DAGs The “other” component of a Bayesian network is the specification of the conditional probability distributions (CPDs) We start with the simplest representation of CPDs and then discuss additional structure

Tabular CPDs When the variables of interest are all discrete, the common representation is a table: for example, P(C|A,B) can be represented by a table with a row for each joint value of A and B, and columns P(C = 0 | A, B) and P(C = 1 | A, B)

Tabular CPDs Pros: Very flexible, can capture any CPD of discrete variables Can be easily stored and manipulated Cons: Representation size grows exponentially with the number of parents! Unwieldy to assess probabilities for more than a few parents

Structured CPD To avoid the exponential blowup in representation, we need to focus on specialized types of CPDs This comes at a cost in terms of expressive power We now consider several types of structured CPDs

Deterministic CPDs The simplest form of CPD is one where P(X|Y 1,…,Y k ) is defined as P(x | y 1,…,y k ) = 1 if x = f(y 1,…,y k ), and 0 otherwise, where f is some function In this case X is determined by the values of Y 1,…,Y k Depending on the class of functions we are willing to consider, this representation can be compact

Deterministic CPDs and d-separation Deterministic relations can induce additional independencies in a graph Example: Suppose that C is determined by A, B In the standard DAG we have that d-sep(D; E | A, B ) = no However, observing A and B implies that we also know the value of C Thus, we can conclude that Ind(D; E | A, B)

Deterministic CPDs and d-separation General solution: –Given a query d-sep(X ; Y | Z ) –While there is an X i such that P(X i | Pa i ) is a deterministic CPT and Pa i ⊆ Z: Z ← Z ∪ { X i } –Run d-sep( X ; Y | Z )

Causal Independence Consider the following situation: three diseases are parents of Fever In a tabular CPD, we need to assess the probability of fever in eight cases These involve all possible interactions between diseases For three diseases, this might be feasible… For ten diseases, not likely…

Causal Independence Simplifying assumption: –Each disease attempts to cause fever, independently of the other diseases –The patient has fever if one of the diseases “succeeds” We can model this using a Bayesian network fragment: hypothetical variables F i (“fever caused by Disease i”) and SF (spontaneous fever) feed into an OR gate, F = or(SF, F 1, F 2, F 3 )

Noisy-Or CPD Models P(X|Y 1,…,Y k ), where X, Y 1,…, Y k are all binary Parameters: –p i -- probability of X = 1 due to Y i = 1 –p 0 -- probability of X = 1 due to other causes Plugging these into the model we get P(X = 0 | y 1,…,y k ) = (1 - p 0 ) Π i: y i = 1 (1 - p i )
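The noisy-or formula is a one-liner in code. A sketch with hypothetical disease/fever parameters: X stays off only if the leak and every active cause all fail independently.

```python
# Noisy-OR CPD: each active cause Yi independently fails to trigger X with
# probability (1 - p_i); the leak term p0 covers "other causes".
def noisy_or(p0, p, y):
    """P(X = 1 | y), with p[i] the per-cause parameter and y[i] in {0, 1}."""
    fail = 1.0 - p0
    for pi, yi in zip(p, y):
        if yi == 1:
            fail *= (1.0 - pi)    # cause i is active but fails to fire X
    return 1.0 - fail

# Hypothetical numbers: diseases 1 and 3 present, disease 2 absent.
p_fever = noisy_or(0.01, [0.6, 0.3, 0.9], [1, 0, 1])
```

Note the parameter count: k + 1 numbers instead of the 2 k rows a full table would need, and each p i can be assessed on its own.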

Noisy-Or CPD Benefits of noisy-or –“Reasonable” assumptions in many domains e.g., medical domain –Few parameters. –Each parameter can be estimated independently of the others The same idea can be extended to other functions: noisy-max, noisy-and, etc. Frequently used in large medical expert systems

Context Specific Independence Consider the following examples: Alarm sound depends on –Whether the alarm was set before leaving the house –Burglary –Earthquake Arriving on time depends on –Travel route –The congestion on the two possible routes

Context-Specific Independence In both of these examples we have context-specific independencies (CSI) –Independencies that depend on a particular value of one or more variables In our examples: –Ind( A ; B, E | S = 0 ) Alarm sound is independent of B and E when the alarm is not set –Ind( A ; R 2 | T = 1 ) Arrival time is independent of traffic on route 2 if we choose to travel on route 1

Representing CSI When we have such CSI, P(X | Y 1,…,Y k ) is the same for several values of Y 1,…,Y k There are many ways of representing these regularities A natural representation: decision trees –Internal nodes: tests on parents –Leaves: probability distributions on X Evaluate P(X | Y 1,…,Y k ) by traversing the tree (figure: a decision tree with internal tests on S, B, E)

Detecting CSI Given evidence on some nodes, we can identify the “relevant” parts of the tree –This consists of the paths in the tree that are consistent with the context Example –Context S = 0 –Only one path of the tree is relevant A parent is independent given the context if it does not appear on one of the relevant paths

CSI and d-separation Once we represent local CSI in CPDs, we can deduce additional CSI in the DAG Example: –Context S = 0 –Ind(A ; B, E | S = 0 ) –Thus, two edges become inactive in the DAG –Possible conclusion: Ind(C ; B | S = 0)

Decision Tree CPDs Benefits Decision trees offer a flexible and intuitive language to represent CSI Incorporated into several commercial tools for constructing Bayesian networks Comparison to noisy-or Noisy-or CPDs require full trees to represent General decision tree CPDs cannot be represented by noisy-or

Continuous CPDs When X is a continuous variable, we need to represent the density of X given any value of its parents We do not have a general representation that can capture all possible conditional densities

Gaussian Distribution One of the most common representations Unconditional density: p(x) = (1 / √(2πσ 2 )) exp( -(x - μ) 2 / (2σ 2 ) )

Linear-Gaussian CPDs Represent P(X | Y 1,…,Y k ) as a Gaussian –Fixed variance –Mean depends linearly on the value of Y 1,…,Y k : P(X | y 1,…,y k ) = N( a 0 + Σ i a i y i ; σ 2 )

Linear Gaussian CPDs Let B be a Bayesian network of continuous variables with linear-Gaussian CPDs Then B defines a multivariate Gaussian distribution
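The Gaussian closure can be checked on the smallest possible network, X → Y, with both nodes Gaussian. A sketch with made-up parameters: the marginal of Y is again Gaussian, with E[Y] = w 0 + w 1 μ x and Var[Y] = w 1 2 Var[X] + σ 2, which we verify against ancestral sampling.

```python
import math
import random

# Linear-Gaussian CPD: P(Y | x) = N(w0 + w1 * x, sigma2).  With a Gaussian
# parent X ~ N(mu_x, var_x), the marginal of Y is again Gaussian:
#   E[Y] = w0 + w1 * mu_x,   Var[Y] = w1**2 * var_x + sigma2.
mu_x, var_x = 1.0, 1.0          # parent distribution (hypothetical numbers)
w0, w1, sigma2 = 0.5, 2.0, 0.5  # CPD parameters (hypothetical numbers)

mu_y = w0 + w1 * mu_x           # analytic marginal mean of Y
var_y = w1 ** 2 * var_x + sigma2  # analytic marginal variance of Y

def sample_y(rng):
    """Ancestral sampling: draw X, then Y | X from the linear-Gaussian CPD."""
    x = rng.gauss(mu_x, math.sqrt(var_x))
    return rng.gauss(w0 + w1 * x, math.sqrt(sigma2))

rng = random.Random(0)
mc_mean = sum(sample_y(rng) for _ in range(20000)) / 20000
```

The Monte Carlo mean lands close to the analytic value, illustrating the theorem in miniature; in a larger network the same argument applies variable by variable.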

Conditional Gaussian CPDs A model for networks that combine discrete and continuous variables If X is continuous –Y 1,…,Y k are continuous –Z 1,…,Z l are discrete Conditional Gaussian (CG) CPD: for each joint value of Z 1,…,Z l, define different linear-Gaussian parameters The resulting multivariate distribution is a mixture of multivariate Gaussians –Each assignment of values to the discrete variables selects a multivariate Gaussian over the continuous variables

Summary Many choices for representing CPDs Any “statistical” model of conditional distribution can be used –e.g., any regression model Representing structure in CPDs can have implications on independencies among variables

Inference in Bayesian Networks

Inference We now have compact representations of probability distributions: –Bayesian Networks –Markov Networks Network describes a unique probability distribution P How do we answer queries about P ? We use inference as a name for the process of computing answers to such queries

Queries: Likelihood There are many types of queries we might ask. Most of these involve evidence –An evidence e is an assignment of values to a set E of variables in the domain –Without loss of generality E = { X k+1, …, X n } Simplest query: compute the probability of the evidence, P(e) = Σ x 1,…,x k P(x 1,…,x k, e) This is often referred to as computing the likelihood of the evidence

Queries: A posteriori belief Often we are interested in the conditional probability of a variable given the evidence, P(X | e) This is the a posteriori belief in X, given evidence e A related task is computing the term P(X, e) –i.e., the likelihood of e and X = x for each value of X –we can recover the a posteriori belief by P(X = x | e) = P(X = x, e) / P(e)

A posteriori belief This query is useful in many cases: Prediction: what is the probability of an outcome given the starting condition –Target is a descendent of the evidence Diagnosis: what is the probability of disease/fault given symptoms –Target is an ancestor of the evidence As we shall see, the direction of the edges between variables does not restrict the direction of the queries –Probabilistic inference can combine evidence from all parts of the network

Queries: A posteriori joint In this query, we are interested in the conditional probability of several variables, given the evidence P(X, Y, … | e ) Note that the size of the answer to query is exponential in the number of variables in the joint

Queries: MAP In this query we want to find the maximum a posteriori assignment for some variables of interest (say X 1,…,X l ) That is, x 1,…,x l that maximize the probability P(x 1,…,x l | e) Note that this is equivalent to maximizing P(x 1,…,x l, e)

Queries: MAP We can use MAP for: Classification –find most likely label, given the evidence Explanation –What is the most likely scenario, given the evidence

Queries: MAP Cautionary note: The MAP depends on the set of variables Example: –The MAP assignment of X alone may differ from the value of X in the MAP assignment of (X, Y)

Complexity of Inference Thm: Computing P(X = x) in a Bayesian network is NP-hard Not surprising, since we can simulate Boolean gates.

Hardness Hardness does not mean we cannot solve inference –It implies that we cannot find a general procedure that works efficiently for all networks –For particular families of networks, we can have provably efficient procedures

Approaches to inference Exact inference –Inference in Simple Chains –Variable elimination –Clustering / join tree algorithms Approximate inference –Stochastic simulation / sampling methods –Markov chain Monte Carlo methods –Mean field theory

Inference in Simple Chains How do we compute P(X 2 ) in the chain X 1 → X 2 ? P(x 2 ) = Σ x 1 P(x 1 ) P(x 2 | x 1 )

Inference in Simple Chains (cont.) How do we compute P(X 3 ) in the chain X 1 → X 2 → X 3 ? We already know how to compute P(X 2 ), so P(x 3 ) = Σ x 2 P(x 2 ) P(x 3 | x 2 )

Inference in Simple Chains (cont.) How do we compute P(X n ) in the chain X 1 → X 2 → … → X n ? Compute P(X 1 ), P(X 2 ), P(X 3 ), … in order, using P(x i+1 ) = Σ x i P(x i ) P(x i+1 | x i ) We compute each term by using the previous one Complexity: each step costs O(|Val(X i )|·|Val(X i+1 )|) operations Compare to naïve evaluation, which requires summing over the joint values of n-1 variables
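The forward iteration above can be sketched in a few lines (all CPD numbers are hypothetical): each step multiplies the current marginal into the next transition table and sums out the previous variable.

```python
# Forward pass in a chain X1 -> X2 -> ... -> Xn of binary variables:
#   P(x_{i+1}) = sum_{x_i} P(x_i) P(x_{i+1} | x_i).
# Each step costs O(|Val(Xi)| * |Val(X_{i+1})|), not a sum over the full joint.
def forward_marginals(p_x1, transitions):
    """p_x1: [P(X1=0), P(X1=1)];
    transitions[i][a][b] = P(X_{i+2} = b | X_{i+1} = a)."""
    marginals = [p_x1]
    for t in transitions:
        prev = marginals[-1]
        nxt = [sum(prev[a] * t[a][b] for a in range(2)) for b in range(2)]
        marginals.append(nxt)
    return marginals

# Hypothetical three-node chain X1 -> X2 -> X3.
ms = forward_marginals([0.6, 0.4],
                       [[[0.9, 0.1], [0.2, 0.8]],
                        [[0.7, 0.3], [0.5, 0.5]]])
```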

Inference in Simple Chains (cont.) Suppose that we observe the value of X 2 = x 2 in the chain X 1 → X 2 How do we compute P(X 1 |x 2 ) ? –Recall that it suffices to compute P(X 1,x 2 ) = P(X 1 ) P(x 2 | X 1 )

Inference in Simple Chains (cont.) Suppose that we observe the value of X 3 = x 3 in the chain X 1 → X 2 → X 3 How do we compute P(X 1,x 3 ) ? P(X 1,x 3 ) = P(X 1 ) Σ x 2 P(x 2 | X 1 ) P(x 3 | x 2 ) How do we compute P(x 3 |x 1 ) ? P(x 3 | x 1 ) = Σ x 2 P(x 2 | x 1 ) P(x 3 | x 2 )

Inference in Simple Chains (cont.) Suppose that we observe the value of X n =x n How do we compute P(X 1,x n ) ? We compute P(x n |x n-1 ), P(x n |x n-2 ), … iteratively X1X1 X2X2 X3X3 XnXn...

Inference in Simple Chains (cont.) Suppose that we observe the value of X n = x n We want to find P(X k |x n ) for a node X k in the middle of the chain How do we compute P(X k,x n ) ? We compute P(X k ) by forward iterations We compute P(x n | X k ) by backward iterations

Elimination in Chains We now try to understand the simple chain example from first principles, using the chain A → B → C → D → E Using the definition of probability, we have P(e) = Σ a Σ b Σ c Σ d P(a, b, c, d, e)

Elimination in Chains By the chain decomposition, we get P(e) = Σ a Σ b Σ c Σ d P(a) P(b|a) P(c|b) P(d|c) P(e|d)

Elimination in Chains Rearranging terms... P(e) = Σ d P(e|d) Σ c P(d|c) Σ b P(c|b) Σ a P(b|a) P(a)

Elimination in Chains Now we can perform the innermost summation: Σ a P(b|a) P(a) = P(b) This summation is exactly the first step in the forward iteration we described before

Elimination in Chains Rearranging and then summing again, we get P(e) = Σ d P(e|d) Σ c P(d|c) Σ b P(c|b) P(b) = Σ d P(e|d) Σ c P(d|c) P(c)

Elimination in Chains with Evidence Similarly, we understand the backward pass We write the query in explicit form: P(a, e) = Σ b Σ c Σ d P(a) P(b|a) P(c|b) P(d|c) P(e|d)

Elimination in Chains with Evidence Eliminating d, we get P(a, e) = Σ b Σ c P(a) P(b|a) P(c|b) f d (c), where f d (c) = Σ d P(d|c) P(e|d)

Elimination in Chains with Evidence Eliminating c, we get P(a, e) = Σ b P(a) P(b|a) f c (b), where f c (b) = Σ c P(c|b) f d (c)

Elimination in Chains with Evidence Finally, we eliminate b: P(a, e) = P(a) f b (a), where f b (a) = Σ b P(b|a) f c (b)

Variable Elimination General idea: Write the query in the form P(X 1, e) = Σ x n … Σ x 2 Π i P(x i | Pa i ) Iteratively –Move all irrelevant terms outside of the innermost sum –Perform the innermost sum, getting a new term –Insert the new term into the product
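The rewriting loop above can be sketched as a tiny factor library: multiply all factors mentioning the elimination variable, sum it out, and put the new factor back in the pool. This sketch assumes binary variables and represents a factor as a (variables, table) pair; the final demo eliminates A from the two-node chain A → B with hypothetical CPDs.

```python
from itertools import product

# Minimal variable elimination over discrete factors.  A factor is a pair
# (variables_tuple, table_dict) with the table keyed by value tuples.
# For simplicity, every variable here is binary.
def multiply(f, g):
    fv, ft = f
    gv, gt = g
    vars_ = fv + tuple(v for v in gv if v not in fv)
    table = {}
    for vals in product([0, 1], repeat=len(vars_)):
        a = dict(zip(vars_, vals))
        table[vals] = (ft[tuple(a[v] for v in fv)] *
                       gt[tuple(a[v] for v in gv)])
    return (vars_, table)

def sum_out(f, var):
    fv, ft = f
    keep = tuple(v for v in fv if v != var)
    table = {}
    for vals, p in ft.items():
        key = tuple(v for v, name in zip(vals, fv) if name != var)
        table[key] = table.get(key, 0.0) + p
    return (keep, table)

def eliminate(factors, order):
    """For each variable: multiply the factors touching it, then sum it out."""
    factors = list(factors)
    for var in order:
        touching = [f for f in factors if var in f[0]]
        rest = [f for f in factors if var not in f[0]]
        prod = touching[0]
        for f in touching[1:]:
            prod = multiply(prod, f)
        factors = rest + [sum_out(prod, var)]
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    return result

# Tiny chain A -> B with factors P(A) and P(B|A); eliminating A yields P(B).
fa = (("A",), {(0,): 0.3, (1,): 0.7})
fba = (("B", "A"), {(0, 0): 0.8, (0, 1): 0.4, (1, 0): 0.2, (1, 1): 0.6})
p_b = eliminate([fa, fba], ["A"])
```

Exactly the same loop runs on Markov-network factors, since nothing in it assumes the tables are conditional probabilities.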

A More Complex Example The “Asia” network: Visit to Asia (V), Smoking (S), Tuberculosis (T), Lung Cancer (L), Bronchitis (B), Abnormality in Chest (A), X-Ray (X), Dyspnea (D), with edges V → T, S → L, S → B, T → A, L → A, A → X, A → D, B → D

We want to compute P(d) Need to eliminate: v,s,x,t,l,a,b Initial factors: P(v) P(s) P(t|v) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b)

We want to compute P(d) Need to eliminate: v,s,x,t,l,a,b Initial factors: P(v) P(s) P(t|v) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b) Eliminate: v Compute: f v (t) = Σ v P(v) P(t|v) Note: f v (t) = P(t) In general, the result of elimination is not necessarily a probability term

We want to compute P(d) Need to eliminate: s,x,t,l,a,b Current factors: f v (t) P(s) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b) Eliminate: s Compute: f s (b,l) = Σ s P(s) P(b|s) P(l|s) Summing on s results in a factor with two arguments, f s (b,l) In general, the result of elimination may be a function of several variables

We want to compute P(d) Need to eliminate: x,t,l,a,b Current factors: f v (t) f s (b,l) P(a|t,l) P(x|a) P(d|a,b) Eliminate: x Compute: f x (a) = Σ x P(x|a) Note: f x (a) = 1 for all values of a !!

We want to compute P(d) Need to eliminate: t,l,a,b Current factors: f v (t) f s (b,l) P(a|t,l) f x (a) P(d|a,b) Eliminate: t Compute: f t (a,l) = Σ t f v (t) P(a|t,l)

We want to compute P(d) Need to eliminate: l,a,b Current factors: f s (b,l) f t (a,l) f x (a) P(d|a,b) Eliminate: l Compute: f l (a,b) = Σ l f s (b,l) f t (a,l)

We want to compute P(d) Need to eliminate: a,b Current factors: f l (a,b) f x (a) P(d|a,b) Eliminate: a, then b Compute: f a (b,d) = Σ a f l (a,b) f x (a) P(d|a,b), then f b (d) = Σ b f a (b,d), giving P(d) = f b (d)

Variable Elimination We now understand variable elimination as a sequence of rewriting operations Actual computation is done in elimination step Exactly the same computation procedure applies to Markov networks Computation depends on order of elimination

Dealing with evidence How do we deal with evidence? Suppose we get evidence V = t, S = f, D = t We want to compute P(L, V = t, S = f, D = t)

Dealing with Evidence We start by writing the factors: P(v) P(s) P(t|v) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b) Since we know that V = t, we don’t need to eliminate V Instead, we can replace the factors P(V) and P(T|V) with f P(V) = P(V = t) and f P(T|V) (T) = P(T | V = t) These “select” the appropriate parts of the original factors given the evidence Note that f P(V) is a constant, and thus does not appear in the elimination of other variables

Dealing with Evidence Given evidence V = t, S = f, D = t Compute P(L, V = t, S = f, D = t) Initial factors, after setting evidence: P(V = t) P(S = f) P(t|V = t) P(l|S = f) P(b|S = f) P(a|t,l) P(x|a) P(D = t|a,b)

Dealing with Evidence (cont.) Eliminating x, we get f_x(a) = Σ_x P(x|a) Eliminating t, we get f_t(a,l) = Σ_t P(t|V = t) P(a|t,l) Eliminating a, we get f_a(b,l) = Σ_a f_t(a,l) f_x(a) P(D = t|a,b) Eliminating b, we get f_b(l) = Σ_b P(b|S = f) f_a(b,l) The remaining product P(V = t) P(S = f) P(l|S = f) f_b(l) is P(L, V = t, S = f, D = t)

Complexity of variable elimination Suppose in one elimination step we compute f_x(y_1, …, y_k) = Σ_x Π_{i=1..m} f_i(x, y_1, …, y_k) This requires: –Multiplications: for each value of x, y_1, …, y_k, we do m multiplications –Additions: for each value of y_1, …, y_k, we do |Val(X)| additions Complexity is exponential in the number of variables in the intermediate factor!

Understanding Variable Elimination We want to select “good” elimination orderings that reduce complexity We start by attempting to understand variable elimination via the graph we are working with This reduces the problem of finding a good ordering to a well-understood graph-theoretic operation

Undirected graph representation At each stage of the procedure, we have an algebraic term that we need to evaluate In general this term is of the form Σ … Π_i f_i(Z_i), where the Z_i are sets of variables We now draw an undirected graph with an edge X–Y if X and Y are arguments of some factor –that is, if X and Y appear together in some Z_i Note: this is the Markov network that describes the distribution over the variables we have not yet eliminated

Chordal Graphs An elimination ordering induces an undirected chordal graph Maximal cliques in the induced graph are the factors created during elimination; factors in elimination are cliques in the graph Complexity is exponential in the size of the largest clique in the graph

Induced Width The size of the largest clique in the induced graph is thus an indicator of the complexity of variable elimination This quantity is called the induced width of the graph according to the specified ordering Finding a good ordering for a graph is equivalent to finding an ordering that achieves the minimal induced width of the graph
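The induced width of a given ordering can be measured by simulating elimination on the interaction graph: when a variable is eliminated, its neighbors are connected (fill-in edges) and the variable is removed. The graph and ordering below are illustrative assumptions.

```python
def induced_width(edges, order):
    """Largest number of neighbors any variable has when eliminated."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    width = 0
    for var in order:
        nbrs = adj.get(var, set())
        width = max(width, len(nbrs))
        # connect neighbors pairwise (fill-in), then remove the variable
        for u in nbrs:
            for v in nbrs:
                if u != v:
                    adj[u].add(v)
        for u in nbrs:
            adj[u].discard(var)
        adj.pop(var, None)
    return width

# A 4-cycle: eliminating around the cycle forces one fill-in edge,
# so the induced width is 2.
cycle = [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'A')]
print(induced_width(cycle, ['A', 'B', 'C', 'D']))  # 2
```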

General Networks From graph theory: Thm: Finding an ordering that minimizes the induced width is NP-hard However: There are reasonable heuristics for finding “relatively” good orderings There are provable approximations to the best induced width If the graph has a small induced width, there are algorithms that find it in polynomial time

Elimination on Trees Formally, for any tree there is an elimination ordering with induced width 1 Thm: Inference on trees is linear in the number of variables

PolyTrees A polytree is a network where there is at most one undirected path between any two variables Thm: Inference in a polytree is linear in the representation size of the network –This assumes a tabular CPT representation

Approaches to inference Exact inference –Inference in Simple Chains –Variable elimination –Clustering / join tree algorithms Approximate inference –Stochastic simulation / sampling methods –Markov chain Monte Carlo methods –Mean field theory

Stochastic simulation Suppose you are given values for some subset of the variables, G, and want to infer values for unknown variables, U Randomly generate a very large number of instantiations from the BN –Generate instantiations for all variables – start at root variables and work your way “forward” Only keep those instantiations that are consistent with the values for G Use the frequency of values for U to get estimated probabilities Accuracy of the results depends on the size of the sample (asymptotically approaches exact results)
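The forward-sample-and-reject scheme described above can be sketched on a made-up two-node network; the network, its probabilities, and the function names are illustrative assumptions, not part of the slides.

```python
import random

def sample_network():
    """Forward-sample a toy network Rain -> WetGrass, roots first."""
    rain = random.random() < 0.2
    wet = random.random() < (0.9 if rain else 0.1)
    return rain, wet

def estimate_p_rain_given_wet(n_samples=200_000, seed=0):
    """Rejection sampling: keep only samples consistent with WetGrass."""
    random.seed(seed)
    kept = hits = 0
    for _ in range(n_samples):
        rain, wet = sample_network()
        if wet:                 # discard samples inconsistent with evidence
            kept += 1
            hits += rain
    return hits / kept

# Exact posterior: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.18/0.26 ≈ 0.692
print(estimate_p_rain_given_wet())
```

Note how rejection wastes work: only about 26% of the samples survive here, and the waste grows as the evidence becomes less likely.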

Markov chain Monte Carlo methods So called because –Markov chain – each instance generated in the sample is dependent on the previous instance –Monte Carlo – statistical sampling method Perform a random walk through variable assignment space, collecting statistics as you go –Start with a random instantiation, consistent with evidence variables –At each step, for some nonevidence variable, randomly sample its value, consistent with the other current assignments Given enough samples, MCMC gives an accurate estimate of the true distribution of values
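The random walk described above, with each non-evidence variable resampled given the current values of the others, is Gibbs sampling. Below is a sketch on a made-up chain A → B → C with evidence C = 1; all probabilities and names are illustrative assumptions.

```python
import random

# Toy chain A -> B -> C (all binary), illustrative CPTs.
P_A1 = 0.5
P_B1_given_A = {1: 0.8, 0: 0.3}
P_C1_given_B = {1: 0.9, 0: 0.2}

def gibbs_p_a_given_c1(n_iter=200_000, burn_in=1000, seed=1):
    """Estimate P(A=1 | C=1) by Gibbs sampling over A and B."""
    random.seed(seed)
    a, b = 1, 1                      # arbitrary starting assignment
    hits = total = 0
    for t in range(n_iter):
        # resample A from P(A | b), proportional to P(A) P(b | A)
        w1 = P_A1 * (P_B1_given_A[1] if b else 1 - P_B1_given_A[1])
        w0 = (1 - P_A1) * (P_B1_given_A[0] if b else 1 - P_B1_given_A[0])
        a = int(random.random() < w1 / (w1 + w0))
        # resample B from P(B | a, C=1), proportional to P(B|a) P(C=1|B)
        w1 = P_B1_given_A[a] * P_C1_given_B[1]
        w0 = (1 - P_B1_given_A[a]) * P_C1_given_B[0]
        b = int(random.random() < w1 / (w1 + w0))
        if t >= burn_in:             # collect statistics after burn-in
            hits += a
            total += 1
    return hits / total

# Exact answer by enumeration: P(A=1 | C=1) = 0.38 / 0.585 ≈ 0.65
print(gibbs_p_a_given_c1())
```

Unlike rejection sampling, no sample is discarded: the evidence variable is simply clamped, and each resampling step only needs the variable's Markov blanket.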

Learning Bayesian Networks

Learning Bayesian networks Inducer: Data + prior information → a network structure together with conditional probability tables such as P(A | E, B)

Known Structure -- Complete Data Network structure is specified –Inducer needs to estimate parameters (fill in the CPT entries, e.g., P(A | E, B)) Data does not contain missing values

Unknown Structure -- Complete Data Network structure is not specified –Inducer needs to select arcs & estimate parameters Data does not contain missing values

Known Structure -- Incomplete Data Network structure is specified Data contains missing values –We consider assignments to missing values

Known Structure / Complete Data Given a network structure G –And a choice of parametric family for P(X i |Pa i ) Learn parameters for the network Goal Construct a network that is “closest” to the distribution that generated the data

Learning Parameters for a Bayesian Network Training data has the form: D = { ⟨e[1], b[1], a[1], c[1]⟩, …, ⟨e[M], b[M], a[M], c[M]⟩ }

Learning Parameters for a Bayesian Network Since we assume i.i.d. samples, the likelihood function is L(Θ : D) = Π_m P(e[m], b[m], a[m], c[m] : Θ)

Learning Parameters for a Bayesian Network By definition of the network, we get L(Θ : D) = Π_m P(e[m]) P(b[m]) P(a[m] | b[m], e[m]) P(c[m] | a[m])

Learning Parameters for a Bayesian Network Rewriting terms, we get L(Θ : D) = [Π_m P(e[m])] · [Π_m P(b[m])] · [Π_m P(a[m] | b[m], e[m])] · [Π_m P(c[m] | a[m])] – one local likelihood per variable

General Bayesian Networks Generalizing for any Bayesian network: L(Θ : D) = Π_m Π_i P(x_i[m] | pa_i[m] : Θ_i) (i.i.d. samples) = Π_i L_i(Θ_i : D) (network factorization) The likelihood decomposes according to the structure of the network.

General Bayesian Networks (Cont.) Decomposition ⇒ independent estimation problems If the parameters for each family are not related, then they can be estimated independently of each other.

From Binomial to Multinomial For example, suppose X can take the values 1, 2, …, K We want to learn the parameters θ_1, θ_2, …, θ_K Sufficient statistics: N_1, N_2, …, N_K - the number of times each outcome is observed Likelihood function: L(θ : D) = Π_k θ_k^{N_k} MLE: θ̂_k = N_k / Σ_ℓ N_ℓ
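The multinomial MLE is just normalized counts, which a couple of lines make concrete; the function name and example data are illustrative.

```python
from collections import Counter

def multinomial_mle(data):
    """MLE for a multinomial: theta_k = N_k / sum_l N_l."""
    counts = Counter(data)        # sufficient statistics N_1, ..., N_K
    n = len(data)
    return {k: c / n for k, c in counts.items()}

print(multinomial_mle([1, 2, 2, 3, 2]))  # {1: 0.2, 2: 0.6, 3: 0.2}
```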

Likelihood for Multinomial Networks When we assume that P(X_i | Pa_i) is multinomial, we get a further decomposition: L_i(θ_{X_i|Pa_i} : D) = Π_{pa_i} Π_{x_i} θ_{x_i|pa_i}^{N(x_i, pa_i)} For each value pa_i of the parents of X_i we get an independent multinomial problem The MLE is θ̂_{x_i|pa_i} = N(x_i, pa_i) / N(pa_i)

Maximum Likelihood Estimation Consistency The estimate converges to the best possible value as the number of examples grows To make this formal, we need to introduce some definitions

KL-Divergence Let P and Q be two distributions over X A measure of distance between P and Q is the Kullback-Leibler divergence KL(P||Q) = Σ_x P(x) log(P(x)/Q(x)) KL(P||Q) = 1 (when logs are in base 2) means that the probability P assigns to an instance is, on average, twice the probability Q assigns to it KL(P||Q) ≥ 0 KL(P||Q) = 0 iff P and Q are equal
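For discrete distributions represented as dictionaries, the definition translates directly into code; the base-2 logarithm matches the slide's convention, and the example distributions are made up.

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in bits: sum_x P(x) * log2(P(x) / Q(x))."""
    return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

p = {'a': 0.5, 'b': 0.5}
q = {'a': 0.25, 'b': 0.75}
print(kl_divergence(p, p))           # 0.0  (equal distributions)
print(round(kl_divergence(p, q), 4)) # 0.2075
```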

Consistency Let P(X|θ) be a parametric family –We need various regularity conditions we won’t go into now Let P*(X) be the distribution that generates the data Let θ̂ be the MLE estimate given a dataset D Thm: As N → ∞, θ̂ → θ* with probability 1, where θ* minimizes KL(P* || P(X|θ))

Consistency -- Geometric Interpretation Figure: the space of probability distributions; P* may lie outside the set of distributions that can be represented by P(X|θ), and P(X|θ*) is the closest point in that set

Is MLE all we need? Suppose that after 10 observations, –ML estimates P(H) = 0.7 for the thumbtack –Would you bet on heads for the next toss? Suppose now that after 10 observations, ML estimates P(H) = 0.7 for a coin Would you place the same bet?

Bayesian Inference Frequentist Approach: Assumes there is an unknown but fixed parameter  Estimates  with some confidence Prediction by using the estimated parameter value Bayesian Approach: Represents uncertainty about the unknown parameter Uses probability to quantify this uncertainty: –Unknown parameters as random variables Prediction follows from the rules of probability: –Expectation over the unknown parameters

Bayesian Inference (cont.) We can represent our uncertainty about the sampling process using a Bayesian network: θ → X[1], X[2], …, X[m], X[m+1] The values of X are independent given θ The conditional probabilities P(x[m] | θ) are the parameters in the model Prediction is now inference in this network (observed data: X[1], …, X[m]; query: X[m+1])

Bayesian Inference (cont.) Prediction as inference in this network: P(x[M+1] | D) = ∫ P(x[M+1] | θ) P(θ | D) dθ, where P(θ | D) = P(D | θ) P(θ) / P(D) (posterior = likelihood × prior / probability of data)

Example: Binomial Data Revisited Prior: uniform for θ in [0,1] –P(θ) = 1 Then P(θ|D) is proportional to the likelihood L(θ:D) With (N H,N T ) = (4,1): the MLE for P(X = H) is 4/5 = 0.8, while the Bayesian prediction is P(X[M+1] = H | D) = (N H + 1)/(N H + N T + 2) = 5/7 ≈ 0.71
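The uniform-prior prediction (Laplace's rule of succession) differs from the MLE exactly as on the slide; a one-liner makes the comparison explicit.

```python
def laplace_predict(n_h, n_t):
    """Bayesian prediction with a uniform prior: (N_H+1)/(N_H+N_T+2)."""
    return (n_h + 1) / (n_h + n_t + 2)

def mle_predict(n_h, n_t):
    """Maximum-likelihood prediction: N_H / (N_H + N_T)."""
    return n_h / (n_h + n_t)

print(mle_predict(4, 1))      # 0.8
print(laplace_predict(4, 1))  # 5/7 ≈ 0.714...
```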

Bayesian Inference and MLE In our example, MLE and Bayesian prediction differ But… If prior is well-behaved Does not assign 0 density to any “feasible” parameter value Then: both MLE and Bayesian prediction converge to the same value Both are consistent

Dirichlet Priors Recall that the likelihood function is L(θ : D) = Π_k θ_k^{N_k} A Dirichlet prior with hyperparameters α_1, …, α_K is defined as P(θ) ∝ Π_k θ_k^{α_k − 1} for legal θ_1, …, θ_K Then the posterior has the same form, with hyperparameters α_1 + N_1, …, α_K + N_K

Dirichlet Priors (cont.) We can compute the prediction on a new event in closed form: If P(θ) is Dirichlet with hyperparameters α_1, …, α_K, then P(X[1] = k) = α_k / Σ_ℓ α_ℓ Since the posterior is also Dirichlet, we get P(X[M+1] = k | D) = (α_k + N_k) / Σ_ℓ (α_ℓ + N_ℓ)
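The closed-form Dirichlet prediction is just pseudocount-smoothed frequencies; the function name and example values are illustrative.

```python
def dirichlet_predict(alpha, counts):
    """P(X[M+1]=k | D) = (alpha_k + N_k) / sum_l (alpha_l + N_l)."""
    total = sum(a + n for a, n in zip(alpha, counts))
    return [(a + n) / total for a, n in zip(alpha, counts)]

# Dirichlet(1,1) prior with 4 heads and 1 tail: [5/7, 2/7].
print(dirichlet_predict([1, 1], [4, 1]))
# A stronger prior Dirichlet(5,5) pulls the prediction toward 1/2.
print(dirichlet_predict([5, 5], [4, 1]))  # [9/15, 6/15]
```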

Dirichlet Priors -- Example Figure: densities of the Dirichlet(0.5, 0.5), Dirichlet(1, 1), Dirichlet(2, 2), and Dirichlet(5, 5) priors

Prior Knowledge The hyperparameters α_1, …, α_K can be thought of as “imaginary” counts from our prior experience Equivalent sample size = α_1 + … + α_K The larger the equivalent sample size, the more confident we are in our prior

Effect of Priors Prediction of P(X = H) after seeing data with N H = 0.25·N T, for different sample sizes Left: different strengths α_H + α_T with a fixed ratio α_H / α_T Right: fixed strength α_H + α_T with different ratios α_H / α_T

Effect of Priors (cont.) In real data, Bayesian estimates are less sensitive to noise in the data Figure: P(X = 1|D) as a function of N for the MLE and for Dirichlet(.5,.5), Dirichlet(1,1), Dirichlet(5,5), and Dirichlet(10,10) priors, together with the toss results

Conjugate Families The property that the posterior distribution follows the same parametric form as the prior distribution is called conjugacy –The Dirichlet prior is a conjugate family for the multinomial likelihood Conjugate families are useful since: –For many distributions, the posterior can be represented with a fixed set of hyperparameters –They allow for sequential updating within the same representation –In many cases we have a closed-form solution for prediction

Bayesian Networks and Bayesian Prediction Priors for each parameter group are independent Data instances are independent given the unknown parameters (Figure: parameters θ_X and θ_Y|X shared across instances X[1..M], Y[1..M], shown both unrolled and in plate notation; X[M+1], Y[M+1] is the query)

Bayesian Networks and Bayesian Prediction (Cont.) We can also “read” from the network: Complete data ⇒ posteriors on parameters are independent

Bayesian Prediction (cont.) Since posteriors on parameters for each family are independent, we can compute them separately Posteriors for parameters within families are also independent: Complete data ⇒ independent posteriors on θ_Y|X=0 and θ_Y|X=1 (Figure: the refined model splits θ_Y|X into θ_Y|X=0 and θ_Y|X=1)

Bayesian Prediction (cont.) Given these observations, we can compute the posterior for each multinomial θ_{X_i|pa_i} independently –The posterior is Dirichlet with parameters α(X_i =1|pa_i)+N(X_i =1|pa_i), …, α(X_i =k|pa_i)+N(X_i =k|pa_i) The predictive distribution is then represented by the parameters θ̃_{x_i|pa_i} = (α(x_i, pa_i) + N(x_i, pa_i)) / (α(pa_i) + N(pa_i))

Assessing Priors for Bayesian Networks We need α(x_i, pa_i) for each node X_i We can use initial parameters Θ_0 as prior information –We also need an equivalent sample size parameter M_0 –Then, we let α(x_i, pa_i) = M_0 · P(x_i, pa_i | Θ_0) This allows us to update a network using new data

Learning Parameters: Case Study (cont.) Experiment: Sample a stream of instances from the alarm network Learn parameters using –MLE estimator –Bayesian estimator with uniform prior with different strengths

Learning Parameters: Case Study (cont.) Figure: KL divergence to the true distribution as a function of the number of samples M, for the MLE and for Bayesian estimation with uniform priors of strength M' = 5, 10, 20, 50

Learning Parameters: Summary Estimation relies on sufficient statistics –For multinomials these are of the form N(x_i, pa_i) –Parameter estimation: MLE θ̂_{x_i|pa_i} = N(x_i, pa_i) / N(pa_i); Bayesian (Dirichlet) θ̃_{x_i|pa_i} = (α(x_i, pa_i) + N(x_i, pa_i)) / (α(pa_i) + N(pa_i)) Bayesian methods also require a choice of priors Both MLE and Bayesian estimates are asymptotically equivalent and consistent Both can be implemented in an on-line manner by accumulating sufficient statistics

Learning Structure from Complete Data

Benefits of Learning Structure Efficient learning -- more accurate models with less data –Compare: P(A) and P(B) vs. joint P(A,B) Discover structural properties of the domain –Ordering of events –Relevance Identifying independencies  faster inference Predict effect of actions –Involves learning causal relationship among variables

Why Struggle for Accurate Structure? Adding an arc: –Increases the number of parameters to be fitted –Wrong assumptions about causality and domain structure Missing an arc: –Cannot be compensated for by accurate fitting of parameters –Also misses causality and domain structure (Figure: the Earthquake/Burglary/Alarm Set/Sound network with an added arc and with a missing arc)

Approaches to Learning Structure Constraint based –Perform tests of conditional independence –Search for a network that is consistent with the observed dependencies and independencies Pros & Cons  Intuitive, follows closely the construction of BNs  Separates structure learning from the form of the independence tests  Sensitive to errors in individual tests

Approaches to Learning Structure Score based –Define a score that evaluates how well the (in)dependencies in a structure match the observations –Search for a structure that maximizes the score Pros & Cons  Statistically motivated  Can make compromises  Takes the structure of conditional probabilities into account  Computationally hard

Likelihood Score for Structures First cut approach: –Use the likelihood function Recall, the likelihood score for a network structure and parameters is L(G, Θ_G : D) Since we know how to maximize the parameters, from now on we assume the maximum-likelihood parameters and score Score_L(G : D) = ℓ(θ̂_G : D)

Likelihood Score for Structure (cont.) Rearranging terms: (1/M) Score_L(G : D) = Σ_i I(X_i ; Pa_i) − Σ_i H(X_i), where H(X) is the (empirical) entropy of X and I(X;Y) is the (empirical) mutual information between X and Y –I(X;Y) measures how much “information” each variable provides about the other –I(X;Y) ≥ 0 –I(X;Y) = 0 iff X and Y are independent –I(X;Y) = H(X) iff X is totally predictable given Y
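Empirical mutual information, the per-arc quantity the likelihood score rewards, can be computed directly from paired samples; the function name and example data are illustrative.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))       # joint counts
    px, py = Counter(xs), Counter(ys)  # marginal counts
    # p(x,y)/(p(x)p(y)) simplifies to c*n / (count_x * count_y)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

xs = [0, 0, 1, 1]
print(mutual_information(xs, xs))            # 1.0 bit: fully dependent
print(mutual_information(xs, [0, 1, 0, 1]))  # 0.0: empirically independent
```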

Likelihood Score for Structure (cont.) Good news: Intuitive explanation of likelihood score: –The larger the dependency of each variable on its parents, the higher the score Likelihood as a compromise among dependencies, based on their strength

Likelihood Score for Structure (cont.) Bad news: Adding arcs always helps –I(X;Y) ≤ I(X;Y,Z) –The maximal score is attained by fully connected networks –Such networks can overfit the data --- the parameters capture the noise in the data

Avoiding Overfitting “Classic” issue in learning. Approaches: Restricting the hypotheses space –Limits the overfitting capability of the learner –Example: restrict # of parents or # of parameters Minimum description length –Description length measures complexity –Prefer models that compactly describes the training data Bayesian methods –Average over all possible parameter values –Use prior knowledge

Bayesian Inference Bayesian reasoning --- compute the expectation over the unknown structure G: P(x[M+1] | D) = Σ_G P(G | D) P(x[M+1] | G, D) Assumption: the Gs are mutually exclusive and exhaustive We know how to compute P(x[M+1]|G,D) –Same as prediction with fixed structure How do we compute P(G|D)?

Marginal likelihood Using Bayes' rule: P(G | D) = P(D | G) P(G) / P(D) (posterior score = marginal likelihood × prior over structures / probability of data) P(D) is the same for all structures G Can be ignored when comparing structures

Marginal Likelihood By introducing the parameters, we have P(D | G) = ∫ P(D | G, θ) P(θ | G) dθ (likelihood × prior over parameters) This integral measures sensitivity to the choice of parameters

Marginal Likelihood: Binomial case Assume we observe a sequence of coin tosses… By the chain rule we have: P(x[1], …, x[M]) = Π_m P(x[m] | x[1], …, x[m−1]) Recall that P(x[m+1] = H | x[1], …, x[m]) = (N_H^m + α_H) / (m + α_H + α_T), where N_H^m is the number of heads in the first m examples.

Marginal Likelihood: Binomials (cont.) We simplify this by using Γ(x + 1) = x·Γ(x) Thus P(x[1], …, x[M]) = [Γ(α_H + N_H)/Γ(α_H)] · [Γ(α_T + N_T)/Γ(α_T)] · [Γ(α_H + α_T)/Γ(α_H + α_T + M)]

Binomial Likelihood: Example Idealized experiment with a fixed P(H) Figure: (log P(D))/M as a function of M for Dirichlet(.5,.5), Dirichlet(1,1), and Dirichlet(5,5) priors

Marginal Likelihood: Example (cont.) Actual experiment with the same P(H) Figure: (log P(D))/M as a function of M for Dirichlet(.5,.5), Dirichlet(1,1), and Dirichlet(5,5) priors

Marginal Likelihood: Multinomials The same argument generalizes to multinomials with a Dirichlet prior P(θ) is Dirichlet with hyperparameters α_1, …, α_K D is a dataset with sufficient statistics N_1, …, N_K Then P(D) = [Γ(Σ_k α_k) / Γ(Σ_k (α_k + N_k))] · Π_k [Γ(α_k + N_k) / Γ(α_k)]
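The Gamma-function formula is easy to evaluate in log space with `math.lgamma`; the check at the bottom verifies it against the chain-rule computation for a tiny binomial dataset.

```python
from math import lgamma, exp

def log_marginal_likelihood(alpha, counts):
    """log P(D) for a Dirichlet-multinomial:
    P(D) = Gamma(a)/Gamma(a+N) * prod_k Gamma(a_k+N_k)/Gamma(a_k),
    where a = sum_k alpha_k and N = sum_k N_k."""
    a, n = sum(alpha), sum(counts)
    return (lgamma(a) - lgamma(a + n)
            + sum(lgamma(ak + nk) - lgamma(ak)
                  for ak, nk in zip(alpha, counts)))

# Binomial check with a Dirichlet(1,1) prior: one head, one tail.
# Chain rule: P(x1=H) = 1/2, P(x2=T | H) = 1/3, so P(D) = 1/6.
print(exp(log_marginal_likelihood([1, 1], [1, 1])))  # 0.1666...
```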

Marginal Likelihood: Bayesian Networks Data: X = H,T,T,H,T,H,H and Y = H,T,H,H,T,T,H The network structure determines the form of the marginal likelihood Network 1 (X and Y disconnected): two Dirichlet marginal likelihoods, P(X[1],…,X[7]) and P(Y[1],…,Y[7])

Marginal Likelihood: Bayesian Networks Data: X = H,T,T,H,T,H,H and Y = H,T,H,H,T,T,H Network 2 (X → Y): three Dirichlet marginal likelihoods: P(X[1],…,X[7]); P(Y[1],Y[4],Y[6],Y[7]) (the cases where X = H); and P(Y[2],Y[3],Y[5]) (the cases where X = T)

Idealized Experiment P(X = H) = 0.5; P(Y = H|X = H) and P(Y = H|X = T) are set according to a dependence parameter p Figure: (log P(D))/M as a function of M for the independent case and for p = 0.05, 0.10, 0.15, 0.20

Marginal Likelihood for General Networks The marginal likelihood has the form: P(D | G) = Π_i Π_{pa_i} [Γ(α(pa_i)) / Γ(α(pa_i) + N(pa_i))] · Π_{x_i} [Γ(α(x_i, pa_i) + N(x_i, pa_i)) / Γ(α(x_i, pa_i))] – a Dirichlet marginal likelihood for the sequence of values of X_i when X_i’s parents take a particular value where the N(..) are the counts from the data and the α(..) are the hyperparameters for each family, given G

Priors We need: prior counts  (..) for each network structure G This can be a formidable task –There are exponentially many structures…

BDe Score Possible solution: The BDe prior Represent prior using two elements M 0, B 0 –M 0 - equivalent sample size –B 0 - network representing the prior probability of events

BDe Score Intuition: M_0 prior examples distributed by B_0 Set α(x_i, pa_i^G) = M_0 · P(x_i, pa_i^G | B_0) –Note that pa_i^G are not necessarily the same as the parents of X_i in B_0 –Compute P(x_i, pa_i^G | B_0) using standard inference procedures Such priors have desirable theoretical properties –Equivalent networks are assigned the same score

Bayesian Score: Asymptotic Behavior Theorem: If the prior P(θ|G) is “well-behaved”, then log P(D | G) = ℓ(θ̂_G : D) − (log M / 2) dim(G) + O(1)

Asymptotic Behavior: Consequences The Bayesian score is consistent –As M → ∞, the “true” structure G* maximizes the score (almost surely) –For sufficiently large M, the maximal scoring structures are equivalent to G* Observed data eventually overrides prior information –Assuming that the prior assigns positive probability to all cases

Asymptotic Behavior This score can also be justified by the Minimal Description Length (MDL) principle This equation explicitly shows the tradeoff between –Fitness to data --- likelihood term –Penalty for complexity --- regularization term

Scores -- Summary Likelihood, MDL, and (log) BDe scores all have the decomposable form Score(G : D) = Σ_i FamScore(X_i, Pa_i : D) BDe requires assessing a prior network; it can naturally incorporate prior knowledge and previous experience BDe is consistent and asymptotically equivalent (up to a constant) to MDL All are score-equivalent –G equivalent to G’ ⇒ Score(G) = Score(G’)

Optimization Problem Input: –Training data –Scoring function (including priors, if needed) –Set of possible structures Including prior knowledge about structure Output: –A network (or networks) that maximize the score Key Property: –Decomposability: the score of a network is a sum of terms.

Learning Trees Trees: –At most one parent per variable Why trees? –Elegant math  we can solve the optimization problem efficiently (with a greedy algorithm) –Sparse parameterization  avoid overfitting while adapting to the data

Learning Trees (cont.) Let p(i) denote the parent of X_i, or 0 if X_i has no parents We can write the score as Score(G : D) = Σ_{i : p(i) > 0} [Score(X_i | X_{p(i)}) − Score(X_i)] + Σ_i Score(X_i) = sum of edge scores + constant, where the constant is the score of the “empty” network and each edge score is the improvement over the “empty” network

Learning Trees (cont.) Algorithm: Construct a graph with vertices 1, 2, … Set w(i → j) to Score(X_j | X_i) − Score(X_j) Find a tree (or forest) with maximal weight –This can be done using standard algorithms in low-order polynomial time, building the tree greedily (Kruskal’s maximum-spanning-tree algorithm) Theorem: This procedure finds the tree with maximal score When the score is the likelihood, w(i → j) is proportional to I(X_i ; X_j); this is known as the Chow & Liu method
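The Chow & Liu procedure can be sketched with empirical mutual information as edge weights and a Kruskal-style maximum spanning tree over a union-find structure; the data layout and names are illustrative assumptions.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def chow_liu(columns):
    """columns: dict var -> list of samples. Returns undirected tree edges."""
    parent = {v: v for v in columns}   # union-find forest

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    names = list(columns)
    # weight every candidate edge, heaviest first
    edges = sorted(((mutual_information(columns[u], columns[v]), u, v)
                    for i, u in enumerate(names) for v in names[i + 1:]),
                   reverse=True)
    tree = []
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                   # add edge only if it creates no cycle
            parent[ru] = rv
            tree.append((u, v))
    return tree

# B copies A while C is unrelated noise, so the tree must link A and B.
data = {'A': [0, 0, 1, 1], 'B': [0, 0, 1, 1], 'C': [0, 1, 0, 1]}
tree = chow_liu(data)
```

Note that the learned tree is undirected: as the slides observe, edge direction cannot be recovered from the score.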

Learning Trees: Example Tree learned from ALARM data (figure: the learned tree over the ALARM variables, contrasting correct arcs with spurious arcs) Not every edge in the tree is in the original network Tree edge direction is arbitrary --- we can’t learn about arc direction

Difficulty Theorem: Finding the maximal scoring network structure with at most k parents for each variable is NP-hard for k > 1

Heuristic Search We address the problem by using heuristic search Define a search space: –nodes are possible structures –edges denote adjacency of structures Traverse this space looking for high-scoring structures Search techniques: –Greedy hill-climbing –Best first search –Simulated Annealing –...

Heuristic Search (cont.) Typical operations on a structure: add an arc (e.g., C → D), delete an arc (e.g., C → E), or reverse an arc (e.g., C → E becomes E → C)

Exploiting Decomposability in Local Search Caching: To update the score of a network after a local change, we only need to re-score the families that were changed in the last move

Greedy Hill-Climbing Simplest heuristic local search –Start with a given network: the empty network, the best tree, or a random network –At each iteration: Evaluate all possible changes Apply the change that leads to the best improvement in score Reiterate –Stop when no modification improves the score Each step requires evaluating approximately n new changes
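The loop above can be sketched as a generic hill-climber over edge sets. The `score` function and the move generator are placeholders; a real implementation would plug in a decomposable network score, add arc-reversal moves, and check acyclicity.

```python
from itertools import permutations

def neighbors(edges, variables):
    """All structures one edge-addition or edge-deletion away."""
    for u, v in permutations(variables, 2):
        e = (u, v)
        yield edges - {e} if e in edges else edges | {e}

def hill_climb(variables, score, start=frozenset()):
    """Greedily apply the best-scoring single-edge change until stuck."""
    current, current_score = start, score(start)
    while True:
        best, best_score = current, current_score
        for cand in neighbors(current, variables):
            s = score(cand)
            if s > best_score:
                best, best_score = cand, s
        if best == current:      # no modification improves the score
            return current, current_score
        current, current_score = best, best_score

# Toy score that rewards exactly the edge ('A','B') and penalizes size.
toy = lambda e: (10 if ('A', 'B') in e else 0) - len(e)
structure, s = hill_climb(['A', 'B', 'C'], toy)
```

With a decomposable score, each candidate change would be scored incrementally by re-scoring only the affected family, as the caching slide describes.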

Greedy Hill-Climbing: Possible Pitfalls Greedy hill-climbing can get stuck in: –Local maxima: all one-edge changes reduce the score –Plateaus: some one-edge changes leave the score unchanged This happens because equivalent networks receive the same score and are neighbors in the search space Both occur during structure search Standard heuristics can escape both –Random restarts –TABU search

Equivalence Class Search Idea: Search the space of equivalence classes Equivalence classes can be represented by PDAGs (partially directed acyclic graphs) Benefits: The space of PDAGs has fewer local maxima and plateaus There are fewer PDAGs than DAGs

Equivalence Class Search (cont.) Evaluating changes is more expensive These algorithms are more complex to implement (Figure: adding the edge Y---Z to a PDAG over X, Y, Z; the new PDAG is scored via a consistent DAG)

Learning in Practice: ALARM domain Figure: KL divergence as a function of M, comparing learning with the true structure (BDe, M' = 10) against learning with unknown structure (BDe, M' = 10)

Model Selection So far, we focused on single model –Find best scoring model –Use it to predict next example Implicit assumption: –Best scoring model dominates the weighted sum Pros: –We get a single structure –Allows for efficient use in our tasks Cons: –We are committing to the independencies of a particular structure –Other structures might be as probable given the data

Model Averaging Recall, Bayesian analysis started with P(x[M+1] | D) = Σ_G P(G | D) P(x[M+1] | G, D) –This requires us to average over all possible models

Model Averaging (cont.) Full averaging –Sum over all structures –Usually intractable --- there are exponentially many structures Approximate averaging –Find the K largest scoring structures –Approximate the sum by averaging over their predictions –The weight of each structure is determined by the Bayes factor (the actual score we compute)

Search: Summary Discrete optimization problem In general, NP-Hard –Need to resort to heuristic search –In practice, search is relatively fast (~100 vars in ~10 min): Decomposability Sufficient statistics In some cases, we can reduce the search problem to an easy optimization problem –Example: learning trees