BN Semantics 3 – Now it’s personal! Parameter Learning 1

Presentation transcript:

BN Semantics 3 – Now it’s personal! Parameter Learning 1 Graphical Models – 10-708 Carlos Guestrin Carnegie Mellon University September 22nd, 2006 Readings: K&F: 3.4, 14.1, 14.2

Building BNs from independence properties
From d-separation we learned:
- Start from the local Markov assumptions, obtain all independence assumptions encoded by the graph
- For most P’s that factorize over G, I(G) = I(P)
- All of this discussion was for a given G that is an I-map for P
Now, given a P (i.e., given the independence assumptions entailed by P), how can I get a G?
- Many G’s are “equivalent”; how do I represent this?
- Most of this discussion is not about practical algorithms, but about useful concepts that practical algorithms will build on (practical algorithms next week)

Minimal I-maps
One option: G is an I-map for P, and G is as simple as possible.
G is a minimal I-map for P if deleting any edge from G makes it no longer an I-map.

Obtaining a minimal I-map
Example variables: Flu, Allergy, SinusInfection, Headache.
Given a set of variables and conditional independence assumptions:
- Choose an ordering on the variables, e.g., X1, …, Xn
- For i = 1 to n:
  - Add Xi to the network
  - Define the parents of Xi, PaXi, as the minimal subset of {X1, …, Xi-1} such that the local Markov assumption holds – Xi independent of the rest of {X1, …, Xi-1} given PaXi
  - Define/learn the CPT – P(Xi | PaXi)
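
A minimal sketch of this construction, assuming a hypothetical independence oracle is_independent(X, others, given) that answers whether X is independent of the variables in `others` given the set `given` under P; the oracle interface and names are illustrative, not from the slides.

```python
# Sketch of the minimal-I-map construction for a fixed variable ordering.
# `is_independent` is an assumed oracle over the distribution P.
from itertools import combinations

def minimal_i_map(variables, is_independent):
    """For each Xi, pick a smallest subset of the predecessors {X1,...,Xi-1}
    under which Xi is independent of the remaining predecessors (the local
    Markov assumption); a smallest such set is also inclusion-minimal, so
    no parent edge can be deleted."""
    parents = {}
    for i, xi in enumerate(variables):
        preds = variables[:i]
        chosen = list(preds)  # all predecessors always satisfy the assumption
        for size in range(len(preds) + 1):
            found = None
            for cand in combinations(preds, size):
                rest = [v for v in preds if v not in cand]
                if not rest or is_independent(xi, rest, list(cand)):
                    found = list(cand)
                    break
            if found is not None:
                chosen = found
                break
        parents[xi] = chosen  # edges PaXi -> Xi; CPT P(Xi | PaXi) defined/learned next
    return parents
```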

Minimal I-map not unique (or minimal)
Same example (Flu, Allergy, SinusInfection, Headache) and the same procedure as above, but with a different variable ordering X1, …, Xn: adding each Xi, picking the minimal parent set among its predecessors, and defining/learning the CPT P(Xi | PaXi) can yield a different minimal I-map, possibly with more edges.

Perfect maps (P-maps)
I-maps are not unique and often not simple enough. Define the “simplest” G that is an I-map for P: a BN structure G is a perfect map for a distribution P if I(P) = I(G).
Our goal: find a perfect map! Must address equivalent BNs.

Inexistence of P-maps 1 XOR (this is a hint for the homework)

Inexistence of P-maps 2 (Slightly un-PC) swinging couples example

Obtaining a P-map
Given the independence assertions that are true for P, assume that there exists a perfect map G*; we want to find G*.
Many structures may encode the same independencies as G*, so when are we done? Find all equivalent structures simultaneously!

I-Equivalence
Two graphs G1 and G2 are I-equivalent if I(G1) = I(G2).
I-equivalence defines equivalence classes of BN structures: a mutually exclusive and exhaustive partition of graphs.
How do we characterize these equivalence classes?

Skeleton of a BN
The skeleton of a BN structure G is an undirected graph over the same variables that has an edge X–Y for every X→Y or Y→X in G.
(Little) Lemma: Two I-equivalent BN structures must have the same skeleton.
[Example figure: a BN over nodes A through K and its skeleton.]

What about V-structures?
[Example figure over nodes B, D, F, G, I, J, K.]
V-structures are a key property of a BN structure.
Theorem: If G1 and G2 have the same skeleton and the same V-structures, then G1 and G2 are I-equivalent.

Same V-structures not necessary
Theorem (restated): If G1 and G2 have the same skeleton and the same V-structures, then G1 and G2 are I-equivalent.
Though sufficient, having the same V-structures is not necessary for I-equivalence.

Immoralities & I-Equivalence Key concept not V-structures, but “immoralities” (unmarried parents ) X ! Z Ã Y, with no arrow between X and Y Important pattern: X and Y independent given their parents, but not given Z (If edge exists between X and Y, we have covered the V-structure) Theorem: G1 and G2 have the same skeleton and immoralities if and only if G1 and G2 are I-equivalent 10-708 – Carlos Guestrin 2006

Obtaining a P-map
Given the independence assertions that are true for P:
- Obtain the skeleton
- Obtain the immoralities
- From the skeleton and immoralities, obtain every (and any) BN structure from the equivalence class

Identifying the skeleton 1 When is there an edge between X and Y? When is there no edge between X and Y?

Identifying the skeleton 2
Assume d is the maximum number of parents (d could be as large as n).
For each pair Xi, Xj:
  Eij ← true
  For each U ⊆ X – {Xi, Xj} with |U| ≤ 2d:
    Is (Xi ⊥ Xj | U)? If so, Eij ← false
  If Eij is still true, add edge Xi – Xj to the skeleton
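
A minimal sketch of this procedure, reusing the same hypothetical independence oracle as before (here is_independent(X, Y, U) returns True when X and Y are independent given the set U in P); names are illustrative.

```python
from itertools import combinations

def identify_skeleton(variables, is_independent, d):
    """Return the undirected edge set {Xi, Xj} of the skeleton.

    An edge is kept only if Xi and Xj are dependent given every candidate
    separating set U of size at most 2d (d = max number of parents)."""
    skel = set()
    for xi, xj in combinations(variables, 2):
        others = [v for v in variables if v not in (xi, xj)]
        e_ij = True
        for size in range(min(2 * d, len(others)) + 1):
            for u in combinations(others, size):
                if is_independent(xi, xj, set(u)):
                    e_ij = False  # found a separating set: no edge
                    break
            if not e_ij:
                break
        if e_ij:
            skel.add(frozenset((xi, xj)))
    return skel
```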

Identifying immoralities
Consider X – Z – Y in the skeleton: when should it be an immorality?
Must be X → Z ← Y (immorality): when X and Y are never independent given U, for any U with Z ∈ U.
Must not be X → Z ← Y (not an immorality): when there exists a U with Z ∈ U such that X and Y are independent given U.
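
A minimal sketch of this test for a single X – Z – Y path in the skeleton, using the same hypothetical oracle; `candidate_sets` (the conditioning sets U to examine) is an illustrative parameter, not from the slides.

```python
def is_immorality(x, z, y, candidate_sets, is_independent):
    """Return True if X - Z - Y should be oriented as X -> Z <- Y.

    It is an immorality exactly when X and Y are never independent given
    any conditioning set U that contains Z."""
    for u in candidate_sets:
        if z in u and is_independent(x, y, u):
            return False  # some separating set contains Z: not an immorality
    return True
```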

From immoralities and skeleton to BN structures
Represent the BN equivalence class as a partially directed acyclic graph (PDAG).
Immoralities force the direction of other BN edges.
The full (polynomial-time) procedure is described in the reading.

What you need to know
- Minimal I-map: every P has one, but usually many
- Perfect map: a better choice for BN structure; not every P has one; can find one (if it exists) by considering I-equivalence
- Two structures are I-equivalent if they have the same skeleton and the same immoralities

Announcements
I’ll lead a special discussion session today, 2-3pm in NSH 1507, to talk about the homework, especially the programming question.

Review
Bayesian Networks: a compact representation for probability distributions; exponential reduction in the number of parameters; exploits independencies.
Next – learn BNs: parameters and structure.
[Example figure: the Flu / Allergy / Sinus / Headache / Nose network.]

Thumbtack – Binomial Distribution
P(Heads) = θ, P(Tails) = 1 - θ.
Flips are i.i.d.: independent events, identically distributed according to the binomial distribution.
Sequence D of H heads and T tails.
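
The likelihood equation on this slide is missing from the transcript; a standard reconstruction in the slide’s notation (my assumption, not copied verbatim) is:

```latex
P(\mathcal{D} \mid \theta)
  = \prod_{j=1}^{m} P\big(x^{(j)} \mid \theta\big)
  = \theta^{H}\,(1-\theta)^{T}
```

where H and T are the numbers of heads and tails observed in D.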

Maximum Likelihood Estimation
Data: observed set D of H heads and T tails.
Hypothesis: binomial distribution.
Learning θ is an optimization problem. What’s the objective function?
MLE: choose θ that maximizes the probability of the observed data:
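
The objective itself did not survive the transcript; a standard reconstruction (assumed, not copied verbatim) is:

```latex
\hat{\theta}
  = \arg\max_{\theta} P(\mathcal{D} \mid \theta)
  = \arg\max_{\theta} \ln P(\mathcal{D} \mid \theta)
  = \arg\max_{\theta} \big[\, H \ln \theta + T \ln (1-\theta) \,\big]
```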

Your first learning algorithm
Set the derivative to zero:
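
The derivation on the slide is missing from the transcript; a standard reconstruction (assumed, not copied verbatim) is:

```latex
\frac{d}{d\theta} \ln P(\mathcal{D} \mid \theta)
  = \frac{d}{d\theta}\big[ H \ln\theta + T \ln(1-\theta) \big]
  = \frac{H}{\theta} - \frac{T}{1-\theta} = 0
  \;\;\Longrightarrow\;\;
  \hat{\theta}_{\mathrm{MLE}} = \frac{H}{H + T}
```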

Learning Bayes nets
Two axes of the learning problem: known structure vs. unknown structure, and fully observable data vs. missing data.
From data x(1), …, x(m) we learn the structure and the parameters, i.e., the CPTs P(Xi | PaXi).

Learning the CPTs
Given data x(1), …, x(m), for each discrete variable Xi:

Learning the CPTs (continued)
Given data x(1), …, x(m), for each discrete variable Xi: WHY? (why is this the right estimate? the MLE derivation on the next slides answers this)
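
A minimal sketch of the counting estimate these slides build toward (the count ratio justified as the MLE on the slides that follow). The data layout and names are illustrative, not from the slides.

```python
# Maximum-likelihood CPT estimation by counting.
# `data` is a list of samples x(j), each a dict {variable: value};
# `parents` maps each variable to the tuple of its parent variables.
from collections import Counter

def learn_cpts(data, parents):
    """Return {Xi: {(pa_assignment, xi_value): P_hat(xi_value | pa_assignment)}}."""
    cpts = {}
    for xi, pa in parents.items():
        joint = Counter()     # counts of (parent assignment, Xi value)
        marginal = Counter()  # counts of parent assignment alone
        for sample in data:
            u = tuple(sample[p] for p in pa)
            joint[(u, sample[xi])] += 1
            marginal[u] += 1
        cpts[xi] = {(u, x): c / marginal[u] for (u, x), c in joint.items()}
    return cpts

# Tiny made-up example with a Flu -> Sinus fragment of the running network:
data = [{"Flu": 1, "Sinus": 1}, {"Flu": 1, "Sinus": 0}, {"Flu": 0, "Sinus": 0}]
print(learn_cpts(data, {"Flu": (), "Sinus": ("Flu",)}))
```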

Maximum likelihood estimation (MLE) of BN parameters – example
[Example figure: the Flu / Allergy / Sinus / Headache / Nose network.]
Given the structure, the log likelihood of the data:

Maximum likelihood estimation (MLE) of BN parameters – General case
Data: x(1), …, x(m).
Restriction: x(j)[PaXi] → the assignment to PaXi in x(j).
Given the structure, the log likelihood of the data:
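
The formula itself was an image and is missing here; a standard reconstruction in the notation above (assumed, not copied verbatim) is:

```latex
\log P(\mathcal{D} \mid \theta, G)
  = \sum_{j=1}^{m} \log P\big(\mathbf{x}^{(j)} \mid \theta, G\big)
  = \sum_{j=1}^{m} \sum_{i=1}^{n}
      \log P\!\left(x^{(j)}[X_i] \,\middle|\, x^{(j)}[\mathrm{Pa}_{X_i}], \theta\right)
  = \sum_{i=1}^{n} \sum_{j=1}^{m}
      \log P\!\left(x^{(j)}[X_i] \,\middle|\, x^{(j)}[\mathrm{Pa}_{X_i}], \theta\right)
```

so the log likelihood decomposes into one independent term per CPT.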

Taking derivatives of MLE of BN parameters – General case

General MLE for a CPT
Take a CPT: P(X | U).
Log likelihood term for this CPT, with parameter θ_{X=x|U=u}:
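
The slide’s equations are missing from the transcript; a standard reconstruction of the CPT term and its maximizer (assumed, not copied verbatim) is:

```latex
\sum_{j=1}^{m} \log P\!\left(x^{(j)}[X] \,\middle|\, x^{(j)}[U]\right)
  = \sum_{u} \sum_{x} \mathrm{Count}(X = x, U = u)\, \log \theta_{x|u},
\qquad
\hat{\theta}_{x|u} = \frac{\mathrm{Count}(X = x, U = u)}{\mathrm{Count}(U = u)}
```

obtained by maximizing subject to the constraint that \(\sum_{x} \theta_{x|u} = 1\) for each u.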

Parameter sharing (basics now, more later in the semester)
Suppose we want to model customers’ ratings for books. You know:
- features of customers, e.g., age, gender, income, …
- features of books, e.g., genre, awards, # of pages, has pictures, …
- ratings: each user rates a few books
A simple BN:

Using the recommender system
Answer a probabilistic question:

Learning parameters of recommender system BN How many parameters do I have to learn? How many samples do I have?

Parameter sharing for recommender system BN
Use the same parameters in many CPTs.
How many parameters do I have to learn? How many samples do I have?

MLE with simple parameter sharing
Estimating θ (each shared parameter; worked on the slide case by case):
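
The worked estimates are missing from the transcript. As a hedged sketch of the general rule: when a parameter is shared across several CPTs, its MLE pools the counts from every CPT that uses it:

```latex
\hat{\theta}_{x|u}
  = \frac{\sum_{c \,\in\, \text{CPTs sharing } \theta_{x|u}} \mathrm{Count}_c(X = x, U = u)}
         {\sum_{c \,\in\, \text{CPTs sharing } \theta_{x|u}} \mathrm{Count}_c(U = u)}
```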

What you need to know about learning BNs thus far
- Maximum likelihood estimation: decomposition of the score; computing CPTs
- Simple parameter sharing: why share parameters? computing the MLE for shared parameters