Learning In Bayesian Networks

General Learning Problem
- Set of random variables X = {X_1, X_2, X_3, X_4, …}
- Training set D = {X^(1), X^(2), …, X^(N)}
  - Each observation specifies values of a subset of the variables
  - X^(1) = {x_1, x_2, ?, x_4, …}
  - X^(2) = {x_1, x_2, x_3, x_4, …}
  - X^(3) = {?, x_2, x_3, x_4, …}
- Goal
  - Predict the joint distribution over some variables given other variables
  - E.g., P(X_1, X_3 | X_2, X_4)

Classes Of Graphical Model Learning Problems
- Network structure known, all variables observed (today and next class)
- Network structure known, some missing data or latent variables (today and next class)
- Network structure not known, all variables observed (going to skip: not too relevant for the papers we'll read; see optional readings for more info)
- Network structure not known, some missing data or latent variables (going to skip)

If Network Structure Is Known, The Problem Involves Learning Distributions
- The distributions are characterized by parameters θ.
- Goal: infer the parameters θ that best explain the data D.
- Approaches:
  - Bayesian methods
  - Maximum a posteriori
  - Maximum likelihood
  - Max. pseudo-likelihood
  - Moment matching
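As a quick reference (standard definitions, added here for clarity rather than taken from the slide), the maximum likelihood and maximum a posteriori criteria, and the full Bayesian posterior, can be written as:

```latex
\hat{\theta}_{\mathrm{ML}}  = \arg\max_{\theta}\; p(D \mid \theta),
\qquad
\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta}\; p(D \mid \theta)\, p(\theta),
\qquad
\text{Bayesian: } \; p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)} .
```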

Learning CPDs When All Variables Are Observed And Network Structure Is Known
- Trivial problem if we're doing Maximum Likelihood estimation (see the sketch after this slide)
- (Figure: network X → Z ← Y, with empty tables P(X), P(Y), and P(Z | X, Y) for the four parent configurations (X, Y) = 00, 01, 10, 11, to be filled in from a table of training data over X, Y, Z.)
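A minimal sketch of why this is "trivial": with complete data, the maximum-likelihood CPT entries are just empirical frequencies. The variable names and the toy observations below are made up for illustration.

```python
from collections import Counter

# Toy complete data for the network X -> Z <- Y: each row is (x, y, z).
# These particular observations are invented for illustration only.
data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1), (0, 0, 0), (1, 1, 0)]

n = len(data)
x_counts = Counter(x for x, _, _ in data)
y_counts = Counter(y for _, y, _ in data)
xy_counts = Counter((x, y) for x, y, _ in data)
xyz_counts = Counter(data)

# Maximum-likelihood estimates are relative frequencies (counts / totals).
p_x = {x: c / n for x, c in x_counts.items()}
p_y = {y: c / n for y, c in y_counts.items()}
p_z_given_xy = {
    (x, y, z): c / xy_counts[(x, y)]
    for (x, y, z), c in xyz_counts.items()
}

print(p_x)            # e.g., P(X=0), P(X=1)
print(p_z_given_xy)   # e.g., P(Z=1 | X=1, Y=1)
```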

Recasting Learning As Bayesian Inference
- We've already used Bayesian inference in probabilistic models to compute posteriors on latent (a.k.a. hidden, nonobservable) variables from data.
  - E.g., Weiss model: direction of motion
  - E.g., Gaussian mixture model: to which cluster does each data point belong
- Why not treat unknown parameters in the same way?
  - E.g., Gaussian mixture parameters
  - E.g., entries in conditional probability tables

Recasting Learning As Inference
- Suppose you have a coin with an unknown bias, θ ≡ P(head).
- You flip the coin multiple times and observe the outcomes.
- From the observations, you can infer the bias of the coin.
- This is learning; it is also inference.
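Written out (the standard Bernoulli-likelihood form, added for clarity; the slide itself only sketches the idea), inferring the bias is just Bayes' rule applied to the parameter:

```latex
p(\theta \mid D)
\;=\; \frac{p(D \mid \theta)\, p(\theta)}{p(D)}
\;\propto\; \theta^{N_h}\,(1-\theta)^{N_t}\; p(\theta),
```

where N_h and N_t are the numbers of observed heads and tails; with a Beta prior on θ, the posterior is again a Beta distribution.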

Treating Conditional Probabilities As Latent Variables
- Graphical model probabilities (priors, conditional distributions) can also be cast as random variables
  - E.g., Gaussian mixture model
- Remove the knowledge "built into" the links (conditional distributions) and into the nodes (prior distributions).
- Create new random variables to represent the knowledge
  - Hierarchical Bayesian Inference
- (Figure: small graphical models over z, λ, and x, showing the parameter λ promoted to an explicit node in the graph.)

Slides stolen from David Heckerman tutorial
- I'm going to flip a coin with unknown bias. Depending on whether it comes up heads or tails, I will flip one of two other coins, each with an unknown bias.
- X: outcome of first flip
- Y: outcome of second flip

(Figure: the coin-flipping Bayes net unrolled over the data, with the unknown biases θ_X and θ_Y|X drawn as parent nodes shared by training example 1 and training example 2.)
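A compact way to read that figure (the standard parameter-independence factorization, written out here for clarity rather than copied from the slide): the joint over the three unknown biases and the training examples factors as

```latex
p\!\left(\theta_X,\, \theta_{Y\mid X=h},\, \theta_{Y\mid X=t},\, D\right)
= p(\theta_X)\; p(\theta_{Y\mid X=h})\; p(\theta_{Y\mid X=t})
  \prod_{n=1}^{N} p\!\left(x^{(n)} \mid \theta_X\right)\,
  p\!\left(y^{(n)} \mid x^{(n)},\, \theta_{Y\mid X=h},\, \theta_{Y\mid X=t}\right),
```

with N = 2 for the two examples shown. The next slide relaxes the assumption that the parameters are a priori independent.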

Parameters might not be independent
- (Figure: the same network as the previous slide, but with dependencies added among the parameter nodes, again shared across training example 1 and training example 2.)

General Approach: Bayesian Learning of Probabilities in a Bayes Net
- If network structure S^h is known and there is no missing data…
- We can express the joint distribution over variables X in terms of the model parameter vector θ_s
- Given a random sample D = {X^(1), X^(2), …, X^(N)}, compute the posterior distribution p(θ_s | D, S^h)
- Probabilistic formulation of all supervised and unsupervised learning problems.
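In symbols (a standard statement of the approach, assuming i.i.d. complete observations; added for clarity, not copied from the slide):

```latex
p(\theta_s \mid D, S^h)
= \frac{p(D \mid \theta_s, S^h)\, p(\theta_s \mid S^h)}{p(D \mid S^h)},
\qquad
p(D \mid \theta_s, S^h) = \prod_{n=1}^{N} p\!\left(X^{(n)} \mid \theta_s, S^h\right).
```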

Computing Parameter Posteriors
- E.g., net structure X → Y

Computing Parameter Posteriors
- Given complete data (all X, Y observed) and no direct dependencies among the parameters, the parameter posterior factorizes across the parameter sets (parameter independence).
- Explanation: given complete data, each set of parameters is d-separated from every other set of parameters in the graph.
- (Figure: network with parameter nodes θ_x → X and θ_y|x → Y over the data D; d-separation ⇒ parameter independence.)
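For the X → Y example, the factorization asserted here is (standard form, reconstructed rather than copied from the slide):

```latex
p\!\left(\theta_x,\, \theta_{y\mid x} \mid D, S^h\right)
\;=\; p\!\left(\theta_x \mid D, S^h\right)\;
      p\!\left(\theta_{y\mid x} \mid D, S^h\right),
```

and with discrete X, θ_{y|x} itself further decomposes into one independent block per value of x.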

Posterior Predictive Distribution
- Given parameter posteriors (what we just discussed), what is the prediction for the next observation X^(N+1)?
- How can this be used for unsupervised and supervised learning?
- What does this formula look like if we're doing approximate inference by sampling θ_s and/or X^(N+1) (what we talked about in the past three classes)?
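The formula being referred to, together with its sampling approximation (standard forms, added here because the slide's equation did not survive transcription):

```latex
p\!\left(X^{(N+1)} \mid D, S^h\right)
= \int p\!\left(X^{(N+1)} \mid \theta_s, S^h\right)\, p\!\left(\theta_s \mid D, S^h\right)\, d\theta_s
\;\approx\; \frac{1}{M} \sum_{m=1}^{M} p\!\left(X^{(N+1)} \mid \theta_s^{(m)}, S^h\right),
\qquad \theta_s^{(m)} \sim p(\theta_s \mid D, S^h).
```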

Prediction Directly From Data
- In many cases, prediction can be made without explicitly computing posteriors over the parameters.
- E.g., coin toss example from an earlier class: with a Beta(α_h, α_t) prior on the bias θ, the posterior after observing N_h heads and N_t tails is Beta(α_h + N_h, α_t + N_t).
- Prediction of the next coin outcome: P(next flip = heads | D) = (α_h + N_h) / (α_h + α_t + N_h + N_t), computed directly from the counts (see the sketch below).
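A minimal sketch of prediction directly from counts, assuming the Beta-Bernoulli setup above; the prior pseudo-counts and the example data are made up for illustration.

```python
def predict_next_heads(n_heads, n_tails, alpha_h=1.0, alpha_t=1.0):
    """Posterior predictive P(next flip = heads | data) under a Beta(alpha_h, alpha_t) prior.

    No explicit posterior object is needed: the prediction is a ratio of
    (pseudo-)counts, which is why we can "predict directly from data".
    """
    return (alpha_h + n_heads) / (alpha_h + alpha_t + n_heads + n_tails)

# Example: 7 heads and 3 tails observed, uniform Beta(1, 1) prior.
print(predict_next_heads(7, 3))  # 8 / 12 ≈ 0.667
```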

Terminology Digression
- Bernoulli : Binomial :: Categorical : Multinomial
  - Bernoulli: single coin flip
  - Binomial: N flips of a coin (e.g., exam score)
  - Categorical: single roll of a die
  - Multinomial: N rolls of a die (e.g., # of each major)

Generalizing To Categorical RVs In Bayes Net
- Variable X_i is discrete, with values x_i^1, …, x_i^{r_i}
- i: index of the categorical RV
- j: index over configurations of the parents of node i
- k: index over values of node i
- Unrestricted distribution: one parameter per probability, θ_ijk = P(X_i = x_i^k | parent configuration j)
- (Figure: node X_i with parents X_a and X_b; note the notation change.)

Prediction Directly From Data: Categorical Random Variables
- Prior distribution: Dirichlet, p(θ_ij) = Dir(α_ij1, …, α_ijr_i)
- Posterior distribution: Dir(α_ij1 + N_ij1, …, α_ijr_i + N_ijr_i), where N_ijk is the number of cases in D with X_i = x_i^k and parent configuration j
- Posterior predictive distribution: P(X_i = x_i^k | parent configuration j, D) = (α_ijk + N_ijk) / Σ_k′ (α_ijk′ + N_ijk′)
- i: index over nodes; j: index over values of the parents of node i; k: index over values of node i
- (A code sketch of this computation follows.)
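A sketch of the same computation in code, assuming the Dirichlet counts above; the node/parent encoding, the symmetric pseudo-count, and the toy data are illustrative choices, not from the slides.

```python
from collections import Counter

def dirichlet_cpt(data, node, parents, node_values, alpha=1.0):
    """Posterior predictive CPT P(node = k | parents = j, D) with a symmetric
    Dirichlet(alpha) prior on each parent configuration's distribution.

    data: list of dicts mapping variable names to observed values.
    """
    joint = Counter((tuple(row[p] for p in parents), row[node]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)

    cpt = {}
    for j in parent_counts:  # only parent configurations seen in the data, for brevity
        denom = parent_counts[j] + alpha * len(node_values)
        for k in node_values:
            cpt[(j, k)] = (joint[(j, k)] + alpha) / denom
    return cpt

# Illustrative use: X, Y -> Z with binary variables.
data = [{"X": 0, "Y": 1, "Z": 1}, {"X": 0, "Y": 1, "Z": 0}, {"X": 1, "Y": 0, "Z": 1}]
print(dirichlet_cpt(data, node="Z", parents=["X", "Y"], node_values=[0, 1]))
```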

Other Easy Cases
- Members of the exponential family (see Barber text 8.5)
- Linear regression with Gaussian noise (see Barber text 18.1)
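Why these are "easy": both admit conjugate priors, so the posterior stays in the prior's family and updates in closed form. As a reminder (standard result, not from the slides), for an exponential-family likelihood and its conjugate prior,

```latex
p(x \mid \theta) = h(x)\, \exp\!\left(\theta^{\top} T(x) - A(\theta)\right),
\qquad
p(\theta \mid \tau, \nu) \propto \exp\!\left(\theta^{\top}\tau - \nu\, A(\theta)\right),
```

the posterior after observing x^(1), …, x^(N) has the same form with τ → τ + Σ_n T(x^(n)) and ν → ν + N.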