Bayes Net Learning: Bayesian Approaches

Presentation transcript:

Bayes Net Learning: Bayesian Approaches. Oliver Schulte, Machine Learning 726.

The Parameter Learning Problem. Input: a data table X (N x D), with one column per node (random variable) and one row per instance. How do we fill in the Bayes net parameters? [Table: the 14-day PlayTennis dataset with columns Day, Outlook, Temperature, Humidity, Wind, PlayTennis; e.g. Day 1 = sunny, hot, high, weak, no.] What is N? What is D? PlayTennis: do you play tennis Saturday morning? For now we assume complete data; incomplete data is covered another day (EM).
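To make the input concrete, here is a minimal sketch of the data table as a list of rows; the specific values are the standard textbook PlayTennis entries, included only for illustration (the representation itself is an assumption, not from the slides):

```python
# Data table X (N x D): one row per instance, one column per random variable.
# The four rows shown are from the standard PlayTennis example; the full
# table has N = 14 instances.
columns = ["Outlook", "Temperature", "Humidity", "Wind", "PlayTennis"]
X = [
    ["sunny",    "hot",  "high", "weak",   "no"],
    ["sunny",    "hot",  "high", "strong", "no"],
    ["overcast", "hot",  "high", "weak",   "yes"],
    ["rain",     "mild", "high", "weak",   "yes"],
]

N, D = len(X), len(columns)  # number of instances, number of variables
print(N, D)
```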

Bayesian Parameter Learning

Uncertainty in Estimates A single point estimate does not quantify uncertainty. Is 6/10 the same as 6000/10000? Classical statistics: specify confidence interval for estimate. Bayesian approach: Assign a probability to parameter values.

Parameter Probabilities. Intuition: quantify uncertainty about parameter values by assigning a prior probability to each parameter value. This prior is not based on data. Example:

Hypothesis   Chance of Heads   Prior probability of Hypothesis
1            100%              10%
2             75%              20%
3             50%              40%
4             25%              20%
5              0%              10%

Yes, these are probabilities of probabilities.

Bayesian Prediction/Inference. What probability does the Bayesian assign to Coin = heads? I.e., how should we bet on Coin = heads? Answer: make a prediction for each parameter value, then average the predictions using the prior as weights:

Hypothesis   Chance of Heads   Prior probability   Weighted chance
1            100%              10%                 10%
2             75%              20%                 15%
3             50%              40%                 20%
4             25%              20%                  5%
5              0%              10%                  0%

Expected Chance = 10% + 15% + 20% + 5% + 0% = 50%. Relationship to BN parameters: we assign a distribution over the possible parameter values.
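A minimal sketch of this prior-weighted averaging; the two lists simply transcribe the table above:

```python
# Discrete prior over the coin's bias theta = P(heads).
thetas = [1.00, 0.75, 0.50, 0.25, 0.00]   # chance of heads under each hypothesis
priors = [0.10, 0.20, 0.40, 0.20, 0.10]   # prior probability of each hypothesis

# Bayesian prediction: average the per-hypothesis predictions, weighted by the prior.
p_heads = sum(theta * prior for theta, prior in zip(thetas, priors))
print(p_heads)  # 0.5
```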

Mean. In the binomial case, the Bayesian prediction can be seen as the expected value of a probability distribution P, also called the average, expectation, or mean of P. Notation: E, µ.
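In symbols (a standard definition, stated here because the slide's formula did not survive the transcript): for a discrete distribution over parameter values θ_i,

```latex
\mathbb{E}[\theta] = \mu = \sum_i \theta_i \, P(\theta_i),
\qquad
\mathbb{E}[\theta] = \int \theta \, p(\theta)\, d\theta \ \text{ in the continuous case.}
```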

Variance. Variance of a distribution: find the mean of the distribution; for each point, find its distance to the mean and square it (why?); take the expected value of the squared distance. The variance of a parameter estimate quantifies its uncertainty, and it decreases with more data.
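Continuing the coin example, a small sketch of these definitions applied to the discrete prior (values transcribed from the table above; the code itself is not from the original slide):

```python
thetas = [1.00, 0.75, 0.50, 0.25, 0.00]
priors = [0.10, 0.20, 0.40, 0.20, 0.10]

mean = sum(t * p for t, p in zip(thetas, priors))                    # E[theta]
variance = sum(p * (t - mean) ** 2 for t, p in zip(thetas, priors))  # E[(theta - mean)^2]
print(mean, variance)  # 0.5 and 0.075
```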

Continuous Priors. Probabilities range over [0,1], so probabilities of probabilities are probabilities of a continuous variable, given by a probability density function (p.d.f.). The density p(x) behaves like the probability of a discrete value, but with integrals replacing sums, e.g. P(a ≤ x ≤ b) = ∫_a^b p(x) dx. Exercise: find the p.d.f. of the uniform distribution over a closed interval [a,b].

Probability Densities x can be anything

Bayesian Prediction With P.D.F.s. Suppose we want to predict x using the model p(x|θ). Given a distribution over the parameters, we marginalize over θ.
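Written out, this is the standard predictive integral (the explicit formula is added here; after observing data D, the posterior p(θ|D) takes the place of p(θ)):

```latex
p(x) = \int p(x \mid \theta)\, p(\theta)\, d\theta,
\qquad
p(x \mid D) = \int p(x \mid \theta)\, p(\theta \mid D)\, d\theta .
```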

Bayesian Learning

Bayesian Updating. Update the prior using Bayes' theorem: P(h|D) = α P(D|h) P(h). Example: what is the posterior after observing 10 heads, starting from the prior below?

Hypothesis   Chance of Heads   Prior probability
1            100%              10%
2             75%              20%
3             50%              40%
4             25%              20%
5              0%              10%

Answer: the posterior is proportional to the likelihood times the prior, θ^h (1-θ)^t × P(θ). Notice that the posterior has a different form than the prior. (Example from Russell and Norvig, AIMA.)
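A minimal sketch of this update for the discrete prior above; the normalization constant α is obtained by dividing by the sum, and the printed values are rounded:

```python
thetas = [1.00, 0.75, 0.50, 0.25, 0.00]
priors = [0.10, 0.20, 0.40, 0.20, 0.10]
h, t = 10, 0  # observed 10 heads, 0 tails

# Unnormalized posterior: likelihood theta^h * (1 - theta)^t times the prior.
unnorm = [(theta ** h) * ((1 - theta) ** t) * prior
          for theta, prior in zip(thetas, priors)]
alpha = 1.0 / sum(unnorm)
posterior = [alpha * u for u in unnorm]
print([round(p, 4) for p in posterior])
# roughly [0.8956, 0.1009, 0.0035, 0.0, 0.0]: the data shifts belief toward theta = 1.
```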

Prior ∙ Likelihood ∝ Posterior

Updated Bayesian Predictions. Predicted probability that the next coin flip is heads, as we observe 10 flips: the Bayesian prediction approaches 1 smoothly, compared to the maximum-likelihood estimate. In the limit of infinite data, the Bayesian prediction equals the maximum-likelihood estimate; this is typical.

Updating: Continuous Example. Consider again the binomial case where θ = P(heads). Given n coin tosses with h observed heads and t observed tails, what is the posterior for a uniform prior over θ in [0,1]? Solved by Laplace in 1814!
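For reference, the standard answer (the slide leaves it as a question): with a uniform prior p(θ) = 1 on [0,1],

```latex
p(\theta \mid h, t) \;\propto\; \theta^{h} (1-\theta)^{t}
\quad\Longrightarrow\quad
p(\theta \mid h, t) = \mathrm{Beta}(\theta \mid h+1,\, t+1).
```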

Bayesian Prediction. How do we predict using the posterior? We can think of this as computing the probability that the next flip in the sequence is heads. Any ideas? Solution: Laplace, 1814!

The Laplace Correction Revisited. Suppose I have observed n data points with k heads. Find the posterior distribution, then predict the probability of heads using it. Result: (k+1)/(n+2), which is the m-estimate with a uniform prior and m = 2.
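The derivation in outline (standard, not spelled out on the slide): with a uniform prior, the posterior after k heads in n tosses is Beta(k+1, n−k+1), and the predictive probability of heads is its mean:

```latex
P(\text{heads} \mid D)
= \int_0^1 \theta \, p(\theta \mid D)\, d\theta
= \mathbb{E}[\theta \mid D]
= \frac{k+1}{n+2}.
```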

Parametrized Priors. Motivation: suppose I don't want a uniform prior, e.g. to smooth with some other m > 0, or to express prior knowledge. Use parameters for the prior distribution, called hyperparameters, chosen so that updating the prior is easy.

Beta Distribution: Definition. Hyperparameters a > 0, b > 0: p(θ | a, b) = Γ(a+b) / (Γ(a)Γ(b)) · θ^(a-1) (1-θ)^(b-1) for θ in [0,1]. Note the exponential-family form. The Γ term is a normalization constant.

Beta Distribution

Updating the Beta Distribution. A Beta prior yields a Beta posterior: the Beta is a conjugate prior. After observing h heads and t tails, a Beta(a, b) prior becomes a Beta(a+h, b+t) posterior. Hyperparameter a-1 acts like a virtual count of initial heads; hyperparameter b-1 like a virtual count of initial tails. So what is the normalization constant α? Answer: the normalization constant for exponents (h+a-1, t+b-1), i.e. Γ(h+a+t+b) / (Γ(h+a) Γ(t+b)). Conjugate priors must come from the exponential family. Why?
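A minimal sketch of conjugate updating in code; the hyperparameter and count values are illustrative assumptions, and scipy is used only to evaluate the resulting Beta distribution:

```python
from scipy.stats import beta

# Prior hyperparameters (illustrative): a-1 and b-1 act as virtual heads/tails.
a, b = 2.0, 2.0
h, t = 10, 0  # observed heads and tails

# Conjugacy: Beta(a, b) prior + binomial data -> Beta(a + h, b + t) posterior.
post_a, post_b = a + h, b + t
posterior = beta(post_a, post_b)

print(posterior.mean())   # posterior mean E[theta | data] = (a+h)/(a+b+h+t)
print(posterior.pdf(0.5)) # posterior density evaluated at theta = 0.5
```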

Conjugate Prior for non-binary variables Dirichlet distribution: generalizes Beta distribution for variables with >2 values.
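The corresponding update rule, stated for completeness (standard result, not shown on the slide): for a variable with K values and observed counts n_1, ..., n_K,

```latex
\mathrm{Dir}(\theta \mid \alpha_1, \dots, \alpha_K)
\;\longrightarrow\;
\mathrm{Dir}(\theta \mid \alpha_1 + n_1, \dots, \alpha_K + n_K).
```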

Summary. Maximum likelihood is a general parameter estimation method: choose the parameters that make the data as likely as possible. For Bayes net parameters, MLE = match the sample frequencies (a typical result). Problems: it is not defined when a count is 0, and it doesn't quantify uncertainty in the estimate. Bayesian approach: assume a prior probability over the parameters; the prior has hyperparameters (e.g., a Beta distribution). The prior choice is not based on data, and inferences (averaging) can be hard to compute. Other cases are covered later.
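To connect the two approaches, a minimal sketch contrasting the MLE with the Bayesian posterior-mean estimate for a single parameter; the counts and the Beta(1,1) (i.e. uniform) default prior are illustrative assumptions:

```python
def mle(heads, tails):
    """Maximum likelihood: match the sample frequency (undefined for a 0 total count)."""
    return heads / (heads + tails)

def bayes_estimate(heads, tails, a=1.0, b=1.0):
    """Posterior mean under a Beta(a, b) prior; a = b = 1 gives the Laplace correction."""
    return (heads + a) / (heads + tails + a + b)

print(mle(6, 4), bayes_estimate(6, 4))  # 0.6 vs 0.5833...
print(bayes_estimate(0, 0))             # 0.5: defined even with no data
# mle(0, 0) would raise ZeroDivisionError: the 0-count problem from the summary.
```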