Oliver Schulte Machine Learning 726 Bayes Net Learning.


1 Oliver Schulte Machine Learning 726 Bayes Net Learning

2 Learning Bayes Nets

3 Structure Learning Example: Sleep Disorder Network. Source: Fouron, Anne Gisèle (2006). Development of Bayesian Network models for obstructive sleep apnea syndrome assessment. M.Sc. Thesis, SFU.

4 Parameter Learning Scenarios. Complete data (today). Later: missing data (EM). Which model to use for P(child | parent) depends on the node types:
Discrete child, discrete parent: Maximum Likelihood; Decision Trees
Discrete child, continuous parent: logit distribution (logistic regression)
Continuous child, discrete parent: conditional Gaussian (not discussed)
Continuous child, continuous parent: linear Gaussian (linear regression)

5 The Parameter Learning Problem. Input: a data table X of size N x D: one column per node (random variable), one row per instance. How to fill in the Bayes net parameters? (Figure: Bayes net over the data variables, including PlayTennis and Humidity.)

Day  Outlook   Temperature  Humidity  Wind    PlayTennis
1    sunny     hot          high      weak    no
2    sunny     hot          high      strong  no
3    overcast  hot          high      weak    yes
4    rain      mild         high      weak    yes
5    rain      cool         normal    weak    yes
6    rain      cool         normal    strong  no
7    overcast  cool         normal    strong  yes
8    sunny     mild         high      weak    no
9    sunny     cool         normal    weak    yes
10   rain      mild         normal    weak    yes
11   sunny     mild         normal    strong  yes
12   overcast  mild         high      strong  yes
13   overcast  hot          normal    weak    yes
14   rain      mild         high      strong  no

6 Start Small: Single Node. What would you choose for the single parameter θ = P(Humidity = high)? How about P(Humidity = high) = 50%?

Day  Humidity
1    high
2    high
3    high
4    high
5    normal
6    normal
7    normal
8    high
9    normal
10   normal
11   normal
12   high
13   normal
14   high

7 Parameters for Two Nodes. Network: Humidity is the parent of PlayTennis. Parameters: θ = P(Humidity = high); θ1 = P(PlayTennis = yes | Humidity = high); θ2 = P(PlayTennis = yes | Humidity = normal). Is θ the same as in the single-node model? How about θ1 = 3/7? How about θ2 = 6/7?

Day  Humidity  PlayTennis
1    high      no
2    high      no
3    high      yes
4    high      yes
5    normal    yes
6    normal    no
7    normal    yes
8    high      no
9    normal    yes
10   normal    yes
11   normal    yes
12   high      yes
13   normal    yes
14   high      no
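A minimal Python sketch of this frequency counting, using the 14-row table above (the list literal and names below are just for illustration):

```python
# Estimate the two-node parameters by counting relative frequencies
# in the Humidity/PlayTennis table above.
data = [
    ("high", "no"), ("high", "no"), ("high", "yes"), ("high", "yes"),
    ("normal", "yes"), ("normal", "no"), ("normal", "yes"), ("high", "no"),
    ("normal", "yes"), ("normal", "yes"), ("normal", "yes"), ("high", "yes"),
    ("normal", "yes"), ("high", "no"),
]

n = len(data)
theta = sum(1 for h, _ in data if h == "high") / n   # P(Humidity = high)

def p_yes_given(humidity):
    """Relative frequency of PlayTennis = yes among rows with the given Humidity."""
    rows = [play for h, play in data if h == humidity]
    return rows.count("yes") / len(rows)

theta1 = p_yes_given("high")     # 3/7
theta2 = p_yes_given("normal")   # 6/7
print(theta, theta1, theta2)     # 0.5 0.42857... 0.85714...
```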

8 Maximum Likelihood Estimation

9 MLE. An important general principle: choose parameter values that maximize the likelihood of the data. Intuition: explain the data as well as possible. Recall from Bayes' theorem that the likelihood is P(data | parameters) = P(D | θ).

10 Finding the Maximum Likelihood Solution: Single Node. Model: P(Humidity = high) = θ. The data are independent and identically distributed (i.i.d.), so each row contributes a factor P(H_i | θ):

Humidity  P(H_i | θ)
high      θ
high      θ
high      θ
high      θ
normal    1-θ
normal    1-θ
normal    1-θ
high      θ
normal    1-θ
normal    1-θ
normal    1-θ
high      θ
normal    1-θ
high      θ

1. Write down the likelihood P(D | θ).
2. In the example, P(D | θ) = θ^7 (1-θ)^7.
3. Maximize this function with respect to θ.

11 Solving the Equation. 1. It is often convenient to take logarithms of products: ln P(D | θ) = 7 ln(θ) + 7 ln(1-θ). 2. Find the derivative with respect to θ and set it to 0.
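Carrying step 2 through (a short worked version of the calculation):

```latex
\frac{d}{d\theta}\ln P(D\mid\theta)
  \;=\; \frac{7}{\theta} \;-\; \frac{7}{1-\theta} \;=\; 0
  \quad\Longrightarrow\quad 7(1-\theta) = 7\theta
  \quad\Longrightarrow\quad \theta = \tfrac{1}{2},
```

which matches the observed frequency 7/14 of Humidity = high.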

12 Finding the Maximum Likelihood Solution: Two Nodes. Parameters: θ = P(Humidity = high), θ1 = P(PlayTennis = yes | high), θ2 = P(PlayTennis = yes | normal). Each row contributes a factor P(H, P | θ, θ1, θ2):

Humidity  PlayTennis  P(H, P | θ, θ1, θ2)
high      no          θ x (1-θ1)
high      no          θ x (1-θ1)
high      yes         θ x θ1
high      yes         θ x θ1
normal    yes         (1-θ) x θ2
normal    no          (1-θ) x (1-θ2)
normal    yes         (1-θ) x θ2
high      no          θ x (1-θ1)
normal    yes         (1-θ) x θ2
normal    yes         (1-θ) x θ2
normal    yes         (1-θ) x θ2
high      yes         θ x θ1
normal    yes         (1-θ) x θ2
high      no          θ x (1-θ1)

13 Finding the Maximum Likelihood Solution: Two Nodes (continued). In a Bayes net, we can maximize each parameter separately: fixing a parent condition reduces the problem to a single-node problem. (The per-row factors are as in the table on the previous slide.)
1. In the example, P(D | θ, θ1, θ2) = θ^7 (1-θ)^7 (θ1)^3 (1-θ1)^4 (θ2)^6 (1-θ2).
2. Take logs, differentiate, and set to 0.
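Spelling out steps 1 and 2, the log-likelihood separates into one bracket per parameter:

```latex
\ln P(D\mid\theta,\theta_1,\theta_2)
  \;=\; \bigl[\,7\ln\theta + 7\ln(1-\theta)\,\bigr]
  \;+\; \bigl[\,3\ln\theta_1 + 4\ln(1-\theta_1)\,\bigr]
  \;+\; \bigl[\,6\ln\theta_2 + \ln(1-\theta_2)\,\bigr].
```

Each bracket involves a single parameter and can be maximized on its own, giving θ = 7/14, θ1 = 3/7, θ2 = 6/7, i.e. the sample frequencies.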

14 Finding the Maximum Likelihood Solution: Single Node, More Than Two Possible Values.

Day  Outlook
1    sunny
2    sunny
3    overcast
4    rain
5    rain
6    rain
7    overcast
8    sunny
9    sunny
10   rain
11   sunny
12   overcast
13   overcast
14   rain

Parameters: θ1 = P(Outlook = sunny), θ2 = P(Outlook = overcast), θ3 = P(Outlook = rain).
1. In the example, P(D | θ1, θ2, θ3) = (θ1)^5 (θ2)^4 (θ3)^5.
2. Take logs and set to 0?? (We cannot simply set each derivative to 0, because the parameters are constrained to sum to 1; see the next slide.)

15 Constrained Optimization.
1. Write the constraint as g(x) = 0, e.g. g(θ1, θ2, θ3) = 1 - (θ1 + θ2 + θ3).
2. Form the Lagrangian of f: L(x, λ) = f(x) + λ g(x), e.g. L(θ, λ) = (θ1)^5 (θ2)^4 (θ3)^5 + λ (1 - θ1 - θ2 - θ3).
3. Stationary points of L correspond to candidate constrained optima of f.
Exercise: try finding the maximum of L given above (here f is the likelihood, which we want to maximize). Hint: try eliminating λ as an unknown.
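One way to work the exercise (a sketch that uses the log-likelihood f(θ) = 5 ln θ1 + 4 ln θ2 + 5 ln θ3, which has the same maximizer as the likelihood on the constraint set):

```latex
\frac{\partial L}{\partial\theta_i} \;=\; \frac{n_i}{\theta_i} - \lambda \;=\; 0
  \quad\Longrightarrow\quad \theta_i = \frac{n_i}{\lambda},
  \qquad n_1 = 5,\; n_2 = 4,\; n_3 = 5.
```

The constraint θ1 + θ2 + θ3 = 1 then forces λ = 5 + 4 + 5 = 14, so θ1 = 5/14, θ2 = 4/14, θ3 = 5/14: again the sample frequencies.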

16 Smoothing

17 Motivation. MLE goes to extreme values on small, unbalanced samples. E.g., observe 5 heads → estimate 100% heads. The 0-count problem: there may be no data at all in part of the space. E.g., in the PlayTennis data table (see the Parameter Learning Problem slide) there are no rows with Outlook = overcast and PlayTennis = no.

18 Smoothing Frequency Estimates. Observe h heads and t tails, with n = h + t. Choose a prior probability estimate p and an equivalent sample size m. The m-estimate is (h + mp) / (n + m). Interpretation: we started with a "virtual" sample of m tosses, mp of which were heads. With p = 1/2 and m = 2, this gives the Laplace correction (h + 1) / (n + 2).
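A small Python sketch of the m-estimate (the function name is illustrative):

```python
def m_estimate(h, n, p=0.5, m=2.0):
    """m-estimate of a probability from h successes in n trials,
    with prior estimate p and equivalent sample size m."""
    return (h + m * p) / (n + m)

# With p = 1/2 and m = 2 this is the Laplace correction.
print(m_estimate(5, 5))   # 5 heads out of 5 -> 0.857..., instead of the MLE 1.0
print(m_estimate(0, 5))   # 0 heads out of 5 -> 0.142..., instead of 0.0
```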

19 Exercise. Apply the Laplace correction to estimate:
1. P(Outlook = overcast | PlayTennis = no)
2. P(Outlook = sunny | PlayTennis = no)
3. P(Outlook = rain | PlayTennis = no)

Outlook   PlayTennis
sunny     no
sunny     no
overcast  yes
rain      yes
rain      yes
rain      no
overcast  yes
sunny     no
sunny     yes
rain      yes
sunny     yes
overcast  yes
overcast  yes
rain      no
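A hedged sketch of the exercise, assuming the Laplace correction generalizes to a three-valued variable in the usual way (add 1 to each count and add the number of values, here 3, to the denominator):

```python
# Counts of Outlook among the 5 rows with PlayTennis = no (from the table above).
counts_given_no = {"sunny": 3, "overcast": 0, "rain": 2}
n = sum(counts_given_no.values())   # 5
k = len(counts_given_no)            # 3 possible Outlook values

for value, c in counts_given_no.items():
    estimate = (c + 1) / (n + k)    # Laplace-corrected conditional estimate
    print(f"P(Outlook = {value} | PlayTennis = no) ~ {estimate:.3f}")
# sunny -> 0.500, overcast -> 0.125, rain -> 0.375
```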

20 Bayesian Parameter Learning

21 Uncertainty in Estimates. A single point estimate does not quantify uncertainty: is 6/10 the same as 6000/10000? Classical statistics: specify a confidence interval for the estimate. Bayesian approach: assign a probability to parameter values.

22 Parameter Probabilities. Intuition: quantify uncertainty about parameter values by assigning a prior probability to parameter values. Not based on data. Example:

Hypothesis  Chance of Heads  Prior probability of Hypothesis
1           100%             10%
2           75%              20%
3           50%              40%
4           25%              20%
5           0%               10%

23 Bayesian Prediction/Inference. What probability does the Bayesian assign to Coin = heads? I.e., how should we bet on Coin = heads? Answer: 1. Make a prediction for each parameter value. 2. Average the predictions, using the prior as weights:

Hypothesis  Chance of Heads  Prior probability  Weighted chance
1           100%             10%                10%
2           75%              20%                15%
3           50%              40%                20%
4           25%              20%                5%
5           0%               10%                0%

Expected chance = 50%.
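A minimal Python sketch of this weighted average, using the numbers from the table above:

```python
chances = [1.00, 0.75, 0.50, 0.25, 0.00]   # chance of heads under each hypothesis
prior   = [0.10, 0.20, 0.40, 0.20, 0.10]   # prior probability of each hypothesis

p_heads = sum(c * p for c, p in zip(chances, prior))
print(p_heads)   # 0.5 (up to floating-point rounding)
```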

24 Mean. In the binomial case, Bayesian prediction can be seen as the expected value of a probability distribution P. Also known as the average, expectation, or mean of P. Notation: E, µ. Example: Excel.

25 Variance. Variance of a distribution: 1. Find the mean of the distribution. 2. For each point, find its distance to the mean and square it. (Why square?) 3. Take the expected value of the squared distance. The variance of a parameter estimate quantifies uncertainty; it decreases with more data. Example: Excel.
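A short Python sketch of these three steps, applied to the discrete coin prior above:

```python
chances = [1.00, 0.75, 0.50, 0.25, 0.00]
prior   = [0.10, 0.20, 0.40, 0.20, 0.10]

mean = sum(c * p for c, p in zip(chances, prior))                  # step 1
var  = sum(p * (c - mean) ** 2 for c, p in zip(chances, prior))    # steps 2 and 3
print(mean, var)   # 0.5 0.075 (up to floating-point rounding)
```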

26 Continuous Priors. Parameter values are probabilities, so they range over [0, 1]. Probabilities of probabilities are then probabilities of a continuous variable, given by a probability density function (p.d.f.). A density p(x) behaves like the probability of a discrete value, but with integrals replacing sums, e.g. P(c ≤ x ≤ d) = ∫_c^d p(x) dx. Exercise: find the p.d.f. of the uniform distribution over a closed interval [a, b].

27 Probability Densities

28 Bayesian Prediction with P.D.F.s. Suppose we want to predict x, where the model gives p(x | θ). Given a distribution p(θ) over the parameters, we marginalize over θ: p(x) = ∫ p(x | θ) p(θ) dθ.

29 Bayesian Learning

30 Bayesian Updating. Update the prior using Bayes' theorem: P(h | D) = α P(D | h) P(h). (Russell and Norvig, AIMA.) Example: posterior after observing 10 heads, starting from the prior below.

Hypothesis  Chance of Heads  Prior probability
1           100%             10%
2           75%              20%
3           50%              40%
4           25%              20%
5           0%               10%
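A minimal Python sketch of this update after observing 10 heads, using the prior from the table:

```python
# P(h | D) = alpha * P(D | h) * P(h), with P(D | h) = (chance of heads)^10.
chances = [1.00, 0.75, 0.50, 0.25, 0.00]
prior   = [0.10, 0.20, 0.40, 0.20, 0.10]

unnormalized = [p * c ** 10 for c, p in zip(chances, prior)]
alpha = 1.0 / sum(unnormalized)
posterior = [alpha * u for u in unnormalized]
print([round(p, 4) for p in posterior])
# Most of the probability mass moves to the "100% heads" hypothesis.
```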

31 Prior ∙ Likelihood = Posterior

32 Updated Bayesian Predictions. (Figure: predicted probability that the next toss comes up heads, as we observe 10 coin tosses.)

33 Updating: Continuous Example. Consider again the binomial case where θ = the probability of heads. Given n coin tosses with h observed heads and t observed tails, what is the posterior if the prior over θ is uniform on [0, 1]? Solved by Laplace in 1814!
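A sketch of the answer, using the uniform prior p(θ) = 1 on [0, 1] (it anticipates the Beta distribution introduced below):

```latex
p(\theta \mid D) \;\propto\; P(D \mid \theta)\, p(\theta)
  \;=\; \theta^{h}(1-\theta)^{t}\cdot 1
  \quad\Longrightarrow\quad
  \theta \mid D \;\sim\; \mathrm{Beta}(h+1,\, t+1).
```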

34 Bayesian Prediction. How do we predict using the posterior? We can think of this as computing the probability of the next head in the sequence. Any ideas? Solution: Laplace, 1814!

35 The Laplace Correction Revisited. Suppose we have observed n data points with h heads. 1. Find the posterior distribution. 2. Predict the probability of heads using the posterior distribution. 3. Result: (h+1)/(n+2) = the m-estimate with uniform prior (p = 1/2, m = 2).
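A sketch of the calculation behind step 3, using the Beta(h+1, t+1) posterior from the continuous example and the standard fact that the mean of a Beta(a, b) distribution is a/(a+b):

```latex
P(\text{next toss is heads} \mid D)
  \;=\; \int_0^1 \theta\, p(\theta \mid D)\, d\theta
  \;=\; \mathbb{E}[\theta \mid D]
  \;=\; \frac{h+1}{(h+1)+(t+1)}
  \;=\; \frac{h+1}{n+2}.
```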

36 Parametrized Priors. Motivation: suppose we don't want a uniform prior, want to smooth with m > 0, or want to express prior knowledge. Use parameters for the prior distribution, called hyperparameters, chosen so that updating the prior is easy.

37 Beta Distribution: Definition. Hyperparameters a > 0, b > 0:
p(θ | a, b) = Γ(a + b) / (Γ(a) Γ(b)) · θ^(a-1) (1-θ)^(b-1), for θ in [0, 1].
The Γ term is a normalization constant.

38 Beta Distribution

39 Updating the Beta Distribution. So what is the normalization constant α? Hyperparameter a - 1 acts like a virtual count of initial heads; hyperparameter b - 1 acts like a virtual count of initial tails. Beta prior → Beta posterior: the Beta distribution is a conjugate prior.
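A sketch of the update, assuming the data contain h heads and t tails (n = h + t) and the prior is Beta(a, b):

```latex
p(\theta \mid D) \;\propto\; \theta^{h}(1-\theta)^{t}\cdot\theta^{a-1}(1-\theta)^{b-1}
  \;=\; \theta^{a+h-1}(1-\theta)^{b+t-1}
  \quad\Longrightarrow\quad
  \theta \mid D \;\sim\; \mathrm{Beta}(a+h,\, b+t),
```

so the normalization constant is α = Γ(a+b+n) / (Γ(a+h) Γ(b+t)).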

40 Conjugate Prior for Non-Binary Variables. The Dirichlet distribution generalizes the Beta distribution to variables with more than two values.

41 Summary. Maximum likelihood is a general parameter estimation method: choose the parameters that make the data as likely as possible. For Bayes net parameters, MLE = match the sample frequencies (a typical result). Problems: undefined in 0-count situations; doesn't quantify uncertainty in the estimate. Bayesian approach: assume a prior probability distribution over the parameters; the prior has hyperparameters (e.g., the Beta distribution). Problems: the choice of prior is not based on data; inferences (averaging) can be hard to compute.

