
1
Markov Networks

2
**Overview**

- Markov networks
- Inference in Markov networks
  - Computing probabilities
  - Markov chain Monte Carlo
  - Belief propagation
  - MAP inference
- Learning Markov networks
  - Weight learning
    - Generative
    - Discriminative (a.k.a. conditional random fields)
  - Structure learning

3
**Markov Networks**

Undirected graphical models

[Figure: undirected graph over Smoking, Cancer, Asthma, Cough]

Potential functions defined over cliques of the graph, e.g. Φ(S, C) over the Smoking–Cancer clique, with entries such as 4.5 (Smoking = False) and 2.7 (Smoking = True).
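The factorization implied by potentials over cliques is, in standard notation (a reconstruction, not verbatim from the slide):

```latex
P(x) \;=\; \frac{1}{Z} \prod_{c} \Phi_c(x_c),
\qquad
Z \;=\; \sum_{x} \prod_{c} \Phi_c(x_c)
```

where c ranges over the cliques of the graph and Z is the partition function.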

4
**Markov Networks**

Undirected graphical models

[Figure: undirected graph over Smoking, Cancer, Asthma, Cough]

Log-linear model, with w_i the weight of feature i and f_i feature i.
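The log-linear form referenced above is conventionally written as (a reconstruction, not verbatim from the slide):

```latex
P(x) \;=\; \frac{1}{Z} \exp\!\Big(\sum_i w_i \, f_i(x)\Big)
```

where w_i is the weight of feature i and f_i(x) is feature i. Any Markov network with strictly positive potentials can be put in this form by taking log-potentials as weights.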

5
**Hammersley–Clifford Theorem**

If the distribution is strictly positive (P(x) > 0) and the graph encodes its conditional independences, then the distribution is a product of potentials over cliques of the graph. The converse also holds. ("Markov network = Gibbs distribution")

6
**Markov Nets vs. Bayes Nets**

| Property        | Markov Nets      | Bayes Nets          |
|-----------------|------------------|---------------------|
| Form            | Prod. potentials | Prod. potentials    |
| Potentials      | Arbitrary        | Cond. probabilities |
| Cycles          | Allowed          | Forbidden           |
| Partition func. | Z = ?            | Z = 1               |
| Indep. check    | Graph separation | D-separation        |
| Indep. props.   | Some             | Some                |
| Inference       | MCMC, BP, etc.   | Convert to Markov   |

7
**Inference in Markov Networks**

- Computing probabilities
- Markov chain Monte Carlo
- Belief propagation
- MAP inference

8
**Computing Probabilities**

Goal: compute marginals & conditionals of the joint distribution

- Exact inference is #P-complete
- Approximate inference:
  - Monte Carlo methods
  - Belief propagation
  - Variational approximations

9
**Markov Chain Monte Carlo**

General algorithm: Metropolis–Hastings

- Sample the next state given the current one according to a transition probability
- Reject the new state with some probability to maintain detailed balance

Simplest (and most popular) algorithm: Gibbs sampling

- Sample one variable at a time given the rest
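For reference, the Metropolis–Hastings acceptance rule mentioned above is conventionally written as (standard form, not shown in the slide text):

```latex
A(x \to x') \;=\; \min\!\left(1,\; \frac{P(x')\, q(x \mid x')}{P(x)\, q(x' \mid x)}\right)
```

where q is the proposal (transition) distribution; rejecting the move with probability 1 − A is what maintains detailed balance. Gibbs sampling is the special case where the proposal is the exact conditional of one variable, so A = 1 and nothing is ever rejected.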

10
**Gibbs Sampling**

    state ← random truth assignment
    for i ← 1 to num-samples do
        for each variable x
            sample x according to P(x | neighbors(x))
            state ← state with new value of x
    P(F) ← fraction of states in which F is true
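The pseudocode above can be sketched in Python for a toy two-variable network. The potential values below are hypothetical stand-ins in the spirit of the Smoking/Cancer example, not values taken from the slides:

```python
import random

# Toy pairwise Markov network over two binary variables (Smoking, Cancer).
# PHI holds a single clique potential phi(s, c); the values are hypothetical.
PHI = {(0, 0): 4.5, (0, 1): 4.5, (1, 0): 2.7, (1, 1): 4.5}

def p_true_given_rest(var, state):
    """P(var = 1 | the other variable), computed from the clique potential."""
    other = state[1 - var]
    if var == 0:  # resample Smoking given Cancer
        w0, w1 = PHI[(0, other)], PHI[(1, other)]
    else:         # resample Cancer given Smoking
        w0, w1 = PHI[(other, 0)], PHI[(other, 1)]
    return w1 / (w0 + w1)

def gibbs_estimate(num_samples, seed=0):
    """Estimate P(Cancer = 1) as the fraction of sampled states where it holds."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1), rng.randint(0, 1)]  # random truth assignment
    hits = 0
    for _ in range(num_samples):
        for var in (0, 1):  # sample each variable given the rest
            state[var] = 1 if rng.random() < p_true_given_rest(var, state) else 0
        hits += state[1]
    return hits / num_samples
```

With these potentials the exact marginal is P(Cancer = 1) = (4.5 + 4.5) / 16.2 ≈ 0.556, and the Gibbs estimate approaches it as the number of samples grows.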

11
Belief Propagation

Form a factor graph: a bipartite network of variables and features.

Repeat until convergence:
- Nodes send messages to their features
- Features send messages to their variables

Messages are the current approximation to the node marginals; initialize all messages to 1.

12
Belief Propagation

[Figure: messages passed from nodes (x) to features (f) in the factor graph]

13
Belief Propagation

[Figure: messages passed from features (f) to nodes (x) in the factor graph]
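The two slides above depict the messages exchanged between nodes (x) and features (f); the standard sum-product updates (a reconstruction, not verbatim from the slides) are:

```latex
\mu_{x \to f}(x) \;=\; \prod_{h \in \mathrm{nb}(x)\setminus\{f\}} \mu_{h \to x}(x)
\qquad
\mu_{f \to x}(x) \;=\; \sum_{\mathbf{x}_f \setminus \{x\}} f(\mathbf{x}_f) \prod_{y \in \mathrm{nb}(f)\setminus\{x\}} \mu_{y \to f}(y)
```

where nb(·) denotes neighbors in the factor graph; a node's marginal is then approximated (up to normalization) by the product of its incoming feature messages.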

14
**MAP/MPE Inference**

Goal: find the most likely state of the world given evidence:

y* = argmax_y P(y | x)

where y is the query and x the evidence.

15
**MAP Inference Algorithms**

- Iterated conditional modes
- Simulated annealing
- Belief propagation (max-product)
- Graph cuts
- Linear programming relaxations

16
**Learning Markov Networks**

- Learning parameters (weights)
  - Generatively
  - Discriminatively
- Learning structure (features)

In this lecture: assume complete data. (If not: EM versions of the algorithms.)

17
**Generative Weight Learning**

- Maximize likelihood or posterior probability
- Numerical optimization (gradient or 2nd order)
- No local maxima
- Requires inference at each step (slow!)

The gradient compares the number of times feature i is true in the data with the expected number of times feature i is true according to the model.
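The annotations on the slide ("no. of times feature i is true in data" vs. "expected no. times feature i is true according to model") correspond to the standard log-likelihood derivative for a log-linear model (a reconstruction):

```latex
\frac{\partial}{\partial w_i} \log P_w(x)
\;=\; n_i(x) \;-\; E_w\!\left[\, n_i \,\right]
```

where n_i(x) counts how often feature i is true in the data and E_w[n_i] is its expectation under the current model; computing the expectation requires inference, which is why each gradient step is slow.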

18
Pseudo-Likelihood

- Likelihood of each variable given its neighbors in the data
- Does not require inference at each step
- Consistent estimator
- Widely used in vision, spatial statistics, etc.
- But PL parameters may not work well for long inference chains
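The pseudo-likelihood objective described above is conventionally written as:

```latex
\mathrm{PL}(x) \;=\; \prod_{i} P\big(x_i \,\big|\, \mathrm{nb}(x_i)\big)
```

where nb(x_i) are the neighbors of variable x_i in the Markov network. Each conditional involves only one variable and its Markov blanket, so no global inference (and no partition function Z) is needed.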

19
**Discriminative Weight Learning (a.k.a. Conditional Random Fields)**

- Maximize the conditional likelihood of the query (y) given the evidence (x)
- Voted perceptron: approximate the expected counts by the counts in the MAP state of y given x

The gradient compares the number of true groundings of clause i in the data with the expected number of true groundings according to the model.
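Analogously to the generative case, the slide's annotations correspond to the conditional log-likelihood gradient (a reconstruction):

```latex
\frac{\partial}{\partial w_i} \log P_w(y \mid x)
\;=\; n_i(x, y) \;-\; E_w\!\left[\, n_i(x, y) \,\right]
```

The voted perceptron replaces the expectation with n_i evaluated at the MAP assignment of y given x, turning each gradient step into a MAP inference problem.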

20
**Other Weight Learning Approaches**

- Generative: iterative scaling
- Discriminative: max-margin

21
**Structure Learning**

- Start with atomic features
- Greedily conjoin features to improve the score
- Problem: need to re-estimate weights for each new candidate
- Approximation: keep weights of previous features constant
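The greedy loop above can be sketched as follows. Here `score` is a hypothetical stand-in for the real objective, which would evaluate the (pseudo-)likelihood of the data under re-estimated weights or, per the approximation above, with previous feature weights held fixed:

```python
from itertools import combinations

def conjoin(f, g):
    """Conjunction of two boolean features (each a function of a world x)."""
    return lambda x: f(x) and g(x)

def greedy_structure_learn(features, score, max_rounds=5):
    """Greedily add the feature conjunction that most improves `score`.

    `score(features)` is a hypothetical stand-in for the real objective
    (e.g. pseudo-likelihood of the data under re-estimated weights).
    """
    features = list(features)
    best = score(features)
    for _ in range(max_rounds):
        # Evaluate every pairwise conjunction of current features.
        candidates = [
            (score(features + [conjoin(f, g)]), conjoin(f, g))
            for f, g in combinations(features, 2)
        ]
        cand_score, cand = max(candidates, key=lambda t: t[0])
        if cand_score <= best:  # no conjunction improves the score: stop
            break
        features.append(cand)
        best = cand_score
    return features
```

Re-scoring every candidate is the expensive step the slide calls out; keeping previous weights constant makes each `score` call cheap at the cost of a less exact search.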
