Markov Random Fields & Conditional Random Fields


Markov Random Fields & Conditional Random Fields
John Winn, MSR Cambridge

Road map
- Markov Random Fields
  - What they are
  - Uses in vision/object recognition
  - Advantages
  - Difficulties
- Conditional Random Fields
  - Further difficulties

Markov Random Fields
[Figure: factor graph over four variables X1–X4, with potentials ψ12(X1, X2), ψ23(X2, X3) and ψ234(X2, X3, X4)]
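A minimal sketch of the model in the figure: the joint distribution is the product of the clique potentials, normalised by the partition function Z. The potential values below are illustrative, not from the slides; the structure (ψ12, ψ23, ψ234 over binary X1–X4) follows the figure.

```python
import itertools

# Illustrative clique potentials favouring agreement between neighbours.
def psi12(x1, x2):
    return 2.0 if x1 == x2 else 1.0

def psi23(x2, x3):
    return 2.0 if x2 == x3 else 1.0

def psi234(x2, x3, x4):
    return 3.0 if x2 == x3 == x4 else 1.0

def unnormalised(x):
    x1, x2, x3, x4 = x
    return psi12(x1, x2) * psi23(x2, x3) * psi234(x2, x3, x4)

states = list(itertools.product([0, 1], repeat=4))
Z = sum(unnormalised(x) for x in states)          # partition function
p = {x: unnormalised(x) / Z for x in states}      # normalised joint

assert abs(sum(p.values()) - 1.0) < 1e-12
```

Enumeration is only feasible for toy models like this one; the number of states grows exponentially with the number of variables, which is why the inference and learning difficulties below arise.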

Examples of use in vision
- Grid-shaped MRFs for pixel labelling, e.g. segmentation
- MRFs (e.g. star-shaped) over part positions for pictorial structures/constellation models

Advantages
- Probabilistic model:
  - Captures uncertainty
  - No 'irreversible' decisions; supports iterative reasoning
  - Principled fusing of different cues
- Undirected model:
  - Allows 'non-causal' relationships (soft constraints)
- Efficient algorithms: inference is now practical for MRFs with millions of variables, so they can be applied to raw pixels.

Maximum Likelihood Learning
Adding the partition function back in, for an MRF of the form p(x | θ) = (1/Z(θ)) exp(θ · f(x)) the log-likelihood gradient is

  ∂ log p(x | θ) / ∂θ = f(x) − E_{p(x' | θ)}[ f(x') ]

i.e. the sufficient statistics of the data minus the expected model sufficient statistics (the second term comes from differentiating Z(θ)).
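The gradient above can be checked numerically on a model small enough to enumerate. This is a sketch with a hypothetical toy model (a 3-node binary chain whose single feature counts agreeing neighbour pairs); at the maximum-likelihood solution the model's expected statistic matches the data's.

```python
import itertools
import math

# Toy exponential-family MRF: p(x) ∝ exp(w * f(x)),
# where f counts agreeing neighbour pairs on a 3-node binary chain.
def f(x):
    return sum(1 for a, b in zip(x, x[1:]) if a == b)

states = list(itertools.product([0, 1], repeat=3))

def model_expectation(w):
    weights = [math.exp(w * f(x)) for x in states]
    Z = sum(weights)
    return sum(wt * f(x) for wt, x in zip(weights, states)) / Z

# 'Data': empirical sufficient statistic of a small illustrative training set.
data = [(0, 0, 0), (1, 1, 1), (0, 0, 1)]
data_stat = sum(f(x) for x in data) / len(data)

# Gradient ascent on the log likelihood: gradient = data_stat - E_model[f].
w = 0.0
for _ in range(2000):
    w += 0.1 * (data_stat - model_expectation(w))

# At convergence the moment-matching condition holds.
assert abs(model_expectation(w) - data_stat) < 1e-4
```

Note that every gradient step requires the model expectation, i.e. inference; for real MRFs this expectation cannot be enumerated, which is the source of the learning difficulties discussed next.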

Difficulty I: Inference
- Exact inference is intractable except in a few cases, e.g. small models
- Must resort to approximate methods:
  - Loopy belief propagation
  - MCMC sampling
  - Alpha expansion (MAP solution only)
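As a sketch of the MCMC option, here is a Gibbs sampler on a hypothetical 4-node Ising-style chain (coupling J = 1 is illustrative). Each sweep resamples every variable from its conditional distribution; marginals are estimated by counting, and compared against the exact enumerated answer.

```python
import itertools
import math
import random

random.seed(0)

# Ising-style chain of 4 binary variables with coupling J (illustrative).
J = 1.0
def energy(x):
    return -J * sum(1 if a == b else -1 for a, b in zip(x, x[1:]))

# Exact marginal p(X1 = 1) by enumeration (possible only for tiny models).
states = list(itertools.product([0, 1], repeat=4))
weights = [math.exp(-energy(x)) for x in states]
Z = sum(weights)
exact_p = sum(w for w, x in zip(weights, states) if x[0] == 1) / Z

# Gibbs sampling: resample each variable from its conditional in turn.
x = [0, 0, 0, 0]
count = 0
n_sweeps = 20000
for _ in range(n_sweeps):
    for i in range(4):
        logits = []
        for v in (0, 1):
            x[i] = v
            logits.append(-energy(x))
        m = max(logits)
        p1 = math.exp(logits[1] - m) / (math.exp(logits[0] - m) + math.exp(logits[1] - m))
        x[i] = 1 if random.random() < p1 else 0
    count += x[0]

estimate = count / n_sweeps
assert abs(estimate - exact_p) < 0.05
```

By the symmetry of this model the exact marginal is 0.5; the sampler recovers it only approximately, and on loopy, strongly coupled models mixing can be very slow.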

Difficulty II: Learning
- Gradient descent is vulnerable to local minima
- Slow: must perform expensive inference at each iteration
- Can stop inference early:
  - Contrastive divergence
  - Piecewise training + variants
- Need fast + accurate methods
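The contrastive-divergence idea can be sketched on the same kind of toy model as above (a hypothetical 3-node binary chain with an agreement feature; all values illustrative): CD-1 replaces the expensive model expectation in the gradient with the statistic of a sample obtained by a single Gibbs sweep started from the data point.

```python
import math
import random

random.seed(1)

# Toy model p(x) ∝ exp(w * f(x)); f counts agreeing neighbour pairs.
def f(x):
    return sum(1 for a, b in zip(x, x[1:]) if a == b)

def gibbs_sweep(x, w):
    x = list(x)
    for i in range(len(x)):
        probs = []
        for v in (0, 1):
            x[i] = v
            probs.append(math.exp(w * f(x)))
        p1 = probs[1] / (probs[0] + probs[1])
        x[i] = 1 if random.random() < p1 else 0
    return tuple(x)

# Illustrative training set that strongly favours agreement.
data = [(0, 0, 0), (1, 1, 1)] * 50

# CD-1 update: w += lr * (f(data point) - f(one-sweep sample)).
w, lr = 0.0, 0.05
for x in data:
    x_tilde = gibbs_sweep(x, w)
    w += lr * (f(x) - f(x_tilde))

assert w > 0.0  # learning pushed the agreement weight upward
```

The single sweep is cheap but biased: the sample stays close to the data, so CD trades accuracy for speed, which is exactly the "stop inference early" compromise on the slide.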

Difficulty III: Large cliques
- For images, we want to look at patches, not pairs of pixels, so we would like to use large cliques.
- Cost of inference (memory and CPU) is typically exponential in clique size.
- Example: Field of Experts (Roth and Black)
  - Training: contrastive divergence, over a week on a cluster of 50+ machines
  - Test: Gibbs sampling, very slow
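The exponential cost is easy to see numerically: a table-based potential over a clique of c variables with K states each needs K**c entries. The label count K = 256 below (e.g. grey-level intensities) is an illustrative choice, not from the slides.

```python
# Size of a clique potential table: K states per variable, clique size c.
K = 256  # e.g. grey levels per pixel (illustrative)
table_sizes = {c: K ** c for c in (2, 3, 5)}

# A pairwise clique is tiny; a 5-pixel clique is already ~10^12 entries.
assert table_sizes[2] == 65_536
assert table_sizes[5] == 1_099_511_627_776
```

This is why large-clique models such as Field of Experts avoid explicit tables and use parametric (filter-based) potentials, yet still pay heavily at training and test time.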

Other MRF issues…
- Local minima when performing inference in high-dimensional latent spaces
- MRF models often require making inaccurate independence assumptions about the observations

Conditional Random Fields (Lafferty et al., 2001)
[Figure: CRF factor graph — potentials ψ12, ψ23 and ψ234 over labels X1–X4, each also conditioned on the image I]

Examples of use in vision Grid-shaped CRFs for pixel labelling (e.g. segmentation), using boosted classifiers.
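A minimal sketch of such a pixel-labelling CRF, reduced to a 4-pixel chain so everything can be enumerated: p(y | x) ∝ exp(Σ unary(y_i, x) + Σ pairwise(y_i, y_{i+1})). The unary score stands in for a boosted classifier's output; all score values are illustrative.

```python
import itertools
import math

def unary(label, obs):
    # Stand-in for a per-pixel classifier score: reward matching the observation.
    return 1.5 if label == obs else 0.0

def pairwise(a, b):
    # Smoothness term: neighbouring pixels prefer the same label.
    return 1.0 if a == b else 0.0

def score(y, x):
    return (sum(unary(yi, xi) for yi, xi in zip(y, x))
            + sum(pairwise(a, b) for a, b in zip(y, y[1:])))

x = (1, 1, 0, 1)  # noisy per-pixel classifier decisions for a 4-pixel row
labellings = list(itertools.product([0, 1], repeat=4))
Z = sum(math.exp(score(y, x)) for y in labellings)  # conditional partition fn
map_y = max(labellings, key=lambda y: score(y, x))

# The smoothness term overrules the single noisy pixel.
assert map_y == (1, 1, 1, 1)
```

Because the model is conditioned on the image, the partition function depends on x, so it must be recomputed per image; that is what makes CRF learning expensive, as the next slide notes.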

Difficulty IV: CRF Learning
The gradient is now conditioned on the image: the sufficient statistics of the labels given the image, minus the expected sufficient statistics given the image,

  ∂ log p(y | x, θ) / ∂θ = f(y, x) − E_{p(y' | x, θ)}[ f(y', x) ]

so the expectation (and hence inference) must be recomputed for every training image at every gradient step.

Difficulty V: Scarcity of labels
- CRF is a conditional model, so it needs labels.
- Labels are expensive + increasingly hard to define.
- Labels are also inherently lower dimensional than the data, and hence support learning fewer parameters than generative models.