1 Computer Vision Research
 Huttenlocher, Zabih
– Recognition, stereopsis, restoration, learning
 Strong algorithmic focus
– Combinatorial optimization
– Geometric algorithms
 Application areas
– Techniques we developed have played an important role at Xerox and Microsoft and have resulted in successful startups
– Medical imaging: Zabih joint with the Radiology department in NYC

2 Markov Random Fields
 Many computer vision problems can be formalized using Markov random fields
– A set of sites and a neighborhood system
– Estimate a label for each site, accounting for the goodness of fit of the label to the observed data at that site and the consistency of the label with its neighbors
 MRFs are undirected graphical models
– Probabilistic relational models are the directed counterpart
 Until recently, a formalism used in computer vision but not considered very practical
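A standard way to make "goodness of fit plus consistency with neighbors" concrete is an energy of the form E(x) = Σ_p D_p(x_p) + Σ_{p,q} V(x_p, x_q), where the estimated labeling is the one minimizing E. A minimal sketch on a tiny 4-connected grid; all names and the toy costs are illustrative, not from the slides:

```python
def mrf_energy(labels, data_cost, smoothness):
    """labels: dict (row, col) -> label.
    data_cost: dict site -> {label: cost}.
    smoothness(a, b): pairwise penalty for neighboring labels a, b."""
    energy = 0.0
    for (r, c), lab in labels.items():
        energy += data_cost[(r, c)][lab]                # goodness of fit to observed data
        for nbr in ((r + 1, c), (r, c + 1)):            # 4-connected grid; count each edge once
            if nbr in labels:
                energy += smoothness(lab, labels[nbr])  # consistency with neighbors
    return energy
```

MAP estimation in the MRF then amounts to searching for the labeling with the lowest such energy, which is what the algorithms on the following slides address.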

3 Example MRF Problems
 Stereopsis
– For an image pair, estimate the depth at each pixel
– Sites are pixels, neighbors form a 4-connected grid, labels are depths
 Object recognition
– For an image, estimate the location of a multi-part flexible object
– Sites are parts, neighbors are connected parts, labels are locations

4 MRF Algorithms
 Underlying graph G = (S, N)
– For a tree structure, can solve exactly using a variant of the Viterbi recurrence, but this is impractical for large label sets
– For two labels, can solve exactly using min-cut
– For three or more labels on a grid graph, the problem is NP-hard
 Recent algorithmic progress
– For grid graphs, good approximation methods
– For low tree-width graphs, exact methods even for large label sets
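The exact tree method mentioned above can be sketched for the simplest tree, a chain, as a min-sum (Viterbi-style) dynamic program. This illustrative version runs in O(nm²) for n sites and m labels, which is precisely the cost the later slide on tree-like MRFs calls impractical for large label sets; the cost functions are hypothetical:

```python
def chain_map(data_cost, pairwise):
    """Exact min-sum DP on a chain MRF (a tree), Viterbi-style.
    data_cost: list of per-site {label: cost} dicts.
    pairwise(a, b): cost of adjacent sites taking labels a, b.
    Returns the minimum-energy labeling of the chain."""
    n = len(data_cost)
    labels = list(data_cost[0])
    # Forward pass: best[i][l] = min energy of the prefix ending with label l at site i.
    best = [dict(data_cost[0])]
    back = []
    for i in range(1, n):
        cur, ptr = {}, {}
        for l in labels:
            prev_l = min(labels, key=lambda k: best[-1][k] + pairwise(k, l))
            cur[l] = best[-1][prev_l] + pairwise(prev_l, l) + data_cost[i][l]
            ptr[l] = prev_l
        best.append(cur)
        back.append(ptr)
    # Backward pass: trace the optimal labels from the last site.
    x = [min(labels, key=lambda l: best[-1][l])]
    for ptr in reversed(back):
        x.append(ptr[x[-1]])
    return list(reversed(x))
```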

5 Alpha Expansion Technique [BVZ99]
 Use min-cut to efficiently solve a special two-label problem
– Each site either keeps its current label or is replaced with α
 Iterate over possible values of α
– Each expansion move rules out exponentially many labelings
[Figure: input labeling x and a red α-expansion move from x]
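The outer loop of alpha expansion can be sketched as follows. In [BVZ99] the binary keep-or-switch-to-α subproblem is solved exactly by min-cut; the brute-force stand-in below only keeps the example self-contained (usable on tiny toy problems), and all helper names are illustrative:

```python
from itertools import product

def expansion_move(x, alpha, energy):
    """Best labeling in which each site either keeps its label or switches
    to alpha. Brute force over moves; min-cut solves this exactly and
    efficiently in the real algorithm."""
    sites = list(x)
    best = dict(x)
    for keep in product([False, True], repeat=len(sites)):
        cand = {s: (x[s] if k else alpha) for s, k in zip(sites, keep)}
        if energy(cand) < energy(best):
            best = cand
    return best

def alpha_expansion(x, labels, energy):
    """Sweep over alpha values until no expansion move lowers the energy."""
    improved = True
    while improved:
        improved = False
        for alpha in labels:   # each move rules out exponentially many labelings
            y = expansion_move(x, alpha, energy)
            if energy(y) < energy(x):
                x, improved = y, True
    return x
```

On grid graphs with a metric smoothness term, this local search is known to reach a labeling within a constant factor of the global optimum.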

6 Graph Cuts for MRFs on Grids
 The best stereo algorithms use the alpha expansion technique
– Middlebury stereo benchmark
 Beyond computer vision: many image compositing, restoration, and editing tasks
– E.g., SIGGRAPH, Microsoft
[Figure: ground truth vs. correlation vs. alpha expansion results]

7 Tree-Like MRFs
 Object recognition
– Nodes are parts, labels are locations
 Small graph, not at all grid-like
– Many labels (millions or more)
 Viterbi algorithm for trees
– Still not practical: O(m²n) for n parts and m locations per part
– Fast min-convolution techniques make finding the best labeling O(mn)
 Generalizes to fan-like graphs
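For the simplest case of a linear (L1) pairwise cost, the min convolution behind the O(mn) figure, out[i] = min_j (f[j] + c·|i − j|), can be computed in linear time with two sweeps (a 1D distance transform). Quadratic costs need the lower-envelope method of Felzenszwalb and Huttenlocher, which this sketch does not show:

```python
def l1_min_convolution(f, c=1.0):
    """Linear-time min convolution with an L1 cost:
    out[i] = min over j of (f[j] + c * |i - j|)."""
    out = list(f)
    for i in range(1, len(out)):               # left-to-right sweep
        out[i] = min(out[i], out[i - 1] + c)
    for i in range(len(out) - 2, -1, -1):      # right-to-left sweep
        out[i] = min(out[i], out[i + 1] + c)
    return out
```

Replacing the naive O(m²) minimization over label pairs with this transform at every message/DP step is what brings the tree algorithm from O(m²n) down to O(mn).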

8 Fan-Structured Models [CFH05]
 K-fan: let R ⊆ S be a set of reference parts and R′ = S − R the remaining parts
– Complete graph on R and complete bipartite graph between R and R′
 Parts are local image patches
– Probability of an (oriented) edge at each pixel
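The k-fan structure described above can be written down directly as an edge set: a complete graph on the reference set R plus a complete bipartite graph between R and the remaining parts. Part names here are illustrative:

```python
from itertools import combinations

def k_fan_edges(S, R):
    """Edge set of a k-fan over parts S with reference set R (|R| = k):
    complete graph on R, complete bipartite graph between R and S - R."""
    R = set(R)
    assert R <= set(S)
    rest = [p for p in S if p not in R]
    edges = list(combinations(sorted(R), 2))            # complete graph on R
    edges += [(r, p) for r in sorted(R) for p in rest]  # bipartite R x (S - R)
    return edges
```

A 1-fan is then a star graph (one reference part connected to every other part), which is why exact inference in these models stays tractable.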

9 Models (Weakly Supervised)
[Figure: learned models — Car (rear): 1-fan; Motorbike: 2-fan; Face: 1-fan]
Training examples are only labeled as positive/negative

10 Recognition Results
 High detection accuracy
– Motorbikes 98.6%, Faces 98.2%, Cars 94.4%, Planes 95.0%
 Fast running time
– Approx. 2 sec. per image with 2 fans
 Exact (global) method for computing the highest-probability configuration of parts for a given image
– No approximations or local search techniques
 Single overall optimization problem
– Does not depend on “feature detection”