Fast Primal-Dual Strategies for MRF Optimization (Fast PD)
Robot Perception Lab, Taha Hamedani, Aug 2014

Overview
A new efficient MRF optimization algorithm that:
- generalizes α-expansion,
- runs at least 3-9 times faster than α-expansion,
- can be used for boosting the performance of dynamic MRFs, i.e. MRFs varying over time,
- guarantees an almost optimal solution for a much wider class of NP-hard MRF problems.

Energy Function
Given a weighted graph G (with nodes V, edges E, and weights w_pq), one seeks to assign a label x_p (from a discrete set of labels L) to each p ∈ V, so that the following cost is minimized:

E(x) = Σ_{p∈V} c_p(x_p) + Σ_{(p,q)∈E} w_pq · d(x_p, x_q)

Here c_p(·) and d(·,·) determine the singleton (unary) and pairwise MRF potential functions.
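
As a concrete illustration, here is a minimal NumPy sketch that evaluates this energy for a given labeling. The toy graph (a 4-node cycle), the cost values, and all names are illustrative, not part of the original slides.

```python
import numpy as np

# Toy MRF: 4 nodes, 2 labels, edges forming a cycle.
unary = np.array([[0.0, 2.0],      # c_p(l): cost of label l at node p
                  [1.5, 0.0],
                  [0.0, 1.0],
                  [2.0, 0.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
w = {e: 1.0 for e in edges}        # edge weights w_pq

def d(l1, l2):
    """Pairwise distance d(., .); a Potts model as an example."""
    return 0.0 if l1 == l2 else 1.0

def energy(x):
    """E(x) = sum_p c_p(x_p) + sum_pq w_pq * d(x_p, x_q)."""
    unary_cost = sum(unary[p, x[p]] for p in range(len(x)))
    pair_cost = sum(w[(p, q)] * d(x[p], x[q]) for (p, q) in edges)
    return unary_cost + pair_cost

print(energy([0, 1, 0, 1]))        # cost of one candidate labeling
```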

Primal-dual MRF optimization algorithms
Theorem 1 (Primal-Dual schema). Keep generating pairs of integral primal and dual solutions (x^k, y^k) until the elements of the last pair, say (x, y), are both feasible and have costs that are close enough, i.e. their ratio satisfies c^T x / b^T y ≤ f_app. Then x is guaranteed to be an f_app-approximate solution to the optimal integral solution x*, i.e. c^T x ≤ f_app · c^T x*.
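
Why the stopping test certifies approximation quality: by weak LP duality, the cost of any feasible dual solution never exceeds the cost of any feasible (integral) primal solution, so

b^T y ≤ c^T x* ≤ c^T x,

and the stopping condition c^T x ≤ f_app · b^T y then gives

c^T x ≤ f_app · b^T y ≤ f_app · c^T x*.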

The primal-dual schema (illustrative figure from the original slides, not preserved in this transcript)

Fast primal-dual MRF optimization
The MRF is cast as the integer linear program

min_x E(θ, x) = Σ_{p∈V} θ_p · x_p + Σ_{pq∈E} θ_pq · x_pq

In this formulation, θ = ({θ_p}_{p∈V}, {θ_pq}_{pq∈E}) represents a vector of MRF parameters, consisting of all unary functions θ_p = {θ_p(·)} and pairwise functions θ_pq = {θ_pq(·,·)}.
x = ({x_p}_{p∈V}, {x_pq}_{pq∈E}) denotes a vector of binary (0/1) MRF variables, consisting of all unary subvectors x_p = {x_p(·)} and pairwise subvectors x_pq = {x_pq(·,·)}. They satisfy x_p(l) = 1 ⇔ label l is assigned to p, while x_pq(l, l′) = 1 ⇔ labels l, l′ are assigned to p, q.
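
To make the binary encoding concrete, a small sketch (all names mine, not from the slides) that builds the indicator vectors x_p(·) and x_pq(·,·) from a labeling and evaluates the objective θ · x; it also checks the consistency constraints introduced on the next slide, which hold by construction here.

```python
import numpy as np

L = 2                              # number of labels
labels = [0, 1, 0, 1]              # a candidate labeling x_p
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Unary indicators: x_p(l) = 1  <=>  label l is assigned to p
x_p = np.eye(L)[labels]            # shape (|V|, L), one-hot rows

# Pairwise indicators: x_pq(l, l') = 1  <=>  labels l, l' assigned to p, q
x_pq = {(p, q): np.outer(x_p[p], x_p[q]) for (p, q) in edges}

# With parameters theta_p (one row per node) and theta_pq (one LxL table
# per edge), the objective is the inner product theta . x:
theta_p = np.array([[0.0, 2.0], [1.5, 0.0], [0.0, 1.0], [2.0, 0.0]])
theta_pq = {e: np.array([[0.0, 1.0], [1.0, 0.0]]) for e in edges}
obj = (theta_p * x_p).sum() + sum((theta_pq[e] * x_pq[e]).sum() for e in edges)
print(obj)

# Marginalization constraints hold by construction:
for (p, q) in edges:
    assert np.allclose(x_pq[(p, q)].sum(axis=1), x_p[p])   # sum over l'
    assert np.allclose(x_pq[(p, q)].sum(axis=0), x_p[q])   # sum over l
```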

MRF constraints

Σ_{l∈L} x_p(l) = 1   for every p ∈ V
Σ_{l∈L} x_pq(l, l′) = x_q(l′)  and  Σ_{l′∈L} x_pq(l, l′) = x_p(l)   for every pq ∈ E

The first constraints simply express the fact that a unique label must be assigned to each node p. The second constraints ensure consistency between unary and pairwise variables: if x_p(l) = x_q(l′) = 1, then x_pq(l, l′) = 1 as well. Together with integrality, these constraints define the marginal polytope.

The local marginal polytope is associated with the linear programming (LP) relaxation, which is formed by replacing the integrality constraints x_p(·), x_pq(·,·) ∈ {0, 1} with the relaxed constraints x_p(·), x_pq(·,·) ≥ 0.

The original (possibly difficult) optimization problem decomposes into easier subproblems (called the slaves), which are coordinated by a master problem via message exchange.

We decompose the original MRF optimization problem, which is NP-hard (since it is defined on a general graph G), into a set of easier MRF subproblems, each one defined on a tree T ⊂ G. To this end, we first transform the problem into a more appropriate form by introducing a set of auxiliary variables. Let T(G) be a set of subtrees of graph G, chosen so that every node and edge of G is covered by at least one tree (e.g., for a grid graph, the horizontal and vertical chains form such a cover).

(Decomposition) For each tree T ∈ T(G) we then imagine that there is a smaller MRF defined just on the nodes and edges of tree T. We associate to it a vector of MRF parameters θ^T, as well as a vector of MRF variables x^T (these have a similar form to the vectors θ and x of the original MRF, except that they are smaller in size).

Redundancy
The MRF variables contained in vector x^T will be redundant: we initially assume that they are all equal to the corresponding MRF variables in vector x, i.e. x^T = x|_T, where x|_T represents the subvector of x containing MRF variables only for the nodes and edges of tree T.

All the vectors {θ^T} will be defined so that they satisfy the following conditions:

Σ_{T∈T(p)} θ^T_p = θ_p,   Σ_{T∈T(pq)} θ^T_pq = θ_pq

Here, T(p) and T(pq) denote the set of all trees of T(G) that contain node p and edge pq, respectively. (For instance, the uniform splitting θ^T_p = θ_p / |T(p)|, θ^T_pq = θ_pq / |T(pq)| satisfies these conditions.)

Energy Decomposition
Due to the above conditions on {θ^T} and the coupling constraints x^T = x|_T, the energy decomposes as

E(θ, x) = Σ_{T∈T(G)} E(θ^T, x^T),

so the MRF problem can be written as

min_{x, {x^T}} Σ_{T∈T(G)} E(θ^T, x^T)   s.t.   x^T = x|_T for all T ∈ T(G).

It is clear that without the constraints x^T = x|_T, this problem would decouple into a series of smaller MRF problems (one per tree T). We therefore relax these coupling constraints via Lagrange multipliers {λ^T}, obtaining the Lagrangian

L({x^T}, x, {λ^T}) = Σ_T E(θ^T, x^T) + Σ_T λ^T · (x^T − x|_T) = Σ_T E(θ^T + λ^T, x^T) − Σ_T λ^T · x|_T.

Eliminating vector x by minimizing the Lagrangian over it forces the feasibility conditions Σ_{T∈T(p)} λ^T_p = 0 and Σ_{T∈T(pq)} λ^T_pq = 0 (otherwise this minimum equals −∞).

The resulting Lagrangian dual thus simplifies to

g({λ^T}) = Σ_T g^T(λ^T),   where   g^T(λ^T) = min_{x^T} E(θ^T + λ^T, x^T).

The dual problem is obtained by maximizing over the feasible set: the master maximizes g({λ^T}) subject to the feasibility conditions above, while each slave T computes g^T(λ^T) by exactly minimizing a tree-structured MRF (e.g. via dynamic programming).

According to Lemma 1, a subgradient of g^T is equal to the optimal solution x̄^T of the corresponding slave problem. The multipliers λ^T must therefore be updated by a projected subgradient ascent step:

λ^T_p ← λ^T_p + α_t · ( x̄^T_p − (1/|T(p)|) Σ_{T′∈T(p)} x̄^{T′}_p )

where α_t is the step size at iteration t.
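
Putting the master-slave loop together: below is a minimal, self-contained NumPy sketch of projected-subgradient dual decomposition in the spirit of [1], on a toy 4-node cycle covered by two chains. This is an illustration of the update above, not the FastPD code; all function and variable names are mine, and the slaves are chains solved exactly by dynamic programming.

```python
import numpy as np

def solve_chain_mrf(unary, pairwise):
    """Exact minimum-energy labeling of a chain MRF by dynamic programming.
    unary: (n, L) node costs; pairwise: (L, L) edge costs shared by all edges."""
    n, L = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + pairwise + unary[i][None, :]
        back[i] = total.argmin(axis=0)       # best predecessor label
        cost = total.min(axis=0)
    labels = np.empty(n, dtype=int)
    labels[-1] = int(cost.argmin())
    for i in range(n - 1, 0, -1):            # backtrack
        labels[i - 1] = back[i, labels[i]]
    return labels

def dual_decomposition(chains, unary, pairwise, num_iters=100):
    """Projected-subgradient master loop over chain-structured slaves.
    chains: node-index lists whose union covers all nodes/edges of the graph."""
    n, L = unary.shape
    cover = np.zeros(n)                      # |T(p)|: chains covering node p
    for c in chains:
        cover[c] += 1
    # Split unaries uniformly: theta^T_p = theta_p / |T(p)|
    slave_unary = [unary[c] / cover[c][:, None] for c in chains]
    for it in range(num_iters):
        step = 1.0 / (1.0 + it)              # diminishing step size alpha_t
        # Slaves: exact minimizers xbar^T, as one-hot indicator vectors
        xbar = [np.eye(L)[solve_chain_mrf(u, pairwise)] for u in slave_unary]
        # Average slave indicators over the chains containing each node
        avg = np.zeros((n, L))
        for c, xb in zip(chains, xbar):
            avg[c] += xb
        avg /= cover[:, None]
        # Projected subgradient step (cf. the update above):
        #   theta^T_p += alpha_t * (xbar^T_p - avg_p)
        for c, xb, u in zip(chains, xbar, slave_unary):
            u += step * (xb - avg[c])
    # Decode a primal labeling, e.g. from the averaged slave minimizers
    return avg.argmax(axis=1)

# Toy problem: 4-node cycle covered by two chains (all 4 edges covered)
unary = np.array([[0.0, 2.0], [1.5, 0.0], [0.0, 1.0], [2.0, 0.0]])
pairwise = np.array([[0.0, 1.0], [1.0, 0.0]])    # Potts-like edge cost
print(dual_decomposition([[0, 1, 2], [2, 3, 0]], unary, pairwise))
```

When the slaves agree on every shared node, the duality gap is closed and the decoded labeling is optimal; otherwise the averaged indicators give a standard heuristic rounding.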

Fast PD procedure (algorithm figure from the original slides, not preserved in this transcript)

References
[1] N. Komodakis, N. Paragios, and G. Tziritas, "MRF Energy Minimization and Beyond via Dual Decomposition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 3, pp. 531-552, March 2011.
[2] Chaohui Wang, Nikos Komodakis, and Nikos Paragios, "Markov Random Field modeling, inference & learning in computer vision & image understanding: A survey," Computer Vision and Image Understanding, vol. 117, no. 11, November 2013.

Thank You