# MAP Estimation Algorithms in Computer Vision - Part II

MAP Estimation Algorithms in Computer Vision - Part II
M. Pawan Kumar, University of Oxford; Pushmeet Kohli, Microsoft Research

Example: Image Segmentation
E: {0,1}^n → R; 0 → fg, 1 → bg
E(x) = ∑i ci xi + ∑i,j cij xi(1-xj), n = number of pixels
[Figure: input image D]

Example: Image Segmentation
E: {0,1}^n → R; 0 → fg, 1 → bg
E(x) = ∑i ci xi + ∑i,j cij xi(1-xj), n = number of pixels
Unary cost (ci): dark pixels negative, bright pixels positive

Example: Image Segmentation
E: {0,1}^n → R; 0 → fg, 1 → bg
E(x) = ∑i ci xi + ∑i,j cij xi(1-xj), n = number of pixels
Discontinuity cost (cij)

Example: Image Segmentation
E: {0,1}^n → R; 0 → fg, 1 → bg
E(x) = ∑i ci xi + ∑i,j cij xi(1-xj), n = number of pixels
x* = arg min_x E(x): how do we minimize E(x) to find the global minimum x*?

Outline of the Tutorial
- The st-mincut problem
- Connection between st-mincut and energy minimization
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems

The st-Mincut Problem
Graph (V, E, C):
Vertices V = {v1, v2, ..., vn}
Edges E = {(v1, v2), ...}
Costs C = {c(1, 2), ...}
[Figure: example graph with Source, v1, v2, Sink and edge costs 2, 9, 1, 2, 5, 4]

The st-Mincut Problem
What is an st-cut? [Figure: example graph]

What is the cost of an st-cut?
An st-cut (S, T) divides the nodes between source and sink. The cost of an st-cut is the sum of the costs of all edges going from S to T.
[Figure: example cut of cost 16]

What is the st-mincut?
An st-cut (S, T) divides the nodes between source and sink. The cost of an st-cut is the sum of the costs of all edges going from S to T. The st-mincut is the st-cut with the minimum cost.
[Figure: example st-mincut of cost 7]

How to compute the st-mincut?
Solve the dual maximum flow problem: compute the maximum flow between Source and Sink, subject to the constraints
Edges: flow ≤ capacity
Nodes: flow in = flow out
Min-cut/Max-flow Theorem: in every network, the maximum flow equals the cost of the st-mincut.

Augmenting Path Based Algorithms
Maxflow Algorithms (augmenting-path based):
1. Find a path from source to sink with positive capacity.
2. Push the maximum possible flow through this path.
3. Repeat until no path can be found.
[Figure: example graph, Flow = 0]
These algorithms assume non-negative capacities.

Augmenting Path Based Algorithms
Maxflow Algorithms: push 2 units along Source → v1 → Sink. Flow = 0 + 2.
[Figure: residual capacities 2 - 2 = 0 and 5 - 2 = 3]

Augmenting Path Based Algorithms
Maxflow Algorithms: Flow = 2.
[Figure: residual graph]

Augmenting Path Based Algorithms
Maxflow Algorithms: push 4 units along Source → v2 → Sink. Flow = 2 + 4.
[Figure: residual capacities 9 - 4 = 5 and 4 - 4 = 0]

Augmenting Path Based Algorithms
Maxflow Algorithms: Flow = 6.
[Figure: residual graph]

Augmenting Path Based Algorithms
Maxflow Algorithms: push 1 unit along Source → v2 → v1 → Sink. Flow = 6 + 1.
[Figure: residual capacities 1 - 1 = 0 on the cross edge and 2 + 1 = 3 on its reverse]

Augmenting Path Based Algorithms
Maxflow Algorithms: Flow = 7. No augmenting path remains, so the maximum flow is 7, which equals the st-mincut cost.
[Figure: final residual graph]

History of Maxflow Algorithms
n: #nodes, m: #edges, U: maximum edge weight
[Figure: timeline of maxflow algorithms and their complexity bounds]
Augmenting-path and push-relabel algorithms assume non-negative edge weights. [Slide credit: Andrew Goldberg]

Augmenting Path based Algorithms
Ford-Fulkerson: choose any augmenting path.
[Figure: graph with four edges of capacity 1000 and a cross edge of capacity 1 between a1 and a2]

Augmenting Path based Algorithms
Ford-Fulkerson: choose any augmenting path. Bad augmenting paths are those that go through the capacity-1 cross edge.
[Figure: a bad augmenting path Source → a1 → a2 → Sink]

Augmenting Path based Algorithms
Ford-Fulkerson: after pushing 1 unit along a bad path, the residual capacities become 999, 1000, 1000, 999 and the cross edge reverses.

Augmenting Path based Algorithms
n: #nodes, m: #edges
Ford-Fulkerson: alternating bad augmenting paths force 2000 augmentations on this graph!
Worst case complexity: O(m × total flow), a pseudo-polynomial bound (it depends on the flow value).

Augmenting Path based Algorithms
n: #nodes, m: #edges
Dinic: choose the shortest augmenting path.
Worst case complexity: O(m n^2).

Maxflow in Computer Vision
Specialized algorithms exist for vision problems, which typically have grid graphs with low connectivity (m ~ O(n)).
Dual search tree augmenting-path algorithm [Boykov and Kolmogorov PAMI 2004]:
- Finds approximate shortest augmenting paths efficiently
- High worst-case time complexity
- Empirically outperforms other algorithms on vision problems
- Efficient code available on the web

Outline of the Tutorial
- The st-mincut problem
- Connection between st-mincut and energy minimization
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems

St-mincut and Energy Minimization
Minimizing a quadratic pseudoboolean function E(x), i.e. a function of boolean variables with real values:
E: {0,1}^n → R
E(x) = ∑i ci xi + ∑i,j cij xi(1-xj), with cij ≥ 0
Polynomial-time st-mincut algorithms require non-negative edge weights.

So how does this work? Construct a graph such that:
1. Any st-cut corresponds to an assignment of x.
2. The cost of the cut equals the energy of x: E(x).
The st-mincut then yields the minimum-energy solution.

Graph Construction: E(a1,a2)
[Figure: Source (0), nodes a1 and a2, Sink (1)]

Graph Construction: E(a1,a2) = 2a1
[Figure: terminal edge of weight 2 between Source and a1]

Graph Construction: E(a1,a2) = 2a1 + 5ā1
[Figure: terminal edge of weight 5 between a1 and Sink]

Graph Construction: E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2
[Figure: terminal edges of weight 9 (Source to a2) and 4 (a2 to Sink)]

Graph Construction: E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2
[Figure: edge of weight 2 between a1 and a2, cut when a1 = 1 and a2 = 0]

Graph Construction: E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2
[Figure: edge of weight 1 between a1 and a2, cut when a1 = 0 and a2 = 1]

Graph Construction: E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2
Cost of the cut for a1 = 1, a2 = 1 is 11: E(1,1) = 11.
[Figure: the corresponding cut]

Graph Construction: E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2
st-mincut cost = 8, achieved by a1 = 1, a2 = 0: E(1,0) = 8.
[Figure: the minimum cut]
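
The construction can be checked exhaustively: for every assignment, the cut cost must equal the energy. A Python sketch (edge directions are inferred from the cut costs quoted on the slides):

```python
from itertools import product

# Edge list of the construction (direction chosen so the cut pays the term;
# label 0 = source side, label 1 = sink side).
edges = [('s', 'a1', 2),   # 2*a1        : cut when a1 = 1
         ('a1', 't', 5),   # 5*(1-a1)    : cut when a1 = 0
         ('s', 'a2', 9),   # 9*a2        : cut when a2 = 1
         ('a2', 't', 4),   # 4*(1-a2)    : cut when a2 = 0
         ('a2', 'a1', 2),  # 2*a1*(1-a2) : cut when a1 = 1, a2 = 0
         ('a1', 'a2', 1)]  # 1*(1-a1)*a2 : cut when a1 = 0, a2 = 1

def energy(a1, a2):
    return 2*a1 + 5*(1-a1) + 9*a2 + 4*(1-a2) + 2*a1*(1-a2) + (1-a1)*a2

def cut_cost(a1, a2):
    side = {'s': 0, 't': 1, 'a1': a1, 'a2': a2}
    return sum(c for u, v, c in edges if side[u] == 0 and side[v] == 1)

for a1, a2 in product((0, 1), repeat=2):
    assert cut_cost(a1, a2) == energy(a1, a2)
print(cut_cost(1, 1), cut_cost(1, 0))  # 11 8
```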

Energy Function Reparameterization
Two functions E1 and E2 are reparameterizations of each other if E1(x) = E2(x) for all x. For instance, E1(a1) = 1 + 2a1 + 3ā1 and E2(a1) = 3 + ā1:

| a1 | E1 = 1 + 2a1 + 3ā1 | E2 = 3 + ā1 |
|----|--------------------|-------------|
| 0  | 4                  | 4           |
| 1  | 3                  | 3           |
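
The defining property E1(x) = E2(x) for all x is checkable in a couple of lines (a Python sketch of the slide's example):

```python
# Both forms of the example, checked on the whole domain {0,1}.
E1 = lambda a1: 1 + 2*a1 + 3*(1 - a1)
E2 = lambda a1: 3 + (1 - a1)
assert all(E1(a1) == E2(a1) for a1 in (0, 1))  # E1(0)=E2(0)=4, E1(1)=E2(1)=3
```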

Flow and Reparametrization
E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2
[Figure: the corresponding graph]

Flow and Reparametrization
E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2
Push 2 units of flow through a1's terminal edges: 2a1 + 5ā1 = 2(a1 + ā1) + 3ā1 = 2 + 3ā1

Flow and Reparametrization
E(a1,a2) = 2 + 3ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2
[Figure: residual graph with weight 3 on a1's Sink edge]

Flow and Reparametrization
E(a1,a2) = 2 + 3ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2
Push 4 units of flow through a2's terminal edges: 9a2 + 4ā2 = 4(a2 + ā2) + 5a2 = 4 + 5a2

Flow and Reparametrization
E(a1,a2) = 2 + 4 + 3ā1 + 5a2 + 2a1ā2 + ā1a2
[Figure: residual graph]

Flow and Reparametrization
E(a1,a2) = 6 + 3ā1 + 5a2 + 2a1ā2 + ā1a2
[Figure: residual graph]

Flow and Reparametrization
E(a1,a2) = 6 + 3ā1 + 5a2 + 2a1ā2 + ā1a2
Push 2 more units of flow:
3ā1 + 5a2 + 2a1ā2 = 2(ā1 + a2 + a1ā2) + ā1 + 3a2 = 2(1 + ā1a2) + ā1 + 3a2
since F1 = ā1 + a2 + a1ā2 and F2 = 1 + ā1a2 agree everywhere:

| a1 | a2 | F1 | F2 |
|----|----|----|----|
| 0  | 0  | 1  | 1  |
| 0  | 1  | 2  | 2  |
| 1  | 0  | 1  | 1  |
| 1  | 1  | 1  | 1  |

Flow and Reparametrization
E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2
[Figure: residual graph]

Flow and Reparametrization
E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2
No more augmenting paths are possible.
[Figure: residual graph]

Flow and Reparametrization
E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2 (residual graph, all coefficients positive)
The total flow (8) is a lower bound on the optimal energy. Inference of the optimal solution becomes trivial because the bound is tight.

Flow and Reparametrization
E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2 (residual graph, all coefficients positive)
st-mincut cost = 8, achieved by a1 = 1, a2 = 0: E(1,0) = 8. The total flow is a lower bound on the optimal energy; inference of the optimal solution becomes trivial because the bound is tight.
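
Since reparameterization never changes the function, E must equal the pushed flow (8) plus the residual energy everywhere, and the bound is tight at (1, 0). A brute-force Python check:

```python
from itertools import product

def E(a1, a2):          # original energy
    return 2*a1 + 5*(1-a1) + 9*a2 + 4*(1-a2) + 2*a1*(1-a2) + (1-a1)*a2

def residual(a1, a2):   # reparameterized form: E = 8 + residual
    return (1-a1) + 3*a2 + 3*(1-a1)*a2

states = list(product((0, 1), repeat=2))
assert all(E(a1, a2) == 8 + residual(a1, a2) for a1, a2 in states)
assert min(E(a1, a2) for a1, a2 in states) == E(1, 0) == 8
```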

Example: Image Segmentation
E: {0,1}^n → R; 0 → fg, 1 → bg
E(x) = ∑i ci xi + ∑i,j cij xi(1-xj)
x* = arg min_x E(x): how do we minimize E(x) to find the global minimum x*?

How does the code look like?
Graph *g;
for (all pixels p) {
    /* Add a node to the graph */
    nodeID(p) = g->add_node();
    /* Set cost of terminal edges */
    set_weights(nodeID(p), fgCost(p), bgCost(p));
}
for (all adjacent pixels p, q)
    add_weights(nodeID(p), nodeID(q), cost(p, q));
g->compute_maxflow();
label_p = g->is_connected_to_source(nodeID(p));  // label of pixel p (0 or 1)
[Figure: Source (0), Sink (1)]

How does the code look like?
[Figure: graph with terminal edges bgCost(a1), bgCost(a2) from Source and fgCost(a1), fgCost(a2) to Sink, plus n-links cost(p,q); result a1 = bg, a2 = fg]
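
The same recipe can be sketched end to end in Python for a toy 3-pixel "image" (the cost values below are made up for illustration). Instead of calling a maxflow solver, the sketch verifies by brute force that cut cost and energy coincide, so the minimum cut is the minimum-energy labelling:

```python
from itertools import product

fg_cost = [1, 1, 9]       # hypothetical unary costs (label 0 = fg)
bg_cost = [9, 8, 1]       # label 1 = bg
smooth = 2                # Potts n-link weight between neighbours

n = len(fg_cost)
# Edge list (u, v, w): terminal edges plus n-links in both directions.
edges = [('s', p, bg_cost[p]) for p in range(n)] + \
        [(p, 't', fg_cost[p]) for p in range(n)] + \
        [(p, p + 1, smooth) for p in range(n - 1)] + \
        [(p + 1, p, smooth) for p in range(n - 1)]

def energy(x):            # the segmentation energy of the slides
    return sum(fg_cost[p] if x[p] == 0 else bg_cost[p] for p in range(n)) + \
           sum(smooth for p in range(n - 1) if x[p] != x[p + 1])

def cut_cost(x):          # cost of the cut where label 0 = source side
    side = {'s': 0, 't': 1, **{p: x[p] for p in range(n)}}
    return sum(w for u, v, w in edges if side[u] == 0 and side[v] == 1)

labelings = list(product((0, 1), repeat=n))
assert all(cut_cost(x) == energy(x) for x in labelings)
best = min(labelings, key=energy)
print(best, energy(best))  # (0, 0, 1) 5 : first two pixels fg, last bg
```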

Image Segmentation in Video
[Figure: frame at t = 1 with image, flow, and the globally optimal segmentation x*]

Image Segmentation in Video
[Figure: flow and the globally optimal segmentation]

Dynamic Energy Minimization
Minimizing EA to obtain solution SA is a computationally expensive operation, and minimizing the related energy EB for SB from scratch repeats it. Can we do better by recycling solutions?
Boykov & Jolly ICCV'01; Kohli & Torr (ICCV05, PAMI07)

Dynamic Energy Minimization
If EA and EB differ only slightly, reuse the flow via reparameterization: minimizing the simpler residual energy EB* is a much cheaper operation (around a 3x speedup!).
Kohli & Torr (ICCV05, PAMI07)

Dynamic Energy Minimization
Original energy: E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2
Reparameterized energy: E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2
New energy: E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 7a1ā2 + ā1a2
New reparameterized energy: E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2 + 5a1ā2
Boykov & Jolly ICCV'01; Kohli & Torr (ICCV05, PAMI07)

Outline of the Tutorial
- The st-mincut problem
- Connection between st-mincut and energy minimization
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems

Minimizing Energy Functions
General energy functions: NP-hard to minimize (e.g. MAXCUT); only approximate minimization is possible.
Easy energy functions: solvable in polynomial time, e.g. submodular functions (~O(n^6)) and functions defined on trees.
[Figure: space of function minimization problems]

Submodular Set Functions
Let E = {a1, a2, ..., an} be a set (it has 2^|E| subsets). A set function f: 2^E → ℝ is submodular if
f(A) + f(B) ≥ f(A∪B) + f(A∩B) for all A, B ⊆ E
Important property: the sum of two submodular functions is submodular.

Minimizing Submodular Functions
Minimizing general submodular functions: O(n^5 Q + n^6), where Q is the function evaluation time [Orlin, IPCO 2007]
Symmetric submodular functions, E(x) = E(1 - x): O(n^3) [Queyranne 1998]
Quadratic pseudoboolean functions: can be transformed to st-mincut with one node per variable (O(n^3) worst case) and very low empirical running time

Submodular Pseudoboolean Functions
Function defined over boolean vectors x = {x1, x2, ..., xn}. Definition:
- All functions of one boolean variable (f: {0,1} → R) are submodular.
- A function of two boolean variables (f: {0,1}^2 → R) is submodular if f(0,1) + f(1,0) ≥ f(0,0) + f(1,1).
- A general pseudoboolean function f: {0,1}^n → ℝ is submodular if all its projections fp (obtained by fixing all but two variables) are submodular, i.e. fp(0,1) + fp(1,0) ≥ fp(0,0) + fp(1,1).
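
The projection definition translates directly into an exhaustive (exponential-time, illustration-only) checker; `is_submodular` and the example functions are hypothetical names, not part of any library:

```python
from itertools import product

def is_submodular(f, n):
    """Check fp(0,1) + fp(1,0) >= fp(0,0) + fp(1,1) on every projection."""
    for i in range(n):
        for j in range(i + 1, n):
            rest = [k for k in range(n) if k not in (i, j)]
            for vals in product((0, 1), repeat=len(rest)):
                x = [0] * n
                for k, v in zip(rest, vals):
                    x[k] = v
                def fp(a, b):           # projection: fix all but xi, xj
                    x[i], x[j] = a, b
                    return f(tuple(x))
                if fp(0, 1) + fp(1, 0) < fp(0, 0) + fp(1, 1):
                    return False
    return True

qpbf = lambda x: 2*x[0] + 5*(1 - x[0]) + 3*x[0]*(1 - x[1])  # cij = 3 >= 0
assert is_submodular(qpbf, 2)
assert not is_submodular(lambda x: x[0] * x[1], 2)  # rewards (1,1): fails
```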

E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj), with θij(0,1) + θij(1,0) ≥ θij(0,0) + θij(1,1) for all ij,
is equivalent (transformable) to
E(x) = ∑i ci xi + ∑i,j cij xi(1-xj) with cij ≥ 0,
i.e. all submodular QPBFs are st-mincut solvable.

How are they equivalent?
Write A = θij(0,0), B = θij(0,1), C = θij(1,0), D = θij(1,1). The pairwise cost table decomposes as

|        | xj = 0 | xj = 1 |
|--------|--------|--------|
| xi = 0 | A      | B      |
| xi = 1 | C      | D      |

= A (constant) + (C - A) xi (if xi = 1, add C - A) + (D - C) xj (if xj = 1, add D - C) + (B + C - A - D)(1 - xi) xj, that is

θij(xi,xj) = θij(0,0) + (θij(1,0) - θij(0,0)) xi + (θij(1,1) - θij(1,0)) xj + (θij(1,0) + θij(0,1) - θij(0,0) - θij(1,1)) (1 - xi) xj

B + C - A - D ≥ 0 follows from the submodularity of θij, so the last term is a valid non-negative edge weight.

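
The decomposition can be verified numerically for arbitrary table values (a Python sketch; the A, B, C, D values below are arbitrary):

```python
from itertools import product

def decompose(A, B, C, D):
    """Check theta = A + (C-A)xi + (D-C)xj + (B+C-A-D)(1-xi)xj on all
    four states, then return the pairwise edge weight B + C - A - D."""
    theta = {(0, 0): A, (0, 1): B, (1, 0): C, (1, 1): D}
    for xi, xj in product((0, 1), repeat=2):
        rebuilt = A + (C - A)*xi + (D - C)*xj + (B + C - A - D)*(1 - xi)*xj
        assert rebuilt == theta[(xi, xj)]
    return B + C - A - D

# A submodular table (B + C >= A + D) gives a non-negative edge weight.
print(decompose(0, 4, 3, 2))  # 5
```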
For x ∈ {0,1}^n, E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj) with θij(0,1) + θij(1,0) ≥ θij(0,0) + θij(1,1) for all ij is equivalent (transformable) to an st-mincut instance.

Minimizing Non-Submodular Functions
E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj), with θij(0,1) + θij(1,0) < θij(0,0) + θij(1,1) for some ij.
Minimizing general non-submodular functions is NP-hard; the commonly used approach is to solve a relaxation of the problem. [Slide credit: Carsten Rother]

Minimization using Roof-dual Relaxation
The energy splits into unary terms, submodular pairwise terms, and non-submodular pairwise terms. [Slide credit: Carsten Rother]

Minimization using Roof-dual Relaxation
Double the number of variables: for every xi introduce a copy x̄i intended to represent 1 - xi, and rewrite the energy over (x, x̄) (the standard roof-dual construction). [Slide credit: Carsten Rother]

Minimization using Roof-dual Relaxation
Double the number of variables: the rewritten energy separates into a submodular part and a non-submodular part, with the non-submodular terms re-expressed using the copies x̄.

Minimization using Roof-dual Relaxation
Double the number of variables. Property of the problem: the resulting relaxed energy is submodular, so it is solvable using st-mincut (the remaining non-submodular part can be ignored in the relaxation).

Minimization using Roof-dual Relaxation
Double the number of variables. Property of the solution: wherever the two copies are consistent (x̄i = 1 - xi), xi is the optimal label; the remaining variables stay unlabelled (partial optimality).

Recap:
- Exact minimization of submodular QPBFs using graph cuts.
- Obtaining partially optimal solutions of non-submodular QPBFs using graph cuts.

But...
- We need higher order energy functions to model image structure, e.g. Field of Experts [Roth and Black].
- Many problems in computer vision involve multiple labels.
E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj) + ∑c θc(xc), where x ∈ Labels L = {l1, l2, ..., lk} and clique c ⊆ V

Transforming problems into QBFs
Higher order pseudoboolean functions → quadratic pseudoboolean functions
Multi-label functions → pseudoboolean functions

Simple Example using Auxiliary Variables
f(x) = 0 if all xi = 0, C1 otherwise, for x ∈ L = {0,1}^n
min_x f(x) = min_{x, a ∈ {0,1}} C1 a + C1 ∑i ā xi
This turns a higher order submodular function into a quadratic submodular function:
∑ xi = 0: a = 0 (ā = 1) gives f(x) = 0; ∑ xi ≥ 1: a = 1 (ā = 0) gives f(x) = C1
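
The auxiliary-variable identity is easy to confirm by enumerating x and minimizing over a (a Python sketch; C1 = 4 is an arbitrary choice):

```python
from itertools import product

C1 = 4                      # illustrative constant

def f(x):                   # higher-order potential: 0 iff all xi = 0
    return 0 if sum(x) == 0 else C1

def f_min_aux(x):           # minimize the quadratic form over a in {0,1}
    return min(C1 * a + C1 * (1 - a) * sum(x) for a in (0, 1))

assert all(f(x) == f_min_aux(x) for x in product((0, 1), repeat=4))
```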

min_x f(x) = min_{x, a ∈ {0,1}} C1 a + C1 ∑i ā xi
[Figure: cost versus ∑ xi, showing the line C1 ∑ xi (a = 0) and the constant C1 (a = 1)]
The lower envelope of concave functions is concave.

More generally, min_x f(x) = min_{x, a ∈ {0,1}} f1(x) a + f2(x) ā
[Figure: f1, f2 and their lower envelope over ∑ xi]
The lower envelope of concave functions is concave.

Transforming problems into QBFs
Higher order pseudoboolean functions → quadratic pseudoboolean functions
Multi-label functions → pseudoboolean functions

Multi-label to Pseudo-boolean
So what is the problem? Transform the multi-label problem Em(y1, y2, ..., yn), yi ∈ L = {l1, l2, ..., lk}, into a binary label problem Eb(x1, x2, ..., xm), xi ∈ {0,1}, such that (with Y and X the sets of feasible solutions):
1. For each binary solution x ∈ X with finite energy there exists exactly one multi-label solution y ∈ Y, i.e. a one-one encoding function T: X → Y.
2. arg min Em(y) = T(arg min Eb(x))

Multi-label to Pseudo-boolean
Popular encoding scheme [Roy and Cox '98, Ishikawa '03, Schlesinger & Flach '06]
Ishikawa's result: E(y) = ∑i θi(yi) + ∑i,j θij(yi,yj), y ∈ Labels L = {l1, l2, ..., lk}, is exactly solvable when θij(yi,yj) = g(|yi - yj|) for a convex function g.

Multi-label to Pseudo-boolean
Popular encoding scheme [Roy and Cox '98, Ishikawa '03, Schlesinger & Flach '06]
Schlesinger & Flach '06: exact when θij(li+1, lj) + θij(li, lj+1) ≥ θij(li, lj) + θij(li+1, lj+1).
This covers all submodular multi-label functions and is more general than Ishikawa's condition.
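
The Schlesinger-Flach condition can be tested numerically on a small label set; as expected, a convex prior passes while Potts and truncated linear fail (a Python sketch with illustrative function names):

```python
def schlesinger_flach(theta, k):
    """Check theta(l+1,m) + theta(l,m+1) >= theta(l,m) + theta(l+1,m+1)
    for all label pairs on the ordered label set {0, ..., k-1}."""
    return all(theta(a + 1, b) + theta(a, b + 1) >=
               theta(a, b) + theta(a + 1, b + 1)
               for a in range(k - 1) for b in range(k - 1))

convex = lambda a, b: (a - b) ** 2            # Ishikawa-style convex prior
potts = lambda a, b: 0 if a == b else 1       # Potts: not covered
trunc = lambda a, b: min(abs(a - b), 2)       # truncated linear: not covered

assert schlesinger_flach(convex, 5)
assert not schlesinger_flach(potts, 5)
assert not schlesinger_flach(trunc, 5)
```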

Multi-label to Pseudo-boolean
Problems:
- Applicability: only solves a restricted class of energy functions; cannot handle Potts model potentials.
- Computational cost: very high, since the problem size is |variables| × |labels|. Gray level image denoising on a 1 Mpixel image needs ~2.5 × 10^8 graph nodes.

Outline of the Tutorial
- The st-mincut problem
- Connection between st-mincut and energy minimization
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems

St-mincut based Move algorithms
E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj), x ∈ Labels L = {l1, l2, ..., lk}
Commonly used for solving non-submodular multi-label problems. Extremely efficient and produce good solutions, but not exact: they produce local optima.

Move Making Algorithms
[Figure: energy plotted over the solution space]

Move Making Algorithms
[Figure: current solution, search neighbourhood, and optimal move in the solution space]

Computing the Optimal Move
Key property: a bigger move space allows better solutions, but makes finding the optimal move harder.
[Figure: current solution xc, search neighbourhood, and move space (t) in the solution space]

Moves using Graph Cuts: Expansion and Swap move algorithms [Boykov, Veksler and Zabih, PAMI 2001]
- Make a series of changes to the solution (moves).
- Each move results in a solution with smaller energy.
Move space (t): 2^N for N variables; space of solutions (x): L^N for L labels.

Moves using Graph Cuts: Expansion and Swap move algorithms [Boykov, Veksler and Zabih, PAMI 2001]
- Make a series of changes to the solution (moves); each move results in a solution with smaller energy.
How to minimize move functions? Construct a move function, minimize it to get the optimal move, and move to the new solution.

General Binary Moves
x = t x1 + (1 - t) x2: the new solution interpolates between the current solution x1 and a second solution x2.
Em(t) = E(t x1 + (1 - t) x2): minimize over the move variables t to get the optimal move.
The move energy is a submodular QPBF, so exact minimization is possible. [Boykov, Veksler and Zabih, PAMI 2001]

Swap Move
Variables labeled α, β can swap their labels. [Boykov, Veksler, Zabih]

Swap Move
[Figure: swapping the Sky and House labels; Tree and Ground unchanged] [Boykov, Veksler, Zabih]

Swap Move
Variables labeled α, β can swap their labels. The move energy is submodular if:
- Unary potentials: arbitrary
- Pairwise potentials: semimetric, i.e. θij(la, lb) ≥ 0 and θij(la, lb) = 0 iff la = lb
Examples: Potts model, truncated convex. [Boykov, Veksler, Zabih]

Expansion Move
Variables either take the label α or retain their current label. [Boykov, Veksler, Zabih]

Expansion Move
[Figure: initialize with Tree, then expand House, expand Sky, expand Ground] [Boykov, Veksler, Zabih]

Expansion Move
Variables either take the label α or retain their current label. The move energy is submodular if:
- Unary potentials: arbitrary
- Pairwise potentials: metric, i.e. semimetric plus the triangle inequality θij(la, lb) + θij(lb, lc) ≥ θij(la, lc)
Examples: Potts model, truncated linear. Cannot solve truncated quadratic. [Boykov, Veksler, Zabih]
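
The metric/semimetric distinction is mechanical to check (a Python sketch over a 5-label set): truncated linear satisfies the triangle inequality, truncated quadratic does not:

```python
from itertools import product

labels = range(5)

def is_semimetric(theta):
    return all(theta(a, b) == theta(b, a) and theta(a, b) >= 0 and
               (theta(a, b) == 0) == (a == b)
               for a, b in product(labels, repeat=2))

def is_metric(theta):     # semimetric + triangle inequality
    return is_semimetric(theta) and all(
        theta(a, b) + theta(b, c) >= theta(a, c)
        for a, b, c in product(labels, repeat=3))

trunc_linear = lambda a, b: min(abs(a - b), 2)
trunc_quad = lambda a, b: min((a - b) ** 2, 4)

assert is_metric(trunc_linear)                                  # expansion ok
assert is_semimetric(trunc_quad) and not is_metric(trunc_quad)  # swap only
```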

General Binary Moves
x = t x1 + (1 - t) x2 (new solution from a first and a second solution); minimize over the move variables t.

| Move type | First solution | Second solution | Guarantee |
|-----------|----------------|-----------------|-----------|
| Expansion | Old solution   | All α           | Metric    |
| Fusion    | Any solution   | Any solution    | Move functions can be non-submodular! |

Solving Continuous Problems using Fusion Move
x = t x1 + (1 - t) x2, where x1 and x2 can be continuous.
Optical flow example: fuse the solution x1 from method 1 with the solution x2 from method 2 into the final solution x. (Lempitsky et al. CVPR08, Woodford et al. CVPR08)

Range Moves
Move variables can be multi-label: x = (t==1) x1 + (t==2) x2 + ... + (t==k) xk
The optimal move is found using the Ishikawa construction. Useful for minimizing energies with truncated convex pairwise potentials, θij(yi, yj) = min(|yi - yj|, T). [O. Veksler, CVPR 2007]

Move Algorithms for Solving Higher Order Energies
E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj) + ∑c θc(xc), x ∈ Labels L = {l1, l2, ..., lk}, clique c ⊆ V
Higher order functions give rise to higher order move energies. Move energies for certain classes of higher order energies can be transformed to QPBFs. [Kohli, Kumar and Torr, CVPR07; Kohli, Ladicky and Torr, CVPR08]

Outline of the Tutorial
- The st-mincut problem
- Connection between st-mincut and energy minimization
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems

Solving Mixed Programming Problems
x: binary image segmentation (xi ∈ {0,1}); ω: non-local parameter (lives in some large set Ω)
E(x, ω) = C(ω) + ∑i θi(ω, xi) + ∑i,j θij(ω, xi, xj), with a constant term, unary potentials, and pairwise potentials ≥ 0
[Figure: ω encodes pose; the stickman model provides a rough shape prior through θi(ω, xi)]

Open Problems: Characterization of Problems Solvable using st-mincut
Which functions can be transformed to submodular QBFs?
[Figure: submodular functions versus the st-mincut-equivalent class]

Minimizing General Higher Order Functions
We saw how simple higher order potentials can be solved. How can more sophisticated higher order potentials be solved?

Summary
A labelling problem is converted into a submodular quadratic pseudoboolean function, either by an exact transformation (global optimum) or by a relaxed transformation (partially optimal), possibly as the sub-problem of a move making algorithm, and then solved via st-mincut.

Thanks. Questions?

Use of Higher order Potentials
E(x1, x2, x3) = θ12(x1, x2) + θ23(x2, x3), with θij(xi, xj) = 0 if xi = xj and C otherwise (C = 1 in this example)
E(6,6,6) = 0 + 0 = 0
E(6,7,7) = 1 + 0 = 1
E(6,7,8) = 1 + 1 = 2
The pairwise potentials penalize slanted planar surfaces.
[Figure: disparity labels over pixels P1, P2, P3; stereo example from Woodford et al. CVPR 2008]

Computing the Optimal Move
Transformation function: T(xc, t) = xn = xc + t
Move energy: Em(t) = E(T(xc, t))
Optimal move: t* = arg min_t Em(t)
[Figure: energy E(x) over the solution space, with current solution xc, search neighbourhood, and optimal move]