
1
**MAP Estimation Algorithms in Computer Vision - Part II**

M. Pawan Kumar, University of Oxford
Pushmeet Kohli, Microsoft Research

2
**Example: Image Segmentation**

E: {0,1}^n → ℝ, with 0 → fg, 1 → bg and n = number of pixels.

E(x) = ∑i ci xi + ∑i,j cij xi (1 - xj)

Image (D)

3
**Example: Image Segmentation**

E: {0,1}^n → ℝ, with 0 → fg, 1 → bg and n = number of pixels.

E(x) = ∑i ci xi + ∑i,j cij xi (1 - xj)

Unary cost (ci): dark pixels → negative, bright pixels → positive

4
**Example: Image Segmentation**

E: {0,1}^n → ℝ, with 0 → fg, 1 → bg and n = number of pixels.

E(x) = ∑i ci xi + ∑i,j cij xi (1 - xj)

Discontinuity cost (cij)

5
**Example: Image Segmentation**

E: {0,1}^n → ℝ, with 0 → fg, 1 → bg and n = number of pixels.

E(x) = ∑i ci xi + ∑i,j cij xi (1 - xj)

x* = arg min_x E(x) is the global minimum. How do we minimize E(x)?
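As a sanity check, the energy above can be evaluated and minimized by brute force on a toy problem (the three-pixel costs below are made-up illustration values, not from the slides):

```python
from itertools import product

def energy(x, c, cpair):
    """E(x) = sum_i c_i x_i + sum_{i,j} c_ij x_i (1 - x_j)."""
    e = sum(ci * xi for ci, xi in zip(c, x))
    e += sum(cij * x[i] * (1 - x[j]) for (i, j), cij in cpair.items())
    return e

# Toy 3-pixel problem (costs made up for illustration)
c = [-2, -1, 3]                   # unary costs: negative favours label 1
cpair = {(0, 1): 2, (1, 2): 2}    # discontinuity costs between neighbours
x_star = min(product((0, 1), repeat=3), key=lambda x: energy(x, c, cpair))
print(x_star, energy(x_star, c, cpair))  # → (1, 1, 0) -1
```

Brute force is exponential in n, which is exactly why the rest of the tutorial maps this minimization to an st-mincut.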

6
**Outline of the Tutorial**

- The st-mincut problem
- Connection between st-mincut and energy minimization?
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems


8
**The st-Mincut Problem**

Graph (V, E, C): Vertices V = {v1, v2, ..., vn}; Edges E = {(v1, v2), ...}; Costs C = {c(1, 2), ...}

Example graph used throughout: Source → v1 (2), Source → v2 (9), v1 → v2 (2), v2 → v1 (1), v1 → Sink (5), v2 → Sink (4)

9
The st-Mincut Problem: What is an st-cut? (example graph shown)

10
**What is the cost of an st-cut?**

An st-cut (S, T) divides the nodes between source and sink. The cost of an st-cut is the sum of the costs of all edges going from S to T. In the example graph, the cut S = {Source, v1} costs 9 + 2 + 5 = 16.

11
**What is the st-mincut?**

An st-cut (S, T) divides the nodes between source and sink; its cost is the sum of the costs of all edges going from S to T. The st-mincut is the st-cut with the minimum cost: in the example graph it costs 7 (S = {Source, v2}, cutting Source → v1, v2 → v1 and v2 → Sink: 2 + 1 + 4).

12
**How to compute the st-mincut?**

Solve the dual maximum-flow problem: compute the maximum flow between Source and Sink, subject to the constraints

- Edges: flow ≤ capacity
- Nodes: flow in = flow out

Min-cut/Max-flow Theorem: in every network, the maximum flow equals the cost of the st-mincut.

13
**Augmenting Path Based Algorithms**

Maxflow algorithms (flow = 0):

1. Find a path from source to sink with positive capacity
2. Push the maximum possible flow through this path
3. Repeat until no path can be found

These algorithms assume non-negative capacities.


15
**Augmenting Path Based Algorithms**

Flow = 0 + 2: push 2 units along Source → v1 → Sink (Source → v1: 2 - 2 = 0, v1 → Sink: 5 - 2 = 3).

16
**Augmenting Path Based Algorithms**

Flow = 2. Residual capacities: Source → v2: 9, v2 → v1: 1, v1 → v2: 2, v1 → Sink: 3, v2 → Sink: 4.


19
**Augmenting Path Based Algorithms**

Flow = 2 + 4: push 4 units along Source → v2 → Sink (Source → v2: 9 - 4 = 5, v2 → Sink: 4 - 4 = 0).

20
**Augmenting Path Based Algorithms**

Flow = 6. Residual capacities: Source → v2: 5, v2 → v1: 1, v1 → v2: 2, v1 → Sink: 3.


22
**Augmenting Path Based Algorithms**

Flow = 6 + 1: push 1 unit along Source → v2 → v1 → Sink (Source → v2: 5 - 1 = 4, v2 → v1: 1 - 1 = 0, v1 → Sink: 3 - 1 = 2, reverse residual v1 → v2: 2 + 1 = 3).

23
**Augmenting Path Based Algorithms**

Flow = 7. No augmenting path remains: the maximum flow is 7, equal to the st-mincut cost.
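The loop traced above is a plain augmenting-path maxflow. A minimal sketch of the Edmonds-Karp variant (BFS finds a shortest positive-capacity path), run on the example graph from these slides, with node names matching the figures:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly push flow along a BFS-shortest augmenting path."""
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}   # residual capacities
    for u in capacity:
        for v in capacity[u]:
            res.setdefault(v, {}).setdefault(u, 0)          # reverse residual edges
    flow = 0
    while True:
        parent = {s: None}                                  # BFS for a positive-capacity path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in res[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow          # no augmenting path left: flow = cost of st-mincut
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)               # bottleneck capacity
        for u, v in path:
            res[u][v] -= push    # consume forward capacity
            res[v][u] += push    # grow reverse residual capacity
        flow += push

# The example graph from the slides
cap = {'s': {'v1': 2, 'v2': 9},
       'v1': {'v2': 2, 't': 5},
       'v2': {'v1': 1, 't': 4}}
print(max_flow(cap, 's', 't'))  # → 7
```

The reverse residual edges are what allow a later path to "undo" flow pushed earlier, which is exactly the reparameterization view developed later in the tutorial.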


25
**History of Maxflow Algorithms**

Notation: n = #nodes, m = #edges, U = maximum edge weight. Augmenting-path and push-relabel algorithms assume non-negative edge weights. [Slide credit: Andrew Goldberg]


27
**Augmenting Path based Algorithms**

Ford-Fulkerson: choose any augmenting path. Example graph: Source → a1 (1000), Source → a2 (1000), a1 → a2 (1), a1 → Sink (1000), a2 → Sink (1000).

28
**Augmenting Path based Algorithms**

Ford-Fulkerson: choose any augmenting path. A bad choice: Source → a1 → a2 → Sink, which has bottleneck capacity 1.


30
**Augmenting Path based Algorithms**

Ford-Fulkerson: choose any augmenting path. After pushing 1 unit along the bad path: Source → a1: 999, a2 → Sink: 999, with a residual edge of capacity 1 between a1 and a2.

31
**Augmenting Path based Algorithms**

n: #nodes, m: #edges. Ford-Fulkerson: choose any augmenting path. If the algorithm keeps alternating between the two bad paths, it performs 2000 augmentations! Worst-case complexity: O(m × total_flow), a pseudo-polynomial bound that depends on the flow value.

32
**Augmenting Path based Algorithms**

n: #nodes, m: #edges. Dinic: choose the shortest augmenting path. Worst-case complexity: O(m n^2), polynomial and independent of the flow value.


34
**Maxflow in Computer Vision**

Specialized algorithms exist for vision problems: grid graphs with low connectivity (m ~ O(n)). The dual-search-tree augmenting path algorithm [Boykov and Kolmogorov, PAMI 2004] finds approximate shortest augmenting paths efficiently; it has a high worst-case time complexity but empirically outperforms other algorithms on vision problems. Efficient code is available on the web.

35
**Outline of the Tutorial**

- The st-mincut problem
- Connection between st-mincut and energy minimization?
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems

36
**St-mincut and Energy Minimization**

Minimizing a Quadratic Pseudoboolean function E(x). Pseudoboolean = a real-valued function of boolean variables, E: {0,1}^n → ℝ.

E(x) = ∑i ci xi + ∑i,j cij xi (1 - xj), with cij ≥ 0

Polynomial-time st-mincut algorithms require non-negative edge weights.

37
**So how does this work?**

Construct a graph such that:

1. Any st-cut corresponds to an assignment of x
2. The cost of the cut equals the energy of that assignment: E(x)

Then the st-mincut gives the minimum-energy solution.

38
Graph Construction: E(a1,a2). Nodes a1, a2 sit between Source (label 0) and Sink (label 1).

39
Graph Construction: E(a1,a2) = 2a1. The term 2a1 becomes the edge Source → a1 with cost 2.

40
Graph Construction: E(a1,a2) = 2a1 + 5ā1. The term 5ā1 becomes the edge a1 → Sink with cost 5.

41
**Graph Construction**

E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2. Terminal edges: Source → a1 (2), Source → a2 (9), a1 → Sink (5), a2 → Sink (4).

42
**Graph Construction**

E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2. The pairwise term adds an edge of cost 2 between a1 and a2, cut exactly when a1 = 1 and a2 = 0.

43
**Graph Construction**

E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2. The term ā1a2 adds the opposite-direction edge of cost 1 between a1 and a2, cut exactly when a1 = 0 and a2 = 1. The construction is complete.


45
**Graph Construction**

E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2. Example cut: putting both nodes on the sink side (a1 = 1, a2 = 1) cuts the two source edges, so the cost of the cut is 2 + 9 = 11 = E(1,1).

46
**Graph Construction**

E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2. The st-mincut has cost 8 and corresponds to a1 = 1, a2 = 0: E(1,0) = 8, the global minimum.
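Enumerating all four assignments confirms that the st-mincut value 8 at (a1, a2) = (1, 0) is the global minimum of this energy:

```python
from itertools import product

def E(a1, a2):
    """E(a1,a2) = 2a1 + 5(1-a1) + 9a2 + 4(1-a2) + 2a1(1-a2) + (1-a1)a2."""
    return 2*a1 + 5*(1-a1) + 9*a2 + 4*(1-a2) + 2*a1*(1-a2) + (1-a1)*a2

best = min(product((0, 1), repeat=2), key=lambda x: E(*x))
print(best, E(*best))  # → (1, 0) 8
```

The four energies are E(0,0) = 9, E(0,1) = 15, E(1,0) = 8, E(1,1) = 11, matching the cut costs shown on the slides.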

47
**Energy Function Reparameterization**

Two functions E1 and E2 are reparameterizations of each other if E1(x) = E2(x) for all x. For instance, E1(a1) = 1 + 2a1 + 3ā1 and E2(a1) = 3 + ā1: both evaluate to 4 at a1 = 0 and to 3 at a1 = 1.
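The slide's example is easy to verify exhaustively (this two-line check is ours, not part of the original deck):

```python
# The slide's reparameterization example, checked at both values of a1
E1 = lambda a1: 1 + 2*a1 + 3*(1 - a1)
E2 = lambda a1: 3 + (1 - a1)
print([E1(a) for a in (0, 1)], [E2(a) for a in (0, 1)])  # → [4, 3] [4, 3]
```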

48
**Flow and Reparametrization**

E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2 (the graph constructed above).

49
**Flow and Reparametrization**

E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2. Pushing flow reparameterizes the energy: 2a1 + 5ā1 = 2(a1 + ā1) + 3ā1 = 2 + 3ā1.

50
**Flow and Reparametrization**

E(a1,a2) = 2 + 3ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2. Pushing 2 units saturated Source → a1 and left a1 → Sink with residual 3 (2a1 + 5ā1 = 2(a1 + ā1) + 3ā1 = 2 + 3ā1).

51
**Flow and Reparametrization**

E(a1,a2) = 2 + 3ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2. Similarly for a2's terminal edges: 9a2 + 4ā2 = 4(a2 + ā2) + 5a2 = 4 + 5a2.

52
**Flow and Reparametrization**

E(a1,a2) = 2 + 3ā1 + 5a2 + 4 + 2a1ā2 + ā1a2, after pushing 4 units through a2's terminal edges.

53
**Flow and Reparametrization**

E(a1,a2) = 6 + 3ā1 + 5a2 + 2a1ā2 + ā1a2. Total flow pushed so far: 6.


55
**Flow and Reparametrization**

E(a1,a2) = 6 + 3ā1 + 5a2 + 2a1ā2 + ā1a2. Push flow through the pairwise edge: 3ā1 + 5a2 + 2a1ā2 = 2(ā1 + a2 + a1ā2) + ā1 + 3a2, and since ā1 + a2 + a1ā2 = 1 + ā1a2 for all (a1, a2), this equals 2(1 + ā1a2) + ā1 + 3a2.

56
**Flow and Reparametrization**

E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2, using 3ā1 + 5a2 + 2a1ā2 = 2(1 + ā1a2) + ā1 + 3a2.

57
**Flow and Reparametrization**

E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2. No more augmenting paths are possible.

58
**Flow and Reparametrization**

E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2. The residual graph has only positive coefficients; the total flow (the constant 8) is a lower bound on the optimal solution. Inference of the optimal solution becomes trivial because the bound is tight.

59
**Flow and Reparametrization**

E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2. st-mincut cost = 8, achieved by a1 = 1, a2 = 0: E(1,0) = 8. The total flow bounds the optimal solution, and inference is trivial because the bound is tight.

60
**Example: Image Segmentation**

E: {0,1}^n → ℝ, with 0 → fg, 1 → bg.

E(x) = ∑i ci xi + ∑i,j cij xi (1 - xj)

x* = arg min_x E(x) is the global minimum. How do we minimize E(x)?

61
**What does the code look like?**

```cpp
Graph *g;

for (all pixels p) {
    /* Add a node to the graph */
    nodeID(p) = g->add_node();

    /* Set cost of terminal edges */
    set_weights(nodeID(p), fgCost(p), bgCost(p));
}

for (all adjacent pixels p, q)
    add_weights(nodeID(p), nodeID(q), cost(p, q));

g->compute_maxflow();

/* The label of pixel p (0 or 1) */
label_p = g->is_connected_to_source(nodeID(p));
```

Graph structure: Source (0) connects to each pixel node with weight bgCost(p); each pixel node connects to Sink (1) with weight fgCost(p); adjacent pixels p, q share an edge of weight cost(p, q). In the two-pixel example, the mincut assigns a1 = bg and a2 = fg.

65
**Image Segmentation in Video**

Figure (frame t = 1): image, flow, and the globally optimal segmentation x* of E(x), obtained from the st-cut over n-links.

66
**Image Segmentation in Video**

Figure: flow and the correct, globally optimal segmentation for the next frame.

67
**Dynamic Energy Minimization**

Minimizing EA yields solution SA; minimizing EB yields SB. Each minimization is a computationally expensive operation. Recycling solutions: can we do better? [Boykov & Jolly ICCV'01; Kohli & Torr ICCV05, PAMI07]

68
**Dynamic Energy Minimization**

Minimizing EA yields SA. Reuse the flow: reparameterization produces a simpler energy EB* encoding the differences between A and B, which are typically small and similar. Minimizing EB* is a much cheaper operation than minimizing EB from scratch, giving roughly a 3-times speedup! [Kohli & Torr ICCV05, PAMI07]

69
**Dynamic Energy Minimization**

Original energy: E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 2a1ā2 + ā1a2
Reparameterized energy: E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2
New energy: E(a1,a2) = 2a1 + 5ā1 + 9a2 + 4ā2 + 7a1ā2 + ā1a2
New reparameterized energy: E(a1,a2) = 8 + ā1 + 3a2 + 3ā1a2 + 5a1ā2
[Boykov & Jolly ICCV'01; Kohli & Torr ICCV05, PAMI07]

70
**Outline of the Tutorial**

- The st-mincut problem
- Connection between st-mincut and energy minimization?
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems

71
**Minimizing Energy Functions**

General energy functions are NP-hard to minimize (e.g. MAXCUT); only approximate minimization is possible. Easy energy functions are solvable in polynomial time: submodular functions (~O(n^6)) and functions defined on trees. (Figure: the space of function minimization problems.)

72
**Submodular Set Functions**

Let E = {a1, a2, ..., an} be a set. A set function f: 2^E → ℝ assigns a real value to each of the 2^|E| subsets of E.

73
**Submodular Set Functions**

Let E = {a1, a2, ..., an} be a set. A set function f: 2^E → ℝ (2^|E| = number of subsets of E) is submodular if

f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ E

Important property: the sum of two submodular functions is submodular.

74
**Minimizing Submodular Functions**

- General submodular functions: O(n^5 Q + n^6), where Q is the function evaluation time [Orlin, IPCO 2007]
- Symmetric submodular functions (E(x) = E(1 - x)): O(n^3) [Queyranne 1998]
- Quadratic pseudoboolean functions: can be transformed to st-mincut with one node per variable, O(n^3) complexity and very low empirical running time

75
**Submodular Pseudoboolean Functions**

Function defined over boolean vectors x = {x1, x2, ..., xn}. Definition: every function of one boolean variable (f: {0,1} → ℝ) is submodular. A function of two boolean variables (f: {0,1}^2 → ℝ) is submodular if f(0,1) + f(1,0) ≥ f(0,0) + f(1,1). A general pseudoboolean function f: {0,1}^n → ℝ is submodular if all its projections fp onto two variables are submodular, i.e. fp(0,1) + fp(1,0) ≥ fp(0,0) + fp(1,1).
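The pairwise submodularity test is a one-line check. A small sketch (the helper name `pairwise_submodular` is ours, not from the slides):

```python
def pairwise_submodular(theta):
    """Check theta(0,1) + theta(1,0) >= theta(0,0) + theta(1,1) for a 2-variable table."""
    return theta[(0, 1)] + theta[(1, 0)] >= theta[(0, 0)] + theta[(1, 1)]

potts = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # penalizes disagreement: submodular
anti  = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}   # rewards disagreement: not submodular
print(pairwise_submodular(potts), pairwise_submodular(anti))  # → True False
```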

76
**Quadratic Submodular Pseudoboolean Functions**

E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj) is submodular if, for all i,j: θij(0,1) + θij(1,0) ≥ θij(0,0) + θij(1,1).

77
**Quadratic Submodular Pseudoboolean Functions**

E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj), with θij(0,1) + θij(1,0) ≥ θij(0,0) + θij(1,1) for all i,j, is equivalent (transformable) to E(x) = ∑i ci xi + ∑i,j cij xi(1-xj) with cij ≥ 0. That is, all submodular QPBFs are st-mincut solvable.

78
**How are they equivalent?**

Write A = θij(0,0), B = θij(0,1), C = θij(1,0), D = θij(1,1). Then

θij(xi,xj) = θij(0,0) + (θij(1,0) - θij(0,0)) xi + (θij(1,1) - θij(1,0)) xj + (θij(1,0) + θij(0,1) - θij(0,0) - θij(1,1)) (1-xi) xj

i.e. θij = A + (C-A) xi + (D-C) xj + (B+C-A-D)(1-xi) xj: the constant A, a term C-A added if xi = 1, a term D-C added if xj = 1, and a pairwise term. B+C-A-D ≥ 0 follows from the submodularity of θij.
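The decomposition can be verified for all four assignments; the cost table (A, B, C, D) below is an arbitrary submodular example, not from the slides:

```python
def decompose(A, B, C, D):
    """theta(xi,xj) = A + (C-A) xi + (D-C) xj + (B+C-A-D) (1-xi) xj."""
    return lambda xi, xj: A + (C - A)*xi + (D - C)*xj + (B + C - A - D)*(1 - xi)*xj

A, B, C, D = 0, 3, 2, 1        # arbitrary submodular table: B + C >= A + D
theta = decompose(A, B, C, D)
print([theta(0, 0), theta(0, 1), theta(1, 0), theta(1, 1)])  # → [0, 3, 2, 1]
```

Each summand maps directly to an edge of the graph construction shown earlier.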

79
**How are they equivalent?**

A = θij (0,0) B = θij(0,1) C = θij (1,0) D = θij (1,1) xj A B C D C-A D-C B+C-A-D xi = A + + + 1 1 1 1 if x1=1 add C-A if x2 = 1 add D-C θij (xi,xj) = θij(0,0) + (θij(1,0)-θij(0,0)) xi + (θij(1,0)-θij(0,0)) xj + (θij(1,0) + θij(0,1) - θij(0,0) - θij(1,1)) (1-xi) xj B+C-A-D 0 is true from the submodularity of θij

80
**How are they equivalent?**

A = θij (0,0) B = θij(0,1) C = θij (1,0) D = θij (1,1) xj A B C D C-A D-C B+C-A-D xi = A + + + 1 1 1 1 if x1=1 add C-A if x2 = 1 add D-C θij (xi,xj) = θij(0,0) + (θij(1,0)-θij(0,0)) xi + (θij(1,0)-θij(0,0)) xj + (θij(1,0) + θij(0,1) - θij(0,0) - θij(1,1)) (1-xi) xj B+C-A-D 0 is true from the submodularity of θij

81
**How are they equivalent?**

A = θij (0,0) B = θij(0,1) C = θij (1,0) D = θij (1,1) xj A B C D C-A D-C B+C-A-D xi = A + + + 1 1 1 1 if x1=1 add C-A if x2 = 1 add D-C θij (xi,xj) = θij(0,0) + (θij(1,0)-θij(0,0)) xi + (θij(1,0)-θij(0,0)) xj + (θij(1,0) + θij(0,1) - θij(0,0) - θij(1,1)) (1-xi) xj B+C-A-D 0 is true from the submodularity of θij

82
**How are they equivalent?**

A = θij (0,0) B = θij(0,1) C = θij (1,0) D = θij (1,1) xj A B C D C-A D-C B+C-A-D xi = A + + + 1 1 1 1 if x1=1 add C-A if x2 = 1 add D-C θij (xi,xj) = θij(0,0) + (θij(1,0)-θij(0,0)) xi + (θij(1,0)-θij(0,0)) xj + (θij(1,0) + θij(0,1) - θij(0,0) - θij(1,1)) (1-xi) xj B+C-A-D 0 is true from the submodularity of θij

83
**Quadratic Submodular Pseudoboolean Functions**

For x in {0,1}^n, E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj) with θij(0,1) + θij(1,0) ≥ θij(0,0) + θij(1,1) for all i,j is equivalent (transformable) to an st-mincut problem.

84
**Minimizing Non-Submodular Functions**

E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj), with θij(0,1) + θij(1,0) < θij(0,0) + θij(1,1) for some i,j. Minimizing general non-submodular functions is NP-hard; a commonly used method is to solve a relaxation of the problem. [Slide credit: Carsten Rother]

85
**Minimization using Roof-dual Relaxation**

The energy splits into unary terms, submodular pairwise terms, and non-submodular pairwise terms. [Slide credit: Carsten Rother]

86
**Minimization using Roof-dual Relaxation**

Double the number of variables: each variable xi gets a complementary copy x̄i representing 1 - xi. [Slide credit: Carsten Rother]

87
**Minimization using Roof-dual Relaxation**

Double the number of variables: the terms of the original energy are rewritten over xi and x̄i, separating the submodular and non-submodular parts.

88
**Minimization using Roof-dual Relaxation**

Double the number of variables. Property of the problem: the relaxed energy over (x, x̄) is submodular, so it is solvable using st-mincut (the constraint x̄ = 1 - x is ignored in the relaxation).

89
**Minimization using Roof-dual Relaxation**

Double the number of variables. Property of the solution (persistency): wherever the relaxed solution satisfies x̄i = 1 - xi, the value of xi is the optimal label.

90
**Recap**

- Exact minimization of submodular QBFs using graph cuts
- Obtaining partially optimal solutions of non-submodular QBFs using graph cuts

91
**E(x) = ∑ θi(xi) + ∑ θij(xi,xj) + ∑ θc(xc)**

But:

- Higher order energy functions are needed to model image structure, e.g. Field of Experts [Roth and Black]
- Many problems in computer vision involve multiple labels

E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj) + ∑c θc(xc), with x ϵ L = {l1, l2, ..., lk} and cliques c ⊆ V.

92
**Transforming problems into QBFs**

Higher order pseudoboolean functions → quadratic pseudoboolean functions. Multi-label functions → pseudoboolean functions.


94
**Higher order to Quadratic**

Simple example using auxiliary variables:

f(x) = 0 if all xi = 0, and C1 otherwise, for x ϵ {0,1}^n (a higher order submodular function)

min_x f(x) = min_{x, a ϵ {0,1}} C1 a + C1 ā ∑ xi (a quadratic submodular function)

If ∑xi = 0 the optimal choice is a = 0 (ā = 1), giving f(x) = 0; if ∑xi ≥ 1 it is a = 1 (ā = 0), giving f(x) = C1.
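The auxiliary-variable construction is easy to verify by enumeration: minimizing the quadratic function over a recovers f(x) for every x (C1 = 5 is an arbitrary choice of ours):

```python
from itertools import product

C1 = 5  # arbitrary penalty, for illustration only

def f(x):
    """Higher order term: 0 if all xi = 0, C1 otherwise."""
    return 0 if sum(x) == 0 else C1

def f_quad(x, a):
    """Quadratic form with auxiliary variable a: C1*a + C1*(1-a)*sum(xi)."""
    return C1*a + C1*(1 - a)*sum(x)

# Minimizing over the auxiliary variable reproduces f on every x
assert all(min(f_quad(x, a) for a in (0, 1)) == f(x)
           for x in product((0, 1), repeat=4))
```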

95
**Higher order to Quadratic**

min_x f(x) = min_{x, a ϵ {0,1}} C1 a + C1 ā ∑xi. (Figure: cost as a function of ∑xi: the line C1 ∑xi and the constant C1.)

96
**Higher order to Quadratic**

min_x f(x) = min_{x, a ϵ {0,1}} C1 a + C1 ā ∑xi. The two linear costs (a = 0: C1 ∑xi; a = 1: C1) form a lower envelope, and the lower envelope of concave functions is concave.

97
**Higher order to Quadratic**

More generally, min_x f(x) = min_{x, a ϵ {0,1}} f1(x) a + f2(x) ā: the minimization over a selects the lower envelope of f1 and f2, and the lower envelope of concave functions is concave.


99
**Transforming problems into QBFs**

Higher order pseudoboolean functions → quadratic pseudoboolean functions. Multi-label functions → pseudoboolean functions.

100
**Multi-label to Pseudo-boolean**

So what is the problem? Transform the multi-label problem Em(y1, y2, ..., yn), yi ϵ L = {l1, l2, ..., lk}, into a binary-label problem Eb(x1, x2, ..., xm), xi ϵ {0,1}, such that, with Y and X the sets of feasible solutions:

1. For each binary solution x ϵ X with finite energy there exists exactly one multi-label solution y ϵ Y (a one-to-one encoding function T: X → Y)
2. arg min Em(y) = T(arg min Eb(x))

101
**Multi-label to Pseudo-boolean**

Popular encoding scheme [Roy and Cox ’98, Ishikawa ’03, Schlesinger & Flach ’06]

102
**Multi-label to Pseudo-boolean**

Popular encoding scheme [Roy and Cox '98, Ishikawa '03, Schlesinger & Flach '06]. Ishikawa's result: E(y) = ∑i θi(yi) + ∑i,j θij(yi,yj), y ϵ L = {l1, l2, ..., lk}, can be minimized exactly when each pairwise term is a convex function of the label difference: θij(yi,yj) = g(|yi - yj|) with g convex.

103
**Multi-label to Pseudo-boolean**

Popular encoding scheme [Roy and Cox '98, Ishikawa '03, Schlesinger & Flach '06]. Schlesinger & Flach '06: E(y) = ∑i θi(yi) + ∑i,j θij(yi,yj), y ϵ L = {l1, l2, ..., lk}, can be minimized exactly when θij(la+1, lb) + θij(la, lb+1) ≥ θij(la, lb) + θij(la+1, lb+1) for consecutive labels. This covers all submodular multi-label functions and is more general than Ishikawa's condition.
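The Schlesinger-Flach condition is easy to test for a given pairwise potential. In this sketch (`multilabel_submodular` is our helper name), a convex potential satisfies it while the Potts potential does not:

```python
def multilabel_submodular(theta, labels):
    """Schlesinger-Flach test over consecutive label pairs:
    theta(l[a+1], l[b]) + theta(l[a], l[b+1]) >= theta(l[a], l[b]) + theta(l[a+1], l[b+1])."""
    L = list(labels)
    return all(theta(L[a+1], L[b]) + theta(L[a], L[b+1])
               >= theta(L[a], L[b]) + theta(L[a+1], L[b+1])
               for a in range(len(L) - 1) for b in range(len(L) - 1))

linear = lambda x, y: abs(x - y)          # convex in the label difference: passes
potts = lambda x, y: 0 if x == y else 1   # Potts: fails
print(multilabel_submodular(linear, range(5)),
      multilabel_submodular(potts, range(5)))  # → True False
```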

104
**Multi-label to Pseudo-boolean**

Problems:

- Applicability: only solves a restricted class of energy functions and cannot handle Potts model potentials
- Computational cost: very high, since the problem size = |variables| × |labels|. Gray-level image denoising of a 1 Mpixel image requires ~2.5 × 10^8 graph nodes

105
**Outline of the Tutorial**

- The st-mincut problem
- Connection between st-mincut and energy minimization?
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems

106
**St-mincut based Move algorithms**

E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj), x ϵ L = {l1, l2, ..., lk}. Move algorithms are commonly used for solving non-submodular multi-label problems; they are extremely efficient and produce good solutions, but they are not exact: they produce local optima.

107
**Move Making Algorithms**

Energy Solution Space

108
**Move Making Algorithms**

Current Solution Search Neighbourhood Optimal Move Energy Solution Space

109
**Computing the Optimal Move**

From the current solution xc, the optimal move is sought within a search neighbourhood (the move space over t). Key property: a bigger move space gives better solutions, but makes finding the optimal move harder.

110
**Moves using Graph Cuts**

Expansion and swap move algorithms [Boykov, Veksler and Zabih, PAMI 2001] make a series of changes to the solution (moves); each move results in a solution with smaller energy. With N variables and L labels, the move space (over t) has size 2^N, while the full space of solutions (over x) has size L^N.

111
**Moves using Graph Cuts**

Expansion and swap move algorithms [Boykov, Veksler and Zabih, PAMI 2001] make a series of changes to the solution (moves); each move results in a solution with smaller energy. From the current solution: construct a move function, minimize it to get the optimal move, and move to the new solution. How do we minimize move functions?
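The move-making loop can be sketched end to end on a toy three-pixel Potts chain. For brevity the optimal binary move is found by brute force here, where a real implementation would solve the st-mincut described earlier; all costs are made up for illustration:

```python
from itertools import product

# Toy 3-pixel chain with Potts pairwise terms; costs invented for illustration
unary = [{0: 0, 1: 4, 2: 3}, {0: 3, 1: 1, 2: 4}, {0: 4, 1: 0, 2: 1}]
LAM = 2  # Potts weight

def energy(y):
    e = sum(unary[i][yi] for i, yi in enumerate(y))
    e += sum(LAM * (y[i] != y[i + 1]) for i in range(len(y) - 1))
    return e

def expand(y, alpha):
    """Optimal alpha-expansion move: move variable t_i = 1 switches pixel i to alpha.
    Brute force over t stands in for the st-mincut a real solver would use."""
    moves = ([alpha if ti else yi for ti, yi in zip(t, y)]
             for t in product((0, 1), repeat=len(y)))
    return min(moves, key=energy)

y = [0, 0, 0]
for _ in range(3):               # cycle over the labels until no move helps
    for alpha in (0, 1, 2):
        y = expand(y, alpha)
print(y, energy(y))  # → [0, 1, 1] 3
```

On this tiny instance the expansion moves reach the global optimum, though in general they only guarantee a local optimum within the move space.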

112
**General Binary Moves**

x = t x1 + (1 - t) x2: the new solution combines the current solution x1 and a second solution x2. The move energy Em(t) = E(t x1 + (1 - t) x2) is minimized over the move variables t to get the optimal move. The move energy is a submodular QPBF, so exact minimization is possible. [Boykov, Veksler and Zabih, PAMI 2001]

113
**Swap Move**

Variables labeled α or β can swap their labels. [Boykov, Veksler, Zabih]

114
**Swap Move**

Variables labeled α or β can swap their labels. Example: swapping the Sky and House labels (image regions: Tree, Ground, House, Sky). [Boykov, Veksler, Zabih]

115
**Swap Move**

Variables labeled α or β can swap their labels. The move energy is submodular if the unary potentials are arbitrary and the pairwise potentials are a semimetric: θij(la, lb) ≥ 0, with θij(la, lb) = 0 iff la = lb. Examples: Potts model, truncated convex. [Boykov, Veksler, Zabih]

116
**Expansion Move**

Variables either take the label α or retain their current label. [Boykov, Veksler, Zabih]

117
**Expansion Move**

Variables either take the label α or retain their current label. Example: initialize with Tree, then expand House, expand Sky, expand Ground. [Boykov, Veksler, Zabih]

118
**Expansion Move**

Variables either take the label α or retain their current label. The move energy is submodular if the unary potentials are arbitrary and the pairwise potentials are a metric: a semimetric that also satisfies the triangle inequality θij(la, lb) + θij(lb, lc) ≥ θij(la, lc). Examples: Potts model, truncated linear; the truncated quadratic cannot be solved this way. [Boykov, Veksler, Zabih]

119
**General Binary Moves**

x = t x1 + (1 - t) x2 (new solution from a first and a second solution); minimize over the move variables t.

- Expansion: first solution = old solution, second solution = all α; guaranteed when the pairwise potentials are a metric
- Fusion: the two solutions can be any solutions; the move function can be non-submodular!

120
**Solving Continuous Problems using Fusion Move**

x = t x1 + (1 - t) x2, where x1 and x2 can be continuous. Optical flow example: fuse the solution x1 from Method 1 with the solution x2 from Method 2 to obtain the final solution x. (Lempitsky et al. CVPR08, Woodford et al. CVPR08)

121
**Range Moves**

x = (t == 1) x1 + (t == 2) x2 + ... + (t == k) xk: the move variables can themselves be multi-label, and the optimal move is found using the Ishikawa construction. Useful for minimizing energies with truncated convex pairwise potentials, e.g. θij(yi, yj) = min(|yi - yj|, T). [O. Veksler, CVPR 2007]

122
**Move Algorithms for Solving Higher Order Energies**

E(x) = ∑i θi(xi) + ∑i,j θij(xi,xj) + ∑c θc(xc), with x ϵ L = {l1, l2, ..., lk} and cliques c ⊆ V. Higher order functions give rise to higher order move energies, and move energies for certain classes of higher order energies can be transformed to QPBFs. [Kohli, Kumar and Torr, CVPR07] [Kohli, Ladicky and Torr, CVPR08]

123
**Outline of the Tutorial**

- The st-mincut problem
- Connection between st-mincut and energy minimization?
- What problems can we solve using st-mincut?
- st-mincut based move algorithms
- Recent advances and open problems

124
**Solving Mixed Programming Problems**

x: binary image segmentation (xi ϵ {0,1}); ω: a non-local parameter living in some large set Ω, e.g. the pose of a stickman model acting as a rough shape prior.

E(x, ω) = C(ω) + ∑i θi(ω, xi) + ∑i,j θij(ω, xi, xj)

with constant C(ω), unary potentials θi(ω, xi), and pairwise potentials θij(ω, xi, xj) ≥ 0.

125
**Open Problems**

Characterization of problems solvable using st-mincut: which functions can be transformed to submodular QBFs? (Submodular functions are st-mincut equivalent.)

126
**Minimizing General Higher Order Functions**

We saw how simple higher order potentials can be solved. How can more sophisticated higher order potentials be solved?

127
**Summary**

Labelling problem → submodular quadratic pseudoboolean function, via an exact transformation (global optimum) or a relaxed transformation (partially optimal) → st-mincut. Move-making algorithms solve such a sub-problem at each step.

128
Thanks. Questions?

129
**Use of Higher order Potentials**

E(x1,x2,x3) = θ12(x1,x2) + θ23(x2,x3), with Potts pairwise terms θij(xi,xj) = 0 if xi = xj, C otherwise (C = 1 in this example). For the fronto-parallel labelling: E(6,6,6) = 0. (Figure: disparity labels 5-8 for pixels P1, P2, P3.) Stereo: Woodford et al. CVPR 2008

130
**Use of Higher order Potentials**

E(x1,x2,x3) = θ12(x1,x2) + θ23(x2,x3), with Potts pairwise terms θij(xi,xj) = 0 if xi = xj, C otherwise (C = 1). E(6,6,6) = 0, but E(6,7,7) = 1. (Figure: disparity labels 5-8 for pixels P1, P2, P3.) Stereo: Woodford et al. CVPR 2008

131
**Use of Higher order Potentials**

E(x1,x2,x3) = θ12(x1,x2) + θ23(x2,x3), with Potts pairwise terms θij(xi,xj) = 0 if xi = xj, C otherwise (C = 1). E(6,6,6) = 0, E(6,7,7) = 1, E(6,7,8) = 2: the pairwise potential penalizes slanted planar surfaces. (Figure: disparity labels 5-8 for pixels P1, P2, P3.) Stereo: Woodford et al. CVPR 2008

132
**Computing the Optimal Move**

From the current solution xc, a transformation function T maps xc and a move t to a new solution: T(xc, t) = xn = xc + t.

133
**Computing the Optimal Move**

With the transformation function T(xc, t) = xn = xc + t, the move energy is Em(t) = E(T(xc, t)).

134
**Computing the Optimal Move**

The optimal move t* minimizes the move energy: t* = arg min_t Em(t) = arg min_t E(T(xc, t)).
