
1 Schedule
8:30-9:00 Introduction
9:00-10:00 Models: small cliques and special potentials
10:00-10:30 Tea break
10:30-12:00 Inference: relaxation techniques (LP, Lagrangian, dual decomposition)
12:00-12:30 Models: global potentials and global parameters + discussion

2 MRF with global potential: the GrabCut model [Rother et al. '04]
The colour models θ^F, θ^B are Gaussian mixture models for foreground and background; the unaries are F_i = -log Pr(z_i | θ^F) and B_i = -log Pr(z_i | θ^B). The energy is

E(x, θ^F, θ^B) = Σ_i [ F_i(θ^F) x_i + B_i(θ^B)(1 - x_i) ] + Σ_{(i,j)∈N} |x_i - x_j|

(Figure: input image z, foreground/background GMMs in R-G colour space, output segmentation x.)
Problem: for unknown x, θ^F, θ^B the optimization is NP-hard! [Vicente et al. '09]

3 GrabCut: Iterated Graph Cuts [Rother et al. Siggraph '04]
Alternate two steps to minimize E(x, θ^F, θ^B) over x, θ^F, θ^B:
1. Learn the colour distributions θ^F, θ^B given the current segmentation x.
2. Run a graph cut to infer the segmentation x given θ^F, θ^B.
Most systems with global variables work like that, e.g. [ObjCut, Kumar et al. '05; PoseCut, Bray et al. '06; LayoutCRF, Winn et al. '06]. A minimal sketch of this alternation follows.
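Below is a hedged toy sketch of the alternation, not the authors' code: a 1-D grayscale "image", single Gaussians in place of GMMs, and exhaustive search standing in for the graph cut so it runs without a maxflow library. Each step can only lower the energy, which is why the scheme converges.

```python
# Toy GrabCut-style alternation on a tiny 1-D "image" (illustrative only).
import itertools
import numpy as np

z = np.array([0.1, 0.15, 0.2, 0.8, 0.85, 0.9])   # pixel intensities
edges = [(i, i + 1) for i in range(len(z) - 1)]  # 1-D neighbourhood
lam = 0.5                                        # pairwise weight

def fit_theta(z, x):
    """Step 1: 'learn colour distributions' = Gaussian MLE per segment."""
    eps = 1e-3  # regularize std to avoid division by zero
    fg, bg = z[x == 1], z[x == 0]
    thF = (fg.mean(), fg.std() + eps) if len(fg) else (0.5, 1.0)
    thB = (bg.mean(), bg.std() + eps) if len(bg) else (0.5, 1.0)
    return thF, thB

def unary(z, theta):
    mu, sig = theta
    return 0.5 * ((z - mu) / sig) ** 2 + np.log(sig)  # -log Gaussian

def energy(x, thF, thB):
    e = np.sum(unary(z, thF) * x + unary(z, thB) * (1 - x))
    return e + lam * sum(abs(int(x[i]) - int(x[j])) for i, j in edges)

def best_x(thF, thB):
    """Step 2: stand-in for the graph cut -- exhaustive search (tiny image)."""
    return min((np.array(x) for x in itertools.product([0, 1], repeat=len(z))),
               key=lambda x: energy(x, thF, thB))

x = np.array([0, 0, 0, 1, 1, 1])       # initialization (user's box)
for it in range(5):
    thF, thB = fit_theta(z, x)         # minimize over theta given x
    x = best_x(thF, thB)               # minimize over x given theta
    print(it, round(energy(x, thF, thB), 3), x)  # energy is non-increasing
```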

4 GrabCut: Iterated Graph Cuts
(Figure: segmentation results after iterations 1-4, with a plot of the energy after each iteration.)
Guaranteed to converge, since each step can only lower the energy.

5 Colour Model
(Figure: R-G scatter plots. Initially the foreground & background colour models overlap; after the iterated graph cut they separate into distinct foreground and background distributions.)

6 Optimizing over the θ's helps
(Figure: input image; result without iteration [Boykov & Jolly '01]; result after convergence [GrabCut '04].)

7 Global optimality?
(Figure: GrabCut result, a local optimum, vs. the global optimum [Vicente et al. '09].)
Is it a problem of the optimization or of the model?

8 … first attempt to solve it: Branch-and-MinCut [Lempitsky et al. ECCV '08]
Fit 8 Gaussians to the whole image and model a discrete subset of colour models: binary vectors w^F, w^B select which Gaussians form the foreground and background model, e.g. w^F = (1,1,0,1,0,0,0,0), w^B = (1,0,0,0,0,0,0,1). Number of solutions: 2^8 · 2^8 = 2^16 = 65,536.

E(x, w^F, w^B) = Σ_i [ F_i(w^F) x_i + B_i(w^B)(1 - x_i) ] + Σ_{i,j} w_ij |x_i - x_j|

Finding the global optimum by exhaustive search takes 65,536 graph cuts; Branch-and-MinCut needs only ~130-500 graph cuts (depends on the image).

9 Branch-and-MinCut
Branch over subsets of colour models, with entries of w^F, w^B either fixed or free (*), e.g. from the root w^F = (*,*,*,*,*,*,*,*), w^B = (*,*,*,*,*,*,*,*) down to leaves such as w^F = (1,1,1,1,0,0,0,0), w^B = (1,0,1,1,0,1,0,0). Each subtree is bounded from below by pushing the minimization over the free entries inside the sum:

min_{x,w^F,w^B} E(x, w^F, w^B)
  = min_x [ Σ_i F_i(w^F) x_i + B_i(w^B)(1 - x_i) + Σ w_ij(x_i, x_j) ]
  ≥ min_x [ Σ_i (min_{w^F} F_i(w^F)) x_i + (min_{w^B} B_i(w^B))(1 - x_i) + Σ w_ij(x_i, x_j) ]

The right-hand side is an ordinary pairwise energy, so each bound costs one graph cut. See the sketch below.
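A minimal branch-and-bound sketch in this spirit, under stated assumptions of mine: a 1-D image, 4 candidate foreground models, a fixed background model, and exhaustive search standing in for the single min-cut that evaluates each bound.

```python
# Toy Branch-and-MinCut: per-pixel min of unaries over a model subset
# gives a lower bound; at a leaf (one model) the bound is exact.
import itertools
import numpy as np

z = np.array([0.1, 0.2, 0.75, 0.9])
models = [0.2, 0.4, 0.6, 0.8]            # candidate foreground means
B = (z - 0.15) ** 2                      # fixed background unaries
lam = 0.1

def F(w):                                # foreground unaries for model w
    return (z - models[w]) ** 2

def seg_energy(x, Fu):
    e = np.sum(Fu * x + B * (1 - x))
    return e + lam * np.abs(np.diff(x)).sum()

def min_over_x(Fu):                      # stand-in for one graph cut
    return min(seg_energy(np.array(x), Fu)
               for x in itertools.product([0, 1], repeat=len(z)))

def bound(ws):
    """Lower bound for a set of models: per-pixel min of unaries, one cut."""
    return min_over_x(np.min([F(w) for w in ws], axis=0))

def branch_and_bound(ws, best=np.inf):
    if bound(ws) >= best:                # prune: subtree cannot beat incumbent
        return best
    if len(ws) == 1:                     # leaf: bound equals the exact energy
        return bound(ws)
    mid = len(ws) // 2                   # branch: split the model set
    best = branch_and_bound(ws[:mid], best)
    return branch_and_bound(ws[mid:], best)

print(branch_and_bound(list(range(len(models)))))  # global minimum energy
```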

10 Results …
(Figure, two examples: GrabCut E = -618 vs. Branch-and-MinCut E = -624 (speed-up 481); GrabCut E = -593 vs. Branch-and-MinCut E = -584 (speed-up 141).)

11 Object Recognition & Segmentation
The same machinery applies with w = templates × positions: given exemplar shapes, minimize E(x, w) over |w| ≈ 2,000,000 shape hypotheses. Test: speed-up ~900; accuracy 98.8%.

12 … second attempt to solve it [Vicente et al. ICCV '09]
Eliminate the global colour models θ^F, θ^B by minimizing them out:

E'(x) = min_{θ^F,θ^B} E(x, θ^F, θ^B)

13 Eliminate the colour model
E(x, θ^F, θ^B) = Σ_i [ F_i(θ^F) x_i + B_i(θ^B)(1 - x_i) ] + Σ w_ij |x_i - x_j|

Discretize the image colours into K = 16^3 histogram bins, with B_k the set of pixels falling in bin k. The foreground model θ^F ∈ [0,1]^K is a distribution (Σ_k θ^F_k = 1); the background model is defined the same way. For a fixed segmentation x, the optimal θ^{F/B} are the empirical histograms (sketch below):

θ^F_k = n^k_F / n_F, where n_F = Σ_i x_i (number of foreground pixels) and n^k_F = Σ_{i∈B_k} x_i (number of foreground pixels in bin k).
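A small sketch of this histogram fit; the bin assignments and the segmentation are made-up illustrations, and K is reduced from 16^3 to 4 for readability.

```python
# Optimal histogram colour model given a segmentation x: theta_F^k = n_F^k / n_F.
import numpy as np

K = 4
bins = np.array([0, 0, 1, 2, 2, 3, 3, 3])   # bin index k of each pixel
x = np.array([1, 1, 1, 0, 0, 0, 1, 0])      # current segmentation

n_F = x.sum()                                            # n_F: # fgd pixels
n_F_k = np.bincount(bins, weights=x, minlength=K)        # n_F^k per bin
theta_F = n_F_k / n_F                                    # empirical fg histogram
theta_B = np.bincount(bins, weights=1 - x, minlength=K) / (len(x) - n_F)
print(theta_F, theta_F.sum())                            # distribution, sums to 1
```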

14 Eliminate the colour model
Substituting the optimal histograms θ^F_k = n^k_F / n_F into

E(x, θ^F, θ^B) = Σ_k [ -n^k_F log θ^F_k - n^k_B log θ^B_k ] + Σ w_ij |x_i - x_j|

gives an energy over x alone:

E'(x) = min_{θ^F,θ^B} E(x, θ^F, θ^B) = g(n_F) + Σ_k h_k(n^k_F) + Σ w_ij |x_i - x_j|, with n_F = Σ_i x_i and n^k_F = Σ_{i∈B_k} x_i.

Here g is convex with its minimum at n_F = n/2, so it prefers "equal area" segmentations; each h_k is concave with its minima at the extremes, so it prefers each colour bin to be entirely foreground or entirely background. The sketch below illustrates both shapes.
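To see the claimed shapes concretely, here is a sketch using the entropy-style forms that follow from θ^k = n^k / n; the exact constants and sizes are my assumption for illustration, not taken from the slides.

```python
# Shapes of the eliminated-colour-model terms:
#   g(nF)  = nF log nF + (n - nF) log(n - nF)      -> convex, min at n/2
#   h_k(v) = -v log v - (nk - v) log(nk - v)       -> concave, min at extremes
import numpy as np

def xlogx(v):
    v = np.asarray(v, dtype=float)
    return np.where(v > 0, v * np.log(np.maximum(v, 1e-12)), 0.0)

n = 100
nF = np.arange(n + 1)
g = xlogx(nF) + xlogx(n - nF)        # prefers "equal area" segmentations

nk = 20                               # pixels in one colour bin
nFk = np.arange(nk + 1)
h_k = -xlogx(nFk) - xlogx(nk - nFk)   # prefers the bin all-fg or all-bg

print(np.argmin(g), np.argmin(h_k))   # minima at nF = 50 and nF^k = 0
```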

15 How to optimize it … Dual Decomposition
Split E(x) = g(n_F) + Σ_k h_k(n^k_F) + Σ w_ij |x_i - x_j| into E_1(x) (the g term plus y^T x: simple, no MRF) and E_2(x) (the h_k terms plus the pairwise MRF minus y^T x: a robust P^n Potts model), decoupled by a vector y:

min_x E(x) = min_x [ E_1(x) + y^T x + E_2(x) - y^T x ]
           ≥ min_{x'} [ E_1(x') + y^T x' ] + min_x [ E_2(x) - y^T x ] =: L(y)

Goal: maximize the concave function L(y) using sub-gradients; there are no guarantees on E itself (NP-hard). "Parametric maxflow" gives the optimal y = λ1 efficiently [Vicente et al. ICCV '09]. (Figure: the concave lower bound L(y) lying below E(x').) A sub-gradient sketch follows.
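A hedged sub-gradient sketch of this scheme. The particular E_1/E_2 below are toy stand-ins of mine, not the paper's terms, and both inner minimizations use brute force in place of parametric maxflow and the P^n Potts graph cut; only the dual-decomposition loop itself is the point.

```python
# Dual decomposition: maximize the concave lower bound L(y) by sub-gradient
# ascent; L(y) <= min_x E(x) for every y, with equality if the minimizers agree.
import itertools
import numpy as np

n = 5
all_x = [np.array(b) for b in itertools.product([0, 1], repeat=n)]

def E1(x):                        # toy "global" term (no MRF), concave in nF
    return -abs(x.sum() - n / 2)

def E2(x):                        # toy MRF part: unaries + pairwise smoothness
    u = np.array([-1.0, -0.5, 0.2, 0.5, 1.0])
    return u @ x + 0.3 * np.abs(np.diff(x)).sum()

y = np.zeros(n)
L = -np.inf
for t in range(200):
    x1 = min(all_x, key=lambda x: E1(x) + y @ x)   # min_x' E1(x') + y^T x'
    x2 = min(all_x, key=lambda x: E2(x) - y @ x)   # min_x  E2(x)  - y^T x
    L = E1(x1) + y @ x1 + E2(x2) - y @ x2          # current bound L(y)
    if np.array_equal(x1, x2):                     # minimizers agree:
        break                                      # the bound is tight
    y += (1.0 / (1 + t)) * (x1 - x2)               # sub-gradient step on y

print(L, min(E1(x) + E2(x) for x in all_x))        # L(y) <= min_x E(x)
```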

16 Some results …
Global optimum reached in 61% of cases (GrabCut database).
(Figure: input, GrabCut result, global optimum (DD), local optimum (DD).)

17 Insights on the GrabCut model
(Figure: segmentations with the area term re-weighted, e.g. 0.3·g, 0.4·g, 1.5·g. Recall: g is convex and prefers "equal area" segmentations; h_k is concave and prefers each colour to be entirely fore- or background.)

18 Relationship to Soft P^n Potts
GrabCut clusters all colours of the image together; soft P^n Potts models are just a different type of clustering.
(Figure: image; pairwise CRF only; TextonBoost [Shotton et al. '06]; robust P^n Potts [Kohli et al. '08] with one super-pixelization and with another super-pixelization.)

19 Marginal Probability Field (MPF) [Woodford et al. ICCV '09]
What is the prior of a MAP-MRF solution? Take a training image that is 60% black, 40% white. Under the learned i.i.d. MRF prior, the MAP 8-pixel image is all black: prior(x) = 0.6^8 ≈ 0.017. Images with the correct 60/40 statistic are less likely, e.g. prior(x) = 0.6^5 · 0.4^3 ≈ 0.005. The MRF is a bad prior, since it ignores the shape of the (feature) distribution! Remedy: introduce a global term that controls the global statistic. (See the quick check below.)
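A quick check of these numbers; the exponents 8 and 5·3 are reconstructed from the garbled slide text, so treat them as an assumption.

```python
# Under an i.i.d. prior matched to a 60%-black training image, the all-black
# labeling is the MAP even though its global statistic (100% black) is wrong.
all_black = 0.6 ** 8            # ~0.0168: most probable 8-pixel image
mixed = 0.6 ** 5 * 0.4 ** 3     # ~0.0050: an image with 5 black, 3 white pixels
print(all_black, mixed)         # images with the correct 60/40 mix are less likely
```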

20 Marginal Probability Field (MPF) [Woodford et al. ICCV '09]
(Figure: plot comparing the true energy with the MRF cost as a function of a global statistic.)
Optimization is done with dual decomposition (different decompositions than above).

21 Examples
(Figure: Segmentation — pairwise MRF with increasing prior strength vs. ground truth. In-painting — noisy input restored with a global gradient prior.)

22 Schedule
8:30-9:00 Introduction
9:00-10:00 Models: small cliques and special potentials
10:00-10:30 Tea break
10:30-12:00 Inference: relaxation techniques (LP, Lagrangian, dual decomposition)
12:00-12:30 Models: global potentials and global parameters + discussion

23 Open Questions
Many exciting future directions:
– Exploiting the latest ideas for applications (object recognition etc.)
– Many other higher-order cliques: topology, grammars, etc. (this conference).
Comparison of inference techniques is needed:
– Factor-graph message passing vs. transformation vs. LP relaxation?
Learning higher-order random fields.

