1 Improved Initialisation and Gaussian Mixture Pairwise Terms for Dense Random Fields with Mean-field Inference. Vibhav Vineet, Jonathan Warrell, Paul Sturgess, Philip H.S. Torr. http://cms.brookes.ac.uk/research/visiongroup/

2 Labelling problem: Assign a label to each image pixel. Example tasks: stereo, object detection, object segmentation.

3 Problem formulation: Find a labelling that maximises the conditional probability or, equivalently, minimises the energy function.
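
The equation itself does not survive the transcript; the standard dense-CRF formulation the slide refers to (consistent with the Krähenbühl and Koltun model cited later; notation assumed) is:

E(\mathbf{x}) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j),
\qquad
P(\mathbf{x} \mid \mathbf{I}) = \frac{1}{Z}\exp\bigl(-E(\mathbf{x})\bigr),
\qquad
\mathbf{x}^* = \operatorname*{arg\,min}_{\mathbf{x}} E(\mathbf{x})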

4 Problem formulation: With a grid CRF construction, inference leads to over-smoothing around object boundaries.

5 Problem formulation: A dense CRF construction is able to recover the fine boundaries that the grid CRF over-smooths.

6 Inference in dense CRF: The very high time complexity of the fully connected model makes graph-cuts based methods infeasible.

7 Inference in dense CRF: The filter-based mean-field inference method takes 0.2 seconds*. It achieves efficient inference under two assumptions: a mean-field approximation to the CRF, and pairwise weights that take a Gaussian form. (* Krähenbühl and Koltun, Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials, NIPS 2011.)

8 Efficient inference in dense CRF: Exact inference with the true distribution is intractable, so we approximate it with a distribution from a tractable family: mean-field methods (Jordan et al., 1999).

9 Naïve mean field: The mean-field approximation to the CRF assumes all variables are independent.
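
In symbols (notation assumed, following the standard mean-field literature), Q factorises over pixels and is chosen to minimise the KL divergence to the true distribution:

Q(\mathbf{x}) = \prod_i Q_i(x_i),
\qquad
Q^* = \operatorname*{arg\,min}_{Q} \mathrm{KL}\bigl(Q \,\|\, P\bigr)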

10 Efficient inference in dense CRF: Assume the Gaussian pairwise weights form a mixture of Gaussian kernels: a bilateral kernel and a spatial kernel.
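
The two kernels in question, as defined in Krähenbühl and Koltun (NIPS 2011), are a bilateral kernel over pixel positions p and colours I, and a purely spatial kernel over positions:

k(\mathbf{f}_i, \mathbf{f}_j) =
w^{(1)} \exp\!\Bigl(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\alpha^2}
                    -\frac{\lVert I_i - I_j \rVert^2}{2\theta_\beta^2}\Bigr)
+ w^{(2)} \exp\!\Bigl(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\gamma^2}\Bigr)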

11 Marginal update: The marginal update involves an expectation of the pairwise cost over the distribution Q, given that x_i takes label l. The expensive message-passing step is solved using a highly efficient permutohedral-lattice based filtering approach. The final labelling is the maximum posterior marginal (MPM) under the approximate distribution.
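
A minimal sketch of one mean-field iteration, assuming precomputed unary potentials and an explicit dense kernel matrix; the real method replaces the quadratic-cost matrix product below with permutohedral-lattice filtering:

import numpy as np

def mean_field_step(unary, kernel, compat, Q):
    """One naive mean-field update for a dense CRF.

    unary:  (N, L) unary potentials psi_u(x_i = l)
    kernel: (N, N) Gaussian pairwise weights k(f_i, f_j)
    compat: (L, L) label compatibility mu(l, l')
    Q:      (N, L) current marginals Q_i(x_i = l)
    """
    # Message passing: expectation of the kernel under Q. This O(N^2)
    # product is exactly what the permutohedral lattice accelerates
    # (the j = i self term is kept here for brevity).
    msg = kernel @ Q                                 # (N, L)
    pairwise = msg @ compat.T                        # apply label compatibility
    logits = -unary - pairwise                       # negative total cost
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    Q_new = np.exp(logits)
    return Q_new / Q_new.sum(axis=1, keepdims=True)  # normalise per pixel

# After a few iterations, the MPM labelling is: labels = Q.argmax(axis=1)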

12 Q distribution: (Figure: Q distributions for different classes at iterations 0, 1, 2, and 10.)

13 Two issues are associated with the method: it is sensitive to initialisation, and its Gaussian pairwise weights are restrictive.

14 Our contributions resolve both issues: for the sensitivity to initialisation, we propose a SIFT-flow based initialisation method; for the restrictive Gaussian pairwise weights, an expectation-maximisation (EM) based strategy to learn a more general Gaussian mixture model.

15 Sensitivity to initialisation: An experiment on the PascalVOC-10 segmentation dataset shows that a good initialisation can lead to a better solution, motivating our SIFT-flow based initialisation method.

Initialisation       Mean-field   Alpha-expansion
Unary potential      28.52%       27.88%
Ground-truth label   41%          27.88%

16 SIFT-flow based correspondence: Given a test image, we first retrieve a set of nearest neighbours from the training set using GIST features. (Figure: test image and nearest neighbours retrieved from the training set.)
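
A sketch of the retrieval step, assuming GIST descriptors have already been computed for the test and training images (by any GIST implementation) and plain L2 distance; k is a free parameter:

import numpy as np

def retrieve_neighbours(test_gist, train_gists, k=7):
    """Return indices of the k training images closest in GIST space.

    test_gist:   (D,) GIST descriptor of the test image
    train_gists: (M, D) GIST descriptors of the training set
    """
    dists = np.linalg.norm(train_gists - test_gist, axis=1)  # L2 distances
    return np.argsort(dists)[:k]  # indices of the k nearest neighbours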

17 SIFT-flow based correspondence: The K nearest neighbours are warped to the test image. (Figure: test image with the warped nearest neighbours and their corresponding flow energies: 13.31, 14.31, 18.38, 22, 23.31, 27.2, 30.87.)

18 SIFT-flow based correspondence: Pick the best nearest neighbour based on the flow value. (Figure: test image, best nearest neighbour, and warped image; flow: 13.31.)

19 Label transfer: Warp the ground truth according to the correspondence, transferring labels from the top-1 neighbour using the flow. (Figure: ground truth of the best nearest neighbour, the flow field, the ground truth warped according to the flow, and the ground truth of the test image.)
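
A minimal sketch of the warping step, assuming the SIFT flow is given as per-pixel (dx, dy) offsets from the test image into the neighbour; nearest-neighbour lookup keeps the labels discrete:

import numpy as np

def warp_labels(labels, flow):
    """Warp a neighbour's ground-truth label map onto the test image.

    labels: (H, W) integer label map of the best nearest neighbour
    flow:   (H, W, 2) per-pixel (dx, dy) offsets from test to neighbour
    """
    H, W = labels.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Look up the neighbour pixel that each test pixel corresponds to,
    # clamping coordinates that flow outside the image.
    src_x = np.clip(xs + np.rint(flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(ys + np.rint(flow[..., 1]).astype(int), 0, H - 1)
    return labels[src_y, src_x]  # (H, W) warped label map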

20 SIFT-flow based initialisation: Rescore the unary potential using the transferred labels. (Figure: test image, ground-truth image, and outputs with and without rescoring, showing a qualitative improvement in accuracy from the rescored unary potential.)

21 SIFT-flow based initialisation: Initialise the mean-field solution from the transferred labels. (Figure: test image, ground-truth image, and outputs with and without initialisation, showing a qualitative improvement in accuracy after initialising the mean-field.)
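
Slides 20 and 21 do not give the exact rescoring and initialisation rules; one plausible sketch, treating the warped label map as a soft prior with a hypothetical weight alpha, is:

import numpy as np

def rescore_and_init(unary, warped_labels, n_labels, alpha=0.5):
    """Bias the unary potentials and the initial marginals Q towards
    the SIFT-flow-transferred labels (alpha is a hypothetical weight).

    unary:         (N, L) unary potentials
    warped_labels: (N,) labels transferred from the best neighbour
    """
    onehot = np.eye(n_labels)[warped_labels]   # (N, L) transferred labels
    rescored = unary - alpha * onehot          # lower cost for transferred label
    logits = -rescored
    logits -= logits.max(axis=1, keepdims=True)
    Q0 = np.exp(logits)
    Q0 /= Q0.sum(axis=1, keepdims=True)        # initial mean-field marginals
    return rescored, Q0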

22 Gaussian pairwise weights: The pairwise weights are a mixture of Gaussians, with a bilateral and a spatial kernel.

23 Gaussian pairwise weights: Restriction 1: the Gaussians have zero mean.

24 Gaussian pairwise weights: Restriction 2: the same Gaussian mixture model is used for every label pair.

25 Gaussian pairwise weights: Restriction 3: the standard deviations are set arbitrarily rather than learnt.

26 Our approach: Incorporate a general Gaussian mixture model.

27 Gaussian pairwise weights: We learn an arbitrary mean and learn the standard deviation.

28 Gaussian pairwise weights: We additionally learn the mixing coefficients.

29 Gaussian pairwise weights: We use a different Gaussian mixture for each label pair.
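
Taken together (notation assumed), the pairwise weight for a label pair (l, l') becomes a full Gaussian mixture over the feature difference, with learnt means, covariances, and mixing coefficients:

k^{(l,l')}(\mathbf{f}_i - \mathbf{f}_j) =
\sum_{m=1}^{M} \pi_m^{(l,l')}\,
\mathcal{N}\bigl(\mathbf{f}_i - \mathbf{f}_j \mid
\boldsymbol{\mu}_m^{(l,l')}, \boldsymbol{\Sigma}_m^{(l,l')}\bigr)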

30 Learning mixture model: We propose a piecewise learning framework.

31 Learning mixture model: Step 1: learn the parameters of the unary potential.

32 Learning mixture model: Step 2: learn the label compatibility function.

33 Learning mixture model: Step 3: set the Gaussian model following Krähenbühl et al.

34 Learning mixture model: Step 4: learn the parameters of the Gaussian mixture.

35 Learning mixture model: Step 5: the weighting lambda is set through cross-validation.

36 Our model: Generative training maximises the joint likelihood of pairs of labels and features, with a latent variable selecting which of the M mixture components generated each sample.

37 Learning mixture model: Maximise the log-likelihood function via an expectation-maximisation based method. (Figure: our learnt mixture model alongside the zero-mean Gaussian.)
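
A compact sketch of the EM fit for one label pair, assuming the data are feature differences f_i - f_j collected from training pairs with that label combination, and diagonal covariances for simplicity:

import numpy as np

def fit_gmm_em(X, M=3, n_iter=50, eps=1e-8):
    """Fit an M-component diagonal-covariance GMM to feature
    differences X of shape (N, D) by expectation-maximisation."""
    N, D = X.shape
    rng = np.random.default_rng(0)
    mu = X[rng.choice(N, M, replace=False)]   # initialise means from data
    var = np.ones((M, D)) * X.var(axis=0)     # initialise variances
    pi = np.full(M, 1.0 / M)                  # mixing coefficients

    for _ in range(n_iter):
        # E-step: responsibilities r[n, m] = p(component m | x_n)
        log_p = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                         + np.log(2 * np.pi * var)).sum(axis=2)
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means, and variances
        Nm = r.sum(axis=0) + eps
        pi = Nm / N
        mu = (r.T @ X) / Nm[:, None]
        var = (r.T @ (X ** 2)) / Nm[:, None] - mu ** 2 + eps
    return pi, mu, var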

38 Inference with the mixture model: Inference now involves evaluating M extra Gaussian terms: blurring is performed on mean-shifted points, which increases the time complexity.
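
"Blurring on mean-shifted points" exploits the identity that a Gaussian with mean mu_m is a zero-mean Gaussian evaluated at a shifted query point, so the same zero-mean lattice filter can be reused once per component (notation assumed):

\sum_j \mathcal{N}\bigl(\mathbf{f}_i - \mathbf{f}_j \mid \boldsymbol{\mu}_m, \boldsymbol{\Sigma}_m\bigr)\, Q_j(l)
= \sum_j \mathcal{N}\bigl((\mathbf{f}_i - \boldsymbol{\mu}_m) - \mathbf{f}_j \mid \mathbf{0}, \boldsymbol{\Sigma}_m\bigr)\, Q_j(l)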

39 Experiments on PascalVOC-10: Qualitative results of the SIFT-flow method. (Figure: image, warped nearest ground-truth image, output without SIFT-flow, output with SIFT-flow.)

40 Experiments on PascalVOC-10: Quantitative results on the PascalVOC-10 segmentation dataset.

Algorithm          Time (s)  Overall (% corr)  Av. recall  Av. U/I
Alpha-exp          3.0       79.52             36.08       27.88
AHCRF+Cooc         36        81.43             38.01       30.9
Dense CRF          0.67      71.63             34.53       28.4
Ours1 (U+P+GM)     26.7      80.23             36.41       28.73
Ours2 (U+P+I)      0.90      79.65             41.84       30.95
Ours3 (U+P+I+GM)   26.7      78.96             44.05       31.48

Our model with only unary and pairwise terms achieves better accuracy than other, more complex models, and is generally far more efficient.

41 Experiments on PascalVOC-10: Qualitative results on the PascalVOC-10 segmentation dataset. (Figure: image, alpha-expansion, dense CRF, ours.) Our method is able to recover missing object parts.

42 Experiments on CamVid: Quantitative results on the CamVid dataset.

Algorithm       Time (s)  Overall (% corr)  Av. recall  Av. U/I
Alpha-exp       0.96      78.84             58.64       43.89
APST (U+P+H)    1.6       85.18             60.06       50.62
Dense CRF       0.2       79.96             59.29       45.18
Ours (U+P+I)    0.35      85.31             59.75       50.56

Our model with only unary and pairwise terms achieves better accuracy than other, more complex models, and is generally far more efficient.

43 Experiments on CamVid: Qualitative results on the CamVid dataset. (Figure: image, alpha-expansion, ours.) Our method is able to recover missing object parts.

44 Conclusion: Filter-based mean-field inference promises high efficiency and accuracy. We proposed two methods to robustify the basic mean-field method: a SIFT-flow based method for better initialisation, and an EM-based algorithm for learning a general Gaussian mixture model. More complex higher-order models can also be incorporated into the pairwise model.

46 Thank You

47 Q distribution. (Figure only.)

48 Learning mixture model: For every label pair, maximise the log-likelihood function.

49 Learning mixture model: For every label pair, maximise the log-likelihood function via an expectation-maximisation based method.

