Presentation transcript: OBJ CUT. M. Pawan Kumar, Philip Torr, Andrew Zisserman. University of Oxford.

1 OBJ CUT. M. Pawan Kumar, Philip Torr, Andrew Zisserman. University of Oxford.

2 Aim Given an image, segment the object. The segmentation should (ideally) be shaped like the object (e.g. cow-like), obtained efficiently in an unsupervised manner, and able to handle self-occlusion. [Figure: cow image + object category model → segmented cow]

3 Challenges Self-occlusion; intra-class shape variability; intra-class appearance variability.

4 Motivation Current magic-wand-style methods require user intervention: object and background seed pixels (Boykov and Jolly, ICCV 01) or a bounding box around the object (Rother et al., SIGGRAPH 04). [Figure: cow image with object seed pixels marked]

5 Motivation [Figure: cow image with both object and background seed pixels marked]

6 Motivation [Figure: the resulting segmented image]

9 Motivation: Problem These approaches are manually intensive, and the resulting segmentation is not guaranteed to be object-like. [Figure: example of a non-object-like segmentation]

10 Our Method Combine object detection with segmentation (cf. Borenstein and Ullman, ECCV 02; Leibe and Schiele, BMVC 03) and incorporate global shape priors in the MRF. Detection provides object localization and global shape priors, and the object is then segmented automatically. Note that our method is completely generic and applicable to any object category model.

11 Outline Problem Formulation Form of Shape Prior Optimization Results

12 Problem Labelling m over the set of pixels D; the shape prior is provided by the parameter Θ. Energy: E(m, Θ) = Σ_x [ φ_x(D|m_x) + φ_x(m_x|Θ) ] + Σ_xy [ ψ_xy(m_x,m_y) + φ_xy(D|m_x,m_y) ]. Unary terms: the colour likelihood φ_x(D|m_x) and a unary potential φ_x(m_x|Θ) based on the distance from Θ. Pairwise terms: the prior ψ_xy(m_x,m_y) and a contrast term φ_xy(D|m_x,m_y). Find the best labelling m* = arg min_m Σ_i w_i E(m, Θ_i), where w_i is the weight for sample Θ_i.
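To make the four terms concrete, here is a minimal sketch of how the energy could be evaluated for a binary labelling, assuming the unary tables and contrast weights have been precomputed elsewhere; the array layouts and 4-connected neighbourhood are illustrative choices, not the authors' implementation.

```python
import numpy as np

def energy(m, colour_unary, shape_unary, prior, contrast_r, contrast_d):
    """Evaluate E(m, Theta) for a binary labelling m of shape (H, W).

    colour_unary, shape_unary : (2, H, W) arrays with phi_x(D|m_x) and
        phi_x(m_x|Theta) for both labels (0 = background, 1 = object).
    prior : constant Ising cost psi_xy paid by every disagreeing 4-neighbour pair.
    contrast_r, contrast_d : contrast terms phi_xy(D|m_x,m_y) for each right
        neighbour pair (H, W-1) and each down neighbour pair (H-1, W).
    """
    H, W = m.shape
    rows, cols = np.arange(H)[:, None], np.arange(W)[None, :]
    e = colour_unary[m, rows, cols].sum() + shape_unary[m, rows, cols].sum()
    # Pairwise terms are paid only across label discontinuities.
    diff_r = m[:, :-1] != m[:, 1:]
    diff_d = m[:-1, :] != m[1:, :]
    e += prior * (diff_r.sum() + diff_d.sum())
    e += (contrast_r * diff_r).sum() + (contrast_d * diff_d).sum()
    return e
```

Minimising this energy over m for a fixed Θ is the graph-cut step sketched later in the transcript.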

13 MRF The probability of a labelling consists of a likelihood, with a unary potential φ_x(D|m_x) based on the colour of each pixel, and a prior, with pairwise potentials ψ_xy(m_x,m_y) which favour the same label for neighbouring pixels. [Figure: MRF linking the image plane D (pixels) to the labels m, showing the unary potential φ_x(D|m_x) and the pairwise potential ψ_xy(m_x,m_y)]

14 Example [Figure: cow image with object and background seed pixels; the colour likelihood ratios φ_x(D|obj) and φ_x(D|bkg) and the prior ψ_xy(m_x,m_y) over the pixel grid]

15 Example [Figure: cow image with object and background seed pixels; the colour likelihood ratio and the prior visualized as images]

16 Contrast-Dependent MRF The probability of a labelling additionally has a contrast term φ_xy(D|m_x,m_y), which favours segmentation boundaries that lie on image edges. [Figure: MRF linking the image plane D (pixels) to the labels m, with the contrast term φ_xy(D|m_x,m_y)]
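The transcript only names the contrast term. A common concrete choice (in the style of Boykov and Jolly's interactive segmentation) makes the discontinuity penalty decay with the squared intensity difference between neighbours, so cuts become cheap along strong edges. The sketch below assumes a single-channel image and treats lam and the sigma heuristic as illustrative defaults rather than the paper's settings.

```python
import numpy as np

def contrast_weights(image, lam=1.0):
    """Contrast terms phi_xy(D|m_x,m_y) for right and down 4-neighbours."""
    img = image.astype(np.float64)
    d_right = img[:, 1:] - img[:, :-1]   # horizontal intensity differences
    d_down = img[1:, :] - img[:-1, :]    # vertical intensity differences
    # Heuristic: sigma^2 set to the mean squared neighbour difference.
    sigma2 = max(np.mean(np.concatenate([d_right.ravel() ** 2,
                                         d_down.ravel() ** 2])), 1e-8)
    w_right = lam * np.exp(-d_right ** 2 / (2 * sigma2))
    w_down = lam * np.exp(-d_down ** 2 / (2 * sigma2))
    return w_right, w_down   # shapes (H, W-1) and (H-1, W)
```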

17 Example [Figure: the colour likelihood ratios φ_x(D|obj), φ_x(D|bkg) and the combined pairwise term ψ_xy(m_x,m_y) + φ_xy(D|m_x,m_y) (prior + contrast) over the pixel grid]

18 Example [Figure: cow image with object and background seed pixels; segmentation using the colour likelihood ratio with the prior + contrast pairwise term]

19 Our Model: Object Category Specific MRF The probability of a labelling additionally has a unary potential φ_x(m_x|Θ), which depends on the distance of pixel x from the shape parameter Θ. [Figure: MRF linking the image plane D (pixels), the labels m, and the shape parameter Θ]
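One straightforward way to realise a unary potential that grows with distance from the shape Θ is a truncated distance transform of the rendered shape mask; the scale and truncation constants below are illustrative assumptions, not values from the talk.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def shape_unary(shape_mask, scale=0.1, trunc=5.0):
    """Unary potentials phi_x(m_x | Theta) from a binary shape mask.

    Pixels far outside the rendered shape pay to take the object label,
    and pixels deep inside it pay to take the background label.
    """
    shape_mask = shape_mask.astype(bool)
    dist_to_shape = distance_transform_edt(~shape_mask)   # 0 inside the shape
    dist_to_bkg = distance_transform_edt(shape_mask)      # 0 outside the shape
    cost_obj = np.minimum(scale * dist_to_shape, trunc)   # cost of label 'object'
    cost_bkg = np.minimum(scale * dist_to_bkg, trunc)     # cost of label 'background'
    return np.stack([cost_bkg, cost_obj])   # (2, H, W): index 0 = bkg, 1 = obj
```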

20 Example [Figure: cow image with seed pixels; prior + contrast, and the distance from the shape prior Θ]

21 Example [Figure: cow image with seed pixels; prior + contrast, and the colour likelihood combined with the distance from the shape prior]

23 Outline Problem Formulation (energy E(m, Θ) = Σ_x [ φ_x(D|m_x) + φ_x(m_x|Θ) ] + Σ_xy [ ψ_xy(m_x,m_y) + φ_xy(D|m_x,m_y) ]), Form of Shape Prior, Optimization, Results.

24 Layered Pictorial Structures (LPS) A generative model: a composition of parts plus their spatial layout (pairwise configuration). Parts in Layer 2 can occlude parts in Layer 1. [Figure: Layer 1 and Layer 2 part outlines and their spatial layout]

25 Layered Pictorial Structures (LPS) [Figure: a cow instance generated by applying transformation Θ_1 to the Layer 1 and Layer 2 parts, with P(Θ_1) = 0.9]

26 Layered Pictorial Structures (LPS) [Figure: another cow instance generated by transformation Θ_2, with P(Θ_2) = 0.8]

27 Layered Pictorial Structures (LPS) [Figure: an unlikely instance generated by transformation Θ_3, with P(Θ_3) = 0.01]

28 LPS for Detection Learning: the LPS is learnt automatically from a set of videos, with part correspondence established by shape context matching. [Figure: shape context matching across multiple shape exemplars]

29 LPS for Detection Detection: putative part poses (x, y) are found using a tree cascade of classifiers. [Figure: candidate part locations overlaid on the image]

30 LPS for Detection An MRF over parts, where labels represent putative poses. The prior (pairwise potential) is a robust truncated model, and the LPS is matched by obtaining the MAP configuration. [Figure: truncated Potts, linear and quadratic pairwise models]
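A small sketch of the robust truncated pairwise prior named on the slide, with the three pairwise models shown as plain functions over scalar pose labels; the consistency test C_i(x_j), the truncation constant tau, and the default values are placeholders, not the paper's settings.

```python
def robust_truncated_prior(xi, xj, f_ij, in_consistent_set, tau):
    """psi_ij(x_i, x_j): pay f_ij when x_i is consistent with x_j, tau otherwise."""
    return f_ij(xi, xj) if in_consistent_set(xi, xj) else tau

# Three possible forms of f_ij over scalar pose labels xi, xj:
def potts(xi, xj, cost=1.0):
    return 0.0 if xi == xj else cost         # constant penalty for disagreement

def truncated_linear(xi, xj, tau=3.0):
    return min(abs(xi - xj), tau)            # linear cost, capped at tau

def truncated_quadratic(xi, xj, tau=9.0):
    return min((xi - xj) ** 2, tau)          # quadratic cost, capped at tau
```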

31 LPS for Detection: Efficient Belief Propagation Likelihood: φ_i(x_i), evaluated by the tree cascade of classifiers. Prior: ψ_ij(x_i,x_j) = f_ij(x_i,x_j) if x_i ∈ C_i(x_j), and a constant τ_ij otherwise. Pr(x) ∝ Π_i φ_i(x_i) Π_ij ψ_ij(x_i,x_j). [Figure: graph over parts i, j, k with messages m_{j→i}]

32 LPS for Detection: Efficient Belief Propagation The same likelihood and prior as above. [Figure: the message update equation used to compute m_{j→i}]
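The message equation itself did not survive the transcript. As a point of reference, below is the standard max-product message update that the efficient scheme accelerates by exploiting the truncated prior; it is a generic sketch over a discrete set of putative poses, not the optimised update from the talk. Replacing the max with a sum gives the sum-product variant used later in the E-step.

```python
import numpy as np

def max_product_message(phi_j, psi_ij, incoming_to_j):
    """Compute the message m_{j->i}(x_i) over part i's putative poses.

    phi_j         : (Lj,) likelihoods phi_j(x_j) of part j's putative poses
    psi_ij        : (Li, Lj) pairwise compatibilities psi_ij(x_i, x_j)
    incoming_to_j : list of (Lj,) messages m_{k->j} from neighbours k != i
    """
    belief_j = phi_j.copy()
    for m_kj in incoming_to_j:
        belief_j = belief_j * m_kj
    msg = (psi_ij * belief_j[None, :]).max(axis=1)   # maximise over x_j
    return msg / msg.sum()                           # normalise for stability
```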

33 LPS for Detection: Efficient Generalized Belief Propagation The same likelihood and prior as above, with messages m_{k→ij} passed from single parts to pairs within the cluster ijk. [Figure: part graph with cluster messages m_{k→ij}]

34 LPS for Detection: Efficient Generalized Belief Propagation [Figure: the generalized message update equation for m_{k→ij}]

35 LPS for Detection: Second Order Cone Programming Relaxations The same likelihood φ_i(x_i) (tree cascade of classifiers) and robust truncated prior ψ_ij(x_i,x_j). [Figure: part graph over i, j, k]

36 LPS for Detection: Second Order Cone Programming Relaxations The MAP estimation problem is written in terms of m, the concatenation of binary indicator vectors over putative poses; l, the likelihood vector; and P, the prior matrix. [Figure: example indicator vectors 0 1 0 / 0 0 1 / 1 0 0 for parts i, j, k]

37 LPS for Detection: Second Order Cone Programming Relaxations [Figure: the SOCP relaxation objective and constraints in terms of m, l and P]

40 Outline Problem Formulation Form of Shape Prior Optimization Results

41 Optimization Given the image D, find the best labelling m* = arg max_m p(m|D). The LPS parameter Θ is treated as a latent (hidden) variable within an EM framework: the E-step samples the distribution over Θ, and the M-step obtains the labelling m.
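A high-level sketch of this alternation; the two step functions are passed in as callables because their bodies correspond to the E-step and M-step slides that follow, and the iteration and sample counts are arbitrary placeholders.

```python
def objcut_em(image, initial_labelling, sample_shape_posterior,
              segment_with_graph_cut, n_iters=3, n_samples=5):
    """Alternate between sampling shape parameters and re-segmenting."""
    m = initial_labelling
    for _ in range(n_iters):
        # E-step: samples Theta_i ~ p(Theta | m, D) and weights w_i,
        # obtained with sum-product loopy BP over the LPS.
        samples, weights = sample_shape_posterior(image, m, n_samples)
        # M-step: minimise sum_i w_i E(m, Theta_i) with a single graph cut.
        m = segment_with_graph_cut(image, samples, weights)
    return m
```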

42 E-Step Given an initial labelling m, determine p(Θ|m,D). Problem: sampling efficiently from p(Θ|m,D). Solution: we develop an efficient sum-product loopy belief propagation (LBP) algorithm for matching the LPS, similar to the efficient max-product LBP used for the MAP estimate.

43 Results Different samples localize different parts well, so we cannot use only the MAP estimate of the LPS.

44 M-Step Given samples from p(Θ|m,D), obtain a new labelling m_new. Each sample Θ_i provides an object localization, from which the RGB distributions of object and background are learnt, and a shape prior for segmentation. Problem: maximize the expected log-likelihood using all samples, and obtain the new labelling efficiently.
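A minimal sketch of how a per-sample localization could be turned into colour unaries: RGB histograms are learnt inside and outside the localization mask and converted to negative log-likelihoods. The bin count, smoothing constant and uint8 assumption are illustrative, not the paper's choices.

```python
import numpy as np

def colour_unary_from_localization(image, obj_mask, n_bins=16, eps=1e-6):
    """Colour unaries phi_x(D|m_x) from RGB histograms of a localization.

    image    : (H, W, 3) uint8 image
    obj_mask : (H, W) boolean mask of the localized object
    """
    bins = (image.astype(int) * n_bins) // 256          # per-channel bin index
    idx = (bins[..., 0] * n_bins + bins[..., 1]) * n_bins + bins[..., 2]

    def normalised_hist(mask):
        h = np.bincount(idx[mask], minlength=n_bins ** 3).astype(np.float64) + eps
        return h / h.sum()

    p_obj = normalised_hist(obj_mask)
    p_bkg = normalised_hist(~obj_mask)
    cost_obj = -np.log(p_obj[idx])    # -log p(colour | object)
    cost_bkg = -np.log(p_bkg[idx])    # -log p(colour | background)
    return np.stack([cost_bkg, cost_obj])   # (2, H, W): 0 = bkg, 1 = obj
```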

45 M-Step [Figure: cow image and sampled shape Θ_1 with weight w_1 = P(Θ_1|m,D); RGB histograms for object and background learnt from this sample]

46 M-Step [Figure: for sample Θ_1 with weight w_1 = P(Θ_1|m,D), the best labelling over the image plane D (pixels) and labels m is found efficiently using a single graph cut]

47 Segmentation using Graph Cuts [Figure: st-graph over the pixels with terminals Obj and Bkg; t-link capacities φ_x(D|bkg) + φ_x(bkg|Θ) and φ_z(D|obj) + φ_z(obj|Θ), n-link capacities ψ_xy(m_x,m_y) + φ_xy(D|m_x,m_y); the minimum cut gives the labelling m]
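A hedged sketch of the st-graph in this figure, using the third-party PyMaxflow package rather than the authors' code; the unary array is assumed to combine the colour and shape terms, and the pairwise capacities are the prior plus the contrast weights for 4-neighbours.

```python
import numpy as np
import maxflow  # PyMaxflow package

def graph_cut_segment(unary, w_right, w_down, prior=1.0):
    """Binary segmentation by a single st-mincut.

    unary   : (2, H, W) costs of labelling each pixel background (0) / object (1)
    w_right : (H, W-1) contrast weights between horizontal neighbours
    w_down  : (H-1, W) contrast weights between vertical neighbours
    prior   : constant Ising prior psi_xy added to every n-link
    """
    H, W = unary.shape[1:]
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((H, W))
    # t-links: a pixel kept on the source side is labelled object and pays
    # unary[1]; a pixel cut off to the sink side is background and pays unary[0].
    g.add_grid_tedges(nodes, unary[0], unary[1])
    # n-links: prior + contrast between 4-neighbours.
    for y in range(H):
        for x in range(W - 1):
            cap = prior + w_right[y, x]
            g.add_edge(nodes[y, x], nodes[y, x + 1], cap, cap)
    for y in range(H - 1):
        for x in range(W):
            cap = prior + w_down[y, x]
            g.add_edge(nodes[y, x], nodes[y + 1, x], cap, cap)
    g.maxflow()
    sink_side = g.get_grid_segments(nodes)               # True = background
    return np.logical_not(sink_side).astype(np.uint8)    # 1 = object
```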

48 Segmentation using Graph Cuts [Figure: the minimum cut separating Obj from Bkg yields the segmentation m]

49 M-Step [Figure: cow image and a second sampled shape Θ_2 with weight w_2 = P(Θ_2|m,D); RGB histograms for object and background learnt from this sample]

50 M-Step [Figure: for sample Θ_2 with weight w_2 = P(Θ_2|m,D), the best labelling over the image plane D (pixels) and labels m is again found with a single graph cut]

51 M-Step The samples are combined with their weights, w_1 + w_2 + …, and the best labelling m* = arg min_m Σ_i w_i E(m, Θ_i) is found efficiently using a single graph cut. [Figure: weighted combination of the sampled shapes over the image plane]
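Because the weighted objective Σ_i w_i E(m, Θ_i) is still a sum of per-pixel and per-edge terms, the sample-specific unaries can be averaged with their weights and handed once to the graph-cut step above. The sketch reuses the illustrative helpers from the earlier sketches and, for simplicity, lets every sample share one colour model even though each sample could also contribute its own histogram term in the same way.

```python
import numpy as np

def expected_unaries(colour_unary, shape_unaries, weights):
    """Combine the unary terms of all samples into one weighted unary table.

    colour_unary  : (2, H, W) colour costs (shared across samples here)
    shape_unaries : list of (2, H, W) shape costs, one per sampled Theta_i
    weights       : sample weights w_i = p(Theta_i | m, D)
    """
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                          # normalise the sample weights
    combined = colour_unary.astype(np.float64).copy()
    for shape_cost, wi in zip(shape_unaries, w):
        combined += wi * shape_cost
    return combined   # hand to graph_cut_segment(...) for the single cut
```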

52 Outline Problem Formulation Form of Shape Prior Optimization Results

53 Results Using the LPS Model for Cow [Figure: image and segmentation pairs]

54 Results Using the LPS Model for Cow [Figure: image and segmentation in the absence of a clear boundary between object and background]

55 Results Using the LPS Model for Cow [Figure: further image and segmentation pairs]

56 Results Using the LPS Model for Cow [Figure: further image and segmentation pairs]

57 Results Using the LPS Model for Horse [Figure: image and segmentation pairs]

58 Results Using the LPS Model for Horse [Figure: further image and segmentation pairs]

59 Results [Figure: comparison of the input image, Leibe and Schiele, and our method]

60 Results [Figure: segmentations using appearance only (without φ_x(m_x|Θ)), shape only (without φ_x(D|m_x)), and shape + appearance]

61 Conclusions A new model for introducing a global shape prior into the MRF, a method for combining detection and segmentation, and efficient LBP for detecting articulated objects. Future work: other shape parameters need to be explored, and the method needs to be extended to handle multiple visual aspects.

