
Slide 1: Image Segmentation: Cuts, Random Walks, and Phase-Space Embedding
Jianbo Shi, Robotics Institute, Carnegie Mellon University. Joint work with Malik, Meila, and Yu.

Slide 2: Taxonomy of Vision Problems
– Reconstruction: estimate parameters of the external 3D world.
– Visual control: visually guided locomotion and manipulation.
– Segmentation: partition I(x,y,t) into subsets corresponding to separate objects.
– Recognition: classes (face vs. non-face) and activities (gesture, expression).

Slide 3: (figure only)

Slide 4: We see objects.

Slide 5: Outline
– Problem formulation
– The Normalized Cut criterion and algorithm
– The Markov random walks view of Normalized Cut
– Combining pairwise attraction and repulsion
– Conclusions

Slide 6: Edge-based image segmentation
– Edge detection by gradient operators.
– Linking by dynamic programming, voting, relaxation, … (Montanari 71; Parent & Zucker 89; Guy & Medioni 96; Shashua & Ullman 88; Williams & Jacobs 95; Geiger & Kumaran 96; Heitger & von der Heydt 93).
+ Natural for encoding curvilinear grouping.
− Hard decisions are often made prematurely.

Slide 7: Region-based image segmentation
– Region growing, split-and-merge, etc.
– Regions provide closure for free; however, these approaches are ad hoc.
Global criteria for image segmentation:
– Markov random fields, e.g. Geman & Geman 84.
– Variational approaches, e.g. Mumford & Shah 89.
– Expectation-Maximization, e.g. Ayer & Sawhney 95; Weiss 97.
– Global methods, but computational complexity precludes exact MAP estimation.
– Problems due to local minima.

Slide 8: Bottom line: segmentation is hard and nothing has worked well, so use edge detection, or just avoid segmentation altogether.

Slide 9: Global good, local bad
– Global decisions good, local decisions bad: formulate segmentation as hierarchical graph partitioning.
– Efficient computation: draw on ideas from spectral graph theory to define an eigenvalue problem that can be solved to find the segmentation.
– Develop a suitable encoding of visual cues in terms of graph weights.
(Shi & Malik, 97)

Slide 10: Image segmentation by pairwise similarities
– Image = { pixels }; segmentation = partition of the image into segments.
– Similarity between pixels i and j: S_ij = S_ji ≥ 0.
– Objective: similar pixels should be in the same segment; dissimilar pixels should be in different segments.

Slide 11: Segmentation as weighted graph partitioning
– Pixels i ∈ I are the vertices of a graph G.
– Edges (i, j) are pixel pairs with S_ij > 0.
– The similarity matrix S = [S_ij] is a generalized adjacency matrix.
– d_i = Σ_j S_ij is the degree of i; vol A = Σ_{i∈A} d_i is the volume of A ⊆ I.
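The degree and volume definitions above can be sketched in a few lines; the 4-pixel similarity values below are my own toy numbers, not data from the talk:

```python
# Toy 4-pixel similarity matrix (symmetric, nonnegative entries);
# the values are illustrative assumptions, not from the talk.
S = [[0.0, 0.9, 0.1, 0.0],
     [0.9, 0.0, 0.1, 0.0],
     [0.1, 0.1, 0.0, 0.8],
     [0.0, 0.0, 0.8, 0.0]]

# Degree of pixel i: d_i = sum_j S_ij
d = [sum(row) for row in S]

# Volume of a subset A: vol(A) = sum of degrees over A
def vol(A):
    return sum(d[i] for i in A)

print(d)            # degrees of the four pixels
print(vol({0, 1}))  # volume of the subset {0, 1}
```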

Slide 12: Cuts in a graph
– An (edge) cut is a set of edges whose removal disconnects the graph.
– Weight of a cut: cut(A, B) = Σ_{i∈A, j∈B} S_ij.
– The normalized cut: NCut(A, B) = cut(A, B) · (1/vol A + 1/vol B).
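Both definitions are easy to check on a toy graph; the similarity values here are my own assumptions. Note how normalization penalizes peeling off a single weakly connected pixel:

```python
# cut(A,B) = sum of S_ij over i in A, j in B;
# NCut(A,B) = cut(A,B) * (1/vol(A) + 1/vol(B)).
# Toy similarity values (illustrative assumptions, not from the talk).
S = [[0.0, 0.9, 0.1, 0.0],
     [0.9, 0.0, 0.1, 0.0],
     [0.1, 0.1, 0.0, 0.8],
     [0.0, 0.0, 0.8, 0.0]]
d = [sum(row) for row in S]

def cut(A, B):
    return sum(S[i][j] for i in A for j in B)

def ncut(A, B):
    volA = sum(d[i] for i in A)
    volB = sum(d[i] for i in B)
    return cut(A, B) * (1.0 / volA + 1.0 / volB)

# The balanced split {0,1} | {2,3} has a much lower NCut than
# cutting off the single pixel 3:
print(ncut({0, 1}, {2, 3}))   # ~0.21
print(ncut({0, 1, 2}, {3}))   # ~1.27
```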

Slide 13: Normalized Cut and Normalized Association
Minimizing similarity between the groups and maximizing similarity within the groups can be achieved simultaneously.

Slide 14: The Normalized Cut (NCut) criterion
– Criterion: minimize NCut(A, Ā).
– A small cut between subsets ~ a balanced grouping.
– Finding the exact minimum is NP-hard!

Slide 15: Some definitions (equations shown on slide)

Slide 16: Normalized Cut as a generalized eigenvalue problem
Rewriting Normalized Cut in matrix form: with an indicator vector y over the partition and W = S, minimizing NCut corresponds to minimizing the Rayleigh quotient yᵀ(D − W)y / (yᵀDy) subject to yᵀD1 = 0.

Slide 17: More math…

Slide 18: Normalized Cut as a generalized eigenvalue problem
After simplification, we get the generalized eigensystem (D − W) y = λ D y, where the real-valued eigenvector y relaxes the discrete indicator of membership in A.
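A minimal sketch of the resulting computation, on the toy 4-pixel weights used earlier (my own numbers): the generalized problem (D − W)y = λDy is solved via the equivalent symmetric problem D^(−1/2)(D − W)D^(−1/2) z = λz with y = D^(−1/2) z, and the sign of the second smallest eigenvector splits the pixels into two segments.

```python
import numpy as np

# Toy similarity/weight matrix (illustrative assumptions, not from the talk).
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.1, 0.0],
              [0.1, 0.1, 0.0, 0.8],
              [0.0, 0.0, 0.8, 0.0]])
d = W.sum(axis=1)
D_isqrt = np.diag(1.0 / np.sqrt(d))

# Symmetrized form of (D - W) y = lambda D y
L_sym = D_isqrt @ (np.diag(d) - W) @ D_isqrt

vals, vecs = np.linalg.eigh(L_sym)   # eigenvalues in ascending order
y2 = D_isqrt @ vecs[:, 1]            # second smallest generalized eigenvector

# Thresholding y2 at 0 groups pixels {0,1} against {2,3}:
segment = y2 > 0
print(segment)
```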

Slide 19: Interpretation as a Dynamical System

Slide 20: Interpretation as a Dynamical System (continued)

Slide 21: Brightness image segmentation

Slide 22: Brightness image segmentation (continued)

Slide 23: Results on color segmentation

Slide 24: Malik, Belongie, Shi & Leung, 99

Slide 25: (figure only)

Slide 26: Motion segmentation with Normalized Cuts: networks of spatio-temporal connections.

Slide 27: Results

Slide 28: Results (Shi & Malik, 98)

Slide 29: Results

Slide 30: Results

Slide 31: Stereoscopic data

Slide 32: Conclusion I
– Global is good, local is bad: formulated the NCut grouping criterion; derived an efficient NCut algorithm using a generalized eigensystem.
– Local pairwise similarities allow easy encoding and combination of Gestalt grouping cues.

Slide 33: Goals of this work
– Better understand why spectral segmentation works: a random walks view of the NCut algorithm, and a complete characterization of the ideal case (the ideal case is more realistic and general than previously thought).
– Learning feature combinations and object shape models: a maximum cross-entropy method for learning.
(Meila & Shi, 00)

Slide 34: The random walks view
– Construct the matrix P = D⁻¹S, where D = diag(d_1, d_2, …, d_n) and S = [S_ij].
– P is a stochastic matrix: Σ_j P_ij = 1.
– P is the transition matrix of a Markov chain with state space I.
– π = (1/vol I) [d_1 d_2 … d_n]ᵀ is the stationary distribution.
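These three facts can be checked directly on the toy weights used earlier (my own numbers): each row of S divided by its degree is a probability distribution, and π = d / vol I is left-invariant under P.

```python
import numpy as np

# P = D^{-1} S: each row of S is divided by its degree, making P row-stochastic.
# Toy similarity values (illustrative assumptions, not from the talk).
S = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.1, 0.0],
              [0.1, 0.1, 0.0, 0.8],
              [0.0, 0.0, 0.8, 0.0]])
d = S.sum(axis=1)
P = S / d[:, None]

print(P.sum(axis=1))            # each row sums to 1
pi = d / d.sum()                # stationary distribution: pi_i = d_i / vol(I)
print(np.allclose(pi @ P, pi))  # True: pi P = pi
```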

Slide 35: Reinterpreting the NCut criterion
– NCut(A, Ā) = P_{AĀ} + P_{ĀA}, where P_{AB} = Pr[A → B | A] under P.
– NCut looks for sets that trap the random walk.
– Related to the Cheeger constant and to conductance in Markov chains.

Slide 36: Reinterpreting the NCut algorithm
– (D − W) y = λ D y, with eigenvalues λ_1 ≤ … ≤ λ_n and eigenvectors y_1, y_2, …, y_n.
– Equivalently, P x = μ x with μ_k = 1 − λ_k and x_k = y_k, so μ_1 ≥ … ≥ μ_n.
– The NCut algorithm segments based on the second largest eigenvector of P.

Slide 37: So far…
We showed that the NCut criterion and its approximation, the NCut algorithm, have simple interpretations in the Markov chain framework:
– the criterion finds almost ergodic sets;
– the algorithm uses x_2 to segment.
Now: we will use the Markov chain view to show when the NCut algorithm is exact, i.e. when P has K piecewise constant eigenvectors.

Slide 38: Piecewise constant eigenvectors: examples
For a block-diagonal P (and S) with equal rows in each block, the leading eigenvectors of P are piecewise constant. (Eigenvalue and eigenvector plots shown on slide.)
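The block-diagonal case can be verified directly; the weights below are my own toy choice. With S block diagonal, the indicator vector of each block is an eigenvector of P = D⁻¹S with eigenvalue 1, i.e. piecewise constant on the partition:

```python
import numpy as np

# Block-diagonal toy S with blocks {0,1} and {2,3} (illustrative values).
S = np.array([[0.0, 0.9, 0.0, 0.0],
              [0.9, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.8],
              [0.0, 0.0, 0.8, 0.0]])
P = S / S.sum(axis=1)[:, None]   # P = D^{-1} S is block diagonal too

# The indicator of block {0,1} is piecewise constant and satisfies P v = v.
v = np.array([1.0, 1.0, 0.0, 0.0])
print(np.allclose(P @ v, v))     # True: eigenvector with eigenvalue 1
```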

Slide 39: Piecewise constant eigenvectors: general case
Theorem [Meila & Shi]. Let P = D⁻¹S with D non-singular, and let Δ be a partition of I. Then P has K piecewise constant eigenvectors w.r.t. Δ iff P is block stochastic w.r.t. Δ and the aggregated matrix P̂ is non-singular. (Eigenvalue and eigenvector plots shown on slide.)

Slide 40: Block stochastic matrices
– Let Δ = (A_1, A_2, …, A_K) be a partition of I.
– P is block stochastic w.r.t. Δ iff Σ_{j∈A_s} P_ij = Σ_{j∈A_s} P_i′j for all i, i′ in the same segment and all s.
– Intuitively: the Markov chain can be aggregated so that the random walk over Δ is Markov; P̂ is the transition matrix of the aggregated chain over Δ.
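A sketch of the aggregation step, with a hand-built P that is block stochastic w.r.t. {0,1} | {2,3} (my own example): summing each row over the segments gives the same K×K matrix P̂ regardless of which row in a block is used.

```python
import numpy as np

# Toy P, constructed to be block stochastic for the partition {0,1} | {2,3}
# (illustrative values): rows 0 and 1 have the same per-segment sums,
# as do rows 2 and 3.
P = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.4, 0.3, 0.1, 0.2],
              [0.1, 0.1, 0.3, 0.5],
              [0.2, 0.0, 0.4, 0.4]])
parts = [[0, 1], [2, 3]]

def aggregate(P, parts):
    """Phat[s, t] = sum_{j in A_t} P[i, j] for any representative i in A_s."""
    K = len(parts)
    Phat = np.zeros((K, K))
    for s, A in enumerate(parts):
        i = A[0]                      # any row in the block gives the same sums
        for t, B in enumerate(parts):
            Phat[s, t] = P[i, B].sum()
    return Phat

print(aggregate(P, parts))            # [[0.7 0.3] [0.2 0.8]]
```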

Slide 41: Learning image segmentation
– Targets: for i in segment A, P*_ij = 1/|A| if j ∈ A, and 0 if j ∉ A.
– Model: S_ij = exp(Σ_q θ_q f^q_ij), normalized to give P_ij.
(Diagram on slide: features f^q_ij → model S_ij → normalize → P_ij, trained against the targets P*_ij.)

Slide 42: The objective function
– J = −(1/|I|) Σ_{i∈I} Σ_{j∈I} P*_ij log P_ij, with the convention 0 log 0 = 0 and P*_ij interpreted as the flow i → j; minimizing J is equivalent to minimizing KL(P* || P).
– Maximum entropy formulation: max_P Σ H(j | i) subject to matching the target feature expectations for all q.
– A convex problem with convex constraints: at most one optimum.
– The gradient w.r.t. θ_q has the form (1/|I|) Σ_ij (P*_ij − P_ij) f^q_ij.

Slide 43: Experiments: the features
– IC (intervening contour): f^IC_ij = max_{k ∈ line(i,j)} Edge(k), where Edge(k) is the output of an edge filter at pixel k. Discourages transitions across edges.
– CL (colinearity/cocircularity): a function of the edge orientations at i and j along the line ij, built from terms such as (2 − cos 2α_i − cos 2α_j) normalized by (1 − cos α_0). Encourages transitions along flow lines.
– Random features (as controls).

Slide 44: Training examples (IC and CL)

Slide 45: Test examples: original image, edge map, segmentation

Slide 46: Conclusions
– Showed that the NCut segmentation method has a probabilistic interpretation.
– Introduced a principled method for combining features in image segmentation, and used supervised learning on synthetic data to find the optimal combination of features.
(Diagram: graph cuts ↔ generalized eigensystems ↔ Markov random walks.)
