Presentation on theme: "In the previous parts we have seen several segmentation methods. In this lecture we will see graph cut, another segmentation method."— Presentation transcript:

1

2  In the previous parts we have seen several segmentation methods.  In this lecture we will see graph cut, another segmentation method based on a powerful mathematical tool.

3 I am famous for my motivational skills. Everyone always says that they have to work a lot harder when I’m around

4 Global information – unlike most of the other segmentation methods we have seen, graph cut uses global information instead of local information.

5 Motivation! (application) (Figure: original image vs. edited image.) More interesting examples at the end of the lecture…

6  Definitions and mathematical reminders.  The Normalized cut method as a general tool (this will be the main part).  How to use graph cut on an image for segmentation.  Some examples of graph-cut-based segmentation.

7

8  Graph-Cut (From Wikipedia): In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets.

9

10  Graph-cut cost: for a weighted undirected graph, the cost of a cut is the sum of the weights of all the edges connecting the two parts. (Figure: example graph with edge weights 1, 3, 3, 4, 4, 3, 2.)

11–15 (Animation on the example graph: step by step, a cut is chosen that crosses the edges of weight 4, 2 and 3.) Cut = 4 + 2 + 3 = 9

16  Min cut: for two groups of vertices X and Y we define the cut as cut(X, Y) = \sum_{u \in X, v \in Y} w(u, v).  Then the min cut is a partition of the graph that minimizes this cut.
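As a minimal sketch of this definition (assuming the graph is given as a symmetric NumPy weight matrix; cut_cost and in_X are illustrative names, not from the lecture):

```python
import numpy as np

def cut_cost(W, in_X):
    """Cost of the cut between X and its complement Y.

    W    -- symmetric (n x n) weight matrix of the undirected graph
    in_X -- boolean vector of length n, True for nodes in X
    """
    in_Y = ~in_X
    # Sum the weights of all edges with one endpoint in X and one in Y.
    return W[np.ix_(in_X, in_Y)].sum()
```

Min cut is then the boolean vector in_X that minimizes this value over all non-trivial partitions.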

17  Clustering – dividing data into groups according to some division criterion.

18

19  Given an undirected weighted graph, we would like to find a partition of the graph into two or more clusters of reasonable size.

20 Every node is connected to every other node (all weights are equal to 1), so a plain min-cut partition will prefer to separate a single node:

21 Separating one node: the cut cost is 5 (five unit-weight edges are cut).

22 Separating two nodes: the cut cost is 8 (eight unit-weight edges are cut). And so on…

23  So we saw that min cut will favor partitioning the graph so that one of the groups has only one node.  That's not what we want:

24  We will now introduce a new definition, needed for the disassociation measure of the normalized cut: assoc(A, V) = \sum_{u \in A, t \in V} w(u, t), where V is the set of the graph vertices. (Figure: example weighted graph partitioned into A and B.)

25 (Figure: three example bipartitions of the graph into A and B.)

26  Now that we have that definition, we can measure the cost of the cut as a fraction of the total edge weight in the graph.

27  So, we will define Ncut as: Ncut(A, B) = \frac{cut(A, B)}{assoc(A, V)} + \frac{cut(A, B)}{assoc(B, V)},  and we will want to minimize it. (Figure: the example bipartition.)
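Continuing the same sketch (reusing the weight-matrix convention above; ncut_cost is an illustrative name):

```python
import numpy as np

def ncut_cost(W, in_A):
    """Ncut(A,B) = cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)."""
    in_B = ~in_A
    cut = W[np.ix_(in_A, in_B)].sum()  # cut(A, B)
    assoc_A = W[in_A, :].sum()         # assoc(A, V): weight from A to all nodes
    assoc_B = W[in_B, :].sum()         # assoc(B, V)
    return cut / assoc_A + cut / assoc_B
```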

28  To minimize Ncut we need to: ◦ Minimize cut(A, B), which will guarantee that strongly associated nodes stick together. ◦ Maximize assoc(A, V) and assoc(B, V), which will guarantee that every cluster has a reasonable number of nodes.

29  We can also see it by defining: Nassoc(A, B) = \frac{assoc(A, A)}{assoc(A, V)} + \frac{assoc(B, B)}{assoc(B, V)},  where assoc(A, A) is the total weight of the edges inside A.

30  This expression gets bigger as assoc(A, A) and assoc(B, B) get bigger, as well as when cut(A, B) gets smaller.

31  As it turns out, after some mathematical manipulation: Ncut(A, B) = 2 - Nassoc(A, B).  This implies that minimizing Ncut is equivalent to maximizing Nassoc(A, B): by maximizing assoc(A, A) and assoc(B, B) while minimizing the edges between A and B, we make Ncut minimal.

32  So, what is the problem?  The problem of finding the minimal Ncut is NP-complete!  Solution – instead of an indicator saying which node is in which group (1 or 0), we can work in the world of real numbers, where an approximation can be found efficiently.

33

34  For an undirected weighted graph, let W be a matrix where w_{i,j} denotes the weight of the edge e(i, j). (Figure: example 4-node graph a, b, c, d and its weight matrix W.)

35  For an undirected weighted graph, let D be a diagonal matrix where D_{i,i} denotes the sum of the weights of the edges e(i, j) over all other nodes j. (Figure: the same graph and its degree matrix D.)

36  For an undirected weighted graph, let L (Laplacian) be a matrix where L = D – W:
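In code, for a small made-up example (the exact weights in the slides' figure are not fully recoverable, so these values are illustrative):

```python
import numpy as np

# Hypothetical 4-node graph on a, b, c, d with symmetric edge weights.
W = np.array([[0, 2, 7, 0],
              [2, 0, 3, 4],
              [7, 3, 0, 3],
              [0, 4, 3, 0]], dtype=float)

D = np.diag(W.sum(axis=1))  # D[i, i] = total weight of edges touching node i
L = D - W                   # the graph Laplacian
```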

37  O.K., so we have the Laplacian matrix… what does it give us?

38  The Laplacian matrix L is a real symmetric matrix, and so all of its eigenvectors are perpendicular to each other.

39  L is positive semi-definite: x^T L x \ge 0 for every non-zero column vector x.  And thus its eigenvalues are non-negative.

40  L has n real, non-negative eigenvalues, where the smallest one is 0, corresponding to the all-ones eigenvector.

41  The number of times 0 appears as an eigenvalue of L is the number of connected components in the graph (in our case only one).  The smallest non-zero eigenvalue of L is called the spectral gap (we will use this eigenvalue).
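These properties can be checked numerically on the Laplacian from the sketch above (still using those assumed variables):

```python
import numpy as np

eigvals, eigvecs = np.linalg.eigh(L)  # L symmetric: real eigenpairs, ascending
assert eigvals[0] > -1e-10            # positive semi-definite, up to round-off
# eigvals[0] is ~0 with a constant (all-1's, after scaling) eigenvector; the
# example graph is connected, so 0 appears only once and eigvals[1] is the
# spectral gap.
```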

42

43  From Wikipedia: In the clustering of data, spectral clustering techniques make use of the eigenvalues of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions.  Ncut is an example of spectral clustering; let's see how…

44  For a graph partition into two groups A and B, an indicator vector y is a 1×n boolean vector such that for every node i it tells us whether node i is in group A or not (1 or 0). (Figure: example 6-node graph a–f with edge weights 1, 3, 3, 4, 4, 3, 2.)

45  As Jianbo Shi and Jitendra Malik proved in their paper from 1997, using the Laplacian matrix properties we get: Ncut(A, B) = \frac{y^T (D - W) y}{y^T D y},  where y is an indicator vector, but instead of \{1, 0\} its entries take the values \{1, -b\}.  What is -b…? L = D - W

46  If we take the diagonal matrix D, then b is the sum of all d_{i,i} where node i is in A, divided by the sum of all d_{i,i} where node i is in B: b = \frac{\sum_{i \in A} d_{i,i}}{\sum_{i \in B} d_{i,i}}.

47  Expanding with L = D - W (D is diagonal with entries d_{i,i}): y^T (D - W) y = \sum_i d_{i,i} y_i^2 - \sum_{i,j} w_{i,j} y_i y_j = \frac{1}{2} \sum_{i,j} w_{i,j} (y_i - y_j)^2.

48  We get: y^T L y = \frac{1}{2} \sum_{i,j} w_{i,j} (y_i - y_j)^2.  And we can see that for minimization, when w_{i,j} is big we would like y_i and y_j to be as close as possible, and when w_{i,j} is small we care a little less about it.

49  We said that Ncut(A, B) = \frac{y^T (D - W) y}{y^T D y}, and we will want to minimize it.  But keeping y discrete (y_i \in \{1, -b\}) is NP-complete.

50  So, instead of forcing y_i \in \{1, -b\}, we can perform a relaxation by allowing real values in y, and then it becomes an eigenvector/eigenvalue problem: (D - W) y = \lambda D y,  where the y's are the eigenvectors and the eigenvalues represent the cut costs.

51  Now, we will want to find the smallest eigenvalue (corresponding to the smallest cut) of (D - W) y = \lambda D y.  But we saw that its eigenvector is the vector of all ones. That is not what we wanted, because we would get a partition with all nodes in the same group.

52  Solution – we will take the eigenvector of the second smallest eigenvalue – the spectral gap.
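A minimal sketch of this step, assuming SciPy is available (second_smallest_eigvec is an illustrative name):

```python
import numpy as np
from scipy.linalg import eigh

def second_smallest_eigvec(W):
    """Relaxed Ncut: solve (D - W) y = lambda * D y and return the
    eigenvector of the second smallest eigenvalue (the spectral gap)."""
    D = np.diag(W.sum(axis=1))
    # eigh solves the symmetric generalized eigenproblem; eigenvalues come
    # back in ascending order. D must be positive definite, i.e. every node
    # needs at least one positive-weight edge.
    eigvals, eigvecs = eigh(D - W, D)
    return eigvecs[:, 1]  # column 1 <-> the second smallest eigenvalue
```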

53

54

55  Eigenvalues correspond to the cut cost.  The smallest eigenvalue is 0, but its eigenvector is all 1's, so it is not useful for us.  We choose the second smallest eigenvalue, and its eigenvector is our partition of the graph.

56–57  We saw that nodes with strong weights between them get similar values in their eigenvector coordinates, so we can take a threshold T on the eigenvector entries and return to the discrete world! (Figure: example graph a, b, c, d with eigenvector entries −1.6, −0.9, 0.9, 2; thresholding splits {a, b} from {c, d}.)
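A sketch of that thresholding step (T = 0 is an assumed default; Shi and Malik also suggest searching over several values of T for the one that minimizes Ncut):

```python
def bipartition(y, T=0.0):
    """Threshold the relaxed eigenvector y back to a discrete partition:
    entries above T go to one group, the rest to the other."""
    return y > T  # boolean membership vector, usable with ncut_cost above
```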

58  Graph cut divides our graph into only two parts. But what if we want to divide further?  One option is to use the 3rd eigenvector to get the next division. But we need to remember that each partition contains some error, which is why this is not recommended.

59  Given G = (V, E), compute the matrices D and W.  Compute the eigenvector with the second smallest eigenvalue and perform a bipartition of the graph, repartitioning recursively as sketched below.  Stopping conditions: ◦ Check the stability of the eigenvector values – see whether the values are continuous. ◦ Ncut < T, where T is a predefined value used to decide whether the cutting should stop.
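Putting the pieces together, a hedged sketch of the recursive procedure, reusing the helpers from the earlier sketches (the stopping threshold value is an assumption, not from the slides):

```python
import numpy as np

def recursive_ncut(W, nodes, T=0.2):
    """Recursive two-way Ncut (sketch): split, check the stopping
    condition, then recurse into each side. Returns a list of segments."""
    y = second_smallest_eigvec(W)
    in_A = bipartition(y)
    # Stop if the split is degenerate or its Ncut cost exceeds the threshold.
    if in_A.all() or not in_A.any() or ncut_cost(W, in_A) > T:
        return [nodes]
    in_B = ~in_A
    return (recursive_ncut(W[np.ix_(in_A, in_A)], nodes[in_A], T)
            + recursive_ncut(W[np.ix_(in_B, in_B)], nodes[in_B], T))

# Example usage: segments = recursive_ncut(W, np.arange(W.shape[0]))
```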

60

61

62  Given an image, we will construct a graph G = (V, E), where for each pixel we assign a node.  Also, for pixels i and j we denote w_{i,j} as the similarity between the pixels.

63  Similarity between two pixels i and j can be defined as: w_{i,j} = e^{-\|F(i) - F(j)\|^2 / \sigma_F^2} \cdot e^{-\|X(i) - X(j)\|^2 / \sigma_X^2}  for pixels within radius r (and w_{i,j} = 0 when \|X(i) - X(j)\| \ge r), where F(i) is a feature vector based on intensities, colors, etc., and X(i) is the spatial location.
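A sketch of building this affinity matrix for a small grayscale image (the radius r and the two sigma values are illustrative choices, not taken from the slides):

```python
import numpy as np

def image_affinity(img, r=5.0, sigma_F=0.1, sigma_X=4.0):
    """Pixel-affinity matrix W for a small grayscale image in [0, 1]."""
    h, w = img.shape
    ii, jj = np.mgrid[0:h, 0:w]
    X = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)  # locations
    F = img.ravel().astype(float)                                 # intensities
    n = h * w
    W = np.zeros((n, n))
    for i in range(n):
        d_X = np.linalg.norm(X - X[i], axis=1)  # spatial distance to pixel i
        near = d_X < r                          # connect only within radius r
        d_F = np.abs(F[near] - F[i])            # feature (intensity) distance
        W[i, near] = (np.exp(-d_F**2 / sigma_F**2)
                      * np.exp(-d_X[near]**2 / sigma_X**2))
    np.fill_diagonal(W, 0.0)                    # no self-loops
    return W
```

The resulting W can be fed directly to the Ncut machinery sketched above (for real images the matrix is large and sparse, so a sparse eigensolver would be used in practice).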

64  We can see that for two pixels i, j the values w_{i,j} correspond to the similarity of their feature vectors and also to their spatial distance.

65  Graph-cut intensity-based example:

66

67

68 Movie time

69  Graph cut is one kind of spectral clustering.  We use a threshold to divide the graph after the dimensionality reduction.  Although the math behind it is a little complicated at first, the method itself is simple to implement.

70

71  J. Shi and J. Malik, "Normalized Cuts and Image Segmentation," Proc. CVPR 1997; also IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 888–905, August 2000.  Slides courtesy: Eric Xing, M. Hein & U. V. Luxburg.  R. Szeliski, Computer Vision: Algorithms and Applications, September 3, 2010 draft, pp. 296–304.  https://www.youtube.com/watch?v=7wCC2NaVLjs

