
1 Image Segmentation
Image segmentation is the operation of partitioning an image into a collection of connected sets of pixels:
1. into regions, which usually cover the image
2. into linear structures, such as line segments and curve segments
3. into 2D shapes, such as circles, ellipses, and ribbons (long, symmetric regions)

2 Types of Segmentation
1. Region-based segmentation: uses a similarity metric among pixels
2. Edge-based segmentation: uses a dissimilarity metric among pixels

3 Result of Segmentation
Partial segmentation: context independent; does not always correspond to objects.
Complete segmentation: context dependent; uses high-level information.

4 Example 1: Region-Based Segmentation

5 Example 2: Edge-Based Straight Lines

6 Example 3: Lines and Circular Arcs

7 Region-Based Segmentation: Segmentation Criteria (from Pavlidis)
A segmentation is a partition of an image I into a set of regions S_i satisfying:
1. ∪ S_i = S — the partition covers the whole image.
2. S_i ∩ S_j = ∅, i ≠ j — no regions intersect.
3. ∀ S_i, P(S_i) = true — the homogeneity predicate is satisfied by each region.
4. P(S_i ∪ S_j) = false for i ≠ j, S_i adjacent to S_j — the union of adjacent regions does not satisfy it.

8 So all we have to do is define and implement the similarity predicate P. But what do we want to be similar in each region? Is there any property that will cause the regions to be meaningful objects?

9 (figure)

10 (figure)

11 Methods of Region Segmentation
1. Region growing
2. Split and merge
3. Clustering

12 Region Growing
Region-growing techniques start with one pixel of a potential region and try to grow it by adding adjacent pixels until the pixels being compared are too dissimilar. The first pixel selected can be just the first unlabeled pixel in the image, or a set of seed pixels can be chosen from the image. Usually a statistical test is used to decide which pixels can be added to a region.

13 The RGGROW Algorithm
Let R be the N-pixel region so far and P be a neighboring pixel with gray tone y. Define the mean X̄ and scatter S² (sample variance) by
X̄ = (1/N) Σ_{(r,c) ∈ R} I(r,c)
S² = (1/N) Σ_{(r,c) ∈ R} (I(r,c) − X̄)²

14 The RGGROW Statistical Test
The T statistic is defined by
T = [ ((N−1)·N / (N+1)) · (y − X̄)² / S² ]^{1/2}

15 Decision and Update
For the T distribution, statistical tables give us the probability Pr(T ≤ t) for a given number of degrees of freedom and a confidence level. From this, pick a suitable threshold t. If the computed T ≤ t for the desired confidence level, add y to region R and update X̄ and S². If T is too high, the value y is not likely to have arisen from the population of pixels in R: start a new region.
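The grow/test/update loop above can be sketched in code. This is an illustrative sketch, not the original RGGROW implementation: it grows a single region from one seed over 4-connected neighbors, accepts a pixel when its T statistic is at most a fixed threshold, and the incremental mean/scatter update and the small floor on S² are assumptions added for numerical safety.

```python
# Illustrative sketch of RGGROW (assumed implementation, not the original):
# grow one region from a seed, adding 4-neighbors whose T statistic is small.
from collections import deque

def rggrow(image, seed, t_threshold=3.0):
    """Grow one region from `seed`; returns the set of (r, c) pixels accepted."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    n, mean, m2 = 1, float(image[seed[0]][seed[1]]), 0.0
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                y = float(image[nr][nc])
                s2 = max(m2 / n, 1e-6)  # scatter S^2, floored to avoid /0
                t = ((n - 1) * n / (n + 1) * (y - mean) ** 2 / s2) ** 0.5
                if t <= t_threshold:
                    region.add((nr, nc))
                    frontier.append((nr, nc))
                    n += 1                     # update N, X-bar, and scatter
                    delta = y - mean
                    mean += delta / n
                    m2 += delta * (y - mean)
    return region

# Two flat areas: growing from (0,0) should capture only the 10-valued pixels.
img = [[10, 10, 50],
       [10, 10, 50],
       [10, 10, 50]]
grown = rggrow(img, (0, 0))
```

Note the order dependence mentioned on the next slide: the result depends on which neighbors are visited first, because the running mean and scatter change as pixels are added.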

16 RGGROW Example
Image segmentation result: not so great, and it is order dependent.

17 Split and Merge
1. Start with the whole image.
2. If the variance is too high, break into quadrants.
3. Merge any adjacent regions that are similar enough.
4. Repeat steps 2 and 3 iteratively until no more splitting or merging occurs.
Idea: good. Results: blocky.
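The split half of the procedure can be sketched as a recursive quadrant subdivision. This is an assumed minimal implementation (the variance threshold is arbitrary, and the merge step is omitted for brevity):

```python
# Minimal split step of split-and-merge (assumed sketch; merge omitted):
# recursively split square blocks whose variance exceeds a threshold.
def variance(img, r0, c0, size):
    vals = [img[r][c] for r in range(r0, r0 + size) for c in range(c0, c0 + size)]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def split(img, r0, c0, size, var_thresh, leaves):
    """Split a size x size block into quadrants while its variance is too high."""
    if size == 1 or variance(img, r0, c0, size) <= var_thresh:
        leaves.append((r0, c0, size))  # homogeneous leaf block
        return
    h = size // 2
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        split(img, r0 + dr, c0 + dc, h, var_thresh, leaves)

img = [
    [10, 10, 90, 90],
    [10, 10, 90, 90],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
leaves = []
split(img, 0, 0, 4, var_thresh=5.0, leaves=leaves)
```

The blocky results the slide mentions come directly from this structure: region boundaries can only fall on quadrant edges, which is what the merge step partially repairs.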

18 (figure)

19 Clustering
There are K clusters C_1, …, C_K with means m_1, …, m_K. The least-squares error is defined as
D = Σ_{k=1}^{K} Σ_{x_i ∈ C_k} || x_i − m_k ||².
Out of all possible partitions into K clusters, choose the one that minimizes D. Why don't we just do this? If we could, would we get meaningful objects?

20 Some Clustering Methods
- K-means clustering and variants
- Histogram-based clustering and recursive variant
- Graph-theoretic clustering
- EM clustering

21 K-Means Clustering
Form K-means clusters from a set of n-dimensional vectors:
1. Set ic (iteration count) to 1.
2. Choose randomly a set of K color means m_1(1), …, m_K(1).
3. For each vector x_i, compute D(x_i, m_k(ic)) for k = 1, …, K and assign x_i to the cluster C_j with the nearest mean.
4. Increment ic by 1 and update the means to get m_1(ic), …, m_K(ic).
5. Repeat steps 3 and 4 until C_k(ic) = C_k(ic+1) for all k.
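The five steps above can be written compactly. This sketch replaces the random initialization of step 2 with taking the first K points as initial means, purely so the example is deterministic:

```python
# Compact K-means following the slide's steps (deterministic init is an
# assumption for this sketch; the slide uses random initial means).
import math

def kmeans(points, k, max_iter=100):
    means = [list(p) for p in points[:k]]   # step 2 (deterministic variant)
    assign = [0] * len(points)
    for _ in range(max_iter):
        # step 3: assign each vector to the cluster with the nearest mean
        new_assign = [
            min(range(k), key=lambda j: math.dist(p, means[j])) for p in points
        ]
        # step 4: update the means
        for j in range(k):
            members = [p for p, a in zip(points, new_assign) if a == j]
            if members:
                means[j] = [sum(dim) / len(members) for dim in zip(*members)]
        # step 5: stop when the assignments no longer change
        if new_assign == assign:
            break
        assign = new_assign
    return assign, means

labels, centers = kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], k=2)
```

With two tight point pairs, the algorithm converges in a few iterations to one cluster per pair.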

22 K-Means Example 1

23 K-Means Example 2

24 K-Means Variants
- Different ways to initialize the means
- Different stopping criteria
- Dynamic methods for determining the right number of clusters (K) for a given image
- Isodata: K-means with split and merge

25 ISODATA Clustering

26 Histogram Thresholding
- Seek the modes of a multimodal histogram.
- Use knowledge-directed thresholding.
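A simple form of mode seeking is to threshold at the deepest valley between the two dominant histogram peaks. This is an illustrative sketch only (real methods usually smooth the histogram first, and the example histogram is made up):

```python
# Illustrative valley-seeking threshold (assumed sketch): find the two
# highest local maxima of a histogram and return the deepest bin between them.
def valley_threshold(hist):
    # local maxima: bins strictly higher than both neighbors
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
    # the two dominant peaks, in index order
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    # deepest valley between them
    return min(range(p1, p2 + 1), key=lambda i: hist[i])

hist = [1, 5, 9, 5, 2, 1, 3, 8, 12, 6, 2]   # bimodal: peaks at bins 2 and 8
t = valley_threshold(hist)
```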

27 Histogram-Based Clustering

28 Valley Seeking

29 Image Segmentation by Thresholding

30 CSE 803 Fall 2008, Stockman
Otsu's method assumes K = 2. It searches for the threshold t that minimizes the intra-class (within-class) variance, which is equivalent to maximizing the between-class variance.
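Because minimizing the within-class variance is equivalent to maximizing the between-class variance, an implementation can simply sweep every threshold and keep the one with the largest between-class variance. A minimal sketch (the example histogram is illustrative):

```python
# Sketch of Otsu's two-class threshold: exhaustively search for the t that
# maximizes the between-class variance (equivalently, minimizes the
# within-class variance).
def otsu_threshold(hist):
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(len(hist)):
        w0 += hist[t]                 # weight of class 0 (levels <= t)
        if w0 == 0 or w0 == total:
            continue                  # one class is empty: skip
        sum0 += t * hist[t]
        m0 = sum0 / w0                # mean of class 0
        m1 = (sum_all - sum0) / (total - w0)   # mean of class 1
        between = w0 * (total - w0) * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Bimodal histogram: mass at levels 2-3 and 7-8; the split falls between them.
hist = [0, 0, 5, 5, 0, 0, 0, 5, 5, 0]
t = otsu_threshold(hist)
```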

31 Thresholding

32

33

34

35 Ohlander's Recursive Histogram-Based Clustering
Input: color images of real indoor and outdoor scenes.
- Starts with the whole image and finds its histogram.
- Selects the R, G, or B histogram with the largest peak and finds the connected regions from that peak.
- Converts to regions on the image, creates a mask for each region, and recomputes the histogram for each region.
- Pushes each mask onto a stack for further clustering.

36 Ohlander's Method
(Figure: separate R, G, B histograms; regions labeled sky, tree1, tree2.)
Ohta suggested using (R+G+B)/3, (R−B)/2, and (2G−R−B)/4 instead of (R, G, B).

37 Jianbo Shi's Graph Partitioning
An image is represented by a graph whose nodes are pixels or small groups of pixels. The goal is to partition the vertices into disjoint sets so that the similarity within each set is high and the similarity across different sets is low.

38 Minimal Cuts
Let G = (V, E) be a graph. Each edge (u, v) has a weight w(u, v) that represents the similarity between u and v. G can be broken into two disjoint graphs with node sets A and B by removing the edges that connect these sets. Let
cut(A, B) = Σ_{u ∈ A, v ∈ B} w(u, v).
One way to segment G is to find the minimal cut.

39 Cut(A, B)
(Figure: two node sets A and B joined by edges with weights w1, w2; cut(A, B) = Σ_{u ∈ A, v ∈ B} w(u, v).)

40 Normalized Cut
The minimal cut favors cutting off small node groups, so Shi proposed the normalized cut:
Ncut(A, B) = cut(A, B) / asso(A, V) + cut(A, B) / asso(B, V)
asso(A, V) = Σ_{u ∈ A, t ∈ V} w(u, t)
Association: how much A is connected to the graph V as a whole.

41 Example Normalized Cut
(Figure: a weighted graph partitioned into sets A and B, with Ncut(A, B) = cut(A, B)/21 + cut(A, B)/16.)

42 Shi turned graph cuts into an eigenvector/eigenvalue problem.
Set up a weighted graph G = (V, E):
- V is the set of N pixels.
- E is a set of weighted edges (weight w_ij gives the similarity between nodes i and j).

43 Define two matrices, D and W:
- Length-N vector d: d_i is the sum of the weights from node i to all other nodes.
- N × N matrix D: a diagonal matrix with d on its diagonal.
- N × N symmetric similarity matrix W: W_ij = w_ij.

44 Edge Weights (figure)

45 Let x be a characteristic vector of a set A of nodes:
- x_i = 1 if node i is in A
- x_i = −1 otherwise
Let y be a continuous approximation to x.

46 Solve the generalized eigensystem
(D − W) y = λ D y
for the eigenvectors y and eigenvalues λ. Use the eigenvector y with the second-smallest eigenvalue to bipartition the graph (y ⇒ x ⇒ A). If further subdivision is merited, repeat recursively.
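The generalized problem (D − W)y = λDy can be solved via the equivalent symmetric form D^{-1/2}(D − W)D^{-1/2}z = λz. A numpy-based sketch (assumed implementation; the 4-node example graph is made up, with two tightly connected pairs joined by weak edges):

```python
# Spectral bipartition sketch: solve (D - W) y = lambda D y through the
# symmetric normalized Laplacian, then split nodes by the sign of the
# eigenvector with the second-smallest eigenvalue.
import numpy as np

def ncut_bipartition(W):
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.diag(d) - W                      # unnormalized Laplacian D - W
    L_sym = d_inv_sqrt @ L @ d_inv_sqrt     # symmetric normalized form
    vals, vecs = np.linalg.eigh(L_sym)      # eigenvalues in ascending order
    y = d_inv_sqrt @ vecs[:, 1]             # second-smallest eigenvector
    return y >= 0                           # boolean membership of one set

# Two tight pairs (0-1 and 2-3) connected by weak cross edges.
W = np.array([[0.0, 5.0, 0.1, 0.0],
              [5.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 5.0],
              [0.0, 0.1, 5.0, 0.0]])
labels = ncut_bipartition(W)
```

The cut falls on the weak edges, separating {0, 1} from {2, 3}, which is exactly the behavior the normalized cut criterion rewards.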

47 How Shi Used the Procedure
Shi defined the edge weights w(i, j) by
w(i, j) = exp(−||F(i) − F(j)||² / σ_I) · exp(−||X(i) − X(j)||² / σ_X)  if ||X(i) − X(j)||₂ < r, and 0 otherwise,
where X(i) is the spatial location of node i and F(i) is the feature vector for node i, which can be intensity, color, texture, motion, etc. The formula is set up so that w(i, j) is 0 for nodes that are too far apart.

48 Examples of Shi Clustering
See Shi's web page: http://www-2.cs.cmu.edu/~jshi

49 Representation of Regions

50 Overlay

51 Chain Codes for Boundaries

52 Quad Trees
Quad trees divide an image into quadrants. Node codes: M = mixed, E = empty, F = full.
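The M/E/F coding can be sketched as a small recursive encoder. This is an assumed illustration (the string-based output format is just for readability; real quadtrees store a tree structure):

```python
# Quadtree encoder in the slide's M/E/F coding (assumed sketch):
# a binary block is F if all ones, E if all zeros, otherwise M with four
# recursively coded quadrants.
def quadtree(img, r0, c0, size):
    vals = {img[r][c] for r in range(r0, r0 + size) for c in range(c0, c0 + size)}
    if vals == {1}:
        return "F"          # full
    if vals == {0}:
        return "E"          # empty
    h = size // 2           # mixed: recurse into the four quadrants
    return "M(" + ",".join(
        quadtree(img, r0 + dr, c0 + dc, h)
        for dr, dc in ((0, 0), (0, h), (h, 0), (h, h))
    ) + ")"

img = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]
code = quadtree(img, 0, 0, 4)
```

The compression comes from large uniform areas collapsing into single E or F nodes instead of per-pixel storage.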

53 Oct Trees
- 3D images can be segmented the same way: oct trees subdivide a volume into 8 octants.
- The same coding (M, E, F) is used.
- Software is available for doing 3D image processing and solving differential equations using the octree representation.
- Octrees can achieve a large compression factor.

54 Segmentation with Clustering
Mean shift description: http://cmp.felk.cvut.cz/cmp/courses/ZS1/slidy/meanShiftSeg.pdf
Expectation Maximization demo: http://www.neurosci.aist.go.jp/~akaho/MixtureEM.html
Tutorial: http://www-2.cs.cmu.edu/~awm/tutorials/gmm13.pdf

55 Mean Shift
Adapted from Yaron Ukrainitz & Bernard Sarel

56 Intuitive Description (animation over slides 56–62)
Distribution of identical billiard balls; region of interest; center of mass; mean-shift vector.
Objective: find the densest region.


63 What is Mean Shift?
A tool for finding modes in a set of data samples, manifesting an underlying probability density function (PDF) in R^N.
- Non-parametric density estimation: data → discrete PDF representation.
- Non-parametric density GRADIENT estimation (mean shift) → PDF analysis.
PDF in feature space: color space, scale space — actually any feature space you can conceive.
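The mode-finding idea can be shown in one dimension: repeatedly move a window center to the mean of the samples it contains until it stops moving. This is an illustrative sketch with a flat (uniform) kernel and made-up data; practical mean shift uses smooth kernels and runs in the full feature space:

```python
# 1-D mean-shift step with a flat kernel (assumed sketch): iterate the
# window center toward the local mean until it converges on a mode.
def mean_shift_mode(data, start, radius, tol=1e-6, max_iter=100):
    center = float(start)
    for _ in range(max_iter):
        window = [x for x in data if abs(x - center) <= radius]
        new_center = sum(window) / len(window)   # mean of samples in window
        if abs(new_center - center) < tol:
            break
        center = new_center
    return center

# Two clumps of samples: starting near the left clump converges to its mode.
data = [1.0, 1.2, 0.8, 1.1, 5.0, 5.2, 4.9]
mode = mean_shift_mode(data, start=2.0, radius=1.5)
```

Starting the same procedure near 5 instead converges to the right-hand clump; running it from every sample and grouping the converged modes is the basis of mean-shift clustering.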

64 Non-Parametric Density Estimation
Assumption: the data points are sampled from an underlying PDF.
(Figure: assumed underlying PDF vs. real data samples.)
Data point density implies PDF value!

65–66 Non-Parametric Density Estimation
(Figures: assumed underlying PDF vs. real data samples.)

67 Parametric Density Estimation
Assumption: the data points are sampled from an underlying PDF.
(Figure: assumed underlying PDF estimate vs. real data samples.)

68 EM Demo
Demo: http://www.neurosci.aist.go.jp/~akaho/MixtureEM.html
Tutorial: http://www-2.cs.cmu.edu/~awm/tutorials/gmm13.pdf

69 EM Applications
- Blobworld: image segmentation using Expectation-Maximization and its application to image querying.
- Yi's generative/discriminative learning of object classes in color images.

70 Image Segmentation with EM: Symbols
- The feature vector for pixel i is called x_i.
- There are going to be K segments; K is given.
- The j-th segment has a Gaussian distribution with parameters θ_j = (μ_j, Σ_j).
- The α_j's are the weights (which sum to 1) of the Gaussians.
- Θ is the collection of parameters: Θ = (α_1, …, α_K, θ_1, …, θ_K).

71 Initialization
Each of the K Gaussians will have parameters θ_j = (μ_j, Σ_j), where
- μ_j is the mean of the j-th Gaussian.
- Σ_j is the covariance matrix of the j-th Gaussian.
The covariance matrices are initialized to the identity matrix. The means can be initialized by finding the average feature vector in each of K windows in the image; this is data-driven initialization.

72 E-Step

73 M-Step
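The E-step and M-step slides give the full vector equations; a scalar 1-D, two-component sketch shows the alternation concretely. This is an assumed illustration (fixed iteration count, variance floored for stability, made-up data):

```python
# Illustrative 1-D, two-component EM for a Gaussian mixture (assumed sketch).
import math

def em_gmm_1d(xs, mu, var=(1.0, 1.0), alpha=(0.5, 0.5), iters=50):
    mu, var, alpha = list(mu), list(var), list(alpha)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [alpha[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 / math.sqrt(2 * math.pi * var[j]) for j in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances from responsibilities
        for j in range(2):
            nj = sum(r[j] for r in resp)
            alpha[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, xs)) / nj, 1e-6)
    return mu, var, alpha

# Two well-separated clumps; EM recovers their means and equal weights.
xs = [0.0, 0.1, -0.1, 10.0, 10.1, 9.9]
mu, var, alpha = em_gmm_1d(xs, mu=(1.0, 9.0))
```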

74 Sample Results from Blobworld

75 Chapter 10 Image Segmentation

76 Edge Based Segmentation

77

78 Detect Edges → Create Boundaries
- Hough transform
- Graph-based methods

79 Chapter 10 Image Segmentation

80 1. Hough Transform
Given a set of points, find the best line or curve to represent these points (e.g., y = mx + b in the image).

81 Line Detection
All the points lying on a line in x-y space can be mapped into a single point in the parameter space.
(Figure: x-y space vs. parameter space.)

82 Accumulator cell A(p, q). The form y = mx + b is not suitable (why? the slope m is unbounded for near-vertical lines).

83 Line Segments in Polar Coordinates
The equation generally used is d = r·sin θ + c·cos θ, where
- d is the distance from the line to the origin,
- θ is the angle the perpendicular makes with the column axis.
(Figure: a line in (r, c) coordinates with its perpendicular distance d and angle θ.)

84 Polar Coordinates in Hough Transform

85 Polar Coordinates
−90° < θ < 90°, −√D < ρ < √D

86 Generalized Hough Transform

87 Procedure to Accumulate Lines
Set accumulator array A to all zero.
Set point-list array PTLIST to all NIL.
For each pixel (R, C) in the image {
  compute gradient magnitude GMAG
  if GMAG > gradient_threshold {
    compute quantized tangent angle THETAQ
    compute quantized distance to origin DQ
    increment A(DQ, THETAQ)
    update PTLIST(DQ, THETAQ)
  }
}
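The accumulation step can be sketched in Python. This simplified variant (an assumption, not the slide's exact procedure) skips the gradient computation and lets each edge point vote for all quantized angles, using the polar form d = r·sin θ + c·cos θ from slide 83:

```python
# Simplified Hough accumulator (assumed sketch): each edge point votes for
# every quantized angle, rather than only the gradient-derived angle used
# in the slide's procedure.
import math
from collections import defaultdict

def hough_lines(points, n_theta=180, d_quant=1.0):
    acc = defaultdict(list)           # (d_bin, theta_bin) -> point list (PTLIST)
    for (r, c) in points:
        for tq in range(n_theta):
            theta = math.radians(tq - 90)              # -90 .. 89 degrees
            d = r * math.sin(theta) + c * math.cos(theta)
            acc[(round(d / d_quant), tq)].append((r, c))
    return acc

# Collinear points on the row r = 2 (a horizontal line)
pts = [(2, 0), (2, 3), (2, 7), (2, 9)]
acc = hough_lines(pts)
best = max(acc, key=lambda k: len(acc[k]))
```

All four collinear points pile up in the same (d, θ) bin, which is the peak the extraction procedure on slide 91 looks for; using the gradient direction, as the slide does, replaces the inner loop with a single vote per pixel.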

88 Example
(Figure: a small gray-tone image with values 0 and 100; the quantized-distance image DQ and quantized-angle image THETAQ derived from it; the accumulator array A indexed by distance and angle with its vote counts; and the corresponding PTLIST entries, e.g. one bin collecting the points (1,3)(1,4)(2,3)(2,4) and another collecting (3,1)(3,2)(4,1)(4,2)(4,3).)

89 Chalmers University of Technology (figure)

90 (figure)

91 How do you extract the line segments from the accumulators?
pick the bin of A with highest value V
while V > value_threshold {
  order the corresponding point list from PTLIST
  merge in high-gradient neighbors within 10 degrees
  create line segment from final point list
  zero out that bin of A
  pick the bin of A with highest value V
}

92 Line Segments from Hough Transform

93 A Nice Hough Variant: The Burns Line Finder
1. Compute gradient magnitude and direction at each pixel.
2. For high-gradient-magnitude points, assign direction labels (1–8) to two symbolic images for two different quantizations: one with bins centered at 0°, 45°, …, the other offset by 22.5°.
3. Find connected components of each symbolic image.
- Each pixel belongs to 2 components, one for each symbolic image.
- Each pixel votes for its longer component.
- Each component receives a count of the pixels that voted for it.
- The components that receive majority support are selected.

94–101 Chapter 10 Image Segmentation (figure slides)

