
1 Segmentation
Course web page: vision.cis.udel.edu/~cv
May 7, 2003 (Lecture 31)

2 Announcements
Read Chapter 22-22.1 in Forsyth & Ponce on classification for Friday
HW 5 due Friday
Class on Friday is at 4 pm due to Honors Day

3 Outline
Graph theory basics
Eigenvector methods for segmentation
Hough transform

4 Graph Theory Terminology
Graph G: a set of vertices V and edges E connecting pairs of vertices
Each edge is represented by the pair of vertices (a, b) it joins
A weighted graph has a weight w(a, b) associated with each edge
Connectivity
– Two vertices are connected if there is a sequence of edges joining them
– A graph is connected if all pairs of vertices are connected
– Any graph can be partitioned into connected components (CCs) such that each CC is a connected subgraph and there are no edges between vertices in different CCs
(figure from Forsyth & Ponce)
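As a concrete illustration of connected components (my own sketch, not from the slides), the following finds the CCs of an undirected graph given as a symmetric weight matrix using a breadth-first search; the matrix W and the "edge exists where weight > 0" convention are assumptions for the example.

```python
import numpy as np
from collections import deque

def connected_components(W):
    """Return the connected components (sets of vertex indices) of an
    undirected graph given by a symmetric weight matrix W.
    An edge exists wherever W[i, j] > 0."""
    n = W.shape[0]
    seen = [False] * n
    components = []
    for start in range(n):
        if seen[start]:
            continue
        comp, queue = set(), deque([start])
        seen[start] = True
        while queue:
            u = queue.popleft()
            comp.add(u)
            for v in np.flatnonzero(W[u] > 0):
                v = int(v)
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
        components.append(comp)
    return components

# Two separate triangles -> two connected components
W = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[a, b] = W[b, a] = 1.0
print(connected_components(W))   # [{0, 1, 2}, {3, 4, 5}]
```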

5 Graphs for Clustering
Tokens are vertices
Weights on edges are proportional to token similarity
Cut: the "weight" of the edges joining two sets of vertices A and B:
cut(A, B) = Σ_{a ∈ A, b ∈ B} w(a, b)
Segmentation: look for a minimum cut in the graph
– Recursively cut components until the regions are uniform enough


7 Representing Graphs as Matrices
Use an N × N matrix W for an N-vertex graph
Entry W(i, j) is the weight on the edge between vertices i and j
Undirected graphs have symmetric weight matrices
(figure from Forsyth & Ponce: an example graph and its weight matrix)
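To make the matrix view and the cut from slide 5 concrete, here is a minimal sketch (my own illustration, not from the slides) that builds a symmetric weight matrix for a toy graph and evaluates the cut weight between two vertex sets.

```python
import numpy as np

def cut_weight(W, A, B):
    """cut(A, B) = sum of edge weights between vertex set A and vertex set B."""
    A, B = np.asarray(sorted(A)), np.asarray(sorted(B))
    return W[np.ix_(A, B)].sum()

# Toy undirected graph: two tight pairs joined by one weak edge
W = np.zeros((4, 4))
for a, b, w in [(0, 1, 0.9), (2, 3, 0.8), (1, 2, 0.1)]:
    W[a, b] = W[b, a] = w

print(cut_weight(W, {0, 1}, {2, 3}))   # 0.1 -> the weak edge is the minimum cut
```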

8 Affinity Measures
Affinity A(i, j) between tokens i and j should be proportional to their similarity
Based on a metric over some visual feature(s)
– Position: e.g., A(i, j) = exp(−(p_i − p_j)^T (p_i − p_j) / (2σ_d^2)), where p_i is the position of token i
– Intensity
– Color
– Texture
These are the weights in an affinity graph A over the tokens
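A minimal sketch of a position-based Gaussian affinity matrix of the kind described above; the point array, the scale sigma_d, and the function name are assumptions for illustration.

```python
import numpy as np

def position_affinity(points, sigma_d=1.0):
    """A(i, j) = exp(-||x_i - x_j||^2 / (2 sigma_d^2)) for token positions."""
    diff = points[:, None, :] - points[None, :, :]   # pairwise position differences
    sq_dist = (diff ** 2).sum(axis=-1)               # squared distances
    return np.exp(-sq_dist / (2.0 * sigma_d ** 2))

# Two well-separated 2-D clusters of tokens
points = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                   [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
A = position_affinity(points, sigma_d=0.5)
print(np.round(A, 2))   # near block-diagonal: high within clusters, ~0 across
```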

9 Eigenvectors and Segmentation
Given k tokens with affinities defined by A, we want a partition into c clusters
For a particular cluster n, denote the membership weights of the tokens by the vector w_n
– Require normalized weights, so that w_n^T w_n = 1
The "best" assignment of tokens to cluster n is the w_n that maximizes the objective function w_n^T A w_n (highest intra-cluster affinity), subject to the weight normalization constraint
Using the method of Lagrange multipliers, this yields the system of equations A w_n = λ w_n, which means that w_n is an eigenvector of A, and a solution is obtained from the eigenvector with the largest eigenvalue

10 Eigenvectors and Segmentation
Note that an appropriate rearrangement of the affinity matrix leads to a block structure indicating the clusters
The largest eigenvectors of A tend to correspond to the eigenvectors of the blocks
So interpret the biggest c eigenvectors as cluster membership weight vectors
– Quantize the weights to 0 or 1 to make the memberships definite
(figure from Forsyth & Ponce: example graph and its block-structured affinity matrix)
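A rough sketch of this eigenvector-based clustering under my own assumptions: a hand-built block-structured affinity matrix, the top c eigenvectors, and a quantization that assigns each token to the eigenvector with the largest-magnitude entry (one reasonable way to make the memberships definite; the slides do not prescribe this exact rule).

```python
import numpy as np

# Hand-built affinity matrix with two clear blocks: clusters {0,1,2} and {3,4}
A = np.array([[1.0, 0.9, 0.8, 0.0, 0.1],
              [0.9, 1.0, 0.9, 0.1, 0.0],
              [0.8, 0.9, 1.0, 0.0, 0.0],
              [0.0, 0.1, 0.0, 1.0, 0.9],
              [0.1, 0.0, 0.0, 0.9, 1.0]])

c = 2                                          # number of clusters sought
vals, vecs = np.linalg.eigh(A)                 # eigenvalues in ascending order
top = vecs[:, np.argsort(vals)[::-1][:c]]      # the c eigenvectors with largest eigenvalues

# Quantize: each token goes to the cluster whose eigenvector
# has the largest-magnitude entry for that token
labels = np.argmax(np.abs(top), axis=1)
print(labels)                                  # e.g. [0 0 0 1 1]
```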

11 Normalized Cuts
The previous approach doesn't work when the eigenvalues of the blocks are similar
– Just using within-cluster similarity doesn't account for between-cluster differences
– There is no encouragement of larger cluster sizes
Define the association between a vertex subset A and the full vertex set V as
assoc(A, V) = Σ_{a ∈ A, v ∈ V} w(a, v)
Before, we just maximized assoc(A, A); now we also want to minimize assoc(A, V). Define the normalized cut as
normcut(A, B) = cut(A, B) / assoc(A, V) + cut(A, B) / assoc(B, V)
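A small sketch (my own illustration) of evaluating the normalized cut for a candidate two-way partition using the formulas above; the toy weight matrix is the same style of graph as in the earlier cut sketch and is an assumption.

```python
import numpy as np

def cut(W, A, B):
    return W[np.ix_(sorted(A), sorted(B))].sum()

def assoc(W, A):
    """assoc(A, V): total weight from vertices in A to all vertices."""
    return W[sorted(A), :].sum()

def normalized_cut(W, A, B):
    return cut(W, A, B) / assoc(W, A) + cut(W, A, B) / assoc(W, B)

# Two tight pairs joined by a weak edge
W = np.zeros((4, 4))
for a, b, w in [(0, 1, 0.9), (2, 3, 0.8), (1, 2, 0.1)]:
    W[a, b] = W[b, a] = w

print(normalized_cut(W, {0, 1}, {2, 3}))   # ~0.11: a good, balanced partition
print(normalized_cut(W, {0}, {1, 2, 3}))   # ~1.33: an unbalanced, worse partition
```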

12 Normalized Cut Algorithm
Define the diagonal degree matrix D(i, i) = Σ_j A(i, j)
Define an integer membership vector x over all vertices such that each element is 1 if the vertex belongs to cluster A and -1 if it belongs to B (i.e., just two clusters)
Define a real-valued approximation y to x
This yields the following objective function to minimize:
y^T (D − A) y / (y^T D y)
which sets up the system of equations
(D − A) y = λ D y
The eigenvector with the second smallest eigenvalue is the solution (the smallest eigenvalue is always 0)
Continue partitioning the clusters if the normalized cut is over some threshold
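A minimal sketch of the two-way normalized cut step, assuming the affinity matrix A is already given (the same toy matrix as above). It uses scipy's generalized symmetric eigensolver for (D − A) y = λ D y and splits the vertices by the sign of the second-smallest eigenvector, a common choice of quantization that the slides leave open.

```python
import numpy as np
from scipy.linalg import eigh

def two_way_ncut(A):
    """Split vertices into two clusters via the normalized cut relaxation."""
    D = np.diag(A.sum(axis=1))     # degree matrix D(i, i) = sum_j A(i, j)
    vals, vecs = eigh(D - A, D)    # generalized problem (D - A) y = lambda D y
    y = vecs[:, 1]                 # eigenvector with the second smallest eigenvalue
    return y > 0                   # boolean cluster membership (split at zero)

A = np.array([[1.0, 0.9, 0.8, 0.0, 0.1],
              [0.9, 1.0, 0.9, 0.1, 0.0],
              [0.8, 0.9, 1.0, 0.0, 0.0],
              [0.0, 0.1, 0.0, 1.0, 0.9],
              [0.1, 0.0, 0.0, 0.9, 1.0]])
print(two_way_ncut(A))             # e.g. [False False False  True  True]
```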

13 Example: Normalized Cut Segmentations
Affinity measures are position, intensity, and texture
(images from Forsyth & Ponce)

14 Shape Finding
Problem: how to efficiently find instances of shapes in an image
– E.g., lines, curves, ellipses
Segmentation in the sense of figure-ground separation
(figure: finding lane lines for driving after edge detection, from B. Southall & C. Taylor)

15 Hough Transform
Exhaustive search is extremely inefficient
Basic idea of the Hough transform (HT): change the problem from complicated pattern detection to peak finding in the parameter space of the shape
– Each pixel can lie on a family of possible shapes (e.g., for lines, the pencil of lines through that point)
– Shapes with more pixels on them have more evidence that they are present in the image
– Thus every pixel "votes" for a set of shapes, and the one(s) with the most votes "win", i.e., exist
(figure courtesy of Massey U.)

16 HT for Line Finding
Parametrize lines by distance from the origin and angle: (r, θ)
Every point (r, θ) in "line space" is a unique line
The set of image points {(x, y)} on a particular line is expressed by r = x cos θ + y sin θ
The problem with the slope-intercept representation (m, n) is that it can't handle vertical lines

17 HT for Line Finding
Fixing an image pixel (x_i, y_i) yields a set of points in line space {(r, θ)} corresponding to a sinusoidal curve described by r = x_i cos θ + y_i sin θ
Each point on this curve in line space is a member of the pencil of lines through the pixel
Collinear points yield curves that intersect in a single point
(figure courtesy of R. Bock)
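A small sketch (my own illustration) of the point-to-sinusoid mapping: for a few collinear pixels, sample each pixel's curve r(θ) = x cos θ + y sin θ and observe that the curves agree at the (r, θ) of the common line.

```python
import numpy as np

thetas = np.linspace(0.0, np.pi, 181)           # sampled line angles

def line_space_curve(x, y):
    """Sinusoidal curve r(theta) traced in line space by the pixel (x, y)."""
    return x * np.cos(thetas) + y * np.sin(thetas)

# Three collinear pixels on the line x + y = 10, i.e. theta = pi/4, r = 10/sqrt(2)
pixels = [(2, 8), (5, 5), (8, 2)]
curves = np.array([line_space_curve(x, y) for x, y in pixels])

k = np.argmin(np.abs(thetas - np.pi / 4))       # angle bin closest to 45 degrees
print(curves[:, k])                             # all ~7.07 = 10/sqrt(2): the curves intersect here
```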

18 HT for Line Finding: Algorithm
Set up a T × R accumulator array A quantizing line space
– Range: r ∈ [0, max(W, H)], θ ∈ [0, π]
– Bin size: reasonable intervals
– Initial value: 0 for all bins
For every image pixel (x, y) that is a feature/edge/etc., iterate over h ∈ [1, T] (θ(h) is the line angle for row h of A):
– Let r = x cos θ(h) + y sin θ(h)
– Find the index k of the column of A closest to r
– Increment A(h, k) by one
Find all local maxima of A
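Here is a minimal, self-contained sketch of the accumulator voting loop described above, written against a binary edge image; the array sizes (T angle bins, R distance bins) and the synthetic test image are assumptions for illustration.

```python
import numpy as np

def hough_lines(edges, T=180, R=200):
    """Vote in a T x R accumulator over (theta, r) for each edge pixel."""
    H_img, W_img = edges.shape
    thetas = np.linspace(0.0, np.pi, T, endpoint=False)
    r_max = float(max(W_img, H_img))        # range used on the slide; the image diagonal would cover all lines
    acc = np.zeros((T, R), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        for h, theta in enumerate(thetas):
            r = x * np.cos(theta) + y * np.sin(theta)
            if 0.0 <= r <= r_max:
                k = int(round(r / r_max * (R - 1)))   # column of acc closest to r
                acc[h, k] += 1
    return acc, thetas, r_max

# Synthetic edge image containing one diagonal line (y = x)
edges = np.zeros((50, 50), dtype=bool)
for i in range(50):
    edges[i, i] = True
acc, thetas, r_max = hough_lines(edges)
h, k = np.unravel_index(np.argmax(acc), acc.shape)
print(np.degrees(thetas[h]), k * r_max / (acc.shape[1] - 1))   # ~135 degrees, r ~ 0 for the line y = x
```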


20 Example: HT for Line Finding
(figure panels: edge-detected image; accumulator array; "de-Hough" of the lines with ≥ 70% of the maximum votes. Courtesy of Massey U.)

21 Hough Transform: Issues
Noise
– Points slightly off the curve result in multiple intersections
– Can use larger bins or smooth the accumulator array
Non-maximum suppression is a good idea to get unique peaks
Dimensionality
– Exponential increase in the size of the accumulator array as the number of shape parameters goes up
– The HT works best for shapes with 3 or fewer variables
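As a rough illustration of the noise remedies mentioned above (my own sketch, not part of the slides), here is one way to smooth the accumulator with scipy.ndimage's gaussian_filter and apply non-maximum suppression with maximum_filter before reading off peaks; the smoothing width, window size, and vote threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def accumulator_peaks(acc, smooth_sigma=1.0, min_votes=10, window=5):
    """Smooth the accumulator, then keep bins that are local maxima within a
    (window x window) neighborhood and above a vote threshold."""
    smoothed = gaussian_filter(acc.astype(float), sigma=smooth_sigma)
    local_max = maximum_filter(smoothed, size=window) == smoothed
    return np.argwhere(local_max & (smoothed >= min_votes))   # rows of (theta_bin, r_bin)

# Toy accumulator: one strong peak plus uniform noise votes
rng = np.random.default_rng(0)
acc = rng.integers(0, 5, size=(180, 200))
acc[135, 40] += 100
print(accumulator_peaks(acc))    # should report the peak at (135, 40)
```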

