
1 Exploring the Parameter Space of Image Segmentation Algorithms. Talk at NCHU 2010. Xiaoyi Jiang, Department of Mathematics and Computer Science, University of Münster, Germany

2 How to deal with parameters? Typical approaches:
 Not consider the problem at all
 “We have experimentally determined the parameter values ...“
 Supervised: training of parameter values based on training images with (manually specified) ground truth
 Unsupervised: based on heuristics that measure segmentation quality

3 How to deal with parameters? Drawbacks:
 “We have experimentally determined ...“: who believes that?
 Supervised (training of parameter values based on GT): GT is not always available, and trained parameters are not optimal for a particular image
 Unsupervised (self-judgement heuristics): still no good solution for self-judgement

4 How to deal with parameters? Basic assumption: a reasonable range of good values is known for each parameter. Our intention: explore the parameter subspace without GT.
 A: investigate the local behavior of parameters
 B: adaptively compute an “optimal“ segmentation within a parameter subspace (construction approach)
 C: adaptively select an “optimal“ parameter setting within a subspace (selection approach)

5 Natural landscape (figure): quality measure plotted over parameters p1 and p2; the peak marks the optimal parameters

6 A. Investigate local behavior of parameters. Belief: there is a subspace of good parameter values. Reality: yes, but there are local outliers within such a subspace!

7 A. Investigate local behavior of parameters. Felzenszwalb / Huttenlocher: Efficient graph-based image segmentation. Int. J. Computer Vision 59 (2004) 167–181 (FH)

8 A. Investigate local behavior of parameters. Close-up of two nearby parameter settings: NMI = 0.70 vs. NMI = 0.26

9 A. Investigate local behavior of parameters. Deng / Manjunath: Unsupervised segmentation of color-texture regions in images and video. IEEE T-PAMI 23 (2001) 800–810 (JSEG). NMI = 0.61 vs. NMI = 0.76

10 A. Investigate local behavior of parameters. Frequency study on the Berkeley image set: strong (weak) outliers are segmentation results whose NMI lies more than 15% (10%) below the maximum NMI of the current image ensemble (5×5 subspace). Results shown for FH and JSEG.
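The NMI values quoted on these slides compare two segmentations as label maps. A minimal sketch of such a measure, assuming the common sqrt-normalized variant (the talk does not state which normalization it uses):

```python
import numpy as np

def nmi(seg_a, seg_b):
    """Normalized mutual information between two segmentations (label maps).

    Hypothetical helper: uses MI / sqrt(H_a * H_b); the exact variant
    used in the talk is not specified.
    """
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # contingency table of co-occurring region labels
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    cont = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(cont, (ia, ib), 1.0)
    p_ab = cont / n                      # joint label distribution
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)
    nz = p_ab > 0
    mi = np.sum(p_ab[nz] * np.log(p_ab[nz] / np.outer(p_a, p_b)[nz]))
    h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    return mi / np.sqrt(h_a * h_b) if h_a > 0 and h_b > 0 else 1.0
```

Identical partitions score 1 regardless of how the region labels are numbered, which is the property needed when comparing segmentations from different parameter settings.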

11 A. Investigate local behavior of parameters. Danger: there are local outliers (salt-and-pepper noise)! Solution: similar to median filtering. Let S = {S_1, ..., S_n} be the segmentations around some parameter setting and d(·,·) a distance function between segmentations. Set median: Ŝ = arg min_{S_i ∈ S} Σ_j d(S_i, S_j)
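The set median itself is a few lines once a distance between segmentations is fixed (e.g. 1 − NMI); a sketch with the distance left abstract:

```python
import numpy as np

def set_median(segmentations, dist):
    """Set median: the ensemble member minimizing the summed distance
    to all other members, per the slide's median-filtering analogy."""
    sums = [sum(dist(s, t) for t in segmentations) for s in segmentations]
    return segmentations[int(np.argmin(sums))]
```

Like a median filter on images, this suppresses an isolated outlier segmentation because the outlier's summed distance to its well-behaved neighbors is large.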

12 A. Investigate local behavior of parameters. FH: best / worst / set median results (figure)

13 A. Investigate local behavior of parameters. JSEG: (figure)

14 B: Adaptively compute an “optimal“ segmentation. Belief: there is a reasonable subspace of good parameter values, and some optimal parameter setting can be determined by experiments or training. Reality: yes, but this parameter setting is not optimal for a particular image!

15 B: Adaptively compute an “optimal“ segmentation. Exactly the same parameter set applied to two images (figure)

16 B: Adaptively compute an “optimal“ segmentation. Segmentation ensemble technique:
 Use a sampled parameter subspace to compute an ensemble S of segmentations
 Compute a final segmentation based on S
This combined segmentation tends to be a good one within the explored parameter subspace.
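The sampling step can be sketched generically; `segment` below is a placeholder for any parametric segmenter (FH, JSEG, ...), and the grid values are whatever "reasonable range" is assumed for each parameter:

```python
import itertools

def segmentation_ensemble(image, segment, param_grid):
    """Sample every setting of a parameter subspace and collect one
    segmentation per setting.

    param_grid: dict mapping parameter name -> list of sampled values.
    segment(image, **params) is a hypothetical segmenter interface.
    """
    names = sorted(param_grid)
    ensemble = []
    for values in itertools.product(*(param_grid[n] for n in names)):
        ensemble.append(segment(image, **dict(zip(names, values))))
    return ensemble
```

A 5×5 subspace as on slide 10 would correspond to two parameters with five sampled values each, yielding 25 ensemble members.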

17 B: Adaptively compute an “optimal“ segmentation (figure)

18 Excursus: Random walker based segmentation. L. Grady: Random walks for image segmentation. IEEE T-PAMI, 28: 1768–1783, 2006. (a) A two-region image. (b) User-defined seeds for each region. (c) A 4-connected lattice topology: seeded (labeled) pixels and unseeded (unlabeled) pixels. (d) An undirected weighted graph: edge weights encode similarity between two nodes, based on e.g. intensity gradient or color changes; a low-weight edge marks a sharp color gradient.

19 Excursus: Random walker based segmentation. The algorithm labels an unseeded pixel in the following steps. Step 1: calculate the probability that a random walker starting at an unseeded pixel x first reaches a seed with label s. (Figure: probability maps that a walker starting from each unseeded node first reaches the red seed resp. the blue seed.)

20 Excursus: Random walker based segmentation. Step 2: label each pixel with the most probable seed destination. (Figure: per-pixel probability pairs for the two seeds.) A segmentation respecting region boundaries is obtained by biasing the random walker to avoid crossing sharp color gradients.
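The probabilities in Step 1 need not be simulated: Grady (2006) shows they solve a linear system built from the graph Laplacian (the combinatorial Dirichlet problem). A small dense sketch; real images would use sparse solvers:

```python
import numpy as np

def random_walker_probs(W, seeds):
    """For each unseeded node, the probability that a random walker
    first reaches a seed of each label.

    W: symmetric edge-weight matrix of the pixel graph.
    seeds: dict mapping node index -> label.
    Solves L_U X = -B M, the Dirichlet problem of Grady (2006).
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
    labels = sorted(set(seeds.values()))
    seeded = sorted(seeds)
    unseeded = [i for i in range(n) if i not in seeds]
    # one-hot label matrix for the seeded nodes
    M = np.zeros((len(seeded), len(labels)))
    for r, node in enumerate(seeded):
        M[r, labels.index(seeds[node])] = 1.0
    L_U = L[np.ix_(unseeded, unseeded)]
    B = L[np.ix_(unseeded, seeded)]
    X = np.linalg.solve(L_U, -B @ M)            # rows sum to 1
    return unseeded, X
```

On a three-node chain with the endpoints seeded, the middle node reaches either seed with probability 0.5, matching the symmetric entries in the slide's probability maps.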

21 Excursus: Random walker based segmentation. Original image; seeds indicating four objects; resulting segmentation; probability maps for labels 1–4 (figure)

22 B: Adaptively compute an “optimal“ segmentation. Connection to random walker based segmentation:
 The input segmentations provide strong hints about where to automatically place seeds
 This yields the same situation as image segmentation with manually specified seeds
 Apply the random walker algorithm to achieve a final segmentation
Random walker based segmentation ensemble technique:
 Generate a graph from the input segmentations
 Extract seed regions
 Compute a final combined segmentation result

23 B: Adaptively compute an “optimal“ segmentation. Graph generation: the weight w_ij in G indicates how likely it is that two pixels p_i and p_j belong to the same image region. Solution: count the number n_ij of initial segmentations in which p_i and p_j share the same region label; then define the weight as a Gaussian weighting: w_ij = exp[-β(1 - n_ij/N)]
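The weight formula on this slide can be computed directly from the ensemble; a dense all-pairs sketch, feasible only for tiny inputs (on real images one would only weight edges between neighboring pixels):

```python
import numpy as np

def coassociation_weights(segmentations, beta=50.0):
    """w_ij = exp(-beta * (1 - n_ij / N)), where n_ij counts in how many
    of the N input segmentations pixels i and j share a region label.
    beta is a free parameter of the Gaussian weighting (value assumed)."""
    flat = np.stack([np.asarray(s).ravel() for s in segmentations])  # N x P
    N = flat.shape[0]
    # n[i, j] = number of segmentations in which pixels i and j agree
    n = (flat[:, :, None] == flat[:, None, :]).sum(axis=0)
    return np.exp(-beta * (1.0 - n / N))
```

Pixels that agree in all N segmentations get weight 1; pixels that never agree get the minimum weight exp(-β).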

24 B: Adaptively compute an “optimal“ segmentation. Candidate seed region extraction: build a new graph G* by preserving only the edges with weight w_ij = 1 (p_i and p_j have the same label in all initial segmentations) and removing all other edges; the connected subgraphs of G* form the initial seed regions. Grouping candidate seed regions: reduce the seed regions by iteratively merging the two closest candidate seed regions until some termination criterion (thresholding) is satisfied. Optimization of K (number of seed regions): based on an approximation of the generalized median segmentation, investigating only the subspace consisting of the combination segmentations for all K ∈ [K_min, K_max].
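The extraction step amounts to connected components over the w_ij = 1 edges. A sketch for 2D label maps with 4-connectivity, using a small union-find (the grid layout and union-find details are implementation choices, not from the slides):

```python
import numpy as np

def seed_regions(segmentations):
    """Candidate seed regions: 4-connected components of pixels whose
    labels agree across ALL input segmentations (the w_ij = 1 edges of G*).
    Returns a list of pixel-index lists (row-major indices)."""
    segs = [np.asarray(s) for s in segmentations]
    h, w = segs[0].shape
    parent = list(range(h * w))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for y in range(h):
        for x in range(w):
            i = y * w + x
            # keep an edge only if labels agree in every segmentation
            if x + 1 < w and all(s[y, x] == s[y, x + 1] for s in segs):
                union(i, i + 1)
            if y + 1 < h and all(s[y, x] == s[y + 1, x] for s in segs):
                union(i, i + w)

    regions = {}
    for i in range(h * w):
        regions.setdefault(find(i), []).append(i)
    return list(regions.values())
```

Where the input segmentations disagree, no edge survives, so contested pixels end up in small fragments that the subsequent merging step then groups or leaves unseeded.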

25 B: Adaptively compute an “optimal“ segmentation. Graph G; initial seeds; final result (optimal K) (figure)

26 B: Adaptively compute an “optimal“ segmentation. Worst / median / best input segmentation vs. combination segmentation (figure)

27 B: Adaptively compute an “optimal“ segmentation. Comparison (per image): worst / best / average input segmentation vs. combination

28 B: Adaptively compute an “optimal“ segmentation. f(n): number of images for which the combination result is worse than the best n input segmentations. The ensemble technique outperforms all 24 input segmentations in 78 cases. For 70% (210) of all 300 test images, our solution is beaten by at most 5 input segmentations.

29 B: Adaptively compute an “optimal“ segmentation. Comparison: average performance over all 300 test images (for each of the 24 parameter settings)

30 B: Adaptively compute an “optimal“ segmentation. The dream must go on!

31 B: Adaptively compute an “optimal“ segmentation. Additional applications:
 2.5D range image segmentation
 Detecting double contours by dynamic programming (layers of intima and adventitia for computing the intima-media thickness)

32 B: Adaptively compute an “optimal“ segmentation. Segmenter combination: no universal segmentation algorithm can successfully segment all images, and it is not easy to know the optimal algorithm for a particular image. Instead of looking for the best segmenter, which is hardly possible on a per-image basis, we now look for the best segmenter combiner. Compare Ho (2002): “Instead of looking for the best set of features and the best classifier, now we look for the best set of classifiers and then the best combination method.“

33 C: Adaptively select an optimal parameter setting. Belief: there are heuristics to measure segmentation quality. Reality: yes, but optimizing such a heuristic does not necessarily yield segmentations that humans perceive as good!

34 C: Adaptively select an optimal parameter setting. Observation: different segmenters tend to produce similar good segmentations but dissimilar bad segmentations (the subspace of bad segmentations is substantially larger than the subspace of good segmentations). Consequently: compare the segmentation results of different segmenters and identify good segmentations by means of similarity tests.

35 C: Adaptively select an optimal parameter setting (figure)

36 C: Adaptively select an optimal parameter setting. Outline of the framework:
 Compute N segmentations for each segmentation algorithm
 Compute an N × N similarity matrix by comparing each segmentation of the first algorithm with each segmentation of the second algorithm
 Determine the best parameter setting from the similarity matrix
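The framework above can be sketched with the similarity function left abstract (e.g. NMI). The slides do not spell out the decision rule applied to the matrix; taking the setting whose result achieves the highest peak similarity to the other segmenter's outputs is one plausible choice, used here as an assumption:

```python
import numpy as np

def select_best_settings(segs_a, segs_b, similarity):
    """Cross-segmenter parameter selection: fill the N x N similarity
    matrix between two segmenters' outputs and pick, for each segmenter,
    the setting agreeing best with the other one (assumed decision rule)."""
    S = np.array([[similarity(sa, sb) for sb in segs_b] for sa in segs_a])
    best_a = int(np.argmax(S.max(axis=1)))   # row with highest peak similarity
    best_b = int(np.argmax(S.max(axis=0)))   # column with highest peak similarity
    return best_a, best_b, S
```

This captures the slide's observation: a bad segmentation from either segmenter is unlikely to resemble anything the other segmenter produces, so it receives low similarity everywhere in its row or column.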

37 C: Adaptively select an optimal parameter setting. The weaker segmenter CSC benefits from the stronger FH/JSEG

38 C: Adaptively select an optimal parameter setting. FH also benefits from the weaker CSC

39 C: Adaptively select an optimal parameter setting. JSEG also benefits from the weaker CSC

40 Conclusions. Basic assumption: a reasonable range of good values is known for each parameter. Our intention: explore the parameter subspace without GT.
 A: investigate the local behavior of parameters
 B: adaptively compute an “optimal“ segmentation within a parameter subspace
 C: adaptively select an optimal parameter setting within a subspace on a per-image basis

41 Conclusions. We could demonstrate:
 A: local outliers can be successfully removed by the set median operator
 B: the combination performance tends to reach the best input segmentation; in some cases the combined segmentation even outperforms the entire input ensemble
 C: segmenters can help each other in selecting good parameter values

42 Conclusions. Combination (ensemble) techniques:
 Generalized median: strings, graphs, clusterings, ...
 Multiple classifier systems
 ...
 Combining image segmentations
“Three cobblers combined equal the master mind“ (Chinese proverb). Thank you!

