How to Evaluate Foreground Maps?


1 How to Evaluate Foreground Maps?
CVPR2014 Poster

2 Outline
Introduction
Limitations of Current Measures
Solution
Experiments
Conclusions

3 Introduction The comparison of a foreground map against a binary ground-truth is common in various computer-vision problems: salient object detection, object segmentation, and foreground extraction. Several measures have been suggested to evaluate the accuracy of these foreground maps: the AUC measure, the AP measure, the F-measure, and the PASCAL measure. To evaluate a non-binary map, multiple thresholds are first applied to it to obtain multiple binary maps; these binary maps are then compared to the ground-truth.

4 Introduction But the most commonly-used measures for evaluating both non-binary maps and binary maps do not always provide a reliable evaluation.
[9] K. Chang, T. Liu, H. Chen, and S. Lai. Fusing generic objectness and visual saliency for salient object detection. In ICCV, pages 914–921, 2011.
[12] S. Goferman, L. Zelnik-Manor, and A. Tal. Context-aware saliency detection. In CVPR, 2010.
[13] H. Jiang, J. Wang, Z. Yuan, T. Liu, N. Zheng, and S. Li. Automatic salient object segmentation based on context and shape prior. In BMVC, volume 3, page 7, 2012.

5 Introduction Our contributions (three main contributions):
Identifying three flawed assumptions in commonly-used measures. We proceed to amend each of these flaws and suggest a novel measure that evaluates foreground maps with increased accuracy.
Proposing four meta-measures to analyze the performance of evaluation measures.

6 Introduction Two appealing properties of our measure:
being a generalization of the Fβ-measure
providing a unified evaluation of both binary and non-binary maps

7 Limitation of Current Measures
Three flawed assumptions: the interpolation flaw, the dependency flaw, and the equal-importance flaw.

8 Limitation of Current Measures
Current Evaluation Measures. The evaluation of binary maps relies on 4 basic quantities: TP (true-positive), TN (true-negative), FP (false-positive), and FN (false-negative).

9 Limitation of Current Measures
Current Evaluation Measures. Evaluation of binary maps, common scores:
TPR = TP / (TP + FN)
FPR = FP / (FP + TN)
Distinguishing binary from non-binary maps: a binary map takes values 0 or 1, while a non-binary map takes values in [0, 1], interpreted as the probability that a pixel belongs to the foreground.
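The four basic quantities and the TPR/FPR scores above can be sketched in a few lines of NumPy (a minimal sketch; the helper name `binary_rates` is ours):

```python
import numpy as np

def binary_rates(M, G):
    """TPR and FPR of a binary map M against a binary ground-truth G."""
    tp = np.sum((M == 1) & (G == 1))  # foreground detected as foreground
    tn = np.sum((M == 0) & (G == 0))  # background left as background
    fp = np.sum((M == 1) & (G == 0))  # background flagged as foreground
    fn = np.sum((M == 0) & (G == 1))  # foreground missed
    tpr = tp / (tp + fn)              # fraction of foreground recovered
    fpr = fp / (fp + tn)              # fraction of background misfired on
    return tpr, fpr
```

For example, with M = [1, 1, 1, 0] against G = [1, 1, 0, 0], all foreground pixels are found (TPR = 1.0) but one of the two background pixels is wrongly detected (FPR = 0.5).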

10 Limitation of Current Measures
Current Evaluation Measures. Evaluation of non-binary maps: AUC (Area-Under-the-Curve) and AP (Average-Precision).

11 Interpolation flaw The source of the interpolation flaw is the thresholding of the non-binary maps. Both AUC and AP assume that the interpolated curve (between binary maps) is a valid tool for evaluating non-binary maps. Map (a) should score better than map (b), but their scores are the same. Since both AUC and AP rely solely on the interpolated curve, ignoring the distribution of points along the curve, they deem (b) as perfect as (a).
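The flaw can be reproduced numerically: a confident, near-binary map and a barely-separated map trace the same ROC curve, so the area under the interpolated curve cannot tell them apart (an illustrative sketch; the toy maps below are ours, not from the paper):

```python
import numpy as np

def auc_score(D, G, n=101):
    """Area under the (FPR, TPR) curve obtained by thresholding D."""
    pts = []
    for t in np.linspace(1, 0, n):  # sweep high -> low so FPR grows
        M = D >= t
        tpr = np.sum(M & (G == 1)) / np.sum(G == 1)
        fpr = np.sum(M & (G == 0)) / np.sum(G == 0)
        pts.append((fpr, tpr))
    # trapezoidal area along the curve
    return sum((pts[i + 1][0] - pts[i][0]) * (pts[i + 1][1] + pts[i][1]) / 2
               for i in range(len(pts) - 1))

G  = np.array([1, 1, 0, 0])
Da = np.array([1.0, 1.0, 0.0, 0.0])  # confident map: visibly better
Db = np.array([0.6, 0.6, 0.4, 0.4])  # weakly separated map
```

Both maps separate foreground from background at some threshold, so both obtain a perfect AUC of 1.0 even though (a) is clearly the better map.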

12 Dependency flaw Dependency between false-negatives: current measures assume that the pixels are independent of each other. Fig. 4: (a) false positives concentrated in one region, (b) false positives scattered among the true positives. The two maps are not of the same quality and should not receive the same score.

13 Equal-importance flaw The location of the false-positives is ignored: current measures assume that all erroneous detections have equal importance.

14 Solution
Resolving the Interpolation Flaw
Resolving the Dependency Flaw & the Equal-Importance Flaw
The New Measure – the weighted Fβ-measure (Fβ^w)

15 Resolving the Interpolation Flaw
The key idea is to extend the four basic quantities, TP, TN, FP and FN, to deal with non-binary values.
G (1×N): the column-stack representation of the binary ground-truth, where N is the number of pixels in the image.
D (1×N): the non-binary map to be evaluated against the ground-truth.

16 Resolving the Interpolation Flaw
For a binary map, pixel i is correct if G[i] = D[i] and incorrect if G[i] ≠ D[i]. For a non-binary map, let E = |G − D| denote the per-pixel absolute error; then
TP = (1 − E) · Gᵀ
TN = (1 − E) · (1 − G)ᵀ
FP = E · (1 − G)ᵀ
FN = E · Gᵀ
Note that when D is binary, these definitions are identical to the conventional ones.
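The extension on this slide can be sketched as follows (our reading of the slide's definitions, with E = |G − D| the per-pixel absolute error; the helper name is ours):

```python
import numpy as np

def extended_quantities(D, G):
    """TP/TN/FP/FN extended to a non-binary map D against binary G."""
    E = np.abs(G - D)               # per-pixel absolute error
    TP = np.sum((1 - E) * G)        # correct mass on foreground pixels
    TN = np.sum((1 - E) * (1 - G))  # correct mass on background pixels
    FP = np.sum(E * (1 - G))        # error mass on background pixels
    FN = np.sum(E * G)              # error mass on foreground pixels
    return TP, TN, FP, FN
```

When D is binary, E is 0 or 1 at every pixel and the four sums reduce exactly to the conventional counts.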

17 Resolving the Dependency Flaw & the Equal-Importance Flaw
Both flawed assumptions concern detection errors. Our key idea is to attribute different importance to different errors, and to reformulate the basic quantities accordingly.

18 Resolving the Dependency Flaw & the Equal-Importance Flaw
We suggest applying a weighting function to the errors:
A (N×N): captures the dependency between pixels
B (N×1): represents the varying importance of the pixels

19 Resolving the Dependency Flaw & the Equal-Importance Flaw

20 Resolving the Dependency Flaw & the Equal-Importance Flaw
Reformulate the basic quantities with weights. Note that when independence and equal importance are assumed (i.e., A = I and B = 1), these definitions are identical to the conventional ones.
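A minimal sketch of the weighted reformulation. The composition of the weighted error below, Ew = (E·A)∘B, is our simplified, illustrative reading of the slide (the paper constructs A and B in a specific way); we only exercise the degenerate case A = I, B = 1, where the quantities reduce to the unweighted ones as the slide states:

```python
import numpy as np

def weighted_quantities(D, G, A=None, B=None):
    """Weighted TP/TN/FP/FN. A (NxN) models dependency between pixels,
    B (Nx1) the varying importance of each pixel's error. The way Ew is
    composed here is a simplified, illustrative reading of the slide."""
    N = G.size
    A = np.eye(N) if A is None else A   # default: independence assumption
    B = np.ones(N) if B is None else B  # default: equal importance
    E = np.abs(G - D)                   # per-pixel absolute error
    Ew = (E @ A) * B                    # weighted error (hypothetical form)
    TPw = np.sum((1 - Ew) * G)
    TNw = np.sum((1 - Ew) * (1 - G))
    FPw = np.sum(Ew * (1 - G))
    FNw = np.sum(Ew * G)
    return TPw, TNw, FPw, FNw
```

With A = I and B = 1 we get Ew = E, so the weighted quantities coincide with the extended (unweighted) ones.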

21 The New Measure – the weighted Fβ-measure (Fβ^w)
Having dealt with all three flaws, we proceed to construct our evaluation measure:
Precision^w = TP^w / (TP^w + FP^w)
Recall^w = TP^w / (TP^w + FN^w)
Fβ^w = (1 + β²) · Precision^w · Recall^w / (β² · Precision^w + Recall^w)
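The final combination has the standard F-measure form over the weighted quantities (a sketch; the helper name is ours, and β = 1 weighs precision and recall equally):

```python
def f_beta_w(TPw, FPw, FNw, beta=1.0):
    """Weighted F-measure computed from the weighted basic quantities."""
    precision_w = TPw / (TPw + FPw)  # weighted precision
    recall_w = TPw / (TPw + FNw)     # weighted recall
    b2 = beta ** 2
    return (1 + b2) * precision_w * recall_w / (b2 * precision_w + recall_w)
```

For instance, f_beta_w(1.6, 0.4, 0.4) gives weighted precision = recall = 0.8 and hence a score of 0.8; on binary maps with A = I and B = 1 the formula falls back to the plain Fβ-measure, matching the generalization property claimed on slide 6.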

22 Experiments Meta-measures:
The ranking of an evaluation measure should agree with the preferences of an application that uses the map as input. A measure should prefer a good result by an algorithm that considers the content of the image over an arbitrary map.

23 Experiments Meta-measures (continued):
The score of a map should decrease when using a wrong ground-truth map. The ranking of an evaluation measure should not be sensitive to inaccuracies in the manually marked boundaries of the ground-truth maps.

24 Experiments: Meta-measure (1)
Application Ranking

25 Experiments: Meta-measure (2)
State-of-the-art vs. Generic

26 Experiments: Meta-measure (3)
Ground-truth Switch

27 Experiments: Meta-measure (4)
Annotation errors

28 Conclusions We analyzed the currently-used evaluation measures, which suffer from three flawed assumptions: interpolation, dependency and equal-importance. We suggested an evaluation measure that amends these assumptions and offers a unified solution to the evaluation of both non-binary and binary maps. The advantages of our measure were shown via four different meta-measures, both qualitatively and quantitatively.

