
Diagnosing Error in Object Detectors
Derek Hoiem, Yodsawalai Chodpathumwan, Qieyun Dai
Department of Computer Science, University of Illinois at Urbana-Champaign (UIUC)

Presentation transcript:

1 Diagnosing Error in Object Detectors
Derek Hoiem, Yodsawalai Chodpathumwan, Qieyun Dai
Department of Computer Science, University of Illinois at Urbana-Champaign (UIUC)
Work supported in part by NSF awards IIS and IIS , ONR MURI Grant N , and a research award from Google

2 Object detection is a collection of problems
Intra-class variation for "airplane": distance, shape, occlusion, viewpoint

3 Object detection is a collection of problems
Confusing distractors for "airplane": localization error, background, dissimilar categories, similar categories

4 How to evaluate object detectors?
Typical evaluation is through comparison of Average Precision (AP) numbers
– AP is a good summary statistic for quick comparison
– but it is not a good driver of research
We propose tools to evaluate
– where detectors fail
– the potential impact of particular improvements
(figures from Felzenszwalb et al. 2010)
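Since AP is the yardstick being critiqued, a minimal sketch of how it is computed may help. This is a plain, uninterpolated version (VOC 2007 itself used 11-point interpolation); the function name and signature are my own, not from the paper's toolkit:

```python
def average_precision(scores, is_tp, n_pos):
    """AP = mean of the precision measured at each true-positive rank.

    scores : detection confidences (higher = more confident)
    is_tp  : 1 if the detection matches an unclaimed ground truth, else 0
    n_pos  : total number of ground-truth objects for the class
    """
    ranked = sorted(zip(scores, is_tp), key=lambda d: -d[0])
    tp_so_far, ap = 0, 0.0
    for rank, (_, hit) in enumerate(ranked, start=1):
        if hit:
            tp_so_far += 1
            ap += tp_so_far / rank  # precision at this recall point
    return ap / n_pos
```

One AP number per class is exactly the "good summary statistic" the slide mentions; the rest of the talk is about what it hides.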

5 Detectors Analyzed as Examples on VOC 2007
Deformable Parts Model (DPM), Felzenszwalb et al. 2010 (v4): sliding window; mixture of HOG templates with latent HOG parts
Multiple Kernel Learning (MKL), Vedaldi et al. 2009: jumping window; various spatial pyramid bag-of-words features combined with MKL

6 Top false positives: airplane (DPM), AP = 0.36
– Similar objects 33% (bird, boat, car)
– Localization 29%
– Background 27%
– Other objects 11%
[Chart: impact of removing/fixing false positives]
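The four false-positive categories in these breakdowns can be assigned mechanically from a detection's overlap with ground truth. A sketch under assumed thresholds (overlap of at least 0.1 with the target class counts as a localization error, including duplicate detections; `classify_fp` and its argument names are illustrative, not the paper's toolkit API):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def classify_fp(det_box, det_class, gt_boxes, gt_classes, similar):
    """Assign a false positive to one of the four slide categories.

    similar : set of class names deemed 'similar' to det_class
    The 0.1 overlap threshold is an assumption in the spirit of the paper.
    """
    best = {"loc": 0.0, "sim": 0.0, "oth": 0.0}
    for box, cls in zip(gt_boxes, gt_classes):
        o = iou(det_box, box)
        if cls == det_class:
            best["loc"] = max(best["loc"], o)
        elif cls in similar:
            best["sim"] = max(best["sim"], o)
        else:
            best["oth"] = max(best["oth"], o)
    if best["loc"] >= 0.1:
        return "localization"
    if best["sim"] >= 0.1:
        return "similar_objects"
    if best["oth"] >= 0.1:
        return "other_objects"
    return "background"
```

Running this over a detector's top-ranked false positives yields percentage breakdowns like the ones on this slide.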

7 Top false positives: dog (DPM), AP = 0.03
– Similar objects 50% (person, cat, horse)
– Background 23%
– Localization 17%
– Other objects 10%
[Chart: impact of removing/fixing false positives]

8 Top false positives: dog (MKL), AP = 0.17
– Similar objects 74% (cow, person, sheep, horse)
– Localization 17%
– Other objects 5%
– Background 4%
[Images: top 5 false positives; chart: impact of removing/fixing false positives]

9 Summary of False Positive Analysis: DPM v4 (FGMR 2010) vs. MKL (Vedaldi et al. 2009)

10 Analysis of object characteristics
Additional annotations for seven categories: occlusion level, parts visible, sides visible

11 Normalized Average Precision
Average precision is sensitive to the number of positive examples
Normalized average precision: replace the variable N_j (the number of object examples in subset j) with a fixed N
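The substitution on this slide can be made concrete. A sketch of normalized AP as I read it from the paper (precision at a rank becomes P_N = R·N / (R·N + FP), with R the recall within the subset and FP the false-positive count; treat the exact formula and the function name as assumptions):

```python
def normalized_ap(scores, is_tp, n_subset, n_fixed):
    """Normalized AP: replace the subset's positive count N_j with a
    fixed N so subsets of different sizes are directly comparable.

    scores   : detection confidences
    is_tp    : 1 if the detection is a true positive, else 0
    n_subset : N_j, number of positives in this subset
    n_fixed  : N, the fixed replacement count
    """
    ranked = sorted(zip(scores, is_tp), key=lambda d: -d[0])
    tp, fp, ap_n = 0, 0, 0.0
    for _, hit in ranked:
        if hit:
            tp += 1
            recall = tp / n_subset
            # normalized precision at this true-positive rank
            ap_n += recall * n_fixed / (recall * n_fixed + fp)
        else:
            fp += 1
    return ap_n / n_subset
```

With `n_fixed == n_subset` this reduces to standard AP, which is a useful sanity check.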

12 Object characteristics: Aeroplane

13 Occlusion: poor robustness to occlusion, but little impact on overall performance
Occlusion subsets range from easier (none) to harder (heavy)

14 Object characteristics: Aeroplane
Size: strong preference for average- to above-average-sized airplanes
Size subsets from smallest to largest: X-small, small, medium, large, X-large

15 Object characteristics: Aeroplane
Aspect ratio: 2-3x better at detecting wide (side) views than tall views
Aspect subsets: X-tall, tall, medium, wide, X-wide; easier (wide) to harder (tall)

16 Object characteristics: Aeroplane
Sides/parts: best performance comes from a direct side view with all parts visible
Easier (side) vs. harder (non-side)

17 Summarizing Detector Performance
DPM (v4): sensitivity and impact, summarized by the average performance of the best case, the average performance of the worst case, and the average overall performance
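One plausible reading of this summary, sketched in code: for each characteristic, sensitivity is the gap between the best- and worst-performing subsets, and potential impact is the gap between the best subset and overall performance (the function name and this exact pairing are my assumptions, not quoted from the paper):

```python
def sensitivity_and_impact(subset_apns, overall_apn):
    """Summarize a detector's response to one object characteristic.

    subset_apns : normalized AP for each subset of the characteristic
                  (e.g. each occlusion level)
    overall_apn : normalized AP over all examples
    Returns (sensitivity, potential_impact).
    """
    best, worst = max(subset_apns), min(subset_apns)
    sensitivity = best - worst       # how much the characteristic matters
    impact = best - overall_apn      # headroom if the hard cases were fixed
    return sensitivity, impact
```

This makes the occlusion finding on the next slides easy to state: the best/worst gap is large (high sensitivity) but occluded examples are rare, so the best/overall gap stays small (low impact).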

18 Summarizing Detector Performance: Best, Average, and Worst Case
DPM (FGMR 2010) vs. MKL (Vedaldi et al. 2009) across characteristics: occlusion, truncation, size, view, part visibility, aspect ratio
[Charts: sensitivity and impact per characteristic]

19 Summarizing Detector Performance: Best, Average, and Worst Case
DPM (FGMR 2010) vs. MKL (Vedaldi et al. 2009): occlusion shows high sensitivity but low potential impact

20 Summarizing Detector Performance: Best, Average, and Worst Case
DPM (FGMR 2010) vs. MKL (Vedaldi et al. 2009): MKL is more sensitive to size

21 Summarizing Detector Performance: Best, Average, and Worst Case
DPM (FGMR 2010) vs. MKL (Vedaldi et al. 2009): DPM is more sensitive to aspect ratio

22 Conclusions
Most errors that detectors make are reasonable
– localization error and confusion with similar objects
– misdetection of occluded or small objects
Large improvements in specific areas (e.g., removing all background FPs or achieving full robustness to occlusion) have small impact on overall AP
– more specific analysis should be standard
Our code and annotations are available online
– automatic generation of analysis summaries from standard annotations

23 Thank you!
[Recap of slide 8: top dog false positives (MKL), AP = 0.17: similar objects 74% (cow, person, sheep, horse), localization 17%, other objects 5%, background 4%]


