
Learning to Detect Natural Image Boundaries Using Local Brightness, Color, and Texture Cues David R. Martin Charless C. Fowlkes Jitendra Malik.




1 Learning to Detect Natural Image Boundaries Using Local Brightness, Color, and Texture Cues David R. Martin Charless C. Fowlkes Jitendra Malik

2 Paper Contribution
Proposed boundary classifier
–Uses local image features
–Detects and localizes boundaries
  Probability that each pixel belongs to a boundary
  Determines the boundary angle
–Robust on natural scenes
–Supervised learning using human-segmented images

3 Feature Hierarchy
Cues for perceptual grouping:
–Low-level: brightness, color, texture, depth, motion
–Mid-level: continuity, closure, convexity, symmetry, …
–High-level: familiar objects and configurations
This paper
–Strictly uses low-level features
Comparison to other low-level classifiers
–Canny
–Edge detector based on eigenvalues of the second moment matrix

4 Boundary Detection
Other methods detect edges
–Abrupt changes in a low-level feature
–Ad-hoc additions introduce errors
This method detects boundaries
–Contours separating objects
–Resembling human abilities

5 Goal & Outline
Goal: model the posterior probability of a boundary Pb(x, y, θ) at each pixel and orientation using local cues
Method: supervised learning using a dataset of 12,000 segmentations of 1,000 images by 30 subjects
Outline:
–3 image-feature cues: brightness, color, texture
–Cue calibration
–Cue combination
–Comparison with other approaches

6 Image Features
Look at the region around each pixel for feature discontinuities
–Range of orientations
–Range of scales
Features
–Oriented energy (not used)
–Brightness gradient
–Color gradient
–Texture gradient

7 Image Features
All 3 features are gradient-based
Gradient found by:
–For every pixel (x, y), draw a circle of radius r
–Split the circle in half along a diameter at orientation θ
–Compare the contents of the two halves: G(x, y, θ, r)
  A large difference indicates an edge
–Repeat for every θ and r
Implementation
–8 orientations θ
–3 scales r
[Figure: disc at (x, y) with radius r and orientation θ]
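The half-disc comparison above can be sketched as follows. This is an illustrative toy version, not the authors' code: the function name, bin count, intensity range in [0, 1], and the use of a χ² histogram difference as the comparison (described on the later cue slides) are assumptions.

```python
import numpy as np

def half_disc_gradient(image, x, y, theta, r, bins=16):
    """Sketch: compare intensity histograms of the two halves of a disc
    of radius r centered at (x, y), split by a diameter at orientation
    theta.  A large difference between the halves suggests an edge."""
    h, w = image.shape
    # Normal to the splitting diameter decides which half a pixel is in.
    n = np.array([np.cos(theta + np.pi / 2), np.sin(theta + np.pi / 2)])
    left, right = [], []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dx * dx + dy * dy > r * r:
                continue  # outside the disc
            px, py = x + dx, y + dy
            if not (0 <= px < w and 0 <= py < h):
                continue  # outside the image
            (left if dx * n[0] + dy * n[1] >= 0 else right).append(image[py, px])
    hl, _ = np.histogram(left, bins=bins, range=(0.0, 1.0))
    hr, _ = np.histogram(right, bins=bins, range=(0.0, 1.0))
    hl = hl / max(hl.sum(), 1)
    hr = hr / max(hr.sum(), 1)
    # Chi-squared histogram difference between the normalized halves.
    denom = hl + hr
    denom[denom == 0] = 1.0
    return 0.5 * np.sum((hl - hr) ** 2 / denom)
```

On a vertical step edge, the response is large when the splitting diameter aligns with the edge and small when it is perpendicular to it.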

8 Image Features
[Figure: example disc responses at boundaries vs. non-boundaries for each channel (I, T, B, C)]

9 Brightness Features
Brightness gradient BG(x, y, r, θ)
–Model intensity values in each disk half using kernel density estimation
–Create a histogram of the distribution
–Compare disk halves by comparing histograms
  χ² difference in the L* distribution
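The "kernel density estimation plus χ² comparison" step might be read as code like the 1-D toy below; the bin count, Gaussian kernel, and bandwidth are assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def kde_histogram(values, bins=32, bandwidth=0.05, lo=0.0, hi=1.0):
    """Sketch: kernel density estimate of a 1-D sample, evaluated at
    bin centers and normalized, so it can be compared like a histogram."""
    centers = lo + (np.arange(bins) + 0.5) * (hi - lo) / bins
    values = np.asarray(values, dtype=float)
    # Sum a Gaussian kernel placed at every sample.
    diffs = centers[None, :] - values[:, None]
    dens = np.exp(-0.5 * (diffs / bandwidth) ** 2).sum(axis=0)
    return dens / dens.sum()

def chi2(h1, h2):
    """Chi-squared difference between two normalized histograms."""
    denom = np.where(h1 + h2 == 0, 1.0, h1 + h2)
    return 0.5 * np.sum((h1 - h2) ** 2 / denom)
```

Identical samples give a χ² difference of zero; disjoint samples give a difference near one.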

10 Color Features
Color gradient CG(x, y, r, θ)
–Model color values in each disk half using kernel density estimation
–Color space
  Red–green (a)
  Yellow–blue (b)
–A joint 2D space (a×b) greatly increases computation
–Instead, use two 1D marginal spaces (a and b)
–Create 2 histograms of kernel densities
–Compare disk halves by comparing histograms
  χ² difference in the a distribution plus the b distribution
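The marginal shortcut above can be sketched as follows: two 1-D histograms (2 × bins cells) instead of one joint 2-D histogram (bins² cells), with their χ² differences summed. Ranges, bin count, and the plain histograms standing in for KDE are illustrative assumptions.

```python
import numpy as np

def marginal_color_diff(a1, b1, a2, b2, bins=16):
    """Sketch: compare the color content of two disc halves using 1-D
    marginal histograms of the a and b channels, summing the two
    chi-squared differences."""
    def norm_hist(v):
        h, _ = np.histogram(v, bins=bins, range=(-1.0, 1.0))
        return h / max(h.sum(), 1)

    def chi2(p, q):
        denom = np.where(p + q == 0, 1.0, p + q)
        return 0.5 * np.sum((p - q) ** 2 / denom)

    return chi2(norm_hist(a1), norm_hist(a2)) + chi2(norm_hist(b1), norm_hist(b2))
```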

11 Texture Features
Texture gradient TG(x, y, r, θ)
–Pixel texture values are computed using 13 filters
–Pixels are thus represented by a 13-element feature vector
–Each disk half is modeled by a point cloud of vectors in 13-dimensional space
–Problem: how does one compare two 13D point clouds?

12 Texture Features
Solution: textons estimate the joint distribution using adaptive bins
–Filter-response vectors are clustered using k-means
–Cluster centers represent texture primitives (textons)
–Example texton set
  K = 64
  Trained using 200 images
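The texton-learning step could look roughly like the sketch below: plain k-means over filter-response vectors, with the cluster centers serving as textons. The iteration count and toy K are assumptions; the paper uses a 13-filter bank and K around 64.

```python
import numpy as np

def learn_textons(responses, k=8, iters=20, seed=0):
    """Sketch: cluster per-pixel filter-response vectors with k-means;
    the cluster centers are the textons."""
    rng = np.random.default_rng(seed)
    # Initialize centers from randomly chosen response vectors.
    centers = responses[rng.choice(len(responses), k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest center.
        d = np.linalg.norm(responses[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = responses[labels == j].mean(axis=0)
    return centers, labels
```

On two well-separated clouds of 13-D vectors, two clusters recover the two "textures".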

13 Texture Features
Texture gradient TG(x, y, r, θ)
–Pixels in each disk half are assigned to the nearest texton
–Disk halves are represented by histograms of textons
–Compare disk halves by comparing histograms
  χ² difference in the texton distribution
[Figure: texton map]
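The nearest-texton assignment and histogram step can be sketched as below; the function name and toy dimensions are assumptions.

```python
import numpy as np

def texton_histogram(responses, textons):
    """Sketch: assign each pixel's filter-response vector to its nearest
    texton, then return the normalized texton histogram for one disk
    half (the histograms of the two halves are then compared with a
    chi-squared difference)."""
    d = np.linalg.norm(responses[:, None, :] - textons[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(textons)).astype(float)
    return hist / hist.sum()
```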

14 Feature Localization
Problem: features can't be localized because they don't form sharp peaks around boundaries
–Smooth peaks
–Double peaks
Solution: for each pixel
–Use least squares to fit a cylindrical parabola over the 2D window of radius r
–The parabola's center gives the localized edge
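A 1-D sketch of the localization idea: fit an ordinary parabola by least squares and take its vertex as the edge position. The actual method fits a cylindrical parabola over a 2-D window; this reduced version just illustrates how both a smooth peak and a double peak collapse to a single location.

```python
import numpy as np

def localize_peak(signal):
    """Sketch: least-squares fit of a parabola a*x^2 + b*x + c to a 1-D
    window of gradient responses; the vertex is the localized edge."""
    x = np.arange(len(signal), dtype=float)
    a, b, c = np.polyfit(x, signal, 2)  # degree-2 least-squares fit
    if a >= 0:
        return None  # parabola opens upward: no peak in this window
    return -b / (2 * a)  # vertex of the fitted parabola
```

A smooth peak centered at index 3 localizes to 3; a double peak at indices 1 and 3 localizes to the single position 2 between them.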

15 Evaluation
Boundary-detector quality
–Used for optimizing parameters
–Used for comparing to other techniques
Human-marked boundaries as ground truth
–1,000 images, 5–10 segmentations each
–Highly consistent

16 Evaluation
Compare to ground truth using precision–recall
–Sensitivity vs. noise
–The optimal tradeoff point (maximum F-measure) is used for comparison
[Figure: precision–recall curve; higher precision means fewer false positives, higher recall means fewer misses, and the goal is the maximum-F point]
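The precision, recall, and F-measure quantities can be sketched over sets of boundary pixels as below. This is a simplification: the real benchmark also tolerates small localization error when matching machine pixels to human pixels.

```python
def pr_and_f(machine_edges, human_edges):
    """Sketch: precision, recall, and their harmonic mean (F-measure)
    for a set of machine boundary pixels against human ground truth."""
    tp = len(machine_edges & human_edges)  # correctly detected pixels
    precision = tp / len(machine_edges) if machine_edges else 0.0
    recall = tp / len(human_edges) if human_edges else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f
```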

17 Cue Calibration
All free parameters optimized on training data
Brightness gradient
–Disc radius; bin/kernel sizes for KDE
Color gradient
–Disc radius; bin/kernel sizes for KDE; joint vs. marginals
Texture gradient
–Disc radius
–Filter bank: scale, multiscale vs. single scale
–Histogram comparison method: χ², EMD, etc.
–Number of textons (k-means k)
–Image-specific vs. universal textons
Localization parameters for each cue

18 Eliminate Redundant Cues
Supervised learning using ground-truth images
–Oriented energy (OE) carries the same information as BG
–Multiple scales do not add accuracy
Best results
–BG + TG + CG at a single scale

19 Classifiers for Cue Combination
Supervised learning using ground-truth images
Logistic regression
–Linear and quadratic terms
–Stable, quick, compact, intuitive
–Training: minutes; evaluation: negligible compared to feature detection
Density estimation
–Adaptive bins using k-means
–Training: minutes; evaluation: negligible compared to feature detection
Classification trees
–Top-down splits to maximize information gain, error bounded
–Training: hours; evaluation: many times longer than regression
Hierarchical mixtures of experts
–8 experts, initialized top-down, fit with EM
–Training: hours; evaluation: 15× longer than regression
Support vector machines (libsvm)
–Terrible: the model was large, exceedingly slow, brittle (parameters), opaque
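The winning cue-combination model, logistic regression over the per-pixel cue values, can be sketched as below. This trains with plain gradient descent on a synthetic dataset; the learning rate, iteration count, and data are illustrative assumptions (the paper also evaluates quadratic terms).

```python
import numpy as np

def train_logistic(features, labels, lr=1.0, iters=3000):
    """Sketch: logistic regression mapping a per-pixel cue vector
    (e.g. BG, CG, TG values) to P(boundary), fit by gradient descent."""
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        w -= lr * X.T @ (p - labels) / len(labels)  # gradient step
    return w

def predict_pb(w, cue_vector):
    """Posterior probability of a boundary for one pixel's cues."""
    z = np.dot(w[:-1], cue_vector) + w[-1]
    return 1.0 / (1.0 + np.exp(-z))
```

On synthetic cues whose sum determines the label, strong cues yield a high boundary probability and weak cues a low one.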

20 Classifier Comparison

21 Cue Combinations

22 Two Decades of Boundary Detection

23 Pb Images I
[Figure: example results; columns show Image, Human, Us, 2MM, Canny]

24 Pb Images II
[Figure: example results; columns show Image, Human, Us, 2MM, Canny]

25 Pb Images III
[Figure: example results; columns show Image, Human, Us, 2MM, Canny]

26 Summary & Comments
Edge detection does not optimally segment natural images
–Texture suppression is not sufficient
The proposed method offers significant improvements
–Simple but powerful feature detectors
–Simple model for cue combination
Empirical approach to calibration and cue combination
Surprisingly effective for a low-level approach
Likely not robust to larger textures
–Offset by using multiple scales
Prohibitively slow for on-line use
–Minutes per image, even after optimizations
Normalized Cuts: boundaries → regions




