Kullback-Leibler Boosting

Presentation on theme: "Kullback-Leibler Boosting"— Presentation transcript:

1 Kullback-Leibler Boosting
Ce Liu and Heung-Yeung Shum, Microsoft Research Asia

2 A General Two-Layer Classifier
Diagram: input layer → intermediate layer → output. The input is mapped by projection functions, each projection is passed through a discriminating function, the responses are combined with coefficients, and an identification function produces the final output (a compact formula follows below).
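The slide's diagram is not reproduced in this transcript; as a hedged summary in my own notation (not taken from the slide), the two-layer form can be written as

$$
F(\mathbf{x}) = \operatorname{sign}\Big(\sum_{k=1}^{K} \alpha_k\, f_k\big(g_k(\mathbf{x})\big)\Big)
$$

where g_k is a projection function (e.g. a linear projection g_k(x) = w_k^T x), f_k a discriminating function, α_k the combining coefficients, and the sign operation plays the role of the identification function.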

3 Issues under the Two-Layer Framework
- How to choose the type of projection function?
- How to choose the type of discriminating function? (examples on the slide: sigmoid, RBF, polynomial)
- How to learn the parameters from samples?

4 Our Proposal: Kullback-Leibler Boosting (KLBoosting)
- How to choose the type of projection function? Kullback-Leibler linear features.
- How to choose the type of discriminating function? Histogram divergences.
- How to learn the parameters from samples? Sample re-weighting (boosting).

5 Intuitions
- Linear projection is robust and easy to compute.
- The histograms of the two classes along a projection are evidence for classification.
- The linear feature along which the two class histograms differ most should be selected.
- If the weight distribution of the sample set changes, the histograms change as well.
- Increase the weights of misclassified samples and decrease the weights of correctly classified samples (a small sketch of the weighted-histogram divergence follows below).
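As a hedged illustration of these intuitions (not code from the paper; all names are made up), the sketch below histograms two weighted sample sets along a linear projection and returns the symmetric KL divergence between the histograms. Changing the sample weights changes the histograms, and hence the divergence:

```python
import numpy as np

def projected_kl(pos, neg, w, pos_wts=None, neg_wts=None, bins=64, eps=1e-12):
    """Project both classes onto w, build weighted histograms on a shared
    binning, and return the symmetric KL divergence between them.
    Illustrative only; the paper's exact divergence and binning may differ."""
    zp, zn = pos @ w, neg @ w
    edges = np.linspace(min(zp.min(), zn.min()), max(zp.max(), zn.max()), bins + 1)
    hp, _ = np.histogram(zp, edges, weights=pos_wts)
    hn, _ = np.histogram(zn, edges, weights=neg_wts)
    hp = hp / hp.sum() + eps
    hn = hn / hn.sum() + eps
    # symmetric divergence: KL(p_+ || p_-) + KL(p_- || p_+)
    return float(np.sum(hp * np.log(hp / hn)) + np.sum(hn * np.log(hn / hp)))
```

Re-weighting misclassified samples shifts histogram mass toward the hard examples, so subsequently selected features concentrate on what the current classifier gets wrong.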

6 Linear projections and histograms

7 KLBoosting (1)
At the k-th iteration the slide defines three quantities (the equations themselves are images and are not preserved in this transcript; a hedged reconstruction follows below):
- the Kullback-Leibler feature
- the discriminating function
- the reweighting rule
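The following is a reconstruction in my own notation, consistent with the rest of the deck (a histogram log-ratio weak learner plus boosting-style reweighting) but not verified against the original slide:

$$
\mathbf{w}_k = \arg\max_{\mathbf{w}} \Big[ \mathrm{KL}\big(p_{+}^{\mathbf{w}} \,\|\, p_{-}^{\mathbf{w}}\big) + \mathrm{KL}\big(p_{-}^{\mathbf{w}} \,\|\, p_{+}^{\mathbf{w}}\big) \Big]
$$
$$
h_k(\mathbf{x}) = \tfrac{1}{2}\,\log\frac{p_{+}(\mathbf{w}_k^{\top}\mathbf{x})}{p_{-}(\mathbf{w}_k^{\top}\mathbf{x})}
$$
$$
D_{k+1}(\mathbf{x}_i) \propto D_k(\mathbf{x}_i)\, \exp\!\big(-y_i\,\alpha_k\,h_k(\mathbf{x}_i)\big)
$$

Here p_±^w are the weighted class histograms along w, y_i ∈ {+1, −1} are the labels, and α_k is the combining coefficient learned as described on the next slide.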

8 KLBoosting (2)
Two types of parameters to learn:
- KL features: learned in low dimensions by MCMC.
- Combining coefficients: learned to minimize the training error; optimization by brute-force search (see the sketch below).
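The slide only names the optimization strategy; here is a minimal sketch of what a brute-force search for one combining coefficient could look like (function and variable names are mine, not the paper's):

```python
import numpy as np

def search_coefficient(weak_scores, ensemble_scores, labels,
                       grid=np.linspace(0.0, 2.0, 201)):
    """Brute-force 1-D search: try every candidate coefficient on a grid and
    keep the one minimizing the training error of the combined classifier.
    labels are +1 / -1; scores are real-valued classifier outputs."""
    best_alpha, best_err = 0.0, np.inf
    for alpha in grid:
        preds = np.sign(ensemble_scores + alpha * weak_scores)
        err = float(np.mean(preds != labels))
        if err < best_err:
            best_alpha, best_err = alpha, err
    return best_alpha, best_err
```

Note that the paper optimizes the coefficients globally (see the AdaBoost comparison later in the deck); this one-dimensional version only illustrates the brute-force idea.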

9 Flowchart
Input → initialize sample weights → learn a KL feature → learn the combining coefficients → update weights → if the recognition error is small enough, output the classifier; otherwise loop back and learn the next KL feature. (The +1/−1 labels in the figure denote the two classes.)
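Putting the flowchart together, a toy end-to-end loop might look like the following. It is a crude stand-in (random candidate projections instead of KLA feature pursuit, simple histogram scoring), not the authors' implementation:

```python
import numpy as np

def toy_klboost(X, y, rounds=5, n_cand=100, bins=32, seed=0):
    """Toy loop mirroring the flowchart: initialize weights, pick the candidate
    projection whose weighted class histograms differ most, score samples with
    the histogram log-ratio, brute-force a combining coefficient, reweight,
    and stop once the training error is small enough."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    wts = np.full(n, 1.0 / n)                      # initialize weights
    margin = np.zeros(n)                           # running ensemble output
    model = []
    for _ in range(rounds):
        best = None
        for _ in range(n_cand):                    # stand-in for KLA feature pursuit
            w = rng.standard_normal(d)
            w /= np.linalg.norm(w)
            z = X @ w
            edges = np.linspace(z.min(), z.max(), bins + 1)
            hp, _ = np.histogram(z[y > 0], edges, weights=wts[y > 0])
            hn, _ = np.histogram(z[y < 0], edges, weights=wts[y < 0])
            hp = hp / hp.sum() + 1e-9
            hn = hn / hn.sum() + 1e-9
            kl = np.sum(hp * np.log(hp / hn)) + np.sum(hn * np.log(hn / hp))
            if best is None or kl > best[0]:
                best = (kl, w, edges, hp, hn)
        _, w, edges, hp, hn = best
        idx = np.clip(np.digitize(X @ w, edges) - 1, 0, bins - 1)
        h = 0.5 * np.log(hp[idx] / hn[idx])        # histogram log-ratio score
        alphas = np.linspace(0.0, 2.0, 101)        # brute-force coefficient search
        errs = [np.mean(np.sign(margin + a * h) != y) for a in alphas]
        alpha = alphas[int(np.argmin(errs))]
        margin += alpha * h
        wts = np.exp(-y * margin)                  # up-weight misclassified samples
        wts /= wts.sum()
        model.append((w, edges, hp, hn, alpha))
        if np.mean(np.sign(margin) != y) < 1e-3:   # recognition error small enough?
            break
    return model
```

Classifying a new sample then amounts to summing α_k times the stored histogram log-ratio score over all learned features and thresholding the total at zero.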

10 A Simple Example (figure: KL features, the resulting histograms, and the decision manifold)

11 A Complicated Case

12 Kullback-Leibler Analysis (KLA)
Finding a KL feature directly in image space is a challenging task. Sequential 1D optimization:
- Construct a feature bank.
- Build a set of the most promising features.
- Sequentially perform 1D optimization along the promising features (see the sketch below).
Conjecture: the global optimum of the objective function can be reached by searching along as many linear features as needed.
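As a hedged sketch of the sequential 1D idea (not the paper's KLA code; the objective and the direction set are left abstract), the optimizer repeatedly line-searches the objective along each promising direction and keeps any improvement:

```python
import numpy as np

def sequential_1d_opt(objective, directions, w0,
                      steps=np.linspace(-1.0, 1.0, 41), sweeps=3):
    """Greedy stand-in for sequential 1-D optimization: starting from w0,
    line-search the objective along each promising direction in turn and keep
    the best step. 'objective' maps a unit vector to a scalar, e.g. the KL
    divergence between projected class histograms."""
    w = w0 / np.linalg.norm(w0)
    for _ in range(sweeps):
        for b in directions:
            cands = [w + t * b for t in steps]
            cands = [c / np.linalg.norm(c) for c in cands if np.linalg.norm(c) > 0]
            vals = [objective(c) for c in cands]
            best = cands[int(np.argmax(vals))]
            if objective(best) > objective(w):
                w = best
    return w
```

With the objective set to something like projected_kl from the earlier sketch and the directions drawn from the feature bank, this is the greedy search the conjecture above refers to.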

13 Intuition of Sequential 1D Optimization
(Figure: the result of sequential 1D optimization, the promising feature set, the feature bank, and an MCMC feature shown for comparison.)

14 Optimization in Image Space
An image is a random field, not a single random variable; its local statistics can be captured by wavelets.
- 111 × 400 small-scale wavelets over the whole 20×20 patch (44,400)
- 80 × 100 large-scale wavelets over the inner 10×10 patch (8,000)
- 52,400 wavelets in total compose the feature bank
- the 2,800 most promising wavelets are selected
(Figure: Gaussian-family wavelets, Haar wavelets, and the feature bank.)

15 Data-driven KLA
Face patterns vs. non-face patterns. At each position of the 20×20 lattice, compute the histograms of the 111 wavelets (the feature bank) and the KL divergences between face and non-face images; large-scale wavelets on the 10×10 inner lattice capture the global statistics. The most discriminative candidates form the promising feature set (2,800 features in total), and the KL feature is composed from them by sequential 1D optimization (see the sketch below).
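An illustrative stand-in for this data-driven selection step (my names; it reuses projected_kl from the earlier sketch): score every candidate feature by the KL divergence between face and non-face histograms and keep the most promising ones.

```python
import numpy as np

def build_promising_set(faces, nonfaces, candidate_features, top_k=2800):
    """Rank candidate linear features (assumed here to be flattened 20x20
    filters, one per wavelet type and lattice position) by the KL divergence
    between face / non-face histograms and keep the top_k most discriminative.
    Illustrative only; the paper's exact selection procedure may differ."""
    scores = np.array([projected_kl(faces, nonfaces, w)   # from the earlier sketch
                       for w in candidate_features])
    order = np.argsort(scores)[::-1][:top_k]
    return candidate_features[order], scores[order]
```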

16 Comparison with Other Features
KL feature: KL = (value not preserved in this transcript); MCMC feature: KL = 3.246; best Haar wavelet: KL = 2.944.

17 Application: Face Detection
Experimental setup:
- 20×20 patches to represent faces
- 17,520 frontal faces; 1,339,856,947 non-face patches from 2,484 images
- 300 bins in the histogram representation
- a cascade of KLBoosting classifiers; within each classifier the false negative rate is kept below 0.01% and the false alarm rate below 35%
- 22 classifiers in total form the cascade (450 features)
(A back-of-the-envelope rate check follows below.)
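Under the idealized assumption that the cascade stages behave independently (an assumption on my part, not a claim from the slide), the per-stage rates above imply roughly the following end-to-end behavior:

```python
fa_stage = 0.35      # per-stage false alarm rate (upper bound from the slide)
fn_stage = 1e-4      # per-stage false negative rate (0.01%)
stages = 22

overall_false_alarm = fa_stage ** stages        # ~9.3e-11 of non-face windows survive
overall_detection = (1 - fn_stage) ** stages    # ~0.9978 of faces pass every stage
print(overall_false_alarm, overall_detection)
```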

18 KL Features of Face Detector
(Figure: face and non-face patterns; the first 10 KL features; further KL features grouped as global semantics, frequency filters, and local features; some other KL features.)

19 ROC Curve

20 Some Detection Results

21 Comparison with AdaBoost

22 Compared with AdaBoost
Base classifier: KLBoosting uses a KL feature plus histogram divergence; in AdaBoost the weak classifiers are selected from experience.
Combining coefficients: KLBoosting optimizes them globally to minimize the training error; AdaBoost sets them empirically so that each is incrementally optimal.

23 Summary
- KLBoosting is an optimal classifier: the projection function is a linear projection, the discriminating function is a histogram divergence, and the coefficients are optimized by minimizing the training error.
- KLA: a data-driven approach to pursue KL features.
- Applications in face detection.

24 Thank You!
Harry Shum, Microsoft Research Asia (hshum@microsoft.com)

25 Compared with SVM
Support vectors: in KLBoosting, KL features are learned to optimize the KL divergence (a few); in SVM, support vectors are selected from the training samples (many).
Kernel function: KLBoosting uses histogram divergence (flexible); the SVM kernel is selected from experience (fixed).

