
1 Face Detection Using Large Margin Classifiers Ming-Hsuan Yang Dan Roth Narendra Ahuja Presented by Kiang “Sean” Zhou Beckman Institute University of Illinois at Urbana-Champaign Urbana, IL 61801

2 Overview
- Large margin classifiers have demonstrated success in visual learning: the Support Vector Machine (SVM) and the Sparse Network of Winnows (SNoW)
- Aim: present a theoretical account of their success and suitability for visual recognition
- Theoretical and empirical analysis of these two classifiers within the context of face detection
- Generalization error: the expected error on test data
- Efficiency: the computational capability to represent features

3 Face Detection
- Goal: identify and locate human faces in an image (usually gray scale), regardless of position, scale, in-plane rotation, orientation, pose, and illumination
- The first step for any automatic face recognition system, and a very difficult problem!
- First aim: detect upright, frontal faces, with some ability to handle different pose, scale, and illumination
- See "Detecting Faces in Images: A Survey" by M.-H. Yang, D. Kriegman, and N. Ahuja, to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002. http://vision.ai.uiuc.edu/mhyang/face-detection-survey.html
- Where are the faces, if any?

4 Large Margin Classifiers
- Based on a linear decision surface (hyperplane): f: wᵀx + b = 0
- Compute w and b from training samples
- SNoW: based on Winnow, with a multiplicative update rule
- SVM: based on the Perceptron, with an additive update rule
- Though SVM can be developed independently of its relation to the Perceptron, we view both as large margin classifiers for the sake of the theoretical analysis
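For concreteness, here is a minimal sketch (ours, not from the paper) contrasting the two update styles on a single mistaken example; `eta` and `alpha` are illustrative hyperparameters:

```python
import numpy as np

def perceptron_update(w, b, x, y, eta=1.0):
    """Additive update (Perceptron/SVM family): shift w along the example."""
    return w + eta * y * x, b + eta * y

def winnow_update(w, x, y, alpha=2.0):
    """Multiplicative update (Winnow/SNoW family): rescale active weights."""
    # Promote (y = +1) or demote (y = -1) only the features that fired (x > 0).
    return w * alpha ** (y * (x > 0))

w = np.array([1.0, 2.0, 4.0, 1.0, 1.0])
x = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
print(perceptron_update(w, 0.0, x, y=+1)[0])  # [2. 2. 5. 1. 1.] -- adds x
print(winnow_update(w, x, y=+1))              # [2. 2. 8. 1. 1.] -- doubles active weights
```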

5 Sparse Network of Winnows (SNoW)
[Architecture diagram: a feature vector feeding a set of target nodes]
- Online, mistake-driven algorithm based on Winnow
- Attribute (feature) efficient
- Allocation of nodes and links is data driven; time complexity depends on the number of active features
- Mechanisms for discarding irrelevant features
- Allows combining tasks hierarchically

6 Winnow Update Rule
- Multiplicative weight update algorithm
- The number of mistakes in training is O(k log n), where k is the number of relevant features of the concept and n is the total number of features
- Tolerates a large number of features: the mistake bound is logarithmic in the number of features
- Advantageous when the function space is sparse
- Robust in the presence of noisy features
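A self-contained sketch of a Winnow-style learner, assuming binary features and the textbook promotion/demotion factors (α = 2, β = 1/2 — illustrative choices, not values from the paper); it also reflects the data-driven allocation from the previous slide, since weights exist only for features that have actually fired:

```python
from collections import defaultdict

class SparseWinnow:
    """Mistake-driven Winnow over sparse binary features (one SNoW target node).

    Weights are allocated lazily, so the cost per example depends on the
    number of active features, not on the size n of the full feature space.
    """

    def __init__(self, theta=1.0, alpha=2.0, beta=0.5):
        self.theta = theta                  # decision threshold
        self.alpha = alpha                  # promotion factor (false negative)
        self.beta = beta                    # demotion factor (false positive)
        self.w = defaultdict(lambda: 1.0)   # weights default to 1

    def predict(self, active):
        """active: indices of the features that fired for this example."""
        return sum(self.w[i] for i in active) >= self.theta

    def update(self, active, label):
        """Multiplicative update, only on a mistake, only on active features."""
        if self.predict(active) == label:
            return 0
        factor = self.alpha if label else self.beta
        for i in active:
            self.w[i] *= factor
        return 1

# Tiny demo: a target concept over a huge feature space, sparse examples.
node = SparseWinnow(theta=2.0)
data = [({1, 3}, True), ({2}, False), ({1}, True), ({400_000, 400_001}, False)]
for _ in range(5):
    mistakes = sum(node.update(active, y) for active, y in data)
print("weights allocated:", len(node.w))    # far fewer than the feature space
```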

7 Support Vector Machine (SVM)
- Can be viewed as a Perceptron with maximum margin
- Based on statistical learning theory
- Extends to nonlinear SVMs via the kernel trick: computational efficiency with an expressive representation of nonlinear features
- Has demonstrated excellent empirical results in visual recognition tasks
- Training can be time consuming, though fast algorithms have been developed
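As an illustration of the kernel trick (a sketch using scikit-learn, not the paper's own training setup), the following trains an SVM with a 2nd-order polynomial kernel, the kernel used in the experiments later; the data here is random stand-in data:

```python
import numpy as np
from sklearn.svm import SVC

# Random stand-in for face/nonface data: rows are flattened 20x20 patches.
rng = np.random.default_rng(0)
X = rng.random((200, 400))           # 200 patches, 400 intensity values each
y = rng.integers(0, 2, 200)          # 1 = face, 0 = nonface (random labels here)

# Kernel trick: K(u, v) = (gamma * u.v + coef0)^2 implicitly spans all
# degree-2 monomials of the 400 inputs without materializing that feature map.
clf = SVC(kernel="poly", degree=2, coef0=1.0, C=1.0)
clf.fit(X, y)
print("support vectors per class:", clf.n_support_)
```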

8 Generalization Error Bounds: SVM
Theorem 1: If the data is L2-norm bounded, ||x||_2 ≤ b, and the family of hyperplanes w satisfies ||w||_2 ≤ a, then for any margin γ > 0, with probability 1 − δ over n random samples, the misclassification error satisfies
err(w) ≤ (2/n) · ( k_γ + O( (a² b² / γ²) · ln n + ln(1/δ) ) )
where k_γ = |{i : y_i wᵀx_i < γ}| is the number of samples with margin less than γ.

9 Generalization Error Bounds: SNoW
Theorem 2: If the data is L∞-norm bounded, ||x||_∞ ≤ b, and the family of hyperplanes w satisfies ||w||_1 ≤ a and Σ_j w_j ln w_j ≤ c, then for any margin γ > 0, with probability 1 − δ over n random samples, the misclassification error satisfies
err(w) ≤ (2/n) · ( k_γ + O( (a² b² ln(2n) / γ²) + ln(1/δ) ) )
where k_γ = |{i : y_i wᵀx_i < γ}| is the number of samples with margin less than γ.

10 Generalization Error Bounds
In summary, the dominant terms of the two bounds are:
- SVM (additive): E_a ∝ ||w||_2² · max_i ||x_i||_2²
- SNoW (multiplicative): E_m ∝ 2 ln(2n) · ||w||_1² · max_i ||x_i||_∞²
- SNoW has the lower generalization error bound if the data is L∞-norm bounded and there is a small L1-norm hyperplane
- SVM has the lower generalization error bound if the data is L2-norm bounded and there is a small L2-norm hyperplane
- Hence SNoW performs better than SVM when the data has small L∞ norm but large L2 norm
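These dominant terms can be compared numerically. The sketch below (ours, not from the paper) evaluates both for a given weight vector and dataset; the example uses a sparse hyperplane over dense, bounded data, the regime claimed to favor SNoW:

```python
import numpy as np

def bound_terms(w, X):
    """Dominant terms of the two bounds (shared 1/margin^2 factor dropped).

    Additive (SVM):        E_a ~ ||w||_2^2 * max_i ||x_i||_2^2
    Multiplicative (SNoW): E_m ~ 2 ln(2n) * ||w||_1^2 * max_i ||x_i||_inf^2
    """
    n = X.shape[0]
    e_a = np.linalg.norm(w, 2) ** 2 * np.max(np.linalg.norm(X, 2, axis=1)) ** 2
    e_m = (2 * np.log(2 * n) * np.linalg.norm(w, 1) ** 2
           * np.max(np.linalg.norm(X, np.inf, axis=1)) ** 2)
    return e_a, e_m

# Sparse hyperplane, dense bounded data: the regime said to favor SNoW.
rng = np.random.default_rng(1)
X = 0.5 + 0.5 * rng.random((1000, 400))   # intensities bounded in [0.5, 1)
w = np.zeros(400)
w[:10] = 1.0                              # only 10 relevant features
e_a, e_m = bound_terms(w, X)
print(f"E_a ~ {e_a:.0f}  vs  E_m ~ {e_m:.0f}")   # E_m is the smaller here
```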

11 Efficiency
- Features in nonlinear SVMs are more expressive than linear features (and remain efficient thanks to the kernel trick)
- SNoW can use conjunctive features as nonlinear features: represent the co-occurrence (conjunction) of the intensity values of m pixels within a window as a new feature
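The paper's exact feature encoding is not reproduced here; the following is an illustrative sketch that hashes pairwise (m = 2) conjunctions of quantized pixel intensities within a local window into sparse feature ids, which could feed a SNoW-style learner such as the one sketched earlier:

```python
from itertools import combinations

def conjunctive_features(patch, window=4, levels=16):
    """Sparse ids for pairwise conjunctions of quantized pixel intensities.

    Each active feature encodes "pixel (r1,c1) has level v1 AND pixel
    (r2,c2) has level v2" for two pixels inside a common local window,
    i.e., a degree-2 conjunction analogous to a polynomial kernel term.
    """
    h, w = len(patch), len(patch[0])
    q = [[patch[r][c] * levels // 256 for c in range(w)] for r in range(h)]
    cells = [(r, c) for r in range(h) for c in range(w)]
    active = set()
    for (r1, c1), (r2, c2) in combinations(cells, 2):
        if abs(r1 - r2) < window and abs(c1 - c2) < window:
            active.add(hash((r1, c1, q[r1][c1], r2, c2, q[r2][c2])))
    return active

patch = [[(r * c) % 256 for c in range(20)] for r in range(20)]  # toy 20x20 patch
print(len(conjunctive_features(patch)), "active conjunctive features")
```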

12 Experiments
- Training set: 6,977 20×20 upright, frontal images: 2,429 faces and 4,548 nonfaces
- Appearance-based approach: each image is histogram equalized and converted to a vector of intensity values
- Test set: 24,045 images: 472 faces and 23,573 nonfaces
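A sketch of this preprocessing step using plain NumPy (OpenCV's cv2.equalizeHist would be an equivalent one-liner): histogram-equalize a 20×20 patch and flatten it into a 400-dimensional intensity vector:

```python
import numpy as np

def preprocess(patch):
    """Histogram-equalize a 20x20 gray patch and flatten it to a 400-vector."""
    patch = np.asarray(patch, dtype=np.uint8)
    hist = np.bincount(patch.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    denom = max(cdf[-1] - cdf_min, 1)
    # Map each intensity through the normalized CDF (standard equalization).
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
    return lut[patch].ravel()

rng = np.random.default_rng(2)
vec = preprocess(rng.integers(0, 256, (20, 20)))
print(vec.shape, vec.min(), vec.max())   # (400,) spread across [0, 255]
```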

13 Empirical Results
- SNoW with local features performs better than SVM with linear features
- SVM with a 2nd-order polynomial kernel performs better than SNoW with conjunctive features
[ROC curves comparing: SNoW with local features vs. SVM with linear features; SVM with a 2nd-order polynomial kernel vs. SNoW with conjunctive features]

14 Discussion
- Studies have shown that the target hyperplane in visual pattern recognition is usually sparse, i.e., both the L1 and L2 norms of w are usually small
- The Perceptron does not have any theoretical advantage over Winnow (or SNoW)
- In the experiments, the L2 norm of the data is on average 10.2 times larger than the L∞ norm
- The empirical results conform to the theoretical analysis
[ROC curves repeated from the previous slide]

15 Conclusion
- Theoretical and empirical arguments suggest that the SNoW-based learning framework has important advantages for visual learning tasks
- SVMs have nice computational properties for representing nonlinear features, thanks to the kernel trick
- Future work will focus on efficient methods (similar to kernel tricks) for representing nonlinear features within the SNoW-based learning framework

