1
776 Computer Vision Jan-Michael Frahm, Enrique Dunn Spring 2013 Face/Object detection

2
Previous Lecture: The Viola/Jones Face Detector. A seminal approach to real-time object detection: training is slow, but detection is very fast. Key ideas: integral images for fast feature evaluation; boosting for feature selection; attentional cascade for fast rejection of non-face windows. P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. CVPR 2001. P. Viola and M. Jones. Robust real-time face detection. IJCV 57(2), 2004.

3
Image Features: “rectangle filters”. Value = ∑ (pixels in white area) − ∑ (pixels in black area)

4
Computing the integral image. Cumulative row sum: s(x, y) = s(x−1, y) + i(x, y). Integral image: ii(x, y) = ii(x, y−1) + s(x, y). MATLAB: ii = cumsum(cumsum(double(i)), 2);
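A NumPy equivalent of the slide's MATLAB one-liner, plus the O(1) rectangle-sum lookup that makes the rectangle filters fast (a minimal sketch; the function names are our own):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums along both axes, as in the slide's recurrences."""
    return np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] in O(1) using four corner lookups."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

A rectangle filter's value is then just a few `rect_sum` calls (white area minus black area), independent of the rectangle's size.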

5
Boosting for face detection First two features selected by boosting: This feature combination can yield 100% detection rate and 50% false positive rate

6
The AdaBoost Algorithm
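To make the algorithm concrete, here is a minimal discrete-AdaBoost sketch using threshold stumps on a single 1-D feature (a stand-in for one rectangle-filter response; the function names and data are illustrative, not Viola–Jones's implementation):

```python
import numpy as np

def adaboost_train(X, y, n_rounds=10):
    """Discrete AdaBoost with threshold stumps.
    X: (n,) feature values; y: (n,) labels in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)              # example weights
    ensemble = []                        # (alpha, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        for thr in np.unique(X):         # exhaustive stump search
            for pol in (+1, -1):
                pred = pol * np.where(X >= thr, 1, -1)
                err = w[pred != y].sum() # weighted error
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weak-learner weight
        pred = pol * np.where(X >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, thr, pol))
    return ensemble

def adaboost_predict(ensemble, X):
    s = sum(a * p * np.where(X >= t, 1, -1) for a, t, p in ensemble)
    return np.sign(s)
```

In the face detector each round searches over all rectangle filters as well as all thresholds, which is why training is slow while detection stays fast.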

7
Attentional cascade: chain classifiers that are progressively more complex and have lower false positive rates. [Figure: receiver operating characteristic, % detection vs. % false positives, with the operating point determined by the threshold; cascade diagram: image sub-window → Classifier 1 → Classifier 2 → Classifier 3, where each T (accept) passes the window onward and each F (reject) immediately labels it non-face.]
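The cascade's control flow is simple enough to sketch directly. Below, each stage is a hypothetical (score function, threshold) pair; the stage functions are placeholders, not real face classifiers:

```python
def cascade_classify(window, stages):
    """Attentional cascade: a window must pass every stage to be declared
    a face; most non-face windows are rejected by the cheap early stages."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False        # rejected: non-face, stop immediately
    return True                 # survived all stages: face

# Hypothetical stages: a very cheap coarse test first, stricter ones later.
stages = [
    (lambda w: sum(w), 1.0),    # stage 1: cheapest
    (lambda w: max(w), 0.8),    # stage 2
    (lambda w: min(w), 0.2),    # stage 3: most expensive/strict
]
```

Because the overall detection rate is the product of the per-stage detection rates, each stage must keep nearly all faces while it discards a large fraction of non-faces.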

8
Boosting vs. SVM. Advantages of boosting: integrates classifier training with feature selection; complexity of training is linear instead of quadratic in the number of training examples; flexibility in the choice of weak learners and boosting scheme; testing is fast; easy to implement. Disadvantages: needs many training examples; training is slow; often doesn't work as well as SVM (especially for multi-class problems).

9
Face Recognition. N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar, "Attribute and Simile Classifiers for Face Verification," ICCV 2009. [Figure: example attribute images and simile images used for training.]

10
Face Recognition with Attributes. Pipeline: images → low-level features (RGB, HOG, LBP, SIFT, …) → attribute classifiers (male, round jaw, Asian, dark hair, …) → verification (same or different person).

11
Learning an attribute classifier (e.g., gender): training images (males vs. females) → low-level features (RGB, HoG, HSV, edges, … computed over face regions such as nose, eyes, hair, mouth) → feature selection → train classifier → gender classifier outputting a confidence (e.g., 0.87 for "male").

12
Using simile classifiers for verification Verification classifier

13
Eigenfaces

14
Principal Component Analysis. An N × N pixel image of a face, represented as a vector, occupies a single point in N²-dimensional image space. Since images of faces are similar in overall configuration, they are not randomly distributed in this huge image space, and can therefore be described by a low-dimensional subspace. Main idea of PCA for faces: find the vectors that best account for the variation of face images within the entire image space. These vectors are called eigenvectors. Construct a face space from them and project the images into this face space (eigenfaces).

15
Image Representation. A training set of M images of size N × N is represented by vectors of size N²: x₁, x₂, x₃, …, x_M.

16
Average Image and Difference Images. The average of the training set is defined by Ψ = (1/M) ∑ᵢ₌₁ᴹ xᵢ. Each face differs from the average by the vector rᵢ = xᵢ − Ψ.

17
Covariance Matrix. The covariance matrix is constructed as C = AAᵀ, where A = [r₁, …, r_M]. The size of this matrix is N² × N², and finding the eigenvectors of an N² × N² matrix is intractable. Hence, use the M × M matrix AᵀA instead and find the eigenvectors of this small matrix.
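This small-matrix trick can be verified numerically. A minimal sketch with random stand-in data (the dimensions here are illustrative, not real image sizes): eigenvectors of the M × M matrix AᵀA map to eigenvectors of the huge N² × N² covariance AAᵀ.

```python
import numpy as np

rng = np.random.default_rng(0)
N2, M = 100, 5                       # N^2 "pixels", M training faces (M << N^2)
A = rng.standard_normal((N2, M))     # columns play the role of difference images r_i

# Eigenvectors of the small M x M matrix A^T A ...
vals, V = np.linalg.eigh(A.T @ A)

# ... give eigenvectors of C = A A^T with the same eigenvalues:
# if (A^T A) v = lam v, then (A A^T)(A v) = lam (A v).
for lam, v in zip(vals, V.T):
    u = A @ v
    assert np.allclose((A @ A.T) @ u, lam * u)
```

Only M eigenpairs are recovered this way, but that is all there are: AAᵀ has rank at most M, so the remaining N² − M eigenvalues are zero.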

18
Eigenvalues and Eigenvectors. Definition: if v is a nonzero vector and λ is a number such that Av = λv, then v is said to be an eigenvector of A with eigenvalue λ.
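A tiny worked example of the definition, using a 2 × 2 symmetric matrix (the matrix is our own choice for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)   # eigenvalues of this matrix are 3 and 1
v = vecs[:, 0]                  # one eigenvector of A
assert np.allclose(A @ v, vals[0] * v)   # A v = lambda v
```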

19
Eigenvectors of the Covariance Matrix. Consider the eigenvectors vᵢ of AᵀA, such that AᵀA vᵢ = λᵢ vᵢ. Premultiplying both sides by A, we have AAᵀ (A vᵢ) = λᵢ (A vᵢ), so A vᵢ is an eigenvector of AAᵀ with the same eigenvalue λᵢ.

20
Face Space. The eigenvectors of the covariance matrix are uᵢ = A vᵢ. The uᵢ resemble ghostly facial images, hence the name eigenfaces. A face image can be projected into this face space by p_k = Uᵀ (x_k − Ψ), where k = 1, …, M.
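The whole eigenfaces pipeline up to this point fits in a few lines. A minimal sketch on random stand-in "images" (dimensions and data are illustrative; `K` is the number of eigenfaces kept):

```python
import numpy as np

rng = np.random.default_rng(1)
N2, M, K = 64, 6, 3
X = rng.standard_normal((N2, M))        # training images as columns
psi = X.mean(axis=1, keepdims=True)     # average face, Psi
A = X - psi                              # difference images r_i as columns

vals, V = np.linalg.eigh(A.T @ A)        # small-matrix trick
order = np.argsort(vals)[::-1][:K]       # keep the K largest eigenvalues
U = A @ V[:, order]                      # eigenfaces u_i = A v_i
U /= np.linalg.norm(U, axis=0)           # normalize each eigenface

x = X[:, [0]]                            # some face image
p = U.T @ (x - psi)                      # projection p = U^T (x - Psi)
x_hat = psi + U @ p                      # reconstruction from face space
```

Since the columns of U are orthonormal, `x_hat` is the closest point to `x` in the face space, and the reconstruction error can never exceed the distance of `x` from the average face.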

21
Projection into Face Space

22
EigenFaces Algorithm

25
EigenFaces for Detection

26
EigenFaces for Recognition

27
Limitations of the Eigenfaces Approach. Variations in lighting conditions: different lighting conditions for enrolment and query; bright light causing image saturation. Differences in pose: head orientation makes 2D feature distances appear distorted. Expression: changes in feature location and shape.

28
Linear Discriminant Analysis. PCA does not use class information: PCA projections are optimal for reconstruction from a low-dimensional basis, but they may not be optimal from a discrimination standpoint. LDA is an enhancement to PCA: it constructs a discriminant subspace that minimizes the scatter among images of the same class and maximizes the scatter between images of different classes.
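For the two-class case this reduces to Fisher's linear discriminant. A minimal sketch (our own helper name; Sw is the within-class scatter, and the discriminant direction is Sw⁻¹(μ₁ − μ₂)):

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class Fisher LDA direction w = Sw^{-1} (mu1 - mu2), which
    maximizes between-class scatter relative to within-class scatter.
    X1, X2: (n_i, d) arrays of samples for each class."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of per-class scatter matrices.
    Sw = np.cov(X1.T, bias=True) * len(X1) + np.cov(X2.T, bias=True) * len(X2)
    return np.linalg.solve(Sw, mu1 - mu2)
```

On data whose classes differ only along one axis, the recovered direction points along that axis, ignoring the uninformative one.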

29
More sliding window detection: discriminative part-based models. Many slides based on P. Felzenszwalb.

30
Challenge: Generic object detection

34
Pedestrian detection. Features: histograms of oriented gradients (HOG). Learn a pedestrian template using a linear support vector machine; at test time, convolve the feature map with the template. N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005. [Figure: template, HOG feature map, detector response map.]
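Two pieces of this pipeline can be sketched compactly: a single HOG cell (simplified: no block normalization or vote interpolation, both of which the real descriptor uses) and the sliding-window template scoring. Names are our own:

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Unsigned gradient-orientation histogram for one cell (simplified)."""
    gy, gx = np.gradient(patch.astype(np.float64))   # derivatives along rows, cols
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())       # magnitude-weighted votes
    return hist

def detect(feature_map, template):
    """Score every window position by its dot product with the template."""
    H, W = feature_map.shape[:2]
    h, w = template.shape[:2]
    scores = np.empty((H - h + 1, W - w + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = np.sum(feature_map[i:i + h, j:j + w] * template)
    return scores
```

In practice the loop over positions is a cross-correlation and is done with FFTs or optimized convolution; the brute-force version here is just to make the "convolve feature map with template" step explicit.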

36
Discriminative part-based models. P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan, Object Detection with Discriminatively Trained Part Based Models, PAMI 32(9), 2010. [Figure: root filter, part filters, deformation weights.]

37
Discriminative part-based models

38
Object hypothesis Multiscale model: the resolution of part filters is twice the resolution of the root

39
Scoring an object hypothesis. The score of a hypothesis is the sum of filter scores minus the sum of deformation costs: score(p₀, …, pₙ) = ∑ᵢ Fᵢ · φ(H, pᵢ) − ∑ᵢ dᵢ · (dxᵢ, dyᵢ, dxᵢ², dyᵢ²), where the Fᵢ are filters, the φ(H, pᵢ) are subwindow features, the dᵢ are deformation weights, and the (dxᵢ, dyᵢ) are displacements.

40
Scoring an object hypothesis. The score of a hypothesis is the sum of filter scores minus the sum of deformation costs. Recall pictorial structures: the filter scores play the role of the matching cost, and the deformation costs play the role of the deformation cost, with the same filters, subwindow features, deformation weights, and displacements.

41
Scoring an object hypothesis. The score of a hypothesis is the sum of filter scores minus the sum of deformation costs. It can be written as a single dot product w · Φ, where w is the concatenation of the filter and deformation weights and Φ is the concatenation of the subwindow features and displacement features.
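The equivalence between the two forms of the score is easy to check numerically. A minimal sketch with a hypothetical tiny model (root plus two parts, 4-D appearance features, and the (dx, dy, dx², dy²) deformation features; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
filters = [rng.standard_normal(4) for _ in range(3)]         # F_0 (root), F_1, F_2
defs = [np.array([0.1, 0.1, 0.5, 0.5]) for _ in range(2)]    # d_1, d_2

feats = [rng.standard_normal(4) for _ in range(3)]           # phi(H, p_i)
disps = [np.array([1.0, -1.0]), np.array([0.0, 2.0])]        # (dx_i, dy_i)

def def_feats(dx, dy):
    return np.array([dx, dy, dx * dx, dy * dy])

# Sum of filter scores minus sum of deformation costs:
score = sum(f @ p for f, p in zip(filters, feats)) \
      - sum(d @ def_feats(*dd) for d, dd in zip(defs, disps))

# The same score as a single dot product w . Phi over concatenated vectors
# (deformation features enter Phi negated, so the costs subtract):
w = np.concatenate(filters + defs)
Phi = np.concatenate(feats + [-def_feats(*dd) for dd in disps])
assert np.isclose(score, w @ Phi)
```

This linear-in-w form is exactly what makes SVM-style training of the whole model possible.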

42
Detection Define the score of each root filter location as the score given the best part placements:

43
Detection. Define the score of each root filter location as the score given the best part placements. Efficient computation: generalized distance transforms. For each “default” part location, find the best-scoring displacement. [Figure: head filter, head filter responses, distance transform.]
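What the generalized distance transform computes can be shown with a brute-force 1-D version (the real algorithm gets the same result in O(n) via a lower-envelope computation; this O(n²) sketch is only to make the definition concrete):

```python
import numpy as np

def distance_transform_1d(scores, a=1.0, b=0.0):
    """Brute-force generalized distance transform:
    out[p] = max_q scores[q] - a*(p - q)^2 - b*(p - q),
    i.e., the best part response q for each default location p,
    penalized quadratically by its displacement."""
    n = len(scores)
    out = np.empty(n)
    best = np.empty(n, dtype=int)
    q = np.arange(n)
    for p in range(n):
        costs = scores - a * (p - q) ** 2 - b * (p - q)
        best[p] = np.argmax(costs)
        out[p] = costs[best[p]]
    return out, best
```

A strong filter response "spreads" to nearby default locations with a quadratic falloff, which is exactly how a part can be displaced from its anchor at a cost.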

44
Detection

45
Matching result

46
Training Training data consists of images with labeled bounding boxes Need to learn the filters and deformation parameters

47
Training. Our classifier has the form f_w(x) = max_z w · Φ(x, z), where w are the model parameters and z are latent hypotheses. Latent SVM training: initialize w and iterate: (1) fix w and find the best z for each training example (detection); (2) fix z and solve for w (standard SVM training). Issue: too many negative examples. Solution: do “data mining” to find “hard” negatives.
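The alternation above can be sketched as a toy coordinate-descent loop. This is an illustrative simplification, not the paper's solver: in particular, the full algorithm keeps re-maximizing z for negatives inside the loss and mines hard negatives, while here z is re-picked once per outer iteration for both classes. All names and the toy data are our own:

```python
import numpy as np

def latent_svm_train(examples, phi, latent_space, dim,
                     n_outer=5, n_sgd=200, C=1.0, lr=0.01):
    """Latent SVM sketch. examples: list of (x, y) with y in {-1, +1};
    phi(x, z): joint feature vector; latent_space(x): candidate z values."""
    w = np.zeros(dim)
    for _ in range(n_outer):
        # Step 1 (detection): fix w, pick the best-scoring z per example.
        z_star = [max(latent_space(x), key=lambda z: float(w @ phi(x, z)))
                  for x, _ in examples]
        # Step 2: fix z, subgradient steps on the regularized hinge loss.
        for _ in range(n_sgd):
            for (x, y), z in zip(examples, z_star):
                w -= lr * w / len(examples)          # regularizer step
                if y * (w @ phi(x, z)) < 1:          # margin violated
                    w += lr * C * y * phi(x, z)
    return w

def latent_svm_score(w, phi, latent_space, x):
    return max(float(w @ phi(x, z)) for z in latent_space(x))

# Toy usage (hypothetical data): x is a list of candidate "part placements",
# z indexes which placement is used.
phi = lambda x, z: np.asarray(x[z], dtype=float)
latent_space = lambda x: range(len(x))
pos = ([(1.0, 0.0), (0.0, 0.2)], +1)
neg = ([(-1.0, 0.0), (-0.5, -0.5)], -1)
w = latent_svm_train([pos, neg], phi, latent_space, dim=2)
```

The key property exploited here is that f_w(x) is a maximum of linear functions of w, so once z is fixed for the positives, the objective becomes a standard (convex) SVM problem.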

48
Car model Component 1 Component 2

49
Car detections

50
Person model

51
Person detections

52
Cat model

53
Cat detections

54
Bottle model

55
More detections

56
Quantitative results (PASCAL 2008). Seven systems competed in the 2008 challenge. Out of 20 classes, the proposed approach placed first in 7 classes and second in 8 classes. [Figure: example detections for bicycles, person, and bird.]

57
Summary
