
**776 Computer Vision: Face/Object Detection**

Jan-Michael Frahm, Enrique Dunn, Spring 2013

**Previous Lecture: The Viola/Jones Face Detector**

A seminal approach to real-time object detection: training is slow, but detection is very fast. Key ideas:
- Integral images for fast feature evaluation
- Boosting for feature selection
- Attentional cascade for fast rejection of non-face windows

P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. CVPR 2001.
P. Viola and M. Jones. Robust real-time face detection. IJCV 57(2), 2004.

**Image Features: “Rectangle filters”**

Value = ∑(pixels in white area) − ∑(pixels in black area)

- For real problems, results are only as good as the features used; this is the main piece of ad-hoc (domain) knowledge.
- Rather than raw pixels, we select from a very large set of simple functions, sensitive to edges and other critical features of the image, at multiple scales.
- Since the final classifier is a perceptron, it is important that the features be non-linear; otherwise the combined classifier remains a simple perceptron. We therefore introduce a threshold to yield binary features.

**Computing the integral image**

Cumulative row sum: s(x, y) = s(x−1, y) + i(x, y)
Integral image: ii(x, y) = ii(x, y−1) + s(x, y)
MATLAB: `ii = cumsum(cumsum(double(i)), 2);`
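The two recurrences above can be sketched in NumPy (a minimal illustration; the function names and inclusive-coordinate convention are my own):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of all pixels above and to the left, inclusive.
    Equivalent to the MATLAB one-liner cumsum(cumsum(double(i)), 2)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum over the inclusive window [y0..y1, x0..x1] in 4 array lookups,
    which is what makes rectangle-filter evaluation constant-time."""
    s = ii[y1, x1]
    if y0 > 0:
        s -= ii[y0 - 1, x1]
    if x0 > 0:
        s -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s
```

A two-rectangle feature is then simply `box_sum` over the white rectangle minus `box_sum` over the black one.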

**Boosting for face detection**

The first two features selected by boosting: this feature combination alone can yield a 100% detection rate with a 50% false positive rate.

**The AdaBoost Algorithm**
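AdaBoost can be sketched with decision stumps as weak learners (a minimal version; the interface, i.e. passing a list of candidate classifiers `stumps`, is my own assumption):

```python
import numpy as np

def adaboost(X, y, stumps, T):
    """AdaBoost with weak learners chosen from `stumps`.
    X: (n, d) features; y: labels in {-1, +1};
    stumps: list of candidate weak classifiers h(X) -> {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                     # example weights
    picked = []
    for _ in range(T):
        # pick the stump with the lowest weighted error
        errs = [(w * (h(X) != y)).sum() for h in stumps]
        j = int(np.argmin(errs))
        err = max(errs[j], 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # classifier weight
        w *= np.exp(-alpha * y * stumps[j](X))  # reweight examples
        w /= w.sum()
        picked.append((alpha, stumps[j]))
    def strong(Xq):  # sign of the weighted vote
        return np.sign(sum(a * h(Xq) for a, h in picked))
    return strong
```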

**Attentional Cascade**

- Chain classifiers that are progressively more complex and have lower false positive rates, as characterized by their receiver operating characteristic (detection rate vs. false positive rate) curves.
- In general, simple classifiers are more efficient but also weaker. We could define a computational risk hierarchy (in analogy with structural risk minimization): a nested set of classifier classes.
- The training process is reminiscent of boosting: previous classifiers reweight the examples used to train subsequent classifiers.
- The goal of the training process is different, however: instead of minimizing errors, minimize false positives.
- An image sub-window passes through Classifier 1, 2, 3, …; any rejection immediately labels it NON-FACE, and only windows accepted by every stage are labeled FACE.
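The cascade’s control flow is simple enough to sketch directly (hypothetical interface: each stage is a scoring function plus a threshold tuned for a very high detection rate):

```python
def cascade_classify(window, stages):
    """Attentional cascade: `stages` is a list of (score_fn, threshold)
    pairs, ordered from cheapest to most complex. A window must pass
    every stage to be declared a face; any rejection stops early."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False   # NON-FACE: rejected by this stage
    return True            # FACE: accepted by all stages
```

The early exits are what make the cascade fast: most non-face windows are rejected by the first cheap stages, so the expensive stages run on very few windows.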

**Boosting vs. SVM**

Advantages of boosting:
- Integrates classifier training with feature selection
- Complexity of training is linear rather than quadratic in the number of training examples
- Flexibility in the choice of weak learners and boosting scheme
- Testing is fast
- Easy to implement

Disadvantages:
- Needs many training examples
- Training is slow
- Often doesn’t work as well as SVM (especially for many-class problems)

**Face Recognition: Attributes and Similes for Training**

N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar, "Attribute and Simile Classifiers for Face Verification," ICCV 2009.

**Face Recognition with Attributes**

Pipeline: images → low-level features (RGB, HOG, LBP, SIFT, …) → attribute classifiers (e.g., dark hair, Asian, round jaw, male) → verification (same vs. different person).

**Learning an attribute classifier**

Training images (e.g., males vs. females) → low-level features (RGB, HoG, HSV, …) → feature selection (e.g., RGB on the nose, HoG on the eyes, HSV on the hair, edges on the mouth) → train the classifier (e.g., a gender classifier outputting a confidence such as “Male 0.87”).

**Using simile classifiers for verification**

**Eigenfaces**

**Principal Component Analysis**

- An N × N pixel face image, represented as a vector, occupies a single point in N²-dimensional image space.
- Because face images are similar in overall configuration, they are not randomly distributed in this huge image space and can therefore be described by a low-dimensional subspace.
- Main idea of PCA for faces: find the vectors that best account for the variation of face images in the entire image space. These vectors are called eigenvectors.
- Construct a face space and project the images into it; its basis vectors are the eigenfaces.

**Image Representation**

A training set of m images of size N × N is represented by vectors of size N²: x_1, x_2, …, x_m.

**Average Image and Difference Images**

The average of the training set is μ = (1/m) ∑_{i=1}^{m} x_i. Each face differs from the average by the vector r_i = x_i − μ.

**Covariance Matrix**

The covariance matrix is constructed as C = AAᵀ, where A = [r_1, …, r_m]. Since C is of size N² × N², finding its eigenvectors directly is intractable. Hence, use the m × m matrix AᵀA and find the eigenvectors of this small matrix instead.

**Eigenvalues and Eigenvectors**

If v is a nonzero vector and λ is a number such that Av = λv, then v is said to be an eigenvector of A with eigenvalue λ.

**Eigenvectors of the Covariance Matrix**

Consider the eigenvectors v_i of AᵀA, satisfying AᵀA v_i = λ_i v_i. Premultiplying both sides by A gives AAᵀ(A v_i) = λ_i (A v_i), so each A v_i is an eigenvector of C = AAᵀ with the same eigenvalue λ_i.

**Face Space**

The eigenvectors of the covariance matrix are u_i = A v_i. These u_i resemble ghostly facial images, hence the name eigenfaces. A face image x_k can be projected into the face space by p_k = Uᵀ(x_k − μ), where U = [u_1, …, u_m] and k = 1, …, m.
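Putting the last few slides together, a minimal NumPy sketch of the eigenfaces computation (the function names and the use of `eigh` on the small matrix are my own choices):

```python
import numpy as np

def eigenfaces(X, k):
    """Compute k eigenfaces from training images.
    X: (m, N*N) array, one flattened face image per row.
    Uses the A^T A trick: eigenvectors of the small m x m matrix."""
    mu = X.mean(axis=0)
    A = (X - mu).T                      # N^2 x m matrix of difference images
    small = A.T @ A                     # m x m instead of N^2 x N^2
    vals, V = np.linalg.eigh(small)     # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]  # keep the top-k eigenvectors
    U = A @ V[:, order]                 # u_i = A v_i
    U /= np.linalg.norm(U, axis=0)      # normalize each eigenface
    return mu, U

def project(x, mu, U):
    """Project a face into face space: p = U^T (x - mu)."""
    return U.T @ (x - mu)
```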

**Projection into Face Space**

**EigenFaces Algorithm**

**EigenFaces for Detection**

**EigenFaces for Recognition**

**Limitations of the Eigenfaces Approach**

- Variations in lighting conditions: different lighting for enrolment and query; bright light causing image saturation.
- Differences in pose: head orientation; 2D feature distances appear to distort.
- Expression: changes in feature location and shape.

**Linear Discriminant Analysis**

- PCA does not use class information: PCA projections are optimal for reconstruction from a low-dimensional basis, but they may not be optimal from a discrimination standpoint.
- LDA is an enhancement to PCA: it constructs a discriminant subspace that minimizes the scatter between images of the same class and maximizes the scatter between images of different classes.
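For the two-class case the discriminant direction has a closed form, w ∝ S_w⁻¹(μ₁ − μ₀); a minimal sketch (for face images one would first reduce dimensionality with PCA so that S_w is invertible, which is the Fisherfaces recipe):

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-class Fisher linear discriminant.
    Returns the unit direction w maximizing between-class scatter
    relative to within-class scatter: w proportional to Sw^-1 (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter matrix (sum over both classes)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)
```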

**More sliding window detection: Discriminative part-based models**

Many slides based on P. Felzenszwalb

**Challenge: Generic object detection**
**Pedestrian detection**

- Features: histograms of oriented gradients (HOG).
- Learn a pedestrian template using a linear support vector machine.
- At test time, convolve the HOG feature map with the template to obtain a detector response map.

N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR 2005.
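“Convolve the feature map with the template” amounts to a dot product per window; a brute-force sketch (in practice this is done with optimized cross-correlation):

```python
import numpy as np

def detect(feature_map, template):
    """Slide a learned template over a feature map and return the
    detector response at every valid location (dot product per window)."""
    H, W, D = feature_map.shape
    h, w, _ = template.shape
    out = np.empty((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            window = feature_map[y:y + h, x:x + w]
            out[y, x] = (window * template).sum()
    return out
```

Thresholding the response map (plus non-maximum suppression) yields detections.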

**Discriminative part-based models**

The model consists of a root filter, several part filters, and deformation weights for each part.

P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan, Object Detection with Discriminatively Trained Part Based Models, PAMI 32(9), 2010.

**Object hypothesis**

Multiscale model: the resolution of the part filters is twice the resolution of the root filter.

**Scoring an object hypothesis**

The score of a hypothesis z = (p_0, …, p_n) is the sum of filter scores minus the sum of deformation costs (recall pictorial structures, where these play the roles of matching cost and deformation cost):

score(p_0, …, p_n) = ∑_{i=0}^{n} F_i · φ(H, p_i) − ∑_{i=1}^{n} d_i · (dx_i, dy_i, dx_i², dy_i²)

Here the F_i are the filters, φ(H, p_i) are the subwindow features at placement p_i, the d_i are the deformation weights, and (dx_i, dy_i) are the displacements of the parts from their default locations. The whole score is a single dot product between the concatenation of the filter and deformation weights and the concatenation of the subwindow features and displacements.
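The score can be sketched directly (interface assumptions mine; `filters`/`features` include the root at index 0, while `defs`/`displacements` cover only the parts):

```python
import numpy as np

def hypothesis_score(filters, features, defs, displacements):
    """DPM hypothesis score: sum of filter responses minus the sum of
    deformation costs d_i . (dx, dy, dx^2, dy^2).
    filters[i], features[i]: flat arrays; defs[i]: 4-vector;
    displacements[i]: (dx, dy) displacement of part i from its anchor."""
    score = sum(np.dot(f, phi) for f, phi in zip(filters, features))
    for d, (dx, dy) in zip(defs, displacements):
        score -= np.dot(d, np.array([dx, dy, dx * dx, dy * dy]))
    return score
```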

**Detection**

Define the score of each root filter location as the score given the best part placements. Efficient computation: generalized distance transforms. For each “default” part location, find the best-scoring displacement (e.g., computing the distance transform of the head filter responses).
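In one dimension, the quantity computed for each default location p is max over q of R(q) − d·(p−q)²; a brute-force O(n²) sketch (Felzenszwalb’s generalized distance transform yields the same array in O(n); the deformation cost here is simplified to a pure quadratic):

```python
import numpy as np

def best_displacement_scores(responses, d):
    """For each default part location p, find the best-scoring displaced
    location: D(p) = max_q [ responses[q] - d * (p - q)^2 ]."""
    n = len(responses)
    q = np.arange(n)
    out = np.empty(n)
    for p in range(n):
        out[p] = np.max(responses - d * (p - q) ** 2)
    return out
```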

Matching result

**Training**

Training data consists of images with labeled bounding boxes. We need to learn the filters and the deformation parameters.

**Training: Latent SVM**

Our classifier has the form f(x) = max_z w · Φ(x, z), where w are the model parameters and z are the latent hypotheses (part placements). Latent SVM training: initialize w and iterate:
- Fix w and find the best z for each training example (detection).
- Fix z and solve for w (standard SVM training).
Issue: there are far too many negative examples, so do “data mining” to find “hard” negatives.
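The alternation can be sketched as a loop (all helper functions here, `detect_best_z`, `train_svm`, and `mine_hard_negatives`, are hypothetical stand-ins for the real detection and SVM components):

```python
def latent_svm_train(examples, w0, detect_best_z, train_svm,
                     mine_hard_negatives, iters=5):
    """Latent SVM alternation (sketch under the stated assumptions).
    examples: list of (image, label) pairs."""
    w = w0
    for _ in range(iters):
        # Step 1: fix w, find the best latent hypothesis z per example
        latent = [(x, y, detect_best_z(w, x)) for x, y in examples]
        # Step 2: fix z, solve the resulting convex SVM problem for w
        w = train_svm(latent)
        # Keep the negative set tractable: retain only hard negatives
        examples = mine_hard_negatives(w, examples)
    return w
```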

Car model (components 1 and 2)

Car detections

Person model

Person detections

Cat model

Cat detections

Bottle model

More detections

**Quantitative results (PASCAL 2008)**

7 systems competed in the 2008 challenge. Out of 20 classes, the proposed approach took first place in 7 classes and second place in 8 (example precision–recall curves shown for bicycles, person, and bird).

Summary
