
1 Terrorists
Team members: Ágnes Bartha, György Kovács, Imre Hajagos, Wojciech Zyla

2 The Project
What is our goal?
Who are they?
How to start?
How to recognize a face?
– face detection
– facial feature detection
– eye, mouth and nose related search

3 What is our goal?
Find out whether someone is a terrorist.
Try to identify them even if they are disguised.
We have a problem...

4 Who are they?
They are the ones who
– blow up cars and buildings
– kill people
– try to take control
Enough reason to do something.

5 How to start?
Database
– images of terrorists
– training images for identification (by computer)
Take a picture of a suspicious person.
Write a program that decides whether someone is a terrorist.

6 How to recognize a face?
Problems
– disguised person
– other: rotated head, glasses
Use some algorithms
– PCA
– LDA
OpenCV
– Haar object detection
– AdaBoost

7 PCA – Principal Component Analysis
Reduces the dimensionality of the data while retaining as much as possible of the variation present in the original dataset; this implies some information loss.
The best low-dimensional space is spanned by the "best" eigenvectors of the covariance matrix (the eigenvectors corresponding to the largest eigenvalues, also called principal components).
PCA projects the data along the directions where the data varies the most.
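The PCA step described above can be sketched as follows. This is a minimal illustration, not the presentation's actual code; for eigenfaces, each row of X would be a flattened face image.

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean                           # center the data
    cov = np.cov(Xc, rowvar=False)          # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]       # largest eigenvalues first
    components = eigvecs[:, order[:k]]      # the "best" eigenvectors
    return Xc @ components, components, mean
```

Projecting onto the top-k eigenvectors keeps the directions of greatest variance while discarding the rest, which is exactly the information loss the slide mentions.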

8

9 Problems of the Eigenface technique
Sensitive to rotation, scale and translation.
Sensitive to lighting variations.
Background interference.
Face images should be preprocessed to lessen the effects of possible variations.
Variations such as lighting and rotation can also be taken into account during training: the training dataset may include samples with such variations.

10 LDA – Linear Discriminant Analysis
The objective of LDA is to perform dimensionality reduction while preserving as much of the class-discriminatory information as possible.
It seeks directions along which the classes are best separated, taking into account the scatter within classes as well as the scatter between classes.
It is therefore better at distinguishing image variation due to person identity from variation due to other sources such as illumination and expression.
Notation: μ_r is the mean feature vector for class r, K_r the number of training samples from class r, μ the overall mean.
LDA computes a transformation T that maximizes the between-class scatter while minimizing the within-class scatter:
  S_w = Σ_r Σ_{x in class r} (x − μ_r)(x − μ_r)ᵀ
  S_b = Σ_r K_r (μ_r − μ)(μ_r − μ)ᵀ
  T = arg max |Tᵀ S_b T| / |Tᵀ S_w T|
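A minimal sketch of the scatter-matrix computation described above, assuming numpy; this is an illustration, not the presentation's code. With R classes, at most R−1 of the eigenvalues are nonzero, so k should not exceed R−1.

```python
import numpy as np

def lda(X, y, k):
    """Fisher LDA: project onto k directions maximizing between-class scatter."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)
        diff = (mu_c - mu).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # directions = eigenvectors of Sw^{-1} Sb with the largest eigenvalues
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:k]].real
    return X @ W
```

Note that this inverts S_w directly; as the next slide points out, S_w can be singular in practice, which is why PCA is usually applied first.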

11 LDA (2)
Limitations:
– at most R−1 nonzero eigenvalues (R is the number of classes)
– the matrix S_w⁻¹ does not always exist: we need at least N + R training samples, which is often not practical
Use PCA first to reduce the dimension.
When the number of training samples is large and representative for each class, LDA outperforms PCA.

12 OpenCV – Open Source Computer Vision Library
Extensive vision support
– convolution, thresholding, flood fills, histograms
– pyramidal subsampling
Learning-based vision
Feature detection
– edge detection
– blob finders, ...
– Haar cascade classifier

13 IplImage

14 OpenCV – Haar
OpenCV has a Haar-feature-based face detection module.
It uses local features such as edges and line patterns.
It scans a given image at different scales, as in template matching.
It is scale, translation and light invariant; however, it is sensitive to rotation.
– Rotate the image and run again.

15 Advantages of using OpenCV Haar object detection
A face detector is already implemented; its only argument is an XML file.
Detection at any scale.
Face detection (for video) at 15 frames per second on 384×288-pixel images.
90% of objects detected, achievable with about 2 weeks of training.

16 Haar-Like Features
Each Haar-like feature consists of two or three adjacent "black" and "white" rectangles.
The value of a Haar-like feature is the difference between the sums of the pixel gray-level values within the black and white rectangular regions:
  f(x) = Σ_black rectangle (pixel gray level) − Σ_white rectangle (pixel gray level)
Compared with raw pixel values, Haar-like features reduce the in-class variability and increase the out-of-class variability, making classification easier.
Figure 1: a set of basic Haar-like features. Figure 2: a set of extended Haar-like features.
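The feature value f(x) above can be illustrated for a simple two-rectangle (edge) feature; the function name and the choice of a vertical split are for illustration only.

```python
import numpy as np

def haar_two_rect_vertical(img, x, y, w, h):
    """Two-rectangle Haar-like feature at top-left (x, y) of size w x h:
    the left half is the 'black' rectangle, the right half the 'white' one."""
    half = w // 2
    black = img[y:y + h, x:x + half].sum()    # sum over the black rectangle
    white = img[y:y + h, x + half:x + w].sum()  # sum over the white rectangle
    return int(black) - int(white)            # f(x) = sum_black - sum_white
```

The response is large at vertical edges (black and white halves differ) and zero on uniform regions, which is why such features capture edge and line patterns.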

17 Haar-Like Features
Rectangular Haar-like features can be computed rapidly using an "integral image".
The integral image at location (x, y) contains the sum of the pixel values above and to the left of (x, y), inclusive:
  ii(x, y) = Σ_{x' ≤ x, y' ≤ y} i(x', y')
With the integral image, Haar features are computed in constant time. For a rectangle D whose corner values in the integral image are P1 (top-left), P2 (top-right), P3 (bottom-left) and P4 (bottom-right):
  sum(D) = P4 − P2 − P3 + P1
(Figure: four adjacent rectangles A, B, C, D with corner points P1–P4.)
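The integral image and the four-lookup rectangle sum above can be sketched in a few lines; a zero row and column are prepended so the corner lookups need no bounds checks.

```python
def integral_image(img):
    """ii[y][x] = sum of img pixels above and to the left of (x, y), inclusive,
    with an extra zero row/column prepended for easy corner lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w x h rectangle with top-left (x, y):
    P4 - P2 - P3 + P1, four lookups regardless of rectangle size."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]
```

Since every rectangle sum costs four array reads, any two- or three-rectangle Haar feature is evaluated in constant time, exactly as the slide claims.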

18 AdaBoost classifier
Selects a small number of critical visual features.
Combines a collection of weak classification functions to form a strong classifier.
(Figure: the first and second features selected by AdaBoost for face detection.)

19 Haar-Like Features (cont'd)
For example, to detect a hand, the image is scanned by a sub-window containing a Haar-like feature. Based on each Haar-like feature f_j, a weak classifier h_j(x) is defined as:
  h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and 0 otherwise
where x is a sub-window, θ_j is a threshold and p_j is a parity indicating the direction of the inequality sign.
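The weak classifier h_j(x) above translates almost literally into code; this is a sketch, with the parity flipping the direction of the inequality as described.

```python
def weak_classifier(feature_value, theta, parity):
    """Viola-Jones weak classifier h_j(x): returns 1 (object present)
    if parity * f_j(x) < parity * theta, and 0 otherwise."""
    return 1 if parity * feature_value < parity * theta else 0
```

With parity +1 the classifier fires when the feature value is below the threshold; with parity −1, when it is above.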

20 AdaBoost
The computational cost of using Haar-like features:
Example: original image size 320×240, sub-window size 24×24. The total number of sub-window positions for one Haar-like feature: (320−24)×(240−24) = 63,936.
Considering the scaling factor and the total number of Haar-like features, the computational cost is huge.
AdaBoost (Adaptive Boosting) is an iterative learning algorithm that constructs a "strong" classifier using only a training set and a "weak" learning algorithm. At each iteration, the weak classifier with the minimum classification error is selected. AdaBoost is adaptive in the sense that later classifiers are tuned in favor of the sub-windows misclassified by previous classifiers.

21 AdaBoost
The algorithm:

22 AdaBoost
– AdaBoost starts with a uniform distribution of "weights" over the training examples. The weights tell the learning algorithm the importance of each example.
– Obtain a weak classifier h_j(x) from the weak learning algorithm.
– Increase the weights on the training examples that were misclassified.
– Repeat.
– At the end, carefully form a linear combination of the weak classifiers obtained at all iterations.
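The steps above can be sketched as a toy AdaBoost on one-dimensional feature values with threshold stumps as the weak learners; real Haar features only change where `feature_value` comes from. This is an illustration, not the presentation's code.

```python
import math

def train_stump(xs, ys, weights):
    """Weak learner: pick the threshold/parity stump with minimum weighted error."""
    best = None
    for theta in xs:
        for parity in (1, -1):
            preds = [1 if parity * x < parity * theta else 0 for x in xs]
            err = sum(w for p, y, w in zip(preds, ys, weights) if p != y)
            if best is None or err < best[0]:
                best = (err, theta, parity)
    return best

def adaboost(xs, ys, rounds):
    """Combine weighted stumps into a strong classifier (labels are 0/1)."""
    n = len(xs)
    weights = [1.0 / n] * n                 # start with a uniform distribution
    stumps = []
    for _ in range(rounds):
        err, theta, parity = train_stump(xs, ys, weights)
        err = max(err, 1e-10)               # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((alpha, theta, parity))
        # raise the weights of misclassified examples, lower the rest
        for i, (x, y) in enumerate(zip(xs, ys)):
            pred = 1 if parity * x < parity * theta else 0
            weights[i] *= math.exp(alpha if pred != y else -alpha)
        total = sum(weights)
        weights = [w / total for w in weights]
    def strong(x):
        score = sum(a for a, th, p in stumps if (1 if p * x < p * th else 0) == 1)
        return 1 if score >= 0.5 * sum(a for a, _, _ in stumps) else 0
    return strong
```

The reweighting step is the "adaptive" part: each new stump is forced to concentrate on the examples the previous ones got wrong, and the final decision is the weighted vote over all stumps.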

23

24 Simple to implement, but:
– suboptimal solution
– can overfit in the presence of noise

25 The Cascade of Classifiers
A series of classifiers is applied to every sub-window; this increases speed.
The first classifier eliminates a large number of negative sub-windows and passes almost all positive sub-windows (high false-positive rate) with very little processing.
Subsequent layers eliminate additional negative sub-windows (those passed by the first classifier) but require more computation.
After several stages of processing, the number of negative sub-windows has been reduced radically.
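The cascade described above is just early-exit chaining of stage classifiers; this sketch uses arbitrary predicate functions as stand-ins for the boosted stages.

```python
def make_cascade(stages):
    """Chain stage classifiers: a sub-window must pass every stage to be
    accepted; any stage may reject it early, so most negatives exit cheaply."""
    def cascade(window):
        for stage in stages:
            if not stage(window):
                return False   # rejected: later, costlier stages never run
        return True            # survived all stages: report a detection
    return cascade
```

Because the cheap first stages discard most negative windows, the expensive later stages run on only a small fraction of the image, which is where the speed-up comes from.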

26

27 The Cascade of Classifiers
Negative samples: non-object images, taken from arbitrary images that must not contain any representation of the object.
Positive samples: images containing the object (hands in this example). The object in each positive sample must be marked out for classifier training.

28 (Pipeline diagram) image → detecting face → cropping face → detecting features → normalizing → creating feature vector (e.g. 0 0 1 ... 0 1 0) → comparing vectors against the database of terrorists → result: is / is not in the database. (Image sizes noted in the diagram: 256×256, 144×150, 90×130.)

29 Eye detection with Haar
– load the eye Haar cascade classifier
– create a growable sequence for the detected eyes
– detect the objects
– store them in the sequence
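A hypothetical sketch of these steps with OpenCV's Python API (the slides use the older C API with CvSeq); the cascade file name and image path are placeholders, not from the presentation.

```python
import cv2

def detect_eyes(image_path, cascade_path="haarcascade_eye.xml"):
    """Load the eye cascade, run detection, and return the detections."""
    cascade = cv2.CascadeClassifier(cascade_path)  # load the eye classifier
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # Haar detection runs on grayscale
    # returns a growable list of (x, y, w, h) rectangles, one per detected eye
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return list(eyes)
```

`detectMultiScale` performs the multi-scale scan described on the earlier OpenCV–Haar slide; `minNeighbors` controls how many overlapping detections are required before an eye is reported.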

30

31 Thank you for your attention

