Window-based models for generic object detection Mei-Chen Yeh 04/24/2012.


1 Window-based models for generic object detection Mei-Chen Yeh 04/24/2012

2 Object Detection Find the location of an object if it appears in an image –Does the object appear? –Where is it?

3

4 Viola-Jones face detector P. Viola and M. J. Jones. Robust Real-Time Face Detection. IJCV 2004.

5 Viola-Jones Face Detector: Results

6

7

8 Viola-Jones Face Detector: Results (Paul Viola, ICCV tutorial)

9 A successful application

10

11 Consumer application: iPhoto 2009 http://www.apple.com/ilife/iphoto/ Slide credit: Lana Lazebnik

12 Consumer application: iPhoto 2009 Things iPhoto thinks are faces Slide credit: Lana Lazebnik

13 Consumer application: iPhoto 2009 Can be trained to recognize pets! http://www.maclife.com/article/news/iphotos_faces_recognizes_cats Slide credit: Lana Lazebnik

14 Challenges: viewpoint variation, illumination, occlusion, scale (example images: Michelangelo, 1475-1564; Magritte, 1957). Slide credit: Fei-Fei Li

15 Challenges: deformation, background clutter (example images: Xu Beihong, 1943; Klimt, 1913). Slide credit: Fei-Fei Li

16 Basic framework Build/train object model –Choose a representation –Learn or fit parameters of model / classifier Generate candidates in new image Score the candidates

17 Basic framework Build/train object model –Choose a representation –Learn or fit parameters of model / classifier Generate candidates in new image Score the candidates

18 Window-based models: building an object model. Given the representation, train a binary face/non-face classifier that labels each window "Yes, face" or "No, not a face."

19 Basic framework Build/train object model –Choose a representation –Learn or fit parameters of model / classifier Generate candidates in new image Score the candidates

20 Window-based models: generate and score candidates. Scan the face/non-face classifier over the new image at multiple locations and scales.

21 Window-based object detection Training: 1. Obtain training examples 2. Define features (feature extraction) 3. Define and train the classifier (e.g., car/non-car) Given a new image: 1. Slide a window over the image 2. Score each window with the classifier (see the sketch below)
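
To make the test-time loop concrete, here is a minimal sketch of sliding-window detection in Python. The `score_window` function is a hypothetical stand-in for any trained window classifier (face/non-face, car/non-car); the window size, step, and scales are illustrative choices, and the image is assumed to be a grayscale NumPy array.

```python
import numpy as np

def detect(image, score_window, window=(24, 24), step=4,
           scales=(1.0, 1.5, 2.0), threshold=0.0):
    """Slide a fixed-size window over the image at several scales and keep
    every location whose classifier score exceeds the threshold."""
    detections = []
    wh, ww = window
    for s in scales:
        # Crude nearest-neighbor downscaling so larger objects fit the fixed window.
        ys = (np.arange(int(image.shape[0] / s)) * s).astype(int)
        xs = (np.arange(int(image.shape[1] / s)) * s).astype(int)
        scaled = image[np.ix_(ys, xs)]
        for y in range(0, scaled.shape[0] - wh + 1, step):
            for x in range(0, scaled.shape[1] - ww + 1, step):
                patch = scaled[y:y + wh, x:x + ww]
                if score_window(patch) > threshold:
                    # Map the window back to original-image coordinates.
                    detections.append((int(x * s), int(y * s),
                                       int(ww * s), int(wh * s)))
    return detections
```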

22 Viola-Jones detection approach Viola and Jones’ face detection algorithm –The first object detection framework to provide competitive object detection rates in real-time –Implemented in OpenCV Components –Features Haar-features Integral image –Learning Boosting algorithm –Cascade method

23 Haar-features (1) The difference between the sums of pixel values in the white and the black areas

24 Haar-features (2) Capture the face symmetry

25

26 Haar-features (3) Four types of Haar features (Type A, B, C, D) within a 24x24 detection window. A feature can be extracted at any location and at any scale inside the window!

27 Haar-features (4) Too many features! –location, scale, type –180,000+ possible features associated with each 24x24 window Not all of them are useful! Speed-up strategy –Fast calculation of Haar-features –Selection of good features (AdaBoost)

28 Integral image (1) Each entry of the integral image is the sum of the pixel values in the blue area (all pixels above and to the left of that position, inclusive). Example:
Image:
2 1 2 3 4 3
3 2 1 2 2 3
4 2 1 1 1 2
Integral image:
2  3  5  8 12 15
5  8 11 16 22 28
9 14 18 24 31 39
Time complexity?
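
To answer the time-complexity question: the integral image can be built in a single pass, i.e., in time linear in the number of pixels. A minimal NumPy sketch reproducing the example above:

```python
import numpy as np

# The 3x6 example image from the slide.
img = np.array([[2, 1, 2, 3, 4, 3],
                [3, 2, 1, 2, 2, 3],
                [4, 2, 1, 1, 1, 2]])

# Integral image: each entry holds the sum of all pixels above and to the
# left (inclusive).  Two cumulative sums compute it in O(width * height).
ii = img.cumsum(axis=0).cumsum(axis=1)
print(ii)
# [[ 2  3  5  8 12 15]
#  [ 5  8 11 16 22 28]
#  [ 9 14 18 24 31 39]]
```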

29 Integral image (2) Partition the area into regions 1, 2, 3, 4 (4 is the target rectangle) and read the integral image at the four corners a, b, c, d: a = sum(1), b = sum(1+2), c = sum(1+3), d = sum(1+2+3+4). Then Sum(4) = d + a - b - c: a four-point calculation! Two-rectangle features (A, B) need 6 points, three-rectangle features (C) need 8 points, and four-rectangle features (D) need 9 points.
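
Below is a sketch of the four-point rectangle sum and of a two-rectangle (Type A style) feature built from it. The leading row and column of zeros is an implementation convenience added here so that rectangles touching the image border need no special case; `rect_sum` and the half-window split are illustrative, not the exact layout from the slide.

```python
import numpy as np

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle using four lookups into the integral
    image `ii`, which is padded with a leading row/column of zeros."""
    d = ii[top + height, left + width]   # bottom-right corner
    a = ii[top, left]                    # top-left corner
    b = ii[top, left + width]            # top-right corner
    c = ii[top + height, left]           # bottom-left corner
    return d + a - b - c

img = np.array([[2, 1, 2, 3, 4, 3],
                [3, 2, 1, 2, 2, 3],
                [4, 2, 1, 1, 1, 2]])
ii = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

# A two-rectangle Haar feature: sum of the white half minus the black half.
white = rect_sum(ii, 0, 0, 3, 3)   # left half of the example image
black = rect_sum(ii, 0, 3, 3, 3)   # right half of the example image
feature = white - black
```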

30 Feature selection A very small number of features can be combined to form an effective classifier! Example: the 1st and 2nd features selected by AdaBoost

31 Feature selection A weak classifier h compares a single feature value against a threshold θ: f1 > θ => Face! f2 ≤ θ => Not a face! In general, h = 1 if fi > θ and 0 otherwise.

32 Feature selection Idea: combine several weak classifiers to generate a strong classifier: α1·h1 + α2·h2 + α3·h3 + … + αT·hT ≷ threshold. Each weak classifier hi is defined by a (feature, threshold) pair and outputs 1 or 0; its weight αi reflects the performance of that weak classifier on the training set.
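
A minimal sketch of this weighted vote in Python. The `polarity` argument (letting a feature fire on values below the threshold as well) follows the usual Viola-Jones formulation and is an addition to the simpler rule on the slide; comparing the vote against half the sum of the alphas is the standard AdaBoost decision rule and stands in for the slide's generic threshold.

```python
def weak_classify(f, theta, polarity=1):
    """Weak classifier h: 1 ("face") if the feature value is on the
    positive side of the threshold, 0 otherwise."""
    return 1 if polarity * f > polarity * theta else 0

def strong_classify(feature_values, alphas, thetas, polarities):
    """Strong classifier: weighted vote of T weak classifiers,
    compared against half of the total weight."""
    vote = sum(a * weak_classify(f, t, p)
               for a, f, t, p in zip(alphas, feature_values, thetas, polarities))
    return 1 if vote >= 0.5 * sum(alphas) else 0
```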

33 Feature selection Training dataset –Positive samples: 4916 face images –Negative samples: non-face windows cropped from 9500 non-face images

34 AdaBoost Each training sample may have a different importance! Focuses more on previously misclassified samples –Initially, all samples are assigned equal weights –Weights may change at each boosting round: misclassified samples => increase their weights; correctly classified samples => decrease their weights

35 Boosting illustration: 2D case Weak Classifier 1 Slide credit: Paul Viola

36 Boosting illustration Weights Increased

37 Boosting illustration Weak Classifier 2

38 Boosting illustration Weights Increased

39 Boosting illustration Weak Classifier 3

40 Boosting illustration Final classifier is a combination of weak classifiers

41 Viola-Jones detector: AdaBoost Want to select the single rectangle feature and threshold that best separates positive (faces) and negative (non-faces) training examples, in terms of weighted error. Outputs of a possible rectangle feature on faces and non-faces. … Resulting weak classifier: For the next round, reweight the examples according to their errors and choose another filter/threshold combo. Kristen Grauman

42 AdaBoost [Figure: feature values fi plotted from -∞ to ∞ with the initial weights for each data point; weights of misclassified samples are increased, weights of correctly classified samples are decreased; a weak classifier's weight α reflects its error rate (error ↘ => α ↗)]

43 AdaBoost

44 Learning the classifier Initialize equal weights for the training samples For T rounds –normalize the weights –select the best weak classifier in terms of the weighted error –update the weights (raise the weights of misclassified samples) Linearly combine these T weak classifiers to form a strong classifier

45 AdaBoost Algorithm (Freund & Schapire 1995) Given training examples {x1, …, xn}: Start with uniform weights on the training examples. For T rounds: evaluate the weighted error for each feature and pick the best; re-weight the examples (incorrectly classified -> more weight, correctly classified -> less weight). The final classifier is a combination of the weak ones, weighted according to the error they had.
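
A compact sketch of these rounds, following the weight-update form used in the Viola-Jones paper (beta = err / (1 - err)). The exhaustive loop over features and thresholds is written for clarity, not speed, and the variable names are illustrative.

```python
import numpy as np

def adaboost_train(X, y, T):
    """X: (n_samples, n_features) matrix of precomputed Haar-feature values;
    y: labels in {0, 1}.  Returns, for each of the T rounds, the selected
    (feature index, threshold, polarity) and its weight alpha."""
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                 # start with uniform weights
    learners = []
    for _ in range(T):
        w = w / w.sum()                     # normalize the weights
        best = None
        for j in range(X.shape[1]):         # evaluate every feature...
            for theta in np.unique(X[:, j]):    # ...at every threshold
                for p in (1, -1):
                    pred = (p * X[:, j] > p * theta).astype(int)
                    err = np.sum(w * (pred != y))
                    if best is None or err < best[0]:
                        best = (err, j, theta, p, pred)
        err, j, theta, p, pred = best
        beta = err / (1.0 - err)
        alpha = np.log(1.0 / max(beta, 1e-10))
        w = w * np.where(pred == y, beta, 1.0)   # shrink weights of correct samples
        learners.append((j, theta, p, alpha))
    return learners
```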

46 Feature Selection: Results The first two features selected

47 Boosting: pros and cons Advantages of boosting –Integrates classification with feature selection –Complexity of training is linear in the number of training examples –Flexibility in the choice of weak learners and boosting scheme –Testing is fast –Easy to implement Disadvantages –Needs many training examples –Often found not to work as well as an alternative discriminative classifier, the support vector machine (SVM), especially for many-class problems Slide credit: Lana Lazebnik

48 Viola-Jones detection approach Viola and Jones’ face detection algorithm –The first object detection framework to provide competitive object detection rates in real-time –Implemented in OpenCV Components –Features Haar-features Integral image –Learning Boosting algorithm –Cascade method

49 Even if the filters are fast to compute, each new image has a lot of possible windows to search. How to make the detection more efficient?

50 Cascade method Strong classifier = (α1h1 + α2h2) + (…) + (… + αThT), split into stages 1, 2, 3, …, each stage compared against its own threshold. Most windows contain no face! Reject negative windows at an early stage!
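
A minimal sketch of the early-rejection logic. The stage data structure (a list of weak classifiers plus a per-stage threshold) is an illustrative layout, not OpenCV's actual format; `window_features` is assumed to hold the precomputed Haar-feature values of one window.

```python
def cascade_classify(window_features, stages):
    """Each stage is (weak_classifiers, stage_threshold), where each weak
    classifier is (feature_index, theta, polarity, alpha).  A window must
    pass every stage to be accepted, so most non-face windows are rejected
    after evaluating only the first few, cheap stages."""
    for weak_classifiers, stage_threshold in stages:
        score = 0.0
        for j, theta, polarity, alpha in weak_classifiers:
            h = 1 if polarity * window_features[j] > polarity * theta else 0
            score += alpha * h
        if score < stage_threshold:
            return False   # rejected early: stop evaluating this window
    return True            # passed every stage: report a face
```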

51 Viola-Jones detector: summary –Train a cascade of classifiers with AdaBoost on faces and non-faces: 5K positives, 350M negatives; the result is a set of selected features, thresholds, and weights –Real-time detector using a 38-layer cascade with 6061 features in all layers –Given a new image, apply the cascade to each subwindow –Implementation available in OpenCV Kristen Grauman
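
Since the slide points to the OpenCV implementation, here is a short usage sketch with the pretrained frontal-face cascade that ships with the opencv-python package; the image filename is a placeholder.

```python
import cv2

# Load the pretrained frontal-face cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")            # placeholder test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # detector works on grayscale

# detectMultiScale runs the cascade over a sliding window at many scales.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(24, 24))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```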

52 Questions What other categories are amenable to window-based representation? Can the integral image technique be applied to compute histograms? Alternatives to sliding-window-based approaches?

