
1 Computer Vision Template matching and object recognition Marc Pollefeys COMP 256 Some slides and illustrations from D. Forsyth, …

2 Computer Vision 2 Tentative class schedule
Jan 16/18 - Introduction
Jan 23/25 - Cameras / Radiometry
Jan 30/Feb 1 - Sources & Shadows / Color
Feb 6/8 - Linear filters & edges / Texture
Feb 13/15 - Multi-View Geometry / Stereo
Feb 20/22 - Optical flow / Project proposals
Feb 27/Mar 1 - Affine SfM / Projective SfM
Mar 6/8 - Camera Calibration / Segmentation
Mar 13/15 - Spring break
Mar 20/22 - Fitting / Prob. Segmentation
Mar 27/29 - Silhouettes and Photoconsistency / Linear tracking
Apr 3/5 - Project Update / Non-linear Tracking
Apr 10/12 - Object Recognition
Apr 17/19 - Range data
Apr 24/26 - Final project

3 Computer Vision 3 Recognition by finding patterns
We have seen very simple template matching (under filters). Some objects behave like quite simple templates - frontal faces, for example.
Strategy:
– Find image windows
– Correct lighting
– Pass them to a statistical test (a classifier) that accepts faces and rejects non-faces

4 Computer Vision 4 Basic ideas in classifiers
Loss
– Some errors may be more expensive than others. E.g. for a fatal disease that is easily cured by a cheap medicine with no side-effects, false positives in diagnosis are better than false negatives.
– We discuss two-class classification: L(1->2) is the loss caused by calling a 1 a 2.
Total risk of using classifier s (see the expression below).
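The total-risk expression on this slide was an equation image that did not survive the transcript; a reconstruction, assuming the standard two-class form used in Forsyth & Ponce, is:

```latex
R(s) = \Pr\{1 \to 2 \mid \text{using } s\}\, L(1 \to 2) \;+\; \Pr\{2 \to 1 \mid \text{using } s\}\, L(2 \to 1)
```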

5 Computer Vision 5 Basic ideas in classifiers
Generally, we should classify as 1 if the expected loss of classifying as 1 is smaller than that of classifying as 2 (the rule below).
Crucial notion: decision boundary
– points where the expected loss is the same for either choice
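The decision rule itself was an equation image; a reconstruction, assuming zero loss for correct decisions, is:

```latex
\text{choose } 1 \quad \text{if } p(1 \mid x)\, L(1 \to 2) > p(2 \mid x)\, L(2 \to 1), \qquad \text{choose } 2 \text{ otherwise.}
```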

6 Computer Vision 6 Some loss may be inevitable: the minimum risk (shaded area) is called the Bayes risk

7 Computer Vision 7 Finding a decision boundary is not the same as modelling a conditional density.

8 Computer Vision 8 Example: known distributions
Assume normal class densities: p-dimensional measurements with common (known) covariance Σ and different (known) means μ_k. Class priors are π_k.
We can ignore a factor common to all posteriors - important; the posteriors then take the form reconstructed below.
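The posterior expression was an equation image; a reconstruction for this Gaussian, shared-covariance case is:

```latex
p(k \mid x) \;\propto\; \pi_k \exp\!\Big(-\tfrac{1}{2}(x-\mu_k)^{\mathsf T}\Sigma^{-1}(x-\mu_k)\Big)
```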

9 Computer Vision 9 The classifier boils down to: choose the class that minimizes the Mahalanobis distance to its mean (adjusted by the log prior). Because the covariance is common, this simplifies to the sign of a linear expression (i.e. a Voronoi diagram in 2D for Σ = I and equal priors).
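A reconstruction of the discriminant on this slide, assuming the standard form for shared covariance:

```latex
\hat{k} \;=\; \arg\min_k \Big[(x-\mu_k)^{\mathsf T}\Sigma^{-1}(x-\mu_k) \;-\; 2\log\pi_k\Big]
```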

10 Computer Vision 10 Plug-in classifiers
Assume that distributions have some parametric form - now estimate the parameters from the data.
Common:
– assume a normal distribution with shared covariance, different means; use the usual estimates
– ditto, but different covariances; ditto
Issue: parameter estimates that are “good” may not give optimal classifiers.
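A minimal sketch of the first plug-in option (shared covariance, different means), assuming NumPy arrays and integer class labels; the function names are illustrative, not from the lecture:

```python
import numpy as np

def fit_plugin_gaussian(X, y):
    """Estimate per-class means and a shared (pooled) covariance from labelled data."""
    classes = np.unique(y)
    means = {k: X[y == k].mean(axis=0) for k in classes}
    # Pooled covariance: deviations from each class mean, divided by N - #classes.
    centered = np.vstack([X[y == k] - means[k] for k in classes])
    cov = centered.T @ centered / (len(X) - len(classes))
    priors = {k: np.mean(y == k) for k in classes}
    return means, cov, priors

def predict_plugin_gaussian(X, means, cov, priors):
    """Choose the class minimising Mahalanobis distance minus 2*log prior."""
    inv_cov = np.linalg.inv(cov)
    classes = sorted(means)
    scores = []
    for k in classes:
        d = X - means[k]
        maha = np.einsum('ij,jk,ik->i', d, inv_cov, d)
        scores.append(maha - 2 * np.log(priors[k]))
    return np.array(classes)[np.argmin(np.stack(scores, axis=1), axis=1)]
```

The second option on the slide would simply estimate one covariance per class instead of the pooled one.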

11 Computer Vision 11 Histogram based classifiers
Use a histogram to represent the class-conditional densities (i.e. p(x|1), p(x|2), etc.)
Advantage: estimates become quite good with enough data!
Disadvantage: the histogram becomes big with high dimension
– but maybe we can assume feature independence?

12 Computer Vision 12 Finding skin
Skin has a very small range of (intensity-independent) colours, and little texture.
– Compute an intensity-independent colour measure, check if the colour is in this range, check if there is little texture (median filter).
– See this as a classifier - we can set up the tests by hand, or learn them.
– Get class-conditional densities (histograms) and priors from data (counting).
The classifier then compares the class posteriors for skin and not-skin (a sketch follows).
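A minimal sketch of the histogram-based skin classifier, assuming pixels described by a 2-D chromaticity (r, g) quantised into bins; the bin count, smoothing, and threshold are illustrative assumptions, not values from the lecture or the Jones & Rehg paper:

```python
import numpy as np

N_BINS = 32  # assumed quantisation of each chromaticity axis

def chromaticity(rgb):
    """Intensity-independent colour: (r, g) = (R, G) / (R + G + B)."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-8
    return rgb[..., :2] / s

def bin_index(pixels_rgb):
    rg = chromaticity(pixels_rgb.astype(float))
    bins = np.clip((rg * N_BINS).astype(int), 0, N_BINS - 1)
    return bins[:, 0] * N_BINS + bins[:, 1]

def fit_histograms(pixels_rgb, is_skin):
    """Class-conditional histograms p(colour | skin) and p(colour | not skin), plus prior."""
    idx = bin_index(pixels_rgb)
    hist_skin = np.bincount(idx[is_skin], minlength=N_BINS * N_BINS) + 1.0   # +1: smoothing
    hist_bg = np.bincount(idx[~is_skin], minlength=N_BINS * N_BINS) + 1.0
    return hist_skin / hist_skin.sum(), hist_bg / hist_bg.sum(), np.mean(is_skin)

def classify(pixels_rgb, hist_skin, hist_bg, prior_skin, theta=1.0):
    """Skin if p(colour|skin) P(skin) > theta * p(colour|not skin) P(not skin)."""
    idx = bin_index(pixels_rgb)
    return hist_skin[idx] * prior_skin > theta * hist_bg[idx] * (1 - prior_skin)
```

Sweeping theta trades false positives against false negatives and traces out a receiver operating curve like the one on the following slides.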

13 Computer Vision 13 Figure from “Statistical color models with application to skin detection,” M.J. Jones and J. Rehg, Proc. Computer Vision and Pattern Recognition, 1999 copyright 1999, IEEE

14 Computer Vision 14 Figure from “Statistical color models with application to skin detection,” M.J. Jones and J. Rehg, Proc. Computer Vision and Pattern Recognition, 1999 copyright 1999, IEEE Receiver Operating Curve

15 Computer Vision 15 Finding faces
Faces “look like” templates (at least when they’re frontal).
General strategy:
– search image windows at a range of scales
– correct for illumination
– present the corrected window to a classifier
Issues:
– How is it corrected?
– What features?
– What classifier?
– What about lateral views?

16 Computer Vision 16 Naive Bayes (Important: naive is not necessarily pejorative)
Find faces by vector quantizing image patches, then computing a histogram of patch types within a face.
A joint histogram doesn’t work when there are too many features (the features are the patch types), so:
– assume they’re independent and cross fingers
– this gives a reduction in degrees of freedom
– very effective for face finders; why? probably because the examples that would present real problems aren’t frequent
Many face finders are listed on the face detection home page http://home.t-online.de/home/Robert.Frischholz/face.htm
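A minimal sketch of the naive Bayes step, assuming each window has already been vector-quantised into a sequence of patch-type indices in [0, n_types); the Laplace smoothing and class encoding (0 = non-face, 1 = face) are illustrative assumptions:

```python
import numpy as np

def fit_naive_bayes(patch_type_seqs, labels, n_types):
    """Estimate p(patch type | class) for class 0 (non-face) and class 1 (face)."""
    counts = np.ones((2, n_types))  # Laplace smoothing
    for seq, lab in zip(patch_type_seqs, labels):
        counts[lab] += np.bincount(seq, minlength=n_types)
    log_cond = np.log(counts / counts.sum(axis=1, keepdims=True))
    log_prior = np.log(np.bincount(labels, minlength=2) / len(labels))
    return log_cond, log_prior

def is_face(seq, log_cond, log_prior):
    """Independence assumption: just sum the per-patch log likelihoods."""
    scores = log_prior + log_cond[:, seq].sum(axis=1)
    return scores[1] > scores[0]
```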

17 Computer Vision 17 Figure from A Statistical Method for 3D Object Detection Applied to Faces and Cars, H. Schneiderman and T. Kanade, Proc. Computer Vision and Pattern Recognition, 2000, copyright 2000, IEEE

18 Computer Vision 18 Face Recognition Whose face is this? (perhaps in a mugshot) Issue: –What differences are important and what not? –Reduce the dimension of the images, while maintaining the “important” differences. One strategy: –Principal components analysis

19 Computer Vision 19 Template matching
Simple cross-correlation between images; the best match wins.
Computationally expensive: the presented image has to be correlated with every image in the database!
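A minimal sketch of this matching scheme, using zero-mean normalised correlation (a common variant of the simple cross-correlation named on the slide) and assuming all images are greyscale arrays of the same size:

```python
import numpy as np

def normalised_correlation(a, b):
    """Zero-mean, unit-norm correlation between two same-sized images."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def best_match(query, database):
    """Correlate the query against every stored image; the best match wins."""
    scores = [normalised_correlation(query, img) for img in database]
    return int(np.argmax(scores)), max(scores)
```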

20 Computer Vision 20 Eigenspace matching
Consider PCA: project each database image onto the first few principal components (the eigenspace). Then the image-to-image distance is approximated by the distance between the low-dimensional coefficient vectors, which is much cheaper to compute!
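A minimal sketch of eigenspace matching, assuming images are flattened to vectors and stacked in rows; the number of components is an illustrative choice:

```python
import numpy as np

def build_eigenspace(images, n_components=20):
    """PCA via SVD of the mean-centred data matrix (one flattened image per row)."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]                      # principal directions
    coeffs = (X - mean) @ basis.T                  # low-dimensional coefficients
    return mean, basis, coeffs

def match_in_eigenspace(query, mean, basis, coeffs):
    """Project the query and find the nearest stored coefficient vector."""
    q = query.reshape(-1).astype(float) - mean
    g = q @ basis.T
    return int(np.argmin(np.linalg.norm(coeffs - g, axis=1)))
```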

21 Computer Vision 21

22 Computer Vision 22 Eigenfaces: a face is approximated as the mean face plus a linear combination of eigenfaces.

23 Computer Vision 23

24 Computer Vision 24 Appearance manifold approach (Nayar et al. ’96)
- For every object, sample the set of viewing conditions.
- Use these images as feature vectors.
- Apply a PCA over all the images.
- Keep the dominant PCs.
- The sequence of views for one object represents a manifold in the space of projections.
- What is the nearest manifold for a given view?

25 Computer Vision 25 Object-pose manifold Appearance changes projected on PCs (1D pose changes) Sufficient characterization for recognition and pose estimation

26 Computer Vision 26 Real-time system (Nayar et al. ‘96)

27 Computer Vision 27 Difficulties with PCA
Projection may suppress important detail
– the smallest-variance directions may not be unimportant
The method does not take the discriminative task into account
– typically, we wish to compute features that allow good discrimination
– not the same as largest variance

28 Computer Vision 28

29 Computer Vision 29 Linear Discriminant Analysis
We wish to choose linear functions of the features that allow good discrimination.
– Assume the class-conditional covariances are the same.
– Want a linear feature that maximises the spread of the class means for a fixed within-class variance.
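A minimal sketch of two-class LDA under the shared-covariance assumption, assuming NumPy arrays with one sample per row; function names are illustrative:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Direction w maximising between-class spread over within-class variance."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter (shared covariance assumption).
    S_w = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(S_w, mu1 - mu0)   # closed form: S_w^{-1} (mu1 - mu0)
    return w / np.linalg.norm(w)

def project(X, w):
    """One-dimensional discriminative feature: the projection onto w."""
    return X @ w
```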

30 Computer Vision 30

31 Computer Vision 31

32 Computer Vision 32

33 Computer Vision 33 Neural networks
Linear decision boundaries are useful
– but often not very powerful
– we seek an easy way to get more complex boundaries
Compose linear decision boundaries
– i.e. have several linear classifiers, and apply a classifier to their output
– a nuisance, because sign(ax+by+cz) etc. isn’t differentiable
– use a smooth “squashing function” in place of sign
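The formulas on the following (image-only) slides did not survive the transcript; a reconstruction of the composition, assuming the usual logistic sigmoid as the squashing function:

```latex
\phi(u) = \frac{1}{1 + e^{-u}}, \qquad
o(x) = \phi\!\Big(\sum_j v_j\, \phi(w_j^{\mathsf T} x + b_j) + c\Big)
```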

34 Computer Vision 34

35 Computer Vision 35

36 Computer Vision 36 Training
Choose the parameters to minimize the error on the training set.
Use stochastic gradient descent, computing the gradient with a trick (backpropagation, aka the chain rule).
Stop when the error is low, and hasn’t changed much.
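A minimal sketch of one stochastic-gradient step for the one-hidden-layer network above, using the sigmoid squashing function; the squared-error loss, layer sizes, and learning rate are illustrative assumptions:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sgd_step(x, t, W, b, v, c, lr=0.1):
    """One backpropagation update for a 1-hidden-layer net with a scalar output."""
    # Forward pass.
    h = sigmoid(W @ x + b)          # hidden activations
    o = sigmoid(v @ h + c)          # network output
    # Backward pass (chain rule) for squared error 0.5 * (o - t)**2.
    delta_o = (o - t) * o * (1 - o)
    delta_h = delta_o * v * h * (1 - h)
    # Gradient descent updates.
    v -= lr * delta_o * h
    c -= lr * delta_o
    W -= lr * np.outer(delta_h, x)
    b -= lr * delta_h
    return W, b, v, c
```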

37 Computer Vision 37 The vertical face-finding part of Rowley, Baluja and Kanade’s system Figure from “Rotation invariant neural-network based face detection,” H.A. Rowley, S. Baluja and T. Kanade, Proc. Computer Vision and Pattern Recognition, 1998, copyright 1998, IEEE

38 Computer Vision 38 Histogram equalisation gives an approximate fix for illumination-induced variability

39 Computer Vision 39 Architecture of the complete system: they use another neural net to estimate orientation of the face, then rectify it. They search over scales to find bigger/smaller faces. Figure from “Rotation invariant neural-network based face detection,” H.A. Rowley, S. Baluja and T. Kanade, Proc. Computer Vision and Pattern Recognition, 1998, copyright 1998, IEEE

40 Computer Vision 40 Figure from “Rotation invariant neural-network based face detection,” H.A. Rowley, S. Baluja and T. Kanade, Proc. Computer Vision and Pattern Recognition, 1998, copyright 1998, IEEE

41 Computer Vision 41 Convolutional neural networks
Template matching using NN classifiers seems to work.
Natural features are filter outputs
– probably spots and bars, as in texture
– but why not learn the filter kernels, too?
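A minimal sketch of one filter-and-subsample stage of the kind LeNet stacks (next slide), in plain NumPy; the kernel sizes, tanh nonlinearity, and 2x2 average pooling are illustrative assumptions, and in a real convolutional network the kernels would be learned by backpropagation rather than fixed:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def filter_and_subsample(image, kernels, pool=2):
    """One convolutional stage: filter bank, squashing, then 2x2 subsampling."""
    maps = []
    for k in kernels:
        response = np.tanh(conv2d_valid(image, k))          # filter + squash
        h, w = response.shape
        h, w = h - h % pool, w - w % pool                    # crop to a multiple of pool
        pooled = response[:h, :w].reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
        maps.append(pooled)
    return np.stack(maps)
```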

42 Computer Vision 42 Figure from “Gradient-Based Learning Applied to Document Recognition”, Y. Lecun et al Proc. IEEE, 1998 copyright 1998, IEEE A convolutional neural network, LeNet; the layers filter, subsample, filter, subsample, and finally classify based on outputs of this process.

43 Computer Vision 43 LeNet is used to classify handwritten digits. Notice that the test error rate is not the same as the training error rate, because the test set consists of items not in the training set. Not all classification schemes necessarily have small test error when they have small training error. Figure from “Gradient-Based Learning Applied to Document Recognition”, Y. Lecun et al Proc. IEEE, 1998 copyright 1998, IEEE

44 Computer Vision 44 Support Vector Machines
Neural nets try to build a model of the posterior, p(k|x). Instead, try to obtain the decision boundary directly
– potentially easier, because we need to encode only the geometry of the boundary, not any irrelevant wiggles in the posterior
– not all points affect the decision boundary

45 Computer Vision 45 Support Vector Machines
A set S of points x_i ∈ R^n, where each x_i belongs to one of two classes, y_i ∈ {-1, 1}. The goal is to find a hyperplane that divides S into these two classes.
S is separable if there exist w ∈ R^n and b ∈ R such that y_i (w·x_i + b) ≥ 1 for all i.
(Figure: separating hyperplanes; d_i is the distance to the closest point.)

46 Computer Vision 46 Support Vector Machines
The optimal separating hyperplane (OSH) maximizes the distance to the closest points of S (the support vectors).
Problem 1: minimize ½ ||w||² subject to y_i (w·x_i + b) ≥ 1, i = 1, …, N.

47 Computer Vision 47 Solve using Lagrange multipliers
Lagrangian: L(w, b, α) = ½ ||w||² − Σ_i α_i [ y_i (w·x_i + b) − 1 ], with α_i ≥ 0.
– At the solution, ∂L/∂w = 0 and ∂L/∂b = 0, so w = Σ_i α_i y_i x_i and Σ_i α_i y_i = 0.
– Therefore the problem can be rewritten in terms of the α_i alone (the dual problem on the next slide).

48 Computer Vision 48 Problem 2: dual problem
Minimize ½ α^T D α − Σ_i α_i subject to Σ_i y_i α_i = 0 and α_i ≥ 0, where D_ij = y_i y_j x_i·x_j.
Karush-Kuhn-Tucker condition: α_i [ y_i (w·x_i + b) − 1 ] = 0, so α_i > 0 only for support vectors, and b = y_j − w·x_j for any support vector x_j.

49 Computer Vision 49 Linearly non-separable cases
Find a trade-off between maximum separation and misclassifications.
Problem 3: minimize ½ ||w||² + C Σ_i ξ_i subject to y_i (w·x_i + b) ≥ 1 − ξ_i and ξ_i ≥ 0.

50 Computer Vision 50 Dual problem for non-separable cases
Problem 4: minimize ½ α^T D α − Σ_i α_i subject to Σ_i y_i α_i = 0 and 0 ≤ α_i ≤ C, where D_ij = y_i y_j x_i·x_j.
Karush-Kuhn-Tucker condition: α_i [ y_i (w·x_i + b) − 1 + ξ_i ] = 0.
Support vectors: margin vectors, vectors too close to the OSH, and misclassified vectors (errors).

51 Computer Vision 51 Decision function
Once w and b have been computed, the classification decision for an input x is given by f(x) = sign(w·x + b).
Note that the globally optimal solution can always be obtained (convex problem).

52 Computer Vision 52 Non-linear SVMs
Non-linear separation surfaces can be obtained by non-linearly mapping the data to a high-dimensional space and then applying the linear SVM technique.
Note that the data only appears through dot products, so explicit dot products in the high-dimensional space can be avoided by using Mercer kernels, e.g.
– Polynomial kernel: K(x, y) = (x·y + 1)^d
– Radial basis function: K(x, y) = exp(−||x − y||² / (2σ²))
– Sigmoidal function: K(x, y) = tanh(κ x·y − δ)
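A minimal sketch of a kernel SVM on toy data, using scikit-learn rather than the solver described on the previous slides; the data, kernel, and parameter choices are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy two-class problem that is not linearly separable: points inside vs. outside a circle.
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

# The RBF kernel maps the data implicitly to a high-dimensional space (Mercer kernel trick).
clf = SVC(kernel="rbf", C=1.0, gamma=2.0)
clf.fit(X, y)

print("number of support vectors:", len(clf.support_vectors_))
print("training accuracy:", clf.score(X, y))
```

Internally, SVC solves essentially the dual problem from the earlier slides (via libsvm).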

53 Computer Vision 53 Space in which the decision boundary is linear
A conic in the original space has the form a x1² + b x1 x2 + c x2² + d x1 + e x2 + f = 0, which is linear in the lifted coordinates (x1², x1 x2, x2², x1, x2).

54 Computer Vision 54 SVMs for 3D object recognition (Pontil & Verri PAMI’98)
- Consider images as vectors.
- Compute a pairwise OSH using a linear SVM.
- The support vectors are representative views of the considered object (relative to the others).
- Tournament-like classification: competing classes are grouped in pairs; classes that are not selected are discarded; continue until only one class is left.
- Complexity is linear in the number of classes.
- No pose estimation.

55 Computer Vision 55 Vision applications
A reliable, simple classifier - use it wherever you need a classifier.
Commonly used for face finding.
Pedestrian finding:
– many pedestrians look like lollipops (hands at sides, torso wider than legs) most of the time
– classify image regions, searching over scales
– but what are the features?
– compute wavelet coefficients for pedestrian windows and average over pedestrians; if the average is different from zero, that coefficient is probably strongly associated with pedestrians

56 Computer Vision 56 Figure from, “A general framework for object detection,” by C. Papageorgiou, M. Oren and T. Poggio, Proc. Int. Conf. Computer Vision, 1998, copyright 1998, IEEE

57 Computer Vision 57 Figure from, “A general framework for object detection,” by C. Papageorgiou, M. Oren and T. Poggio, Proc. Int. Conf. Computer Vision, 1998, copyright 1998, IEEE

58 Computer Vision 58 Figure from, “A general framework for object detection,” by C. Papageorgiou, M. Oren and T. Poggio, Proc. Int. Conf. Computer Vision, 1998, copyright 1998, IEEE

59 Computer Vision 59 Latest results on pedestrian detection: Viola, Jones and Snow’s paper (ICCV’03: Marr prize)
– Combine static and dynamic features.
– Use a cascade for efficiency (4 frames/s).
(Figures: the 5 best of 55k features selected by AdaBoost, the 5 best static features out of 28k, and some positive examples used for training.)
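The static features here are Haar-like rectangle filters evaluated with an integral image, as in the original Viola-Jones detector; a minimal sketch of that machinery, with an illustrative two-rectangle feature (not one of the paper's learned features):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so that any rectangle sum costs four lookups."""
    return img.astype(float).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image ii (exclusive upper bounds)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(ii, r, c, h, w):
    """Illustrative Haar-like feature: left half minus right half of an h x w window."""
    left = rect_sum(ii, r, c, r + h, c + w // 2)
    right = rect_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right
```

AdaBoost then picks a small subset of such features and chains the resulting weak classifiers into the cascade mentioned above.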

60 Computer Vision 60 Dynamic detection false detection: typically 1/400,000 (=1 every 2 frames for 360x240)

61 Computer Vision 61 Static detection

62 Computer Vision 62 Next class: Object recognition Reading: Chapter 23

