
1 Object Recognizing. We will discuss: features, classifiers, and an example ‘winning’ system.

2 Object Classes

3 Class / Non-class

4

5 Features and Classifiers: the same features can be combined with different classifiers, and the same classifier with different features.

6 Generic Features: simple (wavelets) and complex (Geons).

7 Class-specific Features: Common Building Blocks

8 Optimal Class Components? Large features are too rare; small features are found everywhere. Find features that carry the highest amount of information.

9 Entropy. For a binary variable x ∈ {0, 1} with probabilities p, the entropy is H = −∑ p log₂ p: p = (0.5, 0.5) → H = 1.0; p = (0.1, 0.9) → H ≈ 0.47; p = (0.01, 0.99) → H ≈ 0.08.
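
A minimal sketch in plain Python (the helper name entropy is mine) that reproduces the values above:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The three distributions from the slide
for p in [(0.5, 0.5), (0.1, 0.9), (0.01, 0.99)]:
    print(p, round(entropy(p), 2))
# (0.5, 0.5) 1.0
# (0.1, 0.9) 0.47
# (0.01, 0.99) 0.08
```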

10 Mutual Information I(C, F). Class: 1 1 0 1 0 1 0 0; Feature: 1 0 0 1 1 1 0 0. I(F, C) = H(C) − H(C|F).
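
A hedged sketch of estimating I(F, C) empirically from the two binary vectors on the slide; the helper functions are my own, not part of the presentation:

```python
import math
from collections import Counter

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(feature, cls):
    """I(F;C) = H(C) - H(C|F), estimated from empirical frequencies."""
    n = len(cls)
    h_c = entropy([v / n for v in Counter(cls).values()])
    h_c_given_f = 0.0
    for f_val, f_count in Counter(feature).items():
        cond = [cls[i] for i in range(n) if feature[i] == f_val]
        h_c_given_f += (f_count / n) * entropy([v / len(cond) for v in Counter(cond).values()])
    return h_c - h_c_given_f

C = [1, 1, 0, 1, 0, 1, 0, 0]   # class labels from the slide
F = [1, 0, 0, 1, 1, 1, 0, 0]   # feature detections from the slide
print(mutual_information(F, C))  # about 0.19 bits
```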

11 Optimal classification features. Theoretically: maximizing the delivered information minimizes the classification error. In practice: informative object components can be identified in training images.

12 Selecting Fragments

13 Adding a New Fragment (max-min selection). For a candidate fragment, ΔMI = MI[existing fragments + new fragment; class] − MI[existing fragments; class]. Select: Max_i Min_k ΔMI(F_i, F_k) (the min is over the existing fragments F_k, the max over the entire pool of candidates F_i).
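
A sketch of one greedy max-min step; delta_mi(f_new, f_old) is a placeholder assumed to return MI[{f_old, f_new}; class] − MI[{f_old}; class] estimated from training images, and selected is assumed to already contain at least one fragment:

```python
def max_min_select(pool, selected, delta_mi):
    """Pick the pool fragment whose minimum information gain over the
    already-selected fragments is largest (min over existing, max over the pool)."""
    best, best_score = None, float("-inf")
    for candidate in pool:
        if candidate in selected:
            continue
        score = min(delta_mi(candidate, existing) for existing in selected)
        if score > best_score:
            best, best_score = candidate, score
    return best
```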

14 Highly Informative Face Fragments

15 Horse-class features and car-class features: pictorial features learned from examples.

16 Fragments with positions: detection uses all detected fragments within their allowed regions.

17 Star model: the detected fragments ‘vote’ for the center location; find the location with the maximal vote.
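
One possible reading of the voting step as code; the accumulator-array (Hough-style) formulation and the data layout are my assumptions, not taken from the slides:

```python
import numpy as np

def vote_for_center(detections, image_shape):
    """Each detected fragment votes for the object center using its learned offset;
    return the location with the maximal vote.
    detections is assumed to hold ((row, col), (d_row, d_col)) pairs:
    the fragment position and its offset to the object center."""
    accumulator = np.zeros(image_shape, dtype=int)
    for (r, c), (dr, dc) in detections:
        cr, cc = r + dr, c + dc
        if 0 <= cr < image_shape[0] and 0 <= cc < image_shape[1]:
            accumulator[cr, cc] += 1
    return np.unravel_index(np.argmax(accumulator), accumulator.shape)
```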

18 Bag of words

19 Bag of visual words: a large collection of image patches.

20 Each class has its own histogram of visual words.
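
A minimal bag-of-visual-words sketch; the use of scikit-learn's MiniBatchKMeans and the descriptor dimensions are illustrative assumptions, with random arrays standing in for real patch descriptors:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def visual_word_histogram(patch_descriptors, kmeans):
    """Quantize each patch descriptor to its nearest visual word and
    return the normalized word histogram for one image."""
    words = kmeans.predict(patch_descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Build the vocabulary from a large collection of image patches
all_patches = np.random.rand(10000, 128)                      # stand-in 128-D descriptors
vocabulary = MiniBatchKMeans(n_clusters=500, n_init=3).fit(all_patches)

# Word histogram for one image (200 patches)
hist = visual_word_histogram(np.random.rand(200, 128), vocabulary)
```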

21 SVM – linear separation in feature space

22 Optimal Separation (SVM vs. Perceptron): find a separating plane such that the closest points are as far away as possible.

23 The Margin. Separating line: w ∙ x + b = 0. Far line: w ∙ x + b = +1. Their distance: w ∙ ∆x = +1. Separation: |∆x| = 1/|w|. Margin: 2/|w|.

24 Max Margin Classification. The examples are vectors x_i; the labels y_i are +1 for class, −1 for non-class. Maximize the margin 2/|w|; equivalently (the form usually used), minimize ½|w|² subject to y_i (w ∙ x_i + b) ≥ 1 for all i. How do we solve such a constrained optimization?

25 Using Lagrange multipliers: minimize L_P = ½|w|² − ∑_i α_i [y_i (w ∙ x_i + b) − 1], with α_i ≥ 0 the Lagrange multipliers.

26 Minimize L_P: set the derivatives with respect to w and b to 0, which gives w = ∑_i α_i y_i x_i and ∑_i α_i y_i = 0. Dual formulation: substitute these back into L_P and maximize the Lagrangian with respect to the α_i, subject to the above conditions.

27 Dual formulation. A mathematically equivalent formulation: maximize the Lagrangian with respect to the α_i. After the manipulations, a nice concise optimization remains: maximize L_D = ∑_i α_i − ½ ∑_i ∑_j α_i α_j y_i y_j (x_i ∙ x_j), subject to α_i ≥ 0 and ∑_i α_i y_i = 0.

28 SVM in simple matrix form. We first find the α; from this we can find w, b, and the support vectors. The matrix H is a simple ‘data matrix’: H_ij = y_i y_j (x_i ∙ x_j). Final classification: w ∙ x + b = ∑_i α_i y_i (x_i ∙ x) + b, because w = ∑_i α_i y_i x_i. Only the support vectors (α_i > 0) are used.
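
A minimal sketch of this pipeline using scikit-learn (an assumed library choice, not part of the slides); the fitted model exposes the products α_i y_i, the support vectors, and w, b:

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D data: labels are +1 for class, -1 for non-class
X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5], [0.0, 0.5], [0.5, 0.0], [1.0, 0.2]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # a large C approximates the separable case

alpha_y = clf.dual_coef_[0]        # the products alpha_i * y_i (support vectors only)
support = clf.support_vectors_     # the support vectors x_i
w = clf.coef_[0]                   # w = sum_i alpha_i y_i x_i
b = clf.intercept_[0]

# Final classification: sign(w.x + b) equals sign(sum_i alpha_i y_i (x_i . x) + b)
x_new = np.array([2.2, 2.1])
print(np.sign(w @ x_new + b), np.sign(alpha_y @ (support @ x_new) + b))
```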

29 Full story – separable case. Or use f(x) = ∑_i α_i y_i (x_i ∙ x) + b.

30 Quadratic Programming (QP). Minimize (with respect to x) a quadratic objective ½ xᵀQx + cᵀx, subject to one or more constraints of the form Ax ≤ b (inequality constraints) and Ex = d (equality constraints).
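
A toy QP in exactly this form, solved with SciPy's SLSQP solver; the matrices are made-up illustrative values, not data from the slides:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize 1/2 x^T Q x + c^T x  subject to  A x <= b_vec  and  E x = d
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A, b_vec = np.array([[1.0, 1.0]]), np.array([3.0])    # inequality: x0 + x1 <= 3
E, d = np.array([[1.0, -1.0]]), np.array([0.0])       # equality:   x0 - x1 = 0

def objective(x):
    return 0.5 * x @ Q @ x + c @ x

constraints = [
    {"type": "ineq", "fun": lambda x: b_vec - A @ x},  # SciPy convention: fun(x) >= 0
    {"type": "eq",   "fun": lambda x: E @ x - d},
]
result = minimize(objective, x0=np.zeros(2), constraints=constraints, method="SLSQP")
print(result.x)
```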

31 The non-separable case

32 It turns out that we can get a very similar formulation of the problem and solution if we penalize incorrect classifications in a suitable way. The penalty is Cξ_i, where ξ_i ≥ 0 is the distance of the misclassified point from the respective margin plane. We now minimize the objective together with a penalty for the misclassifications: ½|w|² + C ∑_i ξ_i, subject to y_i (w ∙ x_i + b) ≥ 1 − ξ_i and ξ_i ≥ 0.
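
A small illustration of the role of the penalty weight C, using scikit-learn's SVC on a toy set with one deliberate outlier (the library choice and data are my assumptions):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [2.5, 3.0], [0.0, 0.5], [0.5, 0.0], [2.4, 2.6]])
y = np.array([1, 1, -1, -1, -1])   # the last point is an intentional outlier

# Small C: a wide margin, and the outlier is absorbed by a slack xi_i > 0.
# Large C: the optimizer tries much harder to classify every point correctly.
for C in (0.1, 1000.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(C, len(clf.support_), "support vectors")
```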

33

34 Kernel Classification

35

36 Using kernels. A kernel K(x, x’) is associated with a mapping x → φ(x). We could apply φ(x) and perform a linear classification in the target space. It turns out that this can be done directly using the kernel, without computing the mapping; the results are equivalent. The optimal separation in the target space is the same as what we get using the procedure below, which is similar to the linear case with the kernel replacing the dot-product.

37 In the dual optimization, use K(x_i, x_j) in place of x_i ∙ x_j. For classification, use f(x) = ∑_i α_i y_i K(x_i, x) + b.
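
A sketch of kernel classification with scikit-learn (an assumed library): either let the library apply the kernel, or pass a precomputed kernel matrix, mirroring the "replace the dot-product by K" recipe above:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.rand(40, 2)
y = np.where(np.linalg.norm(X - 0.5, axis=1) < 0.3, 1, -1)   # not linearly separable

# Option 1: the library applies the kernel internally
clf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

# Option 2: supply K(x_i, x_j) explicitly as a precomputed kernel matrix
K = rbf_kernel(X, X, gamma=2.0)
clf_pre = SVC(kernel="precomputed").fit(K, y)
pred = clf_pre.predict(rbf_kernel(X[:5], X, gamma=2.0))   # K(new points, training points)
```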

38 Summary points. Linear separation with the largest margin, f(x) = w ∙ x + b. Dual formulation, f(x) = ∑_i α_i y_i (x_i ∙ x) + b. Natural extension to non-separable classes. Extension through kernels, f(x) = ∑_i α_i y_i K(x_i, x) + b.

39 Felzenszwalb et al. Felzenszwalb, McAllester, Ramanan CVPR 2008. A Discriminatively Trained, Multiscale, Deformable Part Model

40 Object model using HoG. A bicycle and its ‘root filter’. The root filter is a patch of HoG descriptors: the image is partitioned into 8x8-pixel cells, and in each cell we compute a histogram of gradient orientations.
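
A sketch of computing such a descriptor with scikit-image (an assumed library; the parameter values are illustrative):

```python
import numpy as np
from skimage.feature import hog

image = np.random.rand(128, 64)   # stand-in grayscale window (e.g. a candidate root window)

# 8x8-pixel cells with 9 orientation bins: one histogram of gradient orientations per cell
descriptor = hog(image,
                 orientations=9,
                 pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2),
                 feature_vector=True)
print(descriptor.shape)
```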

41 Using patches with HoG descriptors and classification by SVM

42 Dealing with scale: multi-scale analysis. The filter is searched over a pyramid of HoG descriptors to deal with the unknown scale.
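
A sketch of the multi-scale step, again assuming scikit-image; the downscale factor and the stopping size are illustrative choices:

```python
import numpy as np
from skimage.transform import pyramid_gaussian
from skimage.feature import hog

image = np.random.rand(256, 256)   # stand-in grayscale image

# A HoG descriptor at every level of a Gaussian pyramid, so a fixed-size
# filter can be matched against objects of unknown scale.
hog_pyramid = []
for level in pyramid_gaussian(image, downscale=1.5, max_layer=5):
    if min(level.shape) < 64:      # stop once a level is smaller than the filter
        break
    hog_pyramid.append(hog(level, pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
```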

43 Adding Parts. A part is P_i = (F_i, v_i, s_i, a_i, b_i): F_i is the filter for the i-th part; v_i is the center of a box of possible positions for part i relative to the root position; s_i is the size of this box; a_i and b_i are two-dimensional vectors specifying the coefficients of a quadratic function that scores each possible placement of the i-th part. That is, a_i and b_i are two numbers each, and the penalty for a deviation (∆x, ∆y) from the expected location is a_i1 ∆x + a_i2 ∆y + b_i1 ∆x² + b_i2 ∆y².

44 Bicycle model: root, parts, spatial map. Person model.

45

46 Match Score. The full score of a potential match is: ∑_i F_i ∙ H_i + ∑_i (a_i1 x_i + a_i2 y_i + b_i1 x_i² + b_i2 y_i²). F_i ∙ H_i is the appearance part; (x_i, y_i) is the deviation of part p_i from its expected location in the model, which gives the spatial part.
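
A sketch of this score as code, following the sum exactly as written on the slide; the data layout (arrays for F_i and H_i, coefficient pairs a_i and b_i) is my assumption:

```python
import numpy as np

def part_score(F_i, H_i, a_i, b_i, dx, dy):
    """Appearance term plus the quadratic spatial term for one part:
    F_i . H_i + a_i1*dx + a_i2*dy + b_i1*dx^2 + b_i2*dy^2,
    where (dx, dy) is the part's deviation from its expected location."""
    appearance = float(np.dot(F_i.ravel(), H_i.ravel()))
    spatial = a_i[0] * dx + a_i[1] * dy + b_i[0] * dx ** 2 + b_i[1] * dy ** 2
    return appearance + spatial

def match_score(parts, placements):
    """Full score of a candidate match: the sum over all parts.
    parts holds (F_i, a_i, b_i); placements holds (H_i, dx, dy) per part."""
    return sum(part_score(F, H, a, b, dx, dy)
               for (F, a, b), (H, dx, dy) in zip(parts, placements))
```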

47 Using SVM. The score of a match can be expressed as the dot-product of a vector β of coefficients with an image feature vector ψ: score = β ∙ ψ. The vectors ψ are used to train an SVM classifier: β ∙ ψ > 1 for class examples, β ∙ ψ < −1 for non-class examples.

48 β ∙ ψ > 1 for class examples, β ∙ ψ < −1 for non-class examples. However, ψ depends on the placement z, that is, on the values of ∆x_i, ∆y_i. We need to take the best ψ over all placements; in their notation, classification then uses β ∙ f > 1.

49 Finding β, SVM training. In analogy to classical SVMs, we would like to train from labeled examples D = (..., ) by optimizing the following objective function.
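
The objective itself is not reproduced in this transcript; reconstructed from the cited paper (so treat it as a hedged restatement rather than a quote), it is a hinge-loss SVM objective in which the feature vector is taken at the best placement:

```latex
L_D(\beta) = \tfrac{1}{2}\,\lVert \beta \rVert^{2}
           + C \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i\, f_\beta(x_i)\bigr),
\qquad
f_\beta(x) = \max_{z} \; \beta \cdot \psi(x, z)
```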

50 Recognition. Search with gradient descent over the placement; this also includes the levels in the hierarchy. Start with the root filter and find locations of high score for it. For these high-scoring locations, search for the optimal placement of the parts at a level with twice the resolution of the root filter, using gradient descent. With the optimal placement, classify using β ∙ ψ > 1 for class examples, β ∙ ψ < −1 for non-class examples.

51

52 Training: positive examples with bounding boxes around the objects, and negative examples. Learn the root filter using SVM. Define a fixed number of parts, at locations of high energy in the root-filter HoG. Use these to start the iterative learning.

53 Hard Negatives. The set M of hard negatives for a known β and data set D: these are the support vectors (y ∙ f = 1) or misses (y ∙ f < 1). Optimal SVM training does not need all the examples; hard examples are sufficient. For a given β, use the positive examples plus C hard negative examples. Use this data to compute β by standard SVM. Iterate (with a new set of C hard examples), as in the sketch below.
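
A sketch of the hard-negative mining loop, using scikit-learn's LinearSVC as a stand-in trainer; the pool handling, the set size k, and the number of rounds are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

def mine_hard_negatives(clf, X_neg_pool, k):
    """Return the k negatives the current model handles worst:
    support vectors (y*f = 1) or misses (y*f < 1), i.e. the largest f(x)."""
    scores = clf.decision_function(X_neg_pool)   # f(x); negatives should have f(x) <= -1
    hardest = np.argsort(-scores)[:k]            # highest-scoring negatives first
    return X_neg_pool[hardest]

def train_with_hard_negatives(X_pos, X_neg_pool, k=500, rounds=5):
    rng = np.random.RandomState(0)
    X_neg = X_neg_pool[rng.choice(len(X_neg_pool), k, replace=False)]
    clf = None
    for _ in range(rounds):
        X = np.vstack([X_pos, X_neg])
        y = np.hstack([np.ones(len(X_pos)), -np.ones(len(X_neg))])
        clf = LinearSVC(C=1.0).fit(X, y)
        X_neg = mine_hard_negatives(clf, X_neg_pool, k)   # new hard set for the next round
    return clf
```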

54

55 All images contain at least 1 bike

56

57 Correct person detections

58 Difficult images, medium results. About 0.5 precision at 0.5 recall

59 All images contain at least 1 bird

60

61 Average precision: roughly, an AP of 0.3 means that in a test with 1000 class images, out of the top 1000 detections, 300 will be true class examples (recall = precision = 0.3).

62 Future Directions. Dealing with a very large number of classes: ImageNet, with 15,000 categories and 12 million images. To consider: human-level performance for at least one class.

