
1 Probabilistic Object Recognition and Localization Bernt Schiele, Alex Pentland, ICCV 99 Presenter: Matt Grimes

2 What they did
1. Chose a set of local image descriptors whose outputs are robust to object orientation and lighting.
– Examples: the Laplacian, $D_{xx} + D_{yy}$, and the first-derivative magnitude, $\sqrt{D_x^2 + D_y^2}$.

3 What they did
2. Learn a PDF for the outputs of these descriptors given an image of the object: $p(M \mid o_n, R, T, S, L, I)$, where M is the vector of descriptor outputs, $o_n$ is a particular object, and the remaining parameters cover object orientation, lighting, etc.

4 What they did
2. Learn a PDF for the outputs of these descriptors given an image of the object: $p(M \mid o_n, \ldots)$, where M is the vector of descriptor outputs and $o_n$ is a particular object.

5 What they did
Use Bayes' rule to obtain the posterior, $p(o_n \mid M) = \frac{p(M \mid o_n)\, p(o_n)}{p(M)}$, which is the probability of an image containing an object, given local image measurements M. (Not quite this clean.)
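
A minimal numeric sketch of this Bayes-rule step (toy likelihoods and a uniform prior, not numbers from the paper):

    import numpy as np

    likelihoods = np.array([0.020, 0.005, 0.001])  # p(M | o_n) for 3 toy objects
    priors = np.array([1 / 3, 1 / 3, 1 / 3])       # p(o_n), uniform here

    posterior = likelihoods * priors               # numerator of Bayes' rule
    posterior /= posterior.sum()                   # divide by p(M) = sum over objects
    print(posterior)                               # -> [0.769 0.192 0.038]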

6 History of image-based object recognition
Two major genres:
1. Histogram-based approaches.
2. Comparison of local image features.

7 Histogramming approaches
Object recognition by color histograms (Swain & Ballard, IJCV 1991):
– Robust to changes in orientation and scale.
– Brittle against lighting changes (dependency on color).
– Many classes of objects are not distinguishable by color distribution alone.
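
For concreteness, a hedged sketch of color-histogram matching in the Swain & Ballard style, using histogram intersection (the function names and bin count are my own choices, not theirs):

    import numpy as np

    def color_histogram(image, bins=8):
        """Normalized 3-D RGB histogram; `image` is an HxWx3 uint8 array."""
        hist, _ = np.histogramdd(image.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
        return hist / hist.sum()

    def intersection(h1, h2):
        """Histogram intersection: 1.0 for identical distributions."""
        return np.minimum(h1, h2).sum()

    model = color_histogram(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
    query = color_histogram(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
    print(intersection(model, query))   # similarity score in [0, 1]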

8 Histogramming approaches
Combat color-brittleness using (quasi-)invariants of color histograms:
– Eigenvalues of matrices of moments of color histograms
– Derivatives of logs of color channels
– Comprehensive color normalization

9 Histogramming approaches Comprehensive color normalization examples:

10 Histogramming approaches Comprehensive color normalization examples:

11 Localized feature approaches
Approaches include:
– Using image interest points to index into a hashtable of known objects.
– Comparing large vectors of local filter responses.

12 Geometric Hashing
1. An interest point detector finds the same points on an object in different images. Types of interest points include corners, T-junctions, and sudden texture changes.
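
As an illustration only, here is a Harris-style corner detector, one standard choice of interest point detector (the slides do not commit to a particular one; this is the textbook formulation in numpy/scipy):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def harris(image, sigma=1.0, k=0.04):
        # Gaussian-derivative gradients.
        ix = gaussian_filter(image, sigma, order=(0, 1))
        iy = gaussian_filter(image, sigma, order=(1, 0))
        # Smoothed entries of the local second-moment matrix.
        sxx = gaussian_filter(ix * ix, 2 * sigma)
        syy = gaussian_filter(iy * iy, 2 * sigma)
        sxy = gaussian_filter(ix * iy, 2 * sigma)
        det = sxx * syy - sxy ** 2
        trace = sxx + syy
        return det - k * trace ** 2    # high response at corner-like points

    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0              # a bright square has 4 corners
    resp = harris(img)
    print(np.unravel_index(resp.argmax(), resp.shape))  # lands near a corner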

13 Geometric Hashing From Schmid, Mohr, Bauckhage, Comparing and Evaluating Interest Points, ICCV 98

14 Geometric Hashing From Schmid, Mohr, Bauckhage, Comparing and Evaluating Interest Points, ICCV 98

15 Geometric Hashing
2. Store points in an affine-transform-invariant representation.
3. Store all possible triplets of points as keys in a hashtable.

16 Geometric Hashing
4. For object recognition, find all triplets of interest points in an image, look for matches in the hashtable, and accumulate votes for the correct object.
Hashtable approaches support multiple object recognition within the same image.
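
A toy sketch of the hashing-and-voting pipeline (2-D points and a naive quantized key standing in for the real affine-invariant triplet encoding; all names hypothetical):

    from collections import defaultdict
    from itertools import combinations

    def triplet_key(p1, p2, p3, q=0.25):
        """Quantized, order-normalized key for a point triplet."""
        pts = sorted([p1, p2, p3])
        x0, y0 = pts[0]
        return tuple((round((x - x0) / q), round((y - y0) / q)) for x, y in pts[1:])

    # Training: hash every triplet of model points to the object's label.
    table = defaultdict(set)
    models = {"mug": [(0, 0), (1, 0), (0, 1), (1, 1)],
              "book": [(0, 0), (2, 0), (0, 3)]}
    for name, points in models.items():
        for tri in combinations(points, 3):
            table[triplet_key(*tri)].add(name)

    # Recognition: every scene triplet votes for the objects under its key.
    votes = defaultdict(int)
    scene = [(0, 0), (1, 0), (0, 1), (5, 5)]
    for tri in combinations(scene, 3):
        for name in table.get(triplet_key(*tri), ()):
            votes[name] += 1
    print(max(votes, key=votes.get) if votes else "no match")   # -> mug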

17 Geometric hashing weaknesses Dependent on the consistency of the interest point detector used. From Schmid, Mohr, Bauckhage, Comparing and Evaluating Interest Points, ICCV 98

18 Geometric hashing weaknesses
Shoddy repeatability necessitates lots of points. Lots of points, combined with noise, leads to lots of false positives.

19 Vectors of filter responses
Typically use vectors of oriented filters at fixed grid points, or at interest points.
Pros:
– Very robust to noise.
Cons:
– A fixed grid needs a large representation, and a large grid is sensitive to occlusion.
– If using an interest point detector instead, the detector must be consistent over a variety of scenes.
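
A sketch of building such response vectors on a fixed grid, using Gaussian derivative filters as one plausible filter bank (the exact filters and grid spacing here are my assumptions):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def response_vectors(image, step=8, sigmas=(1.0, 2.0)):
        """Stack x/y derivative responses at several scales, sampled on a grid."""
        channels = []
        for s in sigmas:
            channels.append(gaussian_filter(image, sigma=s, order=(0, 1)))  # d/dx
            channels.append(gaussian_filter(image, sigma=s, order=(1, 0)))  # d/dy
        stack = np.stack(channels, axis=-1)    # H x W x (2 * len(sigmas))
        return stack[::step, ::step].reshape(-1, stack.shape[-1])

    vecs = response_vectors(np.random.rand(64, 64))
    print(vecs.shape)                          # (64, 4): one 4-vector per grid point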

20 Also: eigenpictures
Calculate the eigenpictures of a set of images of objects to be recognized.
Pros:
– Efficient representation of images by their eigenpicture coefficients (fast searches).
Cons:
– Images must be pre-segmented.
– Eigenpictures are not local (sensitive to occlusion).
– Translation and image-plane rotation must be represented in the eigenpictures.
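
A minimal eigenpicture (PCA) sketch in plain numpy, with random stand-in images:

    import numpy as np

    images = np.random.rand(20, 32 * 32)        # 20 pre-segmented, vectorized images
    mean = images.mean(axis=0)
    U, S, Vt = np.linalg.svd(images - mean, full_matrices=False)

    k = 5                                       # keep the top-5 eigenpictures
    eigenpictures = Vt[:k]                      # each row reshapes to a 32x32 image
    coeffs = (images - mean) @ eigenpictures.T  # compact per-image representation
    print(coeffs.shape)                         # (20, 5): small vectors, fast search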

21 This paper
Uses vectors of filter responses, with probabilistic object recognition:
$p(o_n \mid M) = \frac{p(M \mid o_n)\, p(o_n)}{p(M)}$ (Bayes' rule), where the likelihood $p(M \mid o_n)$ is learned from training images and M uses scene-invariant measurements.

22 Wins of this paper
Uses hashtables for multiple object recognition. Unlike geometric hashing, it doesn't depend on point correspondence between images.
– Uses location-unspecific filter responses, not points.
– Inherits the noise robustness of filter-response methods.

23 Wins of this paper
Uses local filter responses:
– Robust to occlusion compared to global methods (e.g. eigenpictures or filter grids).
Probabilistic matching:
– Theoretically cleaner than voting.
– Combined with local filter responses, allows for localization of detected objects.

24 Details of the PDF
What degrees of freedom are there in the other parameters of $p(M \mid o_n, R, T, S, L, I)$?
– $o_n$: Object
– R: Rotation (3 DOF)
– T: Translation (3 DOF)
– S: Scene (occlusions, background)
– L: Lighting
– I: Imaging (noise, pixelation/blur)

25 $p(M \mid o_n, R, T, S, L, I)$
Way too many parameters to get a reliable estimate from even a large image library: the number of examples needed is exponential in the number of dimensions of the PDF.
Solution: choose measurements M that are invariant with respect to as many parameters as possible (except $o_n$).

26 Techniques for invariance
– Imaging (noise): see Schiele's thesis.
– Lighting: apply an energy-normalization technique to the filter outputs (see the sketch after this list).
– Scene: probabilistic object recognition + local image measurements gives the best estimate using the visible portion of the object.
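
One plausible reading of the energy-normalization step: scale each filter-response vector to unit energy. This is my guess at the flavor of the technique, not the paper's exact formula:

    import numpy as np

    def energy_normalize(responses, eps=1e-8):
        """responses: (N, D) filter outputs; scale each vector to unit energy."""
        norms = np.linalg.norm(responses, axis=1, keepdims=True)
        return responses / (norms + eps)

    r = np.random.rand(4, 6)
    print(np.linalg.norm(energy_normalize(r), axis=1))   # all ~1.0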

27 Techniques for invariance
Translation:
– Tx, Ty (image-plane translation) are ignored for non-localizing recognition.
– Tz is equivalent to scale. For known scales, compensate by scaling the filters' regions of support.

28 Techniques for invariance Fairly robust to unknown scale:

29 Techniques for invariance
Rotation:
– Rz (rotation in the image plane): filters invariant to image-plane rotation may be used.
– Rx, Ry must remain in the PDF; it is impossible to have viewpoint-invariant descriptors in the general case.

30 New PDF
$p(M \mid o_n, R_x, R_y, T_z)$: 4 parameters. A large number of training examples is still needed, but it is feasible.
Example: the algorithm has been successful after training with 108 images per object (108 = 18 orientations × 6 scales).

31 Learning & representation of the PDF
Since the goal is discrimination, overgeneralization is scarier than overfitting. They chose multidimensional histograms over parametric representations. They mention that they could have used kernel function estimates.

32 Multidimensional Histograms

33 Multidimensional Histograms
In their experiments, they use a 6-dimensional histogram:
– X and Y derivatives, at 3 different scales, with 24 buckets per axis.
– Theoretical max for # of cells: 24^6 ≈ 1.9 × 10^8. Way too many cells to be meaningfully filled by even 512 × 512 (= 262,144) pixel images.

34 Multidimensional Histograms
Somehow, by exploiting dependencies between histogram axes and applying a uniform prior bias, they get the number of calculated cells below 10^5: a factor-of-1000 reduction. Anybody know how they do this?
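
Whatever their exact scheme, sparse storage alone helps enormously: a dictionary keyed by occupied cells never allocates the 24^6 grid. A minimal sketch (the binning range and data are illustrative):

    import numpy as np
    from collections import defaultdict

    def sparse_histogram(vectors, bins=24, lo=-1.0, hi=1.0):
        """vectors: (N, 6) descriptor outputs; returns {cell tuple: count}."""
        idx = np.clip(((vectors - lo) / (hi - lo) * bins).astype(int), 0, bins - 1)
        hist = defaultdict(int)
        for cell in map(tuple, idx):
            hist[cell] += 1
        return hist

    h = sparse_histogram(np.random.uniform(-1, 1, (262144, 6)))  # one 512x512 image
    print(len(h))    # occupied cells only, far below 24**6 = 1.9e8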

35 (Single) object recognition

37 A single measurement vector $m_k$ is insufficient for recognition.

38 (Single) object recognition
A single measurement vector $m_k$ is insufficient for recognition.

39 (Single) object recognition
For K measurement vectors, assuming independence:
$p(o_n \mid m_1, \ldots, m_K) = \frac{p(o_n) \prod_k p(m_k \mid o_n)}{\sum_i p(o_i) \prod_k p(m_k \mid o_i)}$
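
A sketch of this product-of-likelihoods recognition in log space (the toy likelihood below stands in for a lookup in the learned multidimensional histogram):

    import numpy as np

    def posterior_over_objects(measurements, log_likelihood, n_objects):
        """measurements: iterable of m_k; log_likelihood(m, n) -> log p(m_k | o_n)."""
        log_p = np.zeros(n_objects)                  # uniform prior p(o_n)
        for m in measurements:
            log_p += [log_likelihood(m, n) for n in range(n_objects)]
        log_p -= log_p.max()                         # numerical stability
        p = np.exp(log_p)
        return p / p.sum()                           # normalize over objects

    ll = lambda m, n: -0.5 * (m - n) ** 2            # toy: object n likes m near n
    print(posterior_over_objects([0.9, 1.1, 1.0], ll, n_objects=3))  # peaks at 1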

40 (Single) object recognition

43 Measurement regions covering 10–20% of an object are usually sufficient for discrimination.

44 (Single) object recognition

45 Multiple object recognition We can apply the single-object detector to many small regions in the image.

46 Multiple object recognition
The algorithm is now O(NKJ):
– N = # of known objects
– K = # of measurement vectors in each region
– J = # of regions
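
A direct transcription of that triple loop (the region list, measurement vectors, and likelihood are toy stand-ins):

    import numpy as np

    def recognize_regions(regions, log_likelihood, n_objects):
        """regions: J lists of K measurement vectors; returns best object per region."""
        results = []
        for vecs in regions:                         # J regions
            log_p = np.zeros(n_objects)              # N objects
            for m in vecs:                           # K vectors per region
                log_p += [log_likelihood(m, n) for n in range(n_objects)]
            results.append(int(log_p.argmax()))
        return results

    ll = lambda m, n: -0.5 * (m - n) ** 2
    regions = [[0.1, -0.1], [1.9, 2.1], [1.0, 1.05]]
    print(recognize_regions(regions, ll, n_objects=3))   # -> [0, 2, 1]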

47 Multiple object recognition

59 One drawback: for a given image, the algorithm calculates a probability for every object it knows of, and lists the objects in its library in decreasing order of probability. One needs to know beforehand the number of objects in a test image to know where to stop reading the list.

60 Failure example

65 Unfamiliar clutter

69 Object localization
Bite the dimensionality bullet and add an object position variable to the PDF: $p(o_n, x \mid m_1, \ldots, m_K)$.

70 Object localization
Stop assuming independence of the $m_k$'s, to account for structural dependencies:

71 Tradeoff between recognition and localization, depending on region size.

72 Object localization
Hierarchical discrimination with coarse-to-fine region size refinement:
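
A toy sketch of coarse-to-fine refinement: score coarse regions, then recursively split only the best quadrant (the scoring function is a hypothetical stand-in for the region posterior):

    import numpy as np

    def refine(score, x0, x1, y0, y1, depth=3):
        """Recursively keep the best-scoring quadrant of [x0,x1] x [y0,y1]."""
        for _ in range(depth):
            xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
            quads = [(x0, xm, y0, ym), (xm, x1, y0, ym),
                     (x0, xm, ym, y1), (xm, x1, ym, y1)]
            x0, x1, y0, y1 = max(quads, key=lambda q: score(*q))
        return (x0 + x1) / 2, (y0 + y1) / 2

    # Hypothetical score: regions nearer the true object at (70, 30) score higher.
    target = np.array([70.0, 30.0])
    score = lambda a, b, c, d: -np.linalg.norm(
        np.array([(a + b) / 2, (c + d) / 2]) - target)
    print(refine(score, 0, 100, 0, 100))     # converges toward (70, 30)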

