1
Recognition by Probabilistic Hypothesis Construction
P. Moreels, M. Maire, P. Perona
California Institute of Technology
2
Background
Goal: rich features, probabilistic modeling, fast learning, efficient matching.
Prior work (diagram of related approaches, arranged by theme: efficient matching, rich features, probabilistic constellations / categories): Fischler & Elschlager 1973; Huttenlocher & Ullman 1990; v.d. Malsburg et al. '93; Burl et al. '96; Weber et al. '00; Fergus et al. '03; Lowe '99, '04.
3
Outline
- Objective: individual object recognition
- Background: D. Lowe's system, the constellation model
- Hypotheses and score
- Scheduling of matches
- Experiments: comparison with D. Lowe
4
Lowe's recognition system [Lowe '99, '04]: a test image is matched against model images from a database.
5
Constellation model Burl’96, Weber’00, Fergus’03
6
Pros and Cons
Constellation model:
+ principled detection/recognition; parameters learned from data; models clutter, occlusion, distortions
- high number of parameters (O(n²)); only 5-7 parts per model; many training examples needed; learning is expensive
Lowe's recognition system:
+ many parts (redundancy); learns from 1 image; fast
- manual tuning of parameters; rigid planar objects; sensitive to clutter
7
How to adapt the constellation model to our needs?
8
Reducing degrees of freedom
1. Common reference frame ([Lowe '99], [Huttenlocher '90])
2. Share parameters ([Schmid '97])
3. Use prior information learned on foreground and background ([Fei-Fei '03])
(graphical model variables: model m, position of model m)
9
Parameters and priors (based on [Burl '98], [Fergus '03])
Constellation model — foreground: Gaussian shape pdf, Gaussian part appearance pdf, Gaussian relative scale pdf (in log(scale)), per-part probability of detection (e.g. 0.8, 0.75, 0.9); clutter: Gaussian background appearance pdf.
Sharing parameters — foreground: Gaussian conditional shape pdf, Gaussian part appearance pdf, Gaussian relative scale pdf (in log(scale)), probability of detection (e.g. 0.8, 0.2); clutter: Gaussian background appearance pdf.
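As an illustration, here is a minimal Python sketch (my own notation, not the authors' code) of how the Gaussian terms listed above could be evaluated for one candidate part; all parameter names and values are placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def foreground_density(appearance, position, log_rel_scale, part_params):
    """Toy evaluation of the foreground terms listed above for one model part.
    part_params is assumed to hold learned Gaussian parameters: appearance
    (mean/cov), shape, i.e. position in the model reference frame (mean/cov),
    and relative scale in log space (mean/std)."""
    p_app = multivariate_normal.pdf(appearance, part_params["app_mean"], part_params["app_cov"])
    p_shape = multivariate_normal.pdf(position, part_params["pos_mean"], part_params["pos_cov"])
    p_scale = norm.pdf(log_rel_scale, part_params["scale_mean"], part_params["scale_std"])
    return p_app * p_shape * p_scale

def background_appearance_density(appearance, bg_params):
    """Clutter features are scored with the background appearance Gaussian only."""
    return multivariate_normal.pdf(appearance, bg_params["app_mean"], bg_params["app_cov"])
```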
10
Hypotheses – feature assignments. Features of the new scene (test image) are assigned to features of models from the database, yielding an interpretation of the scene.
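A minimal sketch of a data structure for such a hypothesis, assuming my own field names (not taken from the paper):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Hypothesis:
    """One interpretation of the scene: a database model plus an assignment of
    scene features to that model's features.  assignments[i] is the index of
    the model feature matched to scene feature i, or None for a 'null'
    (clutter) assignment; scene features not yet considered are simply not in
    the list."""
    model_id: int
    assignments: List[Optional[int]] = field(default_factory=list)

    def is_complete(self, n_scene_features: int) -> bool:
        # Every scene feature has either a model match or a 'null' assignment.
        return len(self.assignments) == n_scene_features
```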
11
Hypotheses – model position. Hypothesized models from the database are placed in the new scene (test image) with poses Θ1, Θ2, Θ3, where each Θ is an affine transformation.
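For concreteness, a sketch (parameter names are mine) of placing model feature positions into the scene via the affine pose Θ:

```python
import numpy as np

def apply_affine(A, t, model_points):
    """Map model feature positions into the scene frame: x_scene = A x_model + t.
    A: 2x2 matrix (rotation/scale/shear), t: length-2 translation vector,
    model_points: (N, 2) array of feature coordinates in the model frame."""
    return model_points @ A.T + np.asarray(t)
```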
12
Score of a hypothesis. A hypothesis consists of a model from the database, its position, and feature assignments. Its score is the probability of the hypothesis given the observed features (geometry + appearance); by Bayes' rule this factors into a consistency term, the hypothesis probability, and a constant.
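One way to write the Bayes-rule factorization described above (the notation is my own, not copied from the paper): O denotes the observed features, m the model, θ its position, and h the assignments.

```latex
P(m,\theta,h \mid O)
  \;=\;
  \frac{\overbrace{P(O \mid m,\theta,h)}^{\text{consistency}}
        \;\overbrace{P(m,\theta,h)}^{\text{hypothesis probability}}}
       {\underbrace{P(O)}_{\text{constant}}}
```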
13
Score of a hypothesis — components:
- consistency between observations and hypothesis (geometry + appearance, for foreground features and 'null' assignments)
- probability of the number of clutter detections
- probability of detecting the indicated model features
- prior on the pose of the given model
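A hedged sketch of how these four terms could combine into a single log-score; it reuses the density helpers from the earlier sketch (foreground_density, background_appearance_density), and the Poisson model for the clutter count, along with all names, is my assumption rather than the paper's exact formula.

```python
import math
from scipy.stats import poisson

def log_hypothesis_score(hyp, scene_features, model, pose, pose_prior, clutter_rate):
    """Illustrative combination of the four terms listed above.
    hyp.assignments[i] is the model part matched to scene feature i, or None
    for clutter; `model` bundles per-part detection probabilities (p_detect),
    per-part Gaussian parameters (part_params) and background parameters."""
    log_p = math.log(pose_prior(pose))                       # prior on the pose of the model

    n_clutter = sum(1 for a in hyp.assignments if a is None)
    log_p += poisson.logpmf(n_clutter, clutter_rate)         # number of clutter detections

    matched = {a for a in hyp.assignments if a is not None}
    for part_idx, p_det in enumerate(model.p_detect):        # detected / missed model features
        log_p += math.log(p_det if part_idx in matched else 1.0 - p_det)

    for feat, a in zip(scene_features, hyp.assignments):     # consistency: geometry + appearance
        if a is None:
            log_p += math.log(background_appearance_density(feat.appearance, model.bg_params))
        else:
            log_p += math.log(foreground_density(feat.appearance, feat.position,
                                                 feat.log_scale, model.part_params[a]))
    return log_p
```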
14
Efficient matching process
15
Scheduling – inspired by A* [Pearl '84, Grimson '87]
The search tree grows from the empty hypothesis to 1 assignment, 2 assignments, ... until every scene feature has an assignment (possibly 'null'). Each partial hypothesis is scored together with its perfect completion (an admissible heuristic used as a guide for the search), so partial hypotheses can be compared and the most promising branches are explored first.
To increase computational efficiency: at each node only a fixed number of sub-branches is searched, which forces termination.
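A minimal best-first-search sketch of the scheduling idea above (function and parameter names, and the beam width, are mine); `optimistic_score(h)` stands for the score of h completed with its "perfect completion".

```python
import heapq
import itertools

def best_first_search(empty_hypothesis, expand, optimistic_score, is_complete, beam_width=5):
    """Extend the most promising partial hypothesis first, A*-style.
    expand(h) returns the child hypotheses of h (one more assignment each);
    optimistic_score(h) is an admissible (optimistic) bound used to rank them;
    only `beam_width` children are kept per node, which forces termination."""
    counter = itertools.count()                       # tie-breaker for equal scores
    queue = [(-optimistic_score(empty_hypothesis), next(counter), empty_hypothesis)]
    while queue:
        _, _, hyp = heapq.heappop(queue)              # best bound first (min-heap of negated scores)
        if is_complete(hyp):
            return hyp                                # all scene features assigned (possibly 'null')
        children = sorted(expand(hyp), key=optimistic_score, reverse=True)
        for child in children[:beam_width]:           # fixed number of sub-branches per node
            heapq.heappush(queue, (-optimistic_score(child), next(counter), child))
    return None
```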
16
Recognition: the first match. At this stage there is no clue regarding geometry, so the first match is based on appearance alone: for each scene feature, the best and second-best matches against features of the models from the database initialize the hypotheses queue.
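A sketch of this appearance-only initialization (my own simplification, with hypothetical array shapes): each scene descriptor is compared to all model descriptors, and the best and second-best matches seed the queue.

```python
import numpy as np

def init_hypotheses_queue(scene_descriptors, model_descriptors):
    """scene_descriptors: (N, D) array, model_descriptors: (M, D) array.
    Returns (scene_idx, model_idx, distance) seeds, most similar first."""
    seeds = []
    for i, d in enumerate(scene_descriptors):
        dists = np.linalg.norm(model_descriptors - d, axis=1)   # appearance distance only
        best, second = np.argsort(dists)[:2]
        seeds.append((i, int(best), float(dists[best])))        # best match
        seeds.append((i, int(second), float(dists[second])))    # second-best match
    return sorted(seeds, key=lambda s: s[2])
```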
17
Scheduling – promising branches first. As further scene features are assigned, the hypotheses queue is updated and the most promising hypotheses are extended first.
18
Experiments
19
Toys database – models: 153 model images
20
Toys database – test images (scenes)
- 90 test images
- multiple objects or a different view of the model
21
Kitchen database – models: 100 model images
22
Kitchen database – test images
- 80 test images
- 0-9 models / test image
23
Examples (test image / identified model) for Lowe's method and for our system; Lowe's model implemented using [Lowe '97, '99, '01, '03].
24
Performance evaluation (test images hand-labeled before the experiments)
a. Object found, correct pose → detection
b. Object found, incorrect pose → false alarm
c. Wrong object found → false alarm
d. Object not found → non-detection
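The rule above, written out as a small helper (an illustration, not the authors' evaluation script):

```python
def classify_outcome(found: bool, correct_object: bool, correct_pose: bool) -> str:
    """Map one hand-labeled ground-truth object to an evaluation outcome."""
    if not found:
        return "non-detection"   # d. object not found
    if not correct_object:
        return "false alarm"     # c. wrong object found
    if not correct_pose:
        return "false alarm"     # b. object found, incorrect pose
    return "detection"           # a. object found, correct pose
```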
25
Results – Toys images
- 153 model images (database), 90 test images (scenes), 0-5 models / test image
- 80% recognition with false alarms / test set = 0.2
- lower false alarm rate than Lowe's system
26
Results – Kitchen images
- 100 training images, 80 test images, 0-9 models / test image, 254 objects to be detected
- achieves a 77% recognition rate with 0 false alarms
27
Conclusions
- Unified treatment, best of both worlds: a probabilistic interpretation of Lowe ['99, '04], and an extension of [Burl, Weber, Fergus '96-'03] to many features, many models, and one-shot learning.
- Higher performance in the comparison with Lowe ['99, '04].
- Future work: categories.