
1
Multiple Instance Real Boosting with Aggregation Functions
Hossein Hajimirsadeghi and Greg Mori
School of Computing Science, Simon Fraser University
International Conference on Pattern Recognition, November 14, 2012

2
Multiple Instance Learning
Traditional supervised learning receives instance/label pairs; Multiple Instance Learning (MIL) receives bags of instances, each bag with a single label.
MIL is a form of weak learning for handling ambiguity in the training data.
Standard definitions:
– Positive bag: at least one of its instances is positive
– Negative bag: all of its instances are negative
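The standard definitions above can be sketched as a toy check (the function name and the 0/1 label encoding are illustrative, not from the paper):

```python
# Toy sketch of the standard MIL bag-labeling assumption: a bag is
# positive iff at least one of its instances is positive.

def bag_label(instance_labels):
    """Return 1 if any instance label is positive, else 0."""
    return int(any(label == 1 for label in instance_labels))

print(bag_label([0, 0, 1, 0]))  # one positive instance -> positive bag: 1
print(bag_label([0, 0, 0]))     # all negative -> negative bag: 0
```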

3
Applications of MIL
Image categorization – [e.g., Chen et al., IEEE-TPAMI 2006]
Content-based image retrieval – [e.g., Li et al., ICCV 2011]

4
Applications of MIL
Text categorization – [e.g., Andrews et al., NIPS 2002]
Object tracking – [e.g., Babenko et al., IEEE-TPAMI 2011]

5
Problem & Objective
The information “at least one of the instances is positive” is very weak and ambiguous.
– There are MIL datasets where most instances in the positive bags are positive.
We aim to mine through different levels of ambiguity in the data:
– For example: a few instances are positive, some instances are positive, many instances are positive, most instances are positive, …

6
Approach
Using ideas from Boosting:
– Find a bag-level classifier by maximizing the expected log-likelihood of the training bags
– Find an instance-level strong classifier as a combination of weak classifiers, as in the RealBoost algorithm (Friedman et al. 2000), modified by information from the bag-level classifier
Using aggregation functions with different degrees of or-ness:
– Aggregate the instance probabilities to define the probability that a bag is positive

7
Ordered Weighted Averaging (OWA)
OWA is an aggregation function (Yager, IEEE-TSMC, 1988). Given a weight vector w with w_j ≥ 0 and Σ_j w_j = 1:
OWA_w(x_1, …, x_n) = Σ_{j=1}^{n} w_j · x_(j),
where x_(j) denotes the j-th largest of the arguments.

8
OWA: Example
1) Sort the values in descending order: 0.9, 0.6, 0.5, 0.1
2) Compute the weighted sum of the sorted values with the OWA weights
Example – uniform weights (the mean): OWA = (0.9 + 0.6 + 0.5 + 0.1)/4 = 0.525
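A minimal sketch of the two steps above (the weight vector is an assumed input that sums to 1):

```python
# Minimal OWA sketch: sort the arguments in descending order, then take
# a weighted sum with a fixed weight vector.

def owa(values, weights):
    ordered = sorted(values, reverse=True)               # step 1: sort descending
    return sum(w * v for w, v in zip(weights, ordered))  # step 2: weighted sum

vals = [0.9, 0.6, 0.5, 0.1]
print(owa(vals, [0.25] * 4))            # uniform weights -> the mean, 0.525
print(owa(vals, [1.0, 0.0, 0.0, 0.0]))  # all weight on the largest -> max, 0.9
```

Putting all the weight on the first (largest) or last (smallest) position recovers the max and min as special cases.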

9
OWA: Linguistic Quantifiers
OWA weights can be generated from Regular Increasing Monotonic (RIM) quantifiers: non-decreasing Q: [0,1] → [0,1] with Q(0) = 0 and Q(1) = 1, giving weights w_i = Q(i/n) − Q((i−1)/n).
– Examples: All, Many, Half, Some, At Least One, …

10
OWA: RIM Quantifiers
RIM quantifier: All — all OWA weight goes to the smallest argument, so the OWA reduces to the min.
[Plot of Q over the unit square]

11
OWA: RIM Quantifiers
RIM quantifier: At Least One — all OWA weight goes to the largest argument, so the OWA reduces to the max.
[Plot of Q over the unit square]

12
OWA: RIM Quantifiers
RIM quantifier: At Least Some — gives higher weights to the largest arguments, so some high values are enough to make the result high.
[Plot of Q over the unit square]

13
OWA: RIM Quantifiers
RIM quantifier: Many — gives lower weights to the largest arguments, so many arguments must have high values to make the result high.
[Plot of Q over the unit square]

14
OWA: Linguistic Quantifiers

Linguistic quantifier                  Parameter   Degree of orness
At least one of them (max function)        –            0.999
Few of them                               0.1           0.909
Some of them                              0.5           0.667
Half of them                              1             0.5
Many of them                              2             0.333
Most of them                              10            0.091
All of them (min function)                 –            0.001
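Assuming the quantifiers in the table come from the family Q(r) = r^α (a common RIM choice, with α the middle column — an assumption consistent with the listed orness values), the OWA weights and the degree of orness can be sketched as:

```python
# Sketch: OWA weights from a RIM quantifier Q(r) = r**alpha, and Yager's
# degree of orness. For this family, orness approaches 1/(1 + alpha) as
# the number of arguments n grows, matching the table's pairs.

def rim_weights(alpha, n):
    Q = lambda r: r ** alpha
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def orness(weights):
    n = len(weights)
    return sum((n - i) * w for i, w in enumerate(weights, 1)) / (n - 1)

for alpha in (0.5, 1, 2, 10):
    print(alpha, orness(rim_weights(alpha, 1000)))  # ~ 1 / (1 + alpha)
```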

15
MIRealBoost
[Diagram: Training Bags → Instance Classifier → Instance Probabilities → (aggregate) → Bag Probabilities → Bag Classifier]

16
MIRealBoost
MIL training input: labeled bags {(X_i, y_i)}, i = 1, …, N, where X_i = {x_i1, …, x_im_i} is a bag of instances and y_i ∈ {0, 1} is its label.
Objective: find a bag classifier that estimates p_i = p(y_i = 1 | X_i).

17
MIRealBoost: Learning Bag Classifier
Objective: maximize the expected binomial log-likelihood of the training bags:
L = Σ_i [ y_i log p_i + (1 − y_i) log(1 − p_i) ], where p_i = p(y_i = 1 | X_i).

18
MIRealBoost: Estimate Bag Probabilities
1) Estimate the probability p_ij of each instance being positive.
2) Aggregate the instance probabilities into a bag probability.
Aggregation functions:
– Noisy-OR: p_i = 1 − Π_j (1 − p_ij)
– OWA over the instance probabilities
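A sketch of the two aggregation rules named above (function and variable names are illustrative; p_j is the estimated probability that instance j of the bag is positive):

```python
# Sketch of the two bag-probability aggregators: noisy-OR computes
# p(bag) = 1 - prod_j (1 - p_j); the OWA rule sorts the instance
# probabilities and takes a weighted sum.

def noisy_or(instance_probs):
    prod = 1.0
    for p in instance_probs:
        prod *= (1.0 - p)
    return 1.0 - prod

def owa_aggregate(instance_probs, weights):
    ordered = sorted(instance_probs, reverse=True)
    return sum(w * p for w, p in zip(weights, ordered))

probs = [0.9, 0.1, 0.2]
print(noisy_or(probs))                        # 1 - (0.1)(0.9)(0.8) = 0.928
print(owa_aggregate(probs, [1.0, 0.0, 0.0]))  # max-like OWA -> 0.9
```

Noisy-OR is close in spirit to "at least one"; choosing OWA weights with lower orness moves the rule toward "many" or "all".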

19
MIRealBoost: Estimate Instance Probabilities
Estimate the instance probabilities by training a standard RealBoost classifier F(x); then
p_ij = 1 / (1 + exp(−2 F(x_ij))).
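Assuming the standard RealBoost logistic link (Friedman et al. 2000), the instance probability can be sketched as:

```python
import math

# Sketch of the RealBoost score-to-probability link: the strong score F(x)
# is the sum of the weak-classifier outputs, and p(y=1 | x) = 1/(1 + e^(-2F)).
# The function name is illustrative; the link is Friedman et al.'s (2000).

def instance_probability(weak_outputs):
    F = sum(weak_outputs)                    # strong classifier score F(x)
    return 1.0 / (1.0 + math.exp(-2.0 * F))

print(instance_probability([0.0]))        # F = 0 -> p = 0.5
print(instance_probability([0.4, 0.3]))   # F = 0.7 -> p ≈ 0.80
```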

20
MIRealBoost: Learning Instance Classifier
RealBoost classifier: F(x) = Σ_t f_t(x), a sum of weak classifiers, where each weak classifier is f_t(x) = ½ log [ p_w(y = 1 | x) / p_w(y = 0 | x) ] under the current instance weights.

21
MIRealBoost: Estimate Weak Classifiers

22
We do not know the true instance labels. Estimate each instance label from its bag label, weighted by the bag's confidence.

23
MIRealBoost Algorithm
For each feature k = 1, …, K:
1) Compute the weak classifier for feature k
2) Compute the instance probabilities
3) Aggregate the instance probabilities to obtain the bag probabilities
4) Compute the empirical log-likelihood of the bags
Select the feature whose weak classifier maximizes the empirical log-likelihood.
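The steps above can be sketched as one boosting round. Everything here is a hedged skeleton: the helper names (fit_stump, aggregate), the data layout, and the score bookkeeping are assumptions, not the authors' code.

```python
import math

# Hedged skeleton of one MIRealBoost round, following the steps above.
# bags: list of bags, each a list of feature vectors; labels in {0, 1};
# scores: current strong-classifier score per instance; fit_stump and
# aggregate are hypothetical stand-ins for the paper's components.

def log_likelihood(bag_probs, labels, eps=1e-12):
    # empirical binomial log-likelihood of the bags
    return sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1.0 - p, eps))
               for p, y in zip(bag_probs, labels))

def boosting_round(bags, labels, features, scores, fit_stump, aggregate):
    best_ll, best_stump = None, None
    for k in features:                         # 1) a candidate stump per feature
        stump = fit_stump(bags, labels, k)
        bag_probs = []
        for bag, F in zip(bags, scores):       # 2) instance probabilities
            inst = [1.0 / (1.0 + math.exp(-2.0 * (F[j] + stump(x))))
                    for j, x in enumerate(bag)]
            bag_probs.append(aggregate(inst))  # 3) aggregate to a bag probability
        ll = log_likelihood(bag_probs, labels) # 4) empirical log-likelihood
        if best_ll is None or ll > best_ll:    # keep the best-fitting stump
            best_ll, best_stump = ll, stump
    return best_stump
```

The selected stump would then be added to the strong classifier and the instance scores updated before the next round.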

24
Experiments
Popular MIL datasets:
– Image categorization: Elephant, Fox, and Tiger
– Drug activity prediction: Musk1 and Musk2

25
Results
MIRealBoost classification accuracy (%) with different aggregation functions

Agg     Elephant   Fox   Tiger   Musk1   Musk2
NOR        83       63     72      85      74
Max        77       58     68      85      74
Few        75       58     70      83      72
Some       75       57     73      85      75
Half       72       54     70      90      77
Many       67       52     67      91      75
Most       54       50     51      83      69
All        50       50     50      84      69

26
Results
Comparison with the MILBoost algorithm

Method        Elephant   Fox   Tiger   Musk1   Musk2
MIRealBoost      83       63     73      91      77
MILBoost         73       58     56      71      61

MILBoost results are from Leistner et al., ECCV 2010.

27
Results
Comparison with state-of-the-art MIL methods

Method        Elephant   Fox   Tiger   Musk1   Musk2
MIRealBoost      83       63     73      91      77
MIForest         84       64     82      85      82
MI-Kernel        84       60     84      88      89
MI-SVM           81       59     84      78      84
mi-SVM           82       58     79      87      84
MILES            81       62     80      88      83
AW-SVM           82       64     83      86      84
AL-SVM           79       63     78      86      83
EM-DD            78       56     72      85       –
MIGraph          85       61     82      90       –
miGraph          87       62     86      90       –

28
Conclusion
Proposed the MIRealBoost algorithm, which models different levels of ambiguity in the data
– Uses OWA aggregation functions, which can realize a wide range of orness in aggregation
Experimental results showed:
– Encoding the degree of ambiguity can improve accuracy
– MIRealBoost outperforms MILBoost and is comparable with state-of-the-art methods

29
Thanks!
This work was supported by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC).

30
MIRealBoost: Learning Instance Classifier
Implementation details:
– Each weak classifier is a stump (i.e., built from a single feature).
– At each step, the selected feature is the one whose weak classifier leads to bag probabilities that maximize the empirical log-likelihood of the bags.
