Presentation transcript:
Multiple Instance Real Boosting with Aggregation Functions
Hossein Hajimirsadeghi and Greg Mori
School of Computing Science, Simon Fraser University
International Conference on Pattern Recognition, November 14, 2012

Multiple Instance Learning
– Traditional supervised learning gets instance/label pairs.
– Multiple Instance Learning (MIL) gets bag-of-instances/label pairs: a kind of weak learning to handle ambiguity in the training data.
– Standard definitions:
  - Positive bag: at least one of the instances is positive.
  - Negative bag: all the instances are negative.
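To make the standard MIL assumption concrete, here is a tiny illustrative Python snippet (ours, not from the talk); bag_label is a hypothetical helper name:

    # Standard MIL assumption: a bag is positive iff at least one instance is positive.
    def bag_label(instance_labels):
        return +1 if any(l == +1 for l in instance_labels) else -1

    print(bag_label([-1, -1, +1]))  # +1: positive bag
    print(bag_label([-1, -1, -1]))  # -1: negative bag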

Applications of MIL
– Image categorization [e.g., Chen et al., IEEE-TPAMI 2006]
– Content-based image retrieval [e.g., Li et al., ICCV 2011]

Applications of MIL
– Text categorization [e.g., Andrews et al., NIPS 2002]
– Object tracking [e.g., Babenko et al., IEEE-TPAMI 2011]

Problem & Objective
– The information "at least one of the instances is positive" is very weak and ambiguous: there are MIL datasets where most instances in the positive bags are positive.
– We aim to mine the different levels of ambiguity in the data, for example: a few instances are positive, some instances are positive, many instances are positive, most instances are positive, ...

Approach
– Use the ideas of boosting:
  - Find a bag-level classifier by maximizing the expected log-likelihood of the training bags.
  - Find an instance-level strong classifier as a combination of weak classifiers, as in the RealBoost algorithm (Friedman et al., 2000), modified by information from the bag-level classifier.
– Use aggregation functions with different degrees of or-ness:
  - Aggregate the instance probabilities to define the probability that a bag is positive.

Ordered Weighted Averaging (OWA)
OWA is an aggregation function (Yager, IEEE-TSMC, 1988). Given weights $w_1, \dots, w_n$ with $w_j \ge 0$ and $\sum_j w_j = 1$:
$\mathrm{OWA}_w(a_1, \dots, a_n) = \sum_{j=1}^{n} w_j\, a_{\sigma(j)}$, where $a_{\sigma(1)} \ge a_{\sigma(2)} \ge \dots \ge a_{\sigma(n)}$ are the arguments sorted in descending order.

OWA: Example
1) Sort the values in descending order: 0.9, 0.6, 0.5, 0.1.
2) Compute the weighted sum. With uniform weights $w = (0.25, 0.25, 0.25, 0.25)$, OWA reduces to the mean: $0.25(0.9 + 0.6 + 0.5 + 0.1) = 0.525$.
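As a rough illustration (not code from the paper), the two-step OWA computation can be sketched in Python; the helper name owa is ours, and the input order is arbitrary since OWA sorts its arguments:

    import numpy as np

    def owa(values, weights):
        # Ordered Weighted Averaging: sort descending, then take the weighted sum.
        v = np.sort(np.asarray(values, dtype=float))[::-1]   # largest first
        w = np.asarray(weights, dtype=float)                 # must sum to 1
        return float(np.dot(w, v))

    # The slide's example: uniform weights reduce OWA to the mean.
    print(owa([0.5, 0.9, 0.1, 0.6], [0.25, 0.25, 0.25, 0.25]))  # 0.525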

OWA: Linguistic Quantifiers
Regular Increasing Monotonic (RIM) quantifiers: All, Many, Half, Some, At Least One, ...
A RIM quantifier is a non-decreasing function $Q : [0,1] \to [0,1]$ with $Q(0) = 0$ and $Q(1) = 1$; it generates the OWA weights via $w_j = Q(j/n) - Q((j-1)/n)$.

OWA: RIM Quantifiers
RIM quantifier: All. $Q(r) = 0$ for $r < 1$ and $Q(1) = 1$, so all the weight goes to the smallest argument and OWA reduces to the min function.

OWA: RIM Quantifiers
RIM quantifier: At Least One. $Q(0) = 0$ and $Q(r) = 1$ for $r > 0$, so all the weight goes to the largest argument and OWA reduces to the max function.

OWA: RIM Quantifiers
RIM quantifier: At Least Some. Gives higher weights to the largest arguments, so some high values are enough to make the result high.

OWA: RIM Quantifiers
RIM quantifier: Many. Gives lower weights to the largest arguments, so many arguments should have high values to make the result high.

OWA: Linguistic Quantifiers
Degree of or-ness per linguistic quantifier (or-ness decreases from the max function toward the min function):
– At least one of them (max function): or-ness close to 1
– Few of them
– Some of them
– Half of them: or-ness 0.5
– Many of them
– Most of them
– All of them (min function): or-ness 0.001 (close to 0)
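The mapping from a RIM quantifier to OWA weights and a degree of or-ness can be sketched as below. The power family $Q(r) = r^p$ and the exponents chosen here are illustrative assumptions, not the paper's parameterization; rim_weights and orness are our names:

    import numpy as np

    def rim_weights(n, p):
        # OWA weights from the RIM quantifier Q(r) = r**p:
        #   w_j = Q(j/n) - Q((j-1)/n), j = 1..n
        j = np.arange(1, n + 1)
        return (j / n) ** p - ((j - 1) / n) ** p

    def orness(w):
        # Yager's degree of or-ness: 1 for max, 0.5 for the mean, 0 for min.
        n = len(w)
        j = np.arange(1, n + 1)
        return float(np.sum((n - j) * w) / (n - 1))

    for name, p in [("at-least-one-like", 0.05), ("some", 0.5),
                    ("half", 1.0), ("many", 2.0), ("all-like", 20.0)]:
        print(f"{name:18s} or-ness = {orness(rim_weights(4, p)):.3f}")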

MIRealBoost
Pipeline: Training Bags → Instance Classifier → Instance Probabilities → aggregation → Bag Probabilities → Bag Classifier.

MIRealBoost
MIL training input: labeled bags $\{(X_i, y_i)\}_{i=1}^{N}$, where $X_i = \{x_{i1}, \dots, x_{im_i}\}$ is a bag of instances and $y_i \in \{-1, +1\}$ is its label.
Objective: find a bag classifier that predicts the label of an unseen bag.

MIRealBoost: Learning the Bag Classifier
Objective: maximize the expected binomial log-likelihood of the training bags,
$\mathcal{L} = \sum_i \left[ t_i \log p_i + (1 - t_i) \log(1 - p_i) \right], \quad t_i = \tfrac{1 + y_i}{2},$
where $p_i$ is the estimated probability that bag $X_i$ is positive. Its maximizer is the half log-odds $F(X) = \tfrac{1}{2} \log \tfrac{p(X)}{1 - p(X)}$ (Friedman et al., 2000).
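A minimal sketch of this objective in code, assuming bag labels in {-1, +1}; bag_log_likelihood is our name:

    import numpy as np

    def bag_log_likelihood(p_bags, y_bags):
        # Binomial log-likelihood: sum_i t_i log p_i + (1 - t_i) log(1 - p_i)
        p = np.clip(np.asarray(p_bags, dtype=float), 1e-9, 1 - 1e-9)
        t = (1 + np.asarray(y_bags)) / 2          # map {-1,+1} -> {0,1}
        return float(np.sum(t * np.log(p) + (1 - t) * np.log(1 - p)))

    print(bag_log_likelihood([0.9, 0.2], [+1, -1]))  # closer to 0 is better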

MIRealBoost: Estimating Bag Probabilities
1) Estimate the probability $p_{ij}$ of each instance $x_{ij}$ being positive.
2) Aggregate the instance probabilities into a bag probability $p_i$.
Aggregation functions:
– Noisy-OR: $p_i = 1 - \prod_j (1 - p_{ij})$
– OWA: $p_i = \mathrm{OWA}_w(p_{i1}, \dots, p_{im_i})$
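Both aggregation choices are easy to sketch, assuming the instance probabilities of a bag are already estimated; noisy_or and owa_bag_prob are our hypothetical helper names, and the quantifier exponent is an illustrative assumption:

    import numpy as np

    def noisy_or(p_inst):
        # Noisy-OR bag probability: 1 - prod_j (1 - p_ij)
        return float(1.0 - np.prod(1.0 - np.asarray(p_inst, dtype=float)))

    def owa_bag_prob(p_inst, p_exp=0.5):
        # OWA bag probability with RIM-quantifier weights from Q(r) = r**p_exp
        v = np.sort(np.asarray(p_inst, dtype=float))[::-1]   # largest first
        j = np.arange(1, len(v) + 1)
        w = (j / len(v)) ** p_exp - ((j - 1) / len(v)) ** p_exp
        return float(np.dot(w, v))

    p_inst = [0.9, 0.2, 0.1]             # instance probabilities in one bag
    print(noisy_or(p_inst))              # 0.928
    print(owa_bag_prob(p_inst, 0.5))     # leans toward the largest values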

MIRealBoost: Estimating Instance Probabilities
Estimate the instance probabilities by training a standard RealBoost classifier $F(x) = \sum_t f_t(x)$; then
$p_{ij} = \dfrac{1}{1 + e^{-2 F(x_{ij})}}.$

MIRealBoost: Learning the Instance Classifier
RealBoost classifier: $F(x) = \sum_{t=1}^{T} f_t(x)$, a sum of weak classifiers. Each round adds the weak classifier $f(x) = \tfrac{1}{2} \log \tfrac{P_w(y = +1 \mid x)}{P_w(y = -1 \mid x)}$, where $P_w$ is the class probability under the current instance weights (Friedman et al., 2000).
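The weak-learner step can be sketched as a real-valued stump in the style of Friedman et al. (2000); the details below (threshold handling, the smoothing term eps) are our simplifications, not the paper's exact procedure:

    import numpy as np

    def real_stump(feature, labels, weights, threshold, eps=1e-6):
        # Real-valued stump on a single feature: on each side of the threshold,
        # respond with f = 0.5 * log(P_w(y=+1) / P_w(y=-1)).
        side = feature > threshold
        resp = np.zeros(2)
        for s in (False, True):
            m = side == s
            wp = weights[m & (labels == +1)].sum() + eps
            wn = weights[m & (labels == -1)].sum() + eps
            resp[int(s)] = 0.5 * np.log(wp / wn)
        return resp[side.astype(int)]          # per-instance weak response

    def instance_prob(F):
        # RealBoost link: p(y = 1 | x) = 1 / (1 + exp(-2 F(x)))
        return 1.0 / (1.0 + np.exp(-2.0 * F))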

MIRealBoost: Estimating Weak Classifiers

We do not know the true instance labels. Estimate each instance label from its bag label, weighted by the bag confidence.

MIRealBoost Algorithm
For each feature k = 1, ..., K:
1) Compute the weak classifier.
2) Compute the instance probabilities.
3) Aggregate the instance probabilities to find the bag probabilities.
4) Compute the empirical log-likelihood of the bags.
Then select the feature that maximizes the empirical log-likelihood (see the implementation details below); a sketch of one such round follows.
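Putting the pieces together, one boosting round might look like the following sketch. It reuses the helpers from the earlier snippets (real_stump, instance_prob, owa_bag_prob, bag_log_likelihood), assigns instance pseudo-labels from the bag labels as on the previous slide, and is a simplified reading of the algorithm, not the paper's reference code:

    import numpy as np

    def select_feature(bags, y_bags, F_bags, w_bags, thresholds):
        # For every feature, fit a stump, update the instance probabilities,
        # aggregate them into bag probabilities, and keep the feature whose
        # bags maximize the empirical log-likelihood.
        best_ll, best_k = -np.inf, None
        n_features = bags[0].shape[1]
        for k in range(n_features):
            p_bags = []
            for X, y, F, w in zip(bags, y_bags, F_bags, w_bags):
                pseudo = np.full(len(X), y)        # bag label as instance label
                f = real_stump(X[:, k], pseudo, w, thresholds[k])
                p_bags.append(owa_bag_prob(instance_prob(F + f)))
            ll = bag_log_likelihood(p_bags, y_bags)
            if ll > best_ll:
                best_ll, best_k = ll, k
        return best_k                              # index of the selected feature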

Experiments
Popular MIL datasets:
– Image categorization: Elephant, Fox, and Tiger
– Drug activity prediction: Musk1 and Musk2

Results
MIRealBoost classification accuracy with different aggregation functions.
[Table: accuracy of MIRealBoost with each aggregation function (NOR, Max, Few, Some, Half, Many, Most, All) on Elephant, Fox, Tiger, Musk1, and Musk2.]

Results
Comparison with the MILBoost algorithm.
[Table: accuracy of MIRealBoost vs. MILBoost on Elephant, Fox, Tiger, Musk1, and Musk2.]
MILBoost results are reported from Leistner et al., ECCV 2010.

Results
Comparison with state-of-the-art MIL methods.
[Table: accuracy of MIRealBoost, MIForest, MI-Kernel, MI-SVM, mi-SVM, MILES, AW-SVM, AL-SVM, EM-DD, MIGraph, and miGraph on Elephant, Fox, Tiger, Musk1, and Musk2.]

Conclusion
– Proposed the MIRealBoost algorithm.
– Models different levels of ambiguity in the data, using OWA aggregation functions, which can realize a wide range of or-ness in aggregation.
– Experimental results showed:
  - Encoding the degree of ambiguity can improve accuracy.
  - MIRealBoost outperforms MILBoost and is comparable with state-of-the-art methods.

Thanks!
Supported by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC).

MIRealBoost: Learning the Instance Classifier (Implementation Details)
– Each weak classifier is a stump (i.e., built from only one feature).
– At each step, the best feature is selected as the one whose bag probabilities maximize the empirical log-likelihood of the bags.