Multiple Instance Learning: applications to computer vision




Presentation transcript:

Multiple Instance Learning: applications to computer vision Jianan Su

Problem Overview Images are inherently ambiguous since they can represent many different things. Natural images are difficult to classify because of variations in color, texture, and spatial properties.

Solution Strategy and Implementation Regular learning: learn a concept from labeled examples. MI learning: learn a concept from labeled bags. An image is a bag of instances. Positive bag: at least one of the instances in it is positive. Negative bag: all instances in it are negative.
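To make the bag labelling rule concrete, here is a minimal Python sketch (the function name and toy labels are illustrative, not from the slides):

```python
def bag_label(instance_labels):
    """A bag is positive if at least one instance is positive,
    and negative only if every instance is negative."""
    return int(any(instance_labels))

print(bag_label([0, 0, 1]))  # 1 -> positive bag
print(bag_label([0, 0, 0]))  # 0 -> negative bag
```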

Bag Generator A blob is a 2x2 set of pixels. Instance = [r, g, b, r1, g1, b1, r2, g2, b2, r3, g3, b3, r4, g4, b4], where r, g, b are the mean RGB values of the central blob and rk, gk, bk (k = 1..4) are the differences in mean RGB values between the central blob and each of its four neighbouring blobs (for example, r2, g2, b2 is the difference with the blob above it).
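The bag generator can be sketched in a few lines of NumPy. This is an illustrative implementation under simplifying assumptions (non-overlapping 2x2 blobs, instances built only for interior blobs so every blob has four neighbours); the function names and toy image are not from the slides:

```python
import numpy as np

def blob_means(image):
    """Mean RGB of every non-overlapping 2x2 blob in an HxWx3 image."""
    h, w, _ = image.shape
    h, w = h - h % 2, w - w % 2            # crop to even size
    blobs = image[:h, :w].reshape(h // 2, 2, w // 2, 2, 3).astype(float)
    return blobs.mean(axis=(1, 3))         # shape: (h//2, w//2, 3)

def bag_from_image(image):
    """One instance per interior blob: the blob's mean RGB plus the
    RGB differences to its four neighbours (up, down, left, right)."""
    m = blob_means(image)
    instances = []
    for i in range(1, m.shape[0] - 1):
        for j in range(1, m.shape[1] - 1):
            c = m[i, j]
            diffs = [c - m[i - 1, j], c - m[i + 1, j],
                     c - m[i, j - 1], c - m[i, j + 1]]
            instances.append(np.concatenate([c] + diffs))
    return np.array(instances)             # shape: (num_instances, 15)

# Toy usage on a random 8x8 RGB image.
bag = bag_from_image(np.random.randint(0, 256, (8, 8, 3)))
print(bag.shape)
```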

Concept Extraction (Diverse Density algorithm) Find the max-DD point $t$ by maximizing $\Pr(t \mid B_1^+,\ldots,B_n^+, B_1^-,\ldots,B_m^-)$, which (assuming a uniform prior over concept locations) is equivalent to maximizing $DD(t) = \prod_i \Pr(t \mid B_i^+)\,\prod_i \Pr(t \mid B_i^-)$, where $\Pr(t \mid B_i^+) = 1 - \prod_j \bigl(1 - \Pr(t \mid B_{ij}^+)\bigr)$, $\Pr(t \mid B_i^-) = \prod_j \bigl(1 - \Pr(t \mid B_{ij}^-)\bigr)$, and $\Pr(t \mid B_{ij}) = \exp\bigl(-\lVert B_{ij} - t\rVert^2\bigr)$. The maximization is started from every positive instance.
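Below is a hedged NumPy/SciPy sketch of the Diverse Density search. It assumes bags are arrays of instance feature vectors and uses a generic gradient-based optimizer restarted from every positive instance, as the slide describes; function names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_dd(t, pos_bags, neg_bags):
    """Negative log of the Diverse Density at candidate concept point t."""
    nll = 0.0
    for bag in pos_bags:   # Pr(t|B+) = 1 - prod_j (1 - exp(-||b_j - t||^2))
        p = 1.0 - np.prod(1.0 - np.exp(-np.sum((bag - t) ** 2, axis=1)))
        nll -= np.log(max(p, 1e-12))
    for bag in neg_bags:   # Pr(t|B-) = prod_j (1 - exp(-||b_j - t||^2))
        p = np.prod(1.0 - np.exp(-np.sum((bag - t) ** 2, axis=1)))
        nll -= np.log(max(p, 1e-12))
    return nll

def max_dd_point(pos_bags, neg_bags):
    """Restart the optimisation from every instance of every positive bag
    and keep the concept point with the highest Diverse Density."""
    best_t, best_val = None, np.inf
    for bag in pos_bags:
        for start in bag:
            res = minimize(neg_log_dd, start, args=(pos_bags, neg_bags))
            if res.fun < best_val:
                best_t, best_val = res.x, res.fun
    return best_t
```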

Classification Convert each new image into a bag of instances. Calculate the distance between each new bag and the max-DD point. The entire testing database is sorted by distance to the max-DD point.
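The slide does not spell out how a whole bag's distance to the max-DD point is measured; one choice consistent with the noisy-OR model is the distance of the bag's closest instance, which is what this illustrative sketch assumes:

```python
import numpy as np

def bag_distance(bag, t):
    """Distance from a bag to the concept point, taken as the distance
    of the bag's closest instance (the most concept-like region)."""
    return np.min(np.linalg.norm(bag - t, axis=1))

def rank_test_images(test_bags, t):
    """Sort test images by increasing distance to the max-DD point,
    so images most likely to contain the concept come first."""
    dists = [bag_distance(bag, t) for bag in test_bags]
    return np.argsort(dists)
```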

Experimental Design and Data Sets Try to learn the concept of waterfall. Randomly choose 100 images from each of the following classes: waterfall, cloud, forest, mountain, sunset. The training set consists of five positives from the waterfall class and five negatives from the other classes. The small testing set consists of the remaining 90 images from each of the five classes.
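A small sketch of assembling that split, assuming a hypothetical paths_by_class dictionary that maps each class name to its 100 image paths:

```python
import random

def make_splits(paths_by_class, positive_class="waterfall",
                n_pos=5, n_neg=5, seed=0):
    """Training/test split from the slide: five waterfall positives,
    five negatives drawn from the other classes, the rest for testing."""
    rng = random.Random(seed)
    train_pos = rng.sample(paths_by_class[positive_class], n_pos)
    other = [p for cls, paths in paths_by_class.items()
             if cls != positive_class for p in paths]
    train_neg = rng.sample(other, n_neg)
    used = set(train_pos + train_neg)
    test = [p for paths in paths_by_class.values()
            for p in paths if p not in used]
    return train_pos, train_neg, test
```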

Results Classification accuracy is the ratio of the number of correctly classified images to the number of testing images.

Optimization Scaling: assign a weight to each feature, so that more informative features count more in the distance. Hypothesis classes.
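Scaling is typically folded into the instance probability as a weighted distance, with the weight vector optimized jointly with the concept point. A hedged sketch of that weighted form (one common formulation, not necessarily the exact one used here):

```python
import numpy as np

def instance_prob(instance, t, s):
    """Pr(t | instance) with per-feature scaling weights s:
    exp(-sum_k s_k^2 * (x_k - t_k)^2).  A large s_k makes feature k
    matter more; s_k near zero effectively ignores that feature."""
    return np.exp(-np.sum((s ** 2) * (instance - t) ** 2))

# The DD search then optimizes the concatenated vector [t, s]
# instead of t alone, e.g. starting from s = np.ones(dim).
```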

Thank You