Robust Moving Object Detection & Categorization using self-improving classifiers
Omar Javed, Saad Ali & Mubarak Shah

Moving Object Detection & Categorization
Goal: Detect moving objects in images and classify them into categories, e.g., humans or vehicles.
Motivation: Most monitoring and video understanding systems require knowledge of the location and type of objects in the scene.

Object Classification: Major Approaches
Supervised classifiers: AdaBoost (Viola & Jones), Naive Bayes (Schneiderman et al.), SVMs (Papageorgiou & Poggio).
Limitations:
- A large number of training examples is required, e.g., negative examples for face detection (Zhang et al.); a very large number of examples was used by Viola & Jones.
- Parameters are fixed after training: once deployed, they cannot be tuned for best performance in a particular scenario.

Object Classification: Major Approaches
Semi-supervised classifiers: co-training (Levin et al.).
Limitations:
- A large amount of training data still has to be collected, although it does not need to be labeled.
- Training is offline, i.e., parameters are fixed during the testing phase.

Properties of an “Ideal” Object Detection System
- Learns both background and object models online, with no prior training.
- Adapts quickly to changing background and object properties.

Overview of the Proposed Approach
In a single boosted framework:
- Obtain regions of interest (ROIs) from a background subtraction approach.
- Extract motion and appearance features from each ROI.
- Use the separate views of the data (motion features and appearance features) for online co-training: if one set of features confidently predicts the label of an object, use that label to update the base classifiers and the boosting parameters online.
- Use the combined view (both feature sets) for classification decisions.

Properties of the Proposed Object Detection Method
- The background model is learned online.
- Object models are learned offline from a small number of training examples.
- The object classifier parameters are continuously updated online using co-training to improve detection rates.

Proposed Object Detection Method
[Block diagram: background models (color classifier and edge classifier) produce ROIs; appearance and motion feature extraction feed the base classifiers (appearance) and base classifiers (motion); a boosted classifier produces the classification output; a confident prediction by one feature set triggers the co-training decision, which updates the weak learners and the boosted parameters.]

Background Detection
First level: per-pixel Mixture of Gaussians color models.
Second level: gradient magnitude and gradient direction models, a gradient boundary check, and feedback to the first level.
[Figures: current image from the video, output of the first level, output of the second level.]
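A minimal sketch of this two-level idea, assuming OpenCV: the built-in MOG2 subtractor stands in for the per-pixel Mixture-of-Gaussians first level, and a simple gradient-magnitude check stands in for the second level. The thresholds, the feedback step, and the input path "video.avi" are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# First level: per-pixel Mixture-of-Gaussians colour model. OpenCV's MOG2
# subtractor is used as a stand-in for the per-pixel MoG on the slide.
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=False)

cap = cv2.VideoCapture("video.avi")  # placeholder input path
while True:
    ok, frame = cap.read()
    if not ok:
        break

    fg_mask = mog.apply(frame)  # 255 = foreground candidate

    # Crude second level: keep foreground pixels that lie near a strong
    # gradient-magnitude boundary (a rough proxy for the gradient
    # magnitude/direction model with feedback described on the slide).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edges = (cv2.magnitude(gx, gy) > 50).astype(np.uint8) * 255
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))
    refined = cv2.bitwise_and(fg_mask, edges)

    # Connected components of the refined mask become the ROIs that are
    # passed on to the classification stage.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(refined)
    rois = [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 200]
cap.release()
```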

Features for Object Classification
Base classifiers are learned from global PCA coefficients of appearance and motion templates of image regions.
The appearance subspace is learned by performing PCA separately on a small set of labeled d-dimensional gradient-magnitude images of people and of vehicles.

Features for Object Classification
The person and vehicle appearance subspaces are represented by d x m1 and d x m2 projection matrices (S1 and S2) respectively. m1 and m2 are chosen so that the retained eigenvectors account for 99% of the variance in the respective subspaces.

Features for Object Classification
Appearance features for the base learners are obtained by projecting each training example r into the two subspaces and concatenating the resulting PCA coefficients.
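As a rough illustration of the appearance features just described, the sketch below fits two PCA subspaces (keeping 99% of the variance, as on the previous slide) and projects one example into both. Mean-centering, the use of SVD, and the random stand-in data are assumptions made so the example runs end to end.

```python
import numpy as np

def fit_subspace(examples, var_frac=0.99):
    """PCA on a stack of d-dimensional gradient-magnitude images (one per
    row), keeping enough eigenvectors to explain var_frac of the variance.
    Returns the mean image and the d x m projection matrix."""
    X = np.asarray(examples, dtype=np.float64)            # shape (n, d)
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2
    m = int(np.searchsorted(np.cumsum(var) / var.sum(), var_frac)) + 1
    return mean, Vt[:m].T                                 # d x m

def appearance_features(r, mean1, S1, mean2, S2):
    """Project one example r into the person (S1) and vehicle (S2)
    appearance subspaces and concatenate the PCA coefficients."""
    return np.concatenate([(r - mean1) @ S1, (r - mean2) @ S2])

# Illustrative usage with random stand-in data (d = 900 for 30x30 patches):
person_imgs = np.random.rand(50, 900)
vehicle_imgs = np.random.rand(50, 900)
mu1, S1 = fit_subspace(person_imgs)
mu2, S2 = fit_subspace(vehicle_imgs)
v_app = appearance_features(np.random.rand(900), mu1, S1, mu2, S2)
```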

Features for Object Classification
[Figure. Row 1: top 3 eigenvectors of the person appearance subspace. Row 2: top 3 eigenvectors of the vehicle appearance subspace.]

Features for Object Classification
To obtain motion features, person and vehicle motion subspaces (projection matrices S3 and S4, of dimensions m3 and m4) are constructed from person and vehicle motion examples respectively. Optical flow is computed using the method of Lucas and Kanade. Motion features for the base learners are obtained by projecting each training motion example o into the two subspaces and concatenating the coefficients.
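A hedged sketch of the motion-feature step: dense optical flow is computed inside an ROI and projected into the two motion subspaces. OpenCV's Farneback flow is used here only because it is readily available; the slide specifies Lucas-Kanade flow, and the 30x30 template size mirrors the experiments slide.

```python
import cv2
import numpy as np

def motion_template(prev_roi, curr_roi, size=(30, 30)):
    """Dense optical-flow magnitude inside an ROI, resized and flattened
    into a motion template. Farneback flow stands in for the Lucas-Kanade
    flow mentioned on the slide."""
    p = cv2.resize(cv2.cvtColor(prev_roi, cv2.COLOR_BGR2GRAY), size)
    c = cv2.resize(cv2.cvtColor(curr_roi, cv2.COLOR_BGR2GRAY), size)
    flow = cv2.calcOpticalFlowFarneback(p, c, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2).ravel()

def motion_features(o, mean3, S3, mean4, S4):
    """Project a motion template o into the person (S3) and vehicle (S4)
    motion subspaces, fitted with the same PCA routine as the appearance
    subspaces, and concatenate the coefficients."""
    return np.concatenate([(o - mean3) @ S3, (o - mean4) @ S4])
```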

Base Classifiers
We use the Bayes classifier as the base classifier. Let c1, c2 and c3 represent the person, vehicle and background classes. Each feature vector component vq, where q ranges over 1, ..., m1+m2+m3+m4, is used to learn a pdf for each class. Each pdf is represented by a smoothed 1D histogram.

Base Classifiers
The classification decision of the q-th base classifier is taken as ci if p(vq | ci) P(ci) > p(vq | cj) P(cj) for all j ≠ i, i.e., the class with the highest posterior wins.
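The sketch below shows one possible histogram-based Bayes base classifier of the kind described on the last two slides: a smoothed 1-D histogram per class for a single feature component, combined with class priors via Bayes' rule. The bin count and the smoothing width are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

class HistogramBayes:
    """One base classifier per feature component: a smoothed 1-D histogram
    approximating p(v_q | c_i) for each class, combined with class priors
    via Bayes' rule."""

    def __init__(self, n_bins=32, smooth_sigma=1.0):
        self.n_bins, self.sigma = n_bins, smooth_sigma

    def fit(self, values, labels, classes=(0, 1, 2)):
        values, labels = np.asarray(values, float), np.asarray(labels)
        self.lo, self.hi = values.min(), values.max()
        self.classes = classes
        self.pdfs, self.priors = {}, {}
        for c in classes:
            v = values[labels == c]
            hist, _ = np.histogram(v, bins=self.n_bins,
                                   range=(self.lo, self.hi))
            hist = gaussian_filter1d(hist.astype(float) + 1e-3, self.sigma)
            self.pdfs[c] = hist / hist.sum()
            self.priors[c] = len(v) / len(values)
        return self

    def posteriors(self, v):
        """Posterior over the classes for one scalar feature value v."""
        b = int((v - self.lo) / (self.hi - self.lo + 1e-12) * self.n_bins)
        b = min(max(b, 0), self.n_bins - 1)
        scores = np.array([self.priors[c] * self.pdfs[c][b]
                           for c in self.classes])
        return scores / scores.sum()

    def predict(self, v):
        # Decision rule from the slide: the class maximizing p(v|c) P(c).
        return self.classes[int(np.argmax(self.posteriors(v)))]
```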

AdaBoost
Boosting is a method for combining many base classifiers into a more accurate 'strong' classifier. We use AdaBoost.M1 (Freund and Schapire) to learn the strong classifier from the initial training data and the base classifiers.
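For reference, a compact AdaBoost.M1 training loop (after Freund and Schapire) is sketched below. The fit/predict interface of the base learners and the round budget T are assumptions; the reweighting and the beta_t = err_t / (1 - err_t) voting weights follow the published algorithm.

```python
import numpy as np

def adaboost_m1(X, y, make_learner, T=20):
    """Minimal AdaBoost.M1 loop: maintain example weights, train a base
    learner on the weighted set each round, and give it a voting weight
    log(1/beta_t) with beta_t = err_t / (1 - err_t).  `make_learner()`
    returning an object with fit(X, y, sample_weight) and predict(X)
    is an interface assumption of this sketch."""
    X, y = np.asarray(X), np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, betas = [], []
    for _ in range(T):
        h = make_learner()
        h.fit(X, y, sample_weight=w)
        pred = np.asarray(h.predict(X))
        err = float(np.sum(w * (pred != y)) / np.sum(w))
        if err >= 0.5:          # stop if the weak learner is no better
            break               # than chance on the weighted sample
        beta = max(err, 1e-10) / (1.0 - err)
        w = np.where(pred == y, w * beta, w)  # shrink correct examples
        w /= w.sum()
        learners.append(h)
        betas.append(beta)
    return learners, betas

def adaboost_predict(x, learners, betas):
    """Weighted vote over the learners' class predictions."""
    votes = {}
    for h, b in zip(learners, betas):
        c = h.predict(np.asarray([x]))[0]
        votes[c] = votes.get(c, 0.0) + np.log(1.0 / b)
    return max(votes, key=votes.get)
```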

The Online Co-training Framework
In general, co-training requires at least two classifiers trained on independent feature views for labeling data: examples confidently labeled by one classifier are used to train the other. In our case, each base classifier represents either motion or appearance features. To determine confidence thresholds for each base classifier, we use a validation data set.

The Online Co-training Framework
For class ci and the j-th base classifier, the confidence threshold is set to the highest probability achieved by a negative example, i.e., all examples in the validation set with probability higher than this threshold are correctly classified.
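A minimal sketch of that threshold computation, assuming the histogram-Bayes base classifier sketched earlier (with a posteriors() method) and per-component validation values:

```python
def confidence_threshold(classifier, val_values, val_labels, class_index):
    """Confidence threshold for one base classifier and one class c_i: the
    highest posterior the classifier assigns to c_i on any validation
    example whose true class is NOT c_i.  Predictions above this threshold
    were never wrong on the validation set."""
    neg = [classifier.posteriors(v)[class_index]
           for v, y in zip(val_values, val_labels) if y != class_index]
    return max(neg) if neg else 1.0
```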

The Online Co-training Framework
During the test phase, if more than 20% of the appearance-based or motion-based classifiers predict the label of an example with probability higher than the validation threshold, the example is selected for an online update. An online update is only necessary if the boosted classifier's decision has a small or negative margin; margin thresholds are also computed from the validation set.
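The selection rule can be sketched as follows. The 20% fraction and the margin test come from the slide, while the majority vote among confident classifiers and the exact data structures are illustrative assumptions.

```python
import numpy as np

def select_for_cotraining(v_app, v_mot, app_clfs, mot_clfs,
                          app_thresh, mot_thresh,
                          boosted_margin, margin_thresh, frac=0.20):
    """Label an example for online update when more than `frac` of the
    appearance-based OR motion-based base classifiers are confident (above
    their per-class validation thresholds) and the boosted classifier's
    margin is small or negative."""
    def confident_label(values, classifiers, thresholds):
        labels = []
        for v, clf, th in zip(values, classifiers, thresholds):
            post = clf.posteriors(v)
            c = int(np.argmax(post))
            if post[c] > th[c]:
                labels.append(c)
        if len(labels) > frac * len(classifiers):
            vals, counts = np.unique(labels, return_counts=True)
            return int(vals[np.argmax(counts)])  # majority label (assumption)
        return None

    if boosted_margin >= margin_thresh:
        return None                  # boosted decision already confident
    label = confident_label(v_app, app_clfs, app_thresh)
    if label is None:
        label = confident_label(v_mot, mot_clfs, mot_thresh)
    return label                     # class index, or None if not selected
```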

Once an example has been labeled by the co-training mechanism, an online boosting algorithm is used to update the base classifiers and the boosting coefficients.

Online Co-training Algorithm
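The algorithm itself appears as a figure in the original slides, so the sketch below is only a stand-in: an Oza-Russell-style online boosting update applied to a co-training-labeled example. The update(x, y) interface on the base learners and the Poisson resampling are assumptions of this sketch, not necessarily the authors' exact procedure.

```python
import numpy as np

def online_boost_update(x, y, learners, lam_correct, lam_wrong, rng=None):
    """Online boosting update in the style of Oza & Russell.  Each base
    learner sees a Poisson-weighted copy of the co-training-labelled
    example (x, y); its running weighted error sets both the example's
    importance for later learners and its voting weight."""
    rng = rng or np.random.default_rng()
    lam = 1.0
    alphas = []
    for m, h in enumerate(learners):
        for _ in range(int(rng.poisson(lam))):   # present the example k times
            h.update(x, y)
        if h.predict(x) == y:
            lam_correct[m] += lam
            err = lam_wrong[m] / (lam_correct[m] + lam_wrong[m])
            lam *= 0.5 / max(1.0 - err, 1e-9)    # down-weight easy examples
        else:
            lam_wrong[m] += lam
            err = lam_wrong[m] / (lam_correct[m] + lam_wrong[m])
            lam *= 0.5 / max(err, 1e-9)          # up-weight hard examples
        alphas.append(np.log((1.0 - err) / max(err, 1e-9)))  # voting weights
    return alphas
```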

Experiments
- Initial training: 50 examples of each class, all scaled to 30x30 (a 900-dimensional vector).
- Validation set: 20 images per class.
- Testing on three sequences.

Experiments: Results on Sequence 1.

Experiments: Results on Sequence 1. [Plots: performance over time; performance versus the number of co-trained examples.]

Experiments: Results on Sequence 2.

Experiments: Results on Sequence 2. [Plots: performance over time; performance versus the number of co-trained examples.]

Experiments: Results on Sequence 3.

Experiments: Results on Sequence 3. [Plots: performance over time; performance versus the number of co-trained examples.]