Detecting Occlusion from Color Information to Improve Visual Tracking


Detecting Occlusion from Color Information to Improve Visual Tracking
Stephen Siena, B.V.K. Vijaya Kumar
ICASSP 2016, March 22, 2016

Online Object Tracking
- A challenging tracking scenario: the tracker is given minimal information
- The target is identified only in the 1st frame
- No prior knowledge about the type of object
- The tracker must adapt to the object's appearance from that video sequence alone

Why is occlusion so challenging?
- The target's appearance will change over time (scale, illumination, rotation/deformation)
- The tracker needs to adapt to the changing appearance
- This is typically done by retraining the tracker with each new detection

Why is occlusion so challenging?
- It is very hard to distinguish between a changing target and an obscured target
- With only one initial training example, we cannot tell what a change in appearance really means (deformation or obstruction)

Why is occlusion so challenging?
- A single learning rate across different videos means:
  - Lower learning rate: tolerant of occlusion, but cannot keep up with rapidly changing objects
  - Higher learning rate: learns a new target appearance quickly, but prone to locking onto and learning the appearance of the occluder
- Most trackers strike a balance for good overall performance
- A better option: one learning rate for unoccluded targets and another for occluded targets
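The two-learning-rate idea can be sketched as a gated linear model update. This is a minimal illustration, not the paper's implementation; `update_model` and the rate values are hypothetical:

```python
import numpy as np

def update_model(model, new_obs, occluded,
                 lr_normal=0.075, lr_occluded=0.0):
    """Blend the appearance model with the newest detection,
    gating the learning rate on the occlusion decision. The
    function and rate values are illustrative, not the paper's."""
    lr = lr_occluded if occluded else lr_normal
    return (1.0 - lr) * model + lr * new_obs

model = np.ones((4, 4))
obs = np.zeros((4, 4))
m_clear = update_model(model, obs, occluded=False)    # model adapts
m_occluded = update_model(model, obs, occluded=True)  # model frozen
```

Setting the occluded-frame rate to zero corresponds to simply skipping the update, which is what the detector described later enables.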

How can we detect occlusion? Our proposal: color features
- Target region: many brown hues, plus a small amount of blue captured from the background
- Surrounding region: many red and blue hues, plus a small amount of brown

Hue for Occlusion Detection. Observations:
- The target (probably) will not change colors
- There is a chance the target is a different color than its surroundings
- Objects that obscure the target may be a different color

Learning Target Hues
- Convert RGB to HSV
- Create PDFs of the target and surrounding hues
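The hue-PDF step might look like the following sketch. `hue_channel` is a minimal numpy reimplementation of the H component of RGB-to-HSV, and the 32-bin count is an assumption, not the paper's setting:

```python
import numpy as np

def hue_channel(rgb):
    """Hue in [0, 1) for an (H, W, 3) float RGB image in [0, 1].
    Minimal numpy version of the H component of RGB -> HSV."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    delta = np.where(mx == mn, 1.0, mx - mn)  # avoid divide-by-zero
    h = np.where(mx == r, (g - b) / delta,
        np.where(mx == g, 2.0 + (b - r) / delta,
                          4.0 + (r - g) / delta))
    return (h / 6.0) % 1.0

def hue_pdf(hues, bins=32):
    """Empirical PDF of hues: a normalized histogram over [0, 1)."""
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)
```

In practice a library conversion (e.g. OpenCV's RGB-to-HSV) would be used; the point is that only the hue channel feeds the PDFs.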

Learning Target Hues
- Convert the PDFs to log-likelihood ratios:

$\mathcal{L}(\mathit{target} \mid H) = \log \dfrac{P(H \mid \mathit{target}) + \varepsilon}{P(H \mid \mathit{background}) + \varepsilon}$
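A direct transcription of this formula; the `eps` default is an illustrative smoothing value, since the paper's choice of ε is not shown in the slides:

```python
import numpy as np

def hue_log_likelihood(p_target, p_background, eps=1e-3):
    """Per-bin log-likelihood ratio L(target | H):
    log((P(H|target) + eps) / (P(H|background) + eps)).
    eps is an illustrative smoothing constant."""
    return np.log((p_target + eps) / (p_background + eps))
```

Bins where the two PDFs agree score zero; positive scores mark hues that are evidence for the target, negative scores mark hues more typical of the background.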

Applying Hue Likelihoods
- The tracker finds the most likely location in the next frame
- Take the pixels in that region and compute the average likelihood:

$OS_{raw}(n) = -\dfrac{\sum_{i=x_n}^{x_n+w_n} \sum_{j=y_n}^{y_n+h_n} \mathcal{L}(\mathit{target} \mid H_{i,j})}{w_n h_n}$
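The raw occlusion score is just the negative mean likelihood over the predicted box. A sketch, with hypothetical function and argument names:

```python
import numpy as np

def occlusion_score_raw(likelihood_map, x, y, w, h):
    """Negative mean target-likelihood over the predicted box
    (x, y, w, h): a high score means the pixels look unlike the
    learned target hues, suggesting occlusion."""
    region = likelihood_map[y:y + h, x:x + w]
    return -region.mean()
```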

Applying Hue Likelihoods
- $OS_{raw}(n)$ can mean different things depending on the hue contrast in the initial frame
- Normalize scores relative to the first-frame (ground-truth) occlusion score
- This normalizes scores across videos and allows a single good decision threshold:

$OS(n) = OS_{raw}(n) - OS_{raw}(1)$

Applying Hue Likelihoods
- If the occlusion score is above the threshold, occlusion is detected and the tracker is not updated
- Otherwise, the tracker is updated as usual
Frame 296: $OS_{raw} = -0.73$; Frame 338: $OS_{raw} = 1.13$; Frame 971: $OS_{raw} = -1.88$
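The gating rule, including the first-frame normalization, can be sketched as follows; the threshold value is illustrative, not the paper's:

```python
def should_update(os_raw_n, os_raw_1, threshold=1.0):
    """Normalize the raw score against the first (ground-truth)
    frame and allow the model update only when the normalized
    score stays at or below the threshold. The threshold value
    here is illustrative, not the paper's."""
    os_n = os_raw_n - os_raw_1
    return os_n <= threshold
```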

Experiment Results
- 20 videos, 21 targets
- A subset of the 2013 Online Object Tracking Benchmark dataset
- RGB videos with occlusion

Experiment Results
Three correlation trackers were run with and without occlusion detection:
- Circulant Structure Kernel (CSK) tracker (Henriques et al.): intensity features
- Kernelized Correlation Filter (KCF) tracker (Henriques et al.): HOG features
- Discriminative Scale Space Tracker (DSST) (Danelljan et al.): a 2nd filter to estimate target scale

Experiment Results
Trackers are evaluated in two ways:

Experiment Results: Overlap

$\mathrm{Overlap} = \dfrac{|\,\mathit{ground\ truth} \cap \mathit{prediction}\,|}{|\,\mathit{ground\ truth} \cup \mathit{prediction}\,|}$  [AUROC]
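The overlap criterion is the standard intersection-over-union of axis-aligned boxes; for (x, y, w, h) boxes it can be computed as:

```python
def overlap(gt, pred):
    """Intersection-over-union of two axis-aligned boxes,
    each given as (x, y, w, h)."""
    ax, ay, aw, ah = gt
    bx, by, bw, bh = pred
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```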

Experiment Results: Center pixel error [20-pixel threshold]
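Center pixel error is the Euclidean distance between the centers of the ground-truth and predicted boxes:

```python
import math

def center_error(gt, pred):
    """Euclidean distance between the centers of two (x, y, w, h)
    boxes; the benchmark counts a frame as correct when this is
    under a threshold (20 pixels in the slides)."""
    ax, ay, aw, ah = gt
    bx, by, bw, bh = pred
    return math.hypot((ax + aw / 2) - (bx + bw / 2),
                      (ay + ah / 2) - (by + bh / 2))
```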

Frame-by-Frame Results

Track-by-Track Results
Count how many tracks have "errors": critical mistakes that represent losing the target entirely.

Tracker | Tracking Errors | Corrected | Introduced
CSK     | 17              | 4 (24%)   | 1
KCF     | 11              | 3 (27%)   |
DSST    | 11              | 4 (36%)   |
Total   | 39              | 11 (28%)  | 3

Examples CSK tracker – ‘jogging’ sequence

Examples KCF tracker – ‘lemming’ sequence

Examples DSST tracker – ‘girl’ sequence

Example of an Error
KCF tracker – 'skating1' sequence
- The colored lighting changes over the course of the video (Frame 1 shown)

Conclusion
- Color information from the estimated target region can help detect occlusion and improve tracking
- Fits the common tracker scheme of updating the model every frame
- Low computational cost

Examples (Stills) DSST ‘girl’ KCF ‘lemming’

Examples (Stills) CSK ‘jogging’

Examples (Stills) KCF 'skating1'
Occlusion detection causes a mistake due to the colored lighting