KOUROSH MESHGI PROGRESS REPORT TOPIC To: Ishii Lab Members, Dr. Shin-ichi Maeda, Dr. Shigeyuki Oba, and Prof. Shin Ishii 9 MAY 2014

SLIDE 2 – MAIN APPLICATIONS: Surveillance, Public Entertainment, Robotics, Video Indexing, Action Recognition

SLIDE 3 – MAIN CHALLENGES: Varying Scale, Clutter, Deformation, Illumination, Abrupt Motion

SLIDE 4 – Goal: estimate p(X_t | Y_1, …, Y_t) given p(X_1). States: target location and scale; Observations: sensory information
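The goal stated on this slide is the standard Bayesian filtering recursion; written out in the slide's notation (a sketch, with X the state and Y the observation):

```latex
p(X_t \mid Y_{1:t}) \;\propto\; p(Y_t \mid X_t) \int p(X_t \mid X_{t-1})\, p(X_{t-1} \mid Y_{1:t-1})\, \mathrm{d}X_{t-1}
```

The particle filter approximates this integral with a weighted set of samples over the state (x, y, w, h).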

PARTICLE FILTER TRACKING

SLIDE 6 – Frame t: RGB Domain

SLIDE 7 – Frame t: Depth Domain

SLIDE 8 – Frame t: Sensory Information

SLIDE 9 – Frame t: Observation; State: bounding box at (x, y) with width w and height h

SLIDE 10 – Feature Set: Color, Shape, Edge, Texture

SLIDE 11 – Frame 1: Template = features f_1, …, f_j, …, f_M

SLIDE 12 – Frame 1: Particles initialized, overlapped

SLIDE 13 – Frame t: Motion Model → t + 1
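A minimal sketch of the motion-model step on this slide, assuming a Gaussian random walk over the (x, y, w, h) state (the noise scales are illustrative, not the presentation's actual values):

```python
import numpy as np

def propagate(particles, pos_std=5.0, size_std=1.0, rng=None):
    """Advance each particle (x, y, w, h) from frame t to t + 1 by
    adding Gaussian random-walk noise: position components get
    pos_std, size components get size_std."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, 1.0, size=particles.shape)
    noise *= np.array([pos_std, pos_std, size_std, size_std])
    return particles + noise
```

More elaborate dynamics (e.g. constant velocity) would replace the pure random walk, at the cost of the "advanced motion models" feasibility issues mentioned later in the talk.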

SLIDE 14 – Frame t + 1: Feature vectors f_1, f_2, …, f_M computed for each particle X_{1,t+1}, X_{2,t+1}, …, X_{N,t+1}

SLIDE 15 – Frame t: Probability of observation for each feature, combined under an independence assumption across features
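Under the independence assumption on this slide, the joint observation likelihood factorizes into a product of per-feature likelihoods. A minimal sketch, assuming a Gaussian kernel on the feature-to-template distance (the kernel and sigma are illustrative):

```python
import numpy as np

def particle_likelihood(particle_feats, template_feats, sigma=0.1):
    """Joint observation likelihood as a product of per-feature
    likelihoods (independence assumption), each a Gaussian on the
    distance between particle feature and template feature."""
    lik = 1.0
    for f_obs, f_tmpl in zip(particle_feats, template_feats):
        d = np.linalg.norm(np.asarray(f_obs, float) - np.asarray(f_tmpl, float))
        lik *= float(np.exp(-d**2 / (2.0 * sigma**2)))
    return lik
```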

SLIDE 16 – Frame t + 1: Particles (brighter = more probable)

SLIDE 17 – Frame t + 1: Feature Vectors

SLIDE 18 – Frame t + 1: Model Update → New Model

SLIDE 19 – Frame t + 1: Resampling proportional to probability
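Resampling proportional to probability can be sketched with multinomial resampling (systematic resampling is a common lower-variance alternative):

```python
import numpy as np

def resample(particles, weights, rng=None):
    """Draw a new particle set of the same size, selecting each
    particle with probability proportional to its weight."""
    rng = np.random.default_rng(rng)
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
    return particles[idx]
```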

PARTICLE FILTER TRACKING

SLIDE 21 – Appearance Changes, Model Drift, Deficient Feature Space, Uninformed Search, Optimized Feature Selection, Approximation of Target

SLIDE 22 – Same-color objects, background clutter, illumination change, shadows and shades → Use depth!

SLIDE 23 – Templates get corrupted! Handle occlusion (no model update during occlusions).

SLIDE 24 – Local optima of the feature space, feature noise, feature failures → Regularization: non-zero values, normalization

SLIDE 25 – Particles converge to local optima / remain in the same region. Options: advanced motion models (not always feasible); restarting the tracker (slow occlusion recovery); expanding the search area!

SLIDE 26 – Occlusion! The search is not directed, neither channel carries useful information, and particles should scatter away from the last known position.

SLIDE 27 – GENERATIVE MODELS: do not address occlusion explicitly; maintain a large set of hypotheses → computationally expensive. DISCRIMINATIVE MODELS: direct occlusion detection; robust against partial & temporary occlusion → persistent occlusion hinders tracking. → Combine them!
Occlusion types (the type of occlusion matters for the target's update model):
 Dynamic occlusion: pixels of another object closer to the camera
 Scene occlusion: still objects closer to the camera than the target
 Apparent occlusion: due to shape change, silhouette motion, shadows, self-occlusion
Trade-off: keep memory vs. keep focus on the target.

SLIDE 28 – Occlusion taxonomy:
 PTO: partial occlusion
 SAO: self- or articulation occlusion
 TFO: temporal full occlusion (shorter than 3 frames)
 PFO: persistent full occlusion
 CPO: complex partial occlusion, including "split and merge" and permanent changes in a key attribute of a part of the target
 CFO: complex full occlusion

SLIDE 29 – Related work:
 [Zhao & Nevatia, 04] occlusion indicator: ratio of FG/BKG
 [Wu & Nevatia, 07] handle occlusion using an appearance model
 [de Villiers et al., 12] switch tracker in the case of occlusion
 [Song & Xiao, 13] occlusion indicator: new peak in HOD, or reduction of the size of the main peak
Many other papers handle occlusions as a by-product of their novel trackers.

OCCLUSION-AWARE PFT

SLIDE 31 – Tracker flow: Initialization → Motion Model → Observation → Occlusion Estimation → (occlusion estimate > threshold? YES: set occlusion flag, use constant likelihood; NO: calculate likelihood) → Target Estimation → Model Update → Resampling

SLIDE 32 –  Occlusion flag (for each particle).  Observation model: no-occlusion particles → same as before; occlusion-flagged particles → uniform distribution.
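The per-particle observation model above can be sketched as follows: non-flagged particles keep the usual likelihood (here an illustrative Gaussian kernel on the template distance), while flagged particles get a constant likelihood, i.e. a uniform observation model (sigma and the constant are illustrative values):

```python
import numpy as np

def weight_particles(distances, occ_flags, sigma=0.1, const_lik=1e-3):
    """Occlusion-aware observation model: Gaussian likelihood on the
    template distance for visible particles, constant (uniform)
    likelihood for occlusion-flagged particles."""
    d = np.asarray(distances, dtype=float)
    lik = np.exp(-d**2 / (2.0 * sigma**2))
    lik[np.asarray(occ_flags, dtype=bool)] = const_lik
    return lik
```

The constant likelihood keeps flagged particles alive through resampling, which is what lets them scatter and search for the reappearing target.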

SLIDE 33 –  Position estimation of the target.  Occlusion state for the next box.

SLIDE 34 –  Model update (separately for each feature).  Modified dynamics model of particles.

SLIDE 35 – Occlusion!

SLIDE 36 – Occlusion! GOTCHA!

SLIDE 37 – Quick occlusion recovery → low CPE. No template corruption. No attraction to other objects or the background.

SLIDE 38 – Features: COLOR (HOC), TEXTURE (LBP), EDGE (LoG), GRADIENT (HOG), DEPTH (HOD), 2D PROJECTION (BETA), 3D SHAPE (PCL Σ)
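As one concrete example from the feature set above, a histogram-of-colors (HOC) descriptor can be sketched as per-channel histograms of an RGB patch, concatenated and L1-normalized (the exact form and bin count here are assumptions, not the presentation's specification):

```python
import numpy as np

def hoc(patch, bins=8):
    """Histogram of colors: histogram each RGB channel of the patch
    into `bins` bins over [0, 256), concatenate, and L1-normalize."""
    patch = np.asarray(patch)
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()
```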

RESULTS & DISCUSSION

SLIDE 40 – Princeton Tracking Dataset: 5 validation videos with ground truth, 95 evaluation videos

SLIDE 41 – Compared trackers:
 OAPFT (proposed, with different feature sets)
 OI+SVM (SVM tracker with occlusion indicator): state-of-the-art RGB-D tracker
 ACPF (Adaptive Color Particle Filter): traditional particle filter tracker
 STRUCK (Structured Output SVM Tracker): state-of-the-art RGB tracker, successful at occlusion handling

SLIDE 42 –  PASCAL VOC overall performance: success vs. overlap threshold t_o, summarized by the area under the curve (AUC)
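The PASCAL VOC overlap and the AUC summary can be sketched as follows; the AUC is approximated here by the average success rate over a uniform grid of thresholds, a common convention (the grid size is an assumption):

```python
import numpy as np

def overlap(a, b):
    """PASCAL VOC overlap (intersection over union) of two boxes
    given as (x, y, w, h)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(overlaps, thresholds=None):
    """Success rate (fraction of frames whose overlap exceeds t_o)
    at each threshold, averaged over thresholds to approximate the
    area under the success curve."""
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    overlaps = np.asarray(overlaps, dtype=float)
    success = np.array([(overlaps > t).mean() for t in thresholds])
    return float(success.mean())
```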

SLIDE 43 – [Success plot: success rate vs. overlap threshold]

SLIDE 44 –  Mean central point error (localization success).  Mean scale adaptation error (estimated vs. ground truth).
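The two errors above can be sketched on boxes given as (x, y, w, h); the central-point error is standard, while the scale-adaptation error is written here as the mean absolute width/height difference, one plausible definition since the slide does not spell out the formula:

```python
import numpy as np

def mean_cpe(est, gt):
    """Mean central-point error: Euclidean distance between the
    centers of estimated and ground-truth boxes, averaged over frames."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    c_est = est[:, :2] + est[:, 2:] / 2.0
    c_gt = gt[:, :2] + gt[:, 2:] / 2.0
    return float(np.linalg.norm(c_est - c_gt, axis=1).mean())

def mean_sae(est, gt):
    """Mean scale-adaptation error, taken here as the average absolute
    difference in box width and height (an assumed definition)."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.abs(est[:, 2:] - gt[:, 2:]).mean())
```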

SLIDE 45 – [Plot: center positioning error, CPE (pixels), per frame]

SLIDE 46 – [Plot: scale adaptation error, SAE (pixels), per frame]

SLIDE 47 – Metrics:
 FP (false positive): the tracker does not realize that the target is occluded
 MI (missing): the target is visible, but the tracker fails to track it, as if the target were still occluded
 MT (mistracked): the estimated bounding box has nothing in common with the ground-truth box
 FPS: execution speed in frames per second

SLIDE 48 – Results table: AUC, CPE, SAE, MI, FP, MT, and FPS for the proposed tracker vs. ACPF (Nummiaro '03), STRUCK (Hare '11), and OI+SVM (Song '13). [Numeric values not recoverable from the transcript.]

SLIDE 49 – Future work: more resilient features + scale adaptation; active occlusion handling; measuring the confidence of each data channel; adaptive model update

QUESTIONS? Thank you for your time…