Kourosh MESHGI, Shin-ichi MAEDA, Shigeyuki OBA, Shin ISHII. 18 MAR 2014. Integrated Systems Biology Lab (Ishii Lab), Graduate School of Informatics, Kyoto University.

Presentation transcript:

Kourosh MESHGI, Shin-ichi MAEDA, Shigeyuki OBA, Shin ISHII. 18 MAR 2014. Integrated Systems Biology Lab (Ishii Lab), Graduate School of Informatics, Kyoto University. IEICE NC, Tamagawa '14.

KOUROSH MESHGI – ISHII LAB – MAR 2014 – SLIDE 2. MAIN APPLICATIONS: Surveillance, Public Entertainment, Robotics, Video Indexing, Action Recognition.

SLIDE 3. MAIN CHALLENGES: Varying Scale, Clutter, Non-Rigid Deformation, Illumination, Abrupt Motion.

SLIDE 4. Related work:
[Mihaylova et al., 07] RGB color + texture + motion + edge, two particle filters.
[Spinello & Arras, 11] HOG on RGB + HOG on depth, SVM classification.
[Shotton et al., 11] Skeleton from depth, random forest.
[Choi et al., 11] Ensemble of detectors (upper body, face, skin, shape from depth, motion from depth), RJ-MCMC.
[Song et al., 13] 2.5D shape + motion + HOG on color and depth, occlusion indicator, SVM.

SLIDE 5. Frame t: Observation.

SLIDE 6. Frame t: State. The state is an image patch parameterized by its position (x, y), width w, and height h.

SLIDE 7. Feature Set: Color, Shape, Edge, Texture.

SLIDE 8. Frame 1: Template, a set of feature vectors f_1, …, f_j, …, f_n.

SLIDE 9. Frame 1: Particles are initialized, overlapping the target.

SLIDE 10. Frame t: The motion model propagates particles to frame t + 1.
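The motion-model step can be sketched as a simple Gaussian random walk over the bounding-box state (x, y, w, h); the slide does not specify the dynamics, so the random-walk form and the noise scales `sigma_pos` and `sigma_size` are assumptions for illustration.

```python
import numpy as np

def propagate(particles, sigma_pos=5.0, sigma_size=1.0, rng=None):
    """Propagate N particles of shape (N, 4) = (x, y, w, h) from frame t
    to t + 1 with an assumed zero-mean Gaussian random walk per component."""
    rng = np.random.default_rng() if rng is None else rng
    noise = np.empty_like(particles, dtype=float)
    noise[:, :2] = rng.normal(0.0, sigma_pos, size=(len(particles), 2))   # x, y jitter
    noise[:, 2:] = rng.normal(0.0, sigma_size, size=(len(particles), 2))  # w, h jitter
    return particles + noise
```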

SLIDE 11. Frame t + 1: Feature vectors f_1, f_2, …, f_n are computed for each particle X_{1,t+1}, X_{2,t+1}, …, X_{N,t+1}.

SLIDE 12. Frame t: Probability of observation, computed for each feature and combined under an independence assumption.
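The independence assumption means the observation likelihood of a particle factorizes over features, p(z | x) = Π_j p(z_j | x). A minimal sketch, assuming each per-feature likelihood is a Gaussian kernel on the distance between the particle's feature vector and the template's (the kernel and the bandwidths are illustrative, not the presentation's choice):

```python
import numpy as np

def particle_likelihood(features, template, bandwidths):
    """p(z | x) = prod_j p(z_j | x): multiply one (unnormalized,
    Gaussian-kernel) likelihood per feature channel."""
    log_p = 0.0
    for f, t, s in zip(features, template, bandwidths):
        d2 = float(np.sum((np.asarray(f) - np.asarray(t)) ** 2))
        log_p += -d2 / (2.0 * s ** 2)   # log-likelihood of one feature channel
    return float(np.exp(log_p))          # product over channels
```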

SLIDE 13. Frame t + 1: Particle weights (brighter = more probable).

SLIDE 14. Frame t + 1: The expectation over the weighted particles gives the state estimate.
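The expectation step can be sketched as the probability-weighted mean of the particle states; this is one standard reading of "Expectation" in a particle filter, assumed here for illustration:

```python
import numpy as np

def estimate_state(particles, weights):
    """E[state] = sum_i w_i * particle_i, with normalized weights."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                      # normalize to a probability distribution
    return w @ np.asarray(particles)  # weighted mean of the (x, y, w, h) rows
```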

SLIDE 15. Frame t + 1: Model update produces the new model.

SLIDE 16. Frame t + 1: Particles are resampled proportional to their probability.
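Drawing particles proportional to their probability is the resampling step of a particle filter; a minimal multinomial-resampling sketch (systematic resampling is a common lower-variance alternative):

```python
import numpy as np

def resample(particles, weights, rng=None):
    """Draw N particles with replacement, each chosen with probability
    proportional to its weight (plain multinomial resampling)."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
    return np.asarray(particles)[idx]
```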

SLIDE 17. Color fails with: same-color objects, background clutter, illumination change, shadows and shades. Use depth!

SLIDE 18. Templates get corrupted during occlusion. Handle occlusion!

SLIDE 19. Particles converge to local optima / remain in the same region. Remedies: advanced motion models (not always feasible), restarting tracking (slow occlusion recovery), or expanding the search area!

SLIDE 20.
GENERATIVE MODELS: do not address occlusion explicitly; maintain a large set of hypotheses, which is computationally expensive.
DISCRIMINATIVE MODELS: direct occlusion detection; robust against partial and temporary occlusion, but persistent occlusion hinders tracking.
Types of occlusion:
 Dynamic occlusion: pixels of another object closer to the camera.
 Scene occlusion: still objects closer to the camera than the target object.
 Apparent occlusion: due to shape change, silhouette motion, shadows, or self-occlusion.
For the model update of the target, the type of occlusion is important: keep memory vs. keep focus on the target. Combine them!

SLIDE 21. Occlusion! The search is not directed, neither channel has useful information, and particles should scatter away from the last known position.

SLIDE 22.  Occlusion flag (for each particle).  Observation model: non-occluded particles are scored the same as before; occlusion-flagged particles receive a uniform distribution.
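The two-case observation model can be sketched as follows; `uniform_level`, the flat likelihood assigned to occlusion-flagged particles, is an assumed constant, not a value from the presentation:

```python
import numpy as np

def occlusion_aware_likelihood(scores, occluded, uniform_level=0.1):
    """Non-occluded particles keep their template-matching score ("same as
    before"); occlusion-flagged particles get a flat likelihood, so they
    survive resampling and keep exploring during occlusion."""
    out = np.asarray(scores, dtype=float).copy()
    out[np.asarray(occluded, dtype=bool)] = uniform_level
    return out
```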

SLIDE 23.  Probability of occlusion for the next box.  Modified dynamics model of the particle.

SLIDE 24.  Model update, performed separately for each feature.

SLIDE 25. Occlusion!

SLIDE 26. Occlusion! GOTCHA!

SLIDE 27. Quick occlusion recovery  low CPE. No template corruption. No attraction to other objects or the background.

SLIDE 28. Features: COLOR (HOC), TEXTURE (LBP), EDGE (LoG), DEPTH (HOD), 3D SHAPE (PCL Σ).

SLIDE 29. Princeton Tracking Dataset: 5 validation videos with ground truth, 95 evaluation videos.

SLIDE 30. A. Edge PF; B. Edge + Color PF; C. Edge + Color + Depth PF; D. Edge + Color + Depth + Texture PF; E. Edge + Color + Depth + Texture + 3D Shape PF; F. Occlusion-Aware PF.

SLIDE 31. (The yellow dashed line is the ground truth.)

SLIDE 32.  PASCAL VOC success criterion: a frame succeeds when the overlap with the ground truth exceeds a threshold t_0; the area under the success curve (AUC) summarizes the tracker.
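The PASCAL VOC overlap underlying the success plot is the intersection-over-union (IoU) of the estimated and ground-truth boxes; a frame counts as a success when the IoU exceeds t_0, and sweeping t_0 gives the curve whose area is the AUC. A minimal sketch, assuming (x, y, w, h) boxes with a top-left origin:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes, top-left origin."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```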

SLIDE 33. Success plot: success rate vs. overlap threshold for trackers A–F.

SLIDE 34.  Mean central point error: localization success.  Mean scale adaptation error: estimated vs. ground-truth box size.
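The two error measures can be sketched as follows, assuming (x, y, w, h) boxes with a top-left origin; the exact SAE definition is not given on the slide, so the absolute width/height difference used here is an assumption:

```python
import math

def cpe(est, gt):
    """Central point error: Euclidean distance between box centers."""
    ecx, ecy = est[0] + est[2] / 2.0, est[1] + est[3] / 2.0
    gcx, gcy = gt[0] + gt[2] / 2.0, gt[1] + gt[3] / 2.0
    return math.hypot(ecx - gcx, ecy - gcy)

def sae(est, gt):
    """Scale adaptation error (assumed form): absolute width + height error."""
    return abs(est[2] - gt[2]) + abs(est[3] - gt[3])
```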

SLIDE 35. Center positioning error: CPE (pixels) vs. frames for trackers A–F.

SLIDE 36. Scale adaptation error: SAE (pixels) vs. frames for trackers A–F.

SLIDE 37. Results (AUC, CPE, SAE) for trackers: A (edg), B (edg+hoc), C (edg+hoc+hod), D (edg+hoc+hod+tex), E (edg+hoc+hod+tex+shp), F (all + occlusion handling).

SLIDE 38. More resilient features + scale adaptation. Active occlusion handling. Measure the confidence of each data channel. Adaptive model update.

QUESTIONS? Thank you for your time…