TRACKING THE INVISIBLE: LEARNING WHERE THE OBJECT MIGHT BE
Helmut Grabner (1), Jiri Matas (2), Luc Van Gool (1,3), Philippe Cattin (4)
(1) ETH Zurich, (2) Czech Technical University, Prague, (3) K.U. Leuven, (4) University of Basel

Outline  1. Introduction  2. Learning the Motion Couplings  3. Practical Implementation  4. Experimental Results  5. Conclusion

Outline  1. Introduction

Introduction  Goal: Estimate the Position of an Object

Introduction  Many temporary, but potential very strong links exists between a tracked object and parts of the images.  We discover the dynamic elements – called SUPPORTERS – that predict the position of the target. [L. Cerman, J. Matas, J., V. Hlavac, Sputnik Tracker,SCIA, 2009] [M. Yang, Y. Wu, G. Hua. Context-Aware Visual Tracking PAMI, 2009]

SUPPORTERS  1. Came with different strength.  2. Change over time.  SUPPORTERS help Tracking of objects which change there appearance very quickly and occluded objects or object outside the image.

Tracking Carl  Given the position of the object of interest in the frame, local image features are extracted from the whole image (yellow points). Image features: our implementation uses Harris points [5] and describes them by a SIFT-inspired descriptor [10].
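A minimal sketch (Python) of this feature-extraction step, assuming OpenCV is available; the corner parameters and the use of OpenCV's SIFT descriptor as a stand-in for the paper's "SIFT-inspired" descriptor are illustrative assumptions, not the authors' exact implementation.

import cv2
import numpy as np

def extract_features(frame_bgr, max_corners=500, quality=0.01, min_dist=5):
    """Detect Harris corner points and describe each one with a SIFT descriptor.

    Returns an (N, 2) array of point coordinates and an (N, 128) array of
    descriptors. All parameter values are illustrative.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # goodFeaturesToTrack with useHarrisDetector=True returns Harris corners.
    corners = cv2.goodFeaturesToTrack(gray, max_corners, quality, min_dist,
                                      useHarrisDetector=True, k=0.04)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32), np.empty((0, 128), dtype=np.float32)
    pts = corners.reshape(-1, 2)
    # Wrap the points as KeyPoints so a descriptor can be computed at each one.
    kps = [cv2.KeyPoint(float(x), float(y), 8) for x, y in pts]
    sift = cv2.SIFT_create()
    kps, desc = sift.compute(gray, kps)
    pts = np.array([kp.pt for kp in kps], dtype=np.float32)
    return pts, desc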

Local Image Features as Supporters  These image features are usually divided into object points and points belonging to the background.

Local Image Features as Supporters  Object points lie on the object surface and thus always have a strong correlation to the object motion (green points)

Local Image Features as Supporters  The object gets occluded or changed its appearance, the supporters can help infer its position.

Discovering the Supporters

Outline  2. Learning the Motion Couplings

Problem Formulation  We want to learn a model of P(x|I), predicting the position x of an object in the image I.

Implicit Shape Model - Features  After training on a large database of labeled images, the model can be used to detect objects of that class in test images.  Indicator functions p(f|I) encode whether feature f is present in image I.
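A sketch of how the indicator p(f|I) could be realized: a stored feature f counts as present in image I if some descriptor extracted from I matches it within a distance threshold. The binary 0/1 form and the threshold value are assumptions for illustration.

import numpy as np

def feature_indicator(codebook_desc, image_descs, max_dist=300.0):
    """Binary indicator p(f|I): 1.0 if the stored descriptor has a
    sufficiently close match among the descriptors extracted from image I.

    max_dist is an illustrative threshold on Euclidean descriptor distance.
    """
    if len(image_descs) == 0:
        return 0.0
    dists = np.linalg.norm(image_descs - codebook_desc, axis=1)
    return 1.0 if dists.min() < max_dist else 0.0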

Implicit Shape Model – Object Displacement  Each feature subsequently casts probabilistic votes for possible object positions, where the hypothesis score is obtained as a sum over all votes. P(x|f): votes for the object position x.
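A sketch of this voting sum, score(x) = sum_i p(x|f_i) * p(f_i|I), accumulated on a discrete voting map over image positions; the grid-based accumulation is an assumption for illustration.

import numpy as np

def accumulate_votes(vote_maps, indicators):
    """Sum the per-feature vote distributions p(x|f_i), each weighted by its
    indicator p(f_i|I), into a single voting map over image positions.

    vote_maps:  non-empty list of (H, W) arrays, one per feature
    indicators: list of scalars p(f_i|I), same length as vote_maps
    """
    score = np.zeros_like(vote_maps[0])
    for p_x_given_f, p_f in zip(vote_maps, indicators):
        score += p_f * p_x_given_f
    return score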

Implicit Shape Model

Model of Carl – Object Detection

Model of Supporters

Voting  Each vote is modeled as a Gaussian, P(x|f) = N(x; μ, Σ), where μ is the mean and Σ is the covariance matrix.
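A sketch of evaluating one Gaussian vote N(x; μ, Σ) over an image-sized grid with plain NumPy; discretizing the vote onto a pixel grid is an assumption for illustration.

import numpy as np

def gaussian_vote_map(mu, sigma, height, width):
    """Evaluate the 2-D Gaussian N(x; mu, Sigma) at every pixel position (x, y)."""
    ys, xs = np.mgrid[0:height, 0:width]
    pos = np.stack([xs, ys], axis=-1).astype(np.float64)   # (H, W, 2), (x, y) order
    diff = pos - np.asarray(mu, dtype=np.float64)
    sigma_inv = np.linalg.inv(sigma)
    # Mahalanobis distance at every pixel.
    maha = np.einsum('...i,ij,...j->...', diff, sigma_inv, diff)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(sigma)))
    return norm * np.exp(-0.5 * maha)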

Reliable Information & Motion Coupling feature point x(i) = (x(i), y(i)) associated with a feature f(i) and the position x* = (x*, y*) of the target.  where parameter α ϵ [0, 1] determines the weighting of the past estimates.  The current image It and the object position Xt* is included using p(f|It),  the indicator of feature f in the current image;  and p(Xt*-Xt(i)|f), the current relative object position.

Implementation  Store all detected feature points in a database DB. The entries (d, r, Φ, Σ) store the descriptor d and the estimated voting vector, encoded by radius r, angle Φ, and covariance matrix Σ.
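A sketch of one database entry (d, r, Φ, Σ) as described above; the container and field types are assumptions for illustration.

from dataclasses import dataclass
import numpy as np

@dataclass
class SupporterEntry:
    """One entry of the feature database DB."""
    descriptor: np.ndarray   # d: feature descriptor
    radius: float            # r: length of the voting vector
    angle: float             # phi: direction of the voting vector (radians)
    cov: np.ndarray          # Sigma: 2x2 covariance of the vote

    def voting_vector(self):
        """Cartesian displacement encoded by (r, phi)."""
        return np.array([self.radius * np.cos(self.angle),
                         self.radius * np.sin(self.angle)])

db = []  # DB: list of SupporterEntry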

Updates  Adjust the voting vector, i.e., the radius r and the relative angle Φ.  Update the covariance matrix Σ.
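A sketch of one possible update step, building on the SupporterEntry sketch above: the stored voting vector (r, Φ) and covariance Σ are blended toward the displacement observed in the current frame. The blending scheme and learning rate are assumptions, not the authors' exact update rule.

import numpy as np

def update_entry(entry, observed_disp, lr=0.1):
    """Move the stored vote of a SupporterEntry toward the displacement
    observed in the current frame (target position minus feature position).
    """
    old = entry.voting_vector()
    new = (1.0 - lr) * old + lr * np.asarray(observed_disp, dtype=np.float64)
    entry.radius = float(np.linalg.norm(new))
    entry.angle = float(np.arctan2(new[1], new[0]))
    # Track the spread of observed displacements around the stored vote.
    err = np.asarray(observed_disp, dtype=np.float64) - new
    entry.cov = (1.0 - lr) * entry.cov + lr * np.outer(err, err)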

Outline  3. Practical Implementation

Algorithm: Determination of Position
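A sketch of how the position could be determined at run time, reusing the gaussian_vote_map and SupporterEntry sketches above: match the current features against DB, cast the Gaussian votes of the matched supporters into a voting map, and take its maximum. The matching and confidence thresholds are illustrative assumptions.

import numpy as np

def determine_position(points, descriptors, db, height, width,
                       match_thresh=300.0, conf_thresh=1e-4):
    """Cast votes from all matched supporters and return the estimated target
    position (x, y), or None if the peak of the voting map is too weak.
    """
    votes = np.zeros((height, width))
    for entry in db:
        # p(f|I): nearest extracted descriptor within a distance threshold.
        dists = np.linalg.norm(descriptors - entry.descriptor, axis=1)
        i = int(np.argmin(dists))
        if dists[i] > match_thresh:
            continue
        mu = points[i] + entry.voting_vector()        # predicted target position
        votes += gaussian_vote_map(mu, entry.cov, height, width)
    peak = np.unravel_index(np.argmax(votes), votes.shape)
    if votes[peak] < conf_thresh:
        return None                                    # confidence too low
    return (peak[1], peak[0])                          # (x, y)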

Outline  4. Experimental Results

ETH-Cup: Supporters

[11] S. Stalder, H. Grabner, and L. Van Gool. Beyond semi-supervised tracking: Tracking should be as simple as detection, but not simpler than recognition. In Proc. IEEE WS on On-line Learning for Computer Vision, 2009.

Experimental Results

ETH-Cup: Improving Object Tracking  Recall: TP/(TP+FN)  Precision: TP/(TP+FP), where TP, FP, and FN denote true positives, false positives, and false negatives.
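For completeness, the two evaluation measures as a small helper; it assumes the detections have already been counted as TP, FP, and FN.

def recall_precision(tp, fp, fn):
    """Recall = TP/(TP+FN), Precision = TP/(TP+FP)."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision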

Medical Application  (a) Manually labeled valve positions; (b-f) the valve positions found in 5 cardiac phases. As the confidence of the voting space was too low in cardiac phase 16, the valve positions there were interpolated (indicated by the dotted lines) from neighboring images.

Outline  5. Conclusion

Conclusion  SUPPORTERS help Tracking of objects which 1.Change there appearance very quickly. 2.Occluded objects or object outside the image.