An Information Fusion Approach for Multiview Feature Tracking. Esra Ataer-Cansizoglu and Margrit Betke, Image and Video Computing Group, Computer Science Department, Boston University.

Presentation transcript:

An Information Fusion Approach for Multiview Feature Tracking
Esra Ataer-Cansizoglu and Margrit Betke
Image and Video Computing Group, Computer Science Department, Boston University

Introduction
Where is the object/feature point at each point in time? Robust tracking is important for:
- Human-computer interaction
- Video-based surveillance
- Remote sensing
- Video indexing
Problem: Tracking failures, especially due to occlusion.
Solution: Automatic recovery.
Idea: Use multiple cameras.

Proposed Method
Idea: Construct a Reliability Measure (RM) for each view to detect tracking failures.
Observation: Prefer the view in which the object is most visible. Which quantities are most informative for tracking in a view? The RM combines four terms (illustrative code sketches for these terms appear near the end of this transcript, before the references):
- Term 1: Normalized Correlation Coefficient (NCC), where I is the image, T is the template, and N is the number of pixels.
- Term 2: Epipolar distance (EPD) with respect to the fundamental matrix F relating the left and right cameras.
- Term 3: Estimate of the 3D position.
- Term 4: Geometric constraints on the shape/motion of the object.
[Figure: epipolar geometry of the left and right cameras with fundamental matrix F, adapted from Wikipedia, "Epipolar geometry".]
Symbols: z denotes the 2D tracks, y the projection of the estimated 3D position, and the reconstructed 3D trajectory appears as a third quantity. [The poster's equations defining Terms 1-4 and their weighted combination are not preserved in this transcript.]

Proposed System:
- Independent 2D trackers in each view, using a pyramidal implementation of an optical flow algorithm.
- Stereoscopic reconstruction of 3D trajectories (simple linear method).
- Prediction of the 3D position under a constant-velocity assumption.
- Automatic recovery using the projection of the estimated 3D position.

Experiments & Results
[Figure: left-view RM (top) and right-view RM (bottom) for the video of subject A, annotated with the direction of the subject's movements.]

Adjusting Weights
- The values of Term 1 and Term 2 are highly correlated: drop Term 2!
- RMs with equal weights do not have pronounced peaks: weigh Term 1 more!
[Figures: values of pairs and triplets of RM terms with weights set equally; RMs with the final weights for subject A; final weights.]

DATASET
- Cameras 20 inches apart, 120° between the optical axes.
- Training set: 8 subjects, ~450 frames each; subjects rotate the head from center to the right, then to the left, up, and down.
- Test set: 26 subjects, 2 sequences per subject, ~1200 frames per sequence; data recorded from the left and right cameras while the subject uses a mouse-replacement interface driven by a frontal camera.

RESULTS
- 304 feature-loss events occurred in one view: 254 were detected in the correct view, 25 were detected in the other view, and 25 were not detected, giving a true positive rate of 83.5%.
- The feature was lost in both views 9 times, but was declared lost in only one of the views.
- There were 53 false alarms, but in every case the feature was reinitialized to a location at most 3 pixels from the actual location, so the effect of false alarms is negligible.
- Of the 254 correctly detected tracking failures, the system recovered the feature 181 times (71.3%).

Feature-Loss Problem with Camera Mouse:
- Camera Mouse is a mouse-replacement interface for people with disabilities.
- Automatic re-initialization would enable the subject to use Camera Mouse without the intervention of a caregiver.

Conclusion
- The system detects tracking failures with high accuracy.
- Promising results on automatic feature re-initialization.
- The correlation-based term is a strong predictor of reliability.
- The proposed RM is inexpensive to compute.
Future Extensions:
- Use particle filters or other trackers in 3D.
- Extend the RM using geometric constraints on the motion of the object.
- Use multiple points, considering constraints on shape.
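The transcript preserves only the symbols of the NCC term (I: image, T: template, N: number of pixels), not the poster's exact formula. The following is a minimal sketch assuming the standard zero-mean normalized correlation coefficient; the poster's normalization may differ.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized correlation coefficient (illustrative form).

    patch and template are grayscale arrays of identical shape (N pixels).
    Values near 1 suggest the tracked feature still matches its template;
    low values hint at drift or occlusion.
    """
    I = patch.astype(np.float64).ravel()
    T = template.astype(np.float64).ravel()
    I -= I.mean()
    T -= T.mean()
    denom = np.linalg.norm(I) * np.linalg.norm(T)
    return 0.0 if denom == 0 else float(np.dot(I, T) / denom)
```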
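The epipolar distance term measures how far the point tracked in one view lies from the epipolar line induced by the point tracked in the other view. A minimal sketch using the fundamental matrix F, assuming the one-sided point-to-line distance (the poster may use a symmetric variant):

```python
import numpy as np

def epipolar_distance(F, x_left, x_right):
    """Pixel distance from the right-view point to the epipolar line of the left-view point.

    F is the 3x3 fundamental matrix mapping left-view points to right-view
    epipolar lines (l' = F x); x_left and x_right are (u, v) pixel coordinates.
    """
    x = np.array([x_left[0], x_left[1], 1.0])
    xp = np.array([x_right[0], x_right[1], 1.0])
    line = F @ x                       # epipolar line a*u + b*v + c = 0 in the right image
    a, b, _ = line
    return float(abs(xp @ line) / np.hypot(a, b))
```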
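The poster states that the RM combines the terms with per-term weights, that Term 2 was dropped because it correlates strongly with Term 1, and that Term 1 was weighted more heavily; the exact functional form and the final weights are not preserved here. The sketch below is one plausible combination with hypothetical weights, not the authors' formula.

```python
def reliability_measure(ncc_value, dist_to_projection, geom_consistency,
                        w1=0.6, w3=0.25, w4=0.15):
    """Illustrative weighted combination of the remaining RM terms.

    Term 2 (epipolar distance) is omitted, mirroring the poster's finding that
    it is highly correlated with Term 1. Distance-based quantities are mapped
    to [0, 1] so that larger values always mean a more reliable view. The
    weights are hypothetical; the poster's final weights are not given here.
    """
    term1 = max(0.0, ncc_value)                  # appearance agreement with the template
    term3 = 1.0 / (1.0 + dist_to_projection)     # agreement with the projected 3D estimate
    term4 = geom_consistency                     # shape/motion constraint score in [0, 1]
    return w1 * term1 + w3 * term3 + w4 * term4
```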
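The recovery step predicts the 3D position under the constant-velocity assumption and reprojects it into the failing view to re-seed the 2D tracker. A minimal sketch, assuming calibrated 3x4 projection matrices and a reconstructed 3D trajectory; the threshold and reinitialization policy are illustrative, not the authors' exact procedure.

```python
import numpy as np

def predict_constant_velocity(trajectory_3d):
    """Extrapolate the next 3D position from the last two reconstructed points."""
    X_prev, X_curr = trajectory_3d[-2], trajectory_3d[-1]
    return X_curr + (X_curr - X_prev)

def project(P, X):
    """Project a 3D point X with a 3x4 projection matrix P; return (u, v) pixels."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def recover_if_lost(reliability, threshold, P_view, trajectory_3d, current_2d):
    """Reinitialize the 2D track in a view whose reliability measure drops below threshold."""
    if reliability < threshold:
        X_pred = predict_constant_velocity(trajectory_3d)
        return project(P_view, X_pred)   # re-seed the tracker at the reprojected prediction
    return current_2d
```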
Betke, "Movement and Recovery Analysis of a Mouse-Replacement Interface for Users with Severe Disabilities," 13th Int. Conference on Human-Computer Interaction, 10 pp., San Diego, USA, July [3] Y. Tong, Y. Wang, Z. Zhu, and Q. Ji, “Robust Facial Feature Tracking under Varying Face Pose and Facial Expression,” Pattern Recognition, 40(11): , November [4] C. Fagiani, M. Betke, and J. Gips, “Evaluation of tracking methods for human-computer interaction,” IEEE Workshop on Applications in Computer Vision (WACV 2002), pp , Orlando, USA, December 2002.