Multiple Camera Object Tracking
Helmy Eltoukhy and Khaled Salama
EE392J Final Project, March 20, 2002

Outline
- Introduction
- Point correspondence between multiple cameras
- Robust object tracking
- Camera communication and decision making
- Results

Object Tracking
- The objective is to obtain an accurate estimate of the position (x, y) of the tracked object.
- Tracking algorithms can be classified into:
  - Single object & single camera
  - Multiple objects & single camera
  - Multiple objects & multiple cameras
  - Single object & multiple cameras

Single Object & Single Camera
- Requires accurate camera calibration and a scene model
- Suffers from occlusions
- Not robust, and object dependent

Single Object & Multiple Cameras
- Requires accurate point correspondence between scenes
- Occlusions can be minimized or even avoided
- Redundant information allows better estimation
- Introduces a multiple-camera communication problem

System Architecture

Static Point Correspondence
- The output of the tracking stage is the object's position (x, y) in each camera's image coordinates.
- A simple scene model is used to obtain a real-world estimate of the coordinates.
- Both affine and perspective models were used for scene modeling, and static corresponding points were used for parameter estimation.
- Least mean squares was used to improve the parameter estimation.
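To make the parameter-estimation step concrete, here is a minimal least-squares sketch of the affine variant; the slides do not give an implementation, so the NumPy usage and the helper names fit_affine / apply_affine are illustrative assumptions:

```python
import numpy as np

def fit_affine(src, dst):
    """Fit the 2-D affine model x' = a*x + b*y + c, y' = d*x + e*y + f
    by least squares. src, dst: (N, 2) arrays of static corresponding
    points in the two views, N >= 3."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src          # rows for the x' equations
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src          # rows for the y' equations
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)         # interleaved [x0', y0', x1', y1', ...]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params               # [a, b, c, d, e, f]

def apply_affine(params, pts):
    """Map (N, 2) points through the fitted affine model."""
    a, b, c, d, e, f = params
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([a * x + b * y + c, d * x + e * y + f], axis=1)
```

With four or more correspondences, the same over-determined least-squares setup extends to the eight-parameter perspective (homography) model mentioned on the slide.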

Dynamic Point Correspondence

Block-Based Motion Estimation
- Typically, in object tracking, precise sub-pixel optical flow estimation is not needed.
- Furthermore, motion can be on the order of several pixels, thereby precluding the use of gradient methods.
- We started with a simple sum of squared differences (SSD) error criterion coupled with a full search in a limited region around the tracking window.
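A minimal sketch of such a full-search SSD block matcher, assuming grayscale frames as NumPy arrays; the helper name ssd_search and the default search radius are assumptions, not from the slides:

```python
import numpy as np

def ssd_search(prev, curr, box, radius=8):
    """Full search: find the displacement (dx, dy) within +/-radius that
    minimizes the SSD between the tracking window in the previous frame
    and candidate windows in the current frame.
    box: (x, y, w, h) window in `prev`, top-left corner plus size."""
    x, y, w, h = box
    template = prev[y:y + h, x:x + w].astype(np.float64)
    best_err, best_dxy = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > curr.shape[0] or xx + w > curr.shape[1]:
                continue                      # candidate falls off the frame
            cand = curr[yy:yy + h, xx:xx + w].astype(np.float64)
            err = np.sum((cand - template) ** 2)   # SSD criterion
            if err < best_err:
                best_err, best_dxy = err, (dx, dy)
    return best_dxy, best_err
```

Full search stays tractable here because both the window and the search radius are small.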

Adaptive Window Sizing
- Although simple block-based motion estimation may work reasonably well when the motion is purely translational, it can lose the object if the object's relative size changes.
  - If the object shrinks within the camera's field of view, the SSD error is strongly influenced by the background.
  - If the object grows within the camera's field of view, the window fails to make use of the entire object's information and can slip away.

Four Corner Method
- This technique divides the rectangular object window into four basic regions, one per quadrant.
- Motion vectors are calculated for each subregion, and each controls one of the four corners.
- Translational motion is captured by all four corners moving equally, while the window size is modulated when the motion is differential.
- The resultant tracking window can be non-rectangular, i.e., any quadrilateral approximated by four rectangles with a shared center corner.
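A sketch of one update step of this idea, reusing the ssd_search helper from the block-matching sketch above; the corner bookkeeping and the quadrant layout are my assumptions, not the authors' code:

```python
def four_corner_update(prev, curr, corners, radius=8):
    """One step of the four-corner idea: estimate a motion vector per
    quadrant of the window and move that quadrant's corner by it.
    corners: dict 'tl'/'tr'/'bl'/'br' -> (x, y). For simplicity the
    quadrant bounds are taken from 'tl' and 'br' only."""
    (x0, y0), (x1, y1) = corners['tl'], corners['br']
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2      # shared center corner
    quads = {                                    # (x, y, w, h) per quadrant
        'tl': (x0, y0, cx - x0, cy - y0),
        'tr': (cx, y0, x1 - cx, cy - y0),
        'bl': (x0, cy, cx - x0, y1 - cy),
        'br': (cx, cy, x1 - cx, y1 - cy),
    }
    new_corners = {}
    for name, box in quads.items():
        (dx, dy), _ = ssd_search(prev, curr, box, radius)
        px, py = corners[name]
        new_corners[name] = (px + dx, py + dy)   # each vector moves its corner
    return new_corners
```

Equal vectors translate the whole window; differential vectors grow, shrink, or skew it, which is exactly the size modulation the slide describes.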

Example: Four Corner Method
Synthetically generated test sequences.

Correlative Method
- The four corner method is strongly subject to error accumulation, which can result in drift of one or more of the tracking window quadrants.
- Once drift occurs, sizing of the window is highly inaccurate.
- We need a method with some corrective feedback, so that the window can converge to the correct size even after some errors.
- Correlating the current object features against a stored template view is one solution.

Correlative Method (cont'd)
- The basic form of the technique involves storing the initial view of the object as a reference image.
- Block matching is performed through a combined interframe and correlative MSE, where s_c'(x_0, y_0, 0) is the resized stored template image.
- Furthermore, the minimum correlative MSE is used to direct resizing of the current window.
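The slide's equation for the combined criterion did not survive transcription; a plausible form, assuming a weighted blend of the interframe term and the template (correlative) term, might look like the following sketch, where the weight lam is an assumption:

```python
import numpy as np

def combined_mse(curr_win, prev_win, template_resized, lam=0.5):
    """Blend of interframe MSE (current vs. previous window) and
    correlative MSE (current window vs. resized stored template).
    All three inputs must share the current window's shape; lam is an
    assumed weighting, since the slide's exact equation was lost."""
    curr = np.asarray(curr_win, dtype=np.float64)
    interframe = np.mean((curr - np.asarray(prev_win, dtype=np.float64)) ** 2)
    correlative = np.mean((curr - np.asarray(template_resized, dtype=np.float64)) ** 2)
    return (1.0 - lam) * interframe + lam * correlative
```

The correlative term is what provides the corrective feedback: unlike pure interframe matching, it cannot drift away from the stored reference view.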

Example: Correlative Method

Occlusion Detection
- For multi-camera feature tracking to work, each camera must possess the ability to assess the validity of its tracking (e.g., to detect occlusion).
- Comparing the minimum error at each point to some absolute threshold is problematic, since the error can grow even when tracking is still valid.
- The threshold must be adaptive to current conditions.
- One solution is to use a threshold of k (a constant > 1) times the moving average of the MSE.
- Thus, only precipitous changes in error trigger an indication of possibly fallacious tracking.
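A minimal sketch of such an adaptive threshold; the class name, the default k, and the moving-average window length are assumed parameters, not values from the slides:

```python
class OcclusionDetector:
    """Flag possible occlusion when the matching error jumps above
    k (> 1) times the moving average of recent errors."""

    def __init__(self, k=2.0, window=10):
        self.k = k
        self.window = window
        self.history = []          # recent MSE values from valid frames

    def update(self, mse):
        occluded = (len(self.history) == self.window and
                    mse > self.k * (sum(self.history) / self.window))
        if not occluded:
            # Only fold errors from apparently valid frames into the
            # average, so an occlusion does not inflate the threshold.
            self.history.append(mse)
            self.history = self.history[-self.window:]
        return occluded
```

Because the threshold tracks the recent average, a slowly growing error (e.g., gradual appearance change) stays below it, while a sudden jump trips the detector.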

Redetection Procedure (1 Camera)
State diagram from the slide: in the Normal Tracking state the camera performs motion tracking, and it switches to the Occlusion Detected state when Error > k × Err_avg; while occluded, the window remains stationary as long as the frame difference stays below the noise level (I_t < d × Noise), and normal tracking resumes once I_t > d × Noise and the error falls back below Err_avg.
- Redetection is difficult at the most general level, where it amounts to object recognition.
- Proximity and size-constancy constraints can be imposed to simplify redetection.
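Read off that diagram, the per-camera controller might look like the following two-state sketch; the transition tests and constants are my interpretation of the garbled diagram, not the authors' code:

```python
class RedetectionController:
    """Two-state control loop: Normal Tracking <-> Occlusion Detected.
    k and d play the roles of the slide's threshold constants; err_avg
    is the moving-average MSE and noise the frame-difference noise level."""

    def __init__(self, k=2.0, d=3.0):
        self.k, self.d = k, d
        self.occluded = False

    def step(self, error, err_avg, frame_diff, noise):
        if not self.occluded:
            if error > self.k * err_avg:        # precipitous error jump
                self.occluded = True            # hold the window stationary
        else:
            # Resume only when real motion reappears and the error is
            # back in the normal range.
            if frame_diff > self.d * noise and error < err_avg:
                self.occluded = False
        return self.occluded
```

Keeping the window stationary while occluded is what lets the proximity and size-constancy constraints simplify redetection: the object is assumed to reappear near where it vanished, at a similar scale.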

Example: Occlusion

Camera Communication

Results

Conclusion
- Multiple cameras can do more than just 3D imaging.
- Camera calibration only works if you have an accurate scene and camera model.
- Tracking is sensitive to camera characteristics (noise, blur, frame rate, ...).
- Tracking accuracy can be improved using multiple cameras.