Surveillance with Visual Tagging and Camera Placement
J. Zhao and S.-C. Cheung, Center for Visualization and Virtual Environment, University of Kentucky


INTRODUCTION

Visual Tagging
- Identify and locate common objects across disparate camera views
- Based on identifying "semantically rich" visual features such as faces, gaits, or artificial markers

The "Camera Placement" question: given a surveillance environment, how many cameras are needed, and how should they be placed, to achieve the best visual tagging performance?

Contributions:
- A general statistical framework for calculating the visual tagging performance of a camera network
- An analytical solution for a single camera
- A Monte Carlo based solution for any placement with an arbitrary number of cameras
- An iterative integer-programming based algorithm to compute the "optimal" camera placement
- Application in a "privacy-protected" camera network

I. Statistical Visibility Model

Visibility model: it is unnecessary for the tag to be visible to all cameras; two cameras are enough. Two cases:
- Uniquely identified tags (e.g. faces): need homographies between camera pairs; the tag location is found by intersecting epipolar lines
- Ambiguous tags (e.g. colored tags): need full calibration; the tag location is found by intersecting light rays
(A triangulation sketch for the fully calibrated case is given below.)

Fixed parameters (easily measured):
- room topology
- cameras' intrinsic parameters
- dimensions (lengths) of a tag
- number of tags
Design parameters (we can control):
- number of cameras
- position of each camera
- orientation of each camera
Random parameters (little or no control):
- position (x, y) of a tag
- orientation of a tag
We assume an a-priori statistical model for the random parameters.

II. Visibility from a single camera

The binary visibility function indicates whether the tag P can be successfully detected from the camera C: we require the projected tag to be at least T pixels long for proper detection. Simple 2D geometry (in the paper) gives the length l of the image of the tag in closed form. (A numerical version of this check is sketched below.)

III. Visibility for arbitrary numbers of cameras

Visibility = the tag is visible to at least two cameras. [Figure: visibility map, high to low.] (A Monte Carlo estimate of the average visibility is sketched below.)

Optimal Camera placement

Goal: the optimal placement maximizing the visibility metric. This is very challenging because the problem is
- nonlinear
- without an analytic solution
Proposed approximate solution:
- discretize the domain into grid points
- progressively refine the grid density

I. Solving the discrete problem
Divide the environment into a lattice:
- gridP, with N_P grid points for the tag
- gridC, with N_C grid points for the cameras
The binary variable b_i indicates whether to put a camera on the i-th camera grid point. Constraints:
- each tag grid point must be visible to at least 2 cameras
- each physical position holds at most one camera
This is a standard binary program, solved with lp_solve. (One plausible formulation is sketched below.)

II. Deciding the grid density
Problem: a solution may not exist for a dense tag grid.
Adaptive algorithm:
- start from a sparse grid lattice
- increase the density of gridC and gridP until
  - a predefined average target visibility is reached, or
  - the density of gridC exceeds a limit
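Section I above recovers the tag location, in the fully calibrated case, by intersecting light rays from two views. The poster gives no code; the following is a minimal linear (DLT) triangulation sketch under the assumption that 3x4 projection matrices are available. Function names and the re-projection helper are illustrative, not the authors' implementation.

```python
import numpy as np

def triangulate_tag(P1, P2, x1, x2):
    """Recover a 3-D tag location from its pixel coordinates in two calibrated
    views via linear (DLT) triangulation.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the tag in each view
    """
    u1, v1 = x1
    u2, v2 = x2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([u1 * P1[2] - P1[0],
                   v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0],
                   v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # inhomogeneous 3-D point

def reproject(P, X):
    """Project a 3-D point into a camera with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

The re-projection step is what makes the privacy application work: once the tag's 3-D location is known from two views, it can be re-projected into a camera where the tag itself is not detected.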
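Section II reduces single-camera visibility to a threshold on the projected tag length l. The closed-form expression from the paper is not in this transcript; the sketch below checks the same condition numerically for a 2-D pinhole camera. The camera convention (looking along its local x-axis), the line-segment tag model, and all names are assumptions made for illustration; field-of-view limits and occlusion are ignored.

```python
import numpy as np

def projected_tag_length(cam_pos, cam_yaw, focal, tag_center, tag_yaw, tag_len):
    """Project the two endpoints of a line-segment tag onto a 2-D pinhole
    camera and return the image length in pixels (a numerical stand-in for
    the paper's closed-form expression)."""
    # Endpoints of the tag segment in world coordinates.
    d = 0.5 * tag_len * np.array([np.cos(tag_yaw), np.sin(tag_yaw)])
    p1, p2 = np.asarray(tag_center) - d, np.asarray(tag_center) + d

    # World -> camera rotation; the camera looks along its local x-axis.
    c, s = np.cos(-cam_yaw), np.sin(-cam_yaw)
    R = np.array([[c, -s], [s, c]])

    def project(p):
        q = R @ (np.asarray(p) - np.asarray(cam_pos))   # camera frame
        if q[0] <= 0:                                    # behind the camera
            return None
        return focal * q[1] / q[0]                       # 1-D image coordinate

    u1, u2 = project(p1), project(p2)
    if u1 is None or u2 is None:
        return 0.0
    return abs(u1 - u2)

def visible(cam_pos, cam_yaw, focal, tag_center, tag_yaw, tag_len, T=20.0):
    """Binary visibility: the projected tag must be at least T pixels long."""
    return projected_tag_length(cam_pos, cam_yaw, focal,
                                tag_center, tag_yaw, tag_len) >= T
```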
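For an arbitrary number of cameras (Section III), the poster estimates the average visibility by Monte Carlo sampling over the random tag parameters. A minimal sketch, assuming a uniform prior over a rectangular room and a user-supplied per-camera visibility test (e.g. a wrapper around the single-camera check above):

```python
import numpy as np

def monte_carlo_visibility(cameras, visible_fn, room=(10.0, 10.0),
                           n_samples=100_000, min_cams=2, rng=None):
    """Estimate the mean visual-tagging visibility of a camera placement.

    cameras    : list of camera parameter objects (whatever visible_fn expects)
    visible_fn : visible_fn(cam, tag_pos, tag_yaw) -> bool
    A sampled tag counts as 'tagged' when at least `min_cams` cameras see it.
    """
    rng = np.random.default_rng() if rng is None else rng
    hits = 0
    for _ in range(n_samples):
        tag_pos = rng.uniform((0.0, 0.0), room)    # uniform prior over the room
        tag_yaw = rng.uniform(0.0, 2 * np.pi)      # uniform prior over orientation
        n_seen = sum(visible_fn(cam, tag_pos, tag_yaw) for cam in cameras)
        if n_seen >= min_cams:
            hits += 1
    return hits / n_samples
```

Any other a-priori statistical model of tag position and orientation can be substituted for the uniform draws.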
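The discrete placement problem is posed on the poster as a binary program over the indicators b_i and solved with lp_solve. The objective function is not legible in this transcript, so the sketch below assumes one plausible reading (minimize the number of cameras subject to the two stated constraints) and uses PuLP purely as a stand-in modeling interface; the visibility matrix A and the position map are hypothetical inputs.

```python
import pulp  # stand-in solver interface; the poster reports using lp_solve

def place_cameras(A, pos_of, min_cover=2):
    """Binary program for camera placement.

    A       : 0/1 matrix, A[p][i] = 1 if camera grid point i sees tag grid point p
    pos_of  : pos_of[i] = physical location id of camera grid point i
              (several orientations share one physical location)
    Returns the selected camera grid points, or None if infeasible.
    """
    n_tags, n_cams = len(A), len(A[0])
    prob = pulp.LpProblem("camera_placement", pulp.LpMinimize)
    b = [pulp.LpVariable(f"b_{i}", cat="Binary") for i in range(n_cams)]

    # Assumed objective: use as few cameras as possible.
    prob += pulp.lpSum(b)

    # Every tag grid point must be visible to at least `min_cover` cameras.
    for p in range(n_tags):
        prob += pulp.lpSum(A[p][i] * b[i] for i in range(n_cams)) >= min_cover

    # Each physical position holds at most one camera.
    for pos in set(pos_of):
        prob += pulp.lpSum(b[i] for i in range(n_cams) if pos_of[i] == pos) <= 1

    status = prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[status] != "Optimal":
        return None
    return [i for i in range(n_cams) if b[i].value() > 0.5]
```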
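The adaptive grid-density loop can then be summarized as follows. `solve_on_grid` and `evaluate` are hypothetical wrappers around the binary program and the Monte Carlo estimate above, and the doubling schedule is an assumption; the poster only states that both grids are refined until a target average visibility is reached or the camera grid becomes too dense.

```python
def adaptive_placement(solve_on_grid, evaluate, target_vis=0.95,
                       init_density=4, max_density=64):
    """Iteratively refine gridC and gridP until the placement reaches a target
    average visibility or the camera grid density exceeds a limit.

    solve_on_grid(density) -> placement or None (runs the binary program)
    evaluate(placement)    -> average visibility (Monte Carlo estimate)
    """
    density = init_density
    best = None
    while density <= max_density:
        placement = solve_on_grid(density)
        if placement is not None:
            vis = evaluate(placement)
            best = (placement, vis)
            if vis >= target_vis:
                break
        density *= 2        # refine both the camera and tag lattices
    return best
```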
III. Results
The figures show the results after 1, 3, and 5 iterations: the camera grid, the tag grid, and the computed camera positions and poses, together with the corresponding visibility map and average visibility.

Experimental results

Simulation of optimal camera placement:
- twelve "optimal" camera views (iteration 5) of a randomly moving humanoid carrying a tag
Application in privacy-protected surveillance:
- even though the tag is not visible in Cam3, its location is determined using epipolar geometry

Summary and Future work

- A generic metric model for camera placement for the "visual tagging" problem
- Optimal placement by adaptive grid-based binary programming
- Application in privacy-protected surveillance
Future work:
- occlusion from multiple objects
- ambiguity caused by similar tags

Contact: jian.zhao@uky.edu, cheung@engr.uky.edu
Visit: http://www.vis.uky.edu/mialab

