Surveillance with Visual Tagging and Camera Placement. J. Zhao and S.-C. Cheung, Center for Visualization and Virtual Environment, University of Kentucky

Presentation transcript:

Surveillance with Visual Tagging and Camera Placement
J. Zhao and S.-C. Cheung, Center for Visualization and Virtual Environment, University of Kentucky

INTRODUCTION
Visual tagging:
- To identify and locate common objects across disparate camera views.
- Based on identifying "semantically rich" visual features such as faces, gaits or artificial markers.
The "camera placement" question: given a surveillance environment, how many cameras are needed, and how should they be placed, to achieve the best visual-tagging performance?
Contributions:
- A general statistical framework for calculating the visual-tagging performance of a camera network.
- An analytical solution for a single camera.
- A Monte-Carlo based solution for any placement with an arbitrary number of cameras.
- An iterative integer-programming based algorithm to compute an "optimal" camera placement.
- An application in a "privacy-protected" camera network.

I. Statistical Visibility Model
Fixed parameters (easily measured):
- room topology
- cameras' intrinsic parameters
- dimensions (lengths) of a tag
- number of tags
Design parameters (we can control):
- number of cameras
- position of each camera
- orientation of each camera
Random parameters (little or no control; assume an a priori statistical model):
- position (x, y) of a tag
- orientation of a tag

II. Visibility from a single camera
The projected tag must be at least T pixels long for proper detection. Simple 2D geometry (in the paper) gives the length l of the image of the tag in closed form as a function of the tag's position and orientation relative to the camera (the formula is not reproduced in this transcript). The binary visibility function, indicating whether the tag P can be successfully detected by the camera C, is therefore 1 when l is at least T pixels and 0 otherwise.

III. Visibility for arbitrary numbers of cameras
It is unnecessary for the tag to be visible to all cameras; all it takes are TWO. Two cases:
- Uniquely identified tags (e.g. faces): need homographies between camera pairs; the tag location is obtained by intersecting epipolar lines.
- Ambiguous tags (e.g. colored tags): need full calibration; the tag location is obtained by intersecting light rays.
A tag therefore counts as visible when it is seen by at least two cameras, and the visibility metric of an arbitrary placement is evaluated by Monte-Carlo sampling over the random tag parameters (first sketch below). [Figure: visibility map, high to low.]

Optimal Camera Placement
The goal is the placement that maximizes the visibility metric. This is very challenging because the problem is nonlinear and has no analytic solution. Proposed approximate solution:
- Discretize the domain into grid points.
- Progressively refine the grid density.

I. Solving the discrete problem
Divide the environment into a lattice:
- gridP: N_P grid points for the tag.
- gridC: N_C grid points for the cameras.
The binary variable b_i indicates whether to put a camera on the i-th camera grid point. The objective function and constraint equations are not reproduced in this transcript; the constraints require each tag grid point to be visible to at least two selected cameras and allow at most one camera at each physical position. The result is a standard binary program, solved with lp_solve (second sketch below).

II. Deciding the grid density
Problem: a solution may not exist for a dense tag grid.
Adaptive algorithm:
- Start from a sparse grid lattice.
- Increase the density of gridC and gridP until either a predefined average target visibility is reached or the density of gridC exceeds a limit.
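First, a minimal sketch of the Monte-Carlo evaluation of the visibility metric for a given placement. The transcript omits the exact projected-length formula and the tag prior, so the code assumes a simple 2D pinhole approximation (focal length f in pixels, field of view fov, tag modelled as a segment of length L) and a uniform prior over the room, purely for illustration; only the overall structure (sample random tags, count cameras whose projected tag is at least T pixels, require at least two) follows the poster.

```python
# Illustrative Monte-Carlo estimate of the average visibility of a camera
# placement; the projection model and prior are assumptions, not the poster's.
import numpy as np

def projected_length(tag_pos, tag_angle, cam_pos, cam_angle, f, fov, L):
    """Approximate length (pixels) of the tag's image in one camera (2D model)."""
    d = tag_pos - cam_pos
    dist = np.linalg.norm(d)
    bearing = np.arctan2(d[1], d[0])
    # Outside the field of view: the tag is not imaged at all.
    if np.abs((bearing - cam_angle + np.pi) % (2 * np.pi) - np.pi) > fov / 2:
        return 0.0
    # Foreshortened pinhole projection of a segment of length L.
    return f * L * np.abs(np.sin(tag_angle - bearing)) / dist

def average_visibility(cams, room, L, T, f, fov, n_samples=10000, rng=None):
    """Fraction of random tags visible (>= T pixels) to at least two cameras.

    cams : list of (position (x, y), orientation angle) pairs
    room : ((xmin, xmax), (ymin, ymax)) rectangle for the uniform tag prior
    """
    rng = np.random.default_rng() if rng is None else rng
    (xmin, xmax), (ymin, ymax) = room
    visible = 0
    for _ in range(n_samples):
        tag_pos = rng.uniform([xmin, ymin], [xmax, ymax])
        tag_angle = rng.uniform(0.0, np.pi)
        n_seen = sum(projected_length(tag_pos, tag_angle, pos, ang, f, fov, L) >= T
                     for pos, ang in cams)
        if n_seen >= 2:          # visual tagging needs at least two cameras
            visible += 1
    return visible / n_samples
```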
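Second, a sketch of the binary program from "Solving the discrete problem". The poster solves it with lp_solve; scipy.optimize.milp stands in here, and the objective is assumed to be minimizing the number of cameras, since the transcript omits the actual objective function. The visibility matrix V and the position_of mapping are illustrative names, not taken from the poster.

```python
# Sketch of the camera-placement binary program under the stated assumptions.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def place_cameras(V, position_of):
    """Choose camera grid points b_i (0/1) so every tag grid point is seen by
    at least two cameras, using as few cameras as possible (assumed objective).

    V           : (N_P, N_C) 0/1 visibility matrix, V[j, i] = 1 if a camera at
                  grid point i sees the tag at grid point j.
    position_of : length-N_C array mapping each camera grid point (a
                  position/orientation pair) to its physical position index.
    """
    position_of = np.asarray(position_of)
    n_p, n_c = V.shape

    # Objective: minimize the number of cameras.
    c = np.ones(n_c)

    # Each tag grid point must be visible to at least two selected cameras.
    cover = LinearConstraint(V, lb=2 * np.ones(n_p), ub=np.inf)

    # Each physical position holds at most one camera (one orientation).
    positions = np.unique(position_of)
    A_pos = np.array([(position_of == p).astype(float) for p in positions])
    at_most_one = LinearConstraint(A_pos, lb=0, ub=1)

    res = milp(c,
               constraints=[cover, at_most_one],
               integrality=np.ones(n_c),          # all variables binary
               bounds=Bounds(0, 1))
    if not res.success:
        return None                               # no solution: refine the grid
    return np.flatnonzero(res.x > 0.5)            # indices of chosen cameras
```

If the program is infeasible (None), the adaptive algorithm above reacts by changing the grid density rather than relaxing the two-camera requirement.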
III. Results
The following figures show the results after 1, 3 and 5 iterations. [Figures: camera grid, tag grid and computed camera positions and poses, with the corresponding visibility maps and average visibility values.]

Experimental results
Simulation of optimal camera placement:
- Twelve "optimal" camera views (iteration 5) of a randomly moving humanoid with a tag.
Application in privacy-protected surveillance:
- Even though the tag is not visible in Cam3, its location is determined using epipolar geometry (sketch below).

Summary and Future work
- A generic metric model for camera placement for the "visual tagging" problem.
- Optimal placement by adaptive grid-based binary programming.
- Application in privacy-protected surveillance.
Future work: occlusion from multiple objects; ambiguity caused by similar tags.
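Returning to the privacy-protected application above: a minimal sketch of how the tag's position in a view where it is hidden (Cam3) can be recovered by intersecting the two epipolar lines induced by the views that do see it. The fundamental matrices F13 and F23 (with the convention x3^T F13 x1 = 0) and the function name are assumptions for illustration; the poster only states that epipolar lines are intersected.

```python
# Illustrative epipolar-line intersection; F13, F23 are assumed fundamental
# matrices mapping points in views 1 and 2 to epipolar lines in view 3.
import numpy as np

def locate_in_third_view(x1, x2, F13, F23):
    """x1, x2: pixel coordinates (u, v) of the tag in views 1 and 2.
    Returns the tag's pixel position in view 3."""
    x1_h = np.array([x1[0], x1[1], 1.0])     # homogeneous coordinates
    x2_h = np.array([x2[0], x2[1], 1.0])
    l1 = F13 @ x1_h                          # epipolar line of x1 in view 3
    l2 = F23 @ x2_h                          # epipolar line of x2 in view 3
    p = np.cross(l1, l2)                     # intersection of the two lines
    return p[:2] / p[2]                      # back to pixel coordinates
```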