3D Single Image Scene Reconstruction For Video Surveillance Systems

Patrick Denis, Francisco Estrada and James Elder
pdenis@elderlab.yorku.ca

Goal

Compute 3D models of urban scenes from surveillance cameras.

Introduction

We suggest that the strong constraints of an urban environment (the "Manhattan World assumption") and the internal parameters of a single camera are sufficient to infer enough three-dimensional context to produce a 3D rendering useful for visual surveillance. The 3D model is reconstructed only up to a global scale factor.

The following steps are proposed to render an image of an urban environment as a three-dimensional model:

1) Extract the quadrilaterals from monocular video frames.
2) Infer the 3D orientation and location of the quadrilaterals.
3) Render the scene to the user.

At this time, quadrilaterals are identified interactively; we plan to automate this step in the future.

Figure 1. User interface created to define the set of quadrilaterals following the Manhattan World constraint.

Manhattan World

Typical man-made urban structures are built as rectangles that lie at right angles to each other. Urban scenes that conform to an orthogonal three-dimensional grid are said to follow a "Manhattan World assumption" [1]. Manhattan World rectangles must lie in one of the three canonical families of planes defined by the grid. Due to perspective projection, these rectangles appear in the image as general quadrilaterals.

Creating a Model

To make a 3D model from a set of identified quadrilaterals in an image we must:

1) Identify the orientation of the Manhattan World relative to the camera optical centre with the use of vanishing points.
2) Find the 3D world coordinates of an image line.
3) Find the remaining world coordinates for all defined quadrilaterals.
4) Extract the quadrilaterals in the image suitable for texture rendering.

Orientation of Lines and Vanishing Points

A vanishing point is a point in the image where the projections of parallel 3D lines intersect. The 3D orientation of these lines is given by the orientation of the line passing through the optical centre and the vanishing point [2]. The 3D orientation of an image line is therefore determined from its vanishing point; its length is determined only up to the global scale factor.

Figure 2. The line through the optical centre and the vanishing point has the same 3D orientation as the parallel lines identified as meeting at that vanishing point.

Finding World Coordinates of an Image Line

A line in the image can be generated by an infinite number of lines in space. The triangular plane defined by the two rays that begin at the optical centre of the camera and continue through the two endpoints of the line in the image is known as the interpretation plane [3]. Any line segment in space that extends from one ray to the other within the interpretation plane will project to the same image line. Each of these candidate segments is identified by its orientation and length.

Figure 3. A line in the image may project from any one of many lines in space. Each line in 3D space has a unique orientation and length L.

Using homogeneous coordinates [4] for 3D points, the world coordinates of an image line segment with known orientation are found as follows:

1) Calculate the normal to the interpretation plane using the optical centre and the two homogeneous image points of the line segment (u1 and u2).
2) Calculate the plane π using the vanishing point direction, the homogeneous image point u1, and the interpretation-plane normal.
3) Compute the intersection u3 between π and the line L2 defined by the join of u2 and the optical centre o, using Plücker notation [4].
4) Calculate the angles of the triangle defined by the optical centre o, u1 and u3.
5) Find λ and μ, the distances from the optical centre to the 3D endpoints of the line segment.

Figure 4. Necessary steps to find the world coordinates of a line, given its orientation and length.

Two code sketches of this procedure follow: the Plücker intersection of step 3, and an end-to-end version of steps 1-5.
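Step 3 uses a standard construction from [4]: the line joining two homogeneous points A and B is represented by the 4x4 Plücker matrix L = AB^T - BA^T, and the point where L meets a plane π is X = Lπ. A minimal sketch (the function and variable names are ours, not the authors'):

```python
import numpy as np

def plucker_matrix(A, B):
    """4x4 Plucker matrix of the line joining homogeneous 3-space points A and B [4]."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return np.outer(A, B) - np.outer(B, A)

def intersect_line_plane(L, pi):
    """Homogeneous point where the line L (as a Plucker matrix) meets the plane pi."""
    return L @ np.asarray(pi, float)

# Step 3: L2 joins the optical centre o = (0, 0, 0, 1) and the image point u2
# (written as a homogeneous point of 3-space, e.g. (x2, y2, f, 1)):
#   u3 = intersect_line_plane(plucker_matrix(o, u2), pi)
```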
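The full procedure can be condensed into a short numerical sketch. This is an illustration under stated assumptions, not the authors' implementation: it assumes a known intrinsic matrix K, fixes the free global scale by placing the first endpoint at a chosen distance along its ray, and replaces the explicit plane construction and Plücker intersection of steps 2-3 with an equivalent least-squares ray intersection inside the interpretation plane. All names are our own.

```python
import numpy as np

def backproject_segment(K, x1, x2, v, scale=1.0):
    """Recover the 3D endpoints of an image line segment, up to global scale.

    K      : 3x3 camera intrinsic matrix (assumed known)
    x1, x2 : segment endpoints in pixels, e.g. np.array([u, v])
    v      : vanishing point (pixels) of the segment's 3D direction
    scale  : fixes the free global scale (distance from o to the first endpoint)
    """
    Kinv = np.linalg.inv(K)
    to_h = lambda p: np.array([p[0], p[1], 1.0])

    # Rays from the optical centre o = (0, 0, 0) through the image points.
    r1 = Kinv @ to_h(x1)
    r2 = Kinv @ to_h(x2)

    # 3D direction of the segment from its vanishing point [2].
    d = Kinv @ to_h(v)
    d /= np.linalg.norm(d)

    # Step 1: interpretation-plane normal (shown for reference; the
    # least-squares shortcut below does not need it explicitly).
    n = np.cross(r1, r2)

    # Fix the first endpoint on ray L1 at the chosen scale.
    P1 = scale * r1 / np.linalg.norm(r1)

    # The 3D segment is the line through P1 with direction d; it meets ray L2
    # (through o along r2) inside the interpretation plane. Solve
    # P1 + t*d = s*r2 in the least-squares sense (exact up to noise).
    A = np.stack([r2, -d], axis=1)           # 3x2 system [r2, -d] @ [s, t]^T = P1
    (s, t), *_ = np.linalg.lstsq(A, P1, rcond=None)
    P2 = s * r2

    lam = np.linalg.norm(P1)                 # lambda: distance from o to endpoint 1
    mu = np.linalg.norm(P2)                  # mu: distance from o to endpoint 2
    return P1, P2, lam, mu
```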
Texture Mapping

For purposes of texture mapping, we estimate a homography for each quadrilateral in the image that maps pixels from the image to a rectangle in the rendered environment (see the sketch at the end of this page).

Figure 5. A homography maps an image quadrilateral to a rectangle.

Results

Scenes reconstructed with this method include the Health, Nursing & Environmental Studies building and the Ross building at York University.

Conclusion

The effectiveness of urban visual surveillance may be enhanced if activities can be rendered within a three-dimensional context. Preliminary results show that semi-automatic inference of 3D models of urban buildings from monocular imagery is feasible, based on the Manhattan World assumption and the laws of perspective projection. Models are inferred up to a global scale factor.

References

[1] Coughlan, J. and Yuille, A.L., "Manhattan World: Orientation and Outlier Detection by Bayesian Inference," Neural Computation, vol. 15, no. 5, pp. 1063-1088, 2003.
[2] Collins, R.T. and Weiss, R.S., "Vanishing Point Calculation as a Statistical Inference on the Unit Sphere," in Proceedings of the Third International Conference on Computer Vision, pp. 400-403, December 1990.
[3] Barnard, S.T., "Interpreting Perspective Images," Artificial Intelligence, vol. 21, pp. 435-462, 1982.
[4] Hartley, R. and Zisserman, A., Multiple View Geometry in Computer Vision, Cambridge University Press, Second Edition, 2003.
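Returning to the Texture Mapping section above: the following is a minimal direct linear transform (DLT) sketch for estimating the 3x3 homography that maps the four corners of an image quadrilateral onto a target rectangle. It is an illustration, not the authors' code; the corner ordering, rectangle size, and function name are our own.

```python
import numpy as np

def homography_from_quad(quad, width, height):
    """Estimate H mapping an image quadrilateral onto a width x height rectangle.

    quad : four (x, y) image corners, ordered to match the rectangle
           corners (0,0), (w,0), (w,h), (0,h).
    """
    dst = [(0, 0), (width, 0), (width, height), (0, height)]
    A = []
    for (x, y), (u, v) in zip(quad, dst):
        # Each correspondence contributes two rows of the DLT system A h = 0.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)

    # h is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Usage: sample the texture by warping each rectangle pixel back into the image.
# Hinv = np.linalg.inv(homography_from_quad(quad, 256, 256)) maps rectangle
# coordinates to image coordinates for bilinear sampling.
```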