What Does the Scene Look Like From a Scene Point? Donald Tanguay August 7, 2002 M. Irani, T. Hassner, and P. Anandan ECCV 2002.

Overview Categorization of novel view synthesis Outline of approach Planar parallax formulation Synthesizing the virtual view Practical simplification Results Assessment

Novel View Synthesis Breakdown into 3 categories: 3D reconstruction View transfer Sampling methods

3D Reconstruction Fully reconstruct scene, then render view Geometric error criteria do not translate well to errors in novel view Reconstruction and rendering occur in different coordinate systems Problems amplified with novel viewpoints significantly different from real cameras

“View Transfer” For example: 2 images, dense correspondence, and the trifocal tensor Avoids reconstruction Errors in correspondence Synthesis uses a forward warping step, which leaves holes at surface discontinuities that must be filled Problems amplified by severe changes in viewpoint

Sampling Methods E.g., lightfield and lumigraph Avoid reconstruction and correspondence Require very large sampling of view-space Data acquisition is problematic Space-time costs are impractical

Features of Their Method Avoids reconstruction, correspondence Backward (“inverse”) warping avoids holes Optimizes errors in coordinate system of novel view Handles significantly different viewpoints Small number of input images (~10)

Typical Scenario Choose a scene point V from which to look.

Imaging Geometry The black point in each image is the virtual epipole (the image of the selected center of projection, COP).

Color Consistency Test However: Only one correspondence is known – the virtual epipole. All other correspondences are unknown because the imaged lines lie in different coordinate systems. If the projections were aligned, matching would determine the correct color.

Overview of Approach Choose virtual viewpoint V (a scene point) For each pixel in the virtual image: –Calculate the line of sight L –Map images of L into a common coordinate system –Stack the colorings of L for comparison –Select the first consistent color as the color of the pixel

Mapping onto a Common Coordinate System One camera is selected as the reference camera R. Projections of L in all other cameras will be mapped into R’s image.

Imaging Camera 1 Transform the line in C1 into R by the homography induced by the ground plane.
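Applying a plane-induced homography to image points can be sketched in a few lines of numpy. The matrix H below is an arbitrary illustrative homography, not one recovered from the paper's imagery:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an array of 2D points (N x 2)."""
    pts = np.asarray(pts, dtype=float)
    # Lift to homogeneous coordinates: (x, y) -> (x, y, 1).
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    # Dehomogenize: divide by the third coordinate.
    return mapped[:, :2] / mapped[:, 2:3]

# Illustrative plane-induced homography (values are made up):
H = np.array([[1.0, 0.1,   5.0],
              [0.0, 1.2,  -3.0],
              [0.0, 0.001, 1.0]])

warped = warp_points(H, [[10.0, 20.0], [0.0, 0.0]])
```

Warping every pixel of camera Ci's image of the line by its ground-plane homography is what places all the lines of sight into R's image plane.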

Imaging Camera 3 Geometrically, the homography displaces each pixel in Ci as though the corresponding 3D point were on the ground plane. The “piercing point” always maps to the same point in R.

Pencil of Lines in Reference Camera After plane alignment, the lines in the reference camera fan from the imaged piercing point to the virtual epipoles.

Projective Geometry Review Homography: –A.k.a. collineation, projective transformation –In P2: a 3×3 matrix with 8 degrees of freedom Points and lines: –Point x lies on line l iff xᵀl = 0 –Intersection of lines l and m is the point p = l × m –Line joining points p and q is the line l = p × q
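These point/line identities translate directly into cross products on homogeneous 3-vectors. A minimal numpy sketch, with points chosen only for illustration:

```python
import numpy as np

# Homogeneous 2D points and lines: x lies on l  iff  x . l == 0.
def join(p, q):
    """Line through two homogeneous points: l = p x q."""
    return np.cross(p, q)

def meet(l, m):
    """Intersection of two homogeneous lines: p = l x m."""
    return np.cross(l, m)

p = np.array([0.0, 0.0, 1.0])   # the origin
q = np.array([1.0, 1.0, 1.0])   # the point (1, 1)
l = join(p, q)                  # the line y = x

r = np.array([2.0, 2.0, 1.0])   # (2, 2) also lies on y = x
incident = np.isclose(l @ r, 0.0)
```

The same duality (join and meet are both cross products) is what makes the line-of-sight constructions in the paper so compact.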

Line Configuration In R’s image plane, what is the relationship between blue and red lines?

Line Alignment Given real epipoles ei and virtual epipoles vi: for any axis point pV, Mi is the projective transformation that brings each line li into alignment with lR.
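Each Mi is fixed by three corresponding points on the two lines (real epipole, virtual epipole, axis point). As a generic sketch of that construction, the following builds a 1D projective map from three scalar correspondences by routing through the canonical points 0, 1, ∞; the numbers are illustrative, not taken from the paper:

```python
import numpy as np

def homog1d(src, dst):
    """2x2 matrix M of the 1D projective map sending the three
    scalars src[i] to dst[i], acting on homogeneous pairs [x, 1]."""
    def to_canonical(a, b, c):
        # Matrix of the map sending a -> 0, b -> 1, c -> infinity.
        return np.array([[b - c, -a * (b - c)],
                         [b - a, -c * (b - a)]])
    A = to_canonical(*src)
    B = to_canonical(*dst)
    return np.linalg.inv(B) @ A

def apply1d(M, x):
    u, w = M @ np.array([x, 1.0])
    return u / w

# Three correspondences pin down the map (projective line has 3 DOF):
M = homog1d([0.0, 1.0, 3.0], [2.0, 5.0, 4.0])
```

With the three correspondences fixed, every other point on line li is carried to its aligned position on lR by the same M.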

Virtual View Hsyn is the homography between the synthesized view and the reference view R. Position is fixed by the virtual epipoles. Free parameters (can be specified in Hsyn): –Orientation (look direction) –Intrinsic parameters (e.g., zoom)

Virtual Epipoles In an uncalibrated setting, the position of the virtual camera can be specified in several ways: Manually pin-point the same scene point in all cameras. Pin-point it in two images and geometrically infer it in the others using “trifocal constraints.” Pin-point it in one image and use correlation techniques to find its correspondence in the others.

Relating Virtual to Reference

Algorithm Outline For each pixel p in the synthesized image: Find the imaged piercing point pv = Hsyn·p. Align all imaged lines of sight using the line-to-line transformations Mi. Find the first color-consistent column. Assign the pixel this color.
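The outline can be sketched as a loop skeleton. The helper callables (aligned_columns, is_consistent) are hypothetical stand-ins for the paper's line-alignment and consistency machinery, not its actual implementation:

```python
import numpy as np

def synthesize(width, height, H_syn, aligned_columns, is_consistent):
    """Per-pixel skeleton of the backward-warping synthesis loop."""
    out = np.zeros((height, width, 3))
    for y in range(height):
        for x in range(width):
            # 1. Imaged piercing point pv = Hsyn . p
            p = np.array([x, y, 1.0])
            pv = H_syn @ p
            pv = pv[:2] / pv[2]
            # 2-3. Walk the aligned colorings of this line of sight
            # and take the first color-consistent column.
            for column in aligned_columns(pv):
                if is_consistent(column):
                    out[y, x] = np.median(column, axis=0)
                    break
    return out

# Toy demo: every line of sight yields one identical, consistent column.
demo = synthesize(2, 2, np.eye(3),
                  lambda pv: [np.full((3, 3), 100.0)],
                  lambda col: True)
```

Because each output pixel pulls its color from the source images (backward warping), no holes appear in the synthesized view.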

Color Consistency Assume Lambertian objects. A is an (n+1)×3 matrix holding the column of colors in YIQ color space. λ is the maximal eigenvalue of the covariance matrix of A. Select the first column with λ under a threshold. Paint with the median color of that column.
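A minimal sketch of this test, using the plain covariance of the stacked colors; the rgb_to_yiq matrix is the standard NTSC conversion, and the thresholding is left to the caller:

```python
import numpy as np

def rgb_to_yiq(rgb):
    """Standard NTSC RGB -> YIQ conversion."""
    M = np.array([[0.299,  0.587,  0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523,  0.312]])
    return rgb @ M.T

def consistency_score(column):
    """Max eigenvalue of the covariance of an (n+1) x 3 color column;
    a small value means the cameras agree on this candidate point."""
    cov = np.cov(np.asarray(column, dtype=float), rowvar=False)
    return float(np.max(np.linalg.eigvalsh(cov)))

agree = consistency_score([[0.5, 0.5, 0.5]] * 4)   # identical colors
disagree = consistency_score([[0.0, 0.0, 0.0],
                              [1.0, 1.0, 1.0],
                              [0.0, 1.0, 0.0]])
```

The maximal eigenvalue measures the spread of the colors along their most variable direction, so identical colors score zero and mismatched colors score high.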

Important Details Local “smoothing”: they prefer color-consistent columns whose 3D position is spatially consistent with that of neighboring pixels. Uniform regions: they flag used pixels in the source images to prevent repeated matching. Pixel scanning order: they evaluate physical points closer to the ground plane first, then farther. Ground subtraction: except for the piercing point, remove the ground plane from the coloring stack.

Practical Simplification Cameras are coplanar Real epipoles lie on a line in R Rectifying R into the “nadir view” sends the line of epipoles to infinity The Mi line-to-line transformations become affine – simple linear stretching of the lines
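In the affine case each Mi reduces to x' = a·x + b, so two correspondences determine it (versus three in the projective case). A tiny sketch with made-up numbers:

```python
def affine1d(src, dst):
    """Affine map x' = a*x + b sending src[0] -> dst[0] and
    src[1] -> dst[1]; enough once the epipoles are at infinity."""
    a = (dst[1] - dst[0]) / (src[1] - src[0])
    b = dst[0] - a * src[0]
    return a, b

# Two correspondences fix the stretch and offset: here x' = 2*x + 5.
a, b = affine1d([0.0, 10.0], [5.0, 25.0])
```

This is why the rectified (nadir-view) formulation is cheap: aligning each line becomes a single multiply-add per sample instead of a projective division.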

Synthetic Scene Extreme change in viewpoint Objects are seen through the gate, while the source images show only the floor through the gate

Color Analysis

Folder Scene Off-the-shelf digital camera, constant-height tripod Triangle occludes distant folder 11 images used for (e)

Puppet Scene Green smear on lower left of (e): “This floor region was not visible in any of the input images.” 9 images used for (e)

Assessment +Interesting use of projective, epipolar geometry +Needs only weak calibration -Needs failure analysis -How to define Hsyn? -An explicit notion of visibility could help -Manual selection among source images? -Observation: no occlusions in source imagery – hmm…