Self-calibration. Class 13. Read Chapter 6.


Self-calibration Class 13 Read Chapter 6

Assignment 3
Collect potential matches from all algorithms for all pairs (Matlab ASCII format) and exchange the data.
Implement RANSAC that uses the combined match dataset.
Compute a consistent set of matches and the epipolar geometry.
Report the thresholds used, the match sets used, the number of consistent matches obtained, and the epipolar geometry; show the matches and the epipolar geometry (plot some epipolar lines).
Due next Tuesday, Nov. 2.
Naming convention: firstname_ij.dat, e.g. chris_56.dat
[F,inliers]=FRANSAC([chris_56; brian_56; …])
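For reference, a minimal sketch of what such a FRANSAC routine could look like. Only the [F,inliers] output signature and the stacked-matches input come from the assignment; the helper names eightpoint and sampson, the default threshold and the iteration count are placeholders, and coordinate normalization is omitted for brevity.

% FRANSAC - robust fundamental matrix estimation from stacked match sets.
% matches: Nx4 array [x1 y1 x2 y2], e.g. [chris_56; brian_56; ...]
function [F, inliers] = FRANSAC(matches, thresh, niter)
if nargin < 2, thresh = 1.5; end            % inlier threshold in pixels
if nargin < 3, niter  = 2000; end
N  = size(matches, 1);
x1 = [matches(:,1:2) ones(N,1)]';           % homogeneous points, 3xN
x2 = [matches(:,3:4) ones(N,1)]';
best = 0; inliers = [];
for it = 1:niter
    s  = randperm(N, 8);                    % minimal 8-point sample
    Fs = eightpoint(x1(:,s), x2(:,s));
    d  = sampson(Fs, x1, x2);               % squared Sampson distances
    in = find(d < thresh^2);
    if numel(in) > best, best = numel(in); inliers = in; end
end
F = eightpoint(x1(:,inliers), x2(:,inliers));   % re-estimate on all inliers
end

function F = eightpoint(x1, x2)
% Unnormalized 8-point algorithm (add Hartley normalization in practice).
A = [x2(1,:)'.*x1(1,:)'  x2(1,:)'.*x1(2,:)'  x2(1,:)' ...
     x2(2,:)'.*x1(1,:)'  x2(2,:)'.*x1(2,:)'  x2(2,:)' ...
     x1(1,:)'            x1(2,:)'            ones(size(x1,2),1)];
[~,~,V] = svd(A);
F = reshape(V(:,end), 3, 3)';
[U,D,V] = svd(F); D(3,3) = 0; F = U*D*V';   % enforce rank 2
end

function d = sampson(F, x1, x2)
Fx1 = F*x1;  Ftx2 = F'*x2;
d = (sum(x2.*Fx1)).^2 ./ ...
    (Fx1(1,:).^2 + Fx1(2,:).^2 + Ftx2(1,:).^2 + Ftx2(2,:).^2);
end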

Papers
Each student should present a paper (… minutes), followed by discussion. Partially outside of the class schedule, to make up for missed classes. (When?)
A list of proposed papers will come on-line by Thursday; feel free to propose your own (suggestion: something related to your project).
Make your choice by Thursday; assignments will be made in class.
Everybody should have read the papers that are being discussed.

Papers: Chris, Nathan, Brian, Li, Chad, Seon Joo, Jason, Sudipta, Sriram, Christine.

3D photography course schedule
Aug 24, 26: Introduction / (no course)
Aug 31, Sep 2: (no course)
Sep 7, 9: (no course)
Sep 14, 16: Projective Geometry / Camera Model and Calibration (assignment 1)
Sep 21, 23: Camera Calibration and SVM / Feature matching (assignment 2)
Sep 28, 30: Feature tracking / Epipolar geometry (assignment 3)
Oct 5, 7: Computing F / Triangulation and MVG
Oct 12, 14: (university day) / (fall break)
Oct 19, 21: Stereo / Active ranging
Oct 26, 28: Structure from motion / SfM and Self-calibration
Nov 2, 4: Shape-from-silhouettes / Space carving
Nov 9, 11: 3D modeling / Appearance modeling
Nov 12: papers (2-3pm, SN115)
Nov 16, 18: (VMV'04)
Nov 23, 25: papers & discussion / (Thanksgiving)
Nov 30, Dec 2: papers & discussion / papers & discussion
Dec 3: papers (2-3pm, SN115)
Dec 7(?): Project presentations

Ideas for a project?
Chris: Wide-area display reconstruction
Nathan: ?
Brian: ?
Li: Visual hulls with occlusions
Chad: Laser scanner for 3D environments
Seon Joo: Collaborative 3D tracking
Jason: SfM for long sequences
Sudipta: Combining exact silhouettes and photoconsistency
Sriram: Panoramic camera self-calibration
Christine: Desktop lamp scanner

Dealing with dominant planar scenes
USaM fails when the common features all lie in a plane.
Solution, part 1: model selection to detect the problem (Pollefeys et al., ECCV'02).

Dealing with dominant planar scenes
USaM fails when the common features all lie in a plane.
Solution, part 2: delay ambiguous computations until after self-calibration (couple self-calibration over all 3D parts) (Pollefeys et al., ECCV'02).

Non-sequential image collections
64 images, 3792 points, 4.8 images/point.
Problem: features are lost and reinitialized as new features.
Solution: match with other close views.

Relating to more views
Sequential approach, for every view i:
  Extract features
  Compute two-view geometry i-1/i and matches
  Compute pose using a robust algorithm
  Refine existing structure
  Initialize new structure
Problem: find close views in the projective frame.
Extended approach, for every view i:
  Extract features
  Compute two-view geometry i-1/i and matches
  Compute pose using a robust algorithm
  For all close views k:
    Compute two-view geometry k/i and matches
    Infer new 2D-3D matches and add them to the list
  Refine the pose using all 2D-3D matches
  Refine existing structure
  Initialize new structure

Determining close views
If viewpoints are close, then most image changes can be modelled by a planar homography.
A qualitative distance measure is obtained from the residual error of the best possible planar homography fitted to the common matches (one possible form is sketched below).
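One possible implementation of this distance (hypothetical helper name view_distance; unnormalized DLT homography fit and median residual; a real implementation would again normalize coordinates and use only inlier matches):

% Qualitative distance between view i and view k: residual of the
% best-fitting planar homography on their common feature matches.
function d = view_distance(x1, x2)
% x1, x2: 2xN matched image points in view i and view k
n  = size(x1, 2);
X1 = [x1; ones(1,n)];
A  = zeros(2*n, 9);
for j = 1:n
    A(2*j-1,:) = [ X1(:,j)'  zeros(1,3)  -x2(1,j)*X1(:,j)' ];
    A(2*j  ,:) = [ zeros(1,3)  X1(:,j)'  -x2(2,j)*X1(:,j)' ];
end
[~,~,V] = svd(A);
H = reshape(V(:,end), 3, 3)';            % DLT estimate of the homography
p = H*X1;
p = p(1:2,:) ./ [p(3,:); p(3,:)];        % transfer view-i points into view k
d = median(sqrt(sum((p - x2).^2, 1)));   % median residual = distance measure
end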

Non-sequential image collections (2)
64 images; 3792 points at 4.8 images/point vs. 2170 points at 9.8 images/point.

Hierarchical structure and motion recovery
Compute 2-view reconstructions
Compute 3-view reconstructions
Stitch the 3-view reconstructions together
Merge and refine the reconstruction

Stitching 3-view reconstructions
Different possibilities:
1. Align (P2, P3) with (P'1, P'2)
2. Align X, X' (and C, C')
3. Minimize the reprojection error
4. MLE (merge)

Refining structure and motion
Minimize the reprojection error (written out below).
This is the Maximum Likelihood Estimate if the measurement error is zero-mean Gaussian noise.
A huge problem, but it can be solved efficiently (bundle adjustment).
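In standard notation (with x_ik the measured projection of point i in view k, and the sum taken only over points visible in each view), one common way to write this objective is:

\[
\min_{\hat{P}_k,\;\hat{X}_i}\;\sum_{k}\sum_{i} d\bigl(\mathbf{x}_{ik},\,\hat{P}_k \hat{X}_i\bigr)^2
\]

where d(·,·) is the Euclidean distance in the image; under zero-mean Gaussian noise, minimizing this sum of squared reprojection errors is indeed the maximum likelihood estimate.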

Sparse bundle adjustment
The non-linear minimization requires solving the normal equations J'J δ = J'ε, where J is the Jacobian of the reprojection error.
J has a sparse block structure: each residual depends only on one camera (11-12 parameters, giving a camera block of size 12 x m) and one 3D point (3 parameters, giving a structure block of size 3 x n, in general much larger).
The resulting normal-equation matrix has the block form [U W; W' V], with U block-diagonal over the cameras and V block-diagonal over the points.

Sparse bundle adjustment
Eliminate the dependence of the camera/motion parameters on the structure parameters via the Schur complement U - W V^-1 W', of size 11m x 11m.
Note that in general 3n >> 11m, so this allows much more efficient computation: e.g. for 100 views and 10000 points, solve a ~1000 x 1000 system instead of ~30000 x 30000.
The reduced system is often still band diagonal, so use sparse linear algebra algorithms (a toy version of the elimination step is sketched below).
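A toy, dense version of this elimination step (illustrative names; U, V, W and the right-hand sides ea, eb are the blocks of the normal equations described above, and a real implementation would exploit the 3x3 block-diagonal structure of V instead of calling inv):

% One Gauss-Newton step with the structure parameters eliminated
% via the Schur complement (dense toy version).
% U: camera block (11m x 11m), V: point block (3n x 3n, block diagonal),
% W: coupling block (11m x 3n), ea/eb: right-hand sides of the normal equations.
function [da, db] = schur_step(U, W, V, ea, eb)
Vinv = inv(V);                   % cheap in practice: V is 3x3 block diagonal
S    = U - W*Vinv*W';            % reduced camera system (11m x 11m)
da   = S \ (ea - W*Vinv*eb);     % camera/motion update
db   = Vinv*(eb - W'*da);        % back-substitution for the point update
end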

Self-calibration
Outline: Introduction, Self-calibration, Dual Absolute Quadric, Critical Motion Sequences.

Motivation
Avoid an explicit calibration procedure:
  Complex procedure
  Need for a calibration object
  Need to maintain calibration

Motivation
Allow flexible acquisition:
  No prior calibration necessary
  Possibility to vary intrinsics
  Use archive footage

Projective ambiguity
Reconstruction from uncalibrated images → projective ambiguity on the reconstruction (the scene is recovered only up to a 3D projective transformation).

Stratification of geometry
Projective (15 DOF) → Affine (12 DOF: plane at infinity, parallelism) → Metric (7 DOF: absolute conic, angles, relative distances).
More general on the projective end, more structure on the metric end.

Constraints?
Scene constraints: parallelism, vanishing points, horizon, ...; distances, positions, angles, ... Unknown scene → no constraints.
Camera extrinsics constraints: pose, orientation, ... Unknown camera motion → no constraints.
Camera intrinsics constraints: focal length, principal point, aspect ratio & skew. The perspective camera model is too general → some constraints.

Euclidean projection matrix
Factorization of the Euclidean projection matrix into intrinsics K (camera geometry) and extrinsics R, t (camera motion), written out below.
Note: every projection matrix can be factorized this way, but the factorization is only meaningful for Euclidean projection matrices.
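Written out in one common convention (this is the standard textbook factorization, not something specific to these slides):

\[
P = K\,[\,R \mid \mathbf{t}\,],\qquad
K = \begin{bmatrix} f_x & s & c_x\\ 0 & f_y & c_y\\ 0 & 0 & 1\end{bmatrix}
\]

with K the intrinsics (focal lengths f_x, f_y, skew s, principal point (c_x, c_y)) and R, t the extrinsics (rotation and translation from the world frame to the camera frame).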

Constraints on intrinsic parameters
Constant, e.g. a fixed camera: K_i = K_j for all views i, j.
Known, e.g. rectangular pixels: s = 0; square pixels: s = 0 and f_x = f_y; principal point known: (c_x, c_y) given (e.g. the image center).

Self-calibration
Upgrade from projective structure to metric structure using constraints on the intrinsic camera parameters:
Constant intrinsics (Faugeras et al. ECCV'92, Hartley '93, Triggs '97, Pollefeys et al. PAMI '99, ...)
Some known intrinsics, others varying (Heyden & Åström CVPR'97, Pollefeys et al. ICCV'98, ...)
Constraints on the intrinsics combined with restricted motion, e.g. pure translation, pure rotation, planar motion (Moons et al. '94, Hartley '94, Armstrong ECCV'96, ...)

A counting argument
To go from projective (15 DOF) to metric (7 DOF), at least 8 constraints are needed.
The minimal sequence length m must satisfy a corresponding counting bound (one common form is given below).
This is independent of the algorithm, and assumes general motion (i.e. not critical).
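One common way of writing the bound (with n_known the number of intrinsics known in every view and n_const the number of intrinsics that are constant but unknown, over m views):

\[
m\,n_{\mathrm{known}} + (m-1)\,n_{\mathrm{const}} \;\ge\; 8
\]

For example, with all five intrinsics constant but unknown (n_known = 0, n_const = 5) this gives 5(m-1) ≥ 8, i.e. at least 3 views; with only a varying focal length unknown (n_known = 4, n_const = 0) already 2 views suffice.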

Outline: Introduction, Self-calibration, Dual Absolute Quadric, Critical Motion Sequences.

The Dual Absolute Quadric
The absolute dual quadric Ω*∞ is a fixed conic under the projective transformation H iff H is a similarity.
1. 8 dof
2. The plane at infinity π∞ is the null vector of Ω*∞
3. Angles between planes π1, π2: cos θ = (π1' Ω*∞ π2) / sqrt((π1' Ω*∞ π1)(π2' Ω*∞ π2))

Absolute Dual Quadric and Self-calibration
Eliminate the extrinsics from the equation: the constraint is equivalent to the projection of the Dual Absolute Quadric into each view.
The Dual Absolute Quadric also exists in the projective world.
Transforming the world so that Ω*∞ takes its canonical form diag(1,1,1,0) reduces the ambiguity to a similarity.

** ** projection constraints Absolute conic = calibration object which is always present but can only be observed through constraints on the intrinsics Absolute Dual Quadric and Self-calibration Projection equation: Translate constraints on K through projection equation to constraints on  *

Constraints on Ω*∞
condition                              constraint type   #constraints
Zero skew                              quadratic         m
Principal point                        linear            2m
Zero skew (& p.p.)                     linear            m
Fixed aspect ratio (& p.p. & skew)     quadratic         m-1
Known aspect ratio (& p.p. & skew)     linear            m
Focal length (& p.p. & skew)           linear            m

Linear algorithm (Pollefeys et al., ICCV'98/IJCV'99)
Assume everything is known except the focal length.
This yields 4 constraints per image (written out below).
Note that the rank-3 constraint on Ω*∞ is not enforced.
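With zero skew, unit aspect ratio and the principal point at the origin, ω*_i ∼ diag(f_i², f_i², 1), so eliminating the unknown scale from the projection equation gives, per view (a sketch of the standard form; the subscripts denote entries of P_i Ω*∞ P_i'):

\[
(P_i\Omega^{*}_{\infty}P_i^{\top})_{11} = (P_i\Omega^{*}_{\infty}P_i^{\top})_{22},\quad
(P_i\Omega^{*}_{\infty}P_i^{\top})_{12} = 0,\quad
(P_i\Omega^{*}_{\infty}P_i^{\top})_{13} = 0,\quad
(P_i\Omega^{*}_{\infty}P_i^{\top})_{23} = 0
\]

i.e. four equations that are linear in the entries of Ω*∞, to be stacked over all views and solved by SVD.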

Linear algorithm revisited (Pollefeys et al., ECCV'02)
Soft assumptions on the intrinsics (typically: principal point near the image center, aspect ratio close to 1, zero skew, focal length within a plausible range) are encoded as weighted linear equations on Ω*∞.

Projective to metric
Compute the rectifying transformation T from the eigenvalue decomposition of Ω*∞ ≈ T diag(1,1,1,0) T', and then obtain the metric reconstruction as P_M = P T and X_M = T^-1 X (a sketch is given below).
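A sketch of this upgrade step (illustrative only; Omega is the estimated 4x4 dual absolute quadric, Ps a cell array of projective cameras and X the 4xN homogeneous projective points, all hypothetical variable names):

% Upgrade a projective reconstruction to metric given the estimated DAQ.
Omega = (Omega + Omega')/2;                % symmetrize
[V, D] = eig(Omega);
d = diag(D);
if sum(d) < 0, d = -d; end                 % Omega is only defined up to scale/sign
[d, idx] = sort(d, 'descend');  V = V(:, idx);
d(4) = 1;                                  % smallest eigenvalue should be ~0 (rank 3)
T = V * diag(sqrt(max(d, eps)));           % Omega ~ T*diag(1,1,1,0)*T'
Pm = cellfun(@(P) P*T, Ps, 'UniformOutput', false);   % metric cameras
Xm = T \ X;                                % metric structure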

Alternatives: (dual) image of the absolute conic
Equivalent to the Absolute Dual Quadric.
Practical when H∞ can be computed first: pure rotation (Hartley '94, Agapito et al. '98, '99), vanishing points, pure translations, modulus constraint, ...

Note that in the absence of skew the IAC can be more practical than the DIAC!

Kruppa equations
Limit the equations to the epipolar geometry.
Only 2 independent equations per pair, but independent of the plane at infinity.
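For constant intrinsics, one common way of writing the Kruppa equations in terms of the fundamental matrix F and the second epipole e' is (up to an unknown scale factor):

\[
F\,\omega^{*}\,F^{\top} \;\sim\; [\mathbf{e}']_{\times}\,\omega^{*}\,[\mathbf{e}']_{\times}^{\top},
\qquad \omega^{*} = K K^{\top}
\]

Eliminating the unknown scale from these symmetric 3x3 matrices leaves the two independent equations per view pair mentioned above; note that the plane at infinity never appears.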

Refinement
Metric bundle adjustment: enforce the constraints or priors on the intrinsics during the minimization (this is what "self-calibration" means to a photogrammetrist).

Outline: Introduction, Self-calibration, Dual Absolute Quadric, Critical Motion Sequences.

Critical motion sequences
Self-calibration depends on the camera motion, and the motion sequence is not always general enough.
Critical motion sequences have more than one potential absolute conic satisfying all the constraints.
It is possible to derive a classification of CMS (Sturm CVPR'97, Kahl ICCV'99, Pollefeys PhD'99).

Critical motion sequences: constant intrinsic parameters
Most important cases for constant intrinsics:
critical motion type    ambiguity
pure translation        affine transformation (5 DOF)
pure rotation           arbitrary position for π∞ (3 DOF)
orbital motion          projective distortion along the rotation axis (2 DOF)
planar motion           scaling of the axis perpendicular to the plane (1 DOF)
Note the relation between critical motion sequences and restricted motion algorithms.

Critical motion sequences: varying focal length
Most important cases for varying focal length (other parameters known):
critical motion type                           ambiguity
pure rotation                                  arbitrary position for π∞ (3 DOF)
forward motion                                 projective distortion along the optical axis (2 DOF)
translation and rotation about the optical axis   scaling of the optical axis (1 DOF)
hyperbolic and/or elliptic motion              one extra solution

Critical motion sequences: algorithm dependent
Additional critical motion sequences can exist for some specific algorithms when not all constraints are enforced (e.g. not imposing the rank-3 constraint).
Kruppa equations / linear algorithm: fixating a point.
Some spheres also project to circles in the image and hence satisfy all the linear/Kruppa self-calibration constraints.

Non-ambiguous new views for CMS
Restrict the motion of the virtual camera to the CMS and use the (wrong) computed camera parameters (Pollefeys, ICCV'01).

Next class: shape from silhouettes