CSCE 641 Computer Graphics: Image-based Modeling (Cont.) Jinxiang Chai

Image-based Modeling and Rendering [Diagram: spectrum between images (image-based rendering) and a full model (image-based modeling); inputs are images, user input, and range scans; intermediate representations include light field, panorama, images + depth, camera + geometry, geometry + images, and geometry + materials; the model end adds kinematics, dynamics, etc.]

Stereo reconstruction Given two or more images of the same scene or object, compute a representation of its shape from known camera viewpoints.

Stereo reconstruction Given two or more images of the same scene or object, compute a representation of its shape from known camera viewpoints. How to estimate camera parameters? - where is the camera? - where is it pointing? - what are internal parameters, e.g. focal length?

Spectrum of IBMR [Diagram: the same spectrum — inputs (images, user input, range scans) mapped to representations ranging from images, light field, panorama, images + depth, camera + geometry, geometry + images, geometry + materials, up to a full model with kinematics, dynamics, etc.; image-based rendering at one end, image-based modeling at the other.]

How can we estimate the camera parameters?

Camera calibration Augmented pinhole camera - focal point, orientation - focal length, aspect ratio, center, lens distortion Classical calibration - known 3D-to-2D correspondences Camera calibration online resources

Camera and calibration target

Classical camera calibration Known 3D coordinates and 2D coordinates - known 3D points on calibration targets - find corresponding 2D points in image using feature detection algorithm
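
A minimal sketch of this classical pipeline, assuming a planar chessboard target and OpenCV; the board dimensions, square size, and file names are illustrative assumptions, not from the course.

```python
# Illustrative sketch: classical calibration from a chessboard target.
# Known 3D points on the target + detected 2D corners -> intrinsics and poses.
import cv2
import numpy as np

pattern = (9, 6)          # inner corners per row/column (assumed board)
square_size = 25.0        # mm (assumed)

# Known 3D points on the target (Z = 0 plane), identical for every view
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in ["calib_01.png", "calib_02.png"]:    # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)  # 2D feature detection
    if found:
        obj_points.append(obj)
        img_points.append(corners)
        image_size = gray.shape[::-1]            # (width, height)

# Solve for intrinsics K, lens distortion, and per-view extrinsics (R, t)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms, "\nK =\n", K)
```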

Camera parameters Known 3D coordinates and 2D coordinates are related through the viewport projection, perspective projection, and view transformation:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \;\sim\; \underbrace{\begin{bmatrix} s_x & \alpha & u_0 \\ 0 & s_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{intrinsic camera parameters (5)}} \; \underbrace{\left[\, R \;|\; t \,\right]}_{\text{extrinsic camera parameters (6)}} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

Intrinsic camera parameters (5 parameters); extrinsic camera parameters (6 parameters).

Camera matrix Fold intrinsic calibration matrix K and extrinsic pose parameters (R,t) together into a camera matrix M = K [R | t ] (put 1 in lower r.h. corner for 11 d.o.f.)
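
A tiny numpy sketch of this composition; the numeric values chosen for K, R, and t below are made up for illustration.

```python
# Illustrative sketch: fold intrinsics K and pose (R, t) into M = K [R | t].
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # focal/scale, skew, principal point (assumed)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # camera aligned with the world axes (assumed)
t = np.array([[0.0], [0.0], [5.0]])     # world origin 5 units in front of the camera (assumed)

M = K @ np.hstack([R, t])               # 3x4 camera matrix, 11 d.o.f. up to scale

# Project a homogeneous 3D point and dehomogenize
X = np.array([1.0, 2.0, 10.0, 1.0])
u, v, w = M @ X
print(u / w, v / w)
```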

Camera matrix calibration Directly estimate 11 unknowns in the M matrix using known 3D points (X_i, Y_i, Z_i) and measured feature positions (u_i, v_i)

Camera matrix calibration Linear regression: Bring denominator over, solve set of (over-determined) linear equations. How?

Camera matrix calibration Linear regression: Bring denominator over, solve set of (over-determined) linear equations. How? Least squares (pseudo-inverse) - 11 unknowns (up to scale) - 2 equations per point (homogeneous coordinates) - 6 points are sufficient
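
A minimal numpy sketch of this linear least-squares (DLT-style) estimate; the function name and data layout are illustrative assumptions.

```python
# Illustrative sketch: direct linear calibration of the 3x4 camera matrix M.
# Each correspondence (X_i, u_i) contributes two homogeneous linear equations in
# the 12 entries of M; the least-squares solution (up to scale) is the right
# singular vector of the stacked system with the smallest singular value.
import numpy as np

def calibrate_dlt(X, x):
    """X: (N,3) known 3D points, x: (N,2) measured image points, N >= 6."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        P = [Xw, Yw, Zw, 1.0]
        rows.append(P + [0.0] * 4 + [-u * p for p in P])
        rows.append([0.0] * 4 + P + [-v * p for p in P])
    A = np.asarray(rows)                      # (2N, 12)
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1].reshape(3, 4)                  # null-space solution, up to scale
    return M / np.linalg.norm(M[2, :3])       # common normalization (sign ambiguity remains)

# Usage with hypothetical data:
# M = calibrate_dlt(points_3d, points_2d)
```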

Nonlinear camera calibration Perspective projection: the 2D coordinates are just a nonlinear function of the 3D coordinates and the camera parameters:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \;\sim\; K \left[\, R \;|\; t \,\right] P, \qquad u = f(K, R, t, P), \quad v = g(K, R, t, P)$$
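
A small numpy sketch of this mapping; the explicit (K, R, t) parameterization is an assumption for brevity.

```python
# Illustrative sketch: 2D image coordinates as a nonlinear function of the
# 3D point and the camera parameters (K, R, t).
import numpy as np

def project(K, R, t, P):
    """K: 3x3 intrinsics, R: 3x3 rotation, t: (3,) translation, P: (3,) world point."""
    Xc = R @ P + t                 # world -> camera coordinates
    x = K @ Xc                     # camera -> homogeneous image coordinates
    return x[:2] / x[2]            # perspective divide: the nonlinear step

# The division by the depth-dependent third coordinate is what makes
# calibration from these equations a nonlinear problem.
```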

Multiple calibration images Find camera parameters which satisfy the constraints from M images and N points:

$$u_{ij} = f(K, R_j, t_j, P_i), \quad v_{ij} = g(K, R_j, t_j, P_i) \qquad \text{for } j = 1, \dots, M,\; i = 1, \dots, N$$

This can be formulated as a nonlinear optimization problem:

$$\min_{K,\, \{R_j, t_j\}} \; \sum_{j=1}^{M} \sum_{i=1}^{N} \Big( \big(u_{ij} - f(K, R_j, t_j, P_i)\big)^2 + \big(v_{ij} - g(K, R_j, t_j, P_i)\big)^2 \Big)$$

Solve the optimization using nonlinear optimization techniques: - Gauss-Newton - Levenberg-Marquardt
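
A compact sketch of this formulation with scipy.optimize.least_squares (which provides trust-region and Levenberg-Marquardt solvers). Only a single shared focal length plus per-image poses are refined here; the parameter packing, fixed principal point (cx, cy), and data names are illustrative assumptions.

```python
# Illustrative sketch: refine a focal length and per-image poses from known
# 3D target points and their measured 2D projections in M images.
# Rotations are parameterized as Rodrigues vectors via scipy for brevity.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, pts3d, pts2d_per_image, cx, cy):
    f = params[0]
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
    res = []
    for j, pts2d in enumerate(pts2d_per_image):
        rvec = params[1 + 6 * j: 4 + 6 * j]
        tvec = params[4 + 6 * j: 7 + 6 * j]
        R = Rotation.from_rotvec(rvec).as_matrix()
        Xc = pts3d @ R.T + tvec                  # world -> camera
        proj = Xc @ K.T
        proj = proj[:, :2] / proj[:, 2:3]        # perspective divide
        res.append((proj - pts2d).ravel())
    return np.concatenate(res)

# Usage with hypothetical data (pts3d: (N,3), pts2d_per_image: list of (N,2)):
# x0 = np.concatenate([[800.0]] + [np.zeros(6) for _ in pts2d_per_image])
# sol = least_squares(residuals, x0, args=(pts3d, pts2d_per_image, 320.0, 240.0))
```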

Nonlinear approach Advantages: can solve for more than one camera pose at a time; fewer degrees of freedom than the linear approach; standard technique in photogrammetry, computer vision, and computer graphics - [Tsai 87] also estimates lens distortions (CMU) Disadvantages: more complex update rules; needs a good initialization (recover K [R | t] from M)

How can we estimate the camera parameters?

Application: camera calibration for sports video [Farin et al.] [Figures: input images and court model]

Calibration from 2D motion Structure from motion (SFM) - track points over a sequence of images - estimate 3D positions and camera positions - calibrate intrinsic camera parameters beforehand Self-calibration: - solve for both intrinsic and extrinsic camera parameters

SFM = Holy Grail of 3D Reconstruction Take a movie of an object Reconstruct a 3D model Would be highly viable commercially

How to Get Feature Correspondences Feature-based approach - good for images - feature detection (corners) - feature matching using RANSAC (epipolar line) Pixel-based approach - good for video sequences - patch-based registration with the Lucas-Kanade algorithm - register features across the entire sequence
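
As an illustration of the feature-based route (not the specific tools named in the course), here is a short OpenCV sketch using ORB corners and a RANSAC fundamental-matrix check; the file names are hypothetical.

```python
# Illustrative sketch: detect corners, match descriptors, and keep only matches
# consistent with the epipolar geometry via RANSAC on the fundamental matrix.
import cv2
import numpy as np

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC fit of the fundamental matrix rejects matches far from their epipolar lines
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
good1, good2 = pts1[inlier_mask.ravel() == 1], pts2[inlier_mask.ravel() == 1]
```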

Structure from Motion Two Principal Solutions Bundle adjustment (nonlinear optimization) Factorization (SVD, through orthographic approximation, affine geometry)

Nonlinear Approach for SFM What’s the difference between camera calibration and SFM?

Nonlinear Approach for SFM What’s the difference between camera calibration and SFM? - camera calibration: known 3D and 2D

Nonlinear Approach for SFM What’s the difference between camera calibration and SFM? - camera calibration: known 3D and 2D - SFM: unknown 3D and known 2D

Nonlinear Approach for SFM What’s the difference between camera calibration and SFM? - camera calibration: known 3D and 2D - SFM: unknown 3D and known 2D - what’s the 3D-to-2D registration problem?

SFM: Bundle Adjustment SFM = nonlinear least-squares problem Minimize through: - gradient descent - conjugate gradient - Gauss-Newton - Levenberg-Marquardt (the most common method) Prone to local minima
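
A condensed sketch of bundle adjustment as a nonlinear least-squares problem using scipy.optimize.least_squares; the parameter packing, a single known intrinsic matrix K, and the data layout are assumptions for illustration, not the course's implementation.

```python
# Illustrative sketch: jointly refine camera poses and 3D points by minimizing
# reprojection error. K is assumed known; rotations use Rodrigues vectors.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def ba_residuals(params, K, cam_idx, pt_idx, obs_2d, n_cams, n_pts):
    cams = params[:6 * n_cams].reshape(n_cams, 6)      # [rvec | tvec] per camera
    pts = params[6 * n_cams:].reshape(n_pts, 3)        # 3D points
    R = Rotation.from_rotvec(cams[cam_idx, :3]).as_matrix()   # (n_obs, 3, 3)
    Xc = np.einsum('nij,nj->ni', R, pts[pt_idx]) + cams[cam_idx, 3:]
    proj = Xc @ K.T
    proj = proj[:, :2] / proj[:, 2:3]                  # perspective divide
    return (proj - obs_2d).ravel()

# Usage with hypothetical data:
#   cam_idx[k], pt_idx[k] say which camera/point produced observation obs_2d[k]
# x0 = np.concatenate([cams0.ravel(), pts0.ravel()])
# sol = least_squares(ba_residuals, x0,
#                     args=(K, cam_idx, pt_idx, obs_2d, n_cams, n_pts))
# Prone to local minima, so cams0/pts0 must come from a reasonable initialization.
```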

Count # Constraints vs. # Unknowns M camera poses N points 2MN point constraints 6M+3N unknowns Suggests: need 2MN ≥ 6M + 3N But: Can we really recover all parameters???

Count # Constraints vs. # Unknowns M camera poses N points 2MN point constraints 6M+3N unknowns (known intrinsic camera parameters) Suggests: need 2MN ≥ 6M + 3N But: Can we really recover all parameters??? Can’t recover origin, orientation (6 params) Can’t recover scale (1 param) Thus, we need 2MN ≥ 6M + 3N − 7
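
For example, with illustrative values (not from the slides), three views of twenty tracked points are enough:

$$M = 3,\ N = 20:\qquad 2MN = 120 \;\ge\; 6M + 3N - 7 = 18 + 60 - 7 = 71,$$

whereas a single view of the same twenty points gives only $2MN = 40 < 6 + 60 - 7 = 59$ and is under-constrained.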

Are We Done? No, bundle adjustment has many local minima.

SFM Using Factorization Assume an orthographic camera, so each image measurement is a linear function of the world point:

$$\underbrace{\begin{bmatrix} u_{fp} \\ v_{fp} \end{bmatrix}}_{\text{image}} = \begin{bmatrix} \mathbf{i}_f^T \\ \mathbf{j}_f^T \end{bmatrix} \underbrace{\mathbf{P}_p}_{\text{world}} + \mathbf{t}_f$$

Subtract the mean of each frame's points so that the translation drops out:

$$\begin{bmatrix} \tilde{u}_{fp} \\ \tilde{v}_{fp} \end{bmatrix} = \begin{bmatrix} \mathbf{i}_f^T \\ \mathbf{j}_f^T \end{bmatrix} \tilde{\mathbf{P}}_p, \qquad \tilde{\mathbf{P}}_p = \mathbf{P}_p - \bar{\mathbf{P}}$$

SFM Using Factorization Stack all the features from the same frame:

$$W_f = \begin{bmatrix} \tilde{u}_{f1} & \cdots & \tilde{u}_{fN} \\ \tilde{v}_{f1} & \cdots & \tilde{v}_{fN} \end{bmatrix} = M_f \begin{bmatrix} \tilde{\mathbf{P}}_1 & \cdots & \tilde{\mathbf{P}}_N \end{bmatrix}, \qquad M_f = \begin{bmatrix} \mathbf{i}_f^T \\ \mathbf{j}_f^T \end{bmatrix}$$

Stack all the features from all the images:

$$W = \begin{bmatrix} W_1 \\ \vdots \\ W_F \end{bmatrix} = \underbrace{\begin{bmatrix} M_1 \\ \vdots \\ M_F \end{bmatrix}}_{2F \times 3} \underbrace{\begin{bmatrix} \tilde{\mathbf{P}}_1 & \cdots & \tilde{\mathbf{P}}_N \end{bmatrix}}_{3 \times N} = M\,S$$

Factorize the matrix into two matrices using SVD (keep the three largest singular values):

$$W = U \Sigma V^T \approx U_3 \Sigma_3 V_3^T, \qquad \hat{M} = U_3 \Sigma_3^{1/2}, \quad \hat{S} = \Sigma_3^{1/2} V_3^T \quad (\text{one common split})$$

How to compute the matrix Q that turns $\hat{M}, \hat{S}$ into the true $M = \hat{M} Q$ and $S = Q^{-1} \hat{S}$?
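
A small numpy sketch of these two stacking/factorization steps, assuming the tracked coordinates are given as F×N arrays u and v (a hypothetical layout); the metric upgrade via Q is handled in the fuller sketch after the algorithm summary below.

```python
# Illustrative sketch: stack centered feature tracks into W and keep a rank-3 SVD.
import numpy as np

def measurement_matrix(u, v):
    """u, v: (F, N) tracked feature coordinates."""
    W = np.empty((2 * u.shape[0], u.shape[1]))
    W[0::2], W[1::2] = u, v                       # rows 2f, 2f+1 hold frame f
    return W - W.mean(axis=1, keepdims=True)      # subtract each frame's mean

def rank3_factor(W):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M_hat = U[:, :3] * np.sqrt(s[:3])             # 2F x 3 "motion" factor
    S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]      # 3 x N "shape" factor
    return M_hat, S_hat                           # W ~= M_hat @ S_hat
```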

SFM Using Factorization M is the stack of the per-frame rotation (sub)matrices:

$$M = \begin{bmatrix} M_1 \\ \vdots \\ M_F \end{bmatrix}, \qquad M_f = \begin{bmatrix} \mathbf{i}_f^T \\ \mathbf{j}_f^T \end{bmatrix}$$

Orthogonal constraints from the rotation matrices

SFM Using Factorization Orthogonal constraints from rotation matrices: after the metric upgrade by Q, the rows $\hat{\mathbf{i}}_f^T, \hat{\mathbf{j}}_f^T$ of $\hat{M}$ must come from rotations, so for every frame f

$$\hat{\mathbf{i}}_f^T Q Q^T \hat{\mathbf{i}}_f = 1, \qquad \hat{\mathbf{j}}_f^T Q Q^T \hat{\mathbf{j}}_f = 1, \qquad \hat{\mathbf{i}}_f^T Q Q^T \hat{\mathbf{j}}_f = 0$$

QQ^T: symmetric 3 by 3 matrix How to compute QQ^T? Least-squares solution - 3F linear constraints (three per frame), 9 unknowns (6 independent due to the symmetric matrix) How to compute Q from QQ^T? SVD of the symmetric matrix again:

$$Q Q^T = U \Lambda U^T \;\Longrightarrow\; Q = U \Lambda^{1/2}$$

SFM Using Factorization M is the stack of the rotation matrices; the orthogonality constraints make QQ^T a symmetric 3 by 3 matrix Computing QQ^T is easy: - 3F linear equations - 6 independent unknowns

SFM Using Factorization 1. Form the measurement matrix W 2. Decompose the matrix into two matrices $\hat{M}$ and $\hat{S}$ using SVD 3. Compute the matrix Q with least squares and SVD 4. Compute the rotation matrix and shape matrix: $M = \hat{M} Q$ and $S = Q^{-1} \hat{S}$
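
A compact numpy sketch of these four steps (a Tomasi-Kanade-style orthographic factorization). The interleaved row layout of W, the least-squares solve for QQ^T, and the eigendecomposition square root are one common formulation assumed here, not necessarily the exact one used in the lecture.

```python
# Illustrative sketch: orthographic structure from motion by factorization.
# W is the 2F x N measurement matrix; rows 2f and 2f+1 hold the u- and
# v-coordinates of frame f for all N tracked points.
import numpy as np

def factorization_sfm(W):
    F = W.shape[0] // 2

    # 1. Center the measurement matrix (removes per-frame translation)
    W = W - W.mean(axis=1, keepdims=True)

    # 2. Rank-3 factorization via SVD: W ~= M_hat @ S_hat
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M_hat = U[:, :3] * np.sqrt(s[:3])              # 2F x 3
    S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]       # 3 x N

    # 3. Metric upgrade: solve for symmetric L = Q Q^T from the constraints
    #    i^T L i = 1, j^T L j = 1, i^T L j = 0 for each frame (3F equations).
    def row(a, b):
        # coefficients of the 6 independent entries of symmetric L
        return np.array([a[0]*b[0], a[1]*b[1], a[2]*b[2],
                         a[0]*b[1] + a[1]*b[0],
                         a[0]*b[2] + a[2]*b[0],
                         a[1]*b[2] + a[2]*b[1]])

    A, rhs = [], []
    for f in range(F):
        i, j = M_hat[2*f], M_hat[2*f + 1]
        A += [row(i, i), row(j, j), row(i, j)]
        rhs += [1.0, 1.0, 0.0]
    l, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(rhs), rcond=None)
    L = np.array([[l[0], l[3], l[4]],
                  [l[3], l[1], l[5]],
                  [l[4], l[5], l[2]]])

    # 4. Q from L = Q Q^T via eigendecomposition; assumes L is (near) positive
    #    semidefinite, so tiny negative eigenvalues are clipped to zero.
    w, V = np.linalg.eigh(L)
    Q = V @ np.diag(np.sqrt(np.clip(w, 0, None)))

    M = M_hat @ Q                     # stacked per-frame rotation rows
    S = np.linalg.inv(Q) @ S_hat      # 3 x N shape, up to a global rotation
    return M, S

# Usage with a hypothetical tracked-feature matrix:
# M, S = factorization_sfm(W)
```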

Weak-perspective Projection Factorization also works for weak-perspective projection (scaled orthographic projection), where the object's depth extent d is small compared with its average distance z_0 from the camera.

Factorization for Full-perspective Cameras [Han and Kanade]

SFM for Deformable Objects For details, click here

SFM Using Factorization Bundle adjustment (nonlinear optimization): - works with the perspective camera model - works with incomplete data - prone to local minima Factorization: - closed-form solution for weak-perspective cameras - simple and efficient - usually needs complete data - becomes complicated for the full-perspective camera model Phil Torr’s structure from motion toolkit in Matlab (click here) Voodoo camera tracker (click here)

All Together Video Click here - feature detection - feature matching (epipolar geometry) - structure from motion - stereo reconstruction - triangulation - texture mapping