The Brightness Constraint

The Brightness Constraint

Brightness Constancy Equation: J(x, y) = I(x + u(x, y), y + v(x, y))

Linearizing (assuming small (u, v)): I_x u + I_y v + I_t = 0, where I_t = I(x, y) - J(x, y).

Each pixel provides 1 equation in 2 unknowns (u, v): insufficient information. Another constraint is needed: the Global Motion Model Constraint.
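As a concrete illustration, the terms of this constraint can be computed with a short NumPy sketch (the function name and the central-difference gradients are my own choices, not from the lecture):

```python
import numpy as np

def brightness_constraint(I, J):
    """Terms of the linearized brightness constancy: Ix*u + Iy*v + It = 0.

    I, J: two consecutive grayscale frames as 2D float arrays, with
    J(x, y) = I(x + u, y + v) for the unknown displacement (u, v).
    """
    # Spatial derivatives of the first frame (central differences).
    Iy, Ix = np.gradient(I)
    # Temporal term as defined on the slide: It = I(x, y) - J(x, y).
    It = I - J
    return Ix, Iy, It

# Each pixel yields ONE equation Ix*u + Iy*v + It = 0 in TWO unknowns
# (u, v): the system is under-determined (the aperture problem), hence
# the need for a global motion model.
```

For a horizontal intensity ramp shifted by one pixel, the single-pixel equation correctly forces u = 1 while leaving v unconstrained.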

The 2D/3D Dichotomy

Image motion = Camera-induced motion + Independent motions, where the camera-induced motion depends on the 3D camera motion and the 3D scene structure.

2D techniques do not model "3D scenes"; 3D techniques have singularities in "2D scenes". Choosing between them requires prior model selection.

The 2D/3D Dichotomy

When can no 3D information be recovered? For a planar scene. In the uncalibrated case (unknown calibration matrix K), the 3D rotation and the plane parameters cannot be recovered either (one cannot tell the difference between a planar homography H and KR). The planar-parallax residual is the only part that carries 3D depth information.

Global Motion Models

2D Models:
* 2D Similarity
* 2D Affine
* Homography (2D projective transformation)
Relevant for: airborne video (distant scene), remote surveillance (distant scene), camera on a tripod (pure zoom/rotation).

3D Models:
* 3D Rotation + 3D Translation + Depth
* Essential/Fundamental Matrix
* Plane+Parallax
Relevant when the camera is translating, and the scene is near and non-planar.

2D models always provide dense correspondences, and are easier to estimate than 3D models (far fewer unknowns, hence numerically more stable).

Example: Affine Motion

Substituting the affine model into the brightness constancy equation, each pixel provides 1 linear constraint in 6 global unknowns. The parameters are found by least-squares minimization over all pixels (a minimum of 6 pixels is necessary). Every pixel contributes, giving a confidence-weighted regression.

Example: Affine Motion

Differentiating with respect to a1, ..., a6 and equating to zero yields 6 linear equations in 6 unknowns. The summation is over all the pixels in the image!
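The 6x6 system above can be sketched in NumPy (a minimal illustration assuming the common parameterization u = a1 + a2*x + a3*y, v = a4 + a5*x + a6*y; names are my own, not the lecture's code):

```python
import numpy as np

def estimate_affine(Ix, Iy, It):
    """Least-squares affine motion from the linearized brightness constraint.

    Assumed parameterization (illustrative): u = a1 + a2*x + a3*y,
    v = a4 + a5*x + a6*y. Substituting into Ix*u + Iy*v + It = 0 gives
    one linear equation per pixel in the 6 global unknowns; summing the
    normal equations over ALL pixels yields the 6x6 system.
    """
    h, w = Ix.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # Per-pixel constraint row c, with c . a = -It
    c = np.stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y], axis=-1).reshape(-1, 6)
    b = -It.reshape(-1)
    A = c.T @ c                  # 6x6 normal matrix, summed over every pixel
    rhs = c.T @ b
    a, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a
```

On synthetic data generated exactly by an affine flow, the six parameters are recovered to machine precision.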

Coarse-to-Fine Estimation

Build Gaussian pyramids of image I and image J. A displacement of u = 10 pixels at full resolution shrinks to u = 5, 2.5, and 1.25 pixels at successive pyramid levels, so at the coarsest level u and v are small and the linearization holds. At each level: warp J toward I using the current parameters, refine the estimate, and propagate the parameters to the next finer level.
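A minimal sketch of this loop, restricted to a pure-translation model to keep it short (the block-averaging pyramid, integer warping via np.roll, and all helper names are simplifications of my own, not the lecture's implementation):

```python
import numpy as np

def downsample(img):
    """Crude 2x downsampling by 2x2 block averaging (stand-in for a Gaussian pyramid)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    im = img[:h, :w]
    return 0.25 * (im[0::2, 0::2] + im[1::2, 0::2] + im[0::2, 1::2] + im[1::2, 1::2])

def refine_translation(I, J, u, v):
    """One refinement step of a pure-translation estimate."""
    ui, vi = int(round(u)), int(round(v))
    Jw = np.roll(J, (vi, ui), axis=(0, 1))    # warp J by the current (integer) estimate
    Iy, Ix = np.gradient(I)
    It = I - Jw                               # residual temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    du, dv = np.linalg.solve(A, b)            # least-squares update
    return ui + du, vi + dv

def coarse_to_fine(I, J, levels=3):
    """Estimate a translation (u, v) coarse-to-fine so the residual stays small."""
    pyr_I, pyr_J = [I], [J]
    for _ in range(levels - 1):
        pyr_I.append(downsample(pyr_I[-1]))
        pyr_J.append(downsample(pyr_J[-1]))
    u = v = 0.0
    for Ii, Ji in zip(reversed(pyr_I), reversed(pyr_J)):
        u, v = 2 * u, 2 * v                   # propagate parameters to the finer level
        for _ in range(3):                    # a few refinement iterations per level
            u, v = refine_translation(Ii, Ji, u, v)
    return u, v
```

Feeding it a smooth bump shifted by 3 pixels recovers the shift even though 3 pixels would strain the linearization at full resolution alone.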

Other 2D Motion Models 2D Projective – planar motion (Homography H)

Panoramic Mosaic Image

Alignment accuracy (between a pair of frames): error < 0.1 pixel. (Figures: original video clip, generated mosaic image.)

Video Removal

(Figures: original, outliers, synthesized.)

Video Enhancement

(Figures: original, enhanced.)

Direct Methods: methods for motion and/or shape estimation that recover the unknown parameters directly from image intensities. The error measure is based on dense image quantities (confidence-weighted regression; exploits all available information).

Feature-based Methods: methods for motion and/or shape estimation based on feature matches (e.g., SIFT, HOG). The error measure is based on sparse distinct features (feature matching + RANSAC + parameter estimation).
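For contrast with the direct approach, the feature-based pipeline (matches + RANSAC + parameter estimation) can be sketched for the simplest possible model, a 2D translation (a toy illustration; the function name, threshold, and iteration count are invented for this example):

```python
import numpy as np

def ransac_translation(pts_a, pts_b, n_iters=100, thresh=1.0, seed=0):
    """Toy feature-based estimation: fit a 2D translation to sparse point
    matches with RANSAC. A minimal sketch of the pipeline named on the
    slide (matches + RANSAC + parameter estimation), not the lecture's code.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_count = None, -1
    for _ in range(n_iters):
        i = rng.integers(len(pts_a))              # minimal sample: 1 match
        t = pts_b[i] - pts_a[i]                   # candidate translation
        resid = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_count:
            best_count, best_mask = inliers.sum(), inliers
    # Refit on all inliers: least squares = mean residual translation.
    return (pts_b[best_mask] - pts_a[best_mask]).mean(axis=0)
```

Note the contrast with the direct methods above: the error here is measured only at the sparse matches, and a gross outlier match is simply voted out.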

Benefits of Direct Methods

High subpixel accuracy. Matches and the transformation are estimated simultaneously, so no distinct features are needed for image alignment: a strong locking property.

Limitations of Direct Methods Limited search range (up to ~10% of the image size). Brightness constancy assumption.

Video Indexing and Editing

DEMO: video indexing and editing.

Exercise 4: Image alignment (will be posted in a few days).
* Keep the reference image the same (i.e., unwarp the target image), so the derivatives need to be estimated only once per pyramid level.
* Avoid repeated warping of the target image: compose the transformations and unwarp the target image only once.
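The second tip, composing transformations rather than warping repeatedly, amounts to multiplying 3x3 homogeneous matrices (a sketch with illustrative helper names, assuming translations for simplicity):

```python
import numpy as np

def translation(tx, ty):
    """3x3 homogeneous matrix for a 2D translation (illustrative helper)."""
    T = np.eye(3)
    T[0, 2], T[1, 2] = tx, ty
    return T

def accumulate(updates):
    """Fold per-iteration incremental transforms into one running matrix.

    Rather than warping the target image after every refinement step
    (which accumulates resampling blur), compose the 3x3 matrices and
    unwarp the target from its ORIGINAL pixels exactly once.
    """
    T = np.eye(3)
    for dT in updates:
        T = dT @ T               # apply the newest increment last
    return T
```

Two translation increments of (1, 2) and (3, -1) compose to a single (4, 1), so the target is resampled once instead of twice.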

The 2D/3D Dichotomy

Source of the dichotomy: camera-centric models (R, T, Z). Image motion = Camera-induced motion + Independent motions, where the camera-induced motion depends on the camera motion and the scene structure. 2D techniques do not model "3D scenes"; 3D techniques have singularities in "2D scenes".

The Plane+Parallax Decomposition

Move from a CAMERA-centric to a SCENE-centric model. Aligning a planar surface turns the original sequence into a plane-stabilized sequence, and the residual parallax lies on a radial (epipolar) field centered at the epipole.

Benefits of the P+P Decomposition

1. Reduces the search space:
* Eliminates the effects of rotation, and of changes in the camera calibration parameters / zoom (a result of aligning an existing structure in the image).
* Camera parameters: only the epipole needs to be estimated (i.e., 2 unknowns).
* Image displacements: constrained to lie on radial lines (i.e., reduces to a 1D search problem).
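The 1D constraint on the displacements can be made concrete: after plane alignment, the residual parallax at a point must lie along the radial line to the epipole, so only a scalar magnitude remains unknown (a sketch with invented helper names):

```python
import numpy as np

def radial_direction(p, epipole):
    """Unit vector of the radial (epipolar) line through point p."""
    d = np.asarray(p, float) - np.asarray(epipole, float)
    return d / np.linalg.norm(d)

def constrain_to_radial(disp, p, epipole):
    """Project a 2D residual displacement onto the 1D radial constraint.

    After plane alignment, the residual parallax at p must point along
    the line joining p to the epipole, so only its scalar magnitude
    along that direction remains unknown (a 1D search).
    """
    r = radial_direction(p, epipole)
    return float(np.dot(disp, r)) * r
```

For a point at (10, 0) with the epipole at the origin, any measured displacement is reduced to its component along the x-axis.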

Benefits of the P+P Decomposition

2. Scene-Centered Representation: is it translation or pure rotation? Removing the global component, which dilutes the information, focuses on the relevant portion of the information.

Benefits of the P+P Decomposition

2. Scene-Centered Representation: shape = fluctuations relative to a planar surface in the scene. (Video: STAB_RUG sequence.)

Benefits of the P+P Decomposition

2. Scene-Centered Representation: shape = fluctuations relative to a planar surface in the scene.
* Height vs. depth (e.g., obstacle avoidance): appropriate units for shape.
* A compact representation: fewer bits, progressive encoding. For example, total distances in [97..103] from the camera center to the scene decompose into a global component (100) plus a local component in [-3..+3].

Benefits of the P+P Decomposition

3. Stratified 2D-3D Representation: start with 2D estimation (a homography); the 3D information builds on top of the 2D information. This avoids a-priori model selection.

Dense 3D Reconstruction (Plane+Parallax)

Epipolar geometry in this case reduces to estimating the epipoles; everything else is captured by the homography. (Figures: original sequence, plane-aligned sequence, recovered shape.)

Dense 3D Reconstruction (Plane+Parallax)

(Another example: original sequence, plane-aligned sequence, recovered shape.)

Dense 3D Reconstruction (Plane+Parallax)

(A further example: original sequence, plane-aligned sequence, recovered shape.)

P+P Correspondence Estimation

1. Eliminating the Aperture Problem: at each point p, the brightness constancy constraint defines one line, and the epipolar line through p (toward the epipole) defines another. The intersection of the two line constraints uniquely defines the displacement.

Multi-Frame vs. 2-Frame Estimation

1. Eliminating the Aperture Problem: when the brightness constancy constraint and the epipolar line are parallel, the two line constraints do NOT intersect and the displacement stays ambiguous. With another frame, the other epipole defines another epipolar line, which resolves the ambiguity!

What about moving objects? The dual epipole: Parallax Geometry of Pairs of Points [Irani & Anandan, ECCV'96].