The Brightness Constraint


The Brightness Constraint

Brightness Constancy Equation:
J(x, y) = I(x + u(x, y), y + v(x, y))

Linearizing (assuming small (u, v)):
Ix·u + Iy·v + It = 0,  where It = I(x, y) - J(x, y)

Each pixel provides 1 equation in 2 unknowns (u, v) — insufficient information. Another constraint is needed: a global motion model constraint.
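The terms of the linearized constraint can be sketched numerically. This is a minimal NumPy illustration (not the lecture's implementation), with finite differences standing in for whatever derivative filters the slides assume:

```python
import numpy as np

def brightness_constraint_terms(I, J):
    """Per-pixel terms of the linearized brightness constraint
    Ix*u + Iy*v + It = 0, with It = I(x,y) - J(x,y)."""
    I, J = I.astype(float), J.astype(float)
    Ix, Iy = np.gradient(I, axis=(1, 0))  # spatial derivatives: x = columns, y = rows
    It = I - J                            # temporal difference
    return Ix, Iy, It
```

For a horizontal intensity ramp shifted by one pixel, each pixel's single equation pins down u but says nothing about v, which is exactly the 1-equation-in-2-unknowns deficiency the slide points out.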

Global Motion Models

2D models:
- Affine
- Quadratic
- Homography (planar projective transform)

3D models:
- Rotation, translation, 1/depth
- Instantaneous camera motion models
- Plane+Parallax

Example: Affine Motion

u(x, y) = a1 + a2·x + a3·y
v(x, y) = a4 + a5·x + a6·y

Substituting into the brightness constancy equation:
Ix·(a1 + a2 x + a3 y) + Iy·(a4 + a5 x + a6 y) + It = 0

Each pixel provides 1 linear constraint in 6 global unknowns. Least-squares minimization over all pixels (a minimum of 6 pixels is necessary):
Err(a) = Σ [Ix·u(x, y) + Iy·v(x, y) + It]²

Every pixel contributes → confidence-weighted regression.

Example: Affine Motion

Differentiating Err(a) w.r.t. a1, …, a6 and equating to zero → 6 linear equations (the normal equations) in the 6 unknowns a1, …, a6.
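As a sketch of this step (not the original course code), the per-pixel constraints can be stacked into a linear system and solved by least squares with NumPy; the optional weight array `w` is a hypothetical hook for the confidence-weighted regression mentioned above:

```python
import numpy as np

def estimate_affine(Ix, Iy, It, w=None):
    """Least-squares affine motion from the linearized brightness
    constraint: one linear equation per pixel, 6 global unknowns.
    Returns a1..a6 with u = a1 + a2*x + a3*y, v = a4 + a5*x + a6*y."""
    H, W = Ix.shape
    y, x = np.mgrid[0:H, 0:W].astype(float)
    # One row per pixel: [Ix, Ix*x, Ix*y, Iy, Iy*x, Iy*y] · a = -It
    A = np.stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y], axis=-1).reshape(-1, 6)
    b = -It.reshape(-1)
    if w is not None:  # confidence weights per pixel (optional)
        sw = np.sqrt(w.reshape(-1))
        A, b = A * sw[:, None], b * sw
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```

`np.linalg.lstsq` solves the same normal equations the slide derives, without forming the 6×6 matrix explicitly.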

Coarse-to-Fine Estimation

Parameter propagation over pyramids of images I and J: estimate at the coarsest level, then warp J toward I and refine at each finer level. A displacement of u = 10 pixels at full resolution becomes 5, 2.5, 1.25 pixels at successively coarser levels ⇒ small (u, v) at every level, so the linearization holds.

[Figure: pyramids of I and J, with warp-and-refine steps between levels]
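A possible sketch of this warp-and-refine loop, assuming a hypothetical single-level estimator `estimate(I, J, init)` and plain 2×2 block averaging in place of a proper Gaussian pyramid:

```python
import numpy as np

def downsample(img):
    """2x2 block average: one pyramid level down."""
    H, W = img.shape
    return img[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def coarse_to_fine(I, J, estimate, levels=3):
    """Coarse-to-fine parameter propagation. `estimate(I, J, init)` is
    any single-level motion estimator (hypothetical signature); here the
    motion parameters are a 2-vector (u, v)."""
    pyrI, pyrJ = [I], [J]
    for _ in range(levels - 1):
        pyrI.append(downsample(pyrI[-1]))
        pyrJ.append(downsample(pyrJ[-1]))
    uv = np.zeros(2)
    for Ii, Ji in zip(reversed(pyrI), reversed(pyrJ)):  # coarsest first
        uv = 2.0 * uv           # displacements double at each finer level
        uv = estimate(Ii, Ji, uv)  # warp-and-refine at this level
    return uv
```

The doubling step is the "parameter propagation" of the slide: a displacement estimated at a coarse level corresponds to twice that displacement one level down.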

Other 2D Motion Models
- Quadratic: instantaneous approximation to planar motion.
- Projective: exact planar motion (homography H).
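For the projective case, the exact 2D displacement induced by a homography follows directly from mapping a point through H in homogeneous coordinates (a sketch; `Hm` is any 3×3 homography matrix):

```python
import numpy as np

def homography_displacement(Hm, x, y):
    """Exact planar motion: map (x, y) through the homography Hm
    and return the induced 2D displacement (u, v)."""
    px, py, pw = Hm @ np.array([x, y, 1.0])
    return px / pw - x, py / pw - y  # dehomogenize, subtract the source point
```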

Panoramic Mosaic Image

Alignment accuracy (between a pair of frames): error < 0.1 pixel.

[Figures: original video clip and the generated mosaic image]

Video Removal

[Figures: original frames, detected outliers, synthesized result]

Video Enhancement

[Figures: original vs. enhanced frames]

Direct Methods: methods for motion and/or shape estimation that recover the unknown parameters directly from measurable image quantities at each pixel in the image.

Minimization step:
- Direct methods: error measure based on dense measurable image quantities (confidence-weighted regression; exploits all available information).
- Feature-based methods: error measure based on distances between a sparse set of distinct feature matches.

Benefits of Direct Methods
- High subpixel accuracy.
- No need for distinct features.
- Locking property.

Limitations
- Limited search range (up to ~10% of the image size).
- Brightness constancy assumption.

Video Indexing and Editing

Ex#4: Image Alignment (2D Translation)

For pure translation, u = a1 and v = a2 at every pixel. Differentiating w.r.t. a1 and a2 and equating to zero → 2 linear equations in 2 unknowns:

[ Σ Ix²    Σ Ix·Iy ] [a1]     [ Σ Ix·It ]
[ Σ Ix·Iy  Σ Iy²   ] [a2] = - [ Σ Iy·It ]
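An illustrative NumPy solve of this 2×2 system, consistent with the derivation above (a sketch, not the course code):

```python
import numpy as np

def translation_step(Ix, Iy, It):
    """Solve the 2x2 normal equations for pure 2D translation
    (a1, a2) = (u, v) from the linearized brightness constraint."""
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)
```

The matrix A is the same structure-tensor sum that governs whether the alignment is well conditioned: it is singular where the image gradients all point one way.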

The 2D/3D Dichotomy

Image motion = camera-induced motion + independent motions
Camera-induced motion = camera motion + scene structure

- 2D techniques: do not model "3D scenes".
- 3D techniques: singularities in "2D scenes".

The Plane+Parallax Decomposition

[Figures: original sequence vs. plane-stabilized sequence]

The residual parallax lies on a radial (epipolar) field centered at the epipole.

Benefits of the P+P Decomposition

- Eliminates effects of rotation.
- Eliminates changes in camera parameters / zoom (a result of aligning an existing structure in the image).

1. Reduces the search space:
- Camera parameters: need to estimate only the epipole (gauge ambiguity: unknown scale of the epipole).
- Image displacements: constrained to lie on radial lines (a 1-D search problem).

Benefits of the P+P Decomposition

2. Scene-Centered Representation:
- Translation or pure rotation? (hard to distinguish from raw image motion)
- Focus on the relevant portion of the information: remove the global component, which dilutes it.

Benefits of the P+P Decomposition

2. Scene-Centered Representation:
Shape = fluctuations relative to a planar surface in the scene.

[Video: plane-stabilized rug sequence (STAB_RUG SEQ)]

Benefits of the P+P Decomposition

2. Scene-Centered Representation:
Shape = fluctuations relative to a planar surface in the scene.
- Height vs. depth (e.g., obstacle avoidance): appropriate units for shape.
- A compact representation: fewer bits, progressive encoding.

[Figure: total distance from camera center to scene in [97..103] = global component (100) + local component in [-3..+3]]
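The slide's 97..103 example can be sketched as a global-plus-local split (illustrative only; the helper name and rounding choice are mine, not the lecture's):

```python
import numpy as np

def split_global_local(distances):
    """Scene-centered encoding: round the mean to a global component
    and keep only the small local fluctuations, whose narrow range
    needs fewer bits per value than the raw distances."""
    g = float(np.round(np.mean(distances)))
    return g, np.asarray(distances, float) - g
```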

Benefits of the P+P Decomposition

3. Stratified 2D-3D Representation:
- Start with 2D estimation (homography).
- 3D information builds on top of the 2D information.
- Avoids a-priori model selection.

Dense 3D Reconstruction (Plane+Parallax)

The epipolar geometry in this case reduces to estimating the epipoles; everything else is captured by the homography.

[Figures: original sequence, plane-aligned sequence, recovered shape]

Dense 3D Reconstruction (Plane+Parallax)

[Figures: second example: original sequence, plane-aligned sequence, recovered shape]

Dense 3D Reconstruction (Plane+Parallax)

[Figures: third example: original sequence, plane-aligned sequence, recovered shape]