A plane-plus-parallax algorithm
Basic Model: When the field of view is not very large and the camera motion involves only a small rotation, the 2D displacement (u, v) of an image point (x, y) in the image plane can be expressed as the sum of a rotational component, which does not depend on scene depth, and a translational (parallax) component, which does.
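A hedged reconstruction of what this basic model typically looks like, using the standard small-rotation instantaneous motion-field approximation for a camera with focal length f, translation (T_X, T_Y, T_Z), rotation (Omega_X, Omega_Y, Omega_Z), and scene depth Z(x, y); the exact parameterization used on the original slide may differ:

\[
\begin{aligned}
u(x,y) &= \underbrace{\frac{-f\,T_X + x\,T_Z}{Z(x,y)}}_{\text{translation (depth-dependent)}}
 \;+\; \underbrace{\frac{xy}{f}\,\Omega_X - \Bigl(f + \frac{x^2}{f}\Bigr)\Omega_Y + y\,\Omega_Z}_{\text{rotation (depth-independent)}} \\
v(x,y) &= \frac{-f\,T_Y + y\,T_Z}{Z(x,y)}
 \;+\; \Bigl(f + \frac{y^2}{f}\Bigr)\Omega_X - \frac{xy}{f}\,\Omega_Y - x\,\Omega_Z
\end{aligned}
\]

Note that the rotational part is a quadratic function of (x, y) only, which is why it can be captured by the pseudo-projective 2D model used below, while the translational part depends on the unknown depth Z(x, y).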

General Framework
1. Estimate the 2D parameters of a single dominant motion between two successive frames.
2. Register (warp) the two frames according to the computed 2D parameters. This alignment has been shown to cancel the rotational motion for the entire scene, producing a new sequence that contains only 3D translation and looks as if it were taken from a stabilized platform with no jitter.
3. Further option: compute the FOE (focus of expansion, i.e., the translation) from the residual epipolar displacement field between the two registered frames, and recover the 3D rotation parameters from the 2D parameters together with the 3D translation parameters; a possible form of this residual field is sketched below.
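A hedged sketch of the residual field referred to in step 3, under the same standard approximation as above (and assuming T_Z is nonzero): once the rotational part has been cancelled, the remaining displacement of each point is purely translational and points along the line through the focus of expansion,

\[
u_T(x,y) = \frac{T_Z}{Z(x,y)}\bigl(x - x_{FOE}\bigr), \qquad
v_T(x,y) = \frac{T_Z}{Z(x,y)}\bigl(y - y_{FOE}\bigr), \qquad
(x_{FOE},\, y_{FOE}) = \Bigl(\frac{f\,T_X}{T_Z},\, \frac{f\,T_Y}{T_Z}\Bigr),
\]

so the FOE, and hence the translation direction, can be estimated as the point where the residual displacement vectors approximately intersect, independently of the unknown depths Z(x, y).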

2D parameters estimation: Overview
For small motions, assume gray-level (brightness) constancy between frame t and frame t+1: I(x+u, y+v, t+1) = I(x, y, t). The displacement (u, v) is described by a parametric model:
- Translation: 2 parameters, u(x,y) = a, v(x,y) = b.
- Affine: 6 parameters, u(x,y) = a + bx + cy, v(x,y) = d + ex + fy.
- Pseudo-projective transformation: 8 parameters, u(x,y) = a + bx + cy + gx^2 + hxy, v(x,y) = d + ex + fy + gxy + hy^2.
A first-order Taylor expansion, neglecting all nonlinear terms, gives the linearized brightness constancy equation I_x u + I_y v + I_t = 0, where I_x and I_y are the spatial image derivatives and I_t is the temporal difference between the two frames. The motion parameters are found by minimizing the error function over a region R of frame t, E = sum over (x,y) in R of (I_x u(x,y) + I_y v(x,y) + I_t)^2, by setting its derivatives with respect to the parameters to zero, which yields a set of linear equations; a sketch of this estimation step for the affine model follows.
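A minimal Python sketch of this least-squares step for the affine model, assuming NumPy is available; I0 and I1 are two consecutive grayscale frames as 2D arrays, and the function name estimate_affine is illustrative rather than taken from the original work:

import numpy as np

def estimate_affine(I0, I1):
    """Estimate affine motion parameters (a, b, c, d, e, f) between two frames.

    Model: u(x, y) = a + b*x + c*y,  v(x, y) = d + e*x + f*y.
    Minimizes  sum (Ix*u + Iy*v + It)^2  over the whole image.
    """
    I0 = I0.astype(np.float64)
    I1 = I1.astype(np.float64)

    # Spatial derivatives of frame t and temporal difference to frame t+1.
    Iy, Ix = np.gradient(I0)          # np.gradient returns d/drow, d/dcol
    It = I1 - I0

    h, w = I0.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)

    # Each pixel gives one linear equation  A_row . p = -It,
    # with p = (a, b, c, d, e, f) and A_row = (Ix, Ix*x, Ix*y, Iy, Iy*x, Iy*y).
    A = np.stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y], axis=-1).reshape(-1, 6)
    b = -It.reshape(-1)

    # Least-squares solution of the over-determined system.
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p   # (a, b, c, d, e, f)

In practice this estimate is refined iteratively and embedded in the multiresolution framework described on the next slide.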

2D parameters estimation: Details
Multiresolution iterative framework:
- Construct a Gaussian pyramid for each frame.
- Start at the lowest resolution level; the motion parameters are estimated by solving the set of linear equations that minimizes the error function for the chosen model.
- Incremental motion estimation can be performed at each resolution level.
- Coarse-to-fine alignment: at each successive iteration, the shift and estimation steps are performed on the next higher resolution pyramid level. That is, the shift estimated at level k+1 of frame t is applied to level k of frame t to form a warped level k of frame t; the residual motion is then computed between this warped level k of frame t and level k of frame t+1. A sketch of this coarse-to-fine loop is given below.
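A hedged Python sketch of this coarse-to-fine loop, assuming OpenCV (cv2) and NumPy are available and reusing the illustrative estimate_affine() from the previous sketch; the way the affine parameters are propagated between pyramid levels and the specific warping call are assumptions about one reasonable implementation, not the authors' code:

import cv2
import numpy as np

def params_to_matrix(p):
    # (a, b, c, d, e, f) -> 3x3 homogeneous map: x' = (1+b)x + c*y + a, y' = e*x + (1+f)y + d
    a, b, c, d, e, f = p
    return np.array([[1.0 + b, c, a],
                     [e, 1.0 + f, d],
                     [0.0, 0.0, 1.0]])

def coarse_to_fine_affine(frame0, frame1, levels=4):
    """Coarse-to-fine affine registration of frame0 (frame t) to frame1 (frame t+1)."""
    # Gaussian pyramids: index 0 = full resolution, index levels-1 = coarsest.
    pyr0, pyr1 = [frame0.astype(np.float32)], [frame1.astype(np.float32)]
    for _ in range(levels - 1):
        pyr0.append(cv2.pyrDown(pyr0[-1]))
        pyr1.append(cv2.pyrDown(pyr1[-1]))

    M = np.eye(3)                                # current motion estimate (homogeneous affine)
    for k in reversed(range(levels)):            # coarsest -> finest
        I0, I1 = pyr0[k], pyr1[k]
        h, w = I0.shape
        for _ in range(3):                       # a few incremental iterations per level
            warped0 = cv2.warpAffine(I0, M[:2], (w, h))   # apply current shift to frame t
            p = estimate_affine(warped0, I1)              # residual motion (previous sketch)
            M = params_to_matrix(p) @ M                   # compose residual with current estimate
        if k > 0:
            # Propagate to the next finer level: translations double, linear part is unchanged.
            M[:2, 2] *= 2.0
    return M

Doubling only the translation terms when moving to a finer level follows from scaling the image coordinates by two; the linear (b, c, e, f) terms are dimensionless and carry over directly.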

Motion Correction
Warp each pair of successive frames according to the computed 2D parameters. Theoretically, this step should cancel the rotational motion for the entire scene, resulting in a new sequence that contains only 3D translation and looks as if it were taken from a stabilized platform with no jitter.
Result: stabilized GoCart video (not so good).
Problems: the method did not work for other test videos (zero image gradients produced NaN values in the parameter estimate; needs debugging and further improvement). One possible guard against this failure is sketched below.
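A hedged illustration of one way to guard against the zero-gradient / NaN failure mentioned above, assuming the per-pixel linear equations are stacked into a matrix A and right-hand side b as in the earlier sketch, and the normal equations A^T A p = A^T b are solved directly; the damping value and the fallback are illustrative choices, not part of the original work:

import numpy as np

def solve_normal_equations(A, b, damping=1e-6):
    """Solve (A^T A) p = A^T b with a small damping term.

    If the region has (nearly) zero image gradients, A^T A is singular and a
    plain solve produces NaN/Inf parameters; adding damping * I (a Tikhonov /
    Levenberg-style term) keeps the system well conditioned, and a final check
    falls back to zero motion for fully degenerate (textureless) regions.
    """
    AtA = A.T @ A + damping * np.eye(A.shape[1])
    Atb = A.T @ b
    p = np.linalg.solve(AtA, Atb)
    if not np.all(np.isfinite(p)):
        p = np.zeros(A.shape[1])   # degenerate region: report "no motion"
    return p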