Motion and Optical Flow


Moving to Multiple Images
So far, we've looked at processing a single image. Moving to multiple images:
- Multiple cameras at one time: stereo
- Single camera at many times: video
- (Multiple cameras at multiple times)

Applications of Multiple Images
2D:
- Feature / object tracking
- Segmentation based on motion
3D:
- Shape extraction
- Motion capture

Applications of Multiple Images in Graphics
- Stitching images into panoramas
- Automatic image morphing
- Reconstruction of 3D models for rendering
- Capturing articulated motion for animation

Applications of Multiple Images in Biological Systems
- Shape inference
- Peripheral sensitivity to motion (low-level)
- Looming field – obstacle avoidance
- Very similar applications in robotics

Looming Field Pure translation: motion looks like it originates at a point – focus of expansion

Key Problem Main problem in most multiple-image methods: correspondence

Correspondence
Small displacements:
- Differential algorithms, based on gradients in space and time
- Dense correspondence estimates
- Most common with video
Large displacements:
- Matching algorithms, based on correlation or features
- Sparse correspondence estimates
- Most common with multiple cameras / stereo

Result of Correspondence
For points in image i, displacements to the corresponding locations in image j:
- In stereo, usually called disparity
- In video, usually called the motion field

Computing Motion Field
Basic idea: a small portion of the image (a "local neighborhood") shifts position.
Assumptions:
- No / small changes in reflected light
- No / small changes in scale
- No occlusion or disocclusion
- Neighborhood is the correct size: the aperture problem

Actual and Apparent Motion
If these assumptions are violated, we can still use the same methods – we just recover apparent motion. The result of the algorithm is then optical flow (as opposed to the ideal motion field). The most obvious effects:
- Aperture problem: we can only recover the motion component perpendicular to edges
- Errors near discontinuities (occlusions)

Aperture Problem
- Window too big: confused by multiple motions
- Window too small: only get motion perpendicular to the edge

Computing Optical Flow: Preliminaries
- Image sequence I(x,y,t)
- Uniform discretization along x, y, t – a "cube" of data
- Differential framework: compute partial derivatives along x, y, t by convolving with a derivative of Gaussian
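As a concrete sketch of this differential framework, the three partial derivatives of a small synthetic sequence can be computed with numpy. Plain finite differences stand in here for the derivative-of-Gaussian filtering the slide describes; the function name and test sequence are illustrative, not from the lecture.

```python
import numpy as np

def spatiotemporal_gradients(I):
    """Partial derivatives of an image sequence I with shape (T, H, W).

    A minimal sketch using central finite differences; in practice each
    axis would be convolved with a derivative-of-Gaussian kernel for
    noise robustness.
    """
    It, Iy, Ix = np.gradient(I.astype(float))  # derivatives along t, y, x
    return Ix, Iy, It

# Tiny synthetic sequence: a horizontal ramp whose values grow by 1 per frame.
x = np.arange(16, dtype=float)
I = np.stack([np.tile(x + t, (16, 1)) for t in range(4)])
Ix, Iy, It = spatiotemporal_gradients(I)
# For this sequence Ix = 1, Iy = 0, It = 1 everywhere.
```

With these three derivative volumes in hand, everything that follows (brightness constancy, least squares, tracking) is just linear algebra on Ix, Iy, It.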

Computing Optical Flow: Image Brightness Constancy
Basic idea: a small portion of the image (a "local neighborhood") shifts position but keeps its brightness – the brightness constancy assumption.

Computing Optical Flow: Image Brightness Constancy
This does not say that the image remains the same brightness! It is the total derivative dI/dt that is zero, not the partial derivative ∂I/∂t. Using the chain rule:
dI/dt = I_x (dx/dt) + I_y (dy/dt) + I_t = 0

Computing Optical Flow: Image Brightness Constancy
Given optical flow v(x,y) = (u, v), the image brightness constancy equation becomes:
I_x u + I_y v + I_t = 0

Computing Optical Flow: Discretization
Look at some neighborhood N and assume the flow (u, v) is constant over it. Each pixel x_i in N then contributes one equation:
I_x(x_i) u + I_y(x_i) v + I_t(x_i) = 0

Computing Optical Flow: Least Squares
In general this is an overconstrained linear system A v = b (one equation per pixel, only two unknowns). Solve it by least squares:
v = (A^T A)^-1 A^T b
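The least-squares step for one neighborhood can be sketched directly with numpy. The function name and the synthetic gradients below are assumptions for illustration; the system A v = b with A = [Ix Iy] and b = -It is the one from the slides.

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It):
    """Least-squares flow (u, v) for one neighborhood.

    Ix, Iy, It hold the gradient samples over the window; each pixel
    contributes one row of the overconstrained system A [u, v]^T = -It.
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

# Synthetic 5x5 window: gradients exactly consistent with flow (1, -2).
rng = np.random.default_rng(0)
Ix = rng.normal(size=25)
Iy = rng.normal(size=25)
It = -(Ix * 1.0 + Iy * (-2.0))   # enforces Ix*u + Iy*v + It = 0
u, v = lucas_kanade_flow(Ix, Iy, It)
```

Because the synthetic It satisfies brightness constancy exactly, the recovered (u, v) equals the true flow; with real images the residual is nonzero and least squares gives the best fit.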

Computing Optical Flow: Stability
Has a unique solution unless C = A^T A is singular.

Computing Optical Flow: Stability
Where have we encountered C before? The corner detector! C is singular if the window has constant intensity or contains only an edge. Use the eigenvalues of C:
- to evaluate the stability of the optical flow computation
- to find good places to compute optical flow (finding good features to track) [Shi-Tomasi]
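The eigenvalue test can be sketched as follows; the threshold tau and the function name are illustrative choices, not values from the slides.

```python
import numpy as np

def trackability(Ix, Iy, tau=1e-2):
    """Shi-Tomasi style check: a window is good to track when the
    smaller eigenvalue of C = A^T A exceeds a threshold tau."""
    C = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    lmin = np.linalg.eigvalsh(C)[0]   # eigenvalues in ascending order
    return lmin > tau, lmin

# Edge-like window: all gradient in one direction, so C is singular.
ok_edge, _ = trackability(np.ones(25), np.zeros(25))

# Corner-like window: gradients in many directions, C is well conditioned.
rng = np.random.default_rng(1)
ok_corner, _ = trackability(rng.normal(size=25), rng.normal(size=25))
```

The edge window fails the test (its smaller eigenvalue is zero) while the corner-like window passes, matching the slide's claim that C is singular for constant intensity or a single edge.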

Computing Optical Flow: Improvements
The assumption that optical flow is constant over the neighborhood is not always good, but decreasing the size of the neighborhood makes C more likely to be singular. Alternative: weighted least squares
- Points near the center get higher weight
- Still use a larger neighborhood

Computing Optical Flow: Weighted Least Squares
Let W be a diagonal matrix of per-pixel weights. The solution becomes:
v = (A^T W A)^-1 A^T W b
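A minimal sketch of the weighted solve, using a Gaussian weight over a 5x5 window (the sigma of 1 pixel, the function name, and the synthetic data are assumptions for illustration):

```python
import numpy as np

def weighted_flow(Ix, Iy, It, w):
    """Weighted least squares: solve (A^T W A) v = A^T W b, where W is a
    diagonal matrix of per-pixel weights."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    W = np.diag(w.ravel())
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Gaussian weights over a 5x5 window: points near the center count more.
yy, xx = np.mgrid[-2:3, -2:3]
w = np.exp(-(xx ** 2 + yy ** 2) / 2.0)

# Synthetic gradients exactly consistent with flow (u, v) = (0.5, 1.5).
rng = np.random.default_rng(2)
Ix = rng.normal(size=(5, 5))
Iy = rng.normal(size=(5, 5))
It = -(0.5 * Ix + 1.5 * Iy)
u, v = weighted_flow(Ix, Iy, It, w)
```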

Computing Optical Flow: Improvements
What if the windows need to be bigger still? Adjust the motion model so the flow is no longer constant within a window. A popular choice is the affine model.

Computing Optical Flow: Affine Motion Model
Translational model: u(x,y) = a1, v(x,y) = a2
Affine model: u(x,y) = a1 + a2 x + a3 y, v(x,y) = a4 + a5 x + a6 y
Solved as before, but with 6 unknowns instead of 2.
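The affine fit is still a linear least-squares problem, since brightness constancy is linear in the six parameters. A sketch (function name and synthetic samples are illustrative assumptions):

```python
import numpy as np

def affine_flow_params(Ix, Iy, It, x, y):
    """Fit the affine model u = a1 + a2*x + a3*y, v = a4 + a5*x + a6*y.

    Each pixel gives one brightness-constancy equation, linear in a1..a6:
    Ix*(a1 + a2 x + a3 y) + Iy*(a4 + a5 x + a6 y) + It = 0
    """
    A = np.stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y], axis=1)
    params, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return params

# Synthetic samples exactly consistent with a known affine flow field.
rng = np.random.default_rng(3)
n = 50
x, y = rng.normal(size=n), rng.normal(size=n)
Ix, Iy = rng.normal(size=n), rng.normal(size=n)
true = np.array([0.1, 0.0, -0.2, 0.3, 0.2, 0.0])
u = true[0] + true[1] * x + true[2] * y
v = true[3] + true[4] * x + true[5] * y
It = -(Ix * u + Iy * v)
est = affine_flow_params(Ix, Iy, It, x, y)
```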

Computing Optical Flow: Improvements
Larger motion: how do we maintain the "differential" approximation? Solution: iterate. Even better: adjust the window / smoothing
- Early iterations: use larger Gaussians to allow more motion
- Late iterations: use less blur to find the exact solution and lock on to high-frequency detail

Iteration
Local refinement of the optical flow estimate – roughly equivalent to multiple iterations of Newton's method.

Computing Optical Flow: Lucas-Kanade
Iterative algorithm:
1. Set s = large (e.g. 3 pixels)
2. Set I' ← I1, v ← 0
3. Repeat while SSD(I', I2) > t:
   - v += OpticalFlow(I' → I2)
   - I' ← Warp(I1, v)
4. After n iterations, set s = small (e.g. 1.5 pixels)

Computing Optical Flow: Lucas-Kanade
- I' always holds the warped version of I1 – the current best estimate of I2
- Gradually reduce the thresholds
- Stop when the difference between I' and I2 is small; the simplest difference metric is the sum of squared differences (SSD) between pixels
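The estimate-then-rewarp loop can be sketched in one dimension, where Warp reduces to interpolation. This is an illustrative reduction of the algorithm above, not the lecture's code; the signal and function name are assumptions.

```python
import numpy as np

def iterative_shift_1d(f1, f2, iters=10):
    """Estimate a 1-D shift d such that f2(x) ~= f1(x + d).

    Each pass linearizes brightness constancy around the current warped
    signal, solves the 1-D least squares, and re-warps f1 (the same
    structure as the iterative Lucas-Kanade loop).
    """
    x = np.arange(len(f1), dtype=float)
    d = 0.0
    for _ in range(iters):
        warped = np.interp(x + d, x, f1)    # I' = Warp(I1, d)
        g = np.gradient(warped)             # spatial derivative of I'
        e = f2 - warped                     # residual, plays the role of -I_t
        d += np.sum(g * e) / np.sum(g * g)  # 1-D least-squares update
    return d

# Smooth test signal: a Gaussian bump shifted by 1.3 samples.
x = np.arange(200, dtype=float)
f1 = np.exp(-((x - 100.0) / 20.0) ** 2)
f2 = np.interp(x + 1.3, x, f1)              # f2(x) = f1(x + 1.3)
d = iterative_shift_1d(f1, f2)
```

Note the subpixel shift: a single differential step only approximates it, but the warp-and-repeat loop converges on it, which is exactly why the slide iterates.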

Image Warping
Given a coordinate transform x' = h(x) and a source image f(x), how do we compute a transformed image g(x') = f(h(x))? [Szeliski]

Forward Warping
Send each pixel f(x) to its corresponding location x' = h(x) in g(x'). What if a pixel lands "between" two pixels? [Szeliski]

Forward Warping
Send each pixel f(x) to its corresponding location x' = h(x) in g(x'). What if a pixel lands "between" two pixels? Answer: add a "contribution" to several pixels and normalize later (splatting). [Szeliski]

Inverse Warping
Get each pixel g(x') from its corresponding location x = h^-1(x') in f(x). What if a pixel comes from "between" two pixels? [Szeliski]

Inverse Warping
Get each pixel g(x') from its corresponding location x = h^-1(x') in f(x). What if a pixel comes from "between" two pixels? Answer: resample the color value from the interpolated (prefiltered) source image. [Szeliski]
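Inverse warping with bilinear resampling can be sketched for the simplest transform, a pure translation, where h^-1(x') = x' - (dx, dy). The function name is an assumption; real code would handle a general h and boundary pixels more carefully.

```python
import numpy as np

def inverse_warp_translate(f, dx, dy):
    """Inverse warping for a pure translation h(x) = x + (dx, dy).

    Each output pixel g(x') samples f at h^-1(x') = x' - (dx, dy)
    using bilinear interpolation (a sketch; edges are simply clamped).
    """
    H, W = f.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    sx, sy = xs - dx, ys - dy                 # source coordinates
    x0 = np.clip(np.floor(sx).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, H - 2)
    ax, ay = sx - x0, sy - y0                 # bilinear fractions
    return ((1 - ay) * ((1 - ax) * f[y0, x0] + ax * f[y0, x0 + 1])
            + ay * ((1 - ax) * f[y0 + 1, x0] + ax * f[y0 + 1, x0 + 1]))

# A ramp image: integer translation is exact under bilinear sampling.
f = np.add.outer(np.arange(8.0), np.arange(8.0))  # f[y, x] = y + x
g = inverse_warp_translate(f, 2, 1)
```

Because every output pixel pulls a value from the source, inverse warping leaves no holes, which is its main advantage over forward warping.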

Optical Flow Applications: Video Frames [Feng & Perona]

Optical Flow Applications: Depth Reconstruction [Feng & Perona]

Optical Flow Applications: Obstacle Detection via Unbalanced Optical Flow [Temizer]

Optical Flow Applications: Collision Avoidance – keep the optical flow balanced between the two sides of the image [Temizer]