MSU Fall 2014: Computing Motion from Images. Chapter 9 of S&S plus other work.



General topics:
- Low-level change detection
- Region tracking or matching over time
- Interpretation of motion
- MPEG compression
- Interpretation of scene changes in video
- Understanding human activities

Motion is important to human vision

What’s moving: different cases

Image subtraction: a simple method to separate moving regions from an unchanging background.
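A minimal sketch of this idea with NumPy; the threshold value here is an illustrative choice, not a prescribed one:

```python
import numpy as np

def change_mask(frame, background, threshold=25):
    """Detect changed pixels by subtracting a background frame.

    frame, background: 2D grayscale arrays of equal shape.
    threshold: illustrative intensity difference (0-255 scale) above
    which a pixel is declared 'changed'.
    """
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold  # boolean mask of moving regions

# Toy example: a 5x5 background with one bright "moving" pixel.
bg = np.zeros((5, 5), dtype=np.uint8)
cur = bg.copy()
cur[2, 2] = 200
mask = change_mask(cur, bg)
```

In practice the resulting mask is then cleaned up with morphological operations such as the closing mentioned below.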

Change detection for surveillance

Change detection by image subtraction

Closing = dilation followed by erosion.

What to do with regions of change?
- Discard small regions
- Discard regions without interesting features
- Keep track of regions with interesting features
- Track into future frames using motion plus component features

Some effects of camera motion that can cause problems

Motion field

FOE and FOC: focus of expansion and focus of contraction. We will return to using the FOE, the FOC, or detection of panning to determine what the camera is doing in videotapes.

Gaming using a camera to recognize the player’s motion: the Decathlete game.

Decathlete game: a cheap camera replaces the usual mouse for input. The running speed and jumping of the avatar are controlled by the detected motion of the player’s hands.

Motion detection input device: running (hands), jumping (hands).

Motion analysis controls the hurdling event (console):
- Top left shows the video frame of the player
- Middle left shows motion vectors from multiple frames
- Center shows jumping patterns

Related work: motion sensed by crude cameras. A person dances/gestures in space (Kinect or Leap Motion sensors) and the system maps movement into music. A creative environment? A good exercise room?

Computing motion vectors from corresponding “points”: high-energy neighborhoods are used to define points for matching.

Match points between frames. Such large motions are unusual; most systems track small motions.

Requirements for interest points: match a small neighborhood to a small neighborhood. The previous “scene” contains several highly textured neighborhoods.

Interest = minimum directional variance. Used by Hans Moravec in his robot stereo vision system; interest points were used for stereo matching.
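The Moravec measure can be sketched as the minimum, over the principal shift directions, of the sum of squared differences between a window and its shifted copy; the window radius and shift set below are illustrative choices:

```python
import numpy as np

def moravec_interest(img, r=1):
    """Interest = minimum directional variance (after Moravec).

    For each interior pixel, compare the (2r+1)^2 window with the window
    shifted one pixel in each of four directions; the interest value is the
    minimum of the four sums of squared differences. Edges score low
    (a shift along the edge matches perfectly); corners score high.
    """
    img = img.astype(float)
    h, w = img.shape
    interest = np.zeros((h, w))
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            sums = []
            for dy, dx in shifts:
                shifted = img[y - r + dy:y + r + 1 + dy, x - r + dx:x + r + 1 + dx]
                sums.append(np.sum((win - shifted) ** 2))
            interest[y, x] = min(sums)
    return interest
```

Candidate interest points are then taken as local maxima of this map above a threshold.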

Detecting interest points in I1

Match points from I1 in I2

Search for the best match of point P1 in a nearby window of I2. For both motion and stereo, we have some constraints on where to search for a matching interest point.

Motion vectors clustered to show 3 coherent regions. All motion vectors are clustered into 3 groups of similar vectors, showing the motion of 3 independent objects (Dina Eldin). Motion coherence: points of the same object tend to move in the same way.

Two frames of aerial imagery. Video frames N and N+1 show slight movement: most pixels are the same, just in different locations.

We can code frame N+d with displacements relative to frame N:
- For each 16x16 block in the 2nd image, find a closely matching block in the 1st image.
- Replace the 16x16 intensities by the location (dX, dY) in the 1st image: 256 bytes replaced by 2 bytes!
- If the blocks differ too much, encode the differences to be added.
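The per-block search can be sketched as follows; this uses a sum-of-squared-differences criterion and a small search range, both illustrative choices (real encoders use 16x16 blocks and fast search strategies):

```python
import numpy as np

def best_block_match(ref, block, top, left, search=4):
    """Find the (dY, dX) displacement of `block`, located at (top, left)
    in the current frame, that best matches the reference frame `ref`,
    searching +/- `search` pixels by sum of squared differences."""
    bh, bw = block.shape
    best, best_d = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                continue  # candidate window falls outside the reference frame
            cand = ref[y:y + bh, x:x + bw].astype(int)
            ssd = np.sum((cand - block.astype(int)) ** 2)
            if best is None or ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

# Toy example: a textured patch shifted by (+2, +1) between frames.
ref = np.zeros((20, 20))
ref[8:12, 8:12] = np.arange(1, 17).reshape(4, 4)   # unique 4x4 pattern
block = ref[8:12, 8:12].copy()                      # same patch, now at (10, 9)
d = best_block_match(ref, block, 10, 9)
```

Storing only `d` per block is what yields the 256-bytes-to-2-bytes saving described above.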

Frame approximation. Left is the original video frame N+1; right is the set of best image blocks taken from frame N. (Work of Dina Eldin.)

Best matching blocks between video frames N+1 and N (motion vectors). The bulk of the vectors show the true motion of the airplane taking the pictures. The long vectors are incorrect motion vectors, but they still work well for compression of image I2. Best matches from the 2nd to the 1st image are shown as vectors overlaid on the 2nd image. (Work by Dina Eldin.)

Motion coherence provides redundancy for compression. MPEG “motion compensation” represents the motion of 16x16 pixel blocks, NOT objects.

MPEG represents blocks that move by the motion vector

MPEG has ‘I’, ‘P’, and ‘B’ frames

Computing Image Flow


Motion Field & Optical Flow Field. Motion field = real-world 3D motion. Optical flow field = projection of the motion field onto the 2D image (a 3D motion vector becomes a 2D optical flow vector on the CCD). Slides from Lihi Zelnik-Manor.

Assumptions

When does it break?
- The screen is stationary yet displays motion.
- Homogeneous objects generate zero optical flow.
- A fixed sphere under a changing light source.
- Non-rigid texture motion.

Image flow equation 1 of 2

Image flow equation 2 of 2

Estimating Optical Flow: assume the image intensity of a point is constant between time t and time t+dt.

Brightness Constancy Equation: apply a first-order Taylor expansion, simplify the notation, and divide by dt. Problem I: one equation, two unknowns.
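The equations this slide refers to can be reconstructed in standard notation, where subscripts denote partial derivatives of the image intensity I:

```latex
% Brightness constancy: a point keeps its intensity as it moves.
I(x+dx,\; y+dy,\; t+dt) = I(x,\, y,\, t)

% First-order Taylor expansion of the left-hand side:
I(x,y,t) + I_x\,dx + I_y\,dy + I_t\,dt \approx I(x,y,t)

% Divide by dt and denote u = dx/dt, v = dy/dt:
I_x\,u + I_y\,v + I_t = 0
```

This single linear constraint in the two unknowns (u, v) is exactly "Problem I" above.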

Problem II: “The Aperture Problem.” Where did the yellow point move to between time t and time t+dt? We need additional constraints: for points on a line of fixed intensity we can only recover the normal flow.

Use local information: sometimes enlarging the aperture can help.

Local smoothness, Lucas-Kanade (1981): assume a constant (u,v) in a small neighborhood.

Lucas-Kanade (1981). Goal: minimize the sum of squared brightness-constancy errors over the neighborhood. Method: least squares, yielding a 2x2 system for the 2x1 flow vector (u, v).
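A sketch of the least-squares step for a single window, assuming the spatial and temporal derivative arrays Ix, Iy, It have already been computed (e.g. by finite differences):

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It):
    """Solve for one (u, v) from gradient arrays over a small window.

    Stacks the per-pixel constraints Ix*u + Iy*v + It = 0 and solves the
    least-squares normal equations:
        [sum Ix^2    sum Ix*Iy] [u]   [-sum Ix*It]
        [sum Ix*Iy   sum Iy^2 ] [v] = [-sum Iy*It]
    """
    Ix, Iy, It = (a.ravel().astype(float) for a in (Ix, Iy, It))
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    # A must be invertible (no near-zero eigenvalues); otherwise this
    # window suffers from the aperture problem.
    return np.linalg.solve(A, b)

# Synthetic check: derivatives consistent with a known flow (u, v) = (1, 2).
Ix = np.array([[1.0, 0.0], [2.0, 1.0]])
Iy = np.array([[0.0, 1.0], [1.0, 3.0]])
It = -(Ix * 1.0 + Iy * 2.0)   # brightness constancy with u=1, v=2
uv = lucas_kanade_flow(Ix, Iy, It)
```

The invertibility requirement of the 2x2 matrix is the point of the "no zero eigenvalues" remark on the next slide.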

Lucas-Kanade solution: we want this matrix to be invertible, i.e., no zero eigenvalues.

Break-downs:
- Brightness constancy is not satisfied → correlation-based methods
- A point does not move like its neighbors (what is the ideal window size?) → regularization-based methods
- The motion is not small, so the Taylor expansion doesn’t hold → use multi-scale estimation

Multi-Scale Flow Estimation. Build Gaussian pyramids of images I_t and I_t+1 and estimate flow coarse-to-fine: a motion of u = 10 pixels at full resolution becomes u = 5, 2.5, and 1.25 pixels at successively coarser levels.
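A crude pyramid construction, using 2x2 block averaging in place of true Gaussian smoothing, is enough to show how each level halves both the image size and the apparent displacement:

```python
import numpy as np

def downsample(img):
    """Halve resolution by averaging 2x2 blocks (a crude pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def pyramid(img, levels):
    """Return [full-res, half-res, quarter-res, ...]."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

# A bright column at x=8 in a 16x16 image lands near x=4 at the next level,
# so a 10-pixel motion at full resolution is a 5-pixel motion one level up.
img = np.zeros((16, 16))
img[:, 8] = 1.0
pyr = pyramid(img, 3)
```

Flow is then estimated at the coarsest level, doubled, and refined level by level until the full-resolution motion is recovered.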

Tracking several objects: use assumptions of physics to compute multiple smooth paths (work of Sethi and R. Jain).


Tracking in images over time

General constraints from physics

Other possible constraints:
- Background statistics are stable
- Object color/texture/shape might change slowly over frames
- We might have knowledge of the objects under surveillance
- Objects appear/disappear at the boundary of the frame


Sethi-Jain algorithm


Total smoothness of m paths
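A smoothness measure for one trajectory can be sketched in the spirit of the Sethi-Jain path-coherence function: penalize changes of direction and changes of speed between consecutive frames. The weights below are illustrative choices, not values from the text:

```python
import numpy as np

def path_smoothness(path, w1=0.5, w2=0.5):
    """Deviation of one trajectory (list of (x, y) points) from smoothness.

    For each interior point, combine a direction term (0 when the path
    continues straight) and a speed term (0 when speed is constant).
    Lower totals mean smoother paths; the total over all m paths is what
    the greedy exchange algorithm tries to minimize.
    """
    path = np.asarray(path, dtype=float)
    total = 0.0
    for k in range(1, len(path) - 1):
        v1 = path[k] - path[k - 1]      # incoming displacement
        v2 = path[k + 1] - path[k]      # outgoing displacement
        s1, s2 = np.linalg.norm(v1), np.linalg.norm(v2)
        if s1 == 0 or s2 == 0:
            total += w1 + w2            # worst-case penalty for a stalled point
            continue
        direction = 1.0 - np.dot(v1, v2) / (s1 * s2)       # 0 if straight
        speed = 1.0 - 2.0 * np.sqrt(s1 * s2) / (s1 + s2)   # 0 if equal speeds
        total += w1 * direction + w2 * speed
    return total
```

A straight, constant-speed path scores 0, while a zigzag scores higher, which is exactly the preference the physics-based constraints above express.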

Greedy exchange algorithm

Example data structure: total smoothness for the trajectories of Figure 9.14.

Example of domain-specific tracking (Vera Bakic): tracking the eyes and nose of a PC user. The system presents a menu (top). The user moves their face to position the cursor on a particular box (choice). The system tracks the face movement and moves the cursor accordingly: the user enters a feedback-control loop.

Segmentation of videos/movies: segment into scenes, shots, specific actions, etc.

Types of changes in videos

Anchor-person scene at left; street scene for a news story; scene break (from Zhang et al., 1993). How do we compute the scene change?

Histograms of frames across the scene change. The histograms at left are from anchor-person frames, while the histogram at bottom right is from the street frame.
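The comparison behind this slide can be sketched by measuring the distance between gray-level histograms of consecutive frames and declaring a scene break when it is large; the bin count and threshold below are illustrative choices, not values from Zhang et al.:

```python
import numpy as np

def histogram_difference(f1, f2, bins=16):
    """L1 distance between normalized gray-level histograms of two frames."""
    h1, _ = np.histogram(f1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(f2, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.abs(h1 - h2).sum()   # 0 for identical distributions, up to 2

def is_scene_break(f1, f2, threshold=0.5):
    """Declare a scene break when the histogram difference is large."""
    return histogram_difference(f1, f2) > threshold
```

Histograms are insensitive to small motions within a shot, which is why this works for cuts but needs the zoom heuristics discussed next.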

Heuristics for ignoring zooms

American Sign Language example

Example from Yang and Ahuja

