Representing Moving Images with Layers
J. Y. Wang and E. H. Adelson, MIT Media Lab

Goal
Represent moving images with sets of overlapping layers. Layers are ordered in depth and occlude each other. Velocity maps indicate how the layers are to be warped over time.

Simple Domain: Gesture Recognition

More Complex: What are the layers?

Even More Complex: How many layers are there?

Definition of Layer Maps
Each layer contains three maps:
1. intensity map (or texture map)
2. alpha map (opacity at each point)
3. velocity map (warping over time)
Layers are ordered by depth. The representation can serve vision, graphics, or both.
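The three per-layer maps can be sketched as a small data structure. This is a minimal NumPy illustration; the class and field names are assumptions, not from the paper:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Layer:
    """One layer of the representation (illustrative sketch)."""
    intensity: np.ndarray  # H x W texture map
    alpha: np.ndarray      # H x W opacity values in [0, 1]
    velocity: np.ndarray   # H x W x 2 flow field (vx, vy) describing the warp
    depth: int             # ordering index; lower = farther back

# a trivial opaque, static background layer
h, w = 4, 6
background = Layer(
    intensity=np.zeros((h, w)),
    alpha=np.ones((h, w)),
    velocity=np.zeros((h, w, 2)),
    depth=0,
)
```

Because the alpha and velocity maps share the layer's coordinate system, warping the layer means warping all three maps together.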

Layers for the Hand Gestures
Background layer, hand layer, and the re-synthesized sequence.

Optical Flow Doesn’t Work
Optical flow techniques typically model the world as a 2-D rubber sheet that is distorted over time. When one object moves in front of another, the rubber-sheet model fails: image information appears and disappears, which optical flow cannot represent, and motion estimates near occlusion boundaries are poor.

Block Matching Can’t Do It
Block matching handles only translation well. Like optical flow, it cannot deal with occlusion or with objects that suddenly appear or disappear at the edge of the frame.

Layered Representation: Compositing
E0 is the background layer; E1 is the next layer (and there can be more). α1 is the alpha channel of E1, with values between 0 and 1 (fractional opacities matter for graphics). The velocity map tells us how to warp the frames over time. The intensity map and alpha map are warped together, so they stay registered.
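The back-to-front "over" compositing described above can be sketched as follows. This is a simplified NumPy illustration (warping is omitted and the function name is an assumption): each layer's intensity is blended onto the result weighted by its alpha.

```python
import numpy as np

def composite(layers_back_to_front):
    """Composite (intensity, alpha) layers back to front with the 'over'
    operator: at each pixel, out = alpha * E + (1 - alpha) * out."""
    E, _ = layers_back_to_front[0]
    out = E.copy()
    for E_i, a_i in layers_back_to_front[1:]:
        out = a_i * E_i + (1.0 - a_i) * out
    return out

E0 = np.full((2, 2), 10.0)               # background intensity
E1 = np.full((2, 2), 100.0)              # foreground intensity
a1 = np.array([[1.0, 0.0],
               [0.5, 0.0]])              # foreground opacity
img = composite([(E0, np.ones((2, 2))), (E1, a1)])
# opaque pixel -> 100, transparent -> 10, half-transparent -> 55
```

In the full method, each layer's maps would be warped by its velocity map before this blend is applied at every frame.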

Analysis: Flower Garden Sequence
The camera pans to the right. What’s going on here? (Shown: frames 1, 15, and 30, plus a warped frame.)

Accumulation of the Flowerbed Layer

Motion Analysis
1. Robust motion segmentation using a parametric (affine) model:
   Vx(x,y) = ax0 + axx·x + axy·y
   Vy(x,y) = ay0 + ayx·x + ayy·y
2. Synthesis of the layered representation.
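The six-parameter affine model above can be evaluated directly at any pixel. A small NumPy sketch (the function name is illustrative):

```python
import numpy as np

def affine_velocity(params, x, y):
    """Evaluate the 6-parameter affine motion model:
    Vx = ax0 + axx*x + axy*y,  Vy = ay0 + ayx*x + ayy*y."""
    ax0, axx, axy, ay0, ayx, ayy = params
    vx = ax0 + axx * x + axy * y
    vy = ay0 + ayx * x + ayy * y
    return vx, vy

# A pure horizontal pan of 2 pixels/frame: only ax0 is nonzero,
# so every pixel gets the same velocity regardless of position.
vx, vy = affine_velocity((2.0, 0, 0, 0, 0, 0),
                         x=np.arange(3), y=np.zeros(3))
```

Nonzero axx/ayy terms add zoom, and the cross terms (axy, ayx) add rotation and shear, which is why one affine model can describe a rigid planar region's flow.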

Motion Analysis Example
Two separate layers appear as two affine models (lines); the gaps correspond to occlusion.

Motion Estimation Steps
1. Compute conventional optical flow (a multi-scale, coarse-to-fine Lucas-Kanade approach).
2. From the optical flow representation, determine a set of affine motions and segment the image into regions, each well described by a single affine motion.

Motion Segmentation
1. Use an array of non-overlapping square regions to derive an initial set of motion models.
2. Estimate the affine parameters within each region by linear regression, applied separately to each velocity component (Vx, Vy).
3. Compute the reliability of each hypothesis from its residual error.
4. Apply adaptive k-means clustering, merging two clusters whenever the distance between their centers falls below a threshold, to produce a set of likely affine models.
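Step 2, the per-region linear regression, amounts to ordinary least squares on each velocity component. A minimal NumPy sketch under that assumption (not the paper's code):

```python
import numpy as np

def fit_affine(x, y, vx, vy):
    """Fit affine motion parameters by linear regression, run
    separately on each velocity component, as in step 2.
    Returns (ax0, axx, axy) and (ay0, ayx, ayy)."""
    # design matrix: each row is [1, x, y] for one pixel
    A = np.column_stack([np.ones_like(x), x, y])
    px = np.linalg.lstsq(A, vx, rcond=None)[0]
    py = np.linalg.lstsq(A, vy, rcond=None)[0]
    return px, py

# synthetic flow generated by a known affine model: vx = 1 + 0.5x, vy = -0.2y
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
x, y = xs.ravel(), ys.ravel()
px, py = fit_affine(x, y, 1.0 + 0.5 * x, -0.2 * y)
# recovers px ~ [1, 0.5, 0] and py ~ [0, 0, -0.2]
```

The residual of this fit is exactly the quantity step 3 uses to score each hypothesis's reliability.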

Region Assignment by Hypothesis Testing
Use the motion models derived from the motion segmentation step to identify the coherent regions, by minimizing an error (distortion) function:

    G(i(x,y)) = Σ_{x,y} ( V(x,y) − V_{a_i}(x,y) )²

where i(x,y) is the model assigned to pixel (x,y) and V_{a_i}(x,y) is the affine motion for that model. The error is minimized at each pixel to give the best model for that pixel position. Pixels with too high an error are not assigned to any model.
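This per-pixel hypothesis test can be sketched as follows (a NumPy illustration; the function name and the error threshold value are assumptions):

```python
import numpy as np

def assign_models(V, models, max_error=1.0):
    """Assign each pixel to the affine model with the smallest squared
    flow residual; pixels whose best residual exceeds max_error stay
    unassigned (-1).  V: H x W x 2 flow; models: list of
    ((ax0, axx, axy), (ay0, ayx, ayy)) parameter pairs."""
    H, W = V.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    errs = []
    for (ax0, axx, axy), (ay0, ayx, ayy) in models:
        vx = ax0 + axx * xs + axy * ys
        vy = ay0 + ayx * xs + ayy * ys
        errs.append((V[..., 0] - vx) ** 2 + (V[..., 1] - vy) ** 2)
    errs = np.stack(errs)                      # n_models x H x W
    labels = errs.argmin(axis=0)
    labels[errs.min(axis=0) > max_error] = -1  # reject poor fits
    return labels

models = [((1.0, 0, 0), (0.0, 0, 0)),   # translation to the right
          ((0.0, 0, 0), (1.0, 0, 0))]   # translation downward
V = np.zeros((2, 4, 2))
V[:, :2, 0] = 1.0                       # left half moves right
V[:, 2:, 1] = 1.0                       # right half moves down
labels = assign_models(V, models)
# left columns -> model 0, right columns -> model 1
```

Minimizing the residual independently per pixel is what makes G(i(x,y)) minimal overall, since each pixel contributes one independent term to the sum.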

Iterative Algorithm
The initial segmentation uses an array of square regions. Each iteration makes the segmentation more accurate, because the parameter estimation is then performed within a single coherent motion region. A region splitter separates disjoint regions, and a filter eliminates small ones. At the end, intensity is used to match unassigned pixels to the best adjacent region.

Layer Synthesis
Information from a longer sequence must be combined over time to accumulate each layer. The transforms for each layer are used to warp its pixels into alignment across a set of frames; the median value of each pixel over the set is used for the layer. Occlusion relationships are then determined.
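The median-based accumulation can be sketched as follows, assuming the frames have already been warped into the layer's coordinate system (illustrative code, not the authors'):

```python
import numpy as np

def accumulate_layer(warped_frames):
    """Combine frames already warped into a layer's coordinate system.
    The per-pixel median is robust to frames in which the pixel was
    occluded by a nearer layer."""
    stack = np.stack(warped_frames)   # T x H x W
    return np.median(stack, axis=0)

frames = [np.full((2, 2), 5.0) for _ in range(4)]
frames[1][0, 0] = 99.0                # pixel occluded in one frame
layer = accumulate_layer(frames)
# the occluded outlier is rejected: layer[0, 0] == 5.0
```

As long as each pixel is visible in a majority of the frames, the median recovers the layer's true intensity despite transient occlusions.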

Results