
1
Investigation Into the Optical Flow Problem in the Presence of Spatially-Varying Motion Blur
Mohammad Hossein Daraei, June 2014
daraei@soe.ucsc.edu
University of California, Santa Cruz, Multi-dimensional Signal Processing Lab

2
Optical Flow Computation. The problem of optical flow computation for two frames f_i and f_{i+1} can be posed as finding a displacement field that matches the frames with minimum error. Given a pixel p in the first image and its flow vector w(p), the pixel projects onto p + w(p) in the next frame. The brightness constancy assumption then states that the next frame should satisfy f_{i+1}(p + w(p)) = f_i(p).
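As a minimal numpy sketch (not part of the original slides), brightness constancy can be checked by warping the second frame back by the flow; `warp_bilinear` and the toy frames below are illustrative assumptions:

```python
import numpy as np

def warp_bilinear(frame, u, v):
    """Sample `frame` at p + w(p) for every pixel p, with bilinear
    interpolation and border clamping. Under brightness constancy,
    warping f_{i+1} back by the true flow reproduces f_i."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xq = np.clip(xs + u, 0, w - 1)
    yq = np.clip(ys + v, 0, h - 1)
    x0, y0 = np.floor(xq).astype(int), np.floor(yq).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    ax, ay = xq - x0, yq - y0
    top = (1 - ax) * frame[y0, x0] + ax * frame[y0, x1]
    bot = (1 - ax) * frame[y1, x0] + ax * frame[y1, x1]
    return (1 - ay) * top + ay * bot

# Toy check: the whole scene shifts one pixel to the right, so the
# true flow is (u, v) = (1, 0) and warping the next frame back by it
# recovers the first frame (away from the clamped border column).
f_first = np.arange(16.0).reshape(4, 4)
f_next = np.roll(f_first, 1, axis=1)
matched = warp_bilinear(f_next, np.ones_like(f_first), np.zeros_like(f_first))
```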

3
Objective Function. Consider two consecutive frames f_i and f_j. The data term E_D(u, v) must be minimized over (u, v), the horizontal and vertical components of w_ij. Because generating the warped frame requires an interpolation method, direct optimization is not straightforward. A first-order Taylor approximation leads to a quadratic objective function in u and v, which benefits from painless optimization but is not sufficient to uniquely determine both components (the aperture problem).
[Figure: two frames f_i and f_j with points (x_1, y_1) and (x_2, y_2) connected by flow vectors w_ij(x_1, y_1) and w_ij(x_2, y_2).]
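The aperture problem can be seen directly in the linearized data term; the derivative values below are made-up numbers for illustration:

```python
# One linearized brightness-constancy equation, f_x*u + f_y*v + f_t = 0,
# constrains (u, v) only to a line in flow space: many different flows
# have identical data cost, which is the aperture problem.
fx, fy, ft = 0.5, 0.25, -1.0   # assumed derivatives at a single pixel

def data_residual(u, v):
    """Residual of the linearized data term at this pixel."""
    return fx * u + fy * v + ft

flow_a = (2.0, 0.0)   # purely horizontal motion
flow_b = (0.0, 4.0)   # purely vertical motion
# Both satisfy the constraint exactly, so the data term alone cannot choose.
```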

4
Traditional Optical Flow Methods: Local and Global Techniques

Local methods. Local methods, e.g. Lucas-Kanade, do not provide a dense flow field over the frame; they can only estimate the flow at non-smooth locations, where enough image structure is present. However, they are more robust against noise than global methods. To cope with the aperture problem, they smooth the data term by convolving it with a Gaussian kernel K_ρ, where ρ is the Gaussian parameter. Minimization of E_LK can be addressed as a mean-squared-error minimization, which turns into an ill-conditioned problem if little detail is present in the neighborhood.

Global methods. Global methods, e.g. Horn-Schunck, generate a densely computed flow field over the image, but they are not as robust against noise as local methods. Based on the intuition that displacement fields are in general smoothly varying, the Horn-Schunck method minimizes a functional that, in addition to the data fidelity term, incorporates a smoothness term on first-order flow-field gradients. Optimization is performed by SOR or CG iterations based on the associated Euler-Lagrange equations.
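A sketch of the Lucas-Kanade window solve described above, assuming precomputed derivatives fx, fy, ft and a weight window standing in for K_ρ (function name and shapes are illustrative):

```python
import numpy as np

def lucas_kanade_window(fx, fy, ft, weights):
    """Solve the weighted 2x2 Lucas-Kanade normal equations over one
    neighborhood. `weights` plays the role of the Gaussian kernel K_rho.
    Returns None when the structure tensor is near-singular, i.e. the
    ill-conditioned low-texture case noted above."""
    A = np.array([[np.sum(weights * fx * fx), np.sum(weights * fx * fy)],
                  [np.sum(weights * fx * fy), np.sum(weights * fy * fy)]])
    b = -np.array([np.sum(weights * fx * ft), np.sum(weights * fy * ft)])
    if np.linalg.cond(A) > 1e8:
        return None
    return np.linalg.solve(A, b)

# Synthetic window undergoing the translation (u, v) = (1.0, 0.5):
# the temporal derivative is then ft = -(fx*u + fy*v).
rng = np.random.default_rng(0)
fx = rng.standard_normal((7, 7))
fy = rng.standard_normal((7, 7))
ft = -(fx * 1.0 + fy * 0.5)
uv = lucas_kanade_window(fx, fy, ft, np.ones((7, 7)))
```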

5
Traditional Optical Flow Methods: Combined Local-Global (CLG)

The important characteristic of the CLG method is the simultaneous use of a smoothness term and a Gaussian-smoothed data fidelity term: the former allows for a densely computed flow, and the latter makes the estimates robust against noise. The smoothness parameter is set to the constant 0.012. Both terms pass through the Charbonnier penalizer with β = 0.001, which keeps outliers in the flow field from being over-penalized, as they might be due to object deformation, occlusion, changes in illumination, or other dissimilarities.
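The Charbonnier penalizer itself is one line; the sketch below uses the slide's β = 0.001:

```python
import numpy as np

def charbonnier(s2, beta=0.001):
    """Charbonnier penalizer psi(s^2) = sqrt(s^2 + beta^2): behaves
    quadratically for small residuals but grows only linearly in |s|
    for large ones, so outliers from occlusion, deformation, or
    illumination change are not over-penalized."""
    return np.sqrt(s2 + beta ** 2)

# Versus the quadratic penalty: a residual 10x larger in |s|
# (s^2 from 1 to 100) costs only ~10x more, not 100x more.
small, large = charbonnier(1.0), charbonnier(100.0)
```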

6
Evaluation of Estimated Flow Fields
[Figure: estimated flow compared against ground truth.]

7
Consider the scenario of a camera recording a video sequence f_i in a non-stationary environment, where an arbitrary point in the scene (object) moves along a path. The brightness constancy assumption states that the object should appear similarly in adjacent frames. However, shutter time is non-zero in practical cameras, and if the object moves during the acquisition time, the frame is corrupted by motion blur. The assumption then no longer holds, and traditional methods produce artifacts.
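A toy simulation of this effect (assumed constant motion and integer sub-shifts, not the talk's model):

```python
import numpy as np

def integrate_exposure(frame, u, v, n_samples=4):
    """Crude finite-shutter simulation: average copies of the frame
    displaced along a constant motion (u, v) over the exposure,
    using integer sub-shifts and wrap-around for brevity."""
    acc = np.zeros_like(frame, dtype=float)
    for t in np.linspace(0.0, 1.0, n_samples):
        dx, dy = int(round(t * u)), int(round(t * v))
        acc += np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
    return acc / n_samples

# A single bright point smears into a streak along the motion path:
# the peak drops while total brightness is conserved.
point = np.zeros((9, 9))
point[4, 4] = 1.0
blurred = integrate_exposure(point, u=3, v=0)
```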

8
[Figure: flow results from traditional methods (CLG, BA) exhibit deformation artifacts, while blur-aware methods (BlurFlow, MB-CLG) are closer to the ground truth.]

9
Motion Blur Model. Assume the scene is projected onto the CCD as a frame f_i at time t_i. In practical cameras the shutter is kept open for a non-zero time, i.e., the acquisition interval. Thus the integrated blurred image g_i is the aggregation of all latent frames f_i over the interval: g_i = B_{w_i} f_i, where B_{w_i} is a spatially-varying motion blur kernel based on w_i, applied as a spatially-varying convolution to the unblurred frame.

10
Motion Blur Model (continued). To express B_{w_i} in terms of {w_i} = {w_{i,i-1}, w_{i,i+1}}, we take linear approximations of the moving-object trajectory: for an arbitrary point d in f_i, the path from its coordinates in f_{i-1} to f_i and from f_i to f_{i+1} is approximated by straight segments. With the linearized trajectories, each point d in the unblurred frame f_i integrates as two line segments, the projection of the linearized path of d from t_i - τ to t_i and from t_i to t_i + τ, so g_i can be expressed as the sum of two terms.
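A rough numpy sketch of this two-segment integration, with nearest-neighbor sampling and illustrative function names (the talk's exact discretization is not reproduced):

```python
import numpy as np

def blur_two_segments(frame, w_fwd, w_bwd, n=5):
    """Spatially-varying blur sketch: each output pixel averages
    nearest-neighbor samples along its two linearized half-trajectories,
    one toward the previous frame (flow w_bwd) and one toward the next
    (flow w_fwd), each over half the exposure. w_fwd and w_bwd are
    (2, H, W) flow fields."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    acc = np.zeros_like(frame, dtype=float)
    for t in np.linspace(0.0, 0.5, n):
        for flow in (w_fwd, w_bwd):
            xq = np.clip(np.rint(xs + t * flow[0]).astype(int), 0, w - 1)
            yq = np.clip(np.rint(ys + t * flow[1]).astype(int), 0, h - 1)
            acc += frame[yq, xq]
    return acc / (2 * n)

# Sanity check: with zero motion the "blur" is the identity.
img = np.arange(25.0).reshape(5, 5)
static = np.zeros((2, 5, 5))
same = blur_two_segments(img, static, static)
```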

11
The Key Observation. Take the flow fields that match f_i with the next and the previous frames, and warp them according to the flow w_{i,i+1} to transform them into the coordinates of f_{i+1}; repeat the same procedure for the other frame in a similar manner. If we then take the blur function of each frame, translate it into the coordinates of the other frame, and apply it to the corresponding blurred frame (g_{i+1}, and likewise g_i), the brightness constancy assumption becomes valid for the new pair of frames.
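A degenerate sanity check of the blur-exchange idea, assuming a static scene and spatially-uniform horizontal box kernels (a strong simplification of the spatially-varying case):

```python
import numpy as np

def box_blur_rows(img, k):
    """Horizontal box blur of odd width k (a stand-in for a linear,
    here spatially-uniform, motion-blur kernel), with wrap-around."""
    pad = np.pad(img, ((0, 0), (k // 2, k // 2)), mode="wrap")
    kern = np.ones(k) / k
    return np.apply_along_axis(lambda r: np.convolve(r, kern, "valid"), 1, pad)

# With uniform kernels, g_i and g_{i+1} are the latent frame f blurred
# by two different kernels. Applying each frame's kernel to the *other*
# blurred frame equalizes the total blur (convolution commutes), so the
# exchanged pair satisfies brightness constancy.
rng = np.random.default_rng(1)
f = rng.random((6, 8))
g_i, g_j = box_blur_rows(f, 3), box_blur_rows(f, 5)
exchanged_i = box_blur_rows(g_i, 5)
exchanged_j = box_blur_rows(g_j, 3)
```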

12
Motion-Blur-Aware Combined Local-Global (MB-CLG) Optical Flow

The proposed method:
1. Generate a Gaussian pyramid of L levels for each frame g_i in the sequence.
2. Starting from the coarsest level, apply MB-CLG to estimate all of the forward and backward flows over the sequence.
3. Upscale these estimates and apply MB-CLG at the next level.
4. Repeat until the finest level is reached, then refine the flows over the sequence once more to obtain the final estimates.
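The coarse-to-fine schedule can be sketched as follows; `estimate_level` is a placeholder for one MB-CLG pass, and the pyramid uses a simple box filter rather than a true Gaussian:

```python
import numpy as np

def downsample(img):
    """2x2 box average, a simple stand-in for the Gaussian pyramid's
    smooth-then-subsample step."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    c = img[:h, :w]
    return 0.25 * (c[0::2, 0::2] + c[0::2, 1::2] + c[1::2, 0::2] + c[1::2, 1::2])

def coarse_to_fine(frames, levels, estimate_level):
    """Skeleton of the scheme above: build an L-level pyramid per frame,
    start from the coarsest level with zero flow, and at each finer
    level upscale the estimate (doubling both its resolution and its
    magnitudes) before refining. `estimate_level(frames, flow)` stands
    in for one MB-CLG pass at a level."""
    pyramids = [[f] for f in frames]
    for levels_of_f in pyramids:
        for _ in range(levels - 1):
            levels_of_f.append(downsample(levels_of_f[-1]))
    flow = np.zeros((2,) + pyramids[0][-1].shape)
    for l in range(levels - 1, -1, -1):
        target = pyramids[0][l].shape
        if flow.shape[1:] != target:
            flow = 2.0 * np.repeat(np.repeat(flow, 2, axis=1), 2, axis=2)
            flow = flow[:, :target[0], :target[1]]
        flow = estimate_level([p[l] for p in pyramids], flow)
    return flow

# Dummy "estimator" that nudges the flow, just to trace the schedule.
frames = [np.zeros((4, 4)), np.zeros((4, 4))]
final = coarse_to_fine(frames, levels=2, estimate_level=lambda fr, fl: fl + 1.0)
```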

13
Initialization and Per-Level Refinement. Given the flows estimated at level l-1, the following algorithm refines them for the next level l: the estimated flows of f_i and f_{i+1} from level l-1 are upscaled by the pyramid's scale parameter and brought into the coordinates of the other frame. As previously mentioned, the corresponding blur functions are applied to obtain k_i, and k_{i+1} in a similar manner. Estimating the flows that match k_i and k_{i+1} then yields the updated forward/backward flows for the next level.

14
Handling Moving Objects and Occlusion. The constant smoothing parameter α = 0.012 is replaced by a spatially varying smoothing matrix A(x, y), with parameters K = 10 and σ_d = 0.4.
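One plausible way such a spatially varying weight could behave (hypothetical; the slide does not give the formula for A(x, y)), parameterized with the slide's α, K, and σ_d:

```python
import numpy as np

def smoothness_map(residual, alpha=0.012, k=10.0, sigma_d=0.4):
    """Hypothetical sketch, not the talk's exact A(x, y): start from the
    constant alpha and damp the smoothness weight, down to alpha/k,
    where the data residual is large, i.e. at likely occlusions or
    moving-object boundaries."""
    return alpha * (1.0 + (k - 1.0) * np.exp(-np.abs(residual) / sigma_d)) / k

calm = smoothness_map(0.0)    # no residual: full smoothing, equals alpha
edge = smoothness_map(10.0)   # strong residual: roughly alpha / k
```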

15
Results: Homography Sequences. Error maps for matching latent frames f_i and f_{i+1} with the estimated flows.

16
Results: Homography Sequences

17
Results: Varying Homography Parameters

18
Results: Effect of Adding Noise. White Gaussian i.i.d. noise with standard deviation σ.

19
Results: Wrinkle Artifacts

20
Moving Object Results: The Astronaut Sequence

22
Moving Object Results: The Bird Sequence

24
Summary: Main Contributions. The proposed method, MB-CLG:
- aims to solve optical flow in the presence of motion blur;
- employs a coarse-to-fine approach by constructing a Gaussian pyramid;
- estimates the blur functions of both the target and the source images;
- projects the blur functions onto different coordinates using "warp-the-flow";
- applies the exchanged blur functions to both frames;
- accounts for moving objects and occluded regions by replacing α with A(x, y);
- is proved to keep the brightness constancy assumption valid for the new pair of frames;
- achieves superior results compared to BlurFlow and traditional methods.
