
1 Motion Segmentation By Hadas Shahar (and John Y. A. Wang, and Edward H. Adelson, and Wikipedia and YouTube)

2 Introduction When given a video input, we would like to divide it into segments according to the different movement types. This is useful for object tracking and video analysis.

3 Session Map Building Blocks: ▫Layered Image Representation ▫Optic Flow Estimation ▫Affine Motion Estimation Algorithm Walkthrough Examples


5 Layered Image Representation Given a simple movement, what would be the best way to represent it? Which parameters would you select to represent the following scene?

6 Layered Image Representation For any movement we would like to have three maps: the Intensity Map, the Alpha Channel (or: opacity), and the Warp Map (or: optic flow).

7 For example, if we take a scene of a hand moving over a background, we would like to get:

8 Given these maps, it's possible to easily reproduce the occurring movement. But how can we produce these maps?
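
As a toy illustration of the "reproduce" direction, here is a minimal NumPy sketch that composes one frame out of the three maps, assuming a single foreground layer over a static background and a simple nearest-neighbour forward warp (the function and its setup are illustrative, not the paper's implementation):

```python
import numpy as np

def compose_frame(background, intensity, alpha, warp):
    """Compose one frame from the three maps of a single foreground layer.

    background: HxW background intensities
    intensity:  HxW foreground intensity map
    alpha:      HxW opacity map (1 = fully foreground)
    warp:       HxWx2 per-pixel displacement (the warp / optic-flow map)
    """
    h, w = intensity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Forward-warp every foreground pixel by its displacement (nearest neighbour).
    xs2 = np.clip(xs + warp[..., 0], 0, w - 1).astype(int)
    ys2 = np.clip(ys + warp[..., 1], 0, h - 1).astype(int)
    frame = background.astype(float)
    # Composite: the opacity decides how much the layer covers the background.
    frame[ys2, xs2] = alpha * intensity + (1 - alpha) * frame[ys2, xs2]
    return frame
```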

9 Session Map Building Blocks: ▫Layered Image Representation ▫Optic Flow Estimation ▫Affine Motion Estimation Algorithm Walkthrough Examples

10 Optic Flow Estimation (this is the heavy part) The optical flow is a field of vectors describing the movement in the image. For example:

11 Optic Flow Estimation (this is the heavy part) Note! Optical flow doesn't describe the movement that actually occurs, but the movement we perceive. Look at the barber's pole, for example:

12 Optic Flow Estimation (this is the heavy part) The actual motion is RIGHT, but the perceived motion (or: the optical flow) is UP.

13 Optic Flow Estimation - the Lucas-Kanade method In order to identify movements correctly, we have to work with several assumptions: ▫Brightness Constancy - the movement won't affect the brightness of the object ▫Constant motion in a neighborhood - neighboring pixels will move together

14 Optic Flow Estimation - the Lucas-Kanade method Definitions: x(t) is the point x at time t (x = (x, y)); I(x(t), t) is the brightness of point x at time t; ∇I = (I_x, I_y) is the spatial gradient.

15 Optic Flow Estimation - the Lucas-Kanade method Brightness Constancy Assumption: I(x(t), t) = const for any t. Meaning: the brightness of point x(t) is constant. Therefore the time derivative must be 0: d/dt I(x(t), t) = ∇I · v + I_t = 0. We would like to focus on v = dx/dt, the velocity.
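
As a rough sketch of where these quantities come from in practice (plain finite differences; real implementations usually smooth the frames first), assuming two consecutive grayscale frames:

```python
import numpy as np

def image_derivatives(frame0, frame1):
    """Estimate Ix, Iy, It between two consecutive grayscale frames.

    Brightness constancy then says Ix*vx + Iy*vy + It = 0 at every pixel,
    with v = (vx, vy) the velocity we are after.
    """
    f0 = frame0.astype(float)
    Ix = np.gradient(f0, axis=1)    # spatial derivative along x
    Iy = np.gradient(f0, axis=0)    # spatial derivative along y
    It = frame1.astype(float) - f0  # temporal derivative between the frames
    return Ix, Iy, It
```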

16 Optic Flow Estimation - the Lucas-Kanade method But why is the brightness assumption not enough? Let's look at the following example and try to determine the optical flow:

17 Optic Flow Estimation - the Lucas-Kanade method It looks like the grid is moving down and to the right, but it can actually be any of the following:

18 Optic Flow Estimation - the Lucas-Kanade method Since our window of observation is too small, we can't infer the actual motion taking place. This is called the Aperture Problem, and this is why we need the second constraint.

19 Optic Flow Estimation - the Lucas-Kanade method Constant Motion in a Neighborhood: we assume the velocity v is the same for every point in our entire window of observation, W(x) - the window, or environment, of x.

20 Optic Flow Estimation - the Lucas-Kanade method There is a trade-off here: the larger the window, the less accurately it represents the velocity (since we assume the velocity is constant there); and in the other direction, the smaller the window, the more likely we are to have the aperture problem.

21 Optic Flow Estimation - the Lucas-Kanade method Sadly, since there are always some changes in intensity (due to environment changes or even sensor noise), the derivative will never actually be 0. So we take the least-squares error: E(v) = Σ_{x' ∈ W(x)} [∇I(x') · v + I_t(x')]²

22 Optic Flow Estimation - the Lucas-Kanade method The minimal value will occur where the derivative of E with respect to v is 0: dE/dv = 2(Mv + q) = 0, with M = Σ ∇I ∇Iᵀ and q = Σ I_t ∇I (summed over W(x)). So v is: v = -M⁻¹q
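
A minimal per-window solver following this derivation might look like the sketch below (the determinant guard and its threshold are illustrative choices, anticipating the cases on the next slides):

```python
import numpy as np

def lucas_kanade_window(Ix, Iy, It):
    """Solve v = -M^(-1) q for one window from its derivative values."""
    g = np.stack([Ix.ravel(), Iy.ravel()])   # 2xN stack of gradients
    M = g @ g.T                              # M = sum of grad * grad^T
    q = g @ It.ravel()                       # q = sum of It * grad
    if np.linalg.det(M) < 1e-6:              # M not (safely) invertible:
        return None                          # flat region or edge, see below
    return -np.linalg.solve(M, q)            # v = (vx, vy)
```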

23 Optic Flow Estimation - the Lucas-Kanade method A few notes regarding M: M is a 2x2 matrix, made up of the gradient times its transpose: M = Σ_{W(x)} ∇I ∇Iᵀ. We can divide M into 3 cases (this is going to be very similar to Harris corner detection).

24 Optic Flow Estimation - the Lucas-Kanade method Case 1: If the gradient is 0, then M = 0, both eigenvalues are 0, and v can have any value. This occurs when our window is at a flat, featureless region:

25 Optic Flow Estimation - the Lucas-Kanade method Case 2: If the gradient is constant along one direction, M is not 0 but has only one nonzero eigenvalue, so v is determined only along the gradient. This occurs when our window is at an edge:

26 Optic Flow Estimation - the Lucas-Kanade method Case 3: If M is invertible (det ≠ 0, two nonzero eigenvalues), we can find v easily. This occurs when our window is at a corner:
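
A small sketch that tells the three cases apart from the eigenvalues of M, in the spirit of Harris corner detection (the threshold tau is an arbitrary illustrative value):

```python
import numpy as np

def classify_window(M, tau=1e-3):
    """Classify a window by the eigenvalues of its 2x2 matrix M."""
    lam_small, lam_big = np.linalg.eigvalsh(M)  # ascending order
    if lam_big < tau:    # both eigenvalues ~0: flat region, v unconstrained
        return "flat"
    if lam_small < tau:  # one nonzero eigenvalue: edge, aperture problem
        return "edge"
    return "corner"      # M invertible: v fully determined
```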

27 Optic Flow Estimation - the Lucas-Kanade method After we find v for every window, we get a velocity vector map, or: the Optical Flow.

28 Session Map Building Blocks: ▫Layered Image Representation ▫Optic Flow Estimation ▫Affine Motion Estimation Algorithm Walkthrough Examples

29 Affine Estimation In Affine Estimation, we assume our motions can be described by affine transformations. This includes: ▫Translation ▫Rotation ▫Zoom ▫Shear And this covers a lot of the motions we encounter in the real world.

30 Affine Estimation The idea behind Affine Estimation is quite simple: find the affine transformation between 2 images that minimizes the difference between them.

31 Affine Estimation Quick reminder: an affine transformation maps each point p = (x, y) to p' = A·p + t, i.e. x' = a·x + b·y + c and y' = d·x + e·y + f.

32-33 Affine Estimation [figures illustrating the affine transformations]
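
For illustration, a tiny NumPy sketch that applies such a transformation (a rotation combined with zoom and a translation, all values arbitrary) to a few points:

```python
import numpy as np

def apply_affine(points, A, t):
    """Apply p' = A @ p + t to an Nx2 array of points."""
    return points @ A.T + t

theta = np.deg2rad(30)                        # rotate by 30 degrees...
A = 1.5 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])  # ...and zoom by 1.5
t = np.array([10.0, -4.0])                    # then translate
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(apply_affine(square, A, t))             # the warped square corners
```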

34 Affine Estimation There are several ways to do this, most commonly by matching feature points between the 2 images and calculating the affine transformation matrix (remember?). What we'll use won't be based on feature points, but on the velocity field calculated from the Optical Flow. We'll get to that later though, so for now: no formulas!

35 Session Map Building Blocks: ▫Layered Image Representation ▫Optic Flow Estimation ▫Affine Motion Estimation Algorithm Walkthrough Examples

36 Part 2 - The Algorithm Walkthrough So how can we combine all the information we've gathered so far to create our 3 maps for every frame?

37 The Algorithm Walkthrough Here's the basic idea:

38 The Algorithm Walkthrough Here's the basic idea: First, we calculate the Optical Flow - this gives us the Warp Map. But since each window looks for only one overall motion, the flow may disregard object boundaries, and we may get several different objects mixed into one motion.

39 Optical Flow Estimator

40 The Algorithm Walkthrough Here's the basic idea: Then, we divide the image(s) into arbitrary sub-regions and use Affine Estimation, which helps us find the local motion within every sub-region.

41 Affine Regression and Clustering

42 The Algorithm Walkthrough Here's the basic idea: Then we check the difference between our initial guess and the movement observed, and reassign the sub-regions to minimize the error.

43 Hypothesis Testing [figure: our estimation using an affine transformation vs. the actual change]

44 The Algorithm Walkthrough Here's the basic idea: We repeat the cycle iteratively, constantly refining the motion estimation. Convergence is achieved when either: 1. Only a few points are reassigned in an iteration, or 2. The max number of iterations is reached. (See the sketch below.)
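
Putting the walkthrough together, here is a structure-only sketch of that loop, in Python syntax. Every helper name in it (estimate_optical_flow, initial_grid_regions, fit_affine, kmeans_merge, reassign_pixels) is a hypothetical stand-in for a stage described on these slides, not a real API:

```python
def motion_segmentation(frames, n_regions=20, max_iters=30, min_changed=50):
    """Structure-only pseudocode of the iterative segmentation loop."""
    flow = estimate_optical_flow(frames)              # step 1: the warp map
    regions = initial_grid_regions(frames[0].shape, n_regions)
    hypotheses = []
    for _ in range(max_iters):
        hypotheses = [fit_affine(flow, r) for r in regions]  # per-region H_i
        hypotheses = kmeans_merge(hypotheses)         # merge similar motions
        regions, n_changed = reassign_pixels(flow, hypotheses)
        if n_changed < min_changed:                   # convergence: few points moved
            break
    return regions, hypotheses                        # the segmentation = opacity maps
```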

45 Region reassignments: in each iteration we refine our estimation results. This segmentation is what provides us with the Opacity Map.

46 The Algorithm Walkthrough Reminder: this is an affine transformation matrix: [a b c; d e f] Made up of 6 variables, to cover the rotation, translation, zoom and shear operations.

47 The Algorithm Walkthrough - definitions Let V be our velocity field (obtained by the Optical Flow estimation). We would like to use the velocity to represent the affine transformation. But how can we work with V in such a way? We break V into Vx and Vy, two components representing the velocity in the X and Y directions respectively.

48 The Algorithm Walkthrough - definitions Vx(x, y) = a·x + b·y + c Vy(x, y) = d·x + e·y + f where a, b, c, d, e, f are the variables of the affine transformation.

49 The Algorithm Walkthrough - definitions [equations shown as an image]

50 The Algorithm Walkthrough - definitions And last but not least, we define φ(x, y) = [1, x, y]ᵀ - that's our original coordinates vector.

51 The Algorithm Walkthrough So basically, so far we've got the following parameterization: V(x, y) = H_i · φ(x, y), relating the velocity V, the affine parameter matrix H_i, and the coordinate vector φ.

52 The Algorithm Walkthrough Then we can define our affine motion field like this: V_Hi(x, y) = [Vx(x, y), Vy(x, y)]ᵀ = H_i · φ(x, y), where H_i is the 2x3 matrix with rows (c, a, b) and (f, d, e).

53 The Algorithm Walkthrough And we can calculate H_i from V using the following formula: H_iᵀ = [Σ φ φᵀ]⁻¹ [Σ φ Vᵀ] where the sums run over all the pixels in the region, V supplies the velocity's x and y parameters, and [Σ φ φᵀ]⁻¹ is the pseudo-inverse part.
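
A least-squares sketch of this regression for one region, assuming a dense HxWx2 flow field and a boolean region mask (np.linalg.lstsq performs the pseudo-inverse solve):

```python
import numpy as np

def fit_affine_hypothesis(flow, mask):
    """Fit the 6 affine parameters of one region from the flow field.

    Returns H_i as a 2x3 matrix with V(x, y) = H_i @ [1, x, y]^T.
    """
    ys, xs = np.nonzero(mask)                 # pixels belonging to the region
    phi = np.stack([np.ones_like(xs), xs, ys], axis=1).astype(float)  # Nx3
    V = flow[ys, xs]                          # Nx2 velocities at those pixels
    # Solves the normal equations H^T = (sum phi phi^T)^(-1) (sum phi V^T).
    Ht, *_ = np.linalg.lstsq(phi, V, rcond=None)
    return Ht.T                               # 2x3
```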

54 The Algorithm Walkthrough We know we can divide our region segmentations into 2 cases: ▫A sub-region contains several object boundaries (i.e., the region contains several small movements) ▫An object is covered by several sub-regions (i.e., we need to merge regions in order to view the full movement)

55 The Algorithm Walkthrough Case 1 - a sub-region contains several object boundaries:


57 The Algorithm Walkthrough In this case, since we divide our image into fairly small regions, we would like to ignore these sections. These regions will have a large residual error, so we can identify them and remove them from our calculations.

58 The Algorithm Walkthrough Case 2 - an object is covered by several sub-regions:

59 The Algorithm Walkthrough In this case, we would like to merge the 2 (or more) sub-regions, so that together they cover our entire object. Since the sub-regions contain the same moving object, their movement parameters will be very similar. So how do we do it?

60 The Algorithm Walkthrough We move our hypotheses into affine motion space - parameterizing them using the velocity rather than the spatial values. Then we group them using K-Means clustering (we already know how to do that!). This merges similar hypotheses and provides us with a single representative for each motion.
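
A plain k-means sketch over the 6-D affine parameter vectors (k, the iteration count, and the random seed are arbitrary illustrative choices):

```python
import numpy as np

def merge_hypotheses(H_list, k=4, iters=10, seed=0):
    """Cluster 2x3 affine hypotheses in affine motion space with k-means."""
    pts = np.array([np.asarray(H, dtype=float).ravel() for H in H_list])  # Nx6
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign each hypothesis to its nearest representative...
        dists = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each representative to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return [c.reshape(2, 3) for c in centers]   # one H per distinct motion
```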

61 The Algorithm Walkthrough Now that we've calculated the affine transformation for each region, we would like to check how we did compared to the actual movement. For this we use a mean-square cost function: C(i(x, y)) = Σ_{x,y} [V(x, y) - V_{H_{i(x,y)}}(x, y)]² Where: i(x, y) is the hypothesis assigned to the (x, y) coordinate; V(x, y) is the estimated motion field; V_{H_i}(x, y) is the affine motion field of the ith hypothesis.

62 The Algorithm Walkthrough We wish to minimize the difference between our hypotheses and the actual motion, so at each location we take the minimum-cost assignment: i₀(x, y) = argmin_i [V(x, y) - V_{H_i}(x, y)]²

63 The Algorithm Walkthrough Now we divide the image(s) into motion regions by taking a threshold on the i₀ costs: a pixel whose minimum cost is above the threshold is left unassigned. This gives us the Opacity Map!
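
A sketch of this minimum-cost assignment plus threshold, reusing the φ = [1, x, y]ᵀ parameterization from earlier (the threshold value is illustrative):

```python
import numpy as np

def assign_pixels(flow, hypotheses, threshold=1.0):
    """Label each pixel with its minimum-cost hypothesis, or -1 if none fits."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    phi = np.stack([np.ones_like(xs), xs, ys], axis=-1).astype(float)  # HxWx3
    costs = []
    for H in hypotheses:
        V_hi = phi @ np.asarray(H, dtype=float).T    # predicted affine flow, HxWx2
        costs.append(((flow - V_hi) ** 2).sum(axis=-1))
    costs = np.stack(costs)                          # (num hypotheses) x H x W
    i0 = costs.argmin(axis=0)                        # minimum-cost assignment
    i0[costs.min(axis=0) > threshold] = -1           # poorly explained pixels: no layer
    return i0                                        # the layer support / opacity maps
```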

64 The Algorithm Walkthrough And then we just iterate until the regions stop changing, or until the max number of iterations is reached. And we're done!

65 Examples! https://www.youtube.com/watch?v=7BtlB8rEqrY https://www.youtube.com/watch?v=nnp9qc8O8eE https://www.youtube.com/watch?v=4ny8rR1hesU

66 Summary We saw how to calculate the Optical Flow of a given video, and how to use the Optical Flow in combination with the Affine Estimation model, iteratively, to get a better and better approximation of the motion.

67 Conclusions Motion segmentation is an important part of any motion-related algorithm, and a useful and powerful tool in computer vision.

68 Credits John Y. A. Wang & Edward H. Adelson - Layered Representation for Motion Analysis (1993) Edward H. Adelson - Layered Representation for Image Coding (1991) Bruce D. Lucas & Takeo Kanade - An Iterative Image Registration Technique with an Application to Stereo Vision (1981)

