
1 Counting Crowded Moving Objects Vincent Rabaud and Serge Belongie Department of Computer Science and Engineering University of California, San Diego {vrabaud,sjb}@cs.ucsd.edu Presentation by: Yaron Koral IDC Herzliya, Israel

2 AGENDA Motivation Challenges Algorithm Experimental Results 2

3 AGENDA Motivation Challenges Algorithm Experimental Results 3

4 Motivation Counting crowds of people Counting herds of animals Counting migrating cells Anything goes, as long as the crowd is homogeneous!! 4

5 AGENDA Motivation Challenges Algorithm Experimental Results 5

6 Challenges The problem of occlusion – Inter-object – Self-occlusion Large number of independent motions – Dozens of erratically moving objects – Requires more than two successive frames 6 Surveillance camera viewing a crowd from a distant viewpoint, but zoomed in, such that the effects of perspective are minimized.

7 AGENDA Motivation Challenges Algorithm Experimental Results 7

8 Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 8

9 Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 9

10 Harris Corner Detector – What are Good Features? C. Harris and M. Stephens, “A Combined Corner and Edge Detector”, 1988. We should easily recognize a corner by looking through a small window: shifting the window in any direction should give a large change in intensity.

11 Harris Detector: Basic Idea “flat” region: no change in all directions “edge”: no change along the edge direction “corner”: significant change in all directions

12 Harris Detector: Mathematics Change of intensity for a shift [u,v]: E(u,v) = \sum_{x,y} w(x,y)\,[\,I(x+u, y+v) - I(x,y)\,]^2, where I(x,y) is the intensity, I(x+u, y+v) the shifted intensity, and w(x,y) the window function (either a Gaussian, or 1 inside the window and 0 outside).

13 Harris Detector: Mathematics For small [u,v] we use the first-order Taylor expansion I(x+u, y+v) \approx I(x,y) + u\,I_x + v\,I_y, so that E(u,v) \approx \sum_{x,y} w(x,y)\,[\,u\,I_x + v\,I_y\,]^2.

14 Harris Detector: Mathematics For small shifts [u,v] we have a bilinear approximation: E(u,v) \approx [u\ v]\, M\, [u\ v]^T, where M is a 2×2 matrix computed from image derivatives: M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}.
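To make the construction of M concrete, here is a minimal numpy sketch (the function name structure_tensor and the Gaussian window width sigma are illustrative choices, not from the slides): it forms the products of image derivatives and sums them over a Gaussian-weighted window, giving the entries of M at every pixel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(img, sigma=1.5):
    """Per-pixel entries of M = sum_w [Ix^2, IxIy; IxIy, Iy^2] (illustrative sketch)."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)              # image derivatives
    # Gaussian window w(x, y) applied to each product of derivatives
    Mxx = gaussian_filter(Ix * Ix, sigma)
    Mxy = gaussian_filter(Ix * Iy, sigma)
    Myy = gaussian_filter(Iy * Iy, sigma)
    return Mxx, Mxy, Myy                   # entries of M at every pixel
```

From these three maps one can evaluate either the Harris response or the smallest eigenvalue used for feature selection later in the talk.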

15 Harris Detector: Mathematics Denote by e_i the i-th eigenvector of M, whose eigenvalue is λ_i. Conclusions: the intensity change for a shift along e_i is governed by λ_i, so the two eigenvalues summarize how E(u,v) grows in every direction.

16 Harris Detector: Mathematics Intensity change in a shifting window: eigenvalue analysis. λ_1, λ_2 – eigenvalues of M. The ellipse E(u,v) = const has its axes along the directions of slowest and fastest change, with axis lengths (λ_max)^{-1/2} and (λ_min)^{-1/2}.

17 Harris Detector: Mathematics Classification of image points using the eigenvalues of M: “Corner” – λ_1 and λ_2 are large, λ_1 ~ λ_2; E increases in all directions. “Edge” – λ_1 >> λ_2 or λ_2 >> λ_1; E changes only across the edge. “Flat” region – λ_1 and λ_2 are small; E is almost constant in all directions.

18 Feature point extraction (homogeneous region / edge / corner) Find points for which min(λ_1, λ_2) is maximum, i.e. maximize the smallest eigenvalue of M.

19 Tracking Features Sum of Squared Differences (SSD) for tracking features. SSD is optimal in the ML sense when: 1. the constant brightness assumption holds, and 2. the noise is i.i.d. additive Gaussian.

20 Exhaustive Search Loop over the whole parameter space. Not realistic in most cases – Computationally expensive: e.g. searching for a 100×100 patch in a 1000×1000 image using only translation takes ~10^10 operations! – Explodes with the number of parameters – Precision limited to the step size

21 The Problem Find (u,v) that minimizes the SSD over a region A, E(u,v) = \sum_{(x,y) \in A} [\,J(x+u, y+v) - I(x,y)\,]^2, assuming (u,v) is constant over all of A.

22 Iterative Solution Lucas-Kanade (1981) – Use a Taylor expansion of I (the optical flow equation) – Find the displacement d = (u,v) that solves the resulting linearized least-squares problem.
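As a rough illustration (not the authors' code), a single Lucas-Kanade update for a window reduces to a 2×2 linear system built from the image gradients and the temporal difference; the name lk_step and the plain single-iteration, non-pyramidal form are simplifying assumptions.

```python
import numpy as np

def lk_step(I_patch, J_patch):
    """One Lucas-Kanade update for a window: solve Z d = e (illustrative sketch)."""
    Iy, Ix = np.gradient(I_patch.astype(np.float64))
    It = J_patch.astype(np.float64) - I_patch.astype(np.float64)
    # 2x2 normal equations accumulated over the window
    Z = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    e = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    d = np.linalg.solve(Z, e)              # displacement (u, v) for this iteration
    return d
```

In practice this step is iterated, re-sampling the second frame at the updated position until the displacement converges.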

23 Feature Tracking with KLT (We’re back to crowd counting…) KLT is a feature tracking algorithm Driving Principle: – Determine the motion parameters of a local window W from image I to the consecutive image J – The center of the window defines the tracked feature 23

24 Feature Tracking with KLT Given a window W, the affine motion parameters A and d are chosen to minimize the dissimilarity \epsilon = \iint_W [\,J(Ax + d) - I(x)\,]^2 \, w(x)\, dx. 24

25 Feature Tracking with KLT It is assumed that only the translation d matters between two consecutive frames, therefore a pure-translation variant of the SSD is used. 25

26 Feature Tracking with KLT The tracker does not track single pixels but windows of pixels. A good window is one that can be tracked well: a texture patch with high intensity variation in both the x and y directions, such as a corner. Denote the intensity function by g(x,y) and consider the local intensity variation matrix Z = \iint_W \begin{bmatrix} g_x^2 & g_x g_y \\ g_x g_y & g_y^2 \end{bmatrix} dx. A window is accepted as a candidate feature if, at the center of the window, both eigenvalues of Z exceed a predefined threshold t: min(λ_1, λ_2) > t.
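A hedged sketch of this selection step using OpenCV's minimum-eigenvalue criterion (the parameter values qualityLevel=0.01 and minDistance=5 are illustrative choices, not taken from the slides):

```python
import cv2
import numpy as np

def select_features(gray, max_corners=500):
    """Pick windows whose smaller eigenvalue of Z is large (Shi-Tomasi criterion)."""
    pts = cv2.goodFeaturesToTrack(
        gray,
        maxCorners=max_corners,
        qualityLevel=0.01,        # threshold relative to the strongest min-eigenvalue
        minDistance=5,            # avoid picking overlapping windows
        useHarrisDetector=False,  # False -> min(lambda1, lambda2) criterion
    )
    return pts if pts is not None else np.empty((0, 1, 2), np.float32)
```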

27 KLT: Interpreting the Eigenvalues The eigenvectors and eigenvalues of M relate to edge direction and magnitude – The eigenvector associated with the larger eigenvalue points in the direction of fastest intensity change – The other eigenvector is orthogonal to it 27

28 Feature Tracking with KLT The good windows are chosen to be the ones whose Z has a minimal eigenvalue above a threshold. Once this eigenvalue drops below the threshold, the feature track is terminated. The original KLT ends when all feature tracks are terminated; the KLT used here re-spawns features all the time. 28
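To illustrate how such tracks can be maintained frame to frame, here is a minimal pyramidal KLT step in OpenCV (a generic sketch, not the authors' implementation; the window size and pyramid depth are assumptions):

```python
import cv2
import numpy as np

def track_pair(prev_gray, curr_gray, prev_pts):
    """One KLT step between consecutive frames; lost features are dropped (sketch)."""
    curr_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(15, 15), maxLevel=3)       # window size / pyramid depth are assumptions
    ok = status.ravel() == 1
    return prev_pts[ok], curr_pts[ok]       # matched pairs of feature positions
```

Features whose status flag is 0 are exactly the terminated tracks described above.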

29 KLT: Interpreting the Eigenvalues 29


33 Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 33

34 Increased Efficiency #1 Associate only one window with each feature – Use a uniform weight function equal to 1/(window area |W|) – Determine quality by comparing the candidate windows – The computation of the different Z matrices is accelerated by the “integral image” technique [1] 34 [1] Viola & Jones 2004
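The integral-image trick lets the sum of any derivative product over a rectangular window be read off in O(1) after a single cumulative-sum pass. A small numpy sketch (function names are illustrative):

```python
import numpy as np

def integral_image(a):
    """Cumulative 2-D sum with a zero border row/column prepended."""
    s = np.cumsum(np.cumsum(a, axis=0), axis=1)
    return np.pad(s, ((1, 0), (1, 0)))

def window_sum(ii, y0, x0, y1, x1):
    """Sum of the original array over rows [y0, y1) and cols [x0, x1), in O(1)."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

# The entries of Z for any window come from the integral images of Ix*Ix, Ix*Iy, Iy*Iy.
```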

35 Increased Efficiency #2 Run on sample training frames first – Determine the parameters that lead to the optimal window sizes – This reduces the search to less than 5% of the possible parameter set – All objects are from the same class 35 Surveillance camera viewing a crowd from a distant viewpoint, but zoomed in, such that the effects of perspective are minimized.

36 Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 36

37 Feature Re-Spawning Over time, KLT loses tracks because of: – Inter-object occlusion – Self-occlusion – Exit from the picture – Appearance changes due to perspective and articulation KLT recreates features all the time – Computationally intensive – Weak features are renewed 37

38 Feature Re-Spawning Re-spawn features only at specific locations in space and time, and propagate them forward and backward in time – Find the biggest “holes” in the set of tracks – Re-spawn features in the frame given by the weighted average of the hole times 38
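One simple way to realize the "spawn only where tracks are missing" idea is to mask out the neighborhoods of currently tracked features and detect new ones elsewhere. This OpenCV sketch is an assumption used for illustration (the radius parameter is not from the slides):

```python
import cv2
import numpy as np

def respawn(gray, tracked_pts, radius=10, max_new=200):
    """Detect new features only in regions not covered by live tracks (sketch)."""
    mask = np.full(gray.shape, 255, np.uint8)
    for p in tracked_pts.reshape(-1, 2):
        cv2.circle(mask, (int(p[0]), int(p[1])), radius, 0, -1)   # block existing tracks
    new_pts = cv2.goodFeaturesToTrack(gray, max_new, 0.01, radius, mask=mask)
    return new_pts  # may be None if no uncovered corner is strong enough
```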

39 Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 39

40 Trajectory Conditioning The KLT tracker gives a set of trajectories with poor homogeneity – They don’t begin and end at the same times – Occlusions can result in trajectory fragmentation – A feature can lose its strength, resulting in less precise tracks Solution: condition the data – Spatially and temporally 40

41 Trajectory Conditioning Each trajectory is influenced by its spatial neighbors – Apply a box around each raw trajectory – Follow all neighboring trajectories from the time the trajectory started 41
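As a rough sketch of this kind of spatial conditioning (the box-averaging scheme below is an assumption for illustration, not the exact procedure from the paper): each trajectory's displacement at a given time is smoothed using the displacements of the trajectories that lie inside a box around it.

```python
import numpy as np

def condition_step(positions, displacements, box=(40.0, 40.0)):
    """Smooth each feature's raw displacement using features inside a spatial box.

    positions:      (N, 2) feature locations at time t
    displacements:  (N, 2) raw KLT displacements from t to t+1
    """
    positions = np.asarray(positions, float)
    raw = np.asarray(displacements, float)
    half = np.asarray(box, float) / 2.0
    conditioned = raw.copy()
    for i, p in enumerate(positions):
        near = (np.abs(positions - p) <= half).all(axis=1)   # neighbors in the box
        conditioned[i] = raw[near].mean(axis=0)              # average their motion
    return conditioned
```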

42 Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 42

43 Trajectory Clustering Determine the number of objects at time t by clustering trajectories. Since at time t objects may be close to each other, focus attention on a time interval around t (half-width of 200 frames). Build a connectivity graph – At each time step, the present features form the nodes of a connectivity graph G – Edges indicate possible membership to a common object. 43

44 Trajectory Clustering Connectivity Graph – Bounding box: as small as possible, yet able to contain every possible instance of the object – If two features do not stay within such a box, they do not belong to the same object. – The 3 parameters of this box (including an articulation factor) are learned from training data. 44
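A sketch of how such a graph could be built (the box test written here is one plausible reading of the slide, used only for illustration): two features are connected if, over the whole time window, their relative displacement stays within the learned bounding box.

```python
import itertools
import numpy as np

def connectivity_graph(trajs, box_w, box_h):
    """trajs: (N, T, 2) feature positions over the time window; returns adjacency matrix."""
    n = trajs.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    for i, j in itertools.combinations(range(n), 2):
        rel = trajs[i] - trajs[j]                      # relative position over time
        if np.ptp(rel[:, 0]) <= box_w and np.ptp(rel[:, 1]) <= box_h:
            adj[i, j] = adj[j, i] = True               # the pair can share an object
    return adj
```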

45 Trajectory Clustering Rigid parts merging – Features that share similar movement during their whole life span belong to a rigid part of an object, and consequently to a common object – RANSAC is applied to sets of trajectories that lie within the time window and are connected in graph G 45
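A minimal translation-only RANSAC over trajectories, to give the flavor of this merging step (the translation model, inlier threshold, and iteration count are simplifying assumptions; the paper's motion model may differ):

```python
import numpy as np

def ransac_merge(trajs, thresh=2.0, iters=100, seed=0):
    """trajs: (N, T, 2). Find the largest set moving with a common translation (sketch)."""
    rng = np.random.default_rng(seed)
    n = trajs.shape[0]
    disp = trajs - trajs[:, :1, :]          # motion of each trajectory relative to its start
    best = np.zeros(n, dtype=bool)
    for _ in range(iters):
        k = rng.integers(n)                 # hypothesis: one trajectory's motion is the model
        resid = np.linalg.norm(disp - disp[k], axis=2).max(axis=1)
        inliers = resid <= thresh           # trajectories that follow the same motion
        if inliers.sum() > best.sum():
            best = inliers
    return best
```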

46 Trajectory Clustering Agglomerative Clustering – At each iteration, the two closest sets are considered – If all their features are linked to each other in the connectivity graph, they are merged together – Otherwise, the next closest pair of sets is considered – Proceed until all possible pairs have been analyzed 46
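A compact sketch of this merge loop (the set-to-set distance and the full-connectivity test are written out naively; an illustration under stated assumptions, not the authors' code). Here centroids holds one representative point per trajectory and adj is the connectivity graph from the previous sketch.

```python
import itertools
import numpy as np

def agglomerate(centroids, adj):
    """Greedy merging of trajectory sets; sets may only merge if fully connected in G."""
    clusters = [[i] for i in range(len(centroids))]
    merged = True
    while merged:
        merged = False
        pairs = sorted(itertools.combinations(range(len(clusters)), 2),
                       key=lambda p: np.linalg.norm(
                           np.mean([centroids[i] for i in clusters[p[0]]], axis=0) -
                           np.mean([centroids[i] for i in clusters[p[1]]], axis=0)))
        for a, b in pairs:                  # closest pair first
            if all(adj[i, j] for i in clusters[a] for j in clusters[b]):
                clusters[a] += clusters[b]
                del clusters[b]
                merged = True
                break                       # recompute distances after every merge
    return clusters                         # the number of clusters estimates the count
```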

47 AGENDA Motivation Challenges Algorithm Experimental Results 47

48 Experimental results Datasets – USC: elevated view of a crowd consisting of zero to twelve persons – LIBRARY: elevated view of a crowd of twenty to fifty persons – CELLS: red blood cell dataset consisting of fifty to one hundred blood cells 48

49 Experimental results 49

50 Experimental results 50 (plot comparing Estimated counts with Ground Truth)

51 Experimental results 51

52 Conclusion A new way of segmenting the motions generated by multiple objects in a crowd Enhancements to the KLT tracker Conditioning and clustering techniques 52

53 Thank You! 53

