Counting Crowded Moving Objects Vincent Rabaud and Serge Belongie Department of Computer Science and Engineering University of California, San Diego


Presentation by: Yaron Koral, IDC, Herzliya, Israel

AGENDA
- Motivation
- Challenges
- Algorithm
- Experimental Results


Motivation
- Counting crowds of people
- Counting herds of animals
- Counting migrating cells
Anything goes, as long as the crowd is homogeneous!


Challenges
The problem of occlusion:
- Inter-object occlusion
- Self-occlusion
A large number of independent motions:
- Dozens of erratically moving objects
- Requires more than two successive frames
Setting: a surveillance camera views the crowd from a distant viewpoint, but zoomed in, so that the effects of perspective are minimized.


Algorithm Highlights
- Feature Tracking with KLT
- Increased Efficiency
- Feature Re-Spawning
- Trajectory Conditioning
- Trajectory Clustering


Harris Corner Detector – What Are Good Features? (C. Harris, M. Stephens, “A Combined Corner and Edge Detector”)
We should be able to recognize a corner by looking through a small window: shifting the window in any direction should give a large change in intensity.

Harris Detector: Basic Idea
- “Flat” region: no change in any direction
- “Edge”: no change along the edge direction
- “Corner”: significant change in all directions

Harris Detector: Mathematics
Change of intensity for a shift [u,v]:
E(u,v) = Σ_{x,y} w(x,y) [I(x+u, y+v) − I(x,y)]²
where I(x,y) is the intensity, I(x+u, y+v) the shifted intensity, and the window function w(x,y) is either a Gaussian or a binary mask (1 inside the window, 0 outside).

Harris Detector: Mathematics
For small [u,v], a first-order Taylor expansion gives:
I(x+u, y+v) ≈ I(x,y) + u·I_x(x,y) + v·I_y(x,y)

Harris Detector: Mathematics
For small shifts [u,v] we therefore have a bilinear approximation:
E(u,v) ≈ [u v] M [u v]ᵀ
where M is a 2×2 matrix computed from the image derivatives:
M = Σ_{x,y} w(x,y) [ I_x²  I_x I_y ; I_x I_y  I_y² ]

Harris Detector: Mathematics
Denote by e_i the i-th eigenvector of M and by λ_i its eigenvalue. Conclusion: the eigenvalues λ_i measure the intensity change along the corresponding eigenvector directions e_i.

Harris Detector: Mathematics
Intensity change in a shifting window: eigenvalue analysis. With λ₁, λ₂ the eigenvalues of M, the ellipse E(u,v) = const has axis lengths λ_max^(−1/2) (along the direction of fastest change) and λ_min^(−1/2) (along the direction of slowest change).

Harris Detector: Mathematics
Classification of image points using the eigenvalues of M:
- “Corner”: λ₁ and λ₂ are both large, λ₁ ~ λ₂; E increases in all directions
- “Edge”: λ₁ >> λ₂ (or λ₂ >> λ₁)
- “Flat” region: λ₁ and λ₂ are both small; E is almost constant in all directions

Feature Point Extraction
An image point is homogeneous, an edge, or a corner. Find points for which min(λ₁, λ₂) is maximal, i.e., maximize the smallest eigenvalue of M.
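As an illustration only (not the authors' code), the min-eigenvalue criterion above can be sketched in NumPy: accumulate the entries of M over a box window and score every pixel by the smaller eigenvalue of its 2×2 matrix, using the closed-form eigenvalue formula. `box_sum` and `min_eig_score` are hypothetical helper names.

```python
import numpy as np

def box_sum(a, r):
    """Sliding-window sum over a (2r+1) x (2r+1) box centered at each
    pixel, computed with a summed-area table (zero padding at borders)."""
    ap = np.pad(a, r)
    s = np.pad(ap, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    n = 2 * r + 1
    return s[n:, n:] - s[:-n, n:] - s[n:, :-n] + s[:-n, :-n]

def min_eig_score(img, r=2):
    """Score every pixel by the smallest eigenvalue of the local
    gradient matrix M (corners score high; edges and flat regions low)."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)            # image derivatives
    Sxx = box_sum(Ix * Ix, r)            # entries of M, summed
    Syy = box_sum(Iy * Iy, r)            # over the window
    Sxy = box_sum(Ix * Iy, r)
    # closed-form eigenvalues of the symmetric 2x2 matrix [[Sxx,Sxy],[Sxy,Syy]]
    half_trace = 0.5 * (Sxx + Syy)
    radius = 0.5 * np.sqrt((Sxx - Syy) ** 2 + 4.0 * Sxy ** 2)
    return half_trace - radius           # the smaller eigenvalue
```

On a synthetic white square, the score is largest at the square's corners, near zero along its straight edges, and zero in flat regions, matching the classification on the previous slide.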

Tracking Features with the Sum of Squared Differences (SSD)
SSD is optimal in the maximum-likelihood sense when:
1. the constant-brightness assumption holds, and
2. the noise is i.i.d. additive Gaussian.

Exhaustive Search
Loop over the whole parameter space. Not realistic in most cases:
- Computationally expensive: e.g., searching for a 100×100 patch in a 1000×1000 image using only translation takes ~10^10 operations!
- The cost explodes with the number of parameters
- Precision is limited to the step size
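A brute-force SSD matcher makes the cost concrete; this hypothetical `ssd_exhaustive_match` performs exactly the translation-only search whose operation count the slide estimates (image area × template area comparisons).

```python
import numpy as np

def ssd_exhaustive_match(image, template):
    """Brute-force translational search: slide the template over every
    position and return the (row, col) offset minimizing the SSD.
    Cost is O(image_area * template_area) -- the explosion of the
    exhaustive approach."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            d = image[r:r + h, c:c + w] - template
            ssd = np.sum(d * d)          # sum of squared differences
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```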

The Problem
Find the (u,v) that minimizes the SSD over a region A, assuming (u,v) is constant over all of A.

Iterative Solution: Lucas-Kanade (1981)
- Use a Taylor expansion of I (the optical flow equation)
- Solve the resulting linear system for the displacement, and iterate
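A minimal sketch of the Lucas-Kanade iteration for a single window, assuming pure translation and nearest-pixel warping (a real tracker would interpolate sub-pixel positions and typically use image pyramids); `lk_translation` is a hypothetical name, not the authors' implementation.

```python
import numpy as np

def lk_translation(I, J, center, r=4, iters=10):
    """Estimate the translation d with J(x + d) ~ I(x) inside a
    (2r+1)^2 window, by iterating the linearized optical-flow step."""
    y, x = center
    win_I = I[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    d = np.zeros(2)                              # (dy, dx)
    for _ in range(iters):
        dy, dx = np.round(d).astype(int)         # nearest-pixel warp
        win_J = J[y + dy - r:y + dy + r + 1,
                  x + dx - r:x + dx + r + 1].astype(float)
        gy, gx = np.gradient(win_J)              # window gradients
        e = win_I - win_J                        # residual
        # normal equations Z * step = b (the optical-flow equation)
        Z = np.array([[np.sum(gy * gy), np.sum(gy * gx)],
                      [np.sum(gy * gx), np.sum(gx * gx)]])
        b = np.array([np.sum(gy * e), np.sum(gx * e)])
        step = np.linalg.solve(Z, b)
        d += step
        if np.hypot(step[0], step[1]) < 1e-3:    # converged
            break
    return d
```

The 2×2 matrix Z built here is exactly the local intensity-variation matrix used later to decide which windows are trackable at all.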

Feature Tracking with KLT (we're back to crowd counting…)
KLT is a feature-tracking algorithm. Driving principle:
- Determine the motion parameters of a local window W from image I to the consecutive image J
- The center of the window defines the tracked feature

Feature Tracking with KLT
Given a window W, the affine motion parameters A and d are chosen to minimize the dissimilarity
ε = ∬_W [J(Ax + d) − I(x)]² w(x) dx

Feature Tracking with KLT
Between two consecutive frames it is assumed that only the translation d matters, so a pure-translation variant of the SSD is used.

Feature Tracking with KLT
The tracker does not track single pixels but windows of pixels. A good window is one that can be tracked well: a texture patch with high intensity variation in both the x and y directions, such as a corner. Denote the intensity function by g(x,y) and consider the local intensity variation matrix
Z = Σ_W [ g_x²  g_x g_y ; g_x g_y  g_y² ]
A window is accepted as a candidate feature if, at the center of the window, both eigenvalues of Z exceed a predefined threshold t: min(λ₁, λ₂) > t.

KLT: Interpreting the Eigenvalues
The eigenvectors and eigenvalues of M relate to edge direction and magnitude:
- The eigenvector associated with the larger eigenvalue points in the direction of fastest intensity change
- The other eigenvector is orthogonal to it

Feature Tracking with KLT
The good windows are the ones whose Z has a minimal eigenvalue above a threshold. Once this eigenvalue drops below the threshold, the feature track is terminated. The original KLT ends when all feature tracks have terminated; the current KLT re-spawns features all the time.


Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 33

Increased Efficiency #1
- Associate only one window with each feature
- Use a uniform weight function equal to 1/(window area |W|)
- Determine quality by comparison
- Computation of the different Z matrices is accelerated with the “integral image” [1]
[1] Viola & Jones 2004
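The integral-image trick can be sketched in a few lines: after one cumulative-sum pass, the sum of any rectangular window (e.g., one entry of a Z matrix accumulated from per-pixel gradient products) costs four lookups. Function names here are illustrative.

```python
import numpy as np

def integral_image(a):
    """Summed-area table: S[i, j] = sum of a[:i, :j]."""
    s = np.zeros((a.shape[0] + 1, a.shape[1] + 1), dtype=np.float64)
    s[1:, 1:] = a.cumsum(0).cumsum(1)
    return s

def window_sum(s, r0, c0, r1, c1):
    """Sum of a[r0:r1, c0:c1] in O(1) using the integral image s."""
    return s[r1, c1] - s[r0, c1] - s[r1, c0] + s[r0, c0]
```

Building the table is O(image area) once; every subsequent window sum is constant-time regardless of window size, which is what makes evaluating many candidate windows cheap.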

Increased Efficiency #2
- Run on sample training frames first to determine the parameters that lead to the optimal window sizes
- This reduces the search to less than 5% of the possible parameter set
- It works because all objects are from the same class

Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 36

Feature Re-Spawning
Over time, KLT loses track because of:
- Inter-object occlusion
- Self-occlusion
- Exit from the picture
- Appearance change due to perspective and articulation
KLT recreates features all the time:
- Computationally intensive
- Weak features are renewed

Feature Re-Spawning
Re-spawn features only at specific locations in space and time, and propagate them forward and backward in time:
- Find the biggest “holes”
- Re-spawn features in the frame given by the weighted average of the hole times

Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 39

Trajectory Conditioning
The KLT tracker gives a set of trajectories with poor homogeneity:
- They do not begin and end at the same times
- Occlusions can result in trajectory fragmentation
- A feature can lose its strength, resulting in less precise tracks
Solution: condition the data, both spatially and temporally.

Trajectory Conditioning
Each trajectory is influenced by its spatial neighbors: apply a box around each raw trajectory and follow all neighbor trajectories from the time the trajectory started.
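The slides leave the conditioning operator loosely specified; purely as an illustration of the neighbor idea (not the paper's actual operator), this hypothetical sketch blends each trajectory's per-frame displacement with the mean displacement of trajectories that start nearby.

```python
import numpy as np

def condition_trajectories(tracks, radius=20.0, alpha=0.5):
    """Illustrative spatial conditioning: blend each trajectory's
    per-frame displacement with the mean displacement of its spatial
    neighbors (those starting within `radius` pixels).
    `tracks` has shape (n_tracks, n_frames, 2)."""
    tracks = np.asarray(tracks, dtype=float)
    disp = np.diff(tracks, axis=1)              # per-frame displacements
    start = tracks[:, 0]                        # starting positions
    out = tracks.copy()
    for i in range(len(tracks)):
        near = np.linalg.norm(start - start[i], axis=1) < radius
        mean_disp = disp[near].mean(axis=0)     # neighbors' mean motion
        smoothed = (1 - alpha) * disp[i] + alpha * mean_disp
        out[i, 1:] = start[i] + np.cumsum(smoothed, axis=0)
    return out
```

Pulling displacements toward the local consensus reduces per-track jitter while preserving each trajectory's starting point, which is the qualitative effect the conditioning step is after.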

Algorithm Highlights Feature Tracking with KLT Increased Efficiency Feature Re-Spawning Trajectory Conditioning Trajectory Clustering 42

Trajectory Clustering
Determine the number of objects at time t by clustering trajectories. Since objects may be close at time t, focus attention on a time interval (half-width of 200 frames). Build a connectivity graph:
- At each time step, the features present form the nodes of a connectivity graph G
- Edges indicate possible membership in a common object

Trajectory Clustering
Connectivity graph:
- Bounding box: as small as possible, yet able to contain every possible instance of the object (including an articulation factor)
- If two features do not stay within such a box, they do not belong to the same object
- The 3 parameters of this box are learned from training data

Trajectory Clustering
Rigid-part merging: features that share similar movement during their whole life span belong to a rigid part of an object, and consequently to a common object. RANSAC is applied to sets of trajectories that are within the time window and connected in graph G.
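A toy version of the RANSAC step, under the simplifying assumptions that each trajectory is summarized by one displacement vector over the time window and that a rigid part shares a common translation (the paper's actual motion model may be richer); the function name is hypothetical.

```python
import numpy as np

def ransac_translation_inliers(disps, iters=100, tol=1.0, rng=None):
    """Find the largest set of displacement vectors consistent with a
    single shared translation: repeatedly pick one vector as the model
    and count how many others fall within `tol` of it."""
    if rng is None:
        rng = np.random.default_rng()
    disps = np.asarray(disps, dtype=float)
    best = np.zeros(len(disps), dtype=bool)
    for _ in range(iters):
        model = disps[rng.integers(len(disps))]   # 1-point hypothesis
        inliers = np.linalg.norm(disps - model, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The returned inlier mask marks trajectories moving together, i.e., a candidate rigid part; outlier trajectories are left for other hypotheses.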

Trajectory Clustering
Agglomerative clustering:
- At each iteration, the two closest sets are considered
- If all their features are linked to each other in the connectivity graph, they are merged
- Otherwise, the next closest sets are considered
- Proceed until all possible pairs have been analyzed
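The merging loop can be sketched as follows, assuming single-linkage distance between sets and a boolean adjacency matrix for the connectivity graph G; names and parameters are illustrative, not the authors' code.

```python
import numpy as np

def agglomerative_with_connectivity(points, adj, max_dist):
    """Repeatedly merge the two closest clusters, but only if every
    cross-pair of their members is linked in the connectivity graph
    `adj`; stop when no mergeable pair within `max_dist` remains."""
    clusters = [{i} for i in range(len(points))]

    def dist(a, b):
        # single-linkage: distance between closest members
        return min(np.linalg.norm(points[i] - points[j])
                   for i in a for j in b)

    while True:
        pairs = []
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                a, b = clusters[i], clusters[j]
                if all(adj[p][q] for p in a for q in b):  # fully linked
                    d = dist(a, b)
                    if d <= max_dist:
                        pairs.append((d, i, j))
        if not pairs:
            return clusters            # all possible pairs analyzed
        _, i, j = min(pairs)           # merge the closest mergeable pair
        clusters[i] |= clusters[j]
        del clusters[j]
```

In the crowd-counting setting, the final number of clusters is the object count at that time step.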


Experimental Results
Datasets:
- USC: elevated view of a crowd of zero to twelve persons
- LIBRARY: elevated view of a crowd of twenty to fifty persons
- CELLS: red blood cell dataset with fifty to a hundred blood cells

Experimental Results
(Plots comparing the estimated counts with the ground truth on each dataset.)

Conclusion
- A new way of segmenting the motions generated by multiple objects in a crowd
- Enhancements to the KLT tracker
- Trajectory conditioning and clustering techniques

Thank You! 53