
1 Detection, Tracking and Recognition in Video Sequences Supervised By: Dr. Ofer Hadar Mr. Uri Perets Project By: Sonia KanOra Gendler Ben-Gurion University of the Negev Department of Communication Systems Engineering

2 Outline Motivation Our System Phase 1: Detection Phase 2: Tracking Phase 3: Recognition Result Video

3 Motivation National security is a matter of high priority. The main security method today is cameras. –Disadvantage – requires constant manpower to watch the feeds. A better, automatic method is required.

4 Outline Motivation Our System Phase 1: Detection Phase 2: Tracking Phase 3: Recognition Result Video

5 Our System An automatic security system. –Stationary camera. Goal – detect, track and recognize a moving human object in a video sequence. –Each phase is implemented as a separate algorithm. Simulation in MATLAB.

6 Hall Monitor

7 A General View on the System [State machine: Detect → Track → Recognize. After each tracking step – tracking successful? Yes – continue tracking; No – return to detection.]

8 Outline Motivation Our System Phase 1: Detection Phase 2: Tracking Phase 3: Recognition Result Video

9 Object Detection Use the Canny edge detector to find all edge maps. Use four images: –Background edge map E b. –Difference edge map DE n. –Current edge map E n. –Previous algorithm output ME n-1. [Block diagram: the current and previous gray-level images are differenced; the edge map of the difference image, together with the delayed feedback ME n-1, is used to find the moving object edges ME n for frame n.]
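The first stage of the flow above can be sketched in a few lines. The project's simulation is in MATLAB; what follows is only a minimal Python/NumPy illustration, with a simple gradient-magnitude threshold standing in for the Canny detector, and the function names and threshold value are assumptions, not the project's code:

```python
import numpy as np

def edge_map(img, thresh=0.5):
    """Binary edge map via gradient magnitude (a crude stand-in for Canny)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh

def difference_edge_map(curr, prev, thresh=0.5):
    """DE n: edge map of the absolute frame difference |I_n - I_{n-1}|."""
    return edge_map(np.abs(curr.astype(float) - prev.astype(float)), thresh)
```

The same `edge_map` helper would also produce E b (from the background frame) and E n (from the current frame) in this sketch.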

10 Object Detection (Cont.) Extracted background edge map Background frame

11 Object Detection (Cont.) Use four images: –Background edge map E b. –Difference edge map DE n. –Current edge map E n. –Previous algorithm output ME n-1. The difference image contains the moving parts of the frame. [Block diagram as on slide 9.]

12 Object Detection (Cont.) Current frame Edges extracted from the difference image Edges extracted from the original frame

13 Object Detection (Cont.) Use four images: –Background edge map E b. –Difference edge map DE n. –Current edge map E n. –Previous algorithm output ME n-1. Find pixels that belong to the moving parts of the object. Find pixels that belong to the still parts of the object. Combine the two components.
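The combination step described above can be sketched as follows. This is a hedged Python/NumPy illustration, not the project's MATLAB code: "belonging to" the moving or still parts is approximated here by binary dilation (a neighborhood test), and the helper names, the dilation radius, and the 4-neighbour structuring element are all assumptions:

```python
import numpy as np

def dilate(mask, it=1):
    """4-neighbour binary dilation implemented with array shifts."""
    out = mask.copy()
    for _ in range(it):
        m = out
        out = m.copy()
        out[1:, :] |= m[:-1, :]
        out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]
        out[:, :-1] |= m[:, 1:]
    return out

def moving_object_edges(E_n, DE_n, ME_prev, E_b, radius=2):
    """ME n = (edges of E n near the difference edges DE n)        # moving parts
             | (edges of E n near ME n-1, excluding background E b) # still parts"""
    moving = E_n & dilate(DE_n, radius)
    still = E_n & dilate(ME_prev, radius) & ~E_b
    return moving | still
```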

14 Detection Result Final edge map

15 Outline Motivation Our System Phase 1: Detection Phase 2: Tracking Phase 3: Recognition Result Video

16 Object Tracking Input: –Current edge map –Previous edge map Goal – estimate the object’s location in the next frame. The estimation is performed for several frames, until tracking fails. In that case, the object is re-detected.

17 Object Tracking (Cont.) Divide the previous edge map into square blocks. Use thresholding to determine which blocks contain parts of objects. [Figures: an empty block and a block containing part of the object.]
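The block-partition-and-threshold step can be sketched as below. A minimal Python/NumPy illustration only; the block size, the edge-count threshold, and the function name are assumptions (the slide does not specify them):

```python
import numpy as np

def occupied_blocks(edge_map, block=8, min_edges=4):
    """Split the edge map into block x block squares; a block 'contains an
    object' when it holds at least min_edges edge pixels (thresholding)."""
    h, w = edge_map.shape
    occ = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if edge_map[y:y + block, x:x + block].sum() >= min_edges:
                occ.append((y, x))  # top-left corner of an occupied block
    return occ
```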

18 Object Tracking (Cont.) Locate the blocks containing objects from the previous edge map in the current edge map: –To match the blocks, we use correlation. [Figures: previous and current edge maps; the matched block is marked.]
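Correlation-based block matching can be sketched directly in the spatial domain (the deck's frequency-domain form is given on slide 33). A hedged Python/NumPy illustration; the exhaustive search, the raw (unnormalized) correlation score, and the function name are assumptions:

```python
import numpy as np

def match_block(block, search, step=1):
    """Slide `block` over `search` and return the offset (row, col)
    where the correlation score is highest."""
    bh, bw = block.shape
    b = block.astype(float)
    best, best_pos = -np.inf, (0, 0)
    for y in range(0, search.shape[0] - bh + 1, step):
        for x in range(0, search.shape[1] - bw + 1, step):
            score = (b * search[y:y + bh, x:x + bw]).sum()
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```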

19 Correlation Statistics

20 Matching Results [Figures: previous edge map; previous edge map with blocks; current edge map with matched blocks.]

21 Object Tracking (Cont.) For each matched block, calculate the number of pixels it moved along each axis. Calculate the average E and standard deviation, and divide the blocks into five speed ranges.
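The mean/standard-deviation classification into five ranges can be sketched as follows. The slide does not give the range boundaries, so the cut points at ±0.5 and ±1.5 standard deviations around the mean E are purely an illustrative assumption in this Python/NumPy sketch:

```python
import numpy as np

def speed_ranges(displacements):
    """Compute the mean E and standard deviation of the per-block
    displacements, then classify each block into one of five ranges.
    Range boundaries (±0.5 sigma, ±1.5 sigma) are assumed, not from the deck."""
    d = np.asarray(displacements, dtype=float)
    mean, std = d.mean(), d.std()
    if std == 0:
        return np.full(d.shape, 2, dtype=int)  # all blocks in the middle range
    z = (d - mean) / std
    edges = [-1.5, -0.5, 0.5, 1.5]
    return np.digitize(z, edges)  # range index 0..4 per block
```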

22 Object Tracking (Cont.) Estimate the location of each block in the next edge map according to the corresponding speed. Verify true location of each block using correlation.

23 Tracking Statistics [Plot: per-block correlation values with the threshold marked.]

24 Tracking Results [Figures: the current, next and following frames; double estimation.]

25 Outline Motivation Our System Phase 1: Detection Phase 2: Tracking Phase 3: Recognition Result Video

26 Scenario Analysis [Diagram: Environment – static / dynamic; Actors – size, motion, interaction; Equipment – static / dynamic.]

27 Recognition Monitor the number of objects. Recognize the behavior of a human object: –Size – based on detected edges. Size indicates motion inwards or outwards. –Motion – based on tracked blocks. Recognize sudden changes in speed as suspicious. –Interaction with environment – based on a skeleton. Recognize suspicious postures.
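The motion criterion above ("recognize sudden changes in speed as suspicious") can be sketched with a simple rule. The deck does not define "sudden", so the running average, the smoothing factor, and the ×3 jump threshold in this Python sketch are all illustrative assumptions:

```python
def sudden_speed_change(speeds, factor=3.0):
    """Flag each frame whose block speed exceeds `factor` times the
    running average of previous speeds (hypothetical threshold rule)."""
    flags = []
    avg = None
    for s in speeds:
        flags.append(avg is not None and avg > 0 and s > factor * avg)
        # exponential running average of the speed (smoothing factor assumed)
        avg = s if avg is None else 0.8 * avg + 0.2 * s
    return flags
```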

28 Monitoring the Number of Objects

29 Skeleton Construction Locate the center of mass. Find the end points of the limbs. Mark the spine and limb locations. Calculate the angles between the spine and the limbs to recognize the posture.
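The final step, the angle between the spine and a limb, is ordinary vector geometry. A small Python sketch, assuming both the spine and the limb are represented as (dx, dy) vectors from the center of mass (the representation is an assumption; the slide does not specify it):

```python
import math

def angle_deg(spine, limb):
    """Angle in degrees between the spine vector and a limb vector,
    via the dot-product formula cos(theta) = (a . b) / (|a| |b|)."""
    dot = spine[0] * limb[0] + spine[1] * limb[1]
    na = math.hypot(*spine)
    nb = math.hypot(*limb)
    return math.degrees(math.acos(dot / (na * nb)))
```

A raised arm perpendicular to the spine would give roughly 90 degrees, which is the kind of cue a posture rule could test.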

30 Outline Motivation Our System Phase 1: Detection Phase 2: Tracking Phase 3: Recognition Result Video

31 Separated Objects Demo

32 Thank You!

33 Freq. Domain Correlation If there is a match between an image f and an object h, the correlation will be maximal at the location of h in f. Correlation is given by: f(x,y) ∘ h(x,y) ⇔ F*(u,v)H(u,v), where F*(u,v) denotes the complex conjugate of the Fourier transform of f.
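The frequency-domain form on this slide can be sketched directly with NumPy's FFT (a Python illustration; the project's simulation is in MATLAB):

```python
import numpy as np

def fft_correlate(f, h):
    """Circular cross-correlation via the frequency domain:
    f o h = IFFT( conj(FFT(f)) * FFT(h) ); the peak marks the match offset."""
    F = np.fft.fft2(f)
    H = np.fft.fft2(h)
    return np.real(np.fft.ifft2(np.conj(F) * H))
```

The argmax of the result gives the (circular) shift between the object and its location in the image, which is exactly the "correlation is maximal at the location of h in f" property the slide states.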

34 Mathematical Background Difference edge map: DE n is the Canny edge map of the difference image |I n − I n-1|. Pixels that belong to the moving parts of the object: edge pixels of E n that lie close to DE n. Pixels that belong to the still parts of the object: edge pixels of E n that lie close to ME n-1 and do not belong to the background edge map E b. Result: ME n is the union of the moving and still edge pixels.

