
1 Data Mining for Surveillance Applications: Suspicious Event Detection
Dr. Bhavani Thuraisingham, April 2006

2 Outline
- Acknowledgements
- Data Mining for Security Applications
- Surveillance and Suspicious Event Detection
- Directions for Surveillance
- Other Applications

3 Acknowledgements
- Prof. Latifur Khan
- Gal Lavee
- Ryan Layfield
- Sai Chaitanya

4 Data Mining for Security Applications
- Data mining has many applications in cyber security and national security:
  - Intrusion detection, worm detection, firewall policy management
  - Counter-terrorism applications and surveillance
  - Fraud detection, insider threat analysis
- Need to enforce security while at the same time ensuring privacy

5 Data Mining for Surveillance: Problems Addressed
- Huge amounts of surveillance and video data are available in the security domain
- Analysis is usually done off-line using "human eyes"
- Need for tools to aid the human analyst (e.g., pointing out areas in a video where unusual activity occurs)

6 Example
- Using our proposed system: greatly increase video analysis efficiency
- Flow: user-defined event of interest + video data → annotated video with events of interest highlighted

7 The Semantic Gap
- The disconnect between the low-level features a machine sees when a video is input and the high-level semantic concepts (or events) a human being sees when looking at the same video clip
- Low-level features: color, texture, shape
- High-level semantic concepts: presentation, newscast, boxing match

8 Our Approach
- Event representation: estimate the distribution of pixel intensity change
- Event comparison: contrast the event representations of different video sequences to determine whether they contain similar semantic event content
- Event detection: use manually labeled training video sequences to classify unlabeled video sequences

9 Event Representation
- Measures the quantity and type of changes occurring within a scene
- A video event is represented as a set of x, y, and t intensity gradient histograms over several temporal scales
- Histograms are normalized and smoothed
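The representation above can be sketched as follows. This is a minimal illustration, not the original implementation: the bin count, gradient range, temporal scales, and smoothing kernel are all assumed values.

```python
import numpy as np

def event_representation(frames, n_bins=32, temporal_scales=(1, 2, 4)):
    """Sketch of the slide's representation: histograms of intensity
    gradients along x, y, and t at several temporal scales, normalized
    and smoothed. frames: array of shape (T, H, W), grayscale pixels.
    All parameter values here are illustrative assumptions."""
    frames = np.asarray(frames, dtype=float)
    histograms = []
    for scale in temporal_scales:
        clip = frames[::scale]              # subsample in time for this scale
        if clip.shape[0] < 2:
            continue
        gt = np.diff(clip, axis=0)          # temporal (t) gradient
        gy = np.diff(clip, axis=1)          # vertical (y) gradient
        gx = np.diff(clip, axis=2)          # horizontal (x) gradient
        for g in (gx, gy, gt):
            hist, _ = np.histogram(g, bins=n_bins, range=(-255, 255))
            hist = hist.astype(float)
            hist /= hist.sum() or 1.0       # normalize to sum to 1
            # smooth with a small moving-average kernel
            hist = np.convolve(hist, np.ones(3) / 3, mode="same")
            histograms.append(hist)
    return np.concatenate(histograms)
```

The concatenated vector (here 3 scales × 3 gradient directions × 32 bins) is what the later comparison step operates on.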

10 Event Comparison
- Determines whether two video sequences contain similar high-level semantic concepts (events)
- Produces a number that indicates how close the two compared events are to one another
- The lower the number, the closer the two events
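The slides specify only that comparison produces a number that is lower for closer events. A chi-square distance between the normalized histogram vectors is one common choice for such histogram-based representations; it is used here purely as an assumption, not as the original system's metric.

```python
import numpy as np

def event_distance(hist_a, hist_b):
    """Chi-square distance between two event histogram vectors.
    Lower values mean more similar events (0 = identical)."""
    a = np.asarray(hist_a, dtype=float)
    b = np.asarray(hist_b, dtype=float)
    denom = a + b
    denom[denom == 0] = 1.0          # avoid division by zero in empty bins
    return 0.5 * np.sum((a - b) ** 2 / denom)
```

The distance is symmetric and zero only for identical representations, matching the "lower is closer" property the slides describe.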

11 Event Detection
- A robust event detection system should be able to:
  - Recognize an event with reduced sensitivity to actor variation (e.g., clothing or skin tone) or background lighting variation
  - Segment an unlabeled video containing multiple events into event-specific segments

12 Labeled Video Events
- These events are manually labeled and used to classify unknown events
- Examples: Walking1, Running1, Waving2

13 Labeled Video Events: pairwise distances

           walking1  walking2  walking3  running1  running2  running3  running4  waving2
walking1   0         0.27625   0.24508   1.2262    1.383     0.97472   1.3791    10.961
walking2   0.27625   0         0.17888   1.4757    1.5003    1.2908    1.541     10.581
walking3   0.24508   0.17888   0         1.1298    1.0933    0.88604   1.1221    10.231
running1   1.2262    1.4757    1.1298    0         0.43829   0.30451   0.39823   14.469
running2   1.383     1.5003    1.0933    0.43829   0         0.23804   0.10761   15.05
running3   0.97472   1.2908    0.88604   0.30451   0.23804   0         0.20489   14.2
running4   1.3791    1.541     1.1221    0.39823   0.10761   0.20489   0         15.607
waving2    10.961    10.581    10.231    14.469    15.05     14.2      15.607    0

14 Experiment #1
- Problem: recognize and classify events irrespective of direction (right-to-left, left-to-right) and with reduced sensitivity to spatial variations (clothing)
- "Disguised events": events similar to the training data except that the subject is dressed differently
- Compare classification to "truth" (manual labeling)

15 Experiment #1 — Classification: Walking

                     walking1  walking2  walking3  running1  running2  running3  running4  waving2
Disguised Walking 1  0.97653   0.45154   0.59608   1.5476    1.4633    1.5724    1.5406    12.225
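The classification rule implied by these tables is nearest-neighbor: assign the label of the labeled event with the smallest distance. The sketch below uses the Disguised Walking 1 distances from this slide; the `classify` helper and its suffix-stripping are illustrative, not from the original system.

```python
def classify(distances):
    """Nearest-neighbor classification: distances is a dict mapping
    labeled-event name -> distance; return the class of the closest one."""
    best = min(distances, key=distances.get)
    return best.rstrip("1234")   # strip the instance index to get the class

# Distances for Disguised Walking 1 (values from the slide above)
row = {"walking1": 0.97653, "walking2": 0.45154, "walking3": 0.59608,
       "running1": 1.5476, "running2": 1.4633, "running3": 1.5724,
       "running4": 1.5406, "waving2": 12.225}
print(classify(row))   # walking2 is closest -> "walking"
```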

16 Experiment #1 — Classification: Running

                     walking1  walking2  walking3  running1  running2  running3  running4  waving2
Disguised Running 1  1.411     1.3841    1.0637    0.56724   0.97417   0.93587   1.0957    11.629

17 Classifying Disguised Events — Classification: Running

                     walking1  walking2  walking3  running1  running2  running3  running4  waving2
Disguised Running 3  1.3049    1.0021    0.88092   0.8114    1.1042    1.1189    1.0902    12.801

18 Classifying Disguised Events — Classification: Waving

                     walking1  walking2  walking3  running1  running2  running3  running4  waving2
Disguised Waving 1   13.646    13.113    13.452    18.615    19.592    18.621    20.239    2.2451

19 Classifying Disguised Events: pairwise distances between disguised events

           walking1  walking2  running1  running2  running3  waving1  waving2
walking1   0         0.19339   1.2159    0.85938   0.67577   14.471   13.429
walking2   0.19339   0         1.4317    1.1824    0.95582   12.295   11.29
running1   1.2159    1.4317    0         0.37592   0.45187   15.266   15.007
running2   0.85938   1.1824    0.37592   0         0.13346   16.76    16.247
running3   0.67577   0.95582   0.45187   0.13346   0         16.252   15.621
waving1    14.471    12.295    15.266    16.76     16.252    0        0.45816
waving2    13.429    11.29     15.007    16.247    15.621    0.45816  0

(All row and column events are disguised versions.)

20 Experiment #1 Results
- This method yielded 100% precision (i.e., all disguised events were classified correctly)
- Not necessarily representative of the general event detection problem
- Future evaluation with more event types, more varied data, and a larger set of training and testing data is needed

21 Experiment #2
- Problem: given an unlabeled video sequence, describe the high-level events within the video
- Capture events using a sliding window of fixed width (25 frames in this example)
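The sliding-window step can be sketched as below. It is self-contained: the representation and distance functions are passed in as callables standing in for the components the earlier slides describe, and the window width of 25 frames is the value from this slide.

```python
import numpy as np

def sliding_windows(frames, width=25, step=1):
    """Yield (start_frame, window) pairs over a (T, H, W) video array."""
    frames = np.asarray(frames)
    for start in range(0, frames.shape[0] - width + 1, step):
        yield start, frames[start:start + width]

def detect_events(frames, labeled, represent, distance, width=25):
    """For each window, return the closest labeled event and its distance.
    labeled: dict name -> stored representation; represent and distance
    are callables standing in for the system's actual components."""
    results = []
    for start, window in sliding_windows(frames, width):
        rep = represent(window)
        name = min(labeled, key=lambda k: distance(rep, labeled[k]))
        results.append((start, name, distance(rep, labeled[name])))
    return results
```

Plotting the per-window distance for each labeled event over the start frame gives similarity graphs like those on the following slides; taking the minimum across events at each position yields the minimum similarity graph used to segment the video.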

22 Experiment #2: Running Similarity Graph

23 Experiment #2: Walking Similarity Graph

24 Recognizing Events in an Unknown Video Segment: Waving Similarity Graph

25 Experiment #2: Minimum Similarity Graph (detected sequence: walking, running, waving, running)

26 XML Video Annotation
- Using the event detection scheme, we generate a video description document detailing the event composition of a specific video sequence
- This XML annotation may be replaced by a more robust computer-understandable format (e.g., the VEML video event ontology language)

Example annotation:
<videoclip>
  H:\Research\MainEvent\Movies\test_runningandwaving.AVI
  600
  unknown
  1
  106
  walking
  107
  6
</videoclip>
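A sketch of generating such an annotation document with the standard library. The element names inside <videoclip> did not survive in the example above, so the tags used here (file, length, event, label, start, end) are hypothetical, not the original schema.

```python
import xml.etree.ElementTree as ET

def annotate(filepath, total_frames, events):
    """Build a <videoclip> annotation document.
    events: list of (label, start_frame, end_frame) tuples.
    All inner tag names are illustrative assumptions."""
    root = ET.Element("videoclip")
    ET.SubElement(root, "file").text = filepath
    ET.SubElement(root, "length").text = str(total_frames)
    for label, start, end in events:
        ev = ET.SubElement(root, "event")
        ET.SubElement(ev, "label").text = label
        ET.SubElement(ev, "start").text = str(start)
        ET.SubElement(ev, "end").text = str(end)
    return ET.tostring(root, encoding="unicode")
```

The resulting document is what the video analysis tool on the next slide would consume.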

27 Video Analysis Tool
- Takes the annotation document as input and organizes the corresponding video segments accordingly
- Functions as an aid to a surveillance analyst searching for "suspicious" events within a stream of video data
- Activity of interest may be defined dynamically by the analyst while the utility is running and flagged for analysis

28 Directions
- Enhancements to the work:
  - Working toward bridging the semantic gap and enabling more efficient video analysis
  - More rigorous experimental testing of concepts
  - Refine event classification through use of multiple machine learning algorithms (e.g., neural networks, decision trees); experimentally determine the optimal algorithm
  - Develop a model allowing definition of simultaneous events within the same video sequence
- Security and privacy:
  - Define an access control model that restricts access to surveillance video data based on the semantic content of video objects
  - Biometrics applications
  - Privacy-preserving surveillance

29 Access Control and Biometrics
- Access control:
  - RBAC- and UCON-based models for surveillance data
  - Initial work to appear at the ACM SACMAT Conference, 2006
- Biometrics:
  - Restrict access based on the semantic content of video rather than low-level features
  - Behavioral-type access instead of "fingerprint" access
  - Used in combination with other biometric methods

30 Privacy-Preserving Surveillance: Introduction
- A recent survey of Times Square found 500 visible surveillance cameras in the area, and a total of 2,500 in New York City
- This means we have scores of surveillance video to be inspected manually by security personnel
- We need to carry out surveillance but at the same time ensure the privacy of law-abiding individuals

31 System Use
1. Raw video surveillance data
2. Face detection and face-derecognizing system: faces of trusted people are derecognized to preserve privacy
3. Suspicious event detection system: suspicious events and suspicious people are found
4. Manual inspection of video data: report of security personnel
5. Comprehensive security report listing the suspicious events and people detected

32 System Architecture
1. Input video
2. Break down the input video into a sequence of images
3. Perform segmentation
4. Find the location of the face in the image
5. Compare the face to the trusted and untrusted databases
6. Trusted face found: derecognize the face in the image
7. Potential intruder found: raise an alarm that a potential intruder was detected
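The per-frame decision logic of this pipeline can be sketched as below. Every component (face detector, trusted-face check, derecognizer, alarm) is a hypothetical stand-in passed in as a callable; the original system's interfaces are not specified in the slides.

```python
def process_frame(frame, find_faces, is_trusted, derecognize, alarm):
    """Sketch of the per-frame pipeline: detect faces, derecognize
    trusted ones to preserve privacy, and alarm on potential intruders.
    All callables are illustrative stand-ins for the real components."""
    for face in find_faces(frame):
        if is_trusted(face):
            frame = derecognize(frame, face)   # blur/mask a trusted face
        else:
            alarm(face)                        # potential intruder detected
    return frame
```

Keeping the components as injected callables mirrors the architecture's separation between segmentation, database comparison, derecognition, and alarming.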

33 Other Applications of Data Mining in Security
- Intrusion detection
- Firewall policy management
- Worm detection
- Insider threat analysis, both network/host and physical
- Fraud detection
- Protecting children from inappropriate content on the Internet
- Digital identity management
- Detecting identity theft
- Biometrics identification and verification
- Digital forensics
- Source code analysis
- National security / counter-terrorism

34 Our Vision: Assured Information Sharing
- Each agency (A, B, C) publishes its component data/policy to a shared coalition data/policy
- Partner types:
  1. Friendly partners
  2. Semi-honest partners
  3. Untrustworthy partners
