TelosCAM: Identifying Burglar Through Networked Sensor-Camera Mates with Privacy Protection. Presented by Qixin Wang. Authors: Shaojie Tang, Xiang-Yang Li, Haitao Zhang, Jiankang Han, Guojun Dai, Cheng Wang, Xingfa Shen.


TelosCAM: Identifying Burglar Through Networked Sensor-Camera Mates with Privacy Protection. Presented by Qixin Wang. Authors: Shaojie Tang, Xiang-Yang Li, Haitao Zhang, Jiankang Han, Guojun Dai, Cheng Wang, Xingfa Shen.

Introduction Video surveillance is already widely used in airports, borders, railways, underground systems, and roadways. Wireless sensors have also attracted significant attention in recent years for event monitoring and communication.

What Is a Sensor-Camera Network? It integrates wireless module nodes (such as TelosB nodes) with legacy surveillance cameras. The wireless module node detects events and triggers the tracking process; the surveillance camera captures the video of interest.

Why a Sensor-Camera Network? The TelosCAM system can track and identify the burglar who stole the property. TelosCAM has the following advantages: camera storage efficiency, high reliability, privacy protection, and light modification of existing infrastructure.

Framework

Design Requirements: computation efficiency, high accuracy, reliability, energy efficiency, privacy awareness, storage efficiency, and a high hit ratio.

Privacy-Aware Triggering Scheme I A naïve triggering scheme may cause serious privacy issues. In our design, the tracking process is triggered only when the property moves out of the owner's transmission range.
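A minimal sketch of this out-of-range trigger (the timeout value, class, and method names here are illustrative assumptions, not the paper's implementation): the property-side module records the last beacon heard from the owner's module and raises the alarm once beacons stop arriving.

```python
import time

BEACON_TIMEOUT = 5.0  # seconds without an owner beacon before triggering (hypothetical value)

class PropertyNode:
    """Sketch of the property-side wireless module."""

    def __init__(self):
        self.last_beacon = time.monotonic()

    def on_owner_beacon(self):
        # Called whenever a beacon from the owner's module is received.
        self.last_beacon = time.monotonic()

    def should_trigger(self, now=None):
        # Trigger tracking once the property has been out of the
        # owner's transmission range longer than the timeout.
        now = time.monotonic() if now is None else now
        return (now - self.last_beacon) > BEACON_TIMEOUT
```

Because tracking starts only after the property leaves the owner's range, cameras never follow the owner during normal use, which is where the privacy benefit comes from.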

Privacy-Aware Triggering Scheme II However, the above design suffers from potential security issues if not designed carefully. We therefore design a secure message-exchange scheme to prevent potential attacks.
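One common way to secure such beacon exchanges (offered here as an assumed sketch, not the paper's actual protocol) is to authenticate each beacon with a MAC under a pre-shared key and include a monotonically increasing sequence number so that replayed or forged beacons are rejected:

```python
import hmac
import hashlib
import struct

# Pre-shared between the owner and property modules (hypothetical key).
SHARED_KEY = b"owner-property-shared-key"

def make_beacon(seq: int) -> bytes:
    # Beacon = 4-byte sequence number + truncated HMAC tag.
    # The increasing counter defeats simple replay attacks.
    payload = struct.pack(">I", seq)
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
    return payload + tag

def verify_beacon(msg: bytes, last_seq: int) -> bool:
    payload, tag = msg[:4], msg[4:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
    (seq,) = struct.unpack(">I", payload)
    # Constant-time tag comparison, and the sequence number must advance.
    return hmac.compare_digest(tag, expected) and seq > last_seq
```

An attacker without the shared key can neither forge a fresh beacon (to keep the property looking "in range" while stealing it) nor replay an old one, since stale sequence numbers are rejected.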

TRAJECTORY BASED VIDEO EXTRACTION I Video extraction aims to archive only those videos that contain the burglar with high probability. When a burglar passes a surveillance point, the surveillance wireless module(s) receive alarm messages sent by the secondary wireless module. A naive scheme lets each camera start extracting video as soon as it detects the object's appearance; however, this suffers from poor storage efficiency.

TRAJECTORY BASED VIDEO EXTRACTION II We propose a trajectory-based extraction scheme that ensures high storage efficiency without sacrificing reliability. The basic idea: 1. Reconstruct the burglar's trajectory from the sensing results; 2. Estimate the times at which the object entered and left each camera's sensing range; 3. Filter out those videos that are unlikely to contain the burglar based on this information.
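The three steps above can be sketched for a simplified 1-D road model (the linear-interpolation trajectory, constant speed between detections, and the slack parameter are all illustrative assumptions):

```python
def estimate_windows(detections, cameras, slack=5.0):
    """Estimate when the tracked object entered/left each camera's range.

    detections: time-sorted list of (position, time) pairs along a 1-D road,
                reconstructed from the sensor alarms (illustrative model).
    cameras:    dict cam_id -> (position, sensing_radius).
    Returns cam_id -> (t_enter, t_leave); video outside these windows
    can be discarded, which is where the storage savings come from.
    """
    windows = {}
    for cam_id, (pos, radius) in cameras.items():
        for (p0, t0), (p1, t1) in zip(detections, detections[1:]):
            if p0 <= pos <= p1 and p1 > p0:
                # Assume constant speed between consecutive detections.
                speed = (p1 - p0) / (t1 - t0)
                t_pass = t0 + (pos - p0) / speed
                dt = radius / speed  # time to cross the sensing radius
                windows[cam_id] = (t_pass - dt - slack, t_pass + dt + slack)
    return windows
```

For example, an object detected at position 0 at t=0 and position 100 at t=50 moves at speed 2, so a camera at position 50 with radius 10 only needs to keep roughly the [15, 35] time window rather than its whole recording.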

Burglar Identification Through Video Processing I The goal is to identify the burglar from the set of retrieved videos. Among all objects that ever appear in the extracted videos from all relevant cameras, the burglar tends to have the most occurrences.

Burglar Identification Through Video Processing II Step I. Suspicious Object Selection from a Single Camera: Object Classification

Burglar Identification Through Video Processing II Step I. Suspicious Object Selection from a Single Camera We select the top-k objects with the longest appearance durations as the k most suspicious objects in each video.
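The top-k selection reduces to summing each object's on-screen time and sorting (the interval representation and the default k below are illustrative assumptions):

```python
def top_k_suspects(appearances, k=3):
    """appearances: dict object_id -> list of (t_start, t_end) on-screen
    intervals within one video.  Returns the ids of the k objects with the
    longest total appearance duration, most suspicious first."""
    total = {obj: sum(t1 - t0 for t0, t1 in ivs)
             for obj, ivs in appearances.items()}
    return sorted(total, key=total.get, reverse=True)[:k]
```

Keeping only k candidates per video bounds the work done in the cross-camera matching step that follows.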

Burglar Identification Through Video Processing II Step II. Collaborative Burglar Identification Through Networked Cameras Inter-Camera Calibration: construct a pairwise camera color mapping that maps the color histogram from one camera to the other. We formulate the mapping problem as a maximum weighted matching problem. A histogram h is a vector {h[0], …, h[N]} in which each bin h[i] contains the percentage of pixels whose color falls in the range color_i for this object. We build a weighted bipartite graph between two histograms, with edge weights derived from the bin-wise histogram distances, and then find a maximum weighted matching of this bipartite graph.
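As a sketch of the bin-matching step (the similarity weight 1/(1+|h1[i]-h2[j]|) is an assumption; the paper only says edge weights come from bin-wise histogram distances), an exhaustive search over permutations finds the exact maximum-weight matching for the small histogram sizes involved:

```python
from itertools import permutations

def best_bin_matching(h1, h2):
    """Brute-force maximum-weight bipartite matching between the bins of
    two color histograms.  Feasible because N (number of color bins) is
    small; a Hungarian-algorithm solver would scale to larger N."""
    n = len(h1)
    best, best_perm = -1.0, None
    for perm in permutations(range(n)):
        # Higher weight for bins holding similar pixel percentages
        # (assumed similarity function, not from the paper).
        w = sum(1.0 / (1.0 + abs(h1[i] - h2[j])) for i, j in enumerate(perm))
        if w > best:
            best, best_perm = w, perm
    return best_perm  # best_perm[i] = bin of h2 matched to bin i of h1
```

The resulting matching acts as the color mapping between the two cameras, compensating for their differing color responses before objects are compared.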

Burglar Identification Through Video Processing II Step II. Collaborative Burglar Identification Through Networked Cameras Burglar Identification: among the suspicious objects, identify the one with the most occurrences across all videos. We formulate this as a clique problem. First, we evaluate the likelihood of possible object matches between videos from two cameras based on color, height, and speed similarity. We then build a similarity graph G = (O, E), where O is the set of objects from all surveillance cameras; for any two objects from different cameras, we add an edge between them if their similarity exceeds a pre-defined threshold. Finally, we find a maximum clique in this graph.
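The clique step can be sketched as follows (an exhaustive search standing in for whatever solver the paper uses; the data representation is an assumption). A maximum clique is a largest set of objects, one per camera, that are all pairwise similar, i.e. most plausibly the same person seen everywhere:

```python
from itertools import combinations

def max_clique(nodes, edges):
    """Exhaustive maximum-clique search over the similarity graph.
    Viable here because only the top-k suspicious objects per camera
    survive Step I.  `edges` is a set of frozenset pairs whose
    similarity exceeded the threshold."""
    for size in range(len(nodes), 0, -1):
        for cand in combinations(nodes, size):
            if all(frozenset(p) in edges for p in combinations(cand, 2)):
                return cand  # largest mutually similar set = likely burglar
    return ()
```

Maximum clique is NP-hard in general, which is why pruning to k suspects per camera in Step I matters: it keeps the graph small enough for exact search.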

Evaluation Results TelosB nodes [24] serve as the wireless module nodes, and a Canon PowerShot A3300 IS (16 megapixels) serves as the camera. The cameras sample the surveillance regions at a frame rate of 15 Hz, and the resolution of the captured video sequences is 360×240 pixels. The video processing algorithms were implemented in VC++.NET 2005 with OpenCV (the open-source computer vision library originally supported by Intel).


Thank you!