A Vision-Based System that Detects the Act of Smoking a Cigarette
Xiaoran Zheng, University of Nevada-Reno, Dept. of Computer Science
Dr. Mubarak Shah, Dr. Niels Lobo, University of Central Florida, Computer Vision Lab
Dr. George Bebis, Dr. Dwight Egbert, University of Nevada-Reno, Dept. of Computer Science

Abstract:
This project develops an automatic system that processes a sequence of images from a video camera to detect whether a person is smoking a cigarette. So far, we have found methods to track mouth and arm movements through a number of image frames in which the smoker's upper body occupies 1/4 of the image. Further work on interpreting the information extracted from tracking, and on detecting smoke in an image, will ultimately produce such a system.

Problem Statement:
Cigarette smoking poses dangers to our health, environment, and safety, so where smoking is prohibited, enforcement of the rule is important. Physical smoke detectors are a poor fit: a smoke detector is not suited to the small amount of smoke produced by a cigarette, especially in large areas, and letting a human monitor a designated area through a surveillance camera is inefficient. If a computer is programmed to recognize this activity, an automatic system can alert humans when the computer "sees" someone smoking through a video camera.

Current Approaches:
No existing system automatically recognizes the activity of cigarette smoking.

Methodology:
1. Segmentation of human skin: Skin regions are segmented using a color predicate. The skin regions are manually extracted in the first frame and used as a mask in the training process; successive frames are then tested against the color space generated through training. (Figure: training image, mask, skin regions.)
2. Locating the head region: Connected component labeling separates the skin regions, and the head is identified by examining the circularity of each region.
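Steps 1 and 2 can be sketched in code. This is a minimal illustration, not the project's actual implementation: the function names, the bin count, and the choice to model the color predicate as a binary lookup table over normalized (r, g) chromaticity are all assumptions, and circularity is taken as 4*pi*area/perimeter^2 (1.0 for an ideal circle).

```python
import numpy as np

def train_color_predicate(frame, skin_mask, bins=32):
    """Step 1 (training): build a lookup table over normalized (r, g)
    chromaticity from the pixels the manual mask labels as skin."""
    rgb = frame[skin_mask].astype(float)
    s = rgb.sum(axis=1) + 1e-6
    r, g = rgb[:, 0] / s, rgb[:, 1] / s
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist > 0

def segment_skin(frame, predicate):
    """Step 1 (testing): mark every pixel whose chromaticity falls in a
    bin that was seen during training."""
    bins = predicate.shape[0]
    rgb = frame.astype(float)
    s = rgb.sum(axis=2) + 1e-6
    ri = np.minimum((rgb[..., 0] / s * bins).astype(int), bins - 1)
    gi = np.minimum((rgb[..., 1] / s * bins).astype(int), bins - 1)
    return predicate[ri, gi]

def label_components(mask):
    """Step 2: 4-connected component labeling by flood fill."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    n = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                n += 1
                stack = [(i, j)]
                labels[i, j] = n
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = n
                            stack.append((ny, nx))
    return labels, n

def head_region(mask):
    """Step 2: the head is the component with the highest circularity,
    4*pi*area / perimeter^2, with the perimeter approximated by the
    count of boundary pixels."""
    labels, n = label_components(mask)
    best, best_c = 0, -1.0
    for k in range(1, n + 1):
        region = labels == k
        p = np.pad(region, 1)
        interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                    & p[1:-1, :-2] & p[1:-1, 2:])
        perimeter = int((region & ~interior).sum())
        if perimeter == 0:
            continue
        c = 4 * np.pi * region.sum() / perimeter ** 2
        if c > best_c:
            best, best_c = k, c
    return labels == best
```

On real frames, the predicate would be trained once on the manually masked first frame and then applied to each successive frame; a compact, roundish component then stands out against elongated arm regions.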
(Figure: skin regions, boundary points, head region.)

3. Tracking the head region during occlusion: The number of skin regions indicates whether occlusion is present. Tracking is done by correlation: because the head has already been found in the previous frame, where no occlusion occurs, a search window can be established to find the head in the current, occluded frame. The method is further refined by using the head's previously defined boundary points to correct the small variations that result from correlation.
4. Tracking of facial features: Facial features are extracted with a Sobel edge detector, and the size and shape of the edges are used to locate the eyes. A new color space, computed from the RGB space, is used to identify and track the lips. (Figure: edges, new color space.)
5. Tracking of the arms: A best-fit line passing through the center of mass of an arm region provides its orientation as well as the end points of the region. By keeping a history of previous centroids and end points, we can track the two arms. If one arm occludes the other, the program returns to the previous frame in which all skin regions were separated and takes the separated arm as a template; the template is rotated and matched against a mask containing all skin regions in the current frame. If an arm is merged with the head, the program uses the rotating-template method for the first three frames and then switches to correlation. The size of each arm region is also tracked and is used to detect body rotation.

Conclusion:
The goal of this research project is to experiment with different ideas and methods that will enable a computer to detect whether a person is smoking a cigarette. So far, we have found methods to track the mouth and arms through a sequence of images with the smoker's upper body occupying 1/4 of the image.
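The best-fit line of step 5 can be sketched as a principal-axis fit over the pixels of an arm region. The function name `arm_axis` and the eigenvector formulation below are illustrative assumptions, not necessarily how the project computes the line.

```python
import numpy as np

def arm_axis(region):
    """Fit the best-fit line through an arm region's center of mass:
    returns the centroid, a unit direction along the region's major
    axis, and the two end points of the region along that axis."""
    ys, xs = np.nonzero(region)
    pts = np.stack([xs, ys], axis=1).astype(float)  # (x, y) per pixel
    centroid = pts.mean(axis=0)
    # Principal axis = leading eigenvector of the coordinate covariance.
    cov = np.cov((pts - centroid).T)
    vals, vecs = np.linalg.eigh(cov)
    direction = vecs[:, np.argmax(vals)]
    t = (pts - centroid) @ direction        # projections onto the axis
    ends = (centroid + t.min() * direction,
            centroid + t.max() * direction)
    return centroid, direction, ends
```

The returned end points, together with a history of centroids, are the per-arm quantities the tracker would carry from frame to frame.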
Future Work:
The continuation of this project includes analyzing the length of time a hand is positioned at the mouth, interpreting the gesture of an arm moving up and down, and detecting smoke around the head region.

References:
Further information can be found at

Results:
(Figures: tracking of facial features; tracking of arms.)

Acknowledgements:
The National Science Foundation
University of Central Florida, Computer Vision Lab
University of Nevada-Reno, Computer Science Department
University of Nevada-Reno, Office of Research