Sana Naghipour, Saba Naghipour. Mentor: Phani Chavali. Advisers: Ed Richter, Prof. Arye Nehorai.



The Eye Tracker project is a research initiative to enable people suffering from Amyotrophic Lateral Sclerosis (ALS) to use prosthetic limbs with their eyes, by tracking the movement of the pupil. The idea is to mount an infrared camera onto a pair of sunglasses, capture the movement of the pupil, and move the limbs using a control signal generated from the pupil movements. The project will be implemented in two main phases. This semester, we focused on developing software tools for tracking the motion of the eye. Next semester, we will build the hardware necessary to control the prosthetic limbs.

Goal: To track the location of the pupil in a live video stream using image processing techniques.
Approach:
- First phase: development of the software for pupil tracking.
- Second phase: building the hardware necessary to capture images of the eye and transfer them to a processing unit.

Help ALS patients with various tasks such as communication, writing e-mails, drawing, and making music. Eye tracking also has other applications, including:
- Cognitive studies
- Laser refractive surgery
- Human factors and computer usability
- Translation process research
- Training simulators
- Fatigue detection
- Virtual reality
- Infant, geriatric, and primate research
- Sports training
- Commercial eye tracking

We implement our project in the following steps.

Image Acquisition: We use LabVIEW to capture video with an infrared camera. LabVIEW supports recording at several frame rates and in several formats. After obtaining the video, we process it sequentially, frame by frame.

Discarding Color Information: We convert each frame to its corresponding grayscale image by averaging the pixel values of the three color channels.

Low-Pass Filtering: We apply a low-pass filter to remove sharp edges in each image. This also helps remove undesired background light.

Scaling: We scale down the filtered images to obtain lower-resolution images. This serves two purposes. First, since the dimensions of the image decrease, scaling improves the processing time. Second, the averaging effect further removes undesired background light.

Template Matching: We use a template-matching algorithm to segment the darkest region of the image. After discarding the color information and low-pass filtering, the pupil corresponds to the darkest spot in the eye, so we use a small patch of dark pixels as the template. The matching is done by exhaustive search over the entire image. Once a match is found, the centroid of the matched block is taken as the pupil location. For the experiments, we used a block size of 5 x 5 pixels.

Determining the Search Space: Since an exhaustive search over the entire image is computationally intensive, we propose an adaptive search method: we choose the search space based on the pupil location in the previous frame. Using this past information, we greatly reduce the complexity of the search. We used a search space of 60 x 60 pixels around the pupil location from the last frame.
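The steps above can be sketched in Python with NumPy alone. This is an illustrative re-implementation, not the project's actual LabVIEW code: the function names, the 3 x 3 box filter, the factor-2 downscaling, and the all-dark 5 x 5 template are our own assumptions chosen to mirror the described pipeline.

```python
import numpy as np

def to_grayscale(frame):
    """Discard color: average the three color channels."""
    return frame.mean(axis=2)

def box_filter(img, k=3):
    """Low-pass filter: replace each pixel with its k x k neighborhood mean."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def downscale(img, s=2):
    """Scale down by averaging s x s blocks."""
    h, w = img.shape[0] - img.shape[0] % s, img.shape[1] - img.shape[1] % s
    return img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def match_template(img, tmpl, region=None):
    """Exhaustive sum-of-absolute-differences search for the template.
    region = (y0, y1, x0, x1) restricts the search window (adaptive step)."""
    th, tw = tmpl.shape
    y0, y1, x0, x1 = region if region else (0, img.shape[0], 0, img.shape[1])
    best, best_pos = np.inf, (max(y0, 0), max(x0, 0))
    for y in range(max(y0, 0), min(y1, img.shape[0] - th + 1)):
        for x in range(max(x0, 0), min(x1, img.shape[1] - tw + 1)):
            score = np.abs(img[y:y + th, x:x + tw] - tmpl).sum()
            if score < best:
                best, best_pos = score, (y, x)
    # The centroid of the matched block is taken as the pupil location.
    return best_pos[0] + th // 2, best_pos[1] + tw // 2

def track(frames, win=60):
    """Run the pipeline on a frame sequence; return pupil locations
    in the coordinates of the downscaled images."""
    tmpl = np.zeros((5, 5))          # small patch of dark pixels as template
    locs, prev = [], None
    for frame in frames:
        img = downscale(box_filter(to_grayscale(frame)))
        if prev is None:
            region = None            # first frame: search the whole image
        else:                        # later frames: win x win window around
            py, px = prev            # the previous pupil location
            region = (py - win // 2, py + win // 2,
                      px - win // 2, px + win // 2)
        prev = match_template(img, tmpl, region)
        locs.append(prev)
    return locs
```

On a synthetic bright frame with one dark block, `track` locates the block in the first frame by exhaustive search and in later frames by searching only the window around the previous hit, which is where the speedup in the adaptive step comes from.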

The challenges include, but are not limited to, developing algorithms that are: (i) fast, to achieve a good frame rate, and (ii) robust, i.e., insensitive to lighting conditions and facial irregularities.