Higher School of Economics , Moscow, 2016


<Topic>
Prepared by: <Master's student name>, System and Software Engineering
Supervised by: <Supervisor name>
Faculty of Computer Science / School of Software Engineering
June, 2018

Contents
Higher School of Economics, Moscow, 2018

Problem Statement
We use colored fingertips for detection, captured by a webcam: each colored tip acts as an object that the camera senses. The camera is positioned so that it tracks the movement of the fingertips and translates that movement into mouse operations. A virtual mouse is useful in space-constrained or mobile situations. During prolonged use, however, users' arms begin to feel fatigue and discomfort, an effect that contributed to the decline of touch-screen input.
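The virtual-mouse idea above can be sketched in a few lines: find the centroid of the colored-tip pixels in a camera frame, then rescale it to screen coordinates. This is a minimal illustration, not the project's implementation; the function names and the toy 4x4 frame are invented for the example, and a real system would read frames from a webcam.

```python
# Sketch: locate the colored fingertip in a frame and map its centroid
# to screen coordinates. All names and the toy frame are illustrative.

def find_marker_centroid(frame, is_marker):
    """Return the (row, col) centroid of pixels matching the marker color."""
    rows = cols = count = 0
    for r, row in enumerate(frame):
        for c, px in enumerate(row):
            if is_marker(px):
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None  # no marker visible in this frame
    return (rows / count, cols / count)

def camera_to_screen(point, cam_size, screen_size):
    """Linearly rescale camera coordinates to screen coordinates."""
    r, c = point
    cam_h, cam_w = cam_size
    scr_h, scr_w = screen_size
    return (r * scr_h / cam_h, c * scr_w / cam_w)

# Toy 4x4 "frame": 1 marks the colored tip, 0 is background.
frame = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
centroid = find_marker_centroid(frame, lambda px: px == 1)
cursor = camera_to_screen(centroid, (4, 4), (1080, 1920))
print(centroid)  # (1.5, 1.5)
print(cursor)    # (405.0, 720.0)
```

Moving the cursor to the mapped position would then be delegated to an OS-level input library; that part is platform-specific and is omitted here.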

Project Scope

Goal and Objectives
Image-based gesture recognition faces limitations from the equipment used and from image noise. Images or video may not be captured under consistent lighting or in the same location, and objects in the background or distinctive features of the users may make recognition more difficult. The variety of implementations also hinders the general viability of the technology: an algorithm calibrated for one camera may not work for a different camera, and the distance from the camera, as well as the camera's resolution and quality, causes further variation in recognition accuracy.

Ways of Solving Tasks
Azure Mobile Apps provides many useful functions for building a backend for mobile applications. These services were used to build the architecture, from the storage facilities (Azure SQL) to the mobile service itself, which hosts the backend, incorporates push notifications, integrates social networks, and allows single sign-on authentication.

Proposed Methods
To recognize a hand gesture, we first resize the input image in order to map camera coordinates to screen coordinates. Skin detection then finds the skin-coloured pixels in the image: the input from the webcam is in the RGB colour space, so it has to be converted to the HSV colour space using the standard conversion formulae. The histogram-based skin detection method uses 32-bin H and S histograms to achieve skin detection. Next, edge detection is performed to obtain the hand contour in the image; common methods include Laplacian edge detection, Canny edge detection, and border finding. The OpenCV function cvFindContours() uses a border-finding edge detection method to find the contours in the image. Finally, Consumed Endurance (CE) is a metric that captures the degree of arm fatigue during mid-air interaction.
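The RGB-to-HSV conversion and the 32-bin H/S histogram step can be sketched as follows. This is an illustrative toy, not the project's code: the "skin sample" pixels are invented, and a real pipeline would use OpenCV's colour conversion and histogram routines on whole frames rather than Python's standard-library colorsys on individual pixels.

```python
import colorsys  # stdlib RGB<->HSV conversion, used here in place of OpenCV

BINS = 32  # the method uses 32-bin H and S histograms

def rgb_to_hs_bins(r, g, b):
    """Convert an RGB pixel (0-255 channels) to 32-bin H and S bin indices."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return min(int(h * BINS), BINS - 1), min(int(s * BINS), BINS - 1)

def build_histogram(samples):
    """Accumulate a 32x32 H/S histogram from sample skin pixels."""
    hist = [[0] * BINS for _ in range(BINS)]
    for r, g, b in samples:
        hb, sb = rgb_to_hs_bins(r, g, b)
        hist[hb][sb] += 1
    return hist

def is_skin(pixel, hist, threshold=1):
    """Classify a pixel as skin if its H/S cell was seen often enough."""
    hb, sb = rgb_to_hs_bins(*pixel)
    return hist[hb][sb] >= threshold

# Illustrative skin-tone samples (reddish hue, moderate saturation).
skin_samples = [(230, 158, 130), (222, 152, 124), (238, 164, 136)]
hist = build_histogram(skin_samples)

print(is_skin((226, 155, 127), hist))  # similar tone -> True
print(is_skin((30, 90, 220), hist))    # blue background -> False
```

The skin mask produced this way is what the contour-finding step (cvFindContours in OpenCV) would then trace to obtain the hand outline.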

Tools and Technologies

Expected Results

Advantages

References

Questions, please!