KINECT AMERICAN SIGN TRANSLATOR (KAST)



Abstract
The purpose of this project was to make an American Sign Language translator application for the Kinect V2. A program like this could be set up in public common areas, such as libraries and airports. Computers running the Kinect American Sign Translator application, or KAST for short, could be placed at employees' desks; a deaf user could sign in front of the Kinect connected to the computer and use it to ask questions. For instance, in an airport a deaf user could sign "Where is terminal X," and the employee could type the answer in a word processor application. KAST was created in Visual Studio 2015 on a Windows 10 computer using C# and the Kinect SDK. The application takes the gestures performed in front of the Kinect and compares them to a "dictionary" database. If the application finds a match in the database, the English translation of the sign is displayed within the app and spoken aloud using a voice synthesizer. The "dictionary" database was created using Kinect Studio and Visual Gesture Builder (VGB), two components of the Kinect SDK. To measure the application's success, each gesture was performed multiple times and the positives, false positives, and misses were recorded. A positive is a successful gesture detection; a false positive is when an incorrect gesture was detected as a valid gesture; a miss means no gesture was detected at all. Overall, the application proved successful, with the lowest-recognized word still 67% positive.

Results
KAST was able to recognize and convert American Sign Language into text. The lowest gesture detection rate was above the 60% requirement set before building and testing the application. However, most words scored in the low 70s, which is less than desirable. When this project began, there were only 5 words in the Lexicon database, and those words were trained more thoroughly.
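The testing metric above (positives, false positives, and misses per word) can be sketched as a small calculation. This is an illustrative sketch only: the function name and the example counts are hypothetical, not the project's actual code or data.

```python
# Compute a per-word positive recognition rate from recorded trial counts.
# The counts below are made-up examples, not KAST's measured results.
def recognition_rate(positives, false_positives, misses):
    total = positives + false_positives + misses  # e.g. 30 trials per word
    return 100.0 * positives / total

trials = {
    "Hello":   {"positives": 22, "false_positives": 5, "misses": 3},
    "Pancake": {"positives": 20, "false_positives": 7, "misses": 3},
}

for word, t in trials.items():
    print(f"{word}: {recognition_rate(**t):.1f}% positive")
```

With 30 trials per word, a word detected correctly 22 times scores about 73.3%, which matches the "low 70s" range reported in the results.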
The rest of the gestures in the database were trained less frequently, and this is believed to be why many of the results were so low. In addition, the way KAST detects a gesture could explain some of the false positives. As explained previously, the code essentially checks whether the frame data seen by the Kinect matches the last 5% of a trained gesture's frame data. Most of the gestures in the database do not have that last 5% in common; however, the issue did occur with a few words, most notably "Pancake" and "Hello," due to their similar signs. Hopefully in the future KAST will be able to discern the difference without sacrificing the fluidity that last 5% gives the application.

Development Environment
KAST was written in the C# language in Microsoft's Visual Studio 2015 development environment; gestures were recorded in Kinect Studio and trained in Visual Gesture Builder. The language and development environment were chosen because their capabilities best fit what was needed for creating the application. One setback of using Visual Studio 2015 was a lack of prior experience with it, so a large portion of time was spent learning the development environment.

Testing
Each word in the created database was signed 30 times during the testing period. The number of times KAST successfully translated the word, translated the wrong word, or did not detect the sign at all was recorded.

Procedure
The Kinect V2 sensor is installed, a sign is performed, and KAST searches a database of ASL gestures, printing the name of the gesture if it finds a match.

What is ASL?
American Sign Language (ASL) is the main mode of communication used by the hearing-impaired community in America. Every gesture in ASL corresponds to an English word, and signs are strung together to make a sentence.
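The "last 5%" matching idea described above can be sketched as follows. This is a hypothetical illustration, not the actual detection code: real Visual Gesture Builder detectors work on trained classifiers, and the function name, the 1-D values standing in for joint positions, and the tolerance are all assumptions. It does show why two signs with similar endings, like "Pancake" and "Hello," could be confused.

```python
# Hypothetical sketch: declare a gesture complete when the most recent
# live frames match the final 5% of a trained gesture's recorded frames.
# Frames are simplified to single numbers standing in for joint data.
def tail_matches(live_frames, trained_frames, tail_fraction=0.05, tol=0.1):
    n = max(1, int(len(trained_frames) * tail_fraction))
    recent = live_frames[-n:]
    if len(recent) < n:
        return False  # not enough live data to compare yet
    tail = trained_frames[-n:]
    # Average absolute difference over the tail window.
    diffs = [abs(a - b) for a, b in zip(recent, tail)]
    return sum(diffs) / n < tol
```

Because only the final sliver of the motion is compared, two gestures whose endings nearly coincide would both satisfy this check, producing the false positives noted in the results.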
Future Research
In ASL, a sentence can be signed either in "Subject-Verb-Object" form (like English) or in "Object-Verb-Subject" form. As it stands, KAST cannot turn the latter into proper English grammar, an issue that will hopefully be remedied in the future. The Kinect V2 is not precise enough to let KAST detect finger spelling or numbers. An algorithm for hand tracking that could help with this was found, but due to time constraints it was not successfully integrated into KAST. More languages will be added to make the application more universal.

What is the Kinect?
The Kinect is a motion capture device created by Microsoft, initially for the Xbox One for gaming purposes. Later, Microsoft released a version for Windows and an SDK for creating Kinect applications such as KAST.

How the Application Works
First, the gesture database was created. Gestures were performed in Kinect Studio, recording software included in the Kinect SDK, and the recordings were "trained" in Visual Gesture Builder, another component of the SDK. Training a gesture means giving it values that allow it to be detected. Once this was done for all gestures, they were compiled into one database file. The second step was to write the code for the application itself. The main part of the code checks whether the gesture performed in the previous frame matches any of the gestures in the Lexicon database. If so, the gesture's name, which is its English translation, is printed and spoken via a speech synthesizer.

Kinect V2 Skeleton Joint Mapping and Detection
Positive Word Recognition % (Data)
The Kinect skeleton describes how the Kinect views humans. The Kinect V2 skeleton has 25 joints; when the sensor tracks a user's motion, it tracks the motion of the individual joints. In Visual Gesture Builder, you can tell the Kinect to ignore certain joints when looking for a given gesture.
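The main detection step described above (check the latest gesture results against the Lexicon database, then print and speak the translation) can be sketched as a control-flow illustration. The real KAST code is C# using the Kinect SDK's gesture-frame results and a .NET speech synthesizer; here a plain dictionary of gesture-name to confidence stands in for the frame results, speak() just prints, and the threshold value is an assumption.

```python
# Illustrative control flow only; names and threshold are hypothetical.
CONFIDENCE_THRESHOLD = 0.6  # assumed detection cutoff

def speak(text):
    # Stand-in for a real speech synthesizer call.
    print(f"(spoken) {text}")

def handle_frame(frame_results, lexicon):
    """frame_results: {gesture_name: detection_confidence} for one frame.
    lexicon: {gesture_name: English translation}."""
    for name, confidence in frame_results.items():
        if name in lexicon and confidence >= CONFIDENCE_THRESHOLD:
            translation = lexicon[name]
            print(translation)   # display in the app
            speak(translation)   # speak the English word
            return translation
    return None

lexicon = {"HelloGesture": "Hello", "PancakeGesture": "Pancake"}
handle_frame({"HelloGesture": 0.82}, lexicon)
```

A frame whose best match falls below the threshold yields no output, which corresponds to a "miss" in the testing terminology above.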
Images courtesy of Microsoft.com.

Purpose
The purpose of this project was to make an American Sign Language translator application for the Kinect V2. A program like this could be set up in public common areas, such as libraries and airports, potentially making the lives of hearing-impaired and mute people easier.




