Amazon Alexa mod turns sign language into voice commands
Amazon's Alexa voice assistant can queue up your favorite tunes, give a digest of the day's news and weather, and recommend movies you're likely to enjoy. But what if you're deaf or hard of hearing? Voice recognition systems like Alexa sometimes struggle to pick up the speech rhythms of deaf users, which presents a challenge for the more than 466 million people around the world with disabling hearing loss.
Luckily, software developer Abhishek Singh has a solution: an app that allows Alexa to understand and respond to sign language.
"The project was a thought experiment inspired by observing a trend among companies of pushing voice-based assistants as a way to create instant, seamless interactions," Singh told Fast Company. "If these devices are to become a central way we interact with our homes or perform tasks, then some thought needs to be given to those who cannot hear or speak. Seamless design needs to be inclusive in nature."
It isn't an off-the-shelf Echo, exactly. Singh's stack consists of an Amazon Echo connected to a laptop that handles processing; a webcam that detects signs in American Sign Language (ASL); and an artificially intelligent backend that translates those signs into voice commands Alexa can understand.
Singh trained a machine learning model in Google's TensorFlow framework (specifically TensorFlow.js, an open-source library that can define and run machine learning models in a web browser) by repeatedly gesturing in front of a webcam until it learned to distinguish between his hand, arm, and finger movements. He then hooked it up to a translation engine that listened for the Echo's responses, transcribed them, and displayed them as text.
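Singh hasn't yet published his code, but a minimal sketch of such a pipeline is easy to imagine in TensorFlow.js. The version below is an illustration under assumptions, not Singh's implementation: it pairs MobileNet embeddings with a KNN classifier (a common TensorFlow.js transfer-learning pattern) and uses the browser's Web Speech APIs to speak recognized signs to the Echo and transcribe its spoken replies.

    // Hypothetical sketch of a sign-to-voice loop in TensorFlow.js.
    // Assumes a browser page with webcam access and a <video> element.
    import * as tf from '@tensorflow/tfjs';
    import * as mobilenet from '@tensorflow-models/mobilenet';
    import * as knnClassifier from '@tensorflow-models/knn-classifier';

    const classifier = knnClassifier.create();
    let net: mobilenet.MobileNet;

    async function init() {
      net = await mobilenet.load();
    }

    // Training: capture a webcam frame and file its embedding under a sign label.
    function addExample(video: HTMLVideoElement, label: string) {
      const img = tf.browser.fromPixels(video);
      const embedding = net.infer(img, true); // penultimate-layer activation
      classifier.addExample(embedding, label); // classifier retains the tensor
      img.dispose();
    }

    // Inference: classify the current frame; if confident, speak the sign
    // aloud so a nearby Echo hears it as an ordinary voice command.
    async function signToSpeech(video: HTMLVideoElement) {
      const img = tf.browser.fromPixels(video);
      const embedding = net.infer(img, true);
      const result = await classifier.predictClass(embedding);
      img.dispose();
      embedding.dispose();
      if (result.confidences[result.label] > 0.9) {
        speechSynthesis.speak(new SpeechSynthesisUtterance(result.label));
      }
    }

    // Return path: transcribe the Echo's spoken response to on-screen text.
    function transcribeReply(onText: (t: string) => void) {
      const recognition = new (window as any).webkitSpeechRecognition();
      recognition.onresult = (e: any) =>
        onText(e.results[e.results.length - 1][0].transcript);
      recognition.start();
    }

A KNN approach would also fit the demo's workflow: adding a new sign is just a matter of capturing a few more labeled frames, which squares with Singh's claim that expanding the vocabulary is relatively easy.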
The setup goes a good deal beyond the accessibility features built into current-generation Echo devices. Tap to Alexa, which launched this week, allows owners of the touchscreen-equipped Echo Show to access Alexa features by tapping on the Show's screen. And Alexa Captioning transcribes incoming messages and Alexa's responses on Echo devices with a screen. But neither can read sign language.
Singh's system is purely proof-of-concept; it only recognizes a handful of signs at the moment. But he told The Verge that it's relatively easy to add new vocabulary, and that he plans to open-source the code shortly.
Singh's app isn't the first to leverage computer vision to translate sign language to text. In 2013, scientists at Microsoft Research Asia used the Kinect, a motion-sensing Xbox peripheral, to recognize signs as PC inputs. And in 2017, Nvidia built a messaging app with a deep neural network that performs ASL-to-English translation.