
SIGNS

MRM//McCANN, Frankfurt / GERMAN YOUTH ASSOCIATION OF PEOPLE WITH HEARING LOSS / 2019

Awards:

Shortlisted Cannes Lions
Demo Film
Supporting Images

Overview

Why is this work relevant for Creative Data?

SIGNS is the first smart voice assistant solution for people with hearing loss worldwide. It's an innovative smart tool that recognizes and translates sign language in real time and then communicates directly with a selected voice assistant service (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana). SIGNS is reinventing voice – one gesture at a time. Many people with hearing loss use their hands to speak. And that's all they need to talk to SIGNS. How's the weather tomorrow? Change lights to blue. Find an Italian restaurant. Just speak, and SIGNS will answer.

Background

There are over 2 billion voice-enabled devices across the globe. Voice assistants are changing the way we shop, search, communicate and even live – at least for most people. But what about those without a voice? What about those who cannot hear? According to the World Health Organization, around 466 million people worldwide have disabling hearing loss. Project SIGNS was developed to raise awareness of inclusion in the digital age and to facilitate access to new technologies.

Describe the idea/data solution

Many people with hearing loss use their hands to speak. Their hands are their voice. However, voice assistants use natural language processing to decipher and react only to audible commands. No sound, no reaction. Voice is predominantly seen as something audible, something you can hear. But for people with hearing loss who use their hands, voice is more: voice is gestures, movements, and even facial expressions. SIGNS bridges the gap between deaf people and voice assistants by recognizing gestures and communicating directly with existing voice assistant services (e.g. Amazon Alexa, Google Home or Microsoft Cortana). SIGNS is the first smart voice assistant solution for people with hearing loss worldwide.

Describe the data driven strategy

SIGNS was pre-trained with video footage of people who use sign language. SIGNS includes a training interface that can be used to teach new gestures in real time.
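A minimal sketch of how this real-time teaching could work, assuming the browser-based TensorFlow.js stack (a pre-trained MobileNet feature extractor feeding a k-NN classifier) that the entry describes below. The gesture label and the video element wiring are illustrative, not SIGNS production code.

    // Sketch only: teach the classifier a new gesture from live webcam frames.
    // The label 'weather_tomorrow' is an invented example.
    import * as tf from '@tensorflow/tfjs';
    import * as mobilenet from '@tensorflow-models/mobilenet';
    import * as knnClassifier from '@tensorflow-models/knn-classifier';

    const video = document.querySelector('video') as HTMLVideoElement;
    const net = await mobilenet.load();          // pre-trained feature extractor
    const classifier = knnClassifier.create();   // k-NN on top of the embeddings

    // Capture a handful of webcam frames while the user performs the new sign
    // and add each embedding to the classifier under the chosen label.
    async function trainGesture(label: string, frames = 50): Promise<void> {
      for (let i = 0; i < frames; i++) {
        const img = tf.browser.fromPixels(video);   // current webcam frame
        const activation = net.infer(img, true);    // MobileNet embedding
        classifier.addExample(activation, label);   // "teach" the gesture
        img.dispose();                              // free the tensor
        await tf.nextFrame();                       // yield to the render loop
      }
    }

    // e.g. wired to a "record gesture" button: trainGesture('weather_tomorrow');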

SIGNS then recognizes these gestures and acts as an interface to voice assistant systems such as Amazon Alexa, Google Home or Microsoft Cortana. SIGNS is based on an intelligent machine learning framework that is trained to identify body gestures with the help of an integrated camera. These gestures are converted into a data format that the voice assistant service understands. The voice assistant processes the data in real time and replies appropriately. SIGNS replaces the typical audio-based communication of voice assistants with a visual one – and not merely by displaying words on screen. The visual interface of SIGNS fulfills the various requirements necessary for an intuitive experience and follows the basic principles of sign language. To that end, the SIGNS dictionary was developed: a set of symbols inspired by the hand movements of the signs themselves. Just as with other voice assistant devices, the user interacts naturally with the device.
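One way to picture the dictionary and grammar step: each recognized gesture maps to a screen symbol and a phrase fragment, and the fragments are reordered into conventional spoken grammar before being handed to the assistant. The sketch below is hedged throughout; every dictionary entry and the reordering rule are invented for illustration.

    // Illustrative only: a tiny stand-in for the SIGNS dictionary and the
    // sign-grammar -> spoken-grammar step. All entries are invented.
    interface DictionaryEntry {
      symbol: string;  // icon shown in the visual interface, inspired by the sign
      phrase: string;  // fragment used to build the spoken-grammar sentence
    }

    const signsDictionary: Record<string, DictionaryEntry> = {
      question: { symbol: '?', phrase: 'how is' },
      weather:  { symbol: '☀', phrase: 'the weather' },
      tomorrow: { symbol: '→', phrase: 'tomorrow' },
    };

    // Many sign languages place question signs at the end of the utterance,
    // so fragments are reordered into conventional English word order.
    function toSpokenSentence(gestureLabels: string[]): string {
      const entries = gestureLabels.map((l) => signsDictionary[l]);
      const question = entries.find((e) => e.phrase.startsWith('how'));
      const rest = entries.filter((e) => e !== question);
      return [question, ...rest].filter(Boolean).map((e) => e!.phrase).join(' ');
    }

    // toSpokenSentence(['weather', 'tomorrow', 'question'])
    //   -> "how is the weather tomorrow"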

Describe the creative use of data, or how the data enhanced the creative output

SIGNS uses an integrated camera to recognize sign language in real time and communicates directly with a voice assistant. The system is based on Google's machine learning framework TensorFlow. The output of a pre-trained MobileNet is used to train several k-NN classifiers on gestures. The recognizer calculates the likelihood of each gesture recorded by the webcam and converts the result into text. The resulting sentences are translated into conventional grammar and sent to a cloud-based service that generates speech from them (text to speech) – a data format the selected voice assistant understands. The demo shown here uses the Alexa Voice Service (AVS). AVS responds with metadata and audio data, which a cloud service in turn converts back into text (speech to text). The result is displayed on screen. SIGNS works on any browser-based operating system that has an integrated camera and can be connected to a voice assistant.
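Read as code, the round trip described above might look like the following sketch. Here synthesizeSpeech(), sendToAssistant() and transcribe() stand in for the cloud text-to-speech service, the AVS client and the speech-to-text service; all of them, plus toSpokenSentence() and render(), are assumed placeholder helpers, not real SIGNS or AVS APIs.

    // Sketch of the gesture -> assistant -> screen round trip. The declared
    // helpers are placeholders for components whose real client code is out
    // of scope here; none of them are actual SIGNS or AVS APIs.
    declare function toSpokenSentence(labels: string[]): string;            // grammar step
    declare function synthesizeSpeech(text: string): Promise<ArrayBuffer>;  // cloud TTS
    declare function sendToAssistant(
      audio: ArrayBuffer,
    ): Promise<{ audio: ArrayBuffer; metadata: unknown }>;                  // AVS round trip
    declare function transcribe(audio: ArrayBuffer): Promise<string>;       // cloud STT
    declare function render(text: string, metadata: unknown): void;         // visual interface

    async function handleUtterance(labels: string[]): Promise<void> {
      // 1. Gestures -> text: recognized labels become a conventional sentence.
      const sentence = toSpokenSentence(labels);   // e.g. "how is the weather tomorrow"

      // 2. Text -> speech: audio is the only input the assistant understands.
      const request = await synthesizeSpeech(sentence);

      // 3. The assistant replies with audio plus metadata.
      const { audio, metadata } = await sendToAssistant(request);

      // 4. Speech -> text: the reply is transcribed and shown on screen.
      const reply = await transcribe(audio);
      render(reply, metadata);
    }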

List the data driven results

Project SIGNS raised awareness of inclusion in the digital age and facilitated access to new technologies. The response from the deaf community was overwhelming. Just like voice, gestures are an intuitive way of communicating, which makes the approach extremely relevant for the industry – not just for people with hearing loss, but for everyone. Many people find it awkward to speak to an invisible assistant in public, which is why we believe that invisible conversational interactions with the digital world are not limited to voice itself. Furthermore, we started a cooperation with the German Youth Association of People with Hearing Loss as a partner and extended the tool's usability. Never before has a sign language assistant been launched at this quality, with the prospect of becoming a worldwide platform that is easily accessible from anywhere and can learn new signs and sign languages.
