aiD aims to address the challenge of deaf people's communication and social integration by leveraging the latest advances in machine learning (ML), human-computer interaction (HCI), and augmented reality (AR). Specifically, speech-to-text and text-to-speech algorithms have reached high performance as a product of recent breakthrough advances in the field of deep learning (DL). However, commercially available systems cannot be readily integrated into a solution targeted at communication between deaf and hearing people. On the other hand, existing research efforts to tackle the problem of transcribing sign language (SL) video, or of generating synthetic SL footage (an SL avatar) from text, have failed to produce a satisfactory outcome.

aiD addresses both of these problems. We develop speech-to-text and text-to-speech modules tailored to the requirements of a system addressing the communication of the deaf. Most importantly, we systematically address the core technological challenge of SL transcription and generation in an AR environment. Our vision is to exploit and advance the state of the art in DL to solve these problems with groundbreaking accuracy, in a fashion amenable to commodity mobile hardware. This stands in stark contrast to existing systems, which either depend on sophisticated, costly equipment (multiple vision sensors, gloves, and wristbands) or are lab-only systems limited to fingerspelling rather than the official SL that deaf people actually use. Indeed, the current state of the art requires expensive devices and operates on a word-by-word basis, thus missing the syntactic context; moreover, these solutions are not amenable to commodity mobile devices. aiD sets out to resolve these inadequacies and offer a concrete solution for real-time interaction between deaf and hearing people. Our core innovation lies in the development of new algorithms and techniques that enable the real-time translation of SL video to text or speech and vice versa (SL avatar generation from speech or text in an AR environment), with satisfactory accuracy, on commodity mobile devices such as smartphones and tablets.

Addressing the multifaceted challenge of enabling deaf people to effectively communicate, interact, and eventually participate in social life will bring about a major breakthrough in the lives of hundreds of thousands of European citizens. Inspired by our deep understanding of the deaf community's needs, as well as of the capacity of modern ML and AR technologies, the overarching goal of aiD is to pursue cross-disciplinary breakthrough innovation that builds and extends upon the latest academic research advances to offer a comprehensive suite of solutions catering to the communication needs of deaf people. Specifically, we target the following pilots: a) ease of communication by means of translation from and to SL, amenable to commodity mobile devices; b) novel educational solutions for deaf children; c) intelligent relay services for deaf people, including emergency services. We consider the full pipeline of communication, which entails development on multiple technological frontiers: signal processing, signal perception and generation via advanced ML, creation of virtual SL signals in an AR environment, usability issues, and scalability of the developed technologies on commodity mobile devices, accessible to the vast majority of potential users.
