Augmentative communication technologies using naturalistic data and personalized machine learning

Year: 2020

Project team: Pattie Maes & Rosalind Picard, with Jaya Narain, Kristina Johnson, and Craig Ferguson

Understanding nonverbal communication

More than 1 million people in the United States are nonverbal or minimally verbal, including people with autism, Down syndrome, and other genetic disorders. These individuals experience stress, frustration, and isolation when communicating in a society largely constructed around typical verbal speech. Yet through non-speech vocalizations they express rich affective and communicative information, and their parents or primary caregivers often learn to “translate” these sounds. This project is developing a full-feedback augmentative-communication system that uses a database of vocalizations from nonverbal and minimally verbal individuals, together with personalized machine learning algorithms, to “translate” non-speech sounds into speech. Such a system would enhance communicative exchanges between nonverbal or minimally verbal individuals and the wider community.
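As a concrete illustration of what a personalized model in such a pipeline might look like, the sketch below trains a small per-individual classifier on acoustic features extracted from labeled vocalization clips. The feature choice (MFCC summary statistics), the example label set, the model type, and the file layout are all assumptions made for illustration; they are not the project’s actual implementation.

```python
# Minimal sketch of a personalized vocalization classifier.
# Assumptions (not from the project): MFCC summary features,
# a random-forest model, and caregiver-provided labels such as
# "delight", "frustration", and "request".
import glob

import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def mfcc_summary(path: str, sr: int = 16000) -> np.ndarray:
    """Summarize a vocalization clip as the mean/std of its MFCCs."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def train_personalized_model(clip_label_pairs):
    """Fit one model per individual on that individual's labeled clips."""
    X = np.stack([mfcc_summary(path) for path, _ in clip_label_pairs])
    y = np.array([label for _, label in clip_label_pairs])
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    # Report cross-validated accuracy before fitting on all the data.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"cv accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
    return model.fit(X, y)


if __name__ == "__main__":
    # Hypothetical layout: clips/<label>/<clip>.wav for one individual.
    pairs = [(path, path.split("/")[-2]) for path in glob.glob("clips/*/*.wav")]
    model = train_personalized_model(pairs)
```

Training one model per individual, rather than a single shared model, reflects the “personalized machine learning” framing in the project title: each person’s vocal repertoire is distinct, so a small model fit to that person’s own labeled sounds can outperform a generic one.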

This project is funded by the Alana TTIA program at the Deshpande Center.

Jaya Narain and Kristy Johnson present “Commalla: Toward Improved Understanding of Nonverbal Vocalizations from Minimally Speaking Individuals” at IdeaStream 2021.