Eighteen years after Ann Johnson suffered a stroke that left her severely paralyzed, an experimental technology has translated her brain signals into audible words and enabled her to communicate through a digital avatar.

Digital Avatar and AI Allow Paralysis Patient to Speak Again

Prior to this study, patients who had lost the ability to speak had to rely on slow speech synthesizers that involve spelling out words using eye-tracking technology or small facial movements. These technologies can be difficult to use and make natural conversation nearly impossible.

The new technology was developed by researchers at the University of California, San Francisco, and the University of California, Berkeley. The process involved placing an implant on the surface of Johnson’s brain, over regions associated with language and speech.

The implant was placed during an operation in 2022 and contains 253 electrodes that intercept signals from thousands of neurons. The doctors also installed a port in Johnson’s head that connects to a cable carrying her brain signals to a bank of computers.

Prior to the procedure, Johnson could only make sounds like “ooh” and “ahh”, but her brain was still firing off speech signals. After implantation, Johnson worked with the team to train the system’s AI algorithm to recognize her unique brain signals for various speech sounds by repeating different phrases.
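
In machine-learning terms, this training step amounts to fitting a classifier that maps short windows of multi-electrode activity to the speech sound being attempted. The study’s actual decoder is a deep neural network whose code is not reproduced here; the sketch below is a deliberately simple stand-in using synthetic data, with only the electrode count (253) and sound count (39) taken from the article. The window length, trial count, and model choice are assumptions for illustration.

```python
# Illustrative sketch only: classify windows of multichannel neural
# activity into speech-sound classes. The 253 channels and 39 sound
# classes come from the article; the synthetic data, window size, and
# logistic-regression model are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

N_CHANNELS = 253   # electrodes on the implant (per the article)
N_SOUNDS = 39      # distinct speech sounds the system learned
WINDOW = 10        # assumed number of time samples per window

rng = np.random.default_rng(0)

# Synthetic stand-in for real recordings: each trial is one flattened
# window of neural activity, labeled with the sound being attempted.
n_trials = 2000
X = rng.normal(size=(n_trials, N_CHANNELS * WINDOW))
y = rng.integers(0, N_SOUNDS, size=n_trials)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A linear classifier stands in for the deep-learning decoder used in
# the real system; on random data it scores near chance (about 1/39).
clf = LogisticRegression(max_iter=200)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

On synthetic noise the classifier naturally scores near chance; the point is the shape of the pipeline, not the number.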

Paralysis Patient Speaks in Own Voice Using AI and Digital Avatar
Ann Johnson speaks using digital avatar with a copy of her own voice; Photo: Noah Berger

The computer learned to recognize 39 distinct speech sounds, and a ChatGPT-style AI language model translated her brain signals into sentences that could be spoken aloud by a digitally animated figure.
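
The role of the language model is to weigh what the signal classifier thinks it heard against what is plausible English. The toy sketch below illustrates the idea by scoring every candidate word path with classifier log-probabilities plus a small bigram language model; the vocabulary, scores, and bigram table are all invented for this example and are far simpler than the model used in the study.

```python
# Toy illustration: combine sound-classifier scores with a language
# model to pick the most plausible sentence. All words, scores, and
# bigram probabilities below are invented for this example.

# Hypothetical candidate words at three positions, each with a
# log-probability from the (imagined) brain-signal classifier.
candidates = [
    {"great": -0.4, "grape": -0.9},
    {"to": -0.3, "two": -0.5},
    {"see": -0.6, "sea": -0.7},
]

# A tiny bigram language model: log P(word | previous word).
bigram = {
    ("<s>", "great"): -0.5, ("<s>", "grape"): -3.0,
    ("great", "to"): -0.4,  ("great", "two"): -2.5,
    ("grape", "to"): -1.5,  ("grape", "two"): -2.0,
    ("to", "see"): -0.5,    ("to", "sea"): -2.0,
    ("two", "see"): -2.0,   ("two", "sea"): -2.5,
}

def decode(candidates, bigram):
    """Exhaustively score every word path: classifier log-prob plus
    language-model log-prob, returning the highest-scoring sentence."""
    paths = [([], 0.0)]
    for position in candidates:
        new_paths = []
        for words, score in paths:
            prev = words[-1] if words else "<s>"
            for word, acoustic in position.items():
                lm = bigram.get((prev, word), -5.0)  # unseen-pair penalty
                new_paths.append((words + [word], score + acoustic + lm))
        paths = new_paths
    return max(paths, key=lambda p: p[1])

words, score = decode(candidates, bigram)
print(" ".join(words), f"(log-score {score:.2f})")  # -> great to see
```

The real system searches a 1,024-word vocabulary with a neural language model rather than an exhaustive bigram table, but the trade-off it balances, signal evidence versus linguistic plausibility, is the same.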

So, for example, when Johnson attempted to say the sentence “Great to see you again,” the system decoded those signals and the digital avatar on a nearby screen spoke the words out loud. The system used in this study is significantly faster and more accurate than previously existing technologies, allowing Johnson to communicate with a more expansive vocabulary.

“Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” said Prof Edward Chang, who led the work at the University of California, San Francisco (UCSF). “These advancements bring us much closer to making this a real solution for patients.”

The technology converted Johnson’s speech attempts into words at approximately 80 words per minute and had a median accuracy of around 75% when using a 1,024-word vocabulary. For context, the natural rate of speech is around 150 to 200 words per minute.
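
Both reported figures follow from standard definitions: speed is decoded words divided by elapsed minutes, and accuracy can be expressed as one minus the word error rate, the word-level edit distance between the decoded and intended sentences. The short sketch below shows the arithmetic; the example sentences and timing are invented.

```python
# Illustrative sketch of the metrics behind the reported figures.
# Example sentences and timing are invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

wer = word_error_rate("great to see you again", "great to sea you again")
print(f"accuracy: {(1 - wer) * 100:.0f}%")  # one error in five words -> 80%

# Decoding speed: e.g. 40 words in 30 seconds is 80 words per minute.
print(f"rate: {40 / (30 / 60):.0f} words per minute")
```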

The researchers were even able to personalize the technology, using a recording of Johnson speaking at her wedding to create a digital replica of her own voice. They also converted her brain signals into facial movements and emotional expressions for the avatar.

At a news briefing, Chang expressed how meaningful the successful study was: “There’s nothing that can convey how satisfying it is to see something like this actually work in real-time.”

The technology isn’t yet wireless, so it can’t be integrated into the daily lives of patients like Johnson. The advance does, however, raise hopes that brain-computer interface (BCI) technology could soon transform the lives of people who have lost the ability to speak.
