SAN FRANCISCO — A woman has regained the ability to “speak” through an avatar after her brain signals were decoded and translated into text. Ann, now 48, suffered a brainstem stroke at the age of 30 that left her paralyzed.
Researchers from the University of California-San Francisco implanted a thin, rectangular sheet of 253 electrodes on the surface of her brain, over the area responsible for speech. The electrodes capture the brain signals that would otherwise drive speaking, which travel through a cable connected to a port on her head to a bank of computers. The system translates these signals into text at a speed of roughly 80 words per minute.
To recreate her voice, the system draws on an audio recording of Ann speaking at her wedding, made before her stroke. That synthesized voice is paired with a digital avatar that displays facial expressions as she communicates. The research team, which included scientists from UC Berkeley, used artificial intelligence to develop this brain-computer interface (BCI).
“Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others. These advancements bring us much closer to making this a real solution for patients,” says Dr. Edward Chang, chair of neurological surgery at UCSF, in a university release.
For weeks, Ann worked with the research team to train the artificial intelligence algorithms to recognize her unique brain signals for speech. Rather than decoding whole words, the system decodes speech from phonemes, the sound units that combine to form spoken words. Working at this level improved both the system's accuracy and its speed.
“The accuracy, speed, and vocabulary are crucial. It’s what gives Ann the potential, in time, to communicate almost as fast as we do, and to have much more naturalistic and normal conversations,” says Sean Metzger, a graduate student in the joint Bioengineering Program at UC Berkeley and UCSF who helped develop the text decoder.
The team also developed a custom machine-learning process that converted her brain signals into movements on the avatar’s face, animating expressions like happiness, sadness, and surprise.
“We’re making up for the connections between her brain and vocal tract that have been severed by the stroke. When Ann first used this system to speak and move the avatar’s face in tandem, I knew that this was going to be something that would have a real impact,” adds Kaylo Littlejohn, another graduate student working on the project.
The study builds on earlier work by Dr. Chang’s team and represents a significant advance toward restoring speech and facial expressions in paralyzed individuals. Currently, the team is working on a wireless version of the technology.
“Being a part of this study has given me a sense of purpose; I feel like I am contributing to society. It feels like I have a job again. It’s amazing I have lived this long; this study has allowed me to really live while I’m still alive!” says Ann.
The findings are published in the journal Nature.
South West News Service writer Jim Leffman contributed to this report.