Breakthrough in brain research: Turning thoughts into speech with over 90 percent accuracy

NIJMEGEN, Netherlands — Patients who are “trapped” inside their own bodies because of neurodegenerative conditions like amyotrophic lateral sclerosis (ALS) may have a groundbreaking tool to help them communicate in the coming years. Dutch researchers have developed a method to convert brain signals directly into audible speech with stunning accuracy, ranging from 92 to 100 percent.

This significant breakthrough at the intersection of neuroscience and artificial intelligence offers new hope to individuals fighting to regain their voices.

“Ultimately, we hope to make this technology available to patients in a locked-in state, who are paralyzed and unable to communicate,” says study lead author Julia Berezutskaya, a researcher at Radboud University’s Donders Institute for Brain, Cognition and Behaviour and UMC Utrecht, in a media release. “These people lose the ability to move their muscles, and thus to speak. By developing a brain-computer interface, we can analyze brain activity and give them a voice again.”

In the study, non-paralyzed participants fitted with temporary brain implants were asked to vocalize specific words. The team then mapped the recorded brain signals directly onto the spoken words.
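
The article does not detail the decoding pipeline, but the general setup, classifying which of a small set of words was spoken from the accompanying neural activity, can be sketched roughly as follows. Everything here is a hypothetical stand-in: the feature counts, the synthetic data, and the simple logistic-regression classifier are assumptions for illustration, not the study’s actual method.

```python
# Minimal sketch of a word-decoding setup, with synthetic data standing in
# for real recordings. The study's actual features and model are not
# described in this article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_features = 240, 128               # e.g., 20 repetitions x 12 words; per-electrode features
X = rng.normal(size=(n_trials, n_features))   # stand-in for neural activity (e.g., high-gamma power)
y = rng.integers(0, 12, size=n_trials)        # labels for a 12-word vocabulary

# Cross-validated accuracy of a simple linear decoder
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.1%}")
```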

“We also used advanced artificial intelligence models to translate that brain activity directly into audible speech,” explains Berezutskaya. “That means we weren’t just able to guess what people were saying, but we could immediately transform those words into intelligible, understandable sounds. In addition, the reconstructed speech even sounded like the original speaker in their tone of voice and manner of speaking.”
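
Berezutskaya does not spell out the models here, but one common recipe in this literature is to predict audio spectrogram frames from neural features and then invert them back into a waveform. The sketch below uses a plain ridge regression and librosa’s mel-spectrogram inversion purely as stand-ins; the researchers’ actual models are more advanced, and every number and variable in this snippet is an assumption.

```python
# Minimal sketch of speech reconstruction: regress spectrogram frames from
# neural features, then invert the spectrogram to a waveform. A linear
# model stands in for the advanced AI models the researchers describe;
# all data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
import librosa

rng = np.random.default_rng(1)
n_frames, n_neural, n_mels = 500, 128, 64

Z = rng.normal(size=(n_frames, n_neural))        # neural features per audio frame (stand-in)
S = np.abs(rng.normal(size=(n_frames, n_mels)))  # target mel-spectrogram frames (stand-in)

model = Ridge(alpha=1.0).fit(Z, S)               # frame-wise regression: neural -> spectrogram
S_hat = np.maximum(model.predict(Z), 1e-6)       # spectrogram magnitudes must stay positive

# Invert the predicted mel spectrogram back to audio (assumes 16 kHz)
waveform = librosa.feature.inverse.mel_to_audio(S_hat.T, sr=16000)
```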

While research groups around the world are seeking ways to identify words and sentences from brain patterns, this team stands out for producing comprehensible speech from relatively small datasets. That suggests their models adeptly capture the intricate relationship between speech and brain activity.

To assess how well the synthesized speech worked, the team ran listening tests with volunteers. The overwhelmingly positive results showed that the technology can convey words clearly, in a way that resembles natural speech.
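
A listening test of this kind is straightforward to score: listeners identify each reconstructed word, and intelligibility is simply the fraction they get right. The words and responses below are invented for illustration and do not come from the study.

```python
# Minimal sketch of scoring a word-identification listening test.
# The word list and listener responses are invented for illustration.
presented = ["water", "help", "yes", "no", "water", "stop"]
heard     = ["water", "help", "yes", "yes", "water", "stop"]

correct = sum(p == h for p, h in zip(presented, heard))
print(f"Intelligibility: {correct / len(presented):.0%}")  # 83% on this toy data
```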

However, Berezutskaya advises caution, noting the study’s scope was limited to detecting 12 words vocalized by participants.

“In general, predicting individual words is less complicated than predicting entire sentences. In the future, large language models that are used in AI research can be beneficial,” says Berezutskaya. “Our goal is to predict full sentences and paragraphs of what people are trying to say based on their brain activity alone. To get there, we’ll need more experiments, more advanced implants, larger datasets and advanced AI models. All these processes will still take a number of years, but it looks like we’re heading in the right direction.”
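
One way a language model can help, along the lines Berezutskaya suggests, is by acting as a prior over word sequences: the decoder proposes per-word probabilities from brain activity, and the language model favors sequences that read like plausible sentences. The toy example below combines made-up decoder outputs with a made-up bigram prior; none of these words or probabilities come from the study.

```python
# Toy illustration of combining a neural decoder with a language-model
# prior to pick the most plausible word sequence. All probabilities and
# words are invented; this is not the study's method.
import math
from itertools import product

vocab = ["i", "want", "water", "help"]

# Decoder output: P(word | brain activity) at each time step (made up)
decoder_probs = [
    {"i": 0.60, "want": 0.20, "water": 0.10, "help": 0.10},
    {"i": 0.10, "want": 0.50, "water": 0.20, "help": 0.20},
    {"i": 0.05, "want": 0.15, "water": 0.50, "help": 0.30},
]

# Toy bigram prior P(next word | previous word); unseen pairs get a floor
bigram = {("i", "want"): 0.7, ("want", "water"): 0.6, ("want", "help"): 0.3}

def score(seq):
    s = sum(math.log(decoder_probs[t][w]) for t, w in enumerate(seq))
    s += sum(math.log(bigram.get(pair, 0.05)) for pair in zip(seq, seq[1:]))
    return s

best = max(product(vocab, repeat=3), key=score)
print(" ".join(best))  # "i want water": the decoder and prior agree
```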

The study is published in the Journal of Neural Engineering.


About the Author

Matt Higgins

Matt Higgins worked in national and local news for 15 years. He started out as an overnight production assistant at Fox News Radio in 2007 and ended in 2021 as the Digital Managing Editor at CBS Philadelphia. Following his news career, he spent one year in the automotive industry as a Digital Platforms Content Specialist contractor with Subaru of America and is currently a freelance writer and editor for StudyFinds. Matt believes in facts, science and Philadelphia sports teams crushing his soul.
