Tuesday, July 8, 2025

Meet the Indian scientist who co-developed the world’s first brain implant to restore expressive speech


California, June 13: A man with a severe speech disability can speak expressively and sing using a brain implant that translates his neural activity into words almost instantly. The device conveys changes of tone when he asks questions, emphasizes the words of his choice, and allows him to hum a string of notes in three pitches.

The system, known as a brain–computer interface (BCI), uses artificial intelligence (AI) to decode the participant’s electrical brain activity as he attempts to speak. The device is the first to reproduce not only a person’s intended words but also features of natural speech such as tone, pitch, and emphasis, which help to express meaning and emotion.

In a study, a synthetic voice that mimicked the participant’s own spoke his words within 10 milliseconds of the neural activity that signalled his intention to speak. The system, described today in Nature, marks a significant improvement over earlier BCI models, which streamed speech with a delay of around three seconds or produced it only after users had finished miming an entire sentence.

“This is the holy grail in speech BCIs,” says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands, who was not involved in the study. “This is now real, spontaneous, continuous speech.”

The study participant, a 45-year-old man, lost his ability to speak clearly after developing amyotrophic lateral sclerosis, a form of motor neuron disease, which damages the nerves that control muscle movements, including those needed for speech. Although he could still make sounds and mouth words, his speech was slow and unclear.


Five years after his symptoms began, the participant underwent surgery to insert 256 silicon electrodes, each 1.5 mm long, in a brain region that controls movement.

Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes — the subunits of speech that form spoken words.
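
The article does not detail the study’s architecture, but the core idea it describes (mapping each 10-millisecond frame of multi-electrode activity directly to sound, rather than to words or phoneme labels) can be sketched. Everything below, from the GRU-based model to the layer sizes and feature counts, is an illustrative assumption, not the system reported in Nature.

```python
# Illustrative sketch only: a streaming neural-to-acoustic decoder.
# The GRU choice, layer sizes, and feature names are assumptions,
# not the architecture used in the study.
import torch
import torch.nn as nn

N_ELECTRODES = 256   # matches the implant described in the article
FRAME_MS = 10        # one decoding step per 10 ms of neural data
N_ACOUSTIC = 20      # hypothetical acoustic features (pitch, energy, spectral shape)

class StreamingDecoder(nn.Module):
    """Maps each 10 ms frame of electrode features to acoustic
    parameters, carrying hidden state so output stays continuous."""
    def __init__(self, hidden=512):
        super().__init__()
        self.rnn = nn.GRUCell(N_ELECTRODES, hidden)
        self.to_acoustic = nn.Linear(hidden, N_ACOUSTIC)

    def step(self, frame, h):
        h = self.rnn(frame, h)
        return self.to_acoustic(h), h

decoder = StreamingDecoder()
h = torch.zeros(1, 512)
for t in range(300):                      # 300 frames = 3 s of attempted speech
    frame = torch.randn(1, N_ELECTRODES)  # stand-in for real neural features
    acoustic, h = decoder.step(frame, h)
    # In a real system these parameters would drive a vocoder that
    # renders the personalized synthetic voice frame by frame.
```

Carrying hidden state from frame to frame is what would let such a decoder stream sound continuously instead of waiting for a completed sentence.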

“We don’t always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,” explains Wairagkar. “To do that, we have adopted this approach, which is completely unrestricted.”

The team also personalized the synthetic voice to sound like the man’s own, by training AI algorithms on recordings of interviews he had done before the onset of his disease.
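
Conceptually, this personalization is a fine-tuning problem: start from a speech-synthesis model and adapt it to the participant’s archived recordings. The minimal sketch below is hypothetical; the stand-in model, loss, and random tensors are placeholders for a real pretrained vocoder and real audio data, none of which come from the paper.

```python
# Illustrative sketch only: adapting a generic vocoder to one speaker's
# archived recordings. Model, loss, and data are stand-ins.
import torch
import torch.nn as nn

vocoder = nn.Sequential(            # placeholder for a pretrained neural vocoder
    nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 160)
)  # 20 acoustic features in; 160 audio samples out (10 ms at 16 kHz)

opt = torch.optim.Adam(vocoder.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(1000):
    # In practice: acoustic features and matching waveform frames
    # extracted from the participant's pre-disease interview recordings.
    feats = torch.randn(32, 20)
    target_audio = torch.randn(32, 160)
    loss = loss_fn(vocoder(feats), target_audio)
    opt.zero_grad()
    loss.backward()
    opt.step()
```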

The team asked the participant to attempt to make interjections such as ‘aah’, ‘ooh’, and ‘hmm’, and say made-up words. The BCI successfully produced these sounds, showing that it could generate speech without needing a fixed vocabulary.

Using the device, the participant spelt out words, responded to open-ended questions, and said whatever he wanted, using some words that were not part of the decoder’s training data. He told the researchers that listening to the synthetic voice produce his speech made him “feel happy” and that it felt like his “real voice”.

In other experiments, the BCI identified whether the participant was attempting to say a sentence as a question or as a statement. The system could also determine when he stressed different words in the same sentence and adjust the tone of his synthetic voice accordingly. “We are bringing in all these different elements of human speech which are important,” says Wairagkar. Previous BCIs could produce only flat, monotone speech.
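
The article does not say how the system detects intonation and stress, but one plausible reading is that a small additional model predicts prosody from the same decoded activity. The sketch below is a hypothetical illustration of that pattern; the ProsodyHead module, its sizes, and the thresholds are all assumptions.

```python
# Illustrative sketch only: decoding prosody (question vs. statement,
# per-frame emphasis) alongside the words themselves. All names and
# sizes are assumptions, not the study's implementation.
import torch
import torch.nn as nn

class ProsodyHead(nn.Module):
    """Reads the decoder's hidden states and predicts (a) whether the
    sentence is a question and (b) an emphasis score per frame."""
    def __init__(self, hidden=512):
        super().__init__()
        self.question = nn.Linear(hidden, 1)   # sentence-level logit
        self.emphasis = nn.Linear(hidden, 1)   # frame-level stress score

    def forward(self, states):                 # states: (frames, hidden)
        is_question = torch.sigmoid(self.question(states.mean(dim=0)))
        stress = torch.sigmoid(self.emphasis(states)).squeeze(-1)
        return is_question, stress

head = ProsodyHead()
states = torch.randn(300, 512)   # hidden states from the decoder sketch above
is_question, stress = head(states)
# A rising pitch contour could then be applied to the synthetic voice
# when is_question exceeds 0.5, with pitch and loudness boosted on
# frames whose stress scores are high.
```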

“This is a bit of a paradigm shift in the sense that it can lead to a real-life tool,” says Silvia Marchesotti, a neuroengineer at the University of Geneva in Switzerland. The system’s features “would be crucial for adoption for daily use for the patients in the future”. (Nature)