A paralyzed woman was able to hear, spoken aloud, sentences she had imagined saying less than three seconds earlier: a speed that until now was unthinkable.
A brain system that transforms thoughts into words has allowed a paralyzed woman to hear a sentence she had imagined reading less than three seconds before.
Compared with previous brain-computer interfaces (BCIs), the system, which integrates an AI model that helps decode the sentences the patient formulates, overcomes an awkward period of "latency" between the intention to say something and the translation of that intention into words, delivering the phrase to the listener much faster. The results were published in Nature Neuroscience.
The accident. The woman, Ann, lost the ability to speak after a brainstem stroke in 2005. In 2023 she underwent surgery to place a thin chip with 253 electrodes on the surface of her cerebral cortex. Although Ann can no longer move the muscles of her vocal apparatus, the system can detect the combined activity of thousands of neurons in the brain area responsible for speech.
An unnatural wait. The ability to put what we think into words in real time is something we almost take for granted. Only in rare moments, such as when we hear our own voice come out of a speaker or a friend's phone with a slight delay, do we realize how confusing a lag between imagined and spoken speech can be. In recent years, brain-computer interfaces for spoken language have made great strides, allowing people whose ability to shape sounds has been compromised by disease or injury to express themselves again.
Obvious limits. Until now, however, most existing implants required the mental reading of whole blocks of text to be completed before the software could translate them into words. In addition, training these systems has long depended on the clear execution of vocal movements by people who have great difficulty performing them. These and other practical obstacles stretch the time that passes between a thought, its decoding, and its expression in words.
Thinking is enough. A team of engineers, computer scientists, and neuroscientists from the University of California at Berkeley and the University of California, San Francisco tried to overcome these difficulties by training a deep learning neural network on the activity of the sensorimotor cortex of Ann, now 47 years old, while she silently pronounced a hundred sentences drawn from a vocabulary of just over a thousand words, as well as 50 simpler phrases that appeared on a screen.
This system did not require the woman to actively attempt to pronounce the words, only to think them, and it was much more effective at decoding: on average, it translated twice as many words per minute as previous methods.
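To give a concrete, purely hypothetical idea of what such a decoder might look like, here is a minimal Python/PyTorch sketch: a small recurrent network that turns 80-millisecond windows of activity from 253 electrodes into word probabilities. The electrode count, window length, and vocabulary size come from the article; the architecture, layer sizes, and names are illustrative assumptions, not the researchers' published model.

```python
# Illustrative sketch only (not the authors' code): a recurrent network that maps
# short windows of multi-electrode cortical activity to word logits.
import torch
import torch.nn as nn

NUM_ELECTRODES = 253   # electrodes on the cortical implant (from the article)
VOCAB_SIZE = 1024      # roughly the "just over a thousand words" vocabulary
WINDOW_MS = 80         # the decoder consumes one feature frame every 80 ms

class SpeechDecoder(nn.Module):
    """Hypothetical streaming decoder: electrode features in, word logits out."""
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.encoder = nn.GRU(NUM_ELECTRODES, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, VOCAB_SIZE)

    def forward(self, frames, state=None):
        # frames: (batch, time, NUM_ELECTRODES), one row per 80 ms window
        features, state = self.encoder(frames, state)
        return self.classifier(features), state  # logits per window + carried state

# Toy usage: decode 5 seconds of (random) activity, one 80 ms frame at a time.
model = SpeechDecoder()
state = None
for _ in range(5000 // WINDOW_MS):
    frame = torch.randn(1, 1, NUM_ELECTRODES)  # stand-in for real neural features
    logits, state = model(frame, state)        # carried state lets decoding stream
```

Carrying the recurrent state from one frame to the next is what would let such a model emit output continuously instead of waiting for a whole sentence, which is the key difference the article describes.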
Live, or almost. The brain-computer interface kept sampling the patient's neural signals every 80 milliseconds, allowing it to translate them into speech in an almost natural way, producing between 47 and 90 words per minute.
For comparison, a natural conversation flows at a rate of about 160 words per minute, while a conversation generated with an older BCI system has a rhythm closer to that of a WhatsApp exchange, in which you have to wait a few seconds while the other person is typing. The assistive communication system Ann currently uses takes over 20 seconds to turn a single thought phrase into words.
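Those rates are easy to put side by side with a little arithmetic; the short Python sketch below does just that, using an arbitrary 12-word example sentence and only the speeds quoted in the article.

```python
# Back-of-the-envelope comparison of the speeds quoted above. The 12-word
# sentence length is an arbitrary example; the rates come from the article.
def seconds_per_sentence(words_per_minute: float, sentence_words: int = 12) -> float:
    """Time to deliver a sentence at a given speaking rate."""
    return sentence_words / words_per_minute * 60

for label, wpm in [("natural conversation", 160),
                   ("new streaming BCI (upper bound)", 90),
                   ("new streaming BCI (lower bound)", 47)]:
    print(f"{label}: ~{seconds_per_sentence(wpm):.1f} s for a 12-word sentence")

# Ann's current assistive system: the article quotes over 20 seconds for a
# single phrase, slower than even the lower bound above.
```

At 47 to 90 words per minute, a 12-word sentence takes roughly 8 to 15 seconds, against about 4.5 seconds in natural speech and more than 20 seconds with the system Ann uses today.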
Even better. The scientists also made sure the synthetic voice resembled Ann's voice before the stroke, training the AI on a video of her wedding. In short, a significant advance over the past, even if, the researchers admit, there is still ample room for improvement: with a greater number of sensors and more precise decoding of the neural signal, these systems should gradually approach the rhythm of natural speech.