Computers are learning to decode the language of our minds

Back in 2012, when I was playing around with the idea of becoming a cyborg, our ability to decode the orchestra of electric signals generated by our brains was still rudimentary at best. Over the last few years, however, a series of astonishing breakthroughs has highlighted just how far science and technology have come in the quest to build a true brain-computer interface that allows us to interact with a machine using only our thoughts.

This new field, dubbed neuroprosthetics, is typically applied only to patients with a serious medical need, such as those who have been paralyzed and can no longer speak, even though their brains and thoughts remain completely intact. In 2021, a subject who had been limited to communicating just five words a minute using arduous head gestures learned to communicate through thought alone.

The new approach used electrodes implanted in his brain. The team studied brain activity as the patient tried to say certain words and sounds. The algorithm learned the patterns of neural activity, so that the patient could generate letters and words on a screen simply by thinking about the act of trying to say them out loud. His communication speed more than tripled, to around 15-18 words per minute.
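To make the decoding step concrete, here is a minimal sketch of the general idea: a classifier is trained on windows of neural activity recorded while the patient attempts particular words, then used to predict the intended word from new activity. The data, feature sizes, and model below are illustrative stand-ins, not the actual pipeline used in the study.

```python
# Illustrative sketch only: maps synthetic "neural activity" windows to attempted words.
# The real system used implanted electrodes and a far more sophisticated model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
words = ["hello", "water", "help", "yes", "no"]

# Pretend each attempted word produces a characteristic pattern across 64 electrode features.
prototypes = rng.normal(size=(len(words), 64))

def record_attempt(word_idx, noise=0.5):
    """Simulate one noisy neural-activity window for an attempted word."""
    return prototypes[word_idx] + rng.normal(scale=noise, size=64)

# Build a training set of simulated attempts.
X = np.array([record_attempt(i) for i in range(len(words)) for _ in range(40)])
y = np.array([i for i in range(len(words)) for _ in range(40)])

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Decode a new attempt: the patient "thinks about" saying a word, and we predict it.
new_window = record_attempt(word_idx=1)
print("Decoded word:", words[decoder.predict([new_window])[0]])
```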

It makes sense that a neural network would be a good approach here. Neural networks are exceptional at ingesting large amounts of data, training on it, adjusting their weights, and mastering a certain domain, whether that is translation between languages, the game of Go, or protein folding. In this case, the dataset was brain activity, and the algorithm learned to decode it.
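The "adjusting their weights" part is just iterative optimization. Here is a toy sketch of that loop on synthetic data, with a single linear layer; nothing about it is specific to the brain-decoding work, it only illustrates the ingest-data, measure-error, nudge-weights cycle.

```python
# Toy training loop: ingest data, compute error, nudge the weights, repeat.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                     # 200 samples of 10 "neural" features
true_w = rng.normal(size=10)
y = X @ true_w + rng.normal(scale=0.1, size=200)   # target signal to predict

w = np.zeros(10)                                   # the model's weights, initially uninformed
lr = 0.01
for step in range(500):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)               # gradient of mean squared error
    w -= lr * grad                                 # adjust the weights a little
print("error after training:", float(np.mean((X @ w - y) ** 2)))
```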

Fast forward two years, and a team of researchers and doctors using a similar approach was able to help a stroke patient communicate at 78 words per minute. A normal conversation runs at about 160 words per minute, so if progress continues at the same pace, and it may well accelerate, people may be able to communicate using their thoughts at the same pace as those who speak with their bodies.

The new research didn’t just improve on speed. In the original research, the patient was able to communicate using a vocabulary of around 50 words. In the most recent study, that increased to nearly 1,000 words. What’s more, the system could decode not just words but also facial expressions, allowing the patient to convey not just ideas but feelings through a virtual avatar.

According to the reporting, “Both studies used predictive language models to help guess words in sentences. The systems don’t just match words but are ‘figuring out new language patterns’ as they improve their recognition of participants’ neural activity, said Melanie Fried-Oken, an expert in speech-language assistive technology at Oregon Health & Science University, who consulted on the Stanford study.”
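The predictive language model works roughly like autocomplete: the neural decoder produces noisy word guesses, and a model of which words plausibly follow one another rescores them. Here is a toy sketch of that combination; the vocabulary, scores, and bigram probabilities are made up for illustration and are not from either study.

```python
# Toy example: combine a noisy neural decoder's word scores with a language-model prior.
decoder_scores = {"walker": 0.32, "water": 0.30, "waiter": 0.22, "later": 0.16}  # from neural activity
previous_word = "drink"

# A tiny bigram "language model": how likely each candidate is to follow "drink".
bigram_prob = {("drink", "water"): 0.6, ("drink", "walker"): 0.01,
               ("drink", "waiter"): 0.05, ("drink", "later"): 0.1}

def rescore(candidate):
    """Weight the decoder's confidence by how plausible the word is in context."""
    return decoder_scores[candidate] * bigram_prob.get((previous_word, candidate), 0.001)

print("decoder alone picks:", max(decoder_scores, key=decoder_scores.get))  # "walker"
print("with language model:", max(decoder_scores, key=rescore))             # "water"
```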

On tomorrow’s podcast, we’ll speak with Gasper Begus, an associate professor at UC Berkeley and director of the Berkeley Speech and Computation Lab, who works at the intersection of AI, language, and neuroscience. As he explains on the episode:

“On some levels, artificial neural networks are inspired by the brain. So what we do is we record people’s brain activity when they listen to language, basically getting a sum of electric activity in your skull. Then what we do is we take artificial neural networks and take the sum of their artificial activity. Not electric activity, but artificial activity when they’re listening to the same sounds. And we show that responses to the brain signal are similar. And we’re showing that for the first time that you don’t need any other steps, just raw signals.”
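In code, the comparison Begus describes boils down to correlating two signals: the summed electrical activity recorded from the skull while a person hears a sound, and the summed unit activations of a network processing the same sound. The sketch below uses placeholder signals and a random stand-in network rather than his lab's actual recordings or models; it only shows the shape of the comparison.

```python
# Sketch: compare a summed "brain" signal to a network's summed activations for the same sound.
import numpy as np

rng = np.random.default_rng(0)
sound = rng.normal(size=1000)                     # stand-in audio waveform

# Stand-in EEG: a smoothed, noisy response to the stimulus, summed over channels.
brain_signal = np.convolve(sound, np.ones(20) / 20, mode="same") + rng.normal(scale=0.3, size=1000)

# Stand-in network: one layer of random filters; sum its unit activations over units.
filters = rng.normal(size=(16, 20))               # 16 units, each a length-20 filter
activations = np.array([np.convolve(sound, f, mode="same") for f in filters])
network_signal = np.maximum(activations, 0).sum(axis=0)   # ReLU, then sum across units

# Similarity between the raw brain response and the raw network response.
r = np.corrcoef(brain_signal, network_signal)[0, 1]
print(f"correlation between brain and network responses: {r:.2f}")
```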

The neural nets, says Begus, don’t just learn. They seem to learn in a way that’s similar to us. “One thing that was kind of surprising to me was that the stages in which they acquire language is similar to what kids do.” The AI often makes the same phonetic mistakes a toddler does before progressing to more advanced speech.

Oh, and like any child, it liked to make up its own words as well. “We train them on a few words of English and they start producing novel words of English that they didn’t see or hear before,” said Begus. “Give them eight words and they’ll start producing new words of English. For example, we have a network saying ‘start’, although it only heard ‘suit’ and ‘dark’ and ‘water’ and a few other words. They’d never heard ‘start’ before, and yet, you know, ‘start’ is a perfectly good English word. And so they’re extremely innovative,” said Begus, before quickly adding, “and so are we!”

Source: https://stackoverflow.blog/2023/09/07/ai-brain-computer-interface-deep-implant-speech/
