A peer-reviewed paper published Monday in the journal Nature Neuroscience reports that scientists have developed a noninvasive AI system that converts a person's brain activity into a stream of text.
The technology, known as a semantic decoder, could eventually benefit patients who have lost the ability to communicate physically because of a stroke, paralysis, or a degenerative condition.
The system was created in part by researchers at the University of Texas at Austin using a transformer model, the same kind of architecture that powers Google's Bard and OpenAI's ChatGPT chatbots.
Study participants trained the decoder by listening to many hours of podcasts while inside an fMRI scanner, a sizable machine that measures brain activity. The system requires no surgical implants.
Once trained, the AI system can produce a stream of text while the participant listens to, or imagines telling, a new story. The resulting text is not an exact transcript; the researchers designed it to capture the gist of what is said or thought. In about half of the cases, the generated text closely or precisely matches the intended meaning of the participant's original words.
Because the decoder depends on an fMRI scanner, it can, as of Monday, be used only in a laboratory setting. But the researchers believe it could eventually work with more portable brain-imaging systems.