Meta, formerly known as Facebook, has announced that it is building a new artificial intelligence (AI) model that can process speech and text the way human brains do.
This research initiative is a step for Meta to better understand how humans process speech and text in their brains.
Meta is collaborating with neuroimaging center Neurospin (CEA) and Inria to carry out this research.
According to the company, it is comparing how AI language models and the brain respond to the same spoken or written sentences to guide the creation of AI that can process voice and text as efficiently as humans.
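The comparison the company describes, lining up a model's internal responses with brain responses to the same sentences, is often quantified with a similarity measure. The sketch below is a minimal illustration of that idea using a Pearson correlation; all activation values are fabricated placeholders, not Meta's actual data or method.

```python
# Minimal sketch: correlate a (hypothetical) model activation trace with a
# (hypothetical) brain-response trace for the same sentence. Higher correlation
# would suggest the model responds to the sentence more like the brain does.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Fabricated per-word values: one model-embedding dimension vs. one brain
# signal (e.g., a single fMRI voxel) across the words of a sentence.
model_activation = [0.1, 0.4, 0.35, 0.8, 0.6]
brain_response = [0.2, 0.5, 0.30, 0.9, 0.7]

similarity = pearson(model_activation, brain_response)
print(similarity)
```

In practice such analyses fit regression models from model activations to brain recordings across many voxels and time points, but the underlying question is the same: how well does one signal track the other?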
“Over the past two years, we’ve applied deep learning techniques to public neuroimaging data sets to analyze how the brain processes words and sentences,” Meta wrote in a blog post.
The post also noted that although AI has come a long way in recent years, it is still far from understanding language as efficiently as humans do. To date, the researchers have found that the language models that best predict the next word from context are also those that most closely mimic brain activity.
Meta’s team models brain activity from numerous scans in public data sets, recorded with functional magnetic resonance imaging (fMRI) and with magnetoencephalography (MEG), a technique that captures snapshots of brain activity on a millisecond-by-millisecond basis.
The company says these data sets are essential to meeting the scale of data that deep learning requires. In partnership with Inria, Meta compared several language models with the brain responses of 345 volunteers who listened to complex narratives while being scanned with fMRI.
Moreover, the researchers also discovered evidence of long-range predictions in the brain, an ability that continues to challenge language models.
“For example, consider the phrase ‘Once upon a …’ Most language models today would typically predict the next word, ‘time,’ but they’re still limited in their ability to anticipate complex ideas, plots, and narratives like people do,” Meta wrote.
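The next-word prediction objective Meta refers to can be illustrated with a toy sketch. Real language models use large neural networks trained on billions of words; the bigram counter below is only a hand-rolled illustration of the idea of anticipating the next word from context, built on a made-up two-sentence corpus.

```python
# Toy next-word predictor: count which word most often follows each word.
# This is an illustrative stand-in for the neural language models Meta
# describes, not their actual approach.
from collections import Counter, defaultdict


def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows


def predict_next(follows, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]


# Tiny fabricated corpus for demonstration purposes only.
corpus = [
    "once upon a time there was a story",
    "once upon a time in a faraway land",
]
model = train_bigrams(corpus)
print(predict_next(model, "a"))  # -> "time" on this toy corpus
```

A model like this captures only one word of context, which is exactly the limitation the quote points at: it can finish “Once upon a …” but has no mechanism for anticipating a plot or a narrative arc many words ahead.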