
Meta announces new AI project to build AI language models akin to the human brain

Meta, the parent company of Facebook, Instagram, and WhatsApp, has unveiled a new AI research initiative aimed at better understanding how the human brain processes language. In partnership with NeuroSpin (CEA) and INRIA, Meta AI is examining how AI language models and the brain respond to the same spoken or written words.

Meta AI has spent the last two years analyzing how the brain interprets words and phrases, applying deep learning algorithms to public neuroimaging data sets. The data sets were gathered and shared by several academic organizations, including the Max Planck Institute for Psycholinguistics and Princeton University. Each institution acquired and shared them with the volunteers’ informed consent, in compliance with the legal regulations established by their respective ethics committees.

A language model is an artificial intelligence (AI) model that has been trained to predict the next word, or group of words, based on the preceding words or phrases. It is part of the technology that predicts the next word you want to type on your phone, letting you finish your message sooner. Early language models used classic statistical approaches, such as N-grams and Hidden Markov Models (HMMs), and followed rule-based linguistic criteria to learn the probability distribution of words. In contrast, today’s language models are neural networks trained on vast amounts of textual data, and they use natural language processing (NLP) to anticipate the next word; of all AI models, they most closely resemble the human brain.
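
To make the N-gram approach mentioned above concrete, here is a minimal sketch of a bigram model in Python: it estimates the probability of the next word given the current word from raw counts over a toy corpus. The corpus and all names are invented for illustration and are not from Meta’s research.

```python
# A toy bigram language model: estimate P(next word | current word)
# from raw counts, the simplest member of the n-gram family.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability.

    Assumes `word` appeared (before the final position) in the corpus.
    """
    followers = bigram_counts[word]
    best, count = followers.most_common(1)[0]
    return best, count / sum(followers.values())

print(predict_next("the"))  # ('cat', 0.5): 'cat' follows 'the' in 2 of 4 cases
```

A neural language model replaces these raw counts with learned parameters, but the training signal is the same: predict the next word from what came before.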

Despite their broad scope, these models still fall short of performing on par with the human brain. For instance, while even a young child can grasp that ‘orange’ can refer to both a fruit and a color, machines struggle to draw such a connection. Hence, researchers at Meta are on a mission to understand the workings of the human brain in order to develop better language models. The company believes that insights from this study can offer ideas and guidance in the pursuit of AI that processes speech and text as efficiently as people do.

According to Jean-Rémi King, a senior research scientist at Meta AI, spoken language is what makes humans unique, and understanding how the brain handles it remains an open problem and a work in progress. The fundamental question, according to King, is: “What makes humans so much more powerful or so much more efficient than these machines? We want to identify not just the similarities, but pinpoint the remaining differences.”

Using public neuroimaging datasets of participants’ brain activity captured with magnetic resonance imaging (MRI) and computed tomography (CT), the researchers modeled hundreds of brain scans, and also used a magnetoencephalography (MEG) scanner, which captures images of brain activity every millisecond. Working with INRIA, they compared a number of language models against the brain responses of 345 volunteers who listened to complex narratives while undergoing functional magnetic resonance imaging (fMRI).

Next, the AI systems were given the same narratives that the human participants had read or heard. The researchers then compared the two sets of data to identify where they overlapped and where they diverged. From these findings, Meta’s researchers inferred that the language models that most closely reflect brain activity are those best at predicting the next word from context, such as “on a dark and stormy night…” or “once upon a time…”. Self-supervised learning (SSL) in AI centers on prediction from partially visible inputs, and it might be crucial to how people acquire language.
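
As a rough illustration of this kind of model-to-brain comparison, the sketch below fits a linear “encoding model” that maps a language model’s word-level activations to recorded brain responses, then scores the per-voxel correlation on held-out words. The random arrays stand in for real fMRI data and model embeddings; the shapes and names are assumptions for illustration, not Meta’s actual pipeline.

```python
# Hedged sketch: map language-model activations to brain responses with
# ridge regression, then measure how well the mapping generalizes.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 1000, 256, 50

# Stand-ins for real data: one activation vector and one brain response per word.
model_activations = rng.normal(size=(n_words, n_features))
brain_responses = rng.normal(size=(n_words, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    model_activations, brain_responses, test_size=0.2, random_state=0
)

encoder = Ridge(alpha=1.0).fit(X_train, y_train)
predicted = encoder.predict(X_test)

# Correlate predicted and measured responses per voxel: higher correlation
# means the model's representations better match that brain region.
scores = [np.corrcoef(predicted[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxel correlation: {np.mean(scores):.3f}")  # ~0 here: data is random
```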

Meta AI’s results revealed that particular brain areas, such as the prefrontal and parietal cortices (situated at the front and middle of the brain, respectively), were better represented by language models that predict words far into the future. In other words, specific parts of the brain anticipate words and ideas well in advance, whereas most language models today are trained to predict only the very next word. Unlocking this long-term predictive power might aid the advancement of contemporary AI and language models.
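
The contrast between next-word and far-off prediction can be seen in the training targets alone, as in this illustrative sketch (not Meta’s training code):

```python
# Standard language models are trained on targets shifted one position
# ahead (next-word prediction); a longer-range objective shifts targets
# `horizon` positions ahead instead.
tokens = ["once", "upon", "a", "time", "there", "lived", "a", "princess"]

def make_training_pairs(tokens, horizon=1):
    """Pair each token with the token `horizon` steps in the future."""
    return [(tokens[i], tokens[i + horizon]) for i in range(len(tokens) - horizon)]

print(make_training_pairs(tokens, horizon=1))  # next-word targets
print(make_training_pairs(tokens, horizon=3))  # far-off targets, closer to the
                                               # long-range anticipation observed
                                               # in the brain
```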

The researchers also discovered that the human brain can learn from just a few million phrases, and that it continually adapts and retains knowledge across its trillions of synapses. Conversely, AI language models carry up to 175 billion artificial synapses (parameters) and must be trained on billions of phrases.


Meta AI researchers and NeuroSpin are presently building an original neuroimaging dataset to refine this research. It will be open-sourced, along with code, deep learning models, and academic papers, to facilitate further AI and neuroscience research. The intent, according to King, is to create a set of tools that peers in academia and other fields can use and benefit from.

He added that by researching this long-term predictive capacity in more depth, researchers can significantly enhance contemporary AI language models. The Meta team believes that incorporating long-range forecasts into their algorithms can make them behave more like the brain.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she can be found binge-watching Netflix or F1 races!
