Sam Altman, CEO of OpenAI, tweeted an open invitation for Meta AI researchers to join OpenAI, citing Meta AI's aversion to artificial general intelligence (AGI). He suggested that the certainty of a 'No' on AGI from Meta's chief AI scientist explains the last five years of Meta's AI lab.
AI research at Meta focuses on self-supervised learning for building intelligent models that can acquire new skills and perform multiple tasks without labeled data. OpenAI, on the other hand, focuses on artificial general intelligence (AGI), aiming to develop highly autonomous systems that outperform humans and benefit humanity.
AGI refers to a system capable of understanding the world, with the same capacity humans have to learn and carry out a wide range of tasks. In theory, an AGI could carry out any task a human can by combining flexible, human-like reasoning with computational advantages.
OpenAI has been working on systems like GPT-3 that aim to do what humans can, using deep learning algorithms built on neural networks to make sense of what they see or hear. The goal is to create an AGI that can speak, listen, write, read, and learn independently. OpenAI has been in the headlines for developing GPT-3. However, Turing Award winner and Chief AI Scientist at Meta, Yann LeCun, trashed OpenAI's GPT-3 model and its capabilities in a Facebook post.
According to LeCun, "… trying to build intelligent machines by scaling up language models is like building high-altitude airplanes to go to the moon." He believes that one can beat altitude records with GPT-3, but going to the moon will require an altogether different approach. Yann LeCun also responded to Sam Altman's tweet with a similar analogy discrediting OpenAI's AGI approach, stating, "…But if one's goal is to get to orbit, one must work on things like cryogenic tanks, turbopumps, etc. Not as flashy."
Unlike OpenAI, Meta's AI lab aims to match human intelligence rather than to pursue AGI. Jerome Pesenti, vice president of AI at Meta, has publicly stated that the concept of AGI is not exciting and does not mean much. He also says that deep learning and current AI have limitations and are far from achieving human intelligence. Meta's AI lab believes that designing non-reproducible systems is a wasted investment that brings little value to the field.
Today, Meta has become synonymous with self-supervision and believes this is the right path to achieving human-level intelligence in the long run. Meta AI recently introduced data2vec, the first high-performance self-supervised algorithm that learns in the same way across multiple modalities (vision, speech, and text) without labeled data. Because Meta AI focuses on self-supervised learning rather than AGI, the model learns to predict its own representations of the input, whether that input is text, speech, or images, without any labels.
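To make the idea concrete, here is a minimal NumPy sketch of masked self-supervised prediction in the spirit of data2vec: a "teacher" encodes the full input into target representations, and a "student" seeing a masked copy is scored on how well it predicts those targets at the masked positions. This is an illustrative toy, not the actual data2vec implementation; the encoder, shapes, and masking scheme are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Toy "encoder": one linear layer plus tanh, standing in for a deep network.
    return np.tanh(x @ W)

# Unlabeled input: 4 "timesteps" of 8-dim features. These could be audio
# frames, image patches, or token embeddings -- the objective is the same.
x = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 16)) * 0.1

# Teacher targets: representations of the FULL, unmasked input.
targets = encode(x, W)

# Student input: the same sequence with some timesteps masked (zeroed here).
mask = np.array([True, False, True, False])  # True = masked position
x_masked = x.copy()
x_masked[mask] = 0.0

pred = encode(x_masked, W)

# Self-supervised loss: regress the student's predictions onto the teacher's
# representations at the masked positions only. No human labels anywhere.
loss = float(np.mean((pred[mask] - targets[mask]) ** 2))
print(loss >= 0.0)
```

Training would minimize this loss over many samples; the key point the sketch shows is that the prediction target is a learned representation of the data itself rather than a modality-specific label such as a word, phoneme, or image class.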
Ilya Sutskever, chief scientist at OpenAI, tweeted on February 10 that "it may be that today's large neural networks are slightly conscious." This is the first time Sutskever has claimed that consciousness in machines has already arrived, even if he was speaking facetiously. OpenAI has become one of the leading artificial intelligence research labs in the world and has consistently produced headline-grabbing research on large AI models. In his tweet, Sam Altman, CEO of OpenAI, said that Meta's approach to achieving human-level intelligence isn't exactly the right way. However, just because Meta AI is passionate about designing reproducible AI systems through self-supervised learning, which differs from OpenAI's approach, does not mean that its method is wrong.