Yann LeCun, VP and Chief AI Scientist at Facebook, criticised the hype around GPT-3 in a social media post. Citing a recent evaluation of GPT-3 by doctors and machine learning practitioners at Nabla, LeCun said that people have unrealistic expectations of GPT-3 and other large-scale NLP models.
“GPT-3 doesn’t have any knowledge of how the world actually works. It only appears to have some level of background knowledge, to the extent that this knowledge is present in the statistics of text. But this knowledge is very shallow and disconnected from the underlying reality,” wrote LeCun in a Facebook post. The post went viral and drew the attention of practitioners on Hacker News, where it trended at the top.
Although this was not the first time GPT-3 was placed under the spotlight for being inaccurate, Nabla’s article demonstrated why GPT-3 should be strictly avoided in the healthcare industry. For one, the model suggested that committing suicide was a good idea (figure 1).
Such examples clearly demonstrate that GPT-3 is far from ready to be integrated into our day-to-day tasks, especially in highly regulated sectors. Yet despite numerous critiques exposing the weaknesses of GPT-3, the hype has not taken a hit. This is mostly because many have glorified the largest language model’s ability to carry out creative tasks, such as writing an article or code from just a few prompts.
The cases where GPT-3 outperforms humans stem from the sheer amount of data it was trained on. In reality, the model has no cognizance of what it is saying and, therefore, does not further the development of artificial intelligence. Achieving real intelligence requires a different approach, as humans do not rely on colossal amounts of data to make decisions.
“But trying to build intelligent machines by scaling up language models is like building high-altitude airplanes to go to the moon. You might beat altitude records, but going to the moon will require a completely different approach,” LeCun wrote.