Meta introduced a new large language model, ‘Galactica’, designed to generate original academic papers from simple prompts. But as quickly as it was introduced, several researchers criticized it as “dangerous,” after which Meta took down the demo.
Earlier, visitors to the website could see an option to “Generate” content.
However, as more and more people reported that it was full of “statistical nonsense” and generating incorrect content, Meta withdrew the option to experiment with it.
Grady Booch, a software engineer, described Galactica as “little more than statistical nonsense at scale.” He said it was amusing but, in his humble opinion, unethical.
Gary Marcus tweeted his concerns after reviewing Galactica, saying that it had “jumped the AI shark.”
He added that Galactica “prevaricates” a lot, meaning it evades the exact truth. Students, he said, would love to use such a model to intimidate their teachers, while anyone aware of the risks should be terrified.
Michael Black, director of the Max Planck Institute for Intelligent Systems, also shared his concerns on Twitter. He called the work an interesting advance, but neither useful nor safe for scientific work. He used the word “dangerous,” explaining that Galactica outputs grammatically coherent text with no certainty that it is unbiased or scientifically correct. If such results slip into scientific submissions, they could distort the scientific record.
He feared that models like Galactica could usher in an era of scientific deepfakes. He said, “Alldieck and Pumarola will get citations for papers they didn’t write. These papers will then be cited by others in real papers. What a mess this will be.”
Keenan Crane, Associate Professor of Computer Science and Robotics at CMU, also expressed distrust in Galactica. He said that no deep language model can be trusted completely, since such models sound intuitive and authoritative, imitating reliability without guaranteeing it.