Meta AI introduces a new language model called ‘Galactica,’ which generates original scientific and academic papers from simple text inputs. Galactica can also answer direct questions, explain its answers, and provide citations for the sources it used.
Meta AI has been actively working on numerous language models, such as OPT-175B and PEER, and studying how the human brain processes language. With Galactica, researchers aim to summarize academic literature, solve math problems, generate Wiki articles, and accomplish much more.
Galactica was trained on a massive corpus of scientific and academic papers, knowledge bases, and reference material. After collecting all relevant information, Galactica compresses it into a 120-billion-parameter model capable of running on a single NVIDIA A100 GPU. The model is a competitive alternative to GPT-3, another language model, by OpenAI, that can write an academic thesis by itself in about two hours.
Galactica explained in its own words, “Galactica models are trained on a large corpus comprising more than 360 million in-context citations and over 50 million unique references normalized across a diverse set of sources.”
Simply type in a text input and click “Generate.” For example, when given the input “breadth-first search algorithm,” Galactica outputs a brief paper. To expand the content, Galactica offers a “Generate More” option.
Galactica’s scientific knowledge is derived from the effort that went into building the dataset it was trained on. To ensure that the model learns from various modalities, including natural language, molecular sequences, code, etc., additional special tokens were created to help it identify each one.
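The idea can be sketched as follows. The `[START_SMILES]`/`[END_SMILES]` token names appear in the Galactica paper for molecular sequences; the `wrap` helper itself is illustrative, not part of any released API:

```python
def wrap(modality: str, content: str) -> str:
    """Surround content with modality-specific special tokens,
    so the model can tell where a non-text modality begins and ends."""
    return f"[START_{modality}]{content}[END_{modality}]"

# A SMILES string (molecular sequence) embedded in natural-language text:
prompt = "Aspirin has the structure " + wrap("SMILES", "CC(=O)OC1=CC=CC=C1C(=O)O")
print(prompt)
# Aspirin has the structure [START_SMILES]CC(=O)OC1=CC=CC=C1C(=O)O[END_SMILES]
```

During training, such markers let a single tokenizer and model handle text, molecules, and code in one stream while still treating each modality distinctly.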
Galactica is the result of a collaboration between researchers at Meta AI and Papers with Code, who developed the model completely open-source and published the accompanying research.