www.analyticsdrift.com
Produced By: Sahil Pawar
By open source, we mean that the model's original code is publicly available to view and modify, whether on GitHub or elsewhere.
Google created the conversational LLM LaMDA (Language Model for Dialogue Applications) as the core technology for dialogue-driven apps that produce human-sounding language.
Researchers at Google AI developed the well-known language model BERT (Bidirectional Encoder Representations from Transformers) in 2018, and it considerably impacted the NLP field.
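The "bidirectional" part of BERT's name refers to its masked-language-model pre-training: random tokens are hidden, and the model must predict them from context on both sides. The toy sketch below builds such a training example; the masking rate and `[MASK]` token mirror BERT's recipe in spirit, but the function is illustrative, not BERT's actual preprocessing code.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, rate=0.15, seed=0):
    """Hide ~rate of the tokens; return the masked sequence and the
    positions the model must recover using context from BOTH sides."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            masked.append(MASK)
            targets[i] = tok  # ground-truth token to predict
        else:
            masked.append(tok)
    return masked, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(tokens, rate=0.3)
```

Because the hidden word can sit anywhere in the sentence, the model cannot rely on a left-to-right reading order alone, which is exactly what distinguishes BERT from earlier autoregressive LMs.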
LLaMA (Large Language Model Meta AI) is a large language model that Meta AI announced in February 2023. Its latest version, LLaMA 2, was released on July 19, 2023.
Orca, developed by Microsoft, has 13 billion parameters. It aims to improve on other open-source models by imitating the step-by-step reasoning processes demonstrated by larger LLMs.
BLOOM (BigScience Large Open-science Open-access Multilingual Language Model) is a transformer-based large language model developed by the BigScience research workshop.
PaLM 2 was pre-trained on parallel multilingual text and on a much larger corpus of different languages than its predecessor, PaLM. This makes PaLM 2 excel at multilingual tasks.
StableLM is a series of open-source language models developed by Stability AI, the company behind the image generator Stable Diffusion. It is trained on a dataset built on 'The Pile'.
Dolly, an instruction-following LLM, was trained on the Databricks machine-learning platform. It was fine-tuned from Pythia-12b using roughly 15k instruction/response records.
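Dolly's fine-tuning data consists of short human-written instruction/response pairs. The sketch below shows what one such record could look like; the field names follow the schema of the published databricks-dolly-15k dataset (`instruction`, `context`, `response`, `category`), though the helper functions themselves are illustrative, not Databricks code.

```python
import json

# Fields in the databricks-dolly-15k schema (an assumption based on the
# published dataset card, not taken from the article above).
REQUIRED_FIELDS = ("instruction", "context", "response", "category")

def make_record(instruction, response, context="", category="open_qa"):
    """Build one instruction-tuning record in the dolly-15k style."""
    return {"instruction": instruction, "context": context,
            "response": response, "category": category}

def is_valid(record):
    """Check that a record carries every required field."""
    return all(field in record for field in REQUIRED_FIELDS)

record = make_record(
    "What is Dolly?",
    "Dolly is an instruction-following LLM trained by Databricks.",
)
line = json.dumps(record)  # such datasets are commonly stored as JSON lines
```

Fine-tuning a base model like Pythia-12b on a few thousand records of this shape is what turns a plain next-token predictor into a model that follows commands.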
The Cerebras-GPT family was released to facilitate research into LLM scaling laws using open architectures and datasets, and to demonstrate the scalability of training LLMs on the Cerebras software and hardware stack.
Galactica was Meta's LLM designed specifically for scientists, trained on a large collection of academic material. Meta took its public demo down just three days into its release in November 2022.
A language model called XLNet was released in 2019 by Google AI researchers. It overcomes drawbacks of conventional pre-training methods, such as strictly left-to-right, autoregressive objectives, by training over permutations of the token order.
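XLNet's core idea is permutation language modeling: instead of always predicting tokens left to right, it samples a random factorization order and predicts each token from the tokens that precede it in that order. A minimal sketch of deriving the visible context for each position from one sampled permutation (illustrative only, not XLNet's implementation):

```python
import random

def permutation_contexts(tokens, seed=0):
    """Sample one factorization order and map each position to the set
    of positions visible when predicting it (those earlier in the order)."""
    rng = random.Random(seed)
    order = list(range(len(tokens)))
    rng.shuffle(order)  # a random factorization order, not left-to-right
    contexts = {}
    for step, pos in enumerate(order):
        contexts[pos] = set(order[:step])  # context for predicting tokens[pos]
    return order, contexts

order, contexts = permutation_contexts(["New", "York", "is", "a", "city"])
```

Averaged over many sampled orders, every token is eventually predicted from context on both sides, which gives an autoregressive model the bidirectional awareness that fixed left-to-right training lacks.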
Designed by: Prathamesh