Stability AI, the company behind the AI-powered Stable Diffusion image generator, has released a group of open-source large language models (LLMs) called StableLM. In a post published on Wednesday, the company announced that the models are now available on GitHub for developers to use and adapt.
Like its rival ChatGPT, StableLM is designed to generate text and code efficiently. It was trained on an open-source dataset called the Pile, which draws on sources including Wikipedia, Stack Exchange, and PubMed.
According to Stability AI, StableLM models currently range from 3 billion to 7 billion parameters, with models of 15 billion to 65 billion parameters to follow.
StableLM builds on the open-source language models Stability AI has already developed in partnership with the nonprofit EleutherAI, and it advances the company's goal of making AI tools more accessible, as it did with Stable Diffusion.
A fine-tuned conversational model from StableLM is available to test in a demo on Hugging Face. Although the datasets Stability AI uses should steer the base language models "into safer distributions of text," the company warns that "not all biases and toxicity can be mitigated through fine-tuning."