Mistral released its latest model, Mixtral 8x7B, last week. Named for its “mixture of experts” architecture, the model routes each token through a small subset of specialized expert networks inside each layer, rather than activating one monolithic network for every input.
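To make the routing idea concrete, here is a minimal toy sketch of top-2 expert routing in NumPy. It is an illustration of the general mixture-of-experts technique, not Mixtral's actual implementation: the function names, shapes, and the use of plain linear layers as “experts” are all assumptions for demonstration.

```python
import numpy as np

def top2_moe_layer(x, gate_w, expert_ws):
    """Toy sparse mixture-of-experts layer (illustrative, not Mixtral's code).

    x:         (tokens, d) input activations
    gate_w:    (d, n_experts) router weights
    expert_ws: list of (d, d) matrices standing in for expert networks
    """
    logits = x @ gate_w                        # router score per token, per expert
    top2 = np.argsort(logits, axis=1)[:, -2:]  # indices of the 2 best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        idx = top2[t]
        # softmax over only the two selected experts' scores
        w = np.exp(logits[t, idx] - logits[t, idx].max())
        w /= w.sum()
        # weighted mix of the chosen experts' outputs; other experts stay idle
        out[t] = sum(wi * (x[t] @ expert_ws[i]) for wi, i in zip(w, idx))
    return out
```

The point of the sparsity is efficiency: although many experts exist, only two run per token, so compute per token stays close to that of a much smaller dense model.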
Surprisingly, Mistral first released it as a bare torrent link, with no accompanying explanation, blog post, or demo video showcasing its capabilities.
Mistral later published a blog post that delved deeper into the model. They showcased benchmarks where Mixtral 8x7B matched or even surpassed the performance of OpenAI’s GPT-3.5 and Meta’s Llama 2.
Mistral acknowledged CoreWeave and Scaleway for technical support during training, and confirmed that Mixtral 8x7B is available for commercial use under the Apache 2.0 license.
Ethan Mollick, an AI influencer and professor at the Wharton School of the University of Pennsylvania, pointed out on X that Mixtral 8x7B appears to lack “safety guardrails.” Users dissatisfied with OpenAI’s stricter content policies now have access to a model of similar performance that can generate content those policies would block. That same absence of safety measures, however, could pose a challenge for policymakers and regulators.