To simplify AI compute orchestration and make MLOps platforms more manageable, Run:ai and Weights & Biases have partnered with NVIDIA. The three-way collaboration will let data scientists use Weights & Biases for execution while Run:ai orchestrates the workloads on NVIDIA GPUs. Before the partnership, firms that wished to use Run:ai and Weights & Biases together had to integrate the two tools manually.
Omri Geller, CEO and co-founder of Run:ai, explained that Run:ai was engineered as a plug-in for running machine learning on Kubernetes. It virtualizes NVIDIA GPU resources and fractions them so that multiple containers can share the same GPU.
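As a rough illustration of what fractional GPU sharing on Kubernetes can look like, the sketch below shows a pod requesting half a GPU. The annotation key, scheduler name, and container image tag are assumptions for illustration only, not confirmed details of Run:ai's product; consult Run:ai's documentation for the actual interface.

```yaml
# Hypothetical sketch: a Kubernetes pod requesting a fraction of a GPU
# through a Run:ai-style scheduler. Annotation key and scheduler name
# are assumed for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: train-job
  annotations:
    gpu-fraction: "0.5"        # assumed annotation: request half of one GPU
spec:
  schedulerName: runai-scheduler   # assumed: hand scheduling to Run:ai
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:23.01-py3   # example NGC image; tag may vary
      command: ["python", "train.py"]
```

The idea is that the scheduler, rather than the container, enforces the fractional allocation, so several such pods can be packed onto a single physical GPU.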
Scott McClellan, senior director of product management at NVIDIA, said, “Our strategy is to partner fairly and evenly with the overarching goal of making sure that AI becomes ubiquitous.” He went on to say that the two vendors provide complementary technologies that can now plug into a single NVIDIA AI platform for users.
McClellan added, “The point in time when a data science or AI project tries to go from experimentation into production, that is sometimes a little bit like the Bermuda Triangle where a lot of projects die.” He hopes the partnership will help users better develop and operationalize machine learning workflows.
Seann Gardiner, VP of business development at Weights & Biases, commented that the partnership would let users combine Weights & Biases’ training automation with Run:ai’s orchestration.