NVIDIA has unveiled NVIDIA AI Workbench, a unified, user-friendly toolkit that lets developers quickly build, test, and customize pretrained generative AI models on a workstation or PC and then scale them to virtually any data center, public cloud, or NVIDIA DGX Cloud.
AI Workbench removes much of the complexity of starting an enterprise AI project. Working from a streamlined interface on a local system, developers can pull models from popular sources such as Hugging Face, GitHub, and NVIDIA NGC, customize them with their own data, and then easily share them across platforms.
Manuvir Das, vice president of enterprise computing at NVIDIA, said: “Enterprises around the world are racing to find the right infrastructure and build generative AI models and applications. NVIDIA AI Workbench offers a streamlined path for cross-organizational teams to develop the AI-based applications that are increasingly crucial in modern business.”
Hundreds of thousands of pretrained models are now available, but customizing one with open-source tools can mean hunting through numerous online repositories for the right framework, tools, and containers, and having the right skills to adapt the model to a particular use case.
NVIDIA AI Workbench lets developers quickly customize and run generative AI, bringing the essential enterprise-grade models, frameworks, software development kits, and libraries from open-source repositories and the NVIDIA AI platform together in a single developer toolkit.
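The customization step AI Workbench streamlines looks roughly like the sketch below: pulling a pretrained checkpoint from a hub such as Hugging Face and fine-tuning it on local data before the model is moved to a larger system. This is a minimal illustration using the open-source Hugging Face libraries, not the Workbench interface itself; the model name and the CSV file are hypothetical placeholders.

```python
# Minimal sketch: fetch a pretrained model from Hugging Face and fine-tune it
# on local data. Checkpoint name and data file are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # any Hugging Face hub checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Proprietary data stays on the local workstation; assume a CSV with "text" and "label" columns.
dataset = load_dataset("csv", data_files="my_enterprise_data.csv")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetune-out",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("finetune-out/final")  # artifact ready to scale out to a data center or DGX Cloud
```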
Leading AI infrastructure providers, including Dell Technologies, Hewlett Packard Enterprise, HP, Lambda, Lenovo, and Supermicro, are embracing AI Workbench for its ability to augment their latest lineups of multi-GPU-capable desktop workstations, high-end mobile workstations, and virtual workstations.