Nvidia announced that its graphics processing unit (GPU)-based systems train artificial intelligence models 3 to 5 times faster than last year, according to its latest MLPerf benchmark results.
The MLPerf benchmark suite is backed by Alibaba, Google, Facebook AI, Nvidia, and Intel and is managed by the MLCommons Association, which keeps the tests transparent. MLPerf gives buyers an independent source of information before they purchase a product. The benchmarks are built around AI workloads and scenarios such as natural-language processing, computer vision, recommendation systems, and reinforcement learning. The training benchmarks measure the time it takes to train AI models.
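In other words, MLPerf training results are reported as the time needed to reach a fixed quality target, not as raw throughput. The sketch below illustrates that style of measurement with hypothetical train_one_epoch and evaluate placeholders and an assumed quality threshold; it is not MLPerf's actual reference harness.

```python
# Minimal sketch of a "time to train" measurement, the metric MLPerf's
# training benchmarks report. Model, data, and quality target here are
# hypothetical stand-ins for a real reference implementation.
import random
import time

TARGET_QUALITY = 0.75  # assumed quality threshold, e.g. target accuracy


def train_one_epoch(epoch: int) -> None:
    """Placeholder for one pass over the training data."""
    time.sleep(0.01)  # stands in for real GPU work


def evaluate(epoch: int) -> float:
    """Placeholder validation metric that improves as training proceeds."""
    return min(0.9, 0.5 + 0.05 * epoch + random.uniform(0.0, 0.02))


start = time.perf_counter()
epoch = 0
quality = 0.0
# Train until the model reaches the target quality; the elapsed wall-clock
# time is the benchmark score (lower is better).
while quality < TARGET_QUALITY:
    epoch += 1
    train_one_epoch(epoch)
    quality = evaluate(epoch)

elapsed = time.perf_counter() - start
print(f"Reached quality {quality:.3f} after {epoch} epochs in {elapsed:.2f}s")
```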
With its A100 GPU, NVIDIA set new performance records by posting the fastest training times across all eight benchmarks in the commercially available submissions category.
Nvidia ran the tests on Selene, the world's fastest AI supercomputer, built on the Nvidia DGX SuperPOD architecture. While scale is the most critical factor in AI training, Nvidia also delivered strong results in chip-to-chip comparisons. Overall, the results show that performance has risen 6.5x over the past 2.5 years and 3 to 5 times over last year's submissions.
The MLPerf results reflect the performance of a range of NVIDIA-based AI platforms, including numerous new systems that span from entry-level edge servers to AI supercomputers with thousands of GPUs. "Our ecosystem offers customers choices in a wide range of deployment models — from instances that are rentable by the minute to on-prem servers and managed services — providing the most value per dollar in the industry," said Nvidia.
All the software used is available from the MLPerf repository, so anyone can reproduce the benchmark results. Nvidia will continue to add this code to the deep learning frameworks and containers available on NGC, the company's software hub for GPU applications.