NVIDIA recently moved its GPUDirect Storage technology to general availability after a year of beta testing, aiming to accelerate artificial intelligence workloads in the high-performance computing industry.
NVIDIA’s Magnum IO GPUDirect Storage driver lets users bypass the server CPU and exchange data directly between storage and high-performance GPU memory.
NVIDIA officials said the new GPUDirect Storage reduces CPU utilization by up to a factor of three, freeing the CPU to focus on compute-intensive applications rather than managing data transfers.
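The saving comes from eliminating the intermediate "bounce buffer" in host memory: in a conventional read, data is first copied from storage into CPU RAM and then copied again into GPU memory, while GPUDirect Storage performs a single DMA transfer straight into the GPU. The sketch below is purely conceptual, not the real cuFile driver API; the function names and copy counting are illustrative assumptions.

```python
# Conceptual illustration of the two data paths. The real GPUDirect Storage
# implementation lives in NVIDIA's cuFile driver; nothing here touches a GPU.

def traditional_read(storage: bytes) -> tuple[bytes, int]:
    """Storage -> CPU bounce buffer -> GPU memory (two copies, CPU involved)."""
    cpu_bounce_buffer = bytes(storage)      # copy 1: storage into host RAM
    gpu_memory = bytes(cpu_bounce_buffer)   # copy 2: host RAM into GPU memory
    return gpu_memory, 2

def gpudirect_read(storage: bytes) -> tuple[bytes, int]:
    """Storage -> GPU memory via direct DMA (one copy, CPU bypassed)."""
    gpu_memory = bytes(storage)             # single direct transfer
    return gpu_memory, 1

data = b"training-batch"
trad, trad_copies = traditional_read(data)
direct, direct_copies = gpudirect_read(data)
assert trad == direct                  # the same payload arrives either way
print(trad_copies, direct_copies)      # 2 1
```

The payload is identical in both cases; only the number of copies, and therefore the CPU cycles and host-memory bandwidth consumed, differs.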
The company announced the integration of its Magnum IO GPUDirect Storage software with the HGX AI supercomputing platform, alongside the new NDR 400Gb/s InfiniBand networking and the A100 80GB PCIe GPU, at the ISC High Performance 2021 digital conference.
NVIDIA collaborated with numerous industry leaders, including IBM, Dell, WekaIO, and Micron, to develop the new technology. IBM recently announced that it has updated its storage architecture for the NVIDIA DGX Pod and is committed to supporting the next generation of DGX Pod with ESS 3200, which is expected to double data transfer speed to up to 77 GB per second by the end of this year.
Jensen Huang, CEO and Founder of NVIDIA, said, “The high performance computing revolution has started in academia and is rapidly extending across a broad range of industries.”
He added that key dynamics are driving exponential advances that have made high-performance computing a valuable tool for many industries.
Jeff Denworth, Co-founder and CMO of VAST Data, said that using GPUDirect Storage in projects like PyTorch has allowed large volumes of data to feed a standard Postgres database about 80 times faster than a conventional network-attached storage system could.
“We have been pleasantly surprised by the number of projects that we are being engaged on for this new technology,” he said.