Technology giant Meta, formerly known as Facebook, has selected Microsoft Azure as its preferred cloud provider to advance artificial intelligence (AI) innovation and deepen its collaboration with Microsoft on PyTorch.
As part of the partnership, Meta will use Azure's supercomputing capabilities to accelerate AI research and development for its Meta AI group.
According to Microsoft, Meta will use a dedicated Azure cluster of 5,400 GPUs running on the latest virtual machine (VM) series in Azure (NDm A100 v4 series, featuring NVIDIA A100 Tensor Core 80GB GPUs).
The two companies also plan to work together to scale PyTorch adoption on Azure and help developers move from experimentation to production faster.
“With Azure’s compute power and 1.6 TB/s of interconnect bandwidth per VM, we are able to accelerate our ever-growing training demands to better accommodate larger and more innovative AI models,” said Jerome Pesenti, Vice President of AI, Meta.
He added that Meta is also excited to work with Microsoft to extend this experience to the broader community of developers using PyTorch in their research and production workflows.
Under the plan, Microsoft will release new PyTorch development accelerators in the coming months to help developers rapidly deploy PyTorch-based solutions on Azure. Microsoft will also provide enterprise-grade support for PyTorch, allowing customers and partners to run PyTorch models in production both in the cloud and at the edge.
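The announcement does not include code, but the experimentation-to-production path it describes typically involves packaging a trained PyTorch model into a portable artifact. The sketch below is a minimal, hypothetical illustration of that step using TorchScript; the model, data, and file name are placeholders and are not part of Meta's or Microsoft's tooling.

```python
# Minimal sketch: train a tiny PyTorch model and export it with TorchScript,
# one common way to turn an experimental model into a deployable artifact.
# Everything here (model, data, filename) is illustrative only.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_features: int = 16, num_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 32),
            nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic data stands in for a real training set.
inputs = torch.randn(256, 16)
labels = torch.randint(0, 3, (256,))

model.train()
for _ in range(5):  # a few quick passes, purely for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

# Export a self-contained TorchScript artifact that can be served from
# cloud or edge runtimes without the original Python class definition.
model.eval()
scripted = torch.jit.script(model)
scripted.save("tiny_classifier.pt")
```

An artifact like this can then be registered and served by a managed platform such as Azure Machine Learning, which is the kind of workflow the planned development accelerators are meant to streamline.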
Recently, the open-source machine learning platform Hugging Face also partnered with Microsoft to launch Hugging Face Endpoints on Azure. Available through Azure Machine Learning, Hugging Face Endpoints let customers deploy Hugging Face models with just a few lines of Azure SDK code, significantly improving usability.
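For readers unfamiliar with the models that such endpoints host, the snippet below shows a local call to a public Hugging Face model via the transformers library. It is only a rough, assumed illustration of the model type involved; the actual Hugging Face Endpoints workflow provisions a managed REST endpoint through Azure Machine Learning rather than running the model locally.

```python
# Minimal sketch: run a public Hugging Face model locally with the
# transformers library. A Hugging Face Endpoint on Azure would expose the
# same kind of model behind a managed REST API instead.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # public example model
)

print(classifier("Azure and PyTorch make deployment much simpler."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```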