Google announced its new Cloud TPU VMs, which allow companies to provision instances with Tensor Processing Units (TPUs). The virtual machines (VMs) give enterprises a new and improved experience for developing and deploying TensorFlow, PyTorch, and JAX on Cloud TPUs.
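As a rough sketch of what provisioning looks like, the gcloud CLI exposes a `tpu-vm` command group; the instance name, zone, accelerator type, and runtime version below are illustrative assumptions, not values from the announcement:

```shell
# Hedged sketch: provisioning a Cloud TPU VM with the gcloud CLI.
# Name, zone, accelerator type, and --version are placeholder assumptions.
gcloud compute tpus tpu-vm create my-tpu \
    --zone=us-central1-b \
    --accelerator-type=v3-8 \
    --version=tpu-vm-base

# SSH directly into the TPU host itself; no separate user VM is required.
gcloud compute tpus tpu-vm ssh my-tpu --zone=us-central1-b
```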
The Cloud TPUs no longer need to be accessed remotely over the network; instead, a virtual machine on each TPU host can serve as an interactive development environment. For example, a machine learning model can now be debugged line by line on a single TPU VM and later scaled up on a Cloud TPU Pod slice to take advantage of the high-speed TPU interconnect.
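A minimal sketch of this interactive workflow, assuming JAX is installed: on a TPU VM, `jax.devices()` lists the local TPU cores, while on any other machine JAX falls back to CPU, so the same script can be stepped through anywhere before scaling up.

```python
# Hedged sketch: running code directly on the host in a TPU VM with JAX.
# On a Cloud TPU VM, jax.devices() reports TPU cores; elsewhere it
# reports CPU devices, so this snippet also runs on an ordinary machine.
import jax
import jax.numpy as jnp

# Enumerate the accelerators visible to this host.
for d in jax.devices():
    print(d.platform, d.id)

# A small jitted computation; on a TPU VM it executes on the local
# accelerator without a network round trip to a remote TPU worker.
@jax.jit
def scaled_sum(x):
    return jnp.sum(x * 2.0)

print(float(scaled_sum(jnp.arange(8.0))))  # 2 * (0 + 1 + ... + 7) = 56.0
```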
Users have access at all times to every TPU VM they create and can run arbitrary code in a tight loop with the TPU accelerators. In addition, local storage can be used, custom code can run in input pipelines, and Cloud TPUs integrate more easily into production workflows.
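To make the input-pipeline point concrete, here is a minimal stand-alone sketch of a feeding loop that reads training shards from the host's local disk rather than streaming them over the network; the file names and the 4-byte-float record format are illustrative assumptions, not a real Cloud TPU format:

```python
# Hedged sketch: a custom input pipeline reading shards from local storage,
# as custom code on a TPU VM might. Record format is a placeholder.
import os
import struct
import tempfile

def write_shard(path, values):
    # Pack each float as 4 little-endian bytes; a stand-in record format.
    with open(path, "wb") as f:
        for v in values:
            f.write(struct.pack("<f", v))

def read_shards(directory):
    # Yield records one at a time, as a host-side feeding loop would.
    for name in sorted(os.listdir(directory)):
        with open(os.path.join(directory, name), "rb") as f:
            while chunk := f.read(4):
                yield struct.unpack("<f", chunk)[0]

tmp = tempfile.mkdtemp()
write_shard(os.path.join(tmp, "shard-0"), [1.0, 2.0])
write_shard(os.path.join(tmp, "shard-1"), [3.0])
print(list(read_shards(tmp)))  # [1.0, 2.0, 3.0]
```

Because the loop runs on the TPU host itself, preprocessing of this kind needs no separate feeder VM.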
The Cloud TPU VMs are available in two variants — Cloud TPU v2, based on the second-generation TPU chipset, and the new Cloud TPU v3, based on the third-generation chipset. According to Google Cloud, the significant difference between the two is performance: the Cloud TPU v2 delivers up to 180 teraflops, while the TPU v3 delivers up to 420 teraflops. Both variants can be used for tasks such as AI-powered healthcare analytics and quantum chemistry.
The new TPU system is built in a simpler, more flexible way to achieve performance gains. Because code no longer makes round trips across the data-center network to reach the TPUs, overall efficiency improves. It is also cost-efficient: data processing can run directly on the Cloud TPU hosts, eliminating the need for additional Compute Engine VMs.