On Tuesday, Google revealed fresh details about the supercomputers it uses to train its AI models, saying they are faster and more power-efficient than comparable systems from Nvidia Corp.
Over 90% of Google’s AI training work runs on its custom chip, the Tensor Processing Unit (TPU). Training involves feeding data through models so they improve at tasks such as generating images or answering questions with human-like text.
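As a rough illustration of what that looks like in practice, the sketch below shows a single training step in JAX, Google’s open-source framework commonly used on TPUs. The toy model, loss function, and data are hypothetical placeholders for illustration only and are not taken from Google’s paper.

```python
# Minimal, hypothetical example of one training step in JAX.
import jax
import jax.numpy as jnp

def predict(params, x):
    # A toy linear model: y = x @ w + b
    return x @ params["w"] + params["b"]

def loss_fn(params, x, y):
    # Mean squared error between predictions and targets
    return jnp.mean((predict(params, x) - y) ** 2)

@jax.jit  # compiled with XLA, the same compiler stack that targets TPUs
def train_step(params, x, y, lr=0.01):
    # "Feeding data through the model": compute the loss gradients,
    # then nudge the parameters in the direction that reduces the loss.
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = {"w": jnp.zeros((4, 1)), "b": jnp.zeros((1,))}
x = jax.random.normal(key, (32, 4))   # a batch of example inputs
y = jax.random.normal(key, (32, 1))   # matching targets
params = train_step(params, x, y)     # one training step
```

Repeating that step over many batches of data, across thousands of chips, is what large-scale training amounts to.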
Google’s TPU is now in its fourth generation. In a scientific paper published on Tuesday, the company described how it strung together more than 4,000 TPUs into a single supercomputer, using custom-built optical switches to connect the individual machines.
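Google has not released the code behind that system, but as a loose illustration of the underlying idea of spreading one computation across many chips, here is a minimal JAX sketch that shards a batch of data over whatever accelerator devices are available. The device count, array sizes, and computation are hypothetical stand-ins, not Google’s actual configuration.

```python
# Hypothetical sketch: sharding a computation across multiple devices in JAX.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Build a 1-D mesh over all visible devices (TPU chips on a pod,
# otherwise whatever CPUs/GPUs the local runtime exposes).
devices = mesh_utils.create_device_mesh((jax.device_count(),))
mesh = Mesh(devices, axis_names=("data",))

# Shard a large batch along the "data" axis so each device holds one slice.
sharding = NamedSharding(mesh, PartitionSpec("data"))
batch = jax.device_put(jnp.ones((jax.device_count() * 8, 128)), sharding)

@jax.jit
def step(x):
    # Each device works on its local shard; the compiler inserts the
    # cross-device communication (over the interconnect) needed to
    # combine the partial results into one value.
    return jnp.mean(x ** 2)

print(step(batch))
```

On a real TPU pod, the interconnect that carries that cross-device traffic is exactly the part Google’s optical switches are designed to make reconfigurable and efficient.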
According to the paper, the resulting supercomputer, built around the fourth-generation TPU, is up to 1.7 times faster and 1.9 times more power-efficient than a comparable system based on Nvidia’s A100 chip, which was on the market at the same time.
Google said it did not compare its fourth-generation TPU with Nvidia’s current flagship H100 chip, because the H100 came to market later and is built with newer technology.