Global semiconductor design and manufacturing giant Intel has launched its new Gaudi2 processor to compete with NVIDIA, which currently dominates the AI computing market.
The new chip is aimed primarily at deep learning applications and offers twice the performance of its predecessor. Intel announced the launch at its Intel Vision 2022 event.
Gaudi2 is the second-generation processor from Habana Labs, an Israeli AI chip company that Intel acquired for $2 billion in late 2019. During the event, officials claimed that Gaudi2’s training throughput for the ResNet-50 computer vision model and the BERT natural language processing model is twice that of the NVIDIA A100-80GB GPU.
According to Eitan Medina, chief operating officer at Habana Labs, Gaudi2 delivers clear leadership in training performance over the A100 GPU, which is built on the same process node and has a similar die size, as evidenced by apples-to-apples comparisons on key workloads.
The next-generation AI chip is manufactured using Taiwan Semiconductor Manufacturing Co.’s 7-nanometer process technology. Alongside Gaudi2, Intel also launched a second Habana chip, Greco, aimed at inference. According to Intel, the new chips fill a gap in the market by offering customers high-performance, high-efficiency deep learning compute options for both training workloads and inference deployments in the data center, while lowering the AI barrier to entry for businesses of all sizes.
Sandra Rivera, executive vice president and general manager of the Data Center and AI Group at Intel, said, “The launch of Habana’s new deep learning processors is a prime example of Intel executing on its AI strategy to give customers a wide array of solution choices – from cloud to edge – addressing the growing number and complex nature of AI workloads.”
Rivera added that Gaudi2 could help Intel’s clients train increasingly large and complex deep learning models quickly and efficiently, and that the company is looking forward to Greco’s inference efficiency.