The MI300X, AMD's most advanced GPU for AI, was announced on Tuesday. According to analysts, AMD's launch poses the most significant threat to Nvidia, which currently holds more than 80% of the market for AI chips.
According to AMD, its new MI300X GPU and the CDNA architecture behind it were designed for large language models and other advanced AI models. The MI300X can accommodate larger AI models than competing chips because it carries up to 192GB of memory; Nvidia's rival H100, for instance, supports only 120GB.
Large language models for generative AI applications consume a lot of memory because they perform an ever-growing number of calculations. AMD demonstrated the Falcon model, which has 40 billion parameters, running on the MI300X.
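As a rough illustration of why memory capacity matters, the sketch below estimates the space taken up by a model's weights alone. The 16-bit precision and the omission of activations and other runtime buffers are simplifying assumptions for this back-of-the-envelope calculation, not figures from AMD.

```python
# Back-of-the-envelope estimate of the memory an LLM's weights require.
# Assumes 16-bit (2-byte) weights and ignores activations, KV cache, etc.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory, in GB, needed to hold the model weights."""
    return num_params * bytes_per_param / 1e9

# A Falcon-class model with 40 billion parameters:
falcon_40b = weight_memory_gb(40e9)
print(f"~{falcon_40b:.0f} GB of weights")  # ~80 GB: fits comfortably in 192GB, tight on smaller cards
```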
Additionally, AMD said it will offer an Infinity Architecture that combines eight of its MI300X accelerators into a single system. Nvidia and Google have built comparable systems for AI applications that pack eight or more GPUs into a single box.
AI developers have historically favored Nvidia chips because of a well-developed software package called CUDA, which gives them access to the chip's essential hardware features. On Tuesday, AMD said it has its own software for its AI chips, called ROCm.
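For developers who work through a framework rather than writing GPU kernels directly, much of that software gap is abstracted away. The minimal sketch below assumes a ROCm build of PyTorch, in which the familiar torch.cuda API is backed by AMD's HIP runtime, so the same script can target either vendor's GPUs; it is an illustration of the ecosystem point, not AMD-provided code.

```python
# Minimal sketch: framework-level code that runs on either CUDA or ROCm builds of PyTorch.
import torch

# On ROCm builds of PyTorch, torch.cuda.is_available() also reports True for AMD GPUs.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(1024, 1024, device=device)
y = x @ x  # matrix multiply dispatched to the GPU via CUDA or ROCm/HIP, depending on the build
print(device, y.shape)
```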
AMD did not provide pricing, but the move may put pressure on the cost of Nvidia GPUs such as the H100, which can cost up to $30,000. The MI300X will be available for sampling this fall, but larger shipments won't begin until next year, AMD CEO Lisa Su stated on an earnings call last month.