Cisco Systems has introduced its Silicon One series of networking chips, marking the company's debut in the AI supercomputing market. With dedicated chips for AI-driven applications, Cisco can now compete directly with market leaders Broadcom and Marvell Technology.
Five of the six major cloud providers are now evaluating Cisco's networking chips. The company withheld the providers' names; however, BofA Global Research reports that major firms including Amazon Web Services, Microsoft Azure, and Google Cloud are involved.
Growing demand for AI applications such as ChatGPT has underscored the importance of fast, reliable communication between individual chips, making Cisco's networking solutions increasingly relevant.
Cisco, a well-known manufacturer of networking hardware, including Ethernet switches, claims that its newest generation of Ethernet switch chips, the G200 and G202, delivers twice the performance of its predecessors. These switches can efficiently connect up to 32,000 graphics processing units (GPUs).
According to Cisco, its chips will allow AI and machine learning workloads to be completed with 40% fewer switches, reducing latency and improving power efficiency.
The news follows Broadcom's April launch of the Jericho3-AI processor, which also touts the ability to connect up to 32,000 GPUs. The development signals intensifying competition as key players vie for dominance in the rapidly expanding AI supercomputing market.