NeuReality, an Israeli AI chip startup, has announced a US$35 million Series A funding round to commercialize its NR1 processor, which is designed to accelerate artificial intelligence applications. The round was led by Samsung Ventures, Cardumen Capital, Varana Capital, OurCrowd, and XT Hi-Tech, with participation from SK Hynix, Cleveland Avenue, Korean Investment Partners, StoneBridge, and Glory Ventures. With this round, NeuReality’s total funding stands at US$48 million.
The NR1 is a network-attached “server on a chip” built around a new class of Network Addressable Processing Units (NAPUs) designed specifically for deep learning inference workloads such as computer vision, natural language processing, and recommendation engines. The NAPU is aimed at hyperscalers and next-wave data center customers, enabling them to accommodate the expanding spectrum of their AI usage.
The latest funding will support NeuReality’s plans to begin deploying its inference products in 2023. The term “inference” refers to executing trained neural networks in production. In contrast to existing technologies, NeuReality’s solution is designed for optimal deployment in data centers and near-edge on-premises sites, locations that demand better performance, reduced latency, and significantly higher efficiency. Large-scale AI infrastructure generally struggles to maintain hardware efficiency as demand scales, because meeting that demand means adding more chips, which in turn consume substantial power to manage. The NR1 chip addresses the problem through linear scaling: more chips can be added to a server cluster without compromising hardware efficiency, while the latency of AI operations drops and system cost and power usage fall. These factors are crucial for improving the total cost of ownership (TCO) of data centers and large-scale on-premises compute systems, which is essential to the business models of many applications.
NeuReality provides the NR1 as part of an appliance called the NR1-S Inference Server, which houses several NR1 chips. The company claims the NR1-S can cut cost and power requirements by a factor of 50 compared with rival hardware. NeuReality also offers the NR1 on the NR1-M accelerator card, which connects to a server via a PCIe slot, allowing companies to integrate NeuReality’s technology into the existing server infrastructure in their data centers.
In addition to the NR1, NeuReality offers a collection of software tools that simplify deploying and managing AI applications in production. Among the components in NeuReality’s portfolio is an AI hypervisor, which helps customers manage machine learning applications deployed on NR1 chips.
Dr. Mingu Lee, Managing Partner at Cleveland Avenue Technology Investments, said, “NeuReality is bringing ease of use and scalability into the deployment of AI inference solutions, and we see great synergy between their promising technology and the Fortune 500 enterprise companies we communicate with. We feel that investing in companies such as NeuReality is vital, not only to ensure the future of technology, but also in terms of sustainability.”
NeuReality says it has been distributing NR1 prototypes to partners since last May. With its latest US$35 million funding round, the company aims to roll out its technology broadly, and plans to hire 20 additional staff over the next six months to support the effort.