Cartoon of a GMC G-series van with a Rush-themed mural on the side featuring Krieger, lifted from the animated TV show Archer.

NeuReality NR-1: AI-Processing Revolution or Time to Accept the Old Reality?

Men who hold high places must be the ones who start

EETimes has posted a story on NeuReality, an Israeli startup developing a type of data-processing unit (DPU) that manages AI accelerators (NPUs): fielding queries, scheduling work, managing data, and performing functions such as image decoding. Because it has Ethernet ports on one side and attaches to NPUs via PCIe on the other, the NeuReality NR-1 can obviate host CPUs and their accoutrements, reducing system cost and power. Describing the NR-1 as a network addressable processing unit (NAPU), NeuReality has focused on functions similar to those the Tesla Dojo’s DIPs perform.
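
To make the division of labor concrete, here is a minimal, purely illustrative Python sketch of the per-request work a conventional inference server does on its host CPU (parsing the query, decoding the payload, and scheduling it onto an accelerator queue), which is the sort of work a NAPU like the NR-1 is meant to handle inline between its Ethernet and PCIe ports. The wire format, names, and model identifier below are invented for illustration and do not reflect NeuReality's software.

```python
"""Illustrative sketch only: the per-request host-CPU work in a conventional
inference server (request parsing, payload decode, scheduling, hand-off to an
accelerator). These are the functions a NAPU such as the NR-1 aims to absorb.
The wire format and names are invented; nothing here is NeuReality's stack."""

import json
import queue
import zlib
from dataclasses import dataclass


@dataclass
class InferenceJob:
    model: str      # which model the NPU should run
    tensor: bytes   # decoded payload, ready to hand to the accelerator
    reply_to: str   # where to send the result


# Stand-in for a PCIe work queue feeding an NPU.
npu_queue: "queue.Queue[InferenceJob]" = queue.Queue()


def handle_request(packet: bytes) -> None:
    """All of this runs on the host CPU today; a NAPU would do it inline
    between its Ethernet and PCIe ports."""
    header_len = int.from_bytes(packet[:4], "big")        # 1. parse the query
    header = json.loads(packet[4:4 + header_len])
    payload = zlib.decompress(packet[4 + header_len:])    # 2. decode the payload
    job = InferenceJob(header["model"], payload, header["reply_to"])
    npu_queue.put(job)                                    # 3. schedule work for an NPU


if __name__ == "__main__":
    # Build a fake request and push it through the pipeline.
    body = zlib.compress(b"\x00" * 224 * 224 * 3)         # pretend image tensor
    hdr = json.dumps({"model": "resnet50", "reply_to": "10.0.0.7:9000"}).encode()
    handle_request(len(hdr).to_bytes(4, "big") + hdr + body)
    print("queued:", npu_queue.get().model)
```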

It’s rare to see a company identify a market need this clearly, yet we sense it faces an uphill battle. Even looking only at inference and excluding training, AI is Nvidia’s world. Nvidia’s Grace and BlueField address many of the issues NeuReality is tackling, albeit at greater scale and cost.

Moreover, we expect inference chips to increasingly combine NPU and CPU (and maybe networking) functions—if not on the same die, then on adjacent chiplets. In the meantime, cloud companies like Amazon, Google, and Microsoft, which consume the most AI silicon, have the resources to roll their own DPU/NAPU solutions, as they all have done to varying degrees with CPUs, networking-focused DPUs, and NPUs. Underscoring the tough road ahead, NeuReality touts collaborations with NPU suppliers IBM, Qualcomm, and Untether—companies that have yet to find AI-processing success.

