Among Nvidia’s Supercomputing 2024 announcements are two new hardware items. Available now, the H200 NVL is a PCIe card for conventional enterprise servers used by researchers and software developers; Nvidia has lined up numerous OEMs to build systems supporting the card. Due in 2H25, the GB200 NVL4 is a new Grace Blackwell “superchip” (module) for HPC customers. Its Hopper-based predecessor landed in a Cray design and in the Jupiter machine at Jülich Supercomputing Centre.
The NVL nomenclature highlights NVLink’s importance. The high-bandwidth, low-latency interconnect shuttles data quickly among connected chips. Although the H200 NVL is a PCIe card, the image below shows four such cards joined by a slab, presumably a substrate carrying the NVLink signals. The GB200 NVL4 is a planar module that integrates six main chips in a single NVLink domain: four Nvidia Blackwell GPUs and two Arm-based Grace processors.
Bottom Line
For years before the AI boom, GPU-based computing by enterprises and supercomputing centers was the mainstay of Nvidia’s data-center business. Even as Nvidia has developed large-scale systems to address AI customers’ requirements, the company continues to make the boards and modules its long-time GPU-computing customers demand. After all, these organizations’ teams formed the installed base of Cuda developers that has made it so hard for rival AI-accelerator (NPU/GPU) companies to gain market traction, especially in AI training.