AMD paraded new data-center and PC processors at an AI-themed pep rally this week that featured partners and customers cheering the company’s advancements. Most of the products had been discussed previously, but only in vague terms.
AMD Epyc 9005 (Turin)
What’s in a name? AMD applies the Turin code name to Epyc server processors based on both the regular Zen 5 and dense Zen 5c cores, highlighting the fundamental similarities of the processors and contrasting Epyc with Intel Xeon, which employs different CPU microarchitectures in its regular and dense designs. Turin is available now, and Google stated its cloud instances would be online in the first quarter.
Speed kills—AMD touted Epyc models that can boost to 5.0 GHz, citing demand for faster per-thread performance in AI “head nodes” (the host CPU to which GPUs and other accelerators attach). By contrast, the fastest Intel Xeon Granite Rapids models boost only up to 3.9 GHz. AMD’s speedy SKUs also have higher base frequencies, a more important metric for systems always under load. Most Turin models run faster than their Xeon counterparts, which should offset any Intel advantage in per-clock throughput (instructions per cycle, or IPC). (Edit: Phoronix benchmarks back up this assessment.)
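As a rough sanity check on that claim, per-thread throughput scales with frequency × IPC, so at these boost clocks Intel would need roughly a 28% IPC advantage just to reach parity. The calculation below assumes equal instruction counts and sustained boost clocks, which real workloads won’t match exactly:

```python
# Rough sanity check: per-thread performance ~ frequency x IPC.
# Clock figures are the boost speeds cited above; the parity math
# assumes equal instruction counts, which real workloads won't match.
amd_boost_ghz = 5.0    # fastest Turin SKUs
intel_boost_ghz = 3.9  # fastest Granite Rapids SKUs

# IPC advantage Intel needs for per-thread parity at boost clocks
required_ipc_advantage = amd_boost_ghz / intel_boost_ghz - 1
print(f"Intel needs ~{required_ipc_advantage:.0%} higher IPC for parity")
```

Base frequencies tell a similar story, which is why sustained-load comparisons also favor Turin.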
Price—Few customers demand the highest core-count processors, undermining the validity of list prices. Nonetheless, we find it noteworthy that 128-core Epycs list for much less than 128-core Xeons (Granites). Price and manufacturing costs aren’t closely coupled, but AMD’s chiplet approach should raise yields and reduce costs compared with Intel’s.
AMD Instinct MI325X
Midlife kicker—The MI325X is the MI300X but with 256 GB of HBM3E memory instead of 192 GB of HBM3 and peak memory bandwidth of 6 TB/s instead of 5.3 TB/s. AMD raised the board-power rating from 750 W to 1,000 W. The boards (OCP OAM modules) fit into the same sockets as MI300X. Chips are to be available by the end of the year, with systems following in the first quarter.
Nvidia comparison—The new AMD product matches up against the Nvidia H200, which is the HBM-enhanced midlife kicker for the original Hopper H100. The H200 has only 141 GB of HBM3E and 4.8 TB/s of memory bandwidth. The MI325X and MI300X also have greater peak throughput on FP8 and larger data types, but AMD doesn’t provide comprehensive performance comparisons on benchmarks. The H200 handily beat the MI300X on the one MLPerf inference benchmark for which AMD submitted results, and the MI325X will arrive about the same time as Nvidia’s Hopper replacement, Blackwell.
Looking forward—AMD reiterated its roadmap. The MI355X is due in 2H25. Refreshing the MI300X architecture, it adds four- and six-bit floating-point (FP4 and FP6) support and raises peak FP8 and FP16 throughput by 1.8×, putting it on par with Blackwell. Memory capacity and bandwidth climb to 288 GB and 8 TB/s, respectively. The new accelerator will fit in modules sized the same as the MI300X and MI325X. Due in 2026, the MI400 will implement a new architecture.
Pensando Salina (Pollara)
What is it? Founded by the storied MPLS team, Pensando was acquired by AMD in 2022. The organization develops data-processing units (DPUs), chips for smart NICs and other networking gear (e.g., as a services engine in switches for higher-layer functions). Its designs integrate packet engines that execute the P4 network-processing language along with general-purpose CPUs, offloads such as for cryptography and compression, Ethernet controllers, buffers, and other functions. The newest chip, code-named Salina, is also available on a NIC called Pollara.
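P4 programs are organized around match-action tables: a packet header field is matched against table entries, and the matching entry selects an action. As a loose illustration (in Python rather than P4, with invented table entries and action names), a packet engine conceptually does something like this:

```python
# Toy illustration of the match-action model at the heart of P4.
# Table keys, entries, and action names are invented for the example;
# a real P4 program compiles onto the DPU's packet engines.
def forward(dst_ip: str) -> str:
    return f"forward:{dst_ip}"

def drop(_: str) -> str:
    return "drop"

# A real table would do longest-prefix matching; exact-match keys
# keep the sketch short.
ipv4_table = {
    "10.0.0.1": forward,
    "10.0.0.2": forward,
}

def apply_table(dst_ip: str) -> str:
    # Match on the destination address; unmatched packets take
    # the table's default action.
    action = ipv4_table.get(dst_ip, drop)
    return action(dst_ip)

print(apply_table("10.0.0.1"))
print(apply_table("192.0.2.9"))
```

Because the pipeline is table-driven, operators can repopulate entries at runtime without recompiling the data plane, which is what makes P4 engines attractive for smart NICs and switch service engines.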
UEC—Salina aims to be the first chip to support the Ultra Ethernet Consortium’s forthcoming protocol standard. Unlike other Ethernet groups, the UEC is working on not only faster Ethernet versions but also higher-layer protocols for the data center (especially as used in AI and HPC clusters). The aim is to supplant Nvidia’s InfiniBand and various proprietary approaches employed by hyperscalers. Key UEC members are also part of the UALink Consortium, which is working on an open alternative to NVLink.
Performance—Salina operates at a 50% higher packet rate than Giglio, Pensando’s previous-generation DPU launched in 2023. It also integrates 400 Gbps Ethernet ports (up from 200 Gbps) and updates the PCIe interface to Gen5.
Why? Companies originally developed DPUs and smart NICs to provide networking services, offloading servers’ host processors. Big AI clusters, however, mainly employ them to manage data movement, such as reordering received packets and coping with network congestion.
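The reordering task above can be sketched simply: hold out-of-order arrivals in a buffer and release contiguous runs as the gaps fill in. This is pure Python with invented sequence numbers; a real DPU does this bookkeeping in hardware at line rate:

```python
import heapq

# Simplified sketch of in-order delivery from out-of-order arrivals,
# the kind of bookkeeping a DPU offloads from the host processor.
# Sequence numbers and payloads are invented for the example.
def reorder(packets):
    """Yield payloads in sequence order as contiguous runs complete."""
    heap, expected = [], 0
    for seq, payload in packets:
        heapq.heappush(heap, (seq, payload))
        # Release every packet now contiguous with what we've delivered
        while heap and heap[0][0] == expected:
            yield heapq.heappop(heap)[1]
            expected += 1

arrivals = [(1, "b"), (0, "a"), (3, "d"), (2, "c")]
print(list(reorder(arrivals)))  # ['a', 'b', 'c', 'd']
```

Doing this on the NIC matters in AI clusters because multipath routing (used to spread load and dodge congestion) deliberately lets packets arrive out of order, and the host CPU shouldn’t burn cycles putting them back.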
Other
AMD announced the pro version of its Zen 5 laptop processor code-named Strix. It uses the same silicon as the consumer-grade Strix but enables security features for corporate laptops. Yes, it incorporates an NPU to offload AI computations.
ROCm is AMD’s answer to Nvidia Cuda. AMD is still working on it and continues to deliver software-based performance gains. Developing ROCm is as essential to AMD’s success in AI as steel and concrete are to building a skyscraper, and about as interesting a topic for an event like this.
Speakers—Many companies sent emissaries to speak in support of AMD, and some have provided similar kind words in the past. AMD’s ability to rally customers and partners is a testament to the status it has achieved among hyperscalers and—increasingly—enterprise OEMs. Significant ones appearing at this pep rally include Microsoft, Meta, HPE, Lenovo, Dell, and Oracle.
Bottom Line
AMD is firing on all cylinders (so, too, is Intel’s HR department, but that’s a different story). The company has produced superior server processors for several years, enabling it to claim a 34% revenue share while capturing only a 24% unit share. The implied 37% price premium is evidence that AMD’s superior CPU density has won it business.
Now that Intel Xeon is achieving parity in performance-core counts, AMD’s rate of gains should slow. However, Turin shows AMD is still producing a better product. For dense designs, Zen 5c should outclass Intel E-core chips and Ampere’s Arm-based systems. AMD’s minority market position leaves it headroom to gain share, but the next 24 points will be a tough fight.
Hypercompetitive people like to say second place is first place among losers. The MI325X improves upon the MI300X and keeps AMD’s GPU line competitive with Nvidia’s. While AMD has leapt far ahead of other AI rivals, Nvidia still streaks forward. It took years for Epyc to capture meaningful share despite its competitive superiority, and it will take at least as long for Instinct to do the same against a more formidable competitor.