Marvell and DRAM suppliers Micron, Samsung, and SK Hynix are collaborating to offer custom high-bandwidth memory (HBM), departing from Jedec’s standards regime and undercutting data-center AI-accelerator (GPU/NPU) upstarts. We expect customization to only apply to the base logic die in an HBM stack while the memory-array dice remain general.
Why Customize HBM?
- Processor offload is possible with custom HBM. Companies customize chips to differentiate, achieving something unattainable with standard parts. Some HPC and AI functions could execute while data streams into or out of memory; implementing them in the HBM stack may improve performance and free the AI accelerator for other tasks.
- Interface throughput can increase by raising either the width or the transfer rate. Already employing thousands of traces, standard HBM interfaces are too wide to widen further. Marvell’s approach encompasses both parallel and serial die-to-die interfaces and promises to reduce power, area, and linear beachfront (die-edge length). (Note that parallel here refers to separating clock and data signals, whereas serial refers to embedding the clock in the data. Confusingly, a serial interface may still comprise multiple lanes operating in parallel.)
- The reduced processor area likely comes from a combination of reduced interface area and moving processors’ DRAM controllers to the HBM stack’s base die. Data-center XPUs will likely redeploy the saved interface area to computational functions.
- The reduced HBM footprint likely comes from the interface-area reductions. An important benefit is that smaller HBM stacks may allow processor designers to integrate three stacks where they could previously fit only two.
- Standards bodies are slow. Jedec governs everything from the size of the plastic trays holding chips during assembly and testing to memory interfaces; it even has vacuum-tube standards. Consensus among myriad parties takes time. Originally a niche memory technology, HBM is now the cornerstone of rapidly advancing AI accelerators. (It’s literally near the corners of most NPUs/GPUs. See the photos associated with our AMD MI300, Intel Gaudi 3, Nvidia Blackwell, and SambaNova coverage.) A standards body can’t keep pace with the industry.
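The width-versus-rate trade-off above is simple arithmetic. In the sketch below, the parallel-interface figures (1,024 data bits per stack at 6.4 GT/s) match the HBM3 standard; the serial lane count and per-lane rate are hypothetical, chosen only to illustrate how serialization can hold bandwidth constant while cutting trace count.

```python
# Illustrative arithmetic only. The HBM3 figures are standard;
# the serial-link lane count and rate are hypothetical.

def bandwidth_gbps(width_bits: int, rate_gtps: float) -> float:
    """Peak bandwidth in GB/s: width (bits) x transfer rate (GT/s) / 8 bits per byte."""
    return width_bits * rate_gtps / 8

# Standard HBM3 parallel interface: 1,024 data traces per stack at 6.4 GT/s.
parallel_gbps = bandwidth_gbps(1024, 6.4)   # 819.2 GB/s

# Hypothetical serialized die-to-die link: 64 lanes at 16x the per-pin rate
# deliver the same bandwidth over roughly 1/16th the data traces.
serial_gbps = bandwidth_gbps(64, 102.4)     # 819.2 GB/s
```

The same trade explains why the standard interface can’t simply get wider: thousands of additional traces consume beachfront and package routing, whereas faster (or serialized) lanes do not.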
Winners and Losers
- Marvell comes out ahead as it stands to win additional custom-silicon business or reap royalties from other chip designers licensing its interface technology.
- Memory makers will come out ahead if they (instead of Marvell) customize the base HBM die and can charge a premium. However, if the new approach merely moves the boundary between standard and proprietary silicon from the processor-base interface to the base-array interface, memory companies won’t escape the commoditization that characterizes their business.
- Big XPU companies can make better processors if they can customize HBM.
- Small chipmakers are disadvantaged. Lacking volume and the attendant scale economies, they’re unlikely to motivate memory makers to customize their products. They’ll be stuck with standard HBM. Worse, there may be no future standard HBM generations if customized memory accounts for most of the market.
Bottom Line: Memory Decommoditization
Memory chips are nominally a commodity, an item standardized to the degree that vendors’ products are interchangeable. The predominant HBM use, however, is within a processor package, lessening standardization’s value because the processor and memory vendors can agree to customizations without affecting system builders and other downstream customers. Marvell’s announcement shows that the industry is indeed breaking standards’ constraints and opening the door to memory-chip differentiation.