Reuters reports that Meta (Facebook) plans to deploy its own data-center inference chip. The appeal of such a design is clear: a chip built specifically for inference should cost less and consume less power than ASSPs such as Nvidia’s GPUs. Meta has the resources, including software expertise, to make a custom approach work, and it has a well-defined (even if evolving) set of workloads. However, Meta scrapped a previous inference-chip project, as an earlier Reuters article related. Off-the-shelf chips optimized for inference, such as Qualcomm’s Cloud AI 100, didn’t meet Meta’s needs either, possibly because those needs grew beyond accelerating image classification to include running recommendation engines. That failure, however, could give Meta the experience required to succeed on a second attempt. In the meantime, the company continues to buy merchant-market AI accelerators such as Nvidia’s and AMD’s GPUs, and it is also working on a training-capable NPU, a much more ambitious project.