TPU mania surged last month when Google released its TPU-trained Gemini 3.0 AI model and Reuters reported that Meta would adopt the TPU for its models. The press turned its attention to the TPU, highlighting Nvidia’s sliding stock price, mooting the chip as a GPU alternative, and suggesting that Nvidia customer OpenAI had fallen behind not just in model performance but also in executing ground-up pretraining runs for new releases. Viewed in a broader context, the November disclosures are a mixture of old and new.
The TPU itself is not new; Google has used and offered the accelerator for years. The genuine development is commercial, not technical: Meta’s reported adoption marks a pivot in Google’s strategy from offering TPU access as a cloud service to selling TPU systems as hardware, a shift that alters the competitive landscape for Nvidia, AMD, and the broader AI-silicon market.
Old TPU News
- The TPU is not a new AI-acceleration technology; the chip is on its seventh generation, called Ironwood.
- Google has trained and run its business-critical models, including previous Gemini generations, on TPUs.
- Google offers TPU access through its cloud services. Notable customers include Anthropic (maker of the Claude models favored by coders) and Apple. Small companies, particularly those with Google Cloud credits, have also used TPUs.
- When we discussed Ironwood, we noted that its raw performance is similar to that of Nvidia’s Blackwell, but it lacks Blackwell’s support for the lowest-precision data formats (such as FP4).
- Actual performance, particularly for training, isn’t disclosed in either case. The chips’ architectures, the systems’ interconnect topologies, and their software stacks differ vastly, so peak specifications are a poor proxy for delivered performance and energy efficiency.
- Google has not sold TPUs; internally, it deploys them only in pods assembled from 4 × 4 × 4 logical cubes of 64 chips each (physically, a cube is a rack).
- Google’s Edge TPU is a different chip and irrelevant to the data-center discussion.
- Gemini, GPT, Claude, and competing large language models (LLMs) leapfrog each other with every new release. In principle, the models and their effectiveness are independent of the XPU on which they execute, as the sketch after this list illustrates.
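To make that hardware independence concrete, here is a minimal JAX sketch (hypothetical code, not any vendor’s production model): the jitted function below is compiled by XLA for whichever accelerator is present, so the same source runs on a TPU pod or an Nvidia GPU without modification.

```python
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for whatever backend is present: TPU, GPU, or CPU
def attention_scores(q, k):
    # Scaled dot-product attention scores, the core operation of transformer LLMs.
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]))

key = jax.random.PRNGKey(0)
q = jax.random.normal(key, (128, 64), dtype=jnp.bfloat16)
k = jax.random.normal(key, (128, 64), dtype=jnp.bfloat16)

# On a Cloud TPU VM this lists TpuDevice(...); on an Nvidia machine, CudaDevice(...).
# The function above is identical in both cases; XLA handles the retargeting.
print(jax.devices())
print(attention_scores(q, k).shape)  # (128, 128)
```

In practice, portability is rarely this clean: kernels, parallelism layouts, and numerics are tuned per platform, which is one reason switching XPUs is a commitment rather than a recompile.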
New TPU News
The news, therefore, is only the possibility that Meta will own and house a TPU pod, a marked departure from Google’s practice of offering access as a service. This indicates that Meta’s TPU use wouldn’t be a one-off experiment but an ongoing commitment, one made at the expense of deploying alternative silicon. Meta would do so for financial reasons: other things being equal, if the goal were merely independence from Nvidia, Meta could turn to AMD or devote resources to making its in-house MTIA accelerator competitive.
Bottom Line
As for Google, Meta is its ad-business rival, but Nvidia has shown there’s more value in selling pickaxes than in mining gold. By selling TPU pods, Google can capture some of that value, build the TPU ecosystem, and better amortize development costs.
The pecking order among AI accelerators has been:
- Nvidia GPUs
- Proprietary designs (hyperscalers’ ASICs, such as the TPU and Amazon Trainium)
- AMD
- Everyone else.
If the TPU breaks out of the confines of Google, it will keep its rank in the pecking order, but the players’ shares will shift. Demand for Nvidia GPUs has been effectively limitless; the same can’t be said of AMD’s products, much less those of the fourth tier. Google’s gains will therefore come at lower-ranked competitors’ expense.

