Nvidia is the world’s most valuable company, largely because it sits at the centre of the current AI infrastructure boom. From training AI models to serving them to customers, companies rely on Nvidia AI chips and hardware. But Anthropic is now expanding its partnership with Google and Broadcom to secure multiple gigawatts of next-generation TPU (Tensor Processing Unit) capacity. The new compute infrastructure is expected to come online from 2027 and will power at least part of Claude AI.
While this is good news for both Google and Anthropic, the move could put Nvidia under some pressure. Nvidia’s AI chips are currently in extremely high demand, and the company holds a virtual monopoly on the market. From OpenAI to Anthropic to Meta, everyone wants data centres powered by Nvidia AI chips. The only exception is Google, which, alongside Nvidia hardware, also uses its own in-house AI chips, called Tensor Processing Units (TPUs). Now Google is opening its TPU data centres to other companies.
While Anthropic has so far relied heavily on Nvidia chips to run its large language models, the expanded partnership highlights a calculated effort to diversify its hardware stack.
Anthropic’s agreement with Google and Broadcom comes at a time when demand for AI hardware is surging. Anthropic in particular has seen a sharp rise in popularity in recent months with its Claude family of models. Its run-rate revenue has reportedly crossed $30 billion in 2026, a significant jump from around $9 billion at the end of 2025.
While Anthropic is bringing in new partners, it is not abandoning its existing ones. The company continues to train and run its Claude models across a mix of platforms, including AWS Trainium, Google TPUs and Nvidia AI chips. However, the broader implication of this deal lies in what it means for Nvidia.
Is Nvidia losing its dominance in the AI chip war?
Nvidia has become one of the most valuable companies in the world largely due to the AI boom, with its GPUs forming the backbone of most large-scale AI systems. To date, companies such as Anthropic have relied heavily on Nvidia hardware for both training and inference workloads.
That dominance, however, is now facing credible competition. By deepening its reliance on Google’s custom-built TPU accelerators and Broadcom’s chip capabilities, Anthropic is signalling that viable alternatives to Nvidia’s hardware exist.
Nvidia is unlikely to lose its leadership position overnight. But partnerships like this one, along with new initiatives such as TeraFab, a vertically integrated AI chip manufacturing facility announced by Elon Musk that is to be built in Austin by combining resources from Tesla, SpaceX, and xAI, suggest that major AI companies are actively working to reduce their dependence on a single supplier.
