Meta Platforms is reportedly in advanced discussions to deploy Google's custom AI accelerators, Tensor Processing Units (TPUs), in its own data centres starting as early as 2027. The deal would also let Meta rent TPU capacity from Google Cloud in the interim, potentially beginning as soon as 2026.
- According to reports, Meta is weighing a multi-billion-dollar investment to shift part of its AI infrastructure from its current reliance on NVIDIA GPUs toward Google's TPU architecture.
- The proposed plan includes both on-premises deployment of Google TPUs inside Meta's own data centres by 2027 and the option of renting TPU compute from Google Cloud beginning in 2026.
- By offering its TPUs for installation outside its own data centres, rather than exclusively through its cloud, Google is making a strategic pivot in its hardware strategy.
For Meta, the move signals an attempt to diversify its compute-infrastructure stack and reduce dependence on a single supplier, in this case NVIDIA, which remains dominant in high-end AI accelerators.
Why Meta Is Breaking Its GPU Dependence
For much of the past decade, Meta has relied heavily on NVIDIA's GPU families, first Volta, then Ampere, Hopper and now Blackwell, to power its AI training and inference workloads. But as global demand has surged and supply constraints persist, Meta's leadership recognises that the next generation of AI products requires far more compute, delivered far more quickly and at far more predictable cost.
Key drivers behind the diversification strategy
Supply Chain Risk: With hyperscalers, sovereign AI projects, and startups all competing for the same GPU supply, Meta cannot depend on a single vendor for mission-critical infrastructure. Multi-chip sourcing reduces risk and increases negotiating leverage.
Exploding Compute Requirements: Meta's LLMs, vision-language models, real-time recommendation engines, creator tools and mixed-reality systems demand compute that grows at an exponential pace, faster than any single vendor can sustainably supply.
Cost Pressures: A competitive chip environment helps Meta control infrastructure spend, one of its largest operating expenditures, especially as AI adoption accelerates across products.
Strategic Autonomy: By combining in-house silicon with external accelerators, Meta gains more control over model optimisation, data centre design, thermal efficiency and power usage. Hardware-portable software stacks are what make such a mix practical, as the sketch below illustrates.
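To see why mixing accelerator vendors is technically feasible at all, consider a minimal, hypothetical sketch in JAX, Google's XLA-based framework and a common way to target TPUs. This is not Meta's actual code, and the function and shapes here are illustrative assumptions; the point is simply that the same jit-compiled program runs unchanged on whichever backend XLA detects, be it an NVIDIA GPU, a Google TPU or a plain CPU:

```python
# Minimal sketch (not Meta's stack) of framework-level portability:
# the same JAX program compiles, via XLA, to whatever accelerator
# is present, so swapping vendors is a deployment decision rather
# than a rewrite.
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for the detected backend
def dense_layer(params, x):
    w, b = params
    return jnp.tanh(x @ w + b)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (jax.random.normal(k1, (512, 256)),   # weights (illustrative shapes)
          jax.random.normal(k2, (256,)))       # bias
x = jax.random.normal(k3, (32, 512))           # a batch of inputs

# The call site is identical regardless of hardware vendor;
# only the detected platform differs.
print("running on:", jax.devices()[0].platform)  # e.g. 'gpu' or 'tpu'
print(dense_layer(params, x).shape)              # (32, 256)
```

Because the compilation target is resolved at run time, porting a workload between GPU and TPU fleets is largely an infrastructure exercise, which is precisely what gives a buyer like Meta leverage over any single chip supplier.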
Meta’s diversification of its AI chip supply marks one of the most consequential shifts in the global AI infrastructure landscape.