
Google, one of the largest buyers of Nvidia’s AI chips, is ramping up efforts to distribute its own Tensor Processing Units (TPUs) more widely, signalling a bold new front in the AI chip race.
Traditionally, Google has rented Nvidia GPUs through its cloud service to major players like OpenAI and Meta.
Now, the company is aggressively pitching its TPUs to smaller cloud providers, many of which primarily rely on Nvidia hardware.
In a first-of-its-kind deal, Google has partnered with London-based Fluidstack to install TPUs in a New York data center—the first time Google has placed its custom chips in another provider’s facility.
The move is a direct challenge to Nvidia’s market dominance, as every rack of TPUs could mean fewer Nvidia GPUs in these centers.
It also reflects growing demand from cloud firms and AI developers to diversify away from a single supplier.
Perhaps most striking, Google has agreed to act as a financial “backstop” for up to $3.2 billion if Fluidstack cannot cover its lease obligations—an unprecedented show of commitment to drive TPU adoption.
This strategy underscores the high-stakes battle for AI infrastructure leadership, with Google betting big to shape the future of artificial intelligence.