Artificial intelligence company Anthropic announced plans to significantly expand its partnership with Google Cloud, committing to use up to one million Tensor Processing Units (TPUs) in a deal valued at tens of billions of dollars. The move marks one of the largest compute infrastructure expansions in the AI sector to date and is expected to bring over a gigawatt of capacity online by 2026.
The announcement underscores Anthropic’s aggressive efforts to scale its compute capabilities as competition intensifies among leading AI developers. The additional TPUs will power the company’s research, product development, and model alignment efforts, enabling faster training and deployment of its flagship Claude AI systems.
“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” said Thomas Kurian, CEO of Google Cloud. “We continue to innovate on our TPU platform, building on our seventh-generation accelerator, Ironwood, to deliver greater efficiency and capacity.”
Anthropic, which now serves over 300,000 business customers, has seen its number of large enterprise accounts—those generating more than $100,000 in annual revenue—grow nearly sevenfold over the past year. The company said the expanded partnership will help it meet this surging demand from both enterprise clients and AI-native startups.
“Anthropic and Google have a longstanding partnership, and this latest expansion helps us grow the compute we need to define the frontier of AI,” said Krishna Rao, CFO of Anthropic. “Our customers, from Fortune 500 enterprises to emerging AI innovators, depend on Claude for their most critical work, and this ensures we can continue meeting their needs while advancing responsibly at scale.”
The collaboration also highlights Anthropic’s diversified compute strategy, which spans three major chip ecosystems: Google’s TPUs, Amazon’s Trainium, and NVIDIA’s GPUs. This multi-platform approach enables the company to balance performance, cost, and availability while maintaining strategic partnerships across the cloud ecosystem.
Anthropic reaffirmed its commitment to Amazon Web Services (AWS) as its primary training partner, continuing work on Project Rainier, a massive compute cluster comprising hundreds of thousands of AI chips across multiple U.S. data centers.
As demand for generative AI continues to surge globally, Anthropic’s expanded infrastructure investment signals its intent to remain at the leading edge of AI model development and responsible deployment.
“This expansion represents another step in ensuring we have the computational power required to advance the Claude family of models,” Anthropic said in a statement. “We will continue to invest in scalable, efficient compute to stay at the forefront of AI research and innovation.”