Breaking News
Broadcom Signs Long-Term AI Chip Deal with Google, Partners with Anthropic on Compute Capacity
2026-04-07
Broadcom has entered into a long-term agreement with Google to develop and supply custom artificial intelligence chips and related components for next-generation AI infrastructure through 2031.
The partnership will focus on advancing Google’s AI racks, including systems powered by its proprietary tensor processing units (TPUs), as demand for specialized AI hardware continues to grow amid rising competition with GPU-based solutions.
In a separate agreement, Broadcom has also partnered with Anthropic to provide access to approximately 3.5 gigawatts of AI computing capacity starting in 2027. The capacity will be supported by Google’s AI processors, highlighting deeper collaboration across the AI ecosystem.
Financial details of the agreements were not disclosed, but the announcement reflects increasing demand for custom silicon as companies look for alternatives to high-cost graphics processing units from Nvidia.
Google has been investing heavily in scaling its TPUs as a competitive option for AI workloads, with the chips becoming a key driver of growth in its cloud business. The move aligns with broader industry trends toward vertically integrated AI infrastructure, where companies design their own hardware to optimize performance and cost.
Anthropic, meanwhile, said the agreement supports its broader plan to invest $50 billion in expanding computing infrastructure in the United States. The company has seen rapid growth in demand for its Claude AI models, with annualized revenue reportedly surpassing $30 billion in 2026.
Anthropic currently utilizes a mix of AI hardware platforms, including offerings from Amazon Web Services, Google, and Nvidia, reflecting a multi-vendor approach to scaling its AI capabilities.
The deals underscore intensifying competition in the AI hardware space, as cloud providers and chipmakers collaborate to build more efficient, scalable alternatives to traditional GPU-centric architectures.