In a bold move to strengthen its foothold in the booming AI infrastructure space, Broadcom Inc. has unveiled the Tomahawk Ultra, a next-generation networking chip designed to support large-scale AI workloads by connecting hundreds of AI accelerator chips within data centers. Positioned as a strong competitor to Nvidia’s NVLink Switch, the Tomahawk Ultra can link up to four times more chips using an enhanced version of Ethernet instead of proprietary protocols.
The Tomahawk Ultra is positioned as a direct competitor to Nvidia’s NVLink Switch, a key component in Nvidia’s AI supercomputing architecture. Broadcom claims a major leap in performance and scalability, with the ability to connect up to four times more chips than Nvidia's offering. Because the chip uses an enhanced Ethernet protocol rather than a proprietary interconnect, Broadcom pitches it as an open, standards-based alternative to Nvidia's technology.
"Tomahawk Ultra represents a fundamental shift in how data is routed and processed in AI-powered data centers," said Ram Velaga, Senior Vice President of Broadcom’s Switch Products Division. "With support for thousands of high-speed connections across server racks, this chip is designed for the AI era—delivering ultra-low latency, high bandwidth, and seamless scalability."
AI applications such as large language models (LLMs), generative AI, and machine learning pipelines require massive data exchange between accelerators like GPUs and TPUs. The Tomahawk Ultra acts as a traffic controller for this interconnect traffic, routing data between chips to prevent bottlenecks and maximize overall system performance.
Broadcom is already a key supplier of networking chips to companies like Alphabet’s Google, which designs its own AI chips and increasingly looks for alternatives to Nvidia's solutions. With the Tomahawk Ultra, Broadcom expands its competitive edge in the AI data center networking market—an area seeing exponential growth driven by demand for smarter, faster, and more efficient computing infrastructure.