Breaking News
AWS to Integrate Nvidia’s NVLink Fusion in Future Trainium Chips as Cloud Giant Ramps Up AI Push
2025-12-03
Amazon Web Services (AWS) is deepening its partnership with Nvidia, announcing that its future generation of Trainium AI chips will incorporate NVLink Fusion, one of Nvidia’s flagship interconnect technologies. The move signals AWS’s intention to strengthen its position in the rapidly intensifying AI infrastructure market and attract enterprises building large-scale AI systems.
The announcement was made at AWS re:Invent 2025 in Las Vegas, where the company said the NVLink-enabled Trainium4 chip will deliver dramatically faster communication between processors, enabling larger, more tightly connected AI training clusters. AWS did not specify when Trainium4 will debut, but emphasized that NVLink Fusion will be foundational to its next wave of AI hardware.
Nvidia has been encouraging chipmakers to adopt NVLink to create unified, high-bandwidth systems capable of scaling to thousands of GPUs. With AWS joining Intel and Qualcomm as adopters, Nvidia is expanding the influence of its interconnect technology across the industry.
Nvidia CEO Jensen Huang described the partnership as a step toward “building the compute fabric for the AI industrial revolution,” saying the collaboration will allow companies around the world to access advanced AI systems through AWS. As part of the tie-up, customers will also gain access to AI Factories, dedicated AI compute clusters that AWS will deploy inside customer data centers for faster, more secure model development.
AWS Unveils Trainium3-Powered Servers
Alongside the Nvidia partnership, AWS announced the launch of new servers powered by Trainium3, available immediately. Each server contains 144 Trainium3 chips and delivers more than four times the compute performance of previous AWS AI hardware, while consuming 40% less power.
Dave Brown, vice president for AWS compute and machine learning services, said the company’s goal is to compete aggressively on price and performance. “We’ve got to prove we have a product that gives customers the performance they need at the right price point,” he told Reuters.
Amazon Pushes New Nova AI Models
AWS also unveiled updated versions of its in-house AI model family, Nova, aiming to catch up with rival models such as OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini. The new Nova 2 model is designed to deliver faster responses and support multimodal inputs, including text, images, speech and video. Another model, Sonic, generates human-like speech in real time.
Despite stiff competition in the foundation model space, AWS continues to benefit from enterprise demand for cloud-based AI infrastructure. Amazon recently reported a 20% revenue jump in its AWS business, driven largely by organizations migrating workloads and ramping up AI development.
To help enterprises tailor models to their own data, Amazon also introduced Nova Forge, a service that lets businesses train domain-specific AI systems without losing the base model’s general capabilities. “This allows you to produce a model that deeply understands your information,” AWS CEO Matt Garman said during his keynote.
With a new generation of chips, tighter Nvidia integration and an expanded AI model portfolio, AWS is positioning itself for the next phase of hyperscale AI competition—one increasingly defined by custom silicon, multimodal models and high-performance distributed infrastructure.