Meta Platforms has revealed a roadmap for four new custom-designed AI chips as the company accelerates the expansion of its global data-center infrastructure.
The chips are part of the company’s Meta Training and Inference Accelerator (MTIA) program, an internal effort aimed at reducing reliance on external chip suppliers and optimizing hardware for Meta’s AI workloads.
The first chip in the lineup, MTIA 300, is already being used to power recommendation and ranking systems across Meta’s platforms, including Facebook and Instagram.
Three additional chips are planned for rollout through 2027, with the final two—MTIA 450 and MTIA 500—designed primarily for AI inference, the stage where trained models generate responses to user queries.
Large technology companies are increasingly designing chips in-house to handle specialized AI workloads. Firms such as Alphabet and Microsoft have taken similar steps to complement processors purchased from suppliers like NVIDIA and Advanced Micro Devices.
Custom silicon allows companies to tailor hardware to their own data-processing needs, improving energy efficiency and reducing operational costs in massive data centers.
Meta has had some success building chips for inference tasks but has faced challenges developing processors capable of training large generative AI models. The upcoming MTIA 400 chip represents the company’s next step toward addressing that challenge.
Meta said the MTIA 400 is being integrated into a full hardware system designed for its data centers, including infrastructure roughly the size of multiple server racks and featuring liquid cooling technology.
The company plans to release the chips at six-month intervals, reflecting the rapid pace at which it is expanding data-center capacity to support AI workloads across its platforms.
Meta has significantly increased spending on AI infrastructure. Earlier this year the company projected capital expenditures of between $115 billion and $135 billion, largely tied to data-center construction and computing capacity.
While Meta is building chips internally, it continues to work with semiconductor partners. The company collaborates with Broadcom on parts of its chip designs, while manufacturing is handled by Taiwan Semiconductor Manufacturing Company.
Despite its push toward custom silicon, Meta is also continuing to purchase large volumes of AI processors from external suppliers. In February, the company signed deals worth tens of billions of dollars to buy chips from NVIDIA and AMD.