Prime Highlights
- Meta Platforms introduced four custom AI processors under its Meta Training and Inference Accelerator (MTIA) program to improve performance and reduce reliance on external chip suppliers.
- The new chips will support AI workloads behind services on Facebook and Instagram, including recommendation systems and generative AI features.
Key Facts
- The first chip, MTIA 300, is already deployed in Meta data centers, while MTIA 400, 450 and 500 are expected to roll out by 2027.
- The processors are manufactured by Taiwan Semiconductor Manufacturing Company, while Meta continues to purchase GPUs from Nvidia and AMD for its AI infrastructure.
Background
Meta has launched a new set of in-house artificial intelligence chips to support its growing network of data centers and reduce dependence on external chip suppliers.
The company revealed four custom chips as part of its Meta Training and Inference Accelerator (MTIA) program. The initiative aims to improve computing performance and lower costs for AI workloads that power services across Meta’s platforms.
The first chip, called MTIA 300, has already been deployed in the company’s data centers. According to Meta, the chip is designed to train smaller AI models used for ranking and recommendation systems. These systems help show users relevant posts, videos, and advertisements on apps such as Facebook and Instagram.
Meta plans to release three additional chips — MTIA 400, MTIA 450 and MTIA 500 — over the next few years. These processors will focus on inference tasks for generative AI, including creating images and videos from text prompts. The company expects these chips to become operational by 2027.
Meta’s engineering team said the company is developing new chips quickly to keep up with the fast pace of AI development. The processors are manufactured by Taiwan Semiconductor Manufacturing Company, one of the world’s largest chip producers.
Technology companies have increasingly built their own application-specific integrated circuits to reduce reliance on costly graphics processing units supplied by firms like Nvidia and AMD.
At the same time, Meta continues to invest heavily in external AI hardware. The company recently signed agreements to deploy millions of Nvidia GPUs and large quantities of AMD chips across its data centers.
Meta’s AI expansion includes major data center projects in U.S. states such as Louisiana, Ohio and Indiana. The company operates or is planning about 30 data centers worldwide, most of them in the United States, supporting its rapid growth in artificial intelligence services.