Meta, the parent company of Facebook, Instagram, and WhatsApp, has started testing its first in-house AI training chip. The move aims to reduce its reliance on external suppliers such as Nvidia and lower AI infrastructure costs.
According to Reuters, Meta has begun a small-scale deployment of the chip and may increase production if the tests go well.
Meta's AI training chip
The chip is part of the Meta Training and Inference Accelerator (MTIA) series and is specifically designed to handle AI-related tasks. Unlike general-purpose GPUs, this dedicated accelerator could be more power-efficient for AI training.
Meta is working with Taiwan Semiconductor Manufacturing Company (TSMC) to manufacture the chip. The testing phase began after Meta completed a critical development step called "tape-out," which involves sending an initial chip design to a factory. This process can take months and cost millions of dollars, with no guarantee of success.
Meta's investment in AI is significant. The company has forecast total expenses of up to $119 billion for 2025, with around $65 billion earmarked for AI infrastructure. The new chip is expected to first support Meta's AI-driven recommendation systems and, later, its generative AI products such as Meta AI.
The company previously tested an inference chip designed to run AI systems efficiently but abandoned an earlier custom-chip project after unsuccessful trials. Despite these setbacks, Meta remains a major customer of Nvidia GPUs, which power AI training for its content recommendations, advertising, and language models like Llama.