Meta develops ‘Mango’ and ‘Avocado’ AI models

Meta plans to roll out the new AI models in the first half of 2026
An image of a logo of Meta is seen at the Porte de Versailles exhibition centre in Paris, France, on June 11, 2025. — Reuters

Meta is ramping up its efforts to challenge Google and OpenAI in artificial intelligence, with the company reportedly working on a new generation of models that specialise in image- and video-related tasks as well as advanced text processing.

As reported by The Wall Street Journal, Meta has developed an AI model, nicknamed “Mango,” that focuses on images and videos.

Another advanced model in development is a text-based large language model, nicknamed “Avocado,” which is expected to offer enhanced coding and reasoning capabilities.

According to sources quoted in the report, Meta plans to roll out these models in the first half of 2026. Once deployed, they would be the first major products to emerge from Meta Superintelligence Labs (MSL), the AI research unit Meta set up recently.

Meta’s "Mango" model is said to be developed in such a way that it can understand the physical world better through visual data analysis. The text-to-video generation techniques are of high sophistication. 

Experts associated with the project believe that the text-to-video generation skills of the project are likely to be better than the existing tools in the market.

Separately, Meta’s Chief AI Officer Alexandr Wang has confirmed that the company is also beginning work on what it calls ‘world models.’ These systems aim to understand the world around them by processing visual inputs, in line with Meta’s broader goal of building more context-aware, general-purpose AI.