The social media giant is developing an image and video-focused AI model code-named Mango, alongside its next large language model, Avocado. Both are expected to debut in the first half of 2026.
Meta's chief AI officer, Alexandr Wang, revealed the plans during an internal company Q&A with chief product officer Chris Cox. Wang said Avocado will focus heavily on improved coding abilities.
He told staff the company is exploring world models, which are AI systems that learn about their environment by processing visual information. The work is still at an early stage.
The effort follows a major internal shake-up. Meta reorganised its AI operations over the summer and hired Wang to run a new division called Meta Superintelligence Labs.
Meta CEO Mark Zuckerberg personally led a hiring spree, poaching more than 20 researchers from OpenAI and assembling a team of more than 50 engineers and AI specialists.
Image and video generation has become one of the hottest battlegrounds in the AI sector. In September, Meta launched an AI video generator called Vibes in collaboration with Midjourney.
Less than a week later, OpenAI rolled out its own video generator, Sora, escalating the pace of releases. Google also pushed into the space with its Nano Banana image tool, driving Gemini’s monthly users from 450 million in July to more than 650 million by late October.
After Google released the third version of Gemini in November, OpenAI CEO Sam Altman publicly declared a “code red” as his company rushed to reclaim benchmark leadership.
Speaking to journalists last week, Altman emphasised the pull of visuals, noting that image generation is a primary attraction for users and a particularly “sticky” feature that keeps them coming back.