Meta Developing "Mango" Image/Video Model and "Avocado" Text Model Under New Superintelligence Lab for 2026 Release
Meta is developing two new AI models under its superintelligence lab: "Mango" for image and video generation, and "Avocado" for text-based tasks with improved coding capabilities, both planned for release in the first half of 2026. The company is also exploring world models that can understand visual information and reason without exhaustive training. The effort comes amid leadership changes, researcher departures, and Meta's struggle to keep pace with competitors such as OpenAI and Anthropic in the AI race.
Skynet Chance (+0.04%): Development of world models that can "reason, plan, and act" with visual understanding represents progress toward more autonomous AI systems with broader capabilities, incrementally increasing alignment challenges. However, this is still early-stage development with a 2026 timeline, limiting immediate risk impact.
Skynet Date (+0 days): The push toward world models with planning and reasoning capabilities slightly accelerates the development of more autonomous AI systems, though organizational instability and researcher departures may offset some of that acceleration, leaving only a minor net effect on the timeline.
AGI Progress (+0.03%): World models that understand visual information and can reason, plan, and act represent meaningful progress toward AGI's core requirements of multimodal understanding and general reasoning capabilities. The explicit focus on superintelligence research with concrete 2026 deliverables signals substantial investment in AGI-relevant capabilities.
AGI Date (+0 days): Meta's dedicated superintelligence lab, with concrete timelines and substantial resources, accelerates AGI development efforts, though the company's organizational challenges and its lag behind competitors somewhat temper this acceleration. The 2026 release target for advanced world models suggests moderate timeline compression.