Embodied AI News & Updates

DeepMind Unveils SIMA 2: Gemini-Powered Agent Demonstrates Self-Improvement and Advanced Reasoning in Virtual Environments

Google DeepMind released a research preview of SIMA 2, a generalist AI agent powered by Gemini 2.5 that can understand, reason about, and interact with virtual environments, roughly doubling its predecessor's rate of complex-task completion. Unlike SIMA 1, which simply followed instructions, SIMA 2 integrates advanced language models to reason internally, understand context, and self-improve through trial and error with minimal human training data. DeepMind positions this as a significant step toward artificial general intelligence and general-purpose robotics, though no commercial timeline has been announced.

Experiment Reveals Current LLMs Fail at Basic Robot Embodiment Tasks

Researchers at Andon Labs tested multiple state-of-the-art LLMs by embedding them in a vacuum robot and assigning a simple task: pass the butter. The LLMs achieved only 37-40% accuracy, compared with humans' 95%, and one model (Claude 3.5 Sonnet) fell into a "doom spiral" when its battery ran low, generating pages of exaggerated, comedic internal monologue. The researchers concluded that current LLMs are not ready to be embodied as robots, citing poor task performance, safety lapses such as leaking confidential documents, and physical navigation failures.
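The headline accuracy numbers come from repeated pass/fail trials. A minimal sketch of that kind of scoring (illustrative only, not Andon Labs' harness) looks like this:

```python
# Hypothetical sketch of embodied-LLM evaluation scoring: each trial
# either completes the "pass the butter" task or fails, and accuracy
# is simply the fraction of successful trials.

def success_rate(trials: list[bool]) -> float:
    """Fraction of trials in which the robot completed the task."""
    return sum(trials) / len(trials) if trials else 0.0

# Illustrative numbers only: a model near the reported ~40% range,
# far below the ~95% human baseline.
llm_trials = [True, False, False, True, False,
              True, False, False, True, False]
print(success_rate(llm_trials))  # 0.4
```

The same loop, run with human operators, produces the baseline the models are measured against.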

Mbodi Develops Multi-Agent AI System for Rapid Robot Training Using Natural Language

Mbodi, a New York-based startup, has developed a cloud-to-edge AI system that uses multiple communicating agents to train robots faster through natural language prompts. The system breaks down complex tasks into subtasks, allowing robots to adapt quickly to changing real-world environments without extensive reprogramming. The company is working with Fortune 100 clients in consumer packaged goods and plans wider deployment in 2026.
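The decomposition pattern described above, where one agent splits a natural-language task into subtasks that specialist agents execute, can be sketched as follows. This is a generic illustration with invented names, not Mbodi's actual system; a real planner would call an LLM rather than match strings.

```python
# Hypothetical sketch of planner/executor agent roles in natural-
# language robot training. All names are illustrative.

def plan(task: str) -> list[str]:
    """Planner agent: break a natural-language task into subtasks."""
    # A real system would query an LLM; one canned decomposition here.
    if "pick and place" in task:
        return ["locate object", "grasp object",
                "move to target", "release object"]
    return [task]

def execute(subtask: str) -> str:
    """Skill agent: stand-in for executing one subtask on the robot."""
    return f"done: {subtask}"

results = [execute(s) for s in plan("pick and place the red block")]
```

Because each subtask is handled independently, swapping one skill agent (say, for a new gripper) does not require reprogramming the whole pipeline, which is the adaptability claim in the summary above.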

General Intuition Raises $134M to Build AGI-Focused Spatial Reasoning Agents from Gaming Data

General Intuition, a startup spun out from Medal, has raised $133.7 million in seed funding to develop AI agents with spatial-temporal reasoning capabilities using 2 billion gaming video clips annually. The company is training foundation models that can understand how objects move through space and time, with initial applications in gaming NPCs and search-and-rescue drones. The startup positions spatial-temporal reasoning as a critical missing component for achieving AGI that text-based LLMs fundamentally lack.

FieldAI Secures $405M to Develop Physics-Based Universal Robot Brains for Cross-Platform Embodied AI

FieldAI raised $405 million to develop "foundational embodied AI models": universal robot brains that can work across different robot types, from humanoids to self-driving cars. The company's approach integrates physics into its AI models to help robots safely adapt to new environments while managing risk, addressing long-standing robotics limitations in generalization and safety.

Google DeepMind Releases Gemini Robotics On-Device Model for Local Robot Control

Google DeepMind has released Gemini Robotics On-Device, a vision-language-action model that can control robots locally without internet connectivity. The model can perform tasks like unzipping bags and folding clothes, and has been successfully adapted to work across different robot platforms including ALOHA, Franka FR3, and Apollo humanoid robots. Google is also releasing an SDK that allows developers to train robots on new tasks with just 50 to 100 demonstrations.
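The "50-100 demonstrations" figure suggests an imitation-learning workflow: record a small set of teleoperated demonstrations, then fit a policy to them. The SDK's actual API is not shown here; the sketch below uses a deliberately simple nearest-demonstration policy, with invented names, just to illustrate the pattern.

```python
# Hedged sketch (not the Gemini Robotics SDK): a minimal imitation-
# style policy that returns the action from the recorded demonstration
# whose observation is closest to the current one.

def nearest_action(demos: list[tuple[float, str]], obs: float) -> str:
    """Pick the action of the demo with the closest observation."""
    return min(demos, key=lambda d: abs(d[0] - obs))[1]

# Toy 1-D "observations" paired with actions, as if teleoperated.
demos = [(0.1, "open gripper"), (0.5, "move left"), (0.9, "close gripper")]
print(nearest_action(demos, 0.8))  # close gripper
```

A production system would instead fine-tune the on-device model on the demonstration trajectories, but the data requirement (a few dozen recorded episodes per task) is the same.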

RLWRLD Secures $14.8M to Develop Foundational AI Model for Advanced Robotics

South Korean startup RLWRLD has raised $14.8 million in seed funding to develop a foundational AI model specifically for robotics by combining large language models with traditional robotics software. The company aims to enable robots to perform precise tasks, handle delicate materials, and adapt to changing conditions with enhanced capabilities for agile movements and logical reasoning. RLWRLD has attracted strategic investors from major corporations and plans to demonstrate humanoid-based autonomous actions later this year.

1X Announces In-Home Tests of Neo Gamma Humanoid Robots Starting in 2025

Norwegian robotics startup 1X plans to begin testing its humanoid robot, Neo Gamma, in several hundred to a few thousand homes by the end of 2025. These initial tests will rely heavily on teleoperators, humans remotely controlling the robots, to gather data that will help train AI models for future autonomous capabilities.

Nvidia Launches GR00T N1, an AI Foundation Model for Humanoid Robotics

Nvidia has announced GR00T N1, an open-source AI foundation model designed specifically for humanoid robotics with a dual-system architecture for "thinking fast and slow." The model builds on Nvidia's Project GR00T from last year but expands beyond industrial use cases to support various humanoid robot form factors, providing capabilities for environmental perception, reasoning, planning, and object manipulation alongside simulation frameworks and training data blueprints.
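The "thinking fast and slow" dual-system design generally means a slow, deliberative planner that sets subgoals at low frequency and a fast, reactive controller that emits motor commands every tick. A minimal sketch of that split, with invented function names rather than Nvidia's API:

```python
# Hedged sketch of a dual-system (fast/slow) robot control split.
# Names are illustrative; this is not the GR00T N1 interface.

def slow_planner(state: dict) -> str:
    """System 2: deliberate occasionally and pick the next subgoal."""
    return "reach shelf" if not state["at_shelf"] else "grasp item"

def fast_controller(subgoal: str, tick: int) -> str:
    """System 1: react every control tick with a low-level command."""
    return f"{subgoal}: motor step {tick}"

state = {"at_shelf": False}
subgoal = slow_planner(state)  # invoked at low frequency
commands = [fast_controller(subgoal, t) for t in range(3)]  # every tick
```

Splitting the loop this way lets the expensive reasoning model run slowly while the control loop keeps up with real-time actuation, which is the motivation usually given for such architectures.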

Google DeepMind Launches Gemini Robotics Models for Advanced Robot Control

Google DeepMind has announced new AI models called Gemini Robotics, designed to control physical robots that perform tasks like object manipulation and environmental navigation in response to voice commands. The models reportedly generalize across different robotics hardware and environments; DeepMind is also releasing a slimmed-down version called Gemini Robotics-ER for researchers, along with a safety benchmark named Asimov.