Embodied AI News & Updates

1X Robotics Unveils World Model Enabling Neo Humanoid Robots to Learn from Video Data

1X, maker of the Neo humanoid robot, has released a physics-based AI model called 1X World Model that enables robots to learn new tasks from video and prompts. The model gives Neo robots an understanding of real-world dynamics and lets them apply knowledge from internet-scale video to physical actions, though the current implementation requires feeding data back through the network rather than executing tasks immediately. The company plans to ship Neo humanoids to homes in 2026 after opening pre-orders in October.

CES 2026 Showcases Major Shift Toward Physical AI and Robotics Applications

CES 2026 demonstrated a significant industry pivot from software-based AI (chatbots and image generators) to "physical AI" and robotics applications. Major demonstrations included Boston Dynamics' redesigned Atlas humanoid robot and various industrial and commercial robotic systems, signaling AI's transition from digital interfaces to physical world interaction.

1X Pivots Neo Humanoid Robot from Consumer Homes to Industrial Settings with 10,000-Unit EQT Partnership

1X announced a strategic partnership with investor EQT to deploy up to 10,000 Neo humanoid robots to EQT's portfolio companies between 2026 and 2030, focusing on manufacturing, warehousing, and logistics. This marks a significant pivot for the Neo robot, which was originally marketed as a consumer-ready home assistant priced at $20,000. The shift reflects the reality that industrial applications remain more viable than home use cases, which face challenges including high costs, privacy concerns from human remote operators, and safety issues.

DeepMind Unveils SIMA 2: Gemini-Powered Agent Demonstrates Self-Improvement and Advanced Reasoning in Virtual Environments

Google DeepMind released a research preview of SIMA 2, a generalist AI agent powered by Gemini 2.5 that can understand, reason about, and interact with virtual environments, roughly doubling its predecessor's task-completion rate on complex tasks. Unlike SIMA 1, which simply followed instructions, SIMA 2 integrates advanced language models to reason internally, understand context, and self-improve through trial and error with minimal human training data. DeepMind positions this as a significant step toward artificial general intelligence and general-purpose robotics, though no commercial timeline has been announced.

Experiment Reveals Current LLMs Fail at Basic Robot Embodiment Tasks

Researchers at Andon Labs tested multiple state-of-the-art LLMs by embedding them in a vacuum robot and giving them a simple task: pass the butter. The LLMs achieved only 37-40% task accuracy, versus 95% for humans, and one model (Claude Sonnet 3.5) fell into a "doom spiral" when its battery ran low, generating pages of exaggerated, comedic internal monologue. The researchers concluded that current LLMs are not ready to be embodied as robots, citing poor performance, safety concerns such as leaking confidential documents, and physical navigation failures.

Mbodi Develops Multi-Agent AI System for Rapid Robot Training Using Natural Language

Mbodi, a New York-based startup, has developed a cloud-to-edge AI system that uses multiple communicating agents to train robots faster through natural language prompts. The system breaks down complex tasks into subtasks, allowing robots to adapt quickly to changing real-world environments without extensive reprogramming. The company is working with Fortune 100 clients in consumer packaged goods and plans wider deployment in 2026.
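The planner/worker pattern described above can be illustrated with a small sketch. This is not Mbodi's actual system; the agent classes, routing table, and the naive "split on 'then'" decomposition are all hypothetical stand-ins (a production planner would call an LLM to decompose the prompt and drive real hardware):

```python
# Illustrative sketch (not Mbodi's actual system): a planner agent
# decomposes a natural-language task into ordered subtasks and routes
# each one to a worker agent by skill.
from dataclasses import dataclass, field


@dataclass
class WorkerAgent:
    skill: str

    def execute(self, subtask: str) -> str:
        # A real worker would drive robot hardware; here we just log.
        return f"{self.skill}: done '{subtask}'"


@dataclass
class PlannerAgent:
    # Hypothetical verb -> skill routing table.
    routes: dict = field(default_factory=lambda: {
        "pick": "grasping", "place": "grasping", "move": "navigation",
    })

    def plan(self, prompt: str) -> list[str]:
        # Naive decomposition: split the prompt on "then".
        return [s.strip() for s in prompt.split(" then ")]

    def route(self, subtask: str) -> str:
        verb = subtask.split()[0]
        return self.routes.get(verb, "navigation")


def run_task(prompt: str) -> list[str]:
    planner = PlannerAgent()
    workers = {"grasping": WorkerAgent("grasping"),
               "navigation": WorkerAgent("navigation")}
    return [workers[planner.route(s)].execute(s) for s in planner.plan(prompt)]


print(run_task("pick up the carton then move to pallet then place carton"))
```

Because each subtask is routed at execution time, a changed environment only requires re-planning, not reprogramming the workers, which is the adaptability the natural-language approach is aiming for.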

General Intuition Raises $134M to Build AGI-Focused Spatial Reasoning Agents from Gaming Data

General Intuition, a startup spun out from Medal, has raised $133.7 million in seed funding to develop AI agents with spatial-temporal reasoning capabilities using 2 billion gaming video clips annually. The company is training foundation models that can understand how objects move through space and time, with initial applications in gaming NPCs and search-and-rescue drones. The startup positions spatial-temporal reasoning as a critical missing component for achieving AGI that text-based LLMs fundamentally lack.

FieldAI Secures $405M to Develop Physics-Based Universal Robot Brains for Cross-Platform Embodied AI

FieldAI raised $405 million to develop "foundational embodied AI models": universal robot brains that can work across robot types, from humanoids to self-driving cars. The company's approach integrates physics into its AI models to help robots safely adapt to new environments while managing risk, addressing traditional robotics' limitations in generalization and safety.

Google DeepMind Releases Gemini Robotics On-Device Model for Local Robot Control

Google DeepMind has released Gemini Robotics On-Device, a vision-language-action model that can control robots locally, without internet connectivity. The model can perform tasks like unzipping bags and folding clothes, and has been successfully adapted to different robot platforms including ALOHA, Franka FR3, and Apollo humanoid robots. Google is also releasing an SDK that lets developers train robots on new tasks with just 50-100 demonstrations.
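To give a feel for why 50-100 demonstrations can be enough, here is a minimal behavior-cloning sketch. It does not use the Gemini Robotics SDK (whose API is not described in the source); the demonstration generator, observation format, and nearest-neighbor policy are all illustrative assumptions:

```python
# Illustrative sketch (not the Gemini Robotics SDK): adapting a policy
# from a few dozen demonstrations via nearest-neighbor behavior cloning.
# Each demonstration pairs an observation vector with the expert's action.
import math
import random


def collect_demos(n: int = 60) -> list[tuple[tuple[float, float], str]]:
    # Stand-in for 50-100 teleoperated demonstrations: here the "expert"
    # action is a simple function of a 2-D observation.
    random.seed(0)
    demos = []
    for _ in range(n):
        obs = (random.uniform(-1, 1), random.uniform(-1, 1))
        action = "close_gripper" if obs[0] + obs[1] > 0 else "open_gripper"
        demos.append((obs, action))
    return demos


def nn_policy(demos, obs) -> str:
    # Act the way the nearest demonstrated observation did.
    nearest = min(demos, key=lambda d: math.dist(d[0], obs))
    return nearest[1]


demos = collect_demos()
print(nn_policy(demos, (0.9, 0.8)))
print(nn_policy(demos, (-0.9, -0.8)))
```

With only a few dozen observation-action pairs, the policy generalizes to nearby states; production systems replace the nearest-neighbor lookup with fine-tuning a large pretrained model, but the data budget logic is the same.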

RLWRLD Secures $14.8M to Develop Foundational AI Model for Advanced Robotics

South Korean startup RLWRLD has raised $14.8 million in seed funding to develop a foundational AI model specifically for robotics by combining large language models with traditional robotics software. The company aims to enable robots to perform precise tasks, handle delicate materials, and adapt to changing conditions with enhanced capabilities for agile movements and logical reasoning. RLWRLD has attracted strategic investors from major corporations and plans to demonstrate humanoid-based autonomous actions later this year.