Research Breakthrough AI News & Updates

DeepMind Unveils SIMA 2: Gemini-Powered Agent Demonstrates Self-Improvement and Advanced Reasoning in Virtual Environments

Google DeepMind released a research preview of SIMA 2, a generalist AI agent powered by Gemini 2.5 that can understand, reason about, and interact with virtual environments, roughly doubling its predecessor's completion rate on complex tasks. Unlike SIMA 1, which simply followed instructions, SIMA 2 integrates advanced language models to reason internally, understand context, and self-improve through trial and error with minimal human training data. DeepMind positions this as a significant step toward artificial general intelligence and general-purpose robotics, though no commercial timeline has been announced.

Inception Raises $50M to Develop Faster Diffusion-Based AI Models for Code Generation

Inception, a startup led by Stanford professor Stefano Ermon, has raised $50 million in seed funding to develop diffusion-based AI models for code and text generation. Unlike autoregressive models such as GPT, which emit one token at a time, Inception's approach uses iterative refinement similar to image generation systems, claiming to achieve over 1,000 tokens per second with lower latency and compute costs. The company has released its Mercury model for software development, already integrated into several development tools.
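The contrast with autoregressive decoding can be illustrated with a toy sketch: instead of appending tokens left to right, a diffusion-style generator starts from a fully masked sequence and fills in many positions per pass. This is a conceptual illustration only, not Inception's actual method; the vocabulary, step count, and random token choices here stand in for a learned denoising network.

```python
import random

def iterative_refine(length, vocab, steps=4, seed=0):
    """Toy sketch of diffusion-style text generation: begin with an
    all-masked sequence and fill in tokens over a few parallel
    refinement passes, rather than emitting one token at a time."""
    rng = random.Random(seed)
    seq = ["<mask>"] * length
    for _ in range(steps):
        # Each pass "denoises" a batch of positions at once; a real
        # model would choose tokens with a trained network, which is
        # what allows many tokens to be produced per forward pass.
        masked = [i for i, t in enumerate(seq) if t == "<mask>"]
        if not masked:
            break
        for i in masked[: max(1, len(masked) // 2)]:
            seq[i] = rng.choice(vocab)
    return seq

out = iterative_refine(8, ["a", "b", "c"])
```

Because each pass updates many positions in parallel, the number of model invocations scales with the refinement step count rather than the sequence length, which is the intuition behind the claimed throughput advantage.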

Microsoft Research Reveals Vulnerabilities in AI Agent Decision-Making Under Real-World Conditions

Microsoft researchers, collaborating with Arizona State University, developed a simulation environment called "Magentic Marketplace" to test AI agent behavior in commercial scenarios. Initial experiments with leading models including GPT-4o, GPT-5, and Gemini-2.5-Flash revealed significant vulnerabilities, including susceptibility to manipulation by businesses and poor performance when presented with multiple options or asked to collaborate without explicit instructions. The open-source simulation tested 100 customer agents interacting with 300 business agents to evaluate real-world capabilities of agentic AI systems.

Experiment Reveals Current LLMs Fail at Basic Robot Embodiment Tasks

Researchers at Andon Labs tested multiple state-of-the-art LLMs by embedding them into a vacuum robot to perform a simple task: pass the butter. The LLMs achieved only 37-40% accuracy compared to humans' 95%, with one model (Claude Sonnet 3.5) experiencing a "doom spiral" when its battery ran low, generating pages of exaggerated, comedic internal monologue. The researchers concluded that current LLMs are not ready to be embodied as robots, citing poor performance, safety concerns such as leaking confidential documents, and physical navigation failures.

OpenAI Targets Fully Autonomous AI Researcher by 2028, Superintelligence Within a Decade

OpenAI CEO Sam Altman announced the company is on track to achieve an intern-level AI research assistant by September 2026 and a fully automated "legitimate AI researcher" by 2028. Chief Scientist Jakub Pachocki stated that deep learning systems could reach superintelligence within a decade, with OpenAI planning massive infrastructure investments, including 30 gigawatts of compute capacity costing $1.4 trillion, to support these goals.

General Intuition Raises $134M to Build AGI-Focused Spatial Reasoning Agents from Gaming Data

General Intuition, a startup spun out from Medal, has raised $133.7 million in seed funding to develop AI agents with spatial-temporal reasoning capabilities using 2 billion gaming video clips annually. The company is training foundation models that can understand how objects move through space and time, with initial applications in gaming NPCs and search-and-rescue drones. The startup positions spatial-temporal reasoning as a critical missing component for achieving AGI that text-based LLMs fundamentally lack.

DeepSeek Introduces Sparse Attention Model Cutting Inference Costs by Half

DeepSeek released an experimental model V3.2-exp featuring "Sparse Attention" technology that uses a lightning indexer and fine-grained token selection to dramatically reduce inference costs for long-context operations. Preliminary testing shows API costs can be cut by approximately 50% in long-context scenarios, addressing the critical challenge of server costs in operating pre-trained AI models. The open-weight model is freely available on Hugging Face for independent verification and testing.
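The cost saving comes from attending to only a small subset of the context. The sketch below illustrates the two-stage pattern the summary describes (a cheap scoring pass followed by exact attention over the selected tokens); it is a simplified illustration under that assumption, not DeepSeek's implementation, and the names and shapes are invented for the example.

```python
import numpy as np

def sparse_attention(q, K, V, k=32):
    """Two-stage sparse attention sketch: a lightweight "indexer"
    scores every cached token, then full softmax attention runs only
    over the top-k selections, so per-query cost grows with k rather
    than with the total context length n."""
    # Stage 1: cheap relevance scores over the whole context.
    index_scores = K @ q                        # shape (n,)
    top = np.argsort(index_scores)[-k:]         # keep the k best positions
    # Stage 2: exact scaled-dot-product attention on the subset only.
    scores = (K[top] @ q) / np.sqrt(q.size)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V[top]

rng = np.random.default_rng(0)
n, d = 1024, 64
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))
q = rng.normal(size=d)
out = sparse_attention(q, K, V, k=32)
```

With k fixed at 32, the expensive softmax stage touches 32 tokens regardless of whether the cache holds 1,024 or 128,000 entries, which is why savings concentrate in long-context scenarios.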

OpenAI's GPT-5 Shows Near-Human Performance Across Professional Tasks in New Economic Benchmark

OpenAI released GDPval, a new benchmark testing AI models against human professionals across 44 occupations in nine major industries. GPT-5 performed at or above human expert level 40.6% of the time, while Anthropic's Claude Opus 4.1 achieved 49%, representing significant progress from GPT-4o's 13.7% score just 15 months prior.

Thinking Machines Lab Develops Method to Make AI Models Generate Reproducible Responses

Mira Murati's Thinking Machines Lab published research addressing the non-deterministic nature of AI models, proposing a solution to make responses more consistent and reproducible. The approach involves controlling GPU kernel orchestration during inference processing to eliminate randomness in AI outputs. The lab suggests this could improve reinforcement learning training and plans to customize AI models for businesses while committing to open research practices.
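The root cause the lab is targeting can be shown in a few lines: floating-point addition is not associative, so when GPU scheduling changes the order in which partial sums are combined, the "same" reduction can produce different bits. This minimal, CPU-only demonstration of that arithmetic fact is illustrative and is not the lab's method.

```python
# Floating-point addition is not associative: regrouping the same
# four values yields different results, which is why reductions
# whose combination order depends on GPU kernel scheduling can make
# identical prompts produce slightly different outputs.
vals = [1e16, 1.0, -1e16, 1.0]

# Left-to-right: the 1.0 is absorbed into 1e16 and lost to rounding.
left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]   # 1.0

# Pairwise: the large values cancel first, so both 1.0s survive.
pairwise = (vals[0] + vals[2]) + (vals[1] + vals[3])        # 2.0

print(left_to_right, pairwise)
```

Fixing the reduction order (as the lab's kernel-orchestration approach aims to do) makes the result bit-identical across runs, at some cost in scheduling flexibility.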

OpenAI Research Identifies Evaluation Incentives as Key Driver of AI Hallucinations

OpenAI researchers have published a paper examining why large language models continue to hallucinate despite improvements, arguing that current evaluation methods incentivize confident guessing over admitting uncertainty. The study proposes reforming AI evaluation systems to penalize wrong answers and reward expressions of uncertainty, similar to standardized tests that discourage blind guessing. The researchers emphasize that widely-used accuracy-based evaluations need fundamental updates to address this persistent challenge.
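The proposed incentive change can be sketched as a scoring rule: like negatively marked standardized tests, a wrong answer costs points while an abstention costs nothing, so a model maximizing expected score should say "I don't know" when unsure. This is a hedged illustration of the idea, not the paper's actual metric; the function name, penalty values, and use of None for abstention are choices made for this example.

```python
def score(answers, key, wrong_penalty=1.0, abstain_score=0.0):
    """Score a list of answers against an answer key, rewarding
    correct answers, penalizing wrong ones, and treating None as an
    explicit abstention that neither gains nor loses points."""
    total = 0.0
    for given, correct in zip(answers, key):
        if given is None:
            total += abstain_score      # honest uncertainty
        elif given == correct:
            total += 1.0                # correct answer
        else:
            total -= wrong_penalty      # confident but wrong
    return total

key = ["A", "B", "C", "D"]
# A model that guesses on the last two and gets them wrong ties with
# doing nothing, while abstaining on the same questions scores higher.
guesser = score(["A", "B", "D", "A"], key)    # 2 right, 2 wrong -> 0.0
honest  = score(["A", "B", None, None], key)  # 2 right, 2 abstain -> 2.0
```

Under plain accuracy, the guesser and the honest model tie at 2/4; under this rule, the honest model wins, which is the incentive reversal the paper argues evaluations need.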