Current AI Risk Assessment

24.32%

Chance of AI Control Loss

November 27, 2035

Estimated Date of Control Loss

AGI Development Metrics

75.79%

AGI Progress

December 13, 2029

Estimated Date of AGI

Risk Trend Over Time

Latest AI News (Last 3 Days)

March 10, 2026
-0.02% Risk

Mira Murati's Thinking Machines Lab Secures Major Nvidia Compute Partnership for AI Development

Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, has signed a multi-year strategic partnership with Nvidia to deploy at least one gigawatt of Vera Rubin systems starting in 2027. The seed-stage company, valued at over $12 billion with $2 billion raised, is developing AI models designed to produce reproducible results but has not yet released any products.

Yann LeCun's AMI Labs Secures $1.03B to Develop World Models as Alternative to LLMs

AMI Labs, co-founded by Turing Award winner Yann LeCun, has raised $1.03 billion at a $3.5 billion valuation to develop world models based on Joint Embedding Predictive Architecture (JEPA). Unlike traditional large language models, world models aim to learn from reality rather than just language, with initial applications planned in healthcare through partner Nabla. The project focuses on fundamental research and may take years to produce commercial applications; the startup has committed to open research and code sharing.

March 9, 2026
-0.19% Risk

AI Industry Rallies Behind Anthropic in Pentagon Supply Chain Risk Designation Dispute

Over 30 employees from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's lawsuit against the U.S. Department of Defense, which labeled the AI firm a supply chain risk after it refused to allow use of its technology for mass surveillance or autonomous weapons. The Pentagon subsequently signed a deal with OpenAI, prompting industry-wide concern about government overreach and its implications for AI development guardrails. The employees argue that punishing Anthropic for establishing safety boundaries will harm U.S. AI competitiveness and discourage responsible AI development practices.

Anthropic Deploys AI-Powered Code Review Tool to Manage Surge in AI-Generated Code

Anthropic has launched Code Review, an AI-powered tool integrated into Claude Code that automatically analyzes pull requests to catch bugs and logical errors in AI-generated code. The tool uses multiple AI agents working in parallel to review code from different perspectives, focusing on high-priority logical errors rather than style issues. This product targets enterprise customers dealing with increased code review bottlenecks caused by AI coding tools that rapidly generate large amounts of code.

OpenAI Acquires AI Security Startup Promptfoo to Bolster Agent Safety

OpenAI has acquired Promptfoo, an AI security startup founded in 2024 that specializes in protecting large language models from adversaries and testing security vulnerabilities. The acquisition will integrate Promptfoo's technology into OpenAI Frontier, OpenAI's enterprise platform for AI agents, enabling automated red-teaming, security evaluation, and risk monitoring. The deal highlights growing concerns about securing autonomous AI agents as they gain access to sensitive business operations.

March 8, 2026
-0.13% Risk

Bipartisan Coalition Releases Pro-Human Declaration Framework for AI Governance Amid Pentagon-Anthropic Standoff

A bipartisan coalition of experts has released the Pro-Human Declaration, a framework for responsible AI development that includes prohibitions on superintelligence development until proven safe, mandatory off-switches, and bans on self-replicating AI systems. The declaration's release coincided with a conflict between the Pentagon and Anthropic over military AI access, highlighting the absence of coherent government AI regulations. The framework emphasizes keeping humans in control, preventing power concentration, and establishing pre-deployment testing requirements, particularly for AI products targeting children.


AI Risk Assessment Methodology

Our risk assessment methodology combines continuous news monitoring, structured impact analysis, and probabilistic modeling to evaluate AI development and its potential implications:

Data Collection

We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.

Impact Analysis

Each news item undergoes rigorous assessment through:

  • Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
  • Safety Research: Progress in alignment, interpretability, and containment mechanisms
  • Governance Factors: Regulatory developments, industry standards, and institutional safeguards
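The three dimensions above can be combined into a single signed impact score per news item. The sketch below is illustrative only: the weights, score ranges, and function name are assumptions, not the site's actual parameters.

```python
# Hypothetical sketch: fold the three assessment dimensions into one
# signed impact score. Positive scores push toward higher control-loss
# risk, negative scores toward lower risk. All values are illustrative.

DIMENSION_WEIGHTS = {
    "technical": 0.5,   # capability advances tend to raise risk
    "safety": 0.3,      # alignment/containment progress lowers risk
    "governance": 0.2,  # regulation and safeguards lower risk
}

def impact_score(scores: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores, each assumed to lie in [-1, 1]."""
    return sum(DIMENSION_WEIGHTS[d] * scores.get(d, 0.0)
               for d in DIMENSION_WEIGHTS)

# A safety-positive story (e.g. a new code-review guardrail) might score:
score = impact_score({"technical": 0.1, "safety": -0.6, "governance": -0.2})
# 0.5*0.1 + 0.3*(-0.6) + 0.2*(-0.2) = -0.17
```

Under this scheme, a day's net-negative total would correspond to the small downward risk deltas shown in the news feed.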

Indicator Calculation

Our indicators are updated using a Bayesian probabilistic model that:

  • Assigns weighted impact scores to each analyzed development
  • Calculates cumulative effects on control loss probability and AGI timelines
  • Accounts for interdependencies between different technological trajectories
  • Maintains historical trends to identify acceleration or deceleration patterns

This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.