Current AI Risk Assessment

24.34%

Chance of AI Control Loss

November 27, 2035

Estimated Date of Control Loss

AGI Development Metrics

75.73%

AGI Progress

December 15, 2029

Estimated Date of AGI

Risk Trend Over Time

Latest AI News (Last 3 Days)

March 9, 2026
-0.19% Risk

AI Industry Rallies Behind Anthropic in Pentagon Supply Chain Risk Designation Dispute

Over 30 employees from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's lawsuit against the U.S. Department of Defense, which labeled the AI firm a supply chain risk after it refused to allow use of its technology for mass surveillance or autonomous weapons. The Pentagon subsequently signed a deal with OpenAI, prompting industry-wide concern about government overreach and its implications for AI development guardrails. The employees argue that punishing Anthropic for establishing safety boundaries will harm U.S. AI competitiveness and discourage responsible AI development practices.

Anthropic Deploys AI-Powered Code Review Tool to Manage Surge in AI-Generated Code

Anthropic has launched Code Review, an AI-powered tool integrated into Claude Code that automatically analyzes pull requests to catch bugs and logical errors in AI-generated code. The tool uses multiple AI agents working in parallel to review code from different perspectives, focusing on high-priority logical errors rather than style issues. This product targets enterprise customers dealing with increased code review bottlenecks caused by AI coding tools that rapidly generate large amounts of code.

OpenAI Acquires AI Security Startup Promptfoo to Bolster Agent Safety

OpenAI has acquired Promptfoo, an AI security startup founded in 2024 that specializes in protecting large language models from adversaries and testing security vulnerabilities. The acquisition will integrate Promptfoo's technology into OpenAI Frontier, OpenAI's enterprise platform for AI agents, enabling automated red-teaming, security evaluation, and risk monitoring. The deal highlights growing concerns about securing autonomous AI agents as they gain access to sensitive business operations.

March 8, 2026
-0.13% Risk

Bipartisan Coalition Releases Pro-Human Declaration Framework for AI Governance Amid Pentagon-Anthropic Standoff

A bipartisan coalition of experts has released the Pro-Human Declaration, a framework for responsible AI development that includes prohibitions on superintelligence development until proven safe, mandatory off-switches, and bans on self-replicating AI systems. The declaration's release coincided with a conflict between the Pentagon and Anthropic over military AI access, highlighting the absence of coherent government AI regulations. The framework emphasizes keeping humans in control, preventing power concentration, and establishing pre-deployment testing requirements, particularly for AI products targeting children.

March 7, 2026
+0.04% Risk

OpenAI Robotics Lead Resigns Over Pentagon Partnership Citing Governance and Red Line Concerns

Caitlin Kalinowski, OpenAI's robotics lead, resigned in protest of the company's Department of Defense agreement, citing concerns about surveillance of Americans and lethal autonomy without proper guardrails and deliberation. The controversial Pentagon deal, announced after Anthropic's negotiations fell through, has led to a 295% surge in ChatGPT uninstalls and elevated Claude to the top of App Store charts. Kalinowski emphasized her decision was based on governance principles, specifically that the announcement was rushed without adequately defined safeguards.

AI Risk Assessment Methodology

Our risk assessment combines automated news analysis with probabilistic modeling to evaluate AI development and its potential implications:

Data Collection

We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.

Impact Analysis

Each news item undergoes rigorous assessment through:

  • Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
  • Safety Research: Progress in alignment, interpretability, and containment mechanisms
  • Governance Factors: Regulatory developments, industry standards, and institutional safeguards
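As an illustration of how the three assessment axes above could combine into a single impact score, here is a minimal sketch. The site does not publish its scoring model; the dimension weights, the linear combination, and the sign conventions (capability gains raise risk, safety and governance progress lower it) are assumptions for demonstration only.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    technical: float    # capability advancement, in -1.0 .. 1.0
    safety: float       # alignment/containment progress, in -1.0 .. 1.0
    governance: float   # regulatory/institutional safeguards, in -1.0 .. 1.0

def impact_score(a: Assessment,
                 weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted impact of one news item on control-loss risk.

    Capability gains push risk up; safety and governance progress
    pull it down, hence the subtracted terms. Weights are illustrative.
    """
    w_tech, w_safe, w_gov = weights
    return w_tech * a.technical - w_safe * a.safety - w_gov * a.governance

# Example: a capability breakthrough with modest safety relevance.
item = Assessment(technical=0.8, safety=0.1, governance=0.0)
print(round(impact_score(item), 3))  # 0.5*0.8 - 0.3*0.1 - 0.2*0.0 = 0.37
```

A linear weighted sum is the simplest possible choice here; a production system would likely use calibrated, nonlinear scoring.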

Indicator Calculation

Our indicators are updated using a Bayesian probabilistic model that:

  • Assigns weighted impact scores to each analyzed development
  • Calculates cumulative effects on control loss probability and AGI timelines
  • Accounts for interdependencies between different technological trajectories
  • Maintains historical trends to identify acceleration or deceleration patterns
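One way a Bayesian update like the one described above could work is to shift the current probability in log-odds space by each item's weighted impact. This is a hedged sketch under stated assumptions, not the site's actual model: the `scale` factor and the choice of a log-odds shift are hypothetical.

```python
import math

def update_probability(prior: float, impact: float, scale: float = 0.05) -> float:
    """Bayesian-style update of a probability estimate.

    Converts the prior to log-odds, shifts it by the item's weighted
    impact score (scaled by an assumed sensitivity factor), and
    converts back to a probability. Keeps the result in (0, 1).
    """
    log_odds = math.log(prior / (1.0 - prior))
    log_odds += scale * impact
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: the 24.34% control-loss estimate nudged down slightly
# by a safety-positive news item with a negative impact score.
p = update_probability(0.2434, impact=-0.37)
print(f"{p:.2%}")  # slightly below 24.34%
```

Working in log-odds space keeps updates well-behaved near the 0% and 100% boundaries, which matters for cumulative effects across many small daily adjustments.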

This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.