Current AI Risk Assessment

24.79%

Chance of AI Control Loss

November 27, 2035

Estimated Date of Control Loss

AGI Development Metrics

75.64%

AGI Progress

December 15, 2029

Estimated Date of AGI

Risk Trend Over Time

Latest AI News (Last 3 Days)

March 2, 2026
+0.04% Risk

OpenAI and Anthropic Navigate Turbulent Government Contracts Amid Pentagon Pressure

OpenAI CEO Sam Altman faced public backlash after accepting a Pentagon contract that Anthropic rejected due to concerns over mass surveillance and automated weaponry. The U.S. Defense Secretary threatened to designate Anthropic as a supply-chain risk for refusing to change contract terms, creating unprecedented pressure on AI companies working with the government. The situation highlights how leading AI labs are unprepared for the political complexities of becoming national security contractors.

March 1, 2026
+0.1% Risk

OpenAI Finalizes Pentagon Agreement Following Anthropic's Withdrawal

OpenAI announced a deal with the Department of Defense to deploy AI models in classified environments after Anthropic's negotiations with the Pentagon collapsed. The agreement includes stated red lines against mass domestic surveillance, autonomous weapons, and high-stakes automated decisions, though critics question whether the contractual language effectively prevents domestic surveillance. OpenAI defends its multi-layered approach including cloud-only deployment and retained control over safety systems.

Trump Administration Blacklists Anthropic Over Refusal to Support Military Surveillance and Autonomous Weapons

The Trump administration has severed ties with Anthropic and invoked national security laws to blacklist the AI company after it refused to allow its technology to be used for mass surveillance of U.S. citizens or for autonomous armed drones. MIT physicist Max Tegmark argues that Anthropic and other AI companies have created their own predicament by resisting binding safety regulation while breaking their voluntary safety commitments. The incident highlights the regulatory vacuum in AI development and raises the question of whether other AI companies will stand with Anthropic or compete for the Pentagon contract.

February 28, 2026
+0.06% Risk

OpenAI Secures Pentagon AI Contract with Safety Protections Amid Anthropic Standoff

OpenAI has reached an agreement with the Department of Defense to deploy its AI models on classified networks, including technical safeguards against mass domestic surveillance and autonomous weapons. This follows a public conflict between the Pentagon and Anthropic over usage restrictions, which resulted in Trump administration threats to designate Anthropic as a supply-chain risk and ban federal agencies from using its products. OpenAI claims its deal includes protections for the same ethical concerns Anthropic sought, and is asking the government to extend these terms to all AI companies.


AI Risk Assessment Methodology

Our risk assessment methodology combines continuous news monitoring with a probabilistic updating model to evaluate AI development and its potential implications:

Data Collection

We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.

Impact Analysis

Each news item is assessed along three dimensions:

  • Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
  • Safety Research: Progress in alignment, interpretability, and containment mechanisms
  • Governance Factors: Regulatory developments, industry standards, and institutional safeguards
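A weighted-scoring scheme across these three dimensions can be sketched as follows. The dimension names, weights, and score ranges below are illustrative assumptions, not the actual model parameters:

```python
# Illustrative dimension weights (assumed values, not the real model's)
WEIGHTS = {
    "technical": 0.5,   # computational advancements and capability gains
    "safety": 0.3,      # alignment/interpretability progress (risk-reducing)
    "governance": 0.2,  # regulatory and institutional safeguards
}

def impact_score(assessments: dict[str, float]) -> float:
    """Combine per-dimension scores in [-1, 1] into one weighted impact score.

    Positive scores push estimated risk up; negative scores pull it down.
    Missing dimensions default to 0 (no assessed effect).
    """
    return sum(WEIGHTS[d] * assessments.get(d, 0.0) for d in WEIGHTS)

# Example: a capability jump with slight offsetting safety progress
score = impact_score({"technical": 0.8, "safety": -0.1, "governance": 0.0})
print(round(score, 2))  # 0.37
```

In this sketch, a single news item's net effect is a linear combination of its per-dimension assessments, which is the simplest way to make the three evaluation tracks commensurable before they feed the indicator update.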

Indicator Calculation

Our indicators are updated using a Bayesian probabilistic model that:

  • Assigns weighted impact scores to each analyzed development
  • Calculates cumulative effects on control loss probability and AGI timelines
  • Accounts for interdependencies between different technological trajectories
  • Maintains historical trends to identify acceleration or deceleration patterns
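One way such probabilistic updating might work is to apply each development's weighted impact score as a shift in log-odds space, which keeps the indicator bounded in (0, 1) while letting small daily impacts accumulate. The `scale` parameter and the sample impacts below are illustrative assumptions:

```python
import math

def logit(p: float) -> float:
    """Map a probability in (0, 1) to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    """Map log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def update_probability(prior: float, impact: float, scale: float = 0.05) -> float:
    """Shift the control-loss probability in log-odds space.

    `impact` is one development's weighted score; `scale` (an assumed
    value) controls how much a single item moves the indicator.
    """
    return sigmoid(logit(prior) + scale * impact)

p = 0.2479                          # current indicator value
for impact in [0.37, 0.12, -0.20]:  # hypothetical daily impact scores
    p = update_probability(p, impact)
print(f"{p:.4%}")
```

A zero-impact day leaves the probability unchanged, and equal positive and negative impacts cancel exactly, which matches the intuition that the trend line should only move when assessed developments do.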

This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.