Current AI Risk Assessment

25.06%

Chance of AI Control Loss

November 7, 2035

Estimated Date of Control Loss

AGI Development Metrics

76.55%

AGI Progress

November 27, 2029

Estimated Date of AGI

Risk Trend Over Time

Latest AI News (Last 3 Days)

April 8, 2026
+0.04% Risk

Databricks CTO Declares AGI Already Achieved, Warns Against Anthropomorphizing AI Systems

Matei Zaharia, Databricks co-founder and CTO, received the 2026 ACM Prize in Computing for contributions including Apache Spark. He controversially claims that AGI is "here already" but argues we should not apply human standards to AI models, citing security risks when AI agents are treated like trusted human assistants. Zaharia emphasizes AI's potential for automating research while warning that anthropomorphization leads to misplaced trust and security vulnerabilities.

April 7, 2026
+0.07% Risk

Arcee Releases Trinity Large Thinking: 400B Open-Source Reasoning Model as Western Alternative to Chinese AI

Arcee, a 26-person U.S. startup, has released Trinity Large Thinking, a 400-billion parameter open-source reasoning model built on a $20 million budget. The company positions it as the most capable open-weight model from a non-Chinese company, offering Western businesses an alternative to Chinese models with genuine Apache 2.0 licensing. While not outperforming closed-source models from major labs, it provides independence from both Chinese government concerns and the policy changes of large AI companies.

Anthropic Releases Mythos: Powerful Frontier AI Model for Cybersecurity Vulnerability Detection

Anthropic has released a limited preview of Mythos, described as one of its most powerful frontier AI models, to over 40 partner organizations including Amazon, Apple, Microsoft, and Cisco for defensive cybersecurity work. The model has reportedly identified thousands of zero-day vulnerabilities in software systems, some dating back one to two decades. While designed as a general-purpose model with strong coding and reasoning capabilities, concerns exist about potential weaponization by bad actors to exploit rather than fix vulnerabilities.

Anthropic Secures Massive 3.5 Gigawatt Compute Expansion with Google and Broadcom

Anthropic has signed an expanded agreement with Google and Broadcom to secure 3.5 gigawatts of additional compute capacity using Google's TPUs, coming online in 2027. This deal supports the company's rapid growth, with run-rate revenue jumping from $9 billion to $30 billion and over 1,000 enterprise customers spending $1M+ annually. The expansion reflects unprecedented demand for Claude AI models despite some U.S. government supply chain concerns.

April 6, 2026
-0.08% Risk

OpenAI Proposes Economic Framework for Superintelligence Era Including Robot Taxes and Public Wealth Funds

OpenAI has released policy proposals for managing economic changes expected from superintelligent AI, including shifting taxes from labor to capital, creating public wealth funds to distribute AI profits, and subsidizing four-day work weeks. The framework aims to distribute AI-driven prosperity broadly while building safeguards against systemic risks, though critics may question whether these proposals align with OpenAI's recent shift to for-profit status. The proposals come as governments worldwide grapple with AI's potential to displace jobs and concentrate wealth.

AI Risk Assessment Methodology

Our risk assessment evaluates AI developments and their potential implications in three stages: data collection, impact analysis, and indicator calculation.

Data Collection

We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.

Impact Analysis

Each news item is assessed along three dimensions:

  • Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
  • Safety Research: Progress in alignment, interpretability, and containment mechanisms
  • Governance Factors: Regulatory developments, industry standards, and institutional safeguards
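The three dimensions above can be sketched as a scored record per news item. This is a minimal illustration only: the class and field names (ScoredItem, technical, safety, governance) are assumptions for the sketch, not the site's actual schema.

```python
# Hypothetical per-item scoring record; names are illustrative,
# not the tracker's real data model.
from dataclasses import dataclass

@dataclass
class ScoredItem:
    title: str
    technical: float   # capability advance (positive = risk-increasing)
    safety: float      # alignment/interpretability progress (positive = risk-reducing)
    governance: float  # regulatory/institutional safeguards (positive = risk-reducing)

    def net_impact(self) -> float:
        # Capability gains raise risk; safety and governance progress lower it.
        return self.technical - self.safety - self.governance
```

Under this convention, an item scoring 0.5 on capability with 0.1 each on safety and governance would contribute a net positive (risk-increasing) impact.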

Indicator Calculation

Our indicators are updated using a Bayesian probabilistic model that:

  • Assigns weighted impact scores to each analyzed development
  • Calculates cumulative effects on control loss probability and AGI timelines
  • Accounts for interdependencies between different technological trajectories
  • Maintains historical trends to identify acceleration or deceleration patterns
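One way to read the weighted-update step described above is as a shift in log-odds space, which keeps the probability bounded in (0, 1) no matter how many updates accumulate. This is a hedged sketch of that idea, not the tracker's actual model; the function name, weight parameter, and impact scale are assumptions.

```python
# Illustrative sketch of a weighted probabilistic update in log-odds space.
import math

def update_probability(prior: float, impact: float, weight: float = 1.0) -> float:
    """Shift a probability by a weighted impact score in log-odds space.

    impact > 0 nudges the probability up (risk-increasing news);
    impact < 0 nudges it down (risk-reducing news).
    The result always stays strictly between 0 and 1.
    """
    log_odds = math.log(prior / (1.0 - prior))
    log_odds += weight * impact
    return 1.0 / (1.0 + math.exp(-log_odds))

# A small positive impact score moves the control-loss probability
# slightly upward, matching the per-item risk deltas shown in the feed.
p_new = update_probability(0.2506, impact=0.002)
```

The log-odds form also makes weights interpretable: doubling a development's weight doubles its shift in log-odds, while the absolute change in probability shrinks as the estimate approaches 0 or 1.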

This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.