Current AI Risk Assessment

Chance of AI Control Loss: 26.58%
Estimated Date of Control Loss: October 13, 2035

AGI Development Metrics

AGI Progress: 77.71%
Estimated Date of AGI: November 3, 2029

Risk Trend Over Time

Latest AI News (Last 3 Days)

May 3, 2026
+0.04% Risk

OpenAI's GPT Models Outperform Emergency Room Physicians in Diagnostic Accuracy Study

A Harvard Medical School study published in Science found that OpenAI's o1 model provided more accurate diagnoses than human emergency room physicians when analyzing 76 real patient cases from Beth Israel Deaconess Medical Center. The AI model achieved exact or close diagnoses in 67% of initial triage cases compared to 50-55% for attending physicians, though researchers emphasized the need for prospective trials before real-world clinical deployment. The study only evaluated text-based information and acknowledged current AI limitations with non-text inputs and the need for human accountability in medical decision-making.

May 1, 2026
+0.07% Risk

Meta Acquires Humanoid Robotics Startup to Advance Embodied AI Research

Meta has acquired Assured Robot Intelligence (ARI), a startup developing foundation models for humanoid robots capable of performing physical labor and adapting to human behaviors. The ARI team, including co-founders Xiaolong Wang and Lerrel Pinto, will join Meta's Superintelligence Labs to advance whole-body humanoid control technology. The acquisition reflects the broader industry belief that achieving AGI may require training AI models through physical world interactions rather than data alone.

Elon Musk's OpenAI Lawsuit Centers on Alleged Betrayal of Nonprofit Mission

Elon Musk testified for three days in his lawsuit against OpenAI, arguing that Sam Altman betrayed the organization's original nonprofit mission by converting it to a for-profit model. The case involves examining emails, texts, and tweets as evidence, with Altman and other witnesses yet to testify. Musk claims the transformation violated the "nonprofit for the benefit of humanity" purpose he initially agreed to fund.

Pentagon Expands AI Arsenal with Nvidia, Microsoft, and AWS Deals for Classified Military Networks

The U.S. Department of Defense has signed agreements with Nvidia, Microsoft, Amazon Web Services, and Reflection AI to deploy their AI technologies and models on classified military networks at high security levels (IL6 and IL7). These deals are part of the Pentagon's strategy to become an "AI-first fighting force" and to diversify AI vendors following a legal dispute with Anthropic over usage restrictions. The AI systems will be used for data synthesis, situational awareness, and augmenting military decision-making in operational warfare contexts.

April 30, 2026
+0.17% Risk

Anthropic Seeks $900B+ Valuation in Massive Funding Round Ahead of Anticipated IPO

Anthropic is soliciting investor allocations for a roughly $50 billion funding round targeting a $900 billion valuation, expected to close within two weeks. The company, whose annual revenue run rate has surpassed $30 billion (closer to $40 billion, according to some sources), is raising capital to fund computing infrastructure ahead of a planned IPO later this year. The round would more than double its February 2026 valuation of $380 billion and surpass rival OpenAI's $852 billion valuation.

OpenAI Restricts Access to GPT-5.5 Cyber Tool Despite Criticizing Anthropic's Similar Approach

OpenAI is limiting access to its new cybersecurity tool, GPT-5.5 Cyber, releasing it only to "critical cyber defenders" through an application process, despite CEO Sam Altman previously criticizing Anthropic for taking the same approach with its Mythos tool. The tool can perform penetration testing, vulnerability identification, and malware reverse engineering, with concerns about potential misuse by malicious actors. OpenAI is consulting with the U.S. government to eventually expand access to verified cybersecurity professionals.

Elon Musk Confirms xAI Used Model Distillation on OpenAI's Grok Training

Elon Musk testified in federal court that xAI used distillation techniques—training AI models by prompting competitors' chatbots—on OpenAI models to develop Grok, calling it a general industry practice. This admission comes amid growing concerns from frontier labs like OpenAI and Anthropic about distillation undermining their competitive advantages, particularly regarding Chinese firms creating cheaper, comparable models. The revelation highlights potential violations of terms of service and raises questions about the ethics and legality of such practices among leading AI companies.

Stripe Launches Link Digital Wallet with Autonomous AI Agent Payment Capabilities

Stripe has introduced Link, a digital wallet designed for both human users and autonomous AI agents to manage payments securely. The wallet allows users to grant AI agents controlled spending permissions without exposing raw payment credentials, using OAuth authentication and approval workflows. Link supports payment methods including cards, banks, crypto wallets, and buy now/pay later services, with plans to add agentic tokens and stablecoins.

Anthropic in Talks for Massive $50B Funding Round at $900B Valuation Amid Explosive Revenue Growth

Anthropic, creator of the Claude AI assistant, is reportedly considering a $40-50 billion funding round at a valuation between $850-900 billion, with a board decision expected in May. The company's annual revenue run rate has surged dramatically from approximately $9 billion at the end of 2025 to over $30 billion recently, with current estimates closer to $40 billion, driven largely by AI coding capabilities through Claude Code and Cowork platforms. This potential raise would more than double Anthropic's February valuation of $380 billion and position it competitively with OpenAI's $852 billion valuation.


AI Risk Assessment Methodology

Our risk assessment methodology combines continuous news monitoring with probabilistic modeling to evaluate AI development and its potential implications:

Data Collection

We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.

Impact Analysis

Each news item undergoes rigorous assessment through:

  • Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
  • Safety Research: Progress in alignment, interpretability, and containment mechanisms
  • Governance Factors: Regulatory developments, industry standards, and institutional safeguards
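
The three dimensions above can be combined into a single net impact score per news item. Here is a minimal sketch of one way to do that; the weights, field names, and sign conventions are illustrative assumptions, not the site's published model:

```python
from dataclasses import dataclass

# Illustrative weights: capability advances raise risk, while safety
# and governance progress lower it. The actual weights are not published.
WEIGHTS = {"technical": 0.5, "safety": -0.3, "governance": -0.2}

@dataclass
class Assessment:
    technical: float   # capability advance, 0..1
    safety: float      # alignment/containment progress, 0..1
    governance: float  # regulatory/institutional safeguards, 0..1

def impact_score(a: Assessment) -> float:
    """Weighted sum across the three assessment dimensions."""
    return (WEIGHTS["technical"] * a.technical
            + WEIGHTS["safety"] * a.safety
            + WEIGHTS["governance"] * a.governance)

# Example: a major capability advance with little safety offset
# yields a positive (risk-increasing) score.
print(round(impact_score(Assessment(0.8, 0.1, 0.2)), 3))
```

A positive score maps to an upward daily risk adjustment (e.g. the "+0.04% Risk" figures shown in the news feed), while safety or governance progress can push the score negative.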

Indicator Calculation

Our indicators are updated using a Bayesian probabilistic model that:

  • Assigns weighted impact scores to each analyzed development
  • Calculates cumulative effects on control loss probability and AGI timelines
  • Accounts for interdependencies between different technological trajectories
  • Maintains historical trends to identify acceleration or deceleration patterns
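
One simple way such a model could fold a weighted impact score into the control-loss probability is a shift in log-odds space, which keeps the estimate strictly between 0 and 1. This is a hypothetical sketch; the `scale` parameter and the log-odds form are assumptions, since the actual update rule is not specified:

```python
import math

def update_probability(prior: float, impact: float, scale: float = 0.1) -> float:
    """Shift a probability in log-odds space by a scaled impact score,
    so the result always stays in the open interval (0, 1)."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += scale * impact
    return 1 / (1 + math.exp(-log_odds))

# Example: starting from the dashboard's 26.58% estimate and applying
# a modest positive-risk development nudges the probability upward.
p = update_probability(0.2658, 0.33)
print(f"{p:.4%}")
```

A zero-impact day leaves the probability unchanged, and repeated daily updates naturally accumulate into the historical trend the indicators track.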

This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.