Current AI Risk Assessment
Chance of AI Control Loss
Estimated Date of Control Loss
AGI Development Metrics
AGI Progress
Estimated Date of AGI
Risk Trend Over Time
Latest AI News (Last 3 Days)
OpenAI and Anthropic Navigate Turbulent Government Contracts Amid Pentagon Pressure
OpenAI CEO Sam Altman faced public backlash after accepting a Pentagon contract that Anthropic rejected due to concerns over mass surveillance and automated weaponry. The U.S. Defense Secretary threatened to designate Anthropic as a supply chain risk for refusing to change contract terms, creating unprecedented pressure on AI companies working with the government. The situation highlights how leading AI labs are unprepared for the political complexities of becoming national security contractors.
Skynet Chance (+0.04%): The normalization of AI companies providing capabilities for mass surveillance and automated weaponry to government agencies increases risks of misuse and loss of control over powerful AI systems. The political pressure forcing companies to choose between survival and ethical constraints weakens safety guardrails.
Skynet Date (-1 days): The government's aggressive push to integrate AI into defense infrastructure and willingness to destroy non-compliant companies accelerates the deployment of powerful AI systems in high-stakes military contexts. This bypasses careful safety considerations and rushes advanced AI into operational use.
AGI Progress (+0.01%): While the article focuses on governance rather than technical capabilities, the integration of frontier AI models into national security infrastructure indicates these systems are becoming sufficiently capable for critical applications. However, this is primarily about deployment of existing capabilities rather than fundamental research progress.
AGI Date (+0 days): Massive government investment and prioritization of AI development for national security purposes will likely increase funding and urgency around AI capabilities research. The competitive dynamics between companies seeking government contracts may accelerate capability development, though this is a secondary effect.
OpenAI Finalizes Pentagon Agreement Following Anthropic's Withdrawal
OpenAI announced a deal with the Department of Defense to deploy AI models in classified environments after Anthropic's negotiations with the Pentagon collapsed. The agreement includes stated red lines against mass domestic surveillance, autonomous weapons, and high-stakes automated decisions, though critics question whether the contractual language effectively prevents domestic surveillance. OpenAI defends its multi-layered approach including cloud-only deployment and retained control over safety systems.
Skynet Chance (+0.06%): Deployment of advanced AI models in military classified environments increases potential for dual-use capabilities and loss of civilian oversight, despite stated safeguards. The rushed nature of the deal and ambiguous contractual language around surveillance protections suggest inadequate consideration of alignment and control risks.
Skynet Date (-1 days): Accelerated integration of frontier AI models into military systems shortens the timeline for high-stakes AI deployment with potential control issues. The deal bypasses thorough safety vetting that Anthropic deemed necessary, potentially advancing dangerous applications faster than safety measures can mature.
AGI Progress (+0.01%): The deal primarily concerns deployment contexts rather than capability advances, representing a commercial and regulatory development. While it may provide OpenAI additional resources and data access, it doesn't directly demonstrate progress toward AGI capabilities.
AGI Date (+0 days): Increased Pentagon funding and access to classified use cases could modestly accelerate OpenAI's development resources and real-world testing. However, the primary impact is on deployment rather than fundamental research, yielding minimal timeline acceleration toward AGI.
Trump Administration Blacklists Anthropic Over Refusal to Support Military Surveillance and Autonomous Weapons
The Trump administration has severed ties with Anthropic and invoked national security laws to blacklist the AI company after it refused to allow its technology to be used for mass surveillance of U.S. citizens or in autonomous armed drones. MIT physicist Max Tegmark argues that Anthropic and other AI companies have created their own predicament by resisting binding safety regulation while breaking their voluntary safety commitments. The incident highlights the regulatory vacuum in AI development and raises questions about whether other AI companies will stand with Anthropic or compete for the Pentagon contract.
Skynet Chance (+0.04%): The article reveals that major AI companies are abandoning safety commitments and the regulatory vacuum allows development of autonomous weapons systems without safeguards, increasing loss-of-control risks. However, Anthropic's resistance to military applications and the public debate it sparked provide some countervailing pressure against unconstrained AI weaponization.
Skynet Date (-1 days): The competitive pressure created by Anthropic's blacklisting may accelerate other companies' willingness to develop uncontrolled military AI applications, and the abandonment of safety commitments across the industry suggests faster deployment of potentially dangerous systems. The regulatory vacuum means no institutional brakes exist on this acceleration.
AGI Progress (+0.03%): Tegmark's analysis reveals rapid AGI progress, with GPT-4 at 27% and GPT-5 at 57% completion according to rigorous AGI definitions, and AI already achieving gold medal performance at the International Mathematics Olympiad. The article confirms expert predictions from six years ago about human-level language mastery were drastically wrong, indicating faster-than-expected capability growth.
AGI Date (-1 days): The doubling of AGI completion metrics from GPT-4 to GPT-5 in a short timeframe, combined with Tegmark's warning to MIT students that they may not find jobs in four years due to AGI, suggests significant acceleration toward AGI. The competitive dynamics and lack of regulation removing friction from development further accelerate the timeline.
OpenAI Secures Pentagon AI Contract with Safety Protections Amid Anthropic Standoff
OpenAI has reached an agreement with the Department of Defense to deploy its AI models on classified networks, including technical safeguards against mass domestic surveillance and autonomous weapons. This follows a public conflict between the Pentagon and Anthropic over usage restrictions, which resulted in Trump administration threats to designate Anthropic as a supply-chain risk and ban federal agencies from using its products. OpenAI claims its deal includes protections for the same ethical concerns Anthropic sought, and is asking the government to extend these terms to all AI companies.
Skynet Chance (+0.06%): Deployment of advanced AI models in military classified networks with autonomous weapon considerations increases risks of AI systems operating in high-stakes contexts with reduced oversight. While safeguards are promised, the precedent of powerful AI in defense applications with potential for autonomous decision-making elevates long-term control and alignment risks.
Skynet Date (-1 days): The rapid integration of frontier AI models into military infrastructure accelerates the timeline for AI systems operating in critical autonomous roles. The political pressure forcing quick deployment decisions may bypass thorough safety testing periods that would otherwise delay risky applications.
AGI Progress (+0.01%): The deal demonstrates OpenAI's models are sufficiently capable for sensitive military applications, indicating progress in reliability and performance. However, this represents application of existing capabilities rather than fundamental breakthroughs toward AGI.
AGI Date (+0 days): Military funding and deployment may accelerate capability improvements through real-world testing and feedback, but the magnitude of impact on AGI timeline is modest. The focus on application rather than foundational research suggests limited acceleration of core AGI development.
AI Risk Assessment Methodology
Our risk assessment methodology applies a structured analysis framework to evaluate AI development and its potential implications:
Data Collection
We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.
Impact Analysis
Each news item undergoes rigorous assessment through:
- Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
- Safety Research: Progress in alignment, interpretability, and containment mechanisms
- Governance Factors: Regulatory developments, industry standards, and institutional safeguards
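The three assessment axes above can be represented as a simple per-item record. This is a minimal sketch: the field names and the [-1, 1] scoring scale are illustrative assumptions, not the system's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Scores for one news item along the three evaluation axes.

    All fields use an assumed [-1, 1] scale, where positive values
    indicate risk-increasing (or capability-advancing) developments.
    """
    technical: float    # computational/algorithmic capability advances
    safety: float       # progress (or regress) in alignment and containment
    governance: float   # regulatory developments and institutional safeguards

# Example: a hypothetical item that advances capabilities while
# weakening governance safeguards.
item = Assessment(technical=0.3, safety=0.0, governance=0.4)
```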
Indicator Calculation
Our indicators are updated using a Bayesian probabilistic model that:
- Assigns weighted impact scores to each analyzed development
- Calculates cumulative effects on control loss probability and AGI timelines
- Accounts for interdependencies between different technological trajectories
- Maintains historical trends to identify acceleration or deceleration patterns
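As one possible reading of the steps above, here is a minimal sketch of a weighted, Bayesian-style update. The dimension weights, prior, and sensitivity constant are illustrative assumptions, not the dashboard's actual parameters.

```python
import math
from dataclasses import dataclass

@dataclass
class Development:
    """Per-dimension impact scores for one analyzed item, each in [-1, 1]
    (positive = risk-increasing). The scale is an illustrative assumption."""
    technical: float
    safety: float
    governance: float

# Assumed dimension weights; the real model's weights are not published.
WEIGHTS = {"technical": 0.5, "safety": 0.3, "governance": 0.2}

def impact_score(dev: Development) -> float:
    """Weighted impact score for a single development."""
    return (WEIGHTS["technical"] * dev.technical
            + WEIGHTS["safety"] * dev.safety
            + WEIGHTS["governance"] * dev.governance)

def update_probability(prior: float, dev: Development,
                       sensitivity: float = 0.5) -> float:
    """Bayesian-style update in log-odds space, so the control-loss
    probability stays in (0, 1) however much evidence accumulates."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += sensitivity * impact_score(dev)
    return 1 / (1 + math.exp(-log_odds))

# A mildly risk-increasing item nudges a 10% prior slightly upward.
p = update_probability(0.10, Development(technical=0.2, safety=0.1, governance=0.1))
```

Working in log-odds space is one way to let many small per-item adjustments (like the fractional percentage shifts reported above) accumulate without ever pushing the probability outside (0, 1).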
This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.