Current AI Risk Assessment
Chance of AI Control Loss
Estimated Date of Control Loss
AGI Development Metrics
AGI Progress
Estimated Date of AGI
Risk Trend Over Time
Latest AI News (Last 3 Days)
Anthropic Seeks $900B+ Valuation in Massive Funding Round Ahead of Anticipated IPO
Anthropic is soliciting investor allocations for a roughly $50 billion funding round targeting a $900 billion valuation, with closure expected within two weeks. The AI company, which has surpassed $30 billion in annual revenue (closer to $40 billion according to sources), is raising capital to fund computing infrastructure before a planned IPO later this year. This would more than double its February 2026 valuation of $380 billion and surpass rival OpenAI's $852 billion valuation.
Skynet Chance (+0.04%): Massive capital infusion enables scaled compute infrastructure, potentially accelerating development of more powerful AI systems without clear indication of proportional safety investments. The competitive pressure with OpenAI may incentivize rapid capability advancement over cautious alignment work.
Skynet Date (-1 days): The enormous funding specifically designated for computing needs will likely accelerate the development timeline of advanced AI systems. Competitive dynamics between frontier labs at this scale tend to compress safety timelines.
AGI Progress (+0.03%): The $50 billion raise for compute infrastructure, combined with $40 billion annual revenue run rate, demonstrates both commercial validation and resource availability for scaling AI capabilities toward AGI. This level of investment enables training runs at unprecedented scales.
AGI Date (-1 days): Dedicated massive compute funding will directly accelerate training of larger, more capable models, potentially shortening AGI timelines. The competitive race with OpenAI at near-trillion-dollar valuations suggests an industry-wide sprint toward advanced capabilities.
OpenAI Restricts Access to GPT-5.5 Cyber Tool Despite Criticizing Anthropic's Similar Approach
OpenAI is limiting access to its new cybersecurity tool, GPT-5.5 Cyber, releasing it only to "critical cyber defenders" through an application process, despite CEO Sam Altman previously criticizing Anthropic for taking the same approach with its Mythos tool. The tool can perform penetration testing, vulnerability identification, and malware reverse engineering, raising concerns about potential misuse by malicious actors. OpenAI is consulting with the U.S. government to eventually expand access to verified cybersecurity professionals.
Skynet Chance (+0.04%): The development of advanced AI tools capable of autonomous vulnerability exploitation and malware engineering increases the risk of misuse and potential for AI systems to be weaponized or cause unintended security breaches. The fact that both leading AI labs recognize the danger enough to restrict access, despite competitive pressures, validates concerns about dual-use capabilities.
Skynet Date (+0 days): While the capabilities are concerning, the restricted access approach and government consultation represent risk mitigation measures that neither significantly accelerate nor decelerate the timeline toward potential uncontrollable AI scenarios. The pace remains relatively unchanged as both safety concerns and capabilities development continue in parallel.
AGI Progress (+0.04%): The release of GPT-5.5 with specialized cybersecurity capabilities including autonomous penetration testing and malware reverse engineering demonstrates significant advancement in AI task specialization and autonomous problem-solving in complex technical domains. This suggests continued progress in creating AI systems that can perform expert-level cognitive tasks independently.
AGI Date (-1 days): The designation "GPT-5.5" indicates OpenAI has progressed beyond GPT-5, suggesting faster-than-expected iteration cycles in their model development pipeline. The specialized capabilities in complex technical domains like cybersecurity exploitation indicate accelerating progress toward general-purpose reasoning systems.
Elon Musk Confirms xAI Used Model Distillation on OpenAI's Grok Training
Elon Musk testified in federal court that xAI used distillation techniques—training AI models by prompting competitors' chatbots—on OpenAI models to develop Grok, calling it a general industry practice. This admission comes amid growing concerns from frontier labs like OpenAI and Anthropic about distillation undermining their competitive advantages, particularly regarding Chinese firms creating cheaper, comparable models. The revelation highlights potential violations of terms of service and raises questions about the ethics and legality of such practices among leading AI companies.
Skynet Chance (+0.01%): Model distillation accelerates capability proliferation across more actors, potentially reducing control over advanced AI systems and making coordination on safety measures more difficult. However, the impact is relatively minor as this practice doesn't fundamentally change the nature of AI risks.
Skynet Date (+0 days): Distillation techniques allow newer companies to rapidly catch up to frontier labs without massive compute investments, slightly accelerating the overall pace of advanced AI development across the industry. The effect is modest as the underlying capabilities still originate from well-resourced frontier labs.
AGI Progress (+0.01%): The confirmation that distillation is a widespread industry practice demonstrates that AI capabilities are diffusing more rapidly than previously understood, allowing multiple companies to reach near-frontier performance. This broader capability distribution suggests the overall field is progressing faster than if knowledge were siloed.
AGI Date (+0 days): Distillation as a common practice enables faster capability catch-up among competitors without requiring proportional compute investment, effectively accelerating the timeline for multiple labs to approach AGI-relevant benchmarks. This reduces the time advantage that massive compute infrastructure would otherwise provide to frontier labs.
Stripe Launches Link Digital Wallet with Autonomous AI Agent Payment Capabilities
Stripe has introduced Link, a digital wallet designed for both human users and autonomous AI agents to manage payments securely. The wallet allows users to grant AI agents controlled spending permissions without exposing raw payment credentials, using OAuth authentication and approval workflows. Link supports payment methods including cards, banks, crypto wallets, and buy now/pay later services, with plans to add agentic tokens and stablecoins.
Skynet Chance (+0.04%): Enabling autonomous AI agents to handle financial transactions independently increases their real-world capabilities and autonomy, which expands potential attack surfaces and misuse scenarios. However, the implementation includes human approval controls and security measures that somewhat mitigate uncontrolled agent behavior.
Skynet Date (-1 days): By providing financial infrastructure specifically designed for autonomous agents, this accelerates the practical deployment and normalization of AI agents operating independently in the real economy. The widespread adoption of such systems could modestly hasten the timeline for increasingly autonomous AI systems.
AGI Progress (+0.03%): This represents meaningful progress in AI agents' ability to interact autonomously with real-world systems and complete complex multi-step tasks involving financial transactions. The infrastructure development signals growing maturity of agentic AI capabilities beyond pure reasoning into practical economic activity.
AGI Date (-1 days): The creation of dedicated financial infrastructure for AI agents indicates and accelerates the broader ecosystem development necessary for advanced autonomous systems. This type of supporting infrastructure reduces friction for deploying increasingly capable agents, modestly accelerating the path toward more general AI systems.
Anthropic in Talks for Massive $50B Funding Round at $900B Valuation Amid Explosive Revenue Growth
Anthropic, creator of the Claude AI assistant, is reportedly considering a $40-50 billion funding round at a valuation between $850-900 billion, with a board decision expected in May. The company's annual revenue run rate has surged dramatically from approximately $9 billion at the end of 2025 to over $30 billion recently, with current estimates closer to $40 billion, driven largely by AI coding capabilities through Claude Code and Cowork platforms. This potential raise would more than double Anthropic's February valuation of $380 billion and position it competitively with OpenAI's $852 billion valuation.
Skynet Chance (+0.04%): Massive capital infusion ($50B) into a leading AI company accelerates development of increasingly capable AI systems without corresponding evidence of proportional safety investment, marginally increasing risks of misaligned AI systems. The explosive revenue growth and expansion into critical sectors (finance, healthcare) suggest rapid deployment of powerful AI without sufficient time for safety validation.
Skynet Date (-1 days): The unprecedented funding scale and explosive revenue growth ($9B to $40B in roughly 16 months) significantly accelerate AI capability development and deployment timelines. This capital enables faster scaling of compute resources and expansion into critical infrastructure sectors, compressing the timeline for potential AI control challenges to emerge.
AGI Progress (+0.04%): The dramatic revenue surge driven by AI coding capabilities demonstrates significant practical progress in complex reasoning and task automation, key AGI components. Anthropic's expansion trajectory and investor confidence at near-trillion-dollar valuations reflect market assessment that current systems are approaching economically transformative capabilities characteristic of near-AGI systems.
AGI Date (-1 days): The $50 billion capital injection provides unprecedented resources to scale compute infrastructure, research capabilities, and talent acquisition, directly accelerating AGI development timelines. The company's explosive growth and plans for rapid expansion into multiple complex domains (finance, healthcare, life sciences) suggest aggressive pursuit of general-purpose capabilities that compress the path to AGI.
Musk Testifies in OpenAI Lawsuit, Contradicts Own Tesla AGI Claims Under Oath
Elon Musk testified in his lawsuit against OpenAI, alleging Sam Altman and cofounders misled him about the organization's non-profit structure before launching a for-profit arm. Under cross-examination, Musk admitted Tesla is not currently pursuing AGI despite tweeting otherwise weeks earlier, and acknowledged he had supported various for-profit transitions for OpenAI as early as 2016. The case appears to hinge on distinctions between capped and uncapped investor profits, with safety concerns also emerging as a key issue.
Skynet Chance (+0.01%): The lawsuit highlights ongoing tensions between profit motives and safety commitments at major AI labs, which could marginally increase alignment risks. However, the legal scrutiny itself may also promote accountability and safety considerations.
Skynet Date (+0 days): While the lawsuit reveals organizational conflicts at OpenAI, it does not directly affect the technical trajectory or pace of AI development that would accelerate or decelerate risk timelines. The legal proceedings are primarily about corporate governance rather than capability advancement.
AGI Progress (-0.01%): Musk's admission that Tesla is not pursuing AGI contradicts his public claims and suggests less actual progress toward AGI than publicly portrayed. The lawsuit also reveals internal conflicts and distractions at OpenAI that may slow focused development efforts.
AGI Date (+0 days): Legal disputes and organizational turmoil at OpenAI, combined with Tesla's apparent lack of AGI pursuit despite public claims, suggest modest deceleration in the AGI timeline. These distractions and misalignments between stated goals and actual work may slow overall progress.
Microsoft Retains Royalty-Free OpenAI Access Through 2032 Despite Partnership Changes
Microsoft CEO Satya Nadella confirmed that under the revised OpenAI partnership, Microsoft retains royalty-free access to OpenAI's models and IP through 2032, while no longer paying for them. Microsoft reported its AI business surpassed $37 billion annual revenue (up 123% year-over-year), with OpenAI remaining a major cloud customer committing over $250 billion in purchases, while Microsoft holds a 27% equity stake. Nadella emphasized Microsoft offers the broadest model selection among hyperscalers, with over 10,000 customers using multiple models.
Skynet Chance (+0.01%): The commercial success and broad deployment of multiple AI models across thousands of enterprises increase the surface area for potential misuse or unintended consequences. However, the diversification of models rather than single-vendor dependence may provide some resilience against catastrophic failures.
Skynet Date (+0 days): Microsoft's $37 billion AI revenue and massive scale of deployment (10,000+ customers using multiple models) indicate rapid commercialization and widespread integration of advanced AI systems. This accelerated adoption and financial incentive structure modestly speeds up the timeline toward scenarios where AI systems become deeply embedded in critical infrastructure.
AGI Progress (+0.02%): Microsoft's guaranteed access to OpenAI's frontier models through 2032 and explosive revenue growth ($37B at 123% YoY) demonstrate that advanced AI capabilities are being successfully scaled and commercialized. The multi-model ecosystem with thousands of enterprise customers shows maturation of AI infrastructure necessary for AGI development.
AGI Date (+0 days): The massive financial success (123% revenue growth) and OpenAI's $250+ billion cloud commitment provide enormous capital and infrastructure resources that will accelerate AGI research and development. The stable, long-term partnership through 2032 creates a well-funded environment for sustained progress toward AGI.
Runway AI Pivots from Video Generation to General World Models for AGI Applications
Runway, an AI video generation company valued at $5.3 billion, is expanding beyond creative video tools into developing general world models. CEO Cristóbal Valenzuela indicates these models will have applications in gaming, robotics, and potentially general intelligence, marking a strategic shift toward more foundational AI capabilities.
Skynet Chance (+0.04%): World models that can simulate and predict physical environments create more capable autonomous systems, potentially increasing risks if deployed without adequate alignment and control mechanisms. The pivot toward general intelligence applications in robotics amplifies potential for unintended consequences.
Skynet Date (-1 days): A well-funded company pivoting from narrow video generation to general world models and robotics accelerates development of more capable autonomous systems. This represents a moderate acceleration of the timeline toward advanced AI systems requiring robust safety measures.
AGI Progress (+0.03%): World models represent a key component of AGI as they enable AI systems to understand and simulate physical reality, going beyond pattern recognition to causal understanding. Runway's strategic pivot with substantial funding indicates significant progress toward more general AI capabilities.
AGI Date (-1 days): A major AI company with $860 million in funding explicitly targeting general world models and general intelligence applications accelerates the AGI timeline. The shift from narrow video generation to broader world modeling represents a meaningful acceleration in pursuing AGI-relevant capabilities.
Parallel Web Systems Raises $100M Series B at $2B Valuation for AI Agent Infrastructure
Parallel Web Systems, founded by former Twitter CEO Parag Agrawal, raised $100 million Series B at a $2 billion valuation led by Sequoia, just five months after its Series A. The startup provides web search and research APIs designed specifically for AI agents, serving customers including Clay, Harvey, Notion, and OpenDoor, with over 100,000 developers using its products.
Skynet Chance (+0.01%): Improved infrastructure for AI agents could marginally increase agent deployment and autonomy, though these are research/productivity tools rather than general autonomous systems. The impact on uncontrollable AI risk remains minimal as these are bounded API services.
Skynet Date (+0 days): Better tooling for AI agents modestly accelerates their practical deployment and capabilities, potentially shortening timelines to more autonomous systems. However, this is incremental infrastructure rather than a fundamental capability breakthrough.
AGI Progress (+0.01%): Dedicated infrastructure for AI agents represents progress in making AI systems more capable at autonomous web research and interaction, which are components needed for AGI. The rapid adoption (100,000+ developers) suggests these tools meaningfully enhance agent capabilities.
AGI Date (+0 days): The massive funding and rapid scaling of AI agent infrastructure slightly accelerates the timeline by making it easier for developers to build increasingly capable autonomous systems. The $2B valuation and broad adoption indicate this infrastructure layer is maturing faster than expected.
Scout AI Secures $100M to Deploy Autonomous Military Systems Using Vision Language Action Models
Scout AI, a defense startup founded in 2024, raised $100 million to develop "Fury," an AI model based on Vision Language Action (VLA) technology for operating autonomous military vehicles and weapons systems. The company is training its models at a U.S. military base using ATVs and drones, with initial applications focusing on logistics and resupply before progressing to autonomous weapons capable of identifying and engaging targets. Scout has secured $11 million in DoD contracts and is testing technology that could enable drone swarms to operate with minimal human intervention in combat scenarios.
Skynet Chance (+0.09%): The development of AI systems explicitly designed to operate autonomous weapons with minimal human intervention, including self-targeting capabilities and drone swarms, significantly increases risks of unintended escalation and loss of meaningful human control over lethal decisions. The company's ambition to achieve AGI through real-world military interaction and their willingness to deploy agents on "one-way attack drones" raises substantial alignment and control concerns.
Skynet Date (-1 days): The rapid deployment timeline (technology being field-tested for operational use by 2027) and the company's claim that VLAs enable faster scaling with existing military assets accelerate the pace at which increasingly autonomous military AI systems could be deployed at scale. The $100M funding specifically dedicated to compute and training for a military-focused AGI pursuit further accelerates development toward potentially uncontrollable systems.
AGI Progress (+0.04%): Scout's application of VLAs to complex real-world autonomous navigation and decision-making in unpredictable environments represents meaningful progress in embodied AI capabilities. The founder's belief that real-world interaction through military applications could reach AGI faster than internet-trained models suggests a novel pathway that could advance general intelligence development.
AGI Date (-1 days): The company's massive funding round dedicated to building foundation models from scratch, combined with continuous real-world training data from military operations, could accelerate AGI development through a different pathway than traditional lab-based approaches. Their claim of potentially beating existing leaders to AGI through embodied learning suggests they see a faster timeline than conventional approaches.
Amazon AWS Rapidly Integrates OpenAI Models Following Exclusivity Agreement Changes
Amazon Web Services announced immediate availability of OpenAI's latest models, Codex, and a new agent-building service called Bedrock Managed Agents on its platform. This follows OpenAI's revised agreement with Microsoft that ended exclusivity provisions, enabling OpenAI to partner with AWS after signing a deal worth up to $50 billion. The move signals shifting alliances in the AI industry, with OpenAI-Amazon and Microsoft-Anthropic partnerships emerging as Microsoft's relationship with OpenAI reportedly deteriorates.
Skynet Chance (+0.01%): Increased competition and distribution of advanced AI models across multiple cloud platforms slightly increase accessibility and deployment of powerful AI systems, marginally raising potential misuse or control risks. However, the competitive landscape may also incentivize better safety practices.
Skynet Date (+0 days): Broader cloud platform availability accelerates deployment infrastructure for advanced AI models, potentially enabling faster real-world integration of powerful systems. The competitive pressure between AWS and Microsoft may also speed development cycles.
AGI Progress (+0.01%): The expanded partnership demonstrates OpenAI's models are mature and scalable enough for broad enterprise deployment across multiple cloud platforms, indicating significant progress in practical AI capabilities. The introduction of reasoning model-specific agent services suggests advancement toward more autonomous AI systems.
AGI Date (+0 days): The $50 billion AWS deal and competitive dynamics between major cloud providers significantly increase available compute resources and market pressure to advance AI capabilities rapidly. Multiple large-scale partnerships accelerate the pace of AI development through increased funding and infrastructure.
Google Provides Pentagon Unrestricted AI Access Following Anthropic's Refusal and Legal Battle
Google has granted the U.S. Department of Defense broad access to its AI systems for classified networks, allowing essentially all lawful uses. This decision follows Anthropic's refusal to provide unrestricted AI access to the Pentagon over concerns about domestic mass surveillance and autonomous weapons, which led to the DoD designating Anthropic a "supply-chain risk" and subsequent litigation. Google's agreement includes non-binding language discouraging use for mass surveillance and autonomous weapons, though enforceability remains unclear.
Skynet Chance (+0.04%): Providing unrestricted AI access to military applications without enforceable guardrails increases risks of autonomous weapons development and potential loss of human control in defense systems. The precedent of major AI companies prioritizing military contracts over safety constraints elevates concerns about AI weaponization.
Skynet Date (-1 days): The rapid deployment of advanced AI systems into military infrastructure without robust safety frameworks accelerates the timeline for potential AI-related catastrophic scenarios. The fact that multiple major AI labs are now competing for defense contracts suggests faster integration of powerful AI into high-stakes military contexts.
AGI Progress (+0.01%): Military applications may drive additional investment and development in AI capabilities, though this represents deployment rather than fundamental capability advancement. The competitive pressure among AI companies for defense contracts could marginally accelerate overall AI development efforts.
AGI Date (+0 days): Increased defense funding and urgency around military AI applications may modestly accelerate overall AI development timelines. However, this primarily represents a shift in deployment priorities rather than fundamental research breakthroughs that would significantly change AGI timelines.
AI News Calendar
AI Risk Assessment Methodology
Our risk assessment methodology applies a structured analysis framework to evaluate AI developments and their potential implications:
Data Collection
We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.
Impact Analysis
Each news item undergoes rigorous assessment through:
- Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
- Safety Research: Progress in alignment, interpretability, and containment mechanisms
- Governance Factors: Regulatory developments, industry standards, and institutional safeguards
Indicator Calculation
Our indicators are updated using a Bayesian probabilistic model that:
- Assigns weighted impact scores to each analyzed development
- Calculates cumulative effects on control loss probability and AGI timelines
- Accounts for interdependencies between different technological trajectories
- Maintains historical trends to identify acceleration or deceleration patterns
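The cumulative-update step described above can be sketched as a simple scoring loop. This is an illustrative sketch only: the `ImpactScore` class, `update_indicators` function, and their field names are assumptions for exposition, not the production model, and the starting indicator values are arbitrary.

```python
from dataclasses import dataclass

@dataclass
class ImpactScore:
    """Weighted impact assigned to one analyzed development (illustrative fields)."""
    control_loss_delta: float  # percentage-point change in control-loss probability
    control_loss_shift: int    # shift in estimated control-loss date, in days
    agi_delta: float           # percentage-point change in the AGI-progress indicator
    agi_shift: int             # shift in estimated AGI date, in days

def update_indicators(control_loss_pct, agi_progress_pct, scores):
    """Accumulate per-item impact scores into the running indicators,
    clamping both probability indicators to the [0, 100] range."""
    control_shift_days = 0
    agi_shift_days = 0
    for s in scores:
        control_loss_pct = min(100.0, max(0.0, control_loss_pct + s.control_loss_delta))
        agi_progress_pct = min(100.0, max(0.0, agi_progress_pct + s.agi_delta))
        control_shift_days += s.control_loss_shift
        agi_shift_days += s.agi_shift
    return control_loss_pct, agi_progress_pct, control_shift_days, agi_shift_days

# Example: two items using published deltas of the form (+0.04%, -1 days, ...)
scores = [ImpactScore(0.04, -1, 0.03, -1), ImpactScore(0.01, 0, 0.01, 0)]
result = update_indicators(30.0, 50.0, scores)
```

In a fuller model, each delta would itself come from a Bayesian update conditioned on the item's technical, safety, and governance assessments; the loop above only shows how per-item scores aggregate into the headline indicators.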
This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.