Current AI Risk Assessment
Chance of AI Control Loss
Estimated Date of Control Loss
AGI Development Metrics
AGI Progress
Estimated Date of AGI
Risk Trend Over Time
Latest AI News (Last 3 Days)
Anthropic Secures $30 Billion Series G Funding at $380 Billion Valuation
Anthropic has raised $30 billion in Series G funding, lifting its valuation to $380 billion from a previous $183 billion. The round was led by GIC and Coatue, with participation from high-profile investors including Founders Fund and Abu Dhabi's MGX. The raise comes amid intense competition with OpenAI, which is reportedly seeking $100 billion in additional funding at an $830 billion valuation.
Skynet Chance (+0.04%): The massive capital infusion accelerates AI capability development with fewer resource constraints, potentially leaving safety research less time to keep pace with capability advancement. The competitive dynamics with OpenAI may incentivize faster deployment over cautious alignment work.
Skynet Date (-1 days): The $30 billion funding significantly accelerates compute acquisition, research hiring, and product deployment timelines, potentially shortening the window before advanced AI systems with control challenges emerge. The competitive pressure with OpenAI's parallel fundraising intensifies the race dynamics.
AGI Progress (+0.03%): The unprecedented $380 billion valuation and $30 billion capital raise enables substantial scaling of compute infrastructure, talent acquisition, and research programs essential for AGI development. Enterprise adoption of Claude indicates practical progress toward more general AI systems.
AGI Date (-1 days): The massive funding directly accelerates AGI timelines by removing capital constraints on compute scaling, research expansion, and infrastructure development. The competitive funding race with OpenAI creates pressure to advance capabilities rapidly toward AGI milestones.
Spotify Developers Stop Writing Code Manually as AI System Takes Over Programming Tasks
Spotify reported that its top developers haven't written code since December, relying instead on an internal AI system called "Honk" that uses Claude Code for real-time code deployment. Engineers can now request bug fixes and new features via Slack on their phones, with AI completing the work and deploying it to production without manual coding. The company shipped over 50 new features in 2025 using this approach and is building proprietary datasets for music-related AI applications.
Skynet Chance (+0.04%): Demonstrates AI systems autonomously writing and deploying production code with minimal human oversight, representing a capability expansion where humans serve primarily as supervisors rather than implementers. This reduces human understanding of system internals and increases dependency on AI decision-making in critical infrastructure.
Skynet Date (-1 days): The acceleration of AI's ability to autonomously handle complex software development tasks suggests faster progress toward systems that can modify and improve themselves. However, this is still within supervised commercial contexts with human approval gates, limiting immediate risk acceleration.
AGI Progress (+0.03%): Represents a significant milestone in which AI autonomously handles end-to-end software development workflows, including understanding requirements, writing code, testing, and deployment. This demonstrates practical reasoning and multi-step problem-solving on real-world tasks of the kind AGI would be expected to perform.
AGI Date (-1 days): Shows that current AI systems (Claude) are already capable of replacing human developers in production environments, suggesting capabilities are advancing faster than expected. The widespread adoption of AI coding tools across major tech companies indicates accelerating progress toward more general autonomous AI systems.
OpenAI Launches Faster Codex Model Powered by Cerebras' Dedicated AI Chip
OpenAI released GPT-5.3-Codex-Spark, a lightweight version of its coding tool designed for faster inference and real-time collaboration. The model is powered by Cerebras' Wafer Scale Engine 3 chip, marking the first milestone in their $10 billion partnership announced last month. This represents a significant integration of specialized hardware into OpenAI's infrastructure to enable ultra-low latency AI responses.
Skynet Chance (+0.01%): The integration of specialized hardware for faster AI inference could marginally increase deployment scale and accessibility of agentic coding tools, though this remains a narrow application domain. The focus on speed rather than capability expansion presents minimal direct alignment or control concerns.
Skynet Date (+0 days): Faster inference through dedicated chips modestly accelerates the practical deployment and iteration cycles of AI systems, potentially slightly compressing timelines. However, this is primarily an optimization rather than a fundamental capability breakthrough.
AGI Progress (+0.01%): The partnership demonstrates continued vertical integration and infrastructure investment in AI, with specialized hardware enabling more efficient deployment of existing models. This represents incremental progress in making AI systems more practical and responsive, though it's an engineering advancement rather than a cognitive capability leap.
AGI Date (+0 days): The $10 billion infrastructure investment and deployment of specialized chips for faster inference accelerates the practical scaling and iteration speed of AI development. Reduced latency enables new interaction patterns and faster development cycles, modestly compressing AGI timelines.
xAI Unveils Organizational Restructuring and Ambitious Space-Based AI Infrastructure Plans
xAI publicly released a 45-minute all-hands meeting video revealing organizational restructuring, layoffs affecting founding team members, and a new four-team structure focused on the Grok chatbot, coding systems, video generation, and the "Macrohard" project for autonomous computer use. Musk outlined ambitious long-term plans for space-based AI data centers, including moon-based manufacturing facilities and energy-harvesting clusters capable of capturing significant portions of solar output. The company also reported $1 billion in annual recurring revenue for X subscriptions and 50 million daily video generations, though these figures coincide with widespread deepfake pornography issues on the platform.
Skynet Chance (+0.04%): The explicit ambition to create AI systems of galactic scale capable of harnessing stellar energy, combined with autonomous AI agents that can "do anything on a computer," represents planning for superintelligent systems with vast resource access. The lack of mentioned safety considerations alongside these capability expansions increases concern about control mechanisms.
Skynet Date (+0 days): While the space infrastructure plans are extremely long-term, the immediate focus on autonomous computer-use AI (Macrohard) and on organizational scaling suggests a modest acceleration of capability timelines. The reorganization appears designed to increase development velocity across multiple capability domains.
AGI Progress (+0.03%): The Macrohard project explicitly aims for general computer-use capabilities ("anything on a computer"), which represents a significant step toward AGI-level task generality. The organizational restructuring to support parallel development of multimodal capabilities (chat, coding, video, autonomous agents) and long-term infrastructure planning for superintelligent systems indicates serious commitment to AGI development.
AGI Date (+0 days): The organizational restructuring to accelerate development across four major capability areas, combined with significant revenue generation enabling sustained investment, suggests meaningful acceleration of the AGI timeline. The explicit focus on building infrastructure for future superintelligent systems indicates xAI is positioning for rapid scaling once key capabilities are achieved.
OpenAI Dissolves Mission Alignment Team, Reassigns Safety-Focused Researchers
OpenAI has disbanded its Mission Alignment team, which was responsible for ensuring AI systems remain safe, trustworthy, and aligned with human values. The team's former leader, Josh Achiam, has been appointed as "Chief Futurist," while the remaining six to seven team members have been reassigned to other roles within the company. This follows the 2024 dissolution of OpenAI's superalignment team that focused on long-term existential AI risks.
Skynet Chance (+0.04%): Disbanding a dedicated team focused on alignment and safety mechanisms suggests deprioritization of systematic safety research at a leading AI company, potentially increasing risks of misaligned AI systems. The dissolution of two consecutive safety-focused teams (superalignment in 2024, mission alignment now) indicates a concerning organizational pattern.
Skynet Date (-1 days): Reduced organizational focus on alignment research may remove barriers to faster AI deployment without adequate safety measures, potentially accelerating the timeline to scenarios involving loss of control. However, reassignment to similar work elsewhere partially mitigates this acceleration.
AGI Progress (+0.01%): The restructuring suggests OpenAI may be shifting resources toward capabilities development rather than safety research, which could accelerate raw capability gains. However, this is an organizational change rather than a technical breakthrough, so the impact on actual AGI progress is modest.
AGI Date (+0 days): Potential reallocation of talent from safety-focused work to capabilities research could marginally accelerate AGI development timelines. The effect is limited since team members reportedly continue similar work in new roles.
Mass Exodus of Senior Engineers and Co-Founders from xAI Raises Stability Concerns
At least nine engineers, including two of xAI's co-founders, have publicly announced their departure from the company within the past week, bringing the total co-founder exits to more than half of the founding team. The departures coincide with regulatory scrutiny over Grok's generation of nonconsensual explicit deepfakes and personal controversy surrounding Elon Musk. Several departing engineers cite desires for greater autonomy and plan to start new ventures, raising questions about xAI's institutional stability and ability to compete with rivals like OpenAI and Anthropic.
Skynet Chance (-0.03%): The organizational instability and talent drain at xAI may slightly reduce concentrated AI risk by fragmenting expertise across multiple new ventures, though the impact is marginal. The departure of safety-focused co-founder Jimmy Ba could weaken safety oversight at one major lab.
Skynet Date (+0 days): Organizational disruption at a major AI lab likely causes minor delays in capability development at xAI specifically, slightly decelerating the overall pace toward advanced AI systems. However, departing engineers forming new ventures may redistribute rather than reduce overall AI development velocity.
AGI Progress (-0.03%): The departure of over half of xAI's founding team, including the reasoning lead and research/safety lead, represents a significant loss of institutional knowledge and technical leadership that will likely slow xAI's progress toward AGI. This disruption affects one of the major frontier AI labs competing in the AGI race.
AGI Date (+0 days): The exodus of senior talent and co-founders will likely cause short-to-medium term delays in xAI's development timeline, though the overall impact on industry-wide AGI timelines is modest given the company's 1,000+ remaining employees. Some departing engineers forming new startups may eventually contribute to distributed AGI progress, partially offsetting the deceleration.
Flapping Airplanes Secures $180M to Develop Brain-Inspired Data-Efficient AI Models
AI lab Flapping Airplanes has raised $180 million in seed funding from Google Ventures, Sequoia, and Index to develop AI models that learn like humans rather than through massive data consumption. The team, led by brothers Ben and Asher Spector and co-founder Aidan Smith, believes radically more data-efficient training methods could unlock entirely new AI capabilities. Despite having no product yet, the lab attracted significant investment based on its novel approach to AI learning efficiency.
Skynet Chance (-0.03%): More data-efficient, human-like learning approaches could lead to more interpretable and controllable AI systems than today's opaque large-scale models. However, the impact is minimal at this early stage, with no demonstrated results.
Skynet Date (+0 days): Pursuing alternative learning paradigms that differ from current scaling approaches may slow near-term progress on powerful but less controllable systems. The exploratory nature of this research likely delays rather than accelerates existential risk timelines.
AGI Progress (+0.02%): Human-like learning efficiency is a key missing capability for current AI systems, and achieving it could represent significant progress toward general intelligence. The substantial funding ($180M seed) from top-tier investors signals credible potential for breakthrough approaches.
AGI Date (+0 days): Successfully developing more data-efficient learning methods that match human cognitive abilities could significantly accelerate AGI development by removing current bottlenecks around data requirements and computational costs. The major funding injection suggests accelerated research timelines in this promising direction.
xAI Loses Nearly Half Its Founding Team Amid Product Struggles and IPO Preparation
Five of xAI's 12 founding team members have departed the company, four of them in the past year alone, including co-founder Yuhuai Wu, who announced his exit in February 2025. While the departures appear amicable and coincide with an upcoming IPO and a SpaceX acquisition windfall, they raise concerns about organizational stability. The exodus comes as xAI's Grok chatbot faces technical issues, controversies over deepfake content generation, and increasing competitive pressure from OpenAI and Anthropic.
Skynet Chance (-0.03%): Leadership instability and talent departures at a major AI lab may slow development of advanced capabilities and reduce the likelihood of unchecked rapid advancement. The organizational chaos and product issues suggest less effective progress toward potentially dangerous systems.
Skynet Date (+0 days): The loss of technical talent and internal challenges at xAI will likely slow the company's AI development pace, marginally decelerating the overall timeline toward advanced AI systems. However, the impact is limited as other labs continue their work unaffected.
AGI Progress (-0.02%): The departure of nearly half the founding team from a well-funded AI lab represents a setback in collective research capacity and institutional knowledge. This brain drain, combined with product struggles, indicates reduced momentum in advancing AI capabilities at xAI.
AGI Date (+0 days): While xAI's internal challenges may slow its own contributions to AGI development, the broader AI ecosystem remains robust, with OpenAI and Anthropic continuing their progress. The overall timeline impact is minimal, though it nudges estimates slightly later as one major player loses momentum.
Former GitHub CEO Launches Entire with Record $60M Seed to Manage AI-Generated Code
Former GitHub CEO Thomas Dohmke has raised a record $60 million seed round at a $300 million valuation for Entire, a startup developing tools to help developers manage code written by AI agents. The company's first product, Checkpoints, is an open-source tool that pairs AI-generated code with the context that created it, including prompts and transcripts, to address the growing challenge of massive volumes of AI-produced code overwhelming software projects.
Skynet Chance (-0.03%): This tool focuses on improving human oversight and understanding of AI-generated code, which marginally enhances control and transparency over AI agent outputs. By providing context and review mechanisms, it slightly reduces the risk of uncontrolled AI behavior in software development contexts.
Skynet Date (+0 days): The tool aims to add oversight layers to AI agent workflows, which could slightly slow down unfettered AI agent autonomy by requiring human review checkpoints. However, the impact on overall timeline is minimal as it's primarily a management tool rather than a fundamental capability constraint.
AGI Progress (+0.01%): The need for such infrastructure indicates that AI coding agents are now producing code volumes beyond human comprehension capacity, suggesting significant progress in autonomous AI capabilities. That this problem warrants a $60M solution shows AI agents have reached practical, production-scale autonomy in software development.
AGI Date (+0 days): By solving the code management bottleneck created by AI agents, this tool enables wider adoption and deployment of AI coding agents at scale. Removing friction in AI agent integration into workflows could marginally accelerate progress toward more general autonomous systems.
Runway Secures $315M Series E at $5.3B Valuation to Develop Advanced World Models for AGI
AI video startup Runway raised $315 million at a $5.3 billion valuation to develop next-generation world models, AI systems that create internal representations of environments to predict future events. The company, which recently released its Gen 4.5 video generation model that outperformed Google and OpenAI offerings, plans to expand world model capabilities beyond media into medicine, climate, energy, and robotics. This strategic shift positions Runway among competitors like Fei-Fei Li's World Labs and Google DeepMind in the race to build world models viewed as essential for surpassing large language model limitations.
Skynet Chance (+0.04%): World models that can predict and plan for future events represent advancement toward more autonomous AI systems with greater agency, potentially increasing risks if deployed without robust alignment and control mechanisms. The expansion into robotics and critical infrastructure domains like medicine and energy amplifies potential consequences of misaligned systems.
Skynet Date (-1 days): The significant funding and compute expansion accelerates development of world models capable of planning and prediction, potentially shortening timelines to more capable autonomous systems. However, the focus remains primarily on commercial applications rather than pure capability advancement, moderating the acceleration effect.
AGI Progress (+0.04%): World models are widely considered a critical advancement beyond current LLM limitations, as they enable AI systems to build internal representations and plan for future states rather than just pattern matching. Runway's success in outperforming Google and OpenAI on benchmarks, combined with substantial funding for scaling, represents meaningful progress toward more general AI capabilities.
AGI Date (-1 days): The $315M funding specifically targeting world model pre-training, combined with expanded compute infrastructure via CoreWeave partnership and aggressive hiring plans, directly accelerates the pace of research in a technology area viewed as essential for AGI. The competitive landscape with World Labs and DeepMind also intensifies the overall race toward more capable systems.
AI Risk Assessment Methodology
Our risk assessment methodology applies a structured analysis framework to evaluate AI developments and their potential implications:
Data Collection
We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.
Impact Analysis
Each news item is assessed along three dimensions (a scoring sketch follows this list):
- Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
- Safety Research: Progress in alignment, interpretability, and containment mechanisms
- Governance Factors: Regulatory developments, industry standards, and institutional safeguards
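To make the three dimensions concrete, here is a minimal, hypothetical Python sketch of how a per-item assessment could be recorded and collapsed into a single impact score. The field names and example weights are illustrative assumptions, not the values used in production.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Scores assigned to one news item on the three assessment dimensions (illustrative)."""
    technical_evaluation: float   # computational advances, algorithmic breakthroughs, capability gains
    safety_research: float        # progress in alignment, interpretability, and containment
    governance_factors: float     # regulatory developments, industry standards, institutional safeguards

def weighted_impact(item: ImpactAssessment,
                    w_tech: float = 0.5,
                    w_safety: float = 0.3,
                    w_gov: float = 0.2) -> float:
    """Collapse the three dimension scores into a single weighted impact score."""
    return (w_tech * item.technical_evaluation
            + w_safety * item.safety_research
            + w_gov * item.governance_factors)

# Example: a capability-heavy development with a small negative safety signal.
score = weighted_impact(ImpactAssessment(0.6, -0.2, 0.1))
```

Keeping the dimensions separate before combining them makes it possible to trace why a given item moved an indicator.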
Indicator Calculation
Our indicators are updated using a Bayesian probabilistic model (a simplified sketch follows this list) that:
- Assigns weighted impact scores to each analyzed development
- Calculates cumulative effects on control loss probability and AGI timelines
- Accounts for interdependencies between different technological trajectories
- Maintains historical trends to identify acceleration or deceleration patterns
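The sketch below, hypothetical and heavily simplified, shows one way such an update could work: each development contributes weighted evidence in log-odds space, so the control-loss probability stays within (0, 1) however many items accumulate, while date shifts are tallied in days. The function names, scores, and weights are illustrative assumptions rather than the production model; interdependencies between trajectories and historical trend analysis would sit on top of this core update.

```python
import math

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def sigmoid(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def update_control_loss_probability(prior_p: float,
                                    impact_scores: list[float],
                                    weights: list[float]) -> float:
    """Fold weighted impact scores into the control-loss probability as log-odds evidence."""
    log_odds = logit(prior_p)
    for score, weight in zip(impact_scores, weights):
        log_odds += weight * score
    return sigmoid(log_odds)

def update_estimated_date(prior_days_out: int, day_shifts: list[int]) -> int:
    """Accumulate per-development day shifts into an estimated-date offset."""
    return prior_days_out + sum(day_shifts)

# Example: three developments nudge the probability up and pull the date closer.
p = update_control_loss_probability(0.20, [0.04, 0.04, -0.03], [1.0, 0.8, 0.5])
days = update_estimated_date(3650, [-1, -1, 0])
```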
This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.