Current AI Risk Assessment
Chance of AI Control Loss
Estimated Date of Control Loss
AGI Development Metrics
AGI Progress
Estimated Date of AGI
Risk Trend Over Time
Latest AI News (Last 3 Days)
Major AI Companies Experience Significant Leadership Departures and Internal Restructuring
Multiple leading AI companies are experiencing significant talent losses, with half of xAI's founding team departing and OpenAI undergoing major organizational changes including the disbanding of its mission alignment team. The departures include both voluntary exits and company-initiated restructuring, alongside controversy over policy decisions like OpenAI's "adult mode" feature.
Skynet Chance (+0.04%): The disbanding of OpenAI's mission alignment team and departure of safety-focused personnel suggests reduced organizational focus on AI safety and alignment, which are critical safeguards against uncontrolled AI development. Leadership instability across major AI labs may compromise long-term safety priorities in favor of competitive pressures.
Skynet Date (-1 days): While safety team departures are concerning, organizational chaos and talent loss could paradoxically slow capability development in the short term. However, the weakening of alignment-focused teams may accelerate deployment of insufficiently controlled systems, creating a modest net acceleration of risk timelines.
AGI Progress (-0.01%): Loss of half of xAI's founding team and significant departures from OpenAI represent setbacks to institutional knowledge and research continuity at leading AI labs. Brain drain and organizational disruption typically slow technical progress, though the impact may be temporary if talent redistributes within the industry.
AGI Date (+0 days): Significant talent exodus and organizational restructuring at major AI companies create friction and reduce research velocity in the near term. The disruption to team cohesion and the loss of experienced researchers suggest a modest deceleration in the pace toward AGI development.
Anthropic Secures $30 Billion Series G Funding at $380 Billion Valuation
Anthropic has raised $30 billion in Series G funding, increasing its valuation to $380 billion from a previous $183 billion. The round was led by GIC and Coatue, with participation from numerous high-profile investors including Founders Fund and Abu Dhabi's MGX. This massive funding comes amid intense competition with OpenAI, which is reportedly seeking $100 billion in additional funding for an $830 billion valuation.
Skynet Chance (+0.04%): Massive capital infusion accelerates AI capability development with less resource constraint, potentially reducing time for safety research relative to capability advancement. The competitive dynamics with OpenAI may incentivize faster deployment over cautious alignment work.
Skynet Date (-1 days): The $30 billion funding significantly accelerates compute acquisition, research hiring, and product deployment timelines, potentially shortening the window before advanced AI systems with control challenges emerge. The competitive pressure with OpenAI's parallel fundraising intensifies the race dynamics.
AGI Progress (+0.03%): The unprecedented $380 billion valuation and $30 billion capital raise enables substantial scaling of compute infrastructure, talent acquisition, and research programs essential for AGI development. Enterprise adoption of Claude indicates practical progress toward more general AI systems.
AGI Date (-1 days): The massive funding directly accelerates AGI timelines by removing capital constraints on compute scaling, research expansion, and infrastructure development. The competitive funding race with OpenAI creates pressure to advance capabilities rapidly toward AGI milestones.
Spotify Developers Stop Writing Code Manually as AI System Takes Over Programming Tasks
Spotify reported that its top developers haven't written code since December, relying instead on an internal AI system called "Honk" that uses Claude Code for real-time code deployment. Engineers can now request bug fixes and new features via Slack on their phones, with AI completing the work and deploying it to production without manual coding. The company shipped over 50 new features in 2025 using this approach and is building proprietary datasets for music-related AI applications.
Skynet Chance (+0.04%): Demonstrates AI systems autonomously writing and deploying production code with minimal human oversight, representing a capability expansion where humans serve primarily as supervisors rather than implementers. This reduces human understanding of system internals and increases dependency on AI decision-making in critical infrastructure.
Skynet Date (-1 days): The acceleration of AI's ability to autonomously handle complex software development tasks suggests faster progress toward systems that can modify and improve themselves. However, this is still within supervised commercial contexts with human approval gates, limiting immediate risk acceleration.
AGI Progress (+0.03%): Represents a significant milestone where AI can handle end-to-end software development workflows including understanding requirements, writing code, testing, and deployment autonomously. This demonstrates practical reasoning and multi-step problem-solving capabilities approaching real-world AGI-relevant tasks.
AGI Date (-1 days): Shows that current AI systems (Claude) are already capable of replacing human developers in production environments, suggesting capabilities are advancing faster than expected. The widespread adoption of AI coding tools across major tech companies indicates accelerating progress toward more general autonomous AI systems.
OpenAI Launches Faster Codex Model Powered by Cerebras' Dedicated AI Chip
OpenAI released GPT-5.3-Codex-Spark, a lightweight version of its coding tool designed for faster inference and real-time collaboration. The model is powered by Cerebras' Wafer Scale Engine 3 chip, marking the first milestone in their $10 billion partnership announced last month. This represents a significant integration of specialized hardware into OpenAI's infrastructure to enable ultra-low latency AI responses.
Skynet Chance (+0.01%): The integration of specialized hardware for faster AI inference could marginally increase deployment scale and accessibility of agentic coding tools, though this remains a narrow application domain. The focus on speed rather than capability expansion presents minimal direct alignment or control concerns.
Skynet Date (+0 days): Faster inference through dedicated chips modestly accelerates the practical deployment and iteration cycles of AI systems, potentially slightly compressing timelines. However, this is primarily an optimization rather than a fundamental capability breakthrough.
AGI Progress (+0.01%): The partnership demonstrates continued vertical integration and infrastructure investment in AI, with specialized hardware enabling more efficient deployment of existing models. This represents incremental progress in making AI systems more practical and responsive, though it's an engineering advancement rather than a cognitive capability leap.
AGI Date (+0 days): The $10 billion infrastructure investment and deployment of specialized chips for faster inference accelerates the practical scaling and iteration speed of AI development. Reduced latency enables new interaction patterns and faster development cycles, modestly compressing AGI timelines.
xAI Unveils Organizational Restructuring and Ambitious Space-Based AI Infrastructure Plans
xAI publicly released a 45-minute all-hands meeting video revealing organizational restructuring, layoffs affecting founding team members, and a new four-team structure focused on Grok chatbot, coding systems, video generation, and the "Macrohard" project for autonomous computer use. Musk outlined ambitious long-term plans for space-based AI data centers, including moon-based manufacturing facilities and energy-harvesting clusters capable of capturing significant portions of solar output. The company also reported $1 billion in annual recurring revenue for X subscriptions and 50 million daily video generations, though these figures coincide with widespread deepfake pornography issues on the platform.
Skynet Chance (+0.04%): The explicit ambition to create AI systems of galactic scale capable of harnessing stellar energy, combined with autonomous AI agents that can "do anything on a computer," represents planning for superintelligent systems with vast resource access. The lack of mentioned safety considerations alongside these capability expansions increases concern about control mechanisms.
Skynet Date (+0 days): While the space infrastructure plans are extremely long-term, the immediate focus on autonomous computer-use AI (Macrohard) and the organizational scaling intended to speed development suggest a modest acceleration of capability advancement timelines. The reorganization appears designed to increase development velocity across multiple capability domains.
AGI Progress (+0.03%): The Macrohard project explicitly aims for general computer-use capabilities ("anything on a computer"), which represents a significant step toward AGI-level task generality. The organizational restructuring to support parallel development of multimodal capabilities (chat, coding, video, autonomous agents) and long-term infrastructure planning for superintelligent systems indicates serious commitment to AGI development.
AGI Date (+0 days): The organizational restructuring to accelerate development across four major capability areas, combined with significant revenue generation enabling sustained investment, suggests a meaningful acceleration of the AGI timeline. The explicit focus on building infrastructure for future superintelligent systems indicates xAI is positioning for rapid scaling once key capabilities are achieved.
OpenAI Dissolves Mission Alignment Team, Reassigns Safety-Focused Researchers
OpenAI has disbanded its Mission Alignment team, which was responsible for ensuring AI systems remain safe, trustworthy, and aligned with human values. The team's former leader, Josh Achiam, has been appointed as "Chief Futurist," while the remaining six to seven team members have been reassigned to other roles within the company. This follows the 2024 dissolution of OpenAI's superalignment team that focused on long-term existential AI risks.
Skynet Chance (+0.04%): Disbanding a dedicated team focused on alignment and safety mechanisms suggests deprioritization of systematic safety research at a leading AI company, potentially increasing risks of misaligned AI systems. The dissolution of two consecutive safety-focused teams (superalignment in 2024, mission alignment now) indicates a concerning organizational pattern.
Skynet Date (-1 days): Reduced organizational focus on alignment research may remove barriers to faster AI deployment without adequate safety measures, potentially accelerating the timeline to scenarios involving loss of control. However, reassignment to similar work elsewhere partially mitigates this acceleration.
AGI Progress (+0.01%): The restructuring suggests OpenAI may be shifting resources toward capabilities development rather than safety research, which could accelerate raw capability gains. However, this is an organizational change rather than a technical breakthrough, so the impact on actual AGI progress is modest.
AGI Date (+0 days): Potential reallocation of talent from safety-focused work to capabilities research could marginally accelerate AGI development timelines. The effect is limited since team members reportedly continue similar work in new roles.
Mass Exodus of Senior Engineers and Co-Founders from xAI Raises Stability Concerns
At least nine engineers, including two of xAI's co-founders, have publicly announced their departure from the company within the past week, bringing the total co-founder exits to more than half of the founding team. The departures coincide with regulatory scrutiny over Grok's generation of nonconsensual explicit deepfakes and personal controversy surrounding Elon Musk. Several departing engineers cite desires for greater autonomy and plan to start new ventures, raising questions about xAI's institutional stability and ability to compete with rivals like OpenAI and Anthropic.
Skynet Chance (-0.03%): The organizational instability and talent drain at xAI may slightly reduce concentrated AI risk by fragmenting expertise across multiple new ventures, though the impact is marginal. Key safety-focused co-founder Jimmy Ba's departure could weaken safety oversight at one major lab.
Skynet Date (+0 days): Organizational disruption at a major AI lab likely causes minor delays in capability development at xAI specifically, slightly decelerating the overall pace toward advanced AI systems. However, departing engineers forming new ventures may redistribute rather than reduce overall AI development velocity.
AGI Progress (-0.03%): The departure of over half of xAI's founding team, including the reasoning lead and research/safety lead, represents a significant loss of institutional knowledge and technical leadership that will likely slow xAI's progress toward AGI. This disruption affects one of the major frontier AI labs competing in the AGI race.
AGI Date (+0 days): The exodus of senior talent and co-founders will likely cause short-to-medium term delays in xAI's development timeline, though the overall impact on industry-wide AGI timelines is modest given the company's 1,000+ remaining employees. Some departing engineers forming new startups may eventually contribute to distributed AGI progress, partially offsetting the deceleration.
AI Risk Assessment Methodology
Our risk assessment methodology applies a structured analysis framework to evaluate AI development and its potential implications:
Data Collection
We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.
Impact Analysis
Each news item undergoes rigorous assessment through:
- Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
- Safety Research: Progress in alignment, interpretability, and containment mechanisms
- Governance Factors: Regulatory developments, industry standards, and institutional safeguards
Indicator Calculation
Our indicators are updated using a Bayesian probabilistic model that:
- Assigns weighted impact scores to each analyzed development
- Calculates cumulative effects on control loss probability and AGI timelines
- Accounts for interdependencies between different technological trajectories
- Maintains historical trends to identify acceleration or deceleration patterns
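The update rule described above can be sketched in a few lines of code. This is a minimal illustration only, not the production model: the `Indicator` class, the `weight` parameter, and the additive clamped update are all simplifying assumptions, and a full Bayesian treatment would update a posterior distribution rather than a point estimate.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One tracked indicator, e.g. control-loss probability (hypothetical)."""
    value: float   # current estimate, kept in [0, 1]
    history: list  # past values, retained for trend detection

def update_indicator(ind: Indicator, impact: float, weight: float) -> Indicator:
    """Apply one news item's weighted impact score to an indicator.

    impact : signed score from the item analysis (e.g. +0.0004 for a
             development that raises control-loss probability by 0.04%)
    weight : source/credibility weight in [0, 1]
    """
    ind.history.append(ind.value)
    # Weighted additive update, clamped to remain a valid probability.
    ind.value = min(1.0, max(0.0, ind.value + weight * impact))
    return ind

def trend(ind: Indicator, window: int = 5) -> float:
    """Mean change per update over the last `window` steps; a positive
    value flags acceleration, a negative value deceleration."""
    recent = ind.history[-window:] + [ind.value]
    deltas = [b - a for a, b in zip(recent, recent[1:])]
    return sum(deltas) / len(deltas) if deltas else 0.0

# Usage: apply a single +0.04% control-loss update to an assumed 30% prior.
risk = Indicator(value=0.30, history=[])
update_indicator(risk, impact=0.0004, weight=1.0)
```

Interdependencies between trajectories (for example, funding news that affects both capability pace and safety coverage) would in practice couple several such updates rather than applying them independently.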
This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.