Current AI Risk Assessment
Chance of AI Control Loss
Estimated Date of Control Loss
AGI Development Metrics
AGI Progress
Estimated Date of AGI
Risk Trend Over Time
Latest AI News (Last 3 Days)
OpenAI Consolidates Products Under Brockman's Leadership, Focuses on Agentic AI Future
OpenAI co-founder Greg Brockman is taking charge of product strategy, consolidating ChatGPT and Codex into a unified experience focused on building agentic AI capabilities. This restructuring follows CEO Sam Altman's "code red" declaration and the company's decision to halt various side projects to refocus on core products and pursue an AI "super app" vision.
Skynet Chance (+0.04%): The explicit focus on building "agentic" AI systems that can act autonomously increases potential control and alignment challenges, as agents operating independently present greater risks of unintended consequences or misalignment with human values.
Skynet Date (-1 days): The consolidation and streamlined focus on agentic capabilities, combined with the elimination of side projects, suggest accelerated development toward more autonomous AI systems that could reach concerning capability levels sooner.
AGI Progress (+0.03%): The strategic pivot toward unified agentic systems and consolidation of advanced products like ChatGPT and Codex represents a focused effort to build more general-purpose, autonomous AI capabilities that are characteristic steps toward AGI.
AGI Date (-1 days): By eliminating "side quests" and concentrating resources on core agentic AI development with explicit organizational focus, OpenAI is likely accelerating its timeline toward more general AI capabilities rather than dispersing efforts across multiple projects.
Musk vs. Altman Trial Concludes Amid Questions About AI Leadership Trust
The trial between Elon Musk and Sam Altman concluded this week, with closing arguments centered on whether the individuals leading AI development can be trusted. The legal proceedings coincide with SpaceX preparing for a potentially massive IPO and an expanding ecosystem of founders emerging from Musk-affiliated companies.
Skynet Chance (+0.01%): The trial highlights ongoing concerns about trustworthiness and accountability of AI leadership, which relates to governance structures that could affect alignment and control mechanisms. However, this is primarily a legal dispute rather than a technical safety failure, resulting in minimal impact.
Skynet Date (+0 days): Legal proceedings and leadership disputes do not directly affect the technical pace of AI capability development or deployment timelines. The trial focuses on corporate governance rather than accelerating or decelerating actual AI development.
AGI Progress (-0.01%): Leadership conflicts and trust issues at major AI organizations like OpenAI could create organizational instability and distraction from core research objectives. However, the impact is minor as technical work likely continues largely unaffected by legal proceedings.
AGI Date (+0 days): Organizational turmoil and legal disputes at leading AI companies may marginally slow progress by diverting leadership attention and resources from research priorities. The effect is small as engineering teams typically operate independently of executive-level legal matters.
Runway Pursues World Models as Next Frontier Beyond Language-Based AI
AI video generation startup Runway, valued at $5.3 billion, is shifting from video generation tools to building world models that learn directly from observational data rather than language. The company believes training AI on video and sensory data represents the next frontier of intelligence, with applications ranging from robotics and drug discovery to climate modeling. Runway faces intense competition from Google, OpenAI, and well-funded startups, though it has raised $860 million and reported a $40 million annual recurring revenue (ARR) run-rate in Q2 2026.
Skynet Chance (+0.04%): Development of world models that can simulate physical reality and predict environmental behavior increases AI's ability to operate autonomously in the real world, potentially complicating control and alignment efforts. The explicit goal of building "a better scientist than human scientists" to "accelerate progress" suggests capabilities that could outpace human oversight.
Skynet Date (-1 days): The shift from language models to world models trained on observational data could accelerate the development of AI systems with broader real-world understanding and autonomy. However, the significant compute requirements and competitive landscape may moderate the pace of this particular approach.
AGI Progress (+0.03%): World models trained on multimodal sensory data represent a significant architectural shift toward more general intelligence, moving beyond language-constrained reasoning to physics-aware understanding of reality. The company's successful deployment in robotics and expansion into scientific applications demonstrates tangible progress toward broader AI capabilities.
AGI Date (-1 days): Multiple well-funded companies simultaneously pursuing world models as a path to AGI (Runway, Google, World Labs, Luma) accelerates the timeline through competitive pressure and parallel research efforts. Runway's growth to $40 million ARR and strategic partnerships with AMD and Nvidia provide the revenue and compute infrastructure to sustain rapid development.
Jury Deliberates Future of OpenAI in Elon Musk Lawsuit Over Nonprofit Mission and For-Profit Conversion
A California jury is deliberating Elon Musk's lawsuit against OpenAI, Sam Altman, and Microsoft, focusing on whether Musk's donations created a charitable trust that was violated when OpenAI established a for-profit entity and accepted a $10 billion Microsoft investment. The case centers on narrow legal questions about donor intent, use of charitable funds, and whether OpenAI's commercial pivot betrayed its original nonprofit mission. The verdict could potentially force OpenAI to restructure away from its current for-profit model, though the specific consequences remain to be determined in subsequent hearings.
Skynet Chance (-0.03%): The lawsuit addresses organizational governance and accountability mechanisms for a leading AI lab, which could marginally improve oversight and alignment with stated safety missions. However, the case is primarily about corporate structure and donor intent rather than technical AI safety measures.
Skynet Date (+1 days): If Musk prevails and OpenAI is forced to restructure away from its for-profit model, it could slow the company's commercial AI development and deployment pace due to reduced funding and operational disruption. However, the impact would be limited to one organization and might simply shift resources elsewhere.
AGI Progress (-0.01%): The legal dispute focuses on corporate governance rather than technical AI capabilities or research breakthroughs. The uncertainty and potential organizational restructuring could marginally distract from research efforts but doesn't fundamentally change the technical path to AGI.
AGI Date (+0 days): A verdict forcing OpenAI to restructure could temporarily slow one of the leading AGI research organizations through operational disruption and potential funding constraints. However, the competitive AI landscape means other organizations would likely continue advancing at their current pace.
Recursive Superintelligence Startup Emerges with $650M to Build Self-Improving AI Systems
Richard Socher has launched Recursive Superintelligence, a San Francisco-based AI startup that emerged from stealth with $650 million in funding, aiming to create recursively self-improving AI models. The company, staffed by prominent AI researchers including Peter Norvig and Tim Shi, is focused on building systems that can autonomously identify their own weaknesses and redesign themselves without human intervention, using an "open-endedness" approach inspired by biological evolution. Socher indicates that products will be released within quarters rather than years.
Skynet Chance (+0.09%): Autonomous self-improving AI systems that can redesign themselves without human oversight directly increase risks of loss of control and alignment challenges, as the system's evolution may diverge from human values. The explicit goal of removing humans from the improvement loop reduces our ability to monitor and correct problematic developments.
Skynet Date (-1 days): The $650M funding and claim of product release within quarters suggests rapid progress toward systems that autonomously improve themselves, potentially accelerating the timeline to scenarios where AI capabilities exceed human control mechanisms. The focus on removing human bottlenecks from AI development could compress timelines significantly.
AGI Progress (+0.06%): Recursive self-improvement represents a fundamental capability leap toward AGI, as it addresses the core challenge of autonomous research and development. The well-funded team of prominent researchers with a concrete technical approach (open-endedness, co-evolution) suggests meaningful progress toward systems that can independently advance their own capabilities.
AGI Date (-1 days): The substantial funding ($650M), high-caliber team, and near-term product timeline (quarters not years) indicate significant acceleration of efforts toward AGI through recursive self-improvement. If successful, such systems could dramatically compress development timelines by automating AI research itself, potentially achieving what Socher calls "superintelligence at scale."
Wirestock Raises $23M to Supply Multi-Modal Creative Data to AI Foundation Model Makers
Wirestock, a platform that originally helped photographers sell stock photos, has pivoted to become a data provider for AI labs, raising $23 million in Series A funding. The company now supplies images, videos, design assets, and 3D content from over 700,000 artists and designers to six major foundation model makers, achieving a $40 million annual revenue run-rate. Wirestock focuses on providing high-quality, annotated multi-modal data for creative AI applications like image and video generation.
Skynet Chance (0%): This news addresses data supply for AI training, which is a capability enhancement factor, but does not directly relate to AI safety, alignment, control mechanisms, or autonomous decision-making that would affect loss of control scenarios. The focus is purely on commercial data procurement for creative applications.
Skynet Date (+0 days): Improved access to high-quality multi-modal training data could marginally accelerate the development of more capable foundation models, though the focus on creative applications rather than reasoning or autonomous systems limits the impact on risk timeline. The effect on pace toward potentially dangerous AI systems is minimal.
AGI Progress (+0.02%): High-quality multi-modal data is crucial for training more capable foundation models, and this represents improved infrastructure for scaling AI systems across images, video, 3D, and potentially audio modalities. However, this is incremental progress in data supply rather than a fundamental breakthrough in AI capabilities or architecture.
AGI Date (+0 days): The availability of specialized, high-quality multi-modal datasets from a professional platform with 700,000 contributors and $40M revenue run-rate moderately accelerates the pace at which AI labs can train and improve their models. This addresses a key bottleneck (quality training data) but represents evolutionary rather than revolutionary progress in the timeline toward AGI.
AI News Calendar
AI Risk Assessment Methodology
Our risk assessment methodology applies a structured analytical framework to evaluate AI developments and their potential implications:
Data Collection
We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.
Impact Analysis
Each news item undergoes rigorous assessment through:
- Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
- Safety Research: Progress in alignment, interpretability, and containment mechanisms
- Governance Factors: Regulatory developments, industry standards, and institutional safeguards
Indicator Calculation
Our indicators are updated using a Bayesian probabilistic model that:
- Assigns weighted impact scores to each analyzed development
- Calculates cumulative effects on control loss probability and AGI timelines
- Accounts for interdependencies between different technological trajectories
- Maintains historical trends to identify acceleration or deceleration patterns
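The update procedure above can be sketched in miniature. This is an illustrative simplification, not the production model: the `Development` fields, weights, and the log-odds approximation are assumptions, chosen only to show how per-item probability deltas (e.g. "+0.04%") and date deltas (e.g. "-1 days") might be folded into the two headline indicators while keeping the probability bounded in (0, 1).

```python
from dataclasses import dataclass
from datetime import date, timedelta
from math import log, exp

@dataclass
class Development:
    """One analyzed news item (field names and weights are illustrative)."""
    prob_delta: float      # e.g. +0.0004 for a "+0.04%" item
    date_delta_days: int   # e.g. -1 for a "-1 days" item
    weight: float = 1.0    # editorial weight for source reliability

def update_indicators(prob: float, est_date: date,
                      items: list[Development]) -> tuple[float, date]:
    """Fold a batch of developments into the two headline indicators.

    Probability deltas are applied in log-odds space (first-order
    approximation), so repeated small updates can never push the
    estimate outside (0, 1); date deltas are summed as weighted
    day shifts.
    """
    logit = log(prob / (1 - prob))
    day_shift = 0.0
    for item in items:
        # d(prob)/d(logit) = prob * (1 - prob), so dividing the
        # percentage-point delta by that slope converts it into an
        # equivalent log-odds nudge.
        logit += item.weight * item.prob_delta / (prob * (1 - prob))
        day_shift += item.weight * item.date_delta_days
    new_prob = 1 / (1 + exp(-logit))
    return new_prob, est_date + timedelta(days=round(day_shift))
```

For example, a "+0.04%" item and a "-0.03%" item on the same day nearly cancel, nudging a 20% baseline up by roughly one hundredth of a percentage point, while their opposing date deltas leave the timeline estimate unchanged.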
This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.