Current AI Risk Assessment
Chance of AI Control Loss
Estimated Date of Control Loss
AGI Development Metrics
AGI Progress
Estimated Date of AGI
Risk Trend Over Time
Latest AI News (Last 3 Days)
OpenAI Releases Advanced Real-Time Voice API with GPT-5-Class Reasoning and Multi-Language Translation
OpenAI announced new voice intelligence features for its API, including GPT-Realtime-2 with GPT-5-class reasoning for complex conversational requests, GPT-Realtime-Translate supporting 70+ input languages, and GPT-Realtime-Whisper for live transcription. These features are designed to enable voice interfaces that can listen, reason, translate, transcribe, and take action in real time across enterprise applications including customer service, education, and media.
Skynet Chance (+0.04%): The integration of advanced reasoning capabilities (GPT-5-class) into real-time voice systems that can "listen, reason, and take action" increases AI autonomy in interactive contexts, though built-in guardrails partially mitigate immediate risks. The potential for misuse in fraud and the system's ability to act conversationally introduce modest control and alignment concerns.
Skynet Date (-1 days): Real-time reasoning and action-taking capabilities in commercially deployed voice systems accelerate the deployment of autonomous AI agents in real-world scenarios. This incremental advancement in multi-modal AI autonomy modestly accelerates the timeline for more capable and potentially harder-to-control systems.
AGI Progress (+0.03%): The deployment of GPT-5-class reasoning in real-time voice interactions represents progress toward multi-modal AGI capabilities, combining language understanding, reasoning, and real-time sensory processing. The ability to simultaneously reason, translate, and take action during conversations demonstrates advancing integration of multiple cognitive functions.
AGI Date (-1 days): The commercial availability of GPT-5-class reasoning capabilities (even in specialized voice applications) suggests faster-than-expected progress in deploying advanced reasoning systems. This indicates OpenAI's next-generation models are reaching production readiness, accelerating the timeline toward more general reasoning systems.
OpenAI Safety Practices Scrutinized in Musk Lawsuit as Former Employees Testify About Shift from Research to Product Focus
Elon Musk's lawsuit against OpenAI brought testimony from former employee Rosie Campbell and board member Tasha McCauley about the company's shift from safety-focused research to product development. Campbell described how safety teams were disbanded and safety protocols were bypassed, including Microsoft's premature deployment of GPT-4 in India. The case examines whether OpenAI's transformation into a major for-profit company violated its founding mission to ensure AGI benefits humanity safely.
Skynet Chance (+0.04%): The testimony reveals OpenAI disbanded safety teams, bypassed safety review processes, and prioritized product deployment over safety protocols, indicating weakened safeguards at a leading AGI lab. This erosion of safety culture and governance oversight at a frontier AI organization increases risks of uncontrolled AI deployment.
Skynet Date (-1 days): The shift toward rapid product deployment and weakening of safety review processes suggests accelerated release of advanced AI systems without adequate safety evaluation. However, the legal scrutiny and calls for stronger regulation may create some countervailing pressure toward more cautious development.
AGI Progress (+0.01%): The organizational shift toward product focus and reduced emphasis on foundational safety research suggests resources are being redirected toward commercialization rather than core AGI research. However, the company continues advancing capabilities while maintaining some safety framework, representing modest continued progress.
AGI Date (+0 days): The prioritization of product deployment over research-focused development indicates a push for faster commercialization of existing capabilities. However, this represents application of current technology rather than fundamental acceleration of AGI timeline, hence minimal impact on actual AGI achievement pace.
Anthropic's Mythos AI Model Revolutionizes Firefox Vulnerability Detection
Anthropic's Mythos model has significantly enhanced Firefox's cybersecurity by discovering thousands of high-severity bugs, including some over a decade old, with Mozilla reporting a 13x increase in bug fixes compared to the previous year. The AI system excels at finding complex sandbox vulnerabilities that traditionally commanded $20,000 bounties, though human engineers are still required to write the actual patches. The advancement marks a turning point for AI security tools, which previously suffered from high false positive rates.
Skynet Chance (+0.04%): The capability to autonomously discover complex software vulnerabilities demonstrates advanced agentic reasoning and multi-step planning abilities that could be applied to finding and exploiting security flaws in AI safety mechanisms themselves. However, the model's use under responsible disclosure norms and the fact that patching still requires human oversight provides some mitigation.
Skynet Date (-1 days): The demonstrated agentic capabilities and multi-step reasoning required to find sandbox vulnerabilities suggest faster progress in autonomous AI systems that can navigate complex problem spaces. These gains in practical AI agent capabilities could shorten the timeline to more advanced autonomous systems.
AGI Progress (+0.03%): The model's ability to perform complex multi-step reasoning, write code, attack systems creatively, and self-assess its work represents meaningful progress toward AGI-relevant capabilities like autonomous problem-solving and task decomposition. The shift from low-quality AI security tools to highly effective ones in just months indicates rapid capability gains.
AGI Date (-1 days): The rapid improvement in agentic AI capabilities over "a few short months" and the model's ability to outperform human experts in complex vulnerability discovery suggests an accelerating pace of AI capability development. The dramatic improvement from previous AI security tools indicates faster-than-expected progress in practical reasoning systems.
Moonshot AI Secures $2B Funding Round at $20B Valuation Amid Surge in Open-Source AI Demand
Chinese AI company Moonshot AI has raised approximately $2 billion at a $20 billion valuation, led by Meituan's VC arm, bringing its six-month total to $3.9 billion. The company, founded in 2023, develops the popular Kimi series of open-weight large language models that compete with OpenAI, Google, and Anthropic, achieving over $200 million in annual recurring revenue by April 2026. The funding reflects growing investor appetite for open-source AI models from Chinese labs, with competitors like DeepSeek and Zhipu AI also experiencing significant valuation increases.
Skynet Chance (+0.01%): Increased funding and proliferation of open-weight models could make advanced AI capabilities more widely accessible and harder to control, though the models currently lag behind frontier systems. The democratization of AI through open-source releases presents modest dual-use concerns.
Skynet Date (+0 days): Significant capital influx ($3.9B in six months) accelerates development of competitive open-weight models, potentially speeding the timeline for widely distributed capable AI systems. The competitive pressure from well-funded Chinese labs may also accelerate the overall pace of AI development globally.
AGI Progress (+0.02%): Moonshot's Kimi models demonstrate that competitive AI capabilities can be developed with less capital than Western counterparts require, showing efficiency gains in training and deployment. The rapid scaling from its 2023 founding to near-frontier performance by 2026 indicates progress in practical AGI-relevant capabilities.
AGI Date (+0 days): The $3.9 billion raised in six months and $200M+ ARR demonstrates strong commercial viability accelerating AI development cycles. Increased competition and capital flowing into multiple Chinese AI labs (Moonshot, DeepSeek, Zhipu) intensifies the global race toward AGI, compressing timelines.
AI Industry Leaders Discuss Infrastructure Bottlenecks, Energy Constraints, and Alternative Architectures at Milken Conference
Leaders from across the AI supply chain convened at the Milken Global Conference to discuss critical challenges facing AI development, including severe chip shortages expected to last 3-5 years, energy constraints prompting exploration of space-based data centers, and physical limitations in training real-world AI systems. The panel also explored alternative AI architectures like energy-based models that could run thousands of times faster than large language models, and discussed geopolitical sovereignty concerns around physical AI deployment.
Skynet Chance (+0.04%): The discussion reveals AI systems are expanding into physical domains (autonomous vehicles, defense drones, mining equipment) where consequences are immediate and tangible, while agent systems with read-write permissions are being deployed in corporate environments with potential control challenges. The move toward autonomous "digital workers" and physical AI systems operating in the real world increases surface area for loss of control scenarios.
Skynet Date (+1 days): Severe supply constraints (chip shortages expected for 3-5 years, energy limitations, and real-world data bottlenecks for physical AI training) are significantly slowing the pace of AI capability deployment. These infrastructure bottlenecks act as natural brakes on rapid AI advancement, pushing potential risk scenarios further into the future.
AGI Progress (+0.03%): The emergence of alternative architectures like energy-based models that claim to reason about underlying rules rather than pattern-match, plus the integration of AI into physical world applications requiring true understanding of physics and causality, represents meaningful progress toward more general intelligence. Google's vertical integration strategy and the evolution from search tools to autonomous "digital workers" also indicate advancement toward more capable, general-purpose AI systems.
AGI Date (+1 days): Multiple severe bottlenecks are constraining AGI development pace: chip supply limitations lasting 3-5 years, energy infrastructure constraints prompting extreme solutions like orbital data centers, and the irreplaceable need for real-world data that cannot be fully synthesized. These physical and resource constraints significantly decelerate the timeline toward AGI despite strong demand and investment.
Media Mogul Barry Diller Warns Trust in AI Leaders Irrelevant as AGI Approaches
Barry Diller, billionaire media mogul, stated at a WSJ conference that while he trusts OpenAI CEO Sam Altman's intentions, trust is irrelevant as AI development approaches AGI with unpredictable consequences. Diller emphasized that even AI creators don't fully understand what will happen once AGI is achieved, warning that without human-imposed guardrails, AGI systems may establish their own controls with irreversible consequences.
Skynet Chance (+0.04%): A prominent industry figure publicly acknowledging that AI creators themselves don't understand AGI consequences and warning about AGI establishing its own guardrails highlights the real alignment and control challenges, moderately increasing perceived loss of control risks.
Skynet Date (-1 days): Diller's statement that "we're close to it" and "getting closer and closer, quicker and quicker" to AGI, coming from someone with access to AI leaders, suggests the timeline may be accelerating faster than publicly understood, slightly advancing the perceived risk timeline.
AGI Progress (+0.03%): The assertion by a well-connected industry insider that AGI is approaching "closer and closer, quicker and quicker" and "we're close to it" indicates significant progress toward AGI is being made, representing a meaningful update on the current state of development.
AGI Date (-1 days): Diller's characterization of rapid and accelerating progress toward AGI, combined with his direct access to AI leaders like Altman, suggests the timeline to AGI achievement may be shorter than previously estimated, moderately accelerating the expected timeline.
SpaceX and xAI Plan Massive $119B 'Terafab' Chip Manufacturing Facility for AI and Space Computing
SpaceX and xAI are considering building a semiconductor factory called 'Terafab' in Texas with potential investment of up to $119 billion, partnering with Intel to manufacture chips for AI servers, satellites, space data centers, and autonomous vehicles. Elon Musk claims the facility is necessary because current semiconductor manufacturers cannot meet his companies' AI and robotics chip demands, with a goal of eventually producing enough chips each year to deliver 1 terawatt of compute power. The project reflects Musk's strategy to ensure sufficient computing power for xAI's Grok AI models and plans for space-based data centers.
Skynet Chance (+0.04%): Massive vertical integration of chip manufacturing with AI development reduces external oversight and creates concentrated control over critical AI infrastructure, potentially enabling less constrained AI development. However, this is primarily about compute availability rather than fundamentally changing safety approaches.
Skynet Date (-1 days): The planned facility aims to dramatically increase chip production specifically optimized for AI workloads, which would accelerate AI capability development by removing compute bottlenecks. However, the facility is years away from production, limiting near-term timeline impact.
AGI Progress (+0.03%): Dedicated semiconductor manufacturing infrastructure targeting 1 terawatt of compute capacity per year represents a significant commitment to scaling AI compute, directly addressing a key constraint on training larger and more capable AI systems. This vertical integration could enable more ambitious AI projects unconstrained by chip availability.
AGI Date (-1 days): The facility specifically aims to remove chip supply bottlenecks that Musk identifies as limiting AI development speed, potentially accelerating AGI timelines once operational. The multi-year construction timeline means acceleration effects are delayed but could be substantial in the 2030s timeframe.
DeepSeek Valuation Soars to $45B in First Funding Round Amid Chinese AI Competition
DeepSeek is raising its first venture capital round at a potential $45 billion valuation, led by Chinese state investment funds and tech giants Tencent and Alibaba. The Chinese AI lab gained prominence for developing efficient large language models that match top U.S. models while using significantly less compute and running on Huawei chips. The funding aims to retain talent through equity compensation amid intense competition for AI researchers.
Skynet Chance (+0.01%): State-backed funding and optimization for domestic chips suggests less transparent development with potentially fewer international safety collaborations, though DeepSeek's open weight approach provides some visibility. The geopolitical fragmentation of AI development could complicate coordination on safety standards.
Skynet Date (+0 days): While the funding enables continued development, DeepSeek's efficiency-focused approach doesn't fundamentally change the pace toward dangerous capabilities compared to the existing trajectory. The focus on talent retention is defensive rather than dramatically accelerating.
AGI Progress (+0.01%): DeepSeek's ability to match leading models with dramatically reduced compute demonstrates algorithmic efficiency improvements that make advanced AI more accessible and sustainable. The $45 billion valuation and state backing validate the viability of efficiency-focused paths to AGI.
AGI Date (+0 days): The funding enables DeepSeek to scale its efficient model development and retain talent, modestly accelerating Chinese AGI efforts. However, this represents competitive catch-up rather than breakthrough acceleration, as they're already keeping pace with U.S. models.
Genesis AI Unveils GENE-26.5 Foundation Model with Custom Robotic Hands and Data Collection Gloves
Genesis AI has revealed its first foundational robotics model, GENE-26.5, alongside custom-designed robotic hands that match human hand size and shape. The startup has developed a full-stack approach including sensor-loaded gloves for data collection from human workers, simulation systems for rapid iteration, and plans to release a full-body general-purpose robot soon. The company raised $105 million in seed funding and is expanding across Paris, California, and London with a team of 60 people.
Skynet Chance (+0.04%): The development of general-purpose robotic systems with human-like manipulation capabilities and autonomous task execution increases the potential attack surface and deployment scale of AI systems that could be misused or develop unintended behaviors. However, the current focus on specific tasks and human supervision mitigates immediate control concerns.
Skynet Date (-1 days): The full-stack approach combining hardware, software, and rapid data collection methods accelerates the deployment timeline for capable robotic systems in real-world environments. Simulation-based rapid iteration and novel data collection via sensor gloves worn by workers could speed up capability development.
AGI Progress (+0.04%): This represents significant progress toward AGI by bridging the embodiment gap through human-scale manipulation, multimodal learning from video and physical interaction data, and demonstrated ability to perform complex sequential tasks. The foundation model approach for robotics parallels the successful trajectory of language models.
AGI Date (-1 days): The combination of scalable data collection methods (gloves worn during normal work, internet videos), rapid simulation-based iteration, and full-stack control significantly accelerates the pace toward general-purpose physical intelligence. The startup's massive funding and aggressive hiring across three continents enables parallel development that could compress typical research timelines.
Apple iOS 27 to Feature Multi-Model AI Extensions for User Choice
Apple is reportedly planning to introduce "Extensions" in iOS 27, allowing users to choose from multiple third-party large language models to power Apple Intelligence features like Siri and Writing Tools. Models from Google and Anthropic are currently being tested, with the feature also coming to iPadOS 27 and macOS 27. This strategy positions Apple to offer AI capabilities through hardware integration rather than building extensive proprietary AI infrastructure.
Skynet Chance (-0.03%): Distributing AI capabilities across multiple competing models and giving users choice creates a more fragmented, less centralized AI ecosystem, which marginally reduces concentration of control risks. However, the impact is minimal as these are still commercial LLMs with existing safety constraints.
Skynet Date (+0 days): This is primarily a distribution and integration strategy rather than a fundamental capability advancement, having negligible impact on the timeline toward potential AI control concerns. The underlying models' capabilities remain unchanged by this deployment approach.
AGI Progress (+0.01%): Widespread deployment of multiple advanced LLMs on billions of devices represents incremental progress in AI accessibility and integration, though it doesn't fundamentally advance core capabilities. This demonstrates maturation of existing AI technology into consumer products.
AGI Date (+0 days): Increased deployment and real-world usage of multiple LLMs across Apple's massive user base could accelerate data collection and feedback loops for model improvement, though the effect is modest. Apple's focus on hardware integration over infrastructure investment may slightly accelerate practical AI adoption timelines.
OpenAI Deploys GPT-5.5 Instant as New ChatGPT Default with Enhanced Reasoning and Context Management
OpenAI has released GPT-5.5 Instant as the new default ChatGPT model, replacing GPT-5.3 Instant, with claimed improvements in reducing hallucinations in sensitive domains and enhanced performance on mathematical and multimodal reasoning benchmarks. The model features advanced context management capabilities, allowing it to reference past conversations, files, and email for personalized responses, initially available to Plus and Pro users. The company is making the model available via API while phasing out support for older versions, continuing a pattern that has previously generated user backlash due to emotional attachment to specific model personalities.
Skynet Chance (+0.01%): Improved context management and memory integration increases the model's ability to maintain long-term state and personalized interactions, which represents modest progress toward more autonomous and persistent AI systems. However, the focus on reducing hallucinations in sensitive domains demonstrates continued emphasis on reliability and control mechanisms.
Skynet Date (+0 days): The enhanced context awareness and ability to integrate multiple information sources represents incremental progress toward more capable autonomous systems, slightly accelerating the timeline. The deployment as a commercial default suggests these capabilities are becoming standardized more quickly than expected.
AGI Progress (+0.02%): Significant improvements in mathematical reasoning (81.2 vs 65.4 on AIME 2025) and multimodal reasoning benchmarks indicate meaningful progress toward general cognitive capabilities. The advanced context management allowing integration across conversations, files, and external data sources represents a step toward more coherent, persistent intelligence.
AGI Date (+0 days): The rapid iteration from GPT-5.3 to GPT-5.5 Instant, combined with substantial performance gains on reasoning benchmarks, suggests OpenAI is maintaining an aggressive development pace. The quick commercialization of advanced context management features indicates faster-than-baseline deployment of AGI-relevant capabilities.
AI News Calendar
AI Risk Assessment Methodology
Our risk assessment methodology uses a structured analysis framework to evaluate AI development and its potential implications:
Data Collection
We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.
Impact Analysis
Each news item undergoes rigorous assessment through:
- Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
- Safety Research: Progress in alignment, interpretability, and containment mechanisms
- Governance Factors: Regulatory developments, industry standards, and institutional safeguards
Indicator Calculation
Our indicators are updated using a Bayesian probabilistic model that:
- Assigns weighted impact scores to each analyzed development
- Calculates cumulative effects on control loss probability and AGI timelines
- Accounts for interdependencies between different technological trajectories
- Maintains historical trends to identify acceleration or deceleration patterns
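As a purely illustrative sketch of how the cumulative update described above could work (the class names, the uniform weighting, and the clamping to a percentage range are assumptions for illustration, not the production model), each analyzed development contributes a weighted delta to the control-loss probability and the timeline estimate:

```python
from dataclasses import dataclass

@dataclass
class Development:
    """One analyzed news item with its assessed impacts."""
    risk_delta_pct: float     # change in control-loss probability, e.g. +0.04
    timeline_delta_days: int  # shift in the estimated date, e.g. -1
    weight: float = 1.0       # hypothetical source/severity weighting

def update_indicators(prior_risk_pct: float,
                      prior_days_remaining: int,
                      developments: list[Development]) -> tuple[float, int]:
    """Accumulate weighted impact scores onto the prior indicator values,
    clamping the probability to the valid [0, 100] percent range."""
    risk = prior_risk_pct
    days = prior_days_remaining
    for d in developments:
        risk += d.weight * d.risk_delta_pct
        days += round(d.weight * d.timeline_delta_days)
    return max(0.0, min(100.0, risk)), days

# Example: three items like those above nudge the indicators slightly.
risk, days = update_indicators(25.0, 3000, [
    Development(+0.04, -1),
    Development(+0.01, 0),
    Development(-0.03, 0),
])
```

In this sketch the per-item deltas simply sum; a fuller Bayesian treatment would also model the interdependencies between developments noted above rather than treating each item as independent evidence.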
This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.