Current AI Risk Assessment
[Dashboard gauges: Chance of AI Control Loss · Estimated Date of Control Loss]
AGI Development Metrics
[Dashboard gauges: AGI Progress · Estimated Date of AGI]
[Chart: Risk Trend Over Time]
Latest AI News (Last 3 Days)
Meta Acquires Moltbook to Develop Agent-to-Agent Commerce Infrastructure
Meta has acquired Moltbook, a social network for AI agents, primarily as an acqui-hire to bring talent into its Superintelligence Labs. The acquisition appears focused on building infrastructure for an "agentic web" where AI agents interact autonomously on behalf of businesses and consumers, potentially enabling agent-to-agent advertising and commerce ecosystems. This move aligns with Meta CEO Mark Zuckerberg's vision that every business will have a dedicated AI agent for customer interaction and transactions.
Skynet Chance (+0.01%): The development of autonomous AI agents that can act independently and negotiate with each other introduces minor coordination and control complexity, though the agents described operate within commercial bounds with human oversight. The risk increase is minimal as these are narrow-purpose agents rather than general autonomous systems.
Skynet Date (+0 days): Meta's investment in autonomous agent infrastructure represents incremental progress toward more independent AI systems, though focused on commercial applications. Any acceleration of autonomous AI deployment is confined to constrained domains and too small to shift the current estimate.
AGI Progress (+0.01%): Building infrastructure for multi-agent coordination and autonomous decision-making represents progress toward more sophisticated AI systems that can operate independently. However, these remain narrow-domain commercial agents rather than general intelligence, so the impact is modest.
AGI Date (+0 days): Meta's strategic focus on agentic systems and dedicated team building (Superintelligence Labs) suggests accelerated investment in autonomous AI capabilities. This acqui-hire and the broader push toward agent ecosystems modestly speed the pace of development toward more capable autonomous systems, though not enough to move the date estimate.
Mira Murati's Thinking Machines Lab Secures Major Nvidia Compute Partnership for AI Development
Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, has signed a multi-year strategic partnership with Nvidia to deploy at least one gigawatt of Vera Rubin systems starting in 2027. The seed-stage company, valued at over $12 billion with $2 billion raised, is developing AI models that produce reproducible results but has not yet released any products.
Skynet Chance (+0.01%): Massive compute scaling enables more powerful AI systems, but the focus on reproducible results could marginally improve control and reliability. The net effect is a slight increase in risk due to capability advancement outweighing the reliability focus.
Skynet Date (-1 day): The deployment of gigawatt-scale compute infrastructure accelerates the timeline for developing more capable AI systems that could pose control challenges. This represents a significant acceleration in available resources for frontier AI development starting in 2027.
AGI Progress (+0.02%): A multi-billion dollar compute deal enabling gigawatt-scale deployments represents substantial progress in the infrastructure necessary for AGI development. The partnership between a well-funded AI lab and leading chip manufacturer signals serious commitment to advancing frontier AI capabilities.
AGI Date (-1 day): Securing gigawatt-scale compute starting in 2027 significantly accelerates the timeline for AGI by providing the computational resources needed for training increasingly capable models. This level of infrastructure investment suggests AGI development could proceed faster than it would without such massive compute availability.
Yann LeCun's AMI Labs Secures $1.03B to Develop World Models as Alternative to LLMs
AMI Labs, co-founded by Turing Award winner Yann LeCun, has raised $1.03 billion at a $3.5 billion valuation to develop world models based on the Joint Embedding Predictive Architecture (JEPA). Unlike traditional large language models, world models aim to learn from reality rather than just language, with initial applications planned in healthcare through partner Nabla. The ambitious project focuses on fundamental research and may take years to produce commercial applications; the startup has committed to open research and code sharing.
Skynet Chance (-0.03%): The focus on world models that understand reality through grounded learning and the emphasis on safety-critical applications like healthcare suggests a more controlled approach to AI development compared to less interpretable LLMs. The commitment to open research also enables broader safety scrutiny, though the fundamental capability advancement carries minimal inherent risk increase.
Skynet Date (+1 day): The multi-year fundamental research timeline and the focus on safer, more grounded AI architectures rather than rapidly deployable products suggest a more deliberate development pace. This measured approach, with extensive testing in real-world scenarios before deployment, pushes potential risk timelines further out.
AGI Progress (+0.04%): World models that learn from reality rather than just language represent a significant architectural shift toward more general intelligence, addressing key LLM limitations such as hallucinations and grounding. The substantial funding ($1.03B) and a heavyweight team including LeCun, plus major backing from Nvidia and other tech giants, indicate serious progress toward systems with broader understanding.
AGI Date (-1 day): The massive billion-dollar funding round, top-tier research talent, and major compute investment significantly accelerate the development of world models as a promising AGI pathway. Despite the multi-year timeline mentioned, the resource commitment and parallel efforts by competitors such as Fei-Fei Li's World Labs suggest this approach is rapidly maturing toward AGI-relevant capabilities.
AI Industry Rallies Behind Anthropic in Pentagon Supply Chain Risk Designation Dispute
Over 30 employees from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's lawsuit against the U.S. Department of Defense, which labeled the AI firm a supply chain risk after it refused to allow use of its technology for mass surveillance or autonomous weapons. The Pentagon subsequently signed a deal with OpenAI, prompting industry-wide concern about government overreach and its implications for AI development guardrails. The employees argue that punishing Anthropic for establishing safety boundaries will harm U.S. AI competitiveness and discourage responsible AI development practices.
Skynet Chance (-0.08%): The industry-wide defense of Anthropic's refusal to enable mass surveillance and autonomous weapons demonstrates collective commitment to safety guardrails, which reduces risks of AI misuse. However, the Pentagon's ability to simply switch to OpenAI shows these safeguards can be bypassed, limiting the positive impact.
Skynet Date (+0 days): The establishment of industry norms around AI safety boundaries and the legal precedent being set may slow deployment of unrestricted AI systems in sensitive applications. However, the DOD's quick pivot to OpenAI suggests minimal delay in government AI adoption.
AGI Progress (0%): This is a governance and ethics dispute that doesn't involve new capabilities, research breakthroughs, or technical limitations relevant to AGI development. The controversy centers on use restrictions rather than technological advancement.
AGI Date (+0 days): Increased regulatory tension and potential legal constraints on AI development could create minor friction in the research environment. However, the continued availability of multiple AI providers to government agencies suggests negligible practical impact on development pace.
Anthropic Deploys AI-Powered Code Review Tool to Manage Surge in AI-Generated Code
Anthropic has launched Code Review, an AI-powered tool integrated into Claude Code that automatically analyzes pull requests to catch bugs and logical errors in AI-generated code. The tool uses multiple AI agents working in parallel to review code from different perspectives, focusing on high-priority logical errors rather than style issues. This product targets enterprise customers dealing with increased code review bottlenecks caused by AI coding tools that rapidly generate large amounts of code.
Skynet Chance (-0.03%): The tool represents a safety measure that adds automated oversight to AI-generated code, potentially catching bugs and security vulnerabilities before they enter production systems. This defensive layer slightly reduces risks associated with poorly understood or buggy AI-generated code reaching critical systems.
Skynet Date (+0 days): While the tool improves code quality oversight, it doesn't fundamentally change AI control mechanisms or safety architectures that would affect the timeline of potential AI risk scenarios. The focus is on practical software quality rather than existential risk mitigation.
AGI Progress (+0.02%): The multi-agent architecture where different AI agents examine code from various perspectives and aggregate findings demonstrates advancing capabilities in AI coordination and specialized reasoning. This represents incremental progress in building systems where multiple AI agents collaborate effectively on complex cognitive tasks.
AGI Date (+0 days): The tool's success in automating complex code review tasks and Anthropic's reported $2.5 billion run-rate revenue demonstrate rapid commercial adoption of AI coding tools, which accelerates AI development cycles and funding. Faster iteration and increased enterprise investment in AI capabilities modestly accelerate the overall pace toward more advanced systems, though not yet enough to shift the date estimate.
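The parallel reviewer pattern described above can be sketched roughly as follows. Everything here is an illustrative assumption, not Anthropic's actual implementation: the reviewer perspectives, the review function (a toy heuristic standing in for a model call), and the severity filter are all hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical reviewer perspectives; the real tool's agents are not public.
PERSPECTIVES = ["logic errors", "security flaws", "API misuse"]

def review(diff: str, perspective: str) -> list:
    """Stand-in for one reviewing agent; in practice this would call a model."""
    # Toy heuristic so the sketch is runnable: flag unresolved TODO markers.
    findings = []
    for lineno, line in enumerate(diff.splitlines(), 1):
        if "TODO" in line:
            findings.append({"line": lineno, "perspective": perspective,
                             "severity": "high", "note": "unresolved TODO"})
    return findings

def review_pull_request(diff: str) -> list:
    """Fan the diff out to parallel reviewers, merge results, keep high-priority findings."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda p: review(diff, p), PERSPECTIVES)
    merged = [f for findings in results for f in findings]
    return [f for f in merged if f["severity"] == "high"]
```

The key idea is the fan-out/aggregate shape: each perspective examines the same change independently, and only high-priority findings survive the merge, which matches the product's stated focus on logical errors over style issues.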
OpenAI Acquires AI Security Startup Promptfoo to Bolster Agent Safety
OpenAI has acquired Promptfoo, an AI security startup founded in 2024 that specializes in protecting large language models from adversaries and testing security vulnerabilities. The acquisition will integrate Promptfoo's technology into OpenAI Frontier, OpenAI's enterprise platform for AI agents, enabling automated red-teaming, security evaluation, and risk monitoring. The deal highlights growing concerns about securing autonomous AI agents as they gain access to sensitive business operations.
Skynet Chance (-0.08%): This acquisition demonstrates proactive investment in security infrastructure and red-teaming capabilities for AI agents, which helps address control and safety vulnerabilities that could lead to unintended harmful behaviors. The focus on monitoring, compliance, and adversarial testing directly mitigates risks of AI systems being exploited or operating outside intended parameters.
Skynet Date (+0 days): Improved security measures reduce risk probability, but they also enable safer deployment of more powerful autonomous agents, allowing capability advancement to continue without pauses for safety concerns. Building and integrating the security infrastructure imposes a slight delay, but the two effects roughly cancel, leaving the timeline estimate unchanged.
AGI Progress (+0.01%): The acquisition supports the development and deployment of more autonomous AI agents by addressing critical security barriers that would otherwise limit their application in enterprise settings. This infrastructure investment enables safer scaling of agentic systems, which are a step toward more general AI capabilities.
AGI Date (+0 days): By reducing security-related deployment barriers for AI agents, this acquisition may accelerate the timeline for widespread autonomous agent adoption and iterative improvement. However, the impact is modest as this addresses infrastructure rather than fundamental capability breakthroughs.
AI Risk Assessment Methodology
Our risk assessment methodology applies a structured analysis framework to evaluate AI development and its potential implications:
Data Collection
We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.
Impact Analysis
Each news item undergoes rigorous assessment through:
- Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
- Safety Research: Progress in alignment, interpretability, and containment mechanisms
- Governance Factors: Regulatory developments, industry standards, and institutional safeguards
Indicator Calculation
Our indicators are updated using a Bayesian probabilistic model that:
- Assigns weighted impact scores to each analyzed development
- Calculates cumulative effects on control loss probability and AGI timelines
- Accounts for interdependencies between different technological trajectories
- Maintains historical trends to identify acceleration or deceleration patterns
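The cumulative-update step in the list above can be sketched in code. This is a minimal illustration under stated assumptions: the Development fields, the clamping bounds, and the function name are hypothetical, and the production model additionally weights interdependencies between trajectories, which this sketch omits.

```python
from dataclasses import dataclass

@dataclass
class Development:
    """Weighted impact scores assigned to one analyzed news item."""
    control_loss_delta: float   # percentage-point change in control-loss probability
    agi_progress_delta: float   # percentage-point change in AGI progress
    agi_date_shift_days: int    # shift in the estimated AGI date (negative = sooner)

def update_indicators(prob: float, progress: float, date_shift_days: int,
                      items: list) -> tuple:
    """Accumulate each development's weighted effect, clamping percentages to [0, 100]."""
    for d in items:
        prob = min(100.0, max(0.0, prob + d.control_loss_delta))
        progress = min(100.0, max(0.0, progress + d.agi_progress_delta))
        date_shift_days += d.agi_date_shift_days
    return prob, progress, date_shift_days
```

For example, applying the Meta item (+0.01%, +0.01%, +0 days) and the AMI Labs item (-0.03%, +0.04%, -1 day) in sequence nudges the control-loss probability down by 0.02 points, raises AGI progress by 0.05 points, and pulls the AGI date forward one day.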
This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.