Current AI Risk Assessment
Chance of AI Control Loss
Estimated Date of Control Loss
AGI Development Metrics
AGI Progress
Estimated Date of AGI
Risk Trend Over Time
Latest AI News (Last 3 Days)
Anthropic Launches AI-Generated Blog "Claude Explains" with Human Editorial Oversight
Anthropic has launched "Claude Explains," a blog where content is primarily generated by its Claude AI model but overseen by human subject matter experts and editorial teams. The initiative represents a collaborative approach between AI and humans for content creation, in line with a broader industry trend of experimenting with AI-generated content despite ongoing accuracy and hallucination problems.
Skynet Chance (+0.01%): This represents incremental progress in AI autonomy for content creation, but with significant human oversight and editorial control, indicating maintained human-in-the-loop processes rather than uncontrolled AI behavior.
Skynet Date (+0 days): The collaborative approach with human oversight and the focus on content generation rather than autonomous decision-making has negligible impact on the timeline toward uncontrolled AI scenarios.
AGI Progress (+0.01%): Demonstrates modest advancement in AI's ability to generate coherent, contextually appropriate content across diverse topics, showing improved natural language generation capabilities that are components of general intelligence.
AGI Date (+0 days): The successful deployment of AI for complex content generation tasks suggests slightly accelerated progress in practical AI applications that contribute to the broader AGI development trajectory.
Yoshua Bengio Establishes $30M Nonprofit AI Safety Lab LawZero
Turing Award winner Yoshua Bengio has launched LawZero, a nonprofit AI safety lab that raised $30 million from prominent tech figures and organizations including Eric Schmidt and Open Philanthropy. The lab aims to build safer AI systems, with Bengio expressing skepticism about commercial AI companies' commitment to safety over competitive advancement.
Skynet Chance (-0.08%): The establishment of a well-funded nonprofit AI safety lab by a leading AI researcher represents a meaningful institutional effort to address alignment and safety challenges that could reduce uncontrolled AI risks. However, the impact is moderate as it's one organization among many commercial entities racing ahead.
Skynet Date (+1 day): The focus on safety research and Bengio's skepticism of commercial AI companies suggest this initiative may contribute to slowing the rush toward potentially dangerous AI capabilities without adequate safeguards. The significant funding indicates serious commitment to safety-first approaches.
AGI Progress (-0.01%): While LawZero aims to build safer AI systems rather than halt progress entirely, the emphasis on safety over capability advancement may slightly slow overall AGI development. The nonprofit model prioritizes safety research over breakthrough capabilities.
AGI Date (+0 days): The lab's safety-focused mission and Bengio's criticism of the commercial AI race suggest a push for more cautious development approaches, which could moderately slow the pace toward AGI. However, this represents only one voice among many rapidly advancing commercial efforts.
Chinese AI Lab DeepSeek Allegedly Used Google's Gemini Data for Model Training
Chinese AI lab DeepSeek is suspected of training its latest R1-0528 reasoning model using outputs from Google's Gemini AI, based on linguistic similarities and behavioral patterns observed by researchers. This follows previous accusations that DeepSeek trained on data from rival AI models including ChatGPT, with OpenAI claiming evidence of data distillation practices. AI companies are now implementing stronger security measures to prevent such unauthorized data extraction and model distillation.
Skynet Chance (+0.01%): Unauthorized data extraction and model distillation practices suggest weakening of AI development oversight and control mechanisms. This erosion of industry boundaries and intellectual property protections could lead to less careful AI development practices.
Skynet Date (-1 day): Data distillation techniques allow rapid AI capability advancement without traditional computational constraints, potentially accelerating the pace of AI development. Chinese labs bypassing Western AI safety measures could speed up overall AI progress timelines.
AGI Progress (+0.02%): DeepSeek's model demonstrates strong performance on math and coding benchmarks, indicating continued progress in reasoning capabilities. The successful use of distillation techniques shows viable pathways for achieving advanced AI capabilities with fewer computational resources.
AGI Date (-1 day): Model distillation techniques enable faster AI development by leveraging existing advanced models rather than training from scratch. This approach allows resource-constrained organizations to achieve sophisticated AI capabilities more quickly than traditional methods would allow.
Microsoft Integrates OpenAI's Sora Video Generation Model into Bing for Free Access
Microsoft has integrated OpenAI's Sora video generation model into its Bing app, offering users the ability to create AI-generated videos from text prompts for free. This marks the first time Sora has been made available without payment, though users are limited to ten free videos before needing to use Microsoft Rewards points. The feature currently supports only five-second vertical videos with lengthy generation times.
Skynet Chance (+0.01%): Democratizing access to advanced AI video generation capabilities increases the potential for misuse and misinformation campaigns. However, the limited functionality and controlled rollout provide some safeguards against immediate harmful applications.
Skynet Date (+0 days): Making sophisticated AI tools freely accessible accelerates public exposure to advanced AI capabilities and normalizes their use. This gradual integration into mainstream platforms slightly accelerates the timeline toward more powerful AI systems becoming ubiquitous.
AGI Progress (+0.01%): The commercial deployment of multimodal AI systems like Sora represents meaningful progress in AI capabilities beyond text generation. This integration demonstrates advancing proficiency in cross-modal understanding and generation, which are important components of AGI.
AGI Date (+0 days): The widespread commercial deployment of advanced AI models through major platforms like Microsoft Bing accelerates the development cycle and data collection feedback loops. This faster iteration and broader user testing can accelerate progress toward more sophisticated AI systems.
Meta Automates 90% of Product Risk Assessments Using AI Systems
Meta plans to use AI-powered systems to automatically evaluate potential harms and privacy risks for up to 90% of updates to its apps like Instagram and WhatsApp, replacing human evaluators. The new system would deliver instant decisions on AI-identified risks via questionnaires, allowing faster product updates but, according to former executives, potentially creating higher risks.
Skynet Chance (+0.04%): Automating risk assessment reduces human oversight of AI systems' safety evaluations, potentially allowing harmful features to pass through automated filters that lack nuanced understanding of complex risks.
Skynet Date (+0 days): The acceleration of product deployment through automated reviews could lead to faster iteration and deployment of AI features, slightly accelerating the timeline for advanced AI systems.
AGI Progress (+0.01%): This represents practical application of AI for complex decision-making tasks like risk assessment, demonstrating incremental progress in AI's ability to handle sophisticated evaluations previously requiring human judgment.
AGI Date (+0 days): Meta's investment in automated decision-making systems reflects continued industry push toward AI automation, contributing marginally to the pace of AI development across practical applications.
Google Launches AI Edge Gallery App for Local Model Execution on Mobile Devices
Google has quietly released an experimental app called AI Edge Gallery that allows users to download and run AI models from Hugging Face directly on their Android phones without internet connectivity. The app enables local execution of various AI tasks including image generation, question answering, and code editing using models like Google's Gemma 3n. The app is currently in alpha and will soon be available for iOS, with performance varying based on device hardware and model size.
Skynet Chance (-0.03%): Local AI execution reduces dependency on centralized cloud systems and gives users more control over their data and AI interactions. This decentralization slightly reduces risks associated with centralized AI control mechanisms.
Skynet Date (+0 days): This is a deployment optimization rather than a capability advancement, so it doesn't meaningfully accelerate or decelerate the timeline toward potential AI control scenarios.
AGI Progress (+0.01%): Democratizing access to AI models and enabling broader experimentation through local deployment represents incremental progress in AI adoption and accessibility. However, the models themselves aren't fundamentally more capable than existing ones.
AGI Date (+0 days): By making AI models more accessible to developers and users for experimentation and development, this could slightly accelerate overall AI research and development pace through increased adoption and use cases.
AI News Calendar
AI Risk Assessment Methodology
Our risk assessment methodology applies a structured analysis framework to evaluate AI developments and their potential implications:
Data Collection
We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.
Impact Analysis
Each news item undergoes rigorous assessment through:
- Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
- Safety Research: Progress in alignment, interpretability, and containment mechanisms
- Governance Factors: Regulatory developments, industry standards, and institutional safeguards
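As an illustration, the three assessment dimensions above can be folded into a single per-item impact score. The dimension weights and the -1.0 to 1.0 scoring scale below are hypothetical, not the dashboard's actual parameters:

```python
# Hypothetical per-item impact scoring across the three assessment dimensions.
# Weights and the [-1.0, 1.0] scale are illustrative assumptions only.
DIMENSION_WEIGHTS = {
    "technical": 0.5,   # computational advances, algorithmic breakthroughs
    "safety": 0.3,      # alignment, interpretability, containment progress
    "governance": 0.2,  # regulation, standards, institutional safeguards
}

def impact_score(scores: dict) -> float:
    """Weighted average of per-dimension scores, each in [-1.0, 1.0]."""
    for name, s in scores.items():
        if not -1.0 <= s <= 1.0:
            raise ValueError(f"{name} score {s} outside [-1.0, 1.0]")
    return sum(DIMENSION_WEIGHTS[d] * scores.get(d, 0.0)
               for d in DIMENSION_WEIGHTS)

# Example: a capability advance with a slight safety downside.
item = {"technical": 0.4, "safety": -0.1, "governance": 0.0}
print(round(impact_score(item), 3))  # 0.17
```

A positive score nudges the risk indicators upward, a negative one downward; the magnitude caps how much any single news item can move the dashboard.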
Indicator Calculation
Our indicators are updated using a Bayesian probabilistic model that:
- Assigns weighted impact scores to each analyzed development
- Calculates cumulative effects on control loss probability and AGI timelines
- Accounts for interdependencies between different technological trajectories
- Maintains historical trends to identify acceleration or deceleration patterns
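A minimal sketch of how a weighted impact score might update a probability indicator such as "chance of control loss." Performing the update in log-odds space keeps the indicator strictly between 0 and 1; the `sensitivity` constant is an illustrative assumption, not the model's actual parameter:

```python
import math

def update_probability(prior: float, impact: float,
                       sensitivity: float = 0.05) -> float:
    """Shift a probability indicator in log-odds space by a weighted impact.

    `impact` is an item's weighted score in [-1, 1]; `sensitivity` (an
    assumed constant) bounds how far a single item can move the indicator.
    """
    log_odds = math.log(prior / (1.0 - prior))
    log_odds += sensitivity * impact
    return 1.0 / (1.0 + math.exp(-log_odds))

# Cumulative effect of several developments on a 30% prior.
p = 0.30
for impact in (0.17, -0.40, 0.05):
    p = update_probability(p, impact)
print(round(p, 4))
```

Applying items sequentially like this yields the cumulative effects described above, while the log-odds form prevents any run of news from pushing the indicator past 0% or 100%.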
This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.