Current AI Risk Assessment
Chance of AI Control Loss
Estimated Date of Control Loss
AGI Development Metrics
AGI Progress
Estimated Date of AGI
Risk Trend Over Time
Latest AI News (Last 3 Days)
Google Cloud Unveils Specialized TPU 8t and TPU 8i Chips for AI Training and Inference
Google Cloud announced its eighth-generation Tensor Processing Units (TPUs), splitting the lineup into two specialized chips: TPU 8t for model training and TPU 8i for inference. The new chips promise 3x faster training, 80% better performance per dollar, and support for clusters exceeding 1 million TPUs. Despite this advancement, Google continues to offer Nvidia's latest chips alongside its own custom processors, with both companies collaborating on networking optimization.
Skynet Chance (+0.01%): Increased availability of powerful, cost-effective AI compute infrastructure makes large-scale AI deployment more accessible, slightly increasing proliferation risks. However, the incremental nature of this hardware improvement and continued focus on commercial cloud services suggests minimal impact on fundamental AI control challenges.
Skynet Date (+0 days): More efficient and scalable compute infrastructure modestly accelerates the timeline for deploying powerful AI systems at scale. The ability to cluster 1 million+ TPUs together enables larger training runs, though this represents evolutionary rather than revolutionary progress.
AGI Progress (+0.02%): Significant improvements in training speed (3x faster) and scalability (1 million+ TPU clusters) directly enable larger model training runs and more rapid experimentation cycles. Better performance-per-dollar economics removes some resource constraints that might otherwise slow AGI research progress.
AGI Date (+0 days): The combination of faster training, massive scalability, and improved cost-efficiency accelerates the pace at which researchers can iterate on large models and test AGI-relevant architectures. Reduced infrastructure costs lower barriers for organizations pursuing AGI research, which could compress timelines, though not yet enough to shift the current estimate.
Google Integrates Gemini AI Agent into Enterprise Chrome Browser with Auto-Browse Capabilities
Google announced it will integrate Gemini AI-powered "auto browse" agentic capabilities into Chrome for enterprise users, enabling the AI to perform tasks like booking travel, data entry, and meeting scheduling across browser tabs. The feature requires human approval before final actions and will be available to Workspace users in the U.S., with Google also introducing security measures to detect unsanctioned AI tools in the workplace. Google emphasizes this will free workers for strategic tasks, though studies suggest AI may actually intensify workloads rather than reduce them.
Skynet Chance (+0.04%): The deployment of autonomous AI agents in enterprise environments that can take actions across multiple systems increases the surface area for potential loss of control, though the mandatory human-in-the-loop approval requirement provides a meaningful safety constraint. The detection and blocking of "unsanctioned" AI tools suggests growing complexity in managing multiple autonomous systems.
Skynet Date (-1 days): The mainstreaming of AI agents into everyday workplace tools accelerates the integration of autonomous AI systems into critical infrastructure and business processes. This normalization of agent-based AI could incrementally speed the path toward more capable autonomous systems.
AGI Progress (+0.03%): This represents a significant step in deploying multi-modal AI agents that can understand context across multiple browser tabs and execute complex multi-step workflows autonomously. The ability to handle diverse tasks like CRM data entry, price comparison, and scheduling demonstrates progress toward more general-purpose AI assistance.
AGI Date (-1 days): Google's deployment of agentic AI capabilities into its widely-used Chrome browser accelerates real-world testing and iteration of autonomous AI systems at massive scale. The enterprise rollout will generate substantial data and feedback that could accelerate development of more capable agent architectures.
Google Launches Gemini Enterprise Agent Platform for IT Teams at Cloud Next Conference
Google announced its Gemini Enterprise Agent Platform at the Cloud Next conference, a tool designed for building and managing AI agents at enterprise scale, positioning it as a competitor to Amazon Bedrock AgentCore and Microsoft Foundry. The platform is specifically targeted at IT and technical teams, while business users are directed to the separate Gemini Enterprise app for simpler agent-based tasks. The platform supports multiple models including Google's Gemini and Anthropic's Claude family (Opus, Sonnet, and Haiku).
Skynet Chance (+0.01%): Enterprise-scale agent deployment tools increase the surface area for potential loss of control or misalignment, though the focus on managed IT environments with human oversight provides some containment. The magnitude remains small as this is deployment infrastructure rather than capability advancement.
Skynet Date (+0 days): Platform tools that make agent deployment easier and more widespread could modestly accelerate the timeline for AI systems operating with increasing autonomy in critical infrastructure. However, the enterprise focus with IT oversight limits the acceleration effect.
AGI Progress (+0.01%): The release demonstrates progress in orchestrating multiple AI models and building practical agentic systems that can perform multi-step tasks autonomously, which are prerequisites for AGI. However, this is infrastructure for existing models rather than fundamental capability advancement.
AGI Date (+0 days): By providing enterprise-ready tools for agent deployment and making multi-model orchestration accessible, Google accelerates the practical application and scaling of agentic AI systems. This commercial infrastructure helps move agentic AI from research to production faster, though as deployment tooling rather than a capability advance it does not move the estimate itself.
Thinking Machines Lab Secures Multi-Billion Dollar Google Cloud Deal for Advanced AI Infrastructure
Mira Murati's startup Thinking Machines Lab has signed a multi-billion-dollar agreement with Google Cloud for access to advanced AI infrastructure, including systems powered by Nvidia's latest GB300 GPUs. The deal supports the company's reinforcement learning workloads for Tinker, a tool that automates the creation of custom frontier AI models, and marks Google's strategy to lock in emerging AI labs early. Thinking Machines previously raised $2 billion at a $12 billion valuation and this represents its first major cloud provider partnership.
Skynet Chance (+0.06%): Automating the creation of frontier AI models through tools like Tinker could democratize access to powerful AI capabilities and reduce human oversight in the model development process. This automation of AI creation, combined with massive computational resources, increases risks of misaligned or uncontrollable systems being developed at scale with less deliberate safety consideration.
Skynet Date (-1 days): The combination of multi-billion-dollar compute deals, 2X faster GB300 GPUs, and automated frontier model creation tools significantly accelerates the pace at which powerful AI systems can be developed and deployed. The scale of investment and infrastructure access suggests capability advancement is outpacing safety research development.
AGI Progress (+0.05%): Tinker's ability to automate creation of custom frontier models represents meaningful progress toward generalizable AI systems, while the reinforcement learning focus aligns with approaches that have driven recent breakthroughs at DeepMind and OpenAI. The massive computational resources (multi-billion-dollar scale) enable exploration of capability frontiers previously inaccessible.
AGI Date (-1 days): The deal provides access to cutting-edge GB300 infrastructure offering 2X training speed improvements, combined with a tool that automates frontier model creation, substantially accelerating the pace of AGI research. Multi-billion-dollar compute commitments to reinforcement learning workloads enable dramatically faster iteration cycles on AGI-relevant approaches.
Meta Harvests Employee Keystroke Data to Train AI Models
Meta plans to use data from its employees' mouse movements and keystrokes as training data for its AI models, according to a Reuters report. This practice highlights the AI industry's growing need for new training data sources and raises significant privacy concerns as internal corporate communications become raw material for AI development. The trend extends beyond Meta, with reports that the internal communications of older startups are also being harvested for AI training.
Skynet Chance (+0.04%): The willingness to harvest employee data without clear boundaries demonstrates weakening privacy norms and oversight in AI development, which correlates with reduced safety constraints. This erosion of ethical guardrails in the pursuit of training data suggests companies may increasingly prioritize capability advancement over alignment and control considerations.
Skynet Date (+0 days): While concerning from a privacy perspective, employee keystroke data does not represent a qualitative breakthrough in AI capabilities or control mechanisms. The practice affects data sourcing methods but doesn't materially accelerate or decelerate the timeline toward potential loss of control scenarios.
AGI Progress (+0.01%): Access to diverse human interaction data (keystrokes and mouse movements) provides marginal additional training signal for AI models to better understand human work patterns. However, this represents incremental data augmentation rather than a fundamental breakthrough in capabilities or understanding required for AGI.
AGI Date (+0 days): The trend of exploiting previously untapped internal data sources (employee activity, corporate communications) provides modest acceleration by expanding the available training data pool. This could slightly speed up model improvements, though the impact on AGI timeline is minimal compared to algorithmic or architectural breakthroughs.
Anthropic's Mythos Cybersecurity AI Tool Reportedly Accessed by Unauthorized Group
An unauthorized group has allegedly gained access to Anthropic's Mythos, a powerful AI cybersecurity tool designed for enterprise security but potentially dangerous in wrong hands. The group reportedly accessed the tool through a third-party vendor on the same day it was announced, using knowledge of Anthropic's model naming conventions. Anthropic is investigating but has found no evidence of system compromise so far.
Skynet Chance (+0.04%): This incident demonstrates vulnerabilities in controlling access to powerful dual-use AI systems, showing that security measures can be circumvented even for tools explicitly designed with safety concerns. The breach highlights real-world challenges in preventing AI capabilities from reaching unauthorized actors who could weaponize them.
Skynet Date (+0 days): The successful unauthorized access suggests that AI safety barriers may be more porous than anticipated, potentially accelerating the timeline for dangerous AI capabilities to spread beyond intended controls. However, the group's stated benign intentions and Anthropic's rapid investigation response provide some counterbalancing mitigation factors.
AGI Progress (+0.01%): The development of Mythos itself represents progress in creating sophisticated AI tools with advanced reasoning capabilities for complex cybersecurity tasks. However, this news primarily concerns access control rather than fundamental capability advancement.
AGI Date (+0 days): This security incident does not meaningfully affect the pace of AGI development itself, as it involves unauthorized access to an existing tool rather than breakthroughs in AI capabilities or resources. The incident may lead to more cautious rollouts but won't significantly slow technical progress.
NeoCognition Raises $40M to Develop Self-Learning AI Agents with Human-Like Specialization
NeoCognition, a startup spun out from Ohio State University, has emerged from stealth with $40 million in seed funding to build AI agents that can autonomously learn and specialize in any domain, similar to human learning. The company aims to address the current 50% reliability problem in existing AI agents by developing systems that build domain-specific "world models" through continuous self-learning. NeoCognition plans to sell its agent technology primarily to enterprises and SaaS companies looking to build autonomous agent-workers.
Skynet Chance (+0.04%): The development of autonomous agents that can self-learn and specialize without human intervention introduces potential alignment challenges, as the agents' self-directed learning process could lead to unpredictable behaviors or goal divergence. However, the focus on reliability and controlled enterprise deployment provides some mitigation.
Skynet Date (-1 days): The $40M funding and focus on autonomous self-learning agents accelerates development of systems that can operate independently with minimal oversight. The enterprise deployment strategy could rapidly scale autonomous agent adoption across multiple domains.
AGI Progress (+0.03%): Self-learning agents that can autonomously build domain-specific world models and specialize like humans represent a significant step toward general intelligence, addressing key limitations in current AI systems' ability to adapt and learn independently. The approach of combining broad generalist capabilities with rapid specialization mirrors a fundamental aspect of human-level intelligence.
AGI Date (-1 days): Substantial seed funding ($40M) and a team of PhD researchers focused specifically on autonomous learning capabilities could accelerate progress toward AGI by addressing the critical gap between narrow AI and adaptable general intelligence. The backing from major tech investors and Vista's enterprise network enables rapid scaling and testing of self-learning systems.
Amazon Invests Additional $5B in Anthropic, Secures $100B Cloud Commitment for Custom AI Chips
Amazon has invested an additional $5 billion in Anthropic, bringing its total investment to $13 billion, while Anthropic commits to spending over $100 billion on AWS cloud services over the next decade. The deal centers on Amazon's custom AI chips (Trainium and Graviton), with Anthropic securing access to current and future chip generations including the unreleased Trainium4. This follows a similar Amazon-OpenAI agreement and comes amid reports that Anthropic may seek additional funding at an $800 billion valuation.
Skynet Chance (+0.04%): Massive resource allocation to AI development through concentrated corporate partnerships increases capability advancement without clear corresponding safety infrastructure commitments. The vertical integration of compute, chips, and AI development consolidates control but also accelerates unchecked capability scaling.
Skynet Date (-1 days): The $100 billion compute commitment and access to future-generation custom chips significantly accelerates the timeline for advanced AI development. This unprecedented resource allocation compresses the development cycle for increasingly capable AI systems.
AGI Progress (+0.04%): Access to 5GW of computing capacity and next-generation custom AI accelerators represents a major infrastructure leap enabling training of significantly larger and more capable models. The scale of committed resources ($100B over 10 years) removes key bottlenecks in the path toward AGI.
AGI Date (-1 days): The guaranteed access to massive compute resources and future chip generations (through Trainium4 and beyond) substantially accelerates the AGI timeline by eliminating infrastructure uncertainty. This deal enables Anthropic to scale capabilities far faster than relying on commercially available resources.
NSA Deploys Anthropic's Unreleased Mythos AI Model for Cybersecurity Despite Pentagon Supply Chain Dispute
The National Security Agency is reportedly using Anthropic's Mythos Preview, a frontier AI model designed for cybersecurity that was withheld from public release due to its offensive capabilities. This occurs amid a conflict where the Department of Defense labeled Anthropic a "supply chain risk" after the company refused unrestricted Pentagon access and declined to enable mass surveillance and autonomous weapons applications.
Skynet Chance (+0.04%): The development and restricted deployment of an AI model explicitly too dangerous for public release due to offensive cyber capabilities demonstrates advancement in dual-use AI systems that could be weaponized. The tension between corporate AI safety restrictions and military pressure for unrestricted access suggests weakening barriers against dangerous AI applications.
Skynet Date (+0 days): The NSA's active deployment of advanced offensive-capable AI systems for vulnerability scanning indicates the operational integration of powerful AI tools into national security infrastructure is already underway. However, Anthropic's resistance to unrestricted military use provides some modest counterpressure against uncontrolled proliferation.
AGI Progress (+0.03%): Mythos represents a frontier model with capabilities in cybersecurity tasks advanced enough that Anthropic deemed it too dangerous for public release, indicating significant progress in specialized AI capabilities. The model's ability to perform offensive cyberattacks suggests improved agentic reasoning and domain expertise relevant to AGI development.
AGI Date (+0 days): Anthropic's development of a model sufficiently capable in complex cybersecurity tasks to warrant restricted access suggests faster-than-expected progress in creating highly capable domain-specific AI systems. The limited deployment to approximately 40 organizations indicates rapid advancement in frontier model capabilities occurring behind closed doors, though as a disclosure of existing work it does not itself shift the estimate.
AI News Calendar
AI Risk Assessment Methodology
Our risk assessment methodology applies a structured analytical framework to evaluate AI developments and their potential implications:
Data Collection
We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.
Impact Analysis
Each news item undergoes rigorous assessment through:
- Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
- Safety Research: Progress in alignment, interpretability, and containment mechanisms
- Governance Factors: Regulatory developments, industry standards, and institutional safeguards
Indicator Calculation
Our indicators are updated using a Bayesian probabilistic model that:
- Assigns weighted impact scores to each analyzed development
- Calculates cumulative effects on control loss probability and AGI timelines
- Accounts for interdependencies between different technological trajectories
- Maintains historical trends to identify acceleration or deceleration patterns
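The update step described above can be sketched in code. This is a minimal illustration only: the field names, category weights, and sensitivity constant below are assumptions for the example, not the actual model's parameters.

```python
import math
from dataclasses import dataclass


@dataclass
class Development:
    """One analyzed news item (hypothetical schema), each signal in -1..1."""
    technical_impact: float   # computational/algorithmic/capability advancement
    safety_impact: float      # alignment, interpretability, containment progress
    governance_impact: float  # regulatory and institutional safeguards


# Illustrative category weights; the real model's weights are not published.
WEIGHTS = {"technical": 0.5, "safety": 0.3, "governance": 0.2}


def impact_score(dev: Development) -> float:
    """Weighted impact score for a single development."""
    return (WEIGHTS["technical"] * dev.technical_impact
            + WEIGHTS["safety"] * dev.safety_impact
            + WEIGHTS["governance"] * dev.governance_impact)


def update_probability(prior: float, score: float,
                       sensitivity: float = 0.05) -> float:
    """Bayesian-style update in log-odds space, keeping the result in (0, 1)."""
    log_odds = math.log(prior / (1 - prior)) + sensitivity * score
    return 1 / (1 + math.exp(-log_odds))


# Example: a strongly capability-advancing item nudges the indicator upward.
p = 0.10
p = update_probability(p, impact_score(Development(0.8, -0.2, 0.0)))
```

Working in log-odds space keeps the cumulative probability bounded, and storing each update preserves the historical trend used to detect acceleration or deceleration.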
This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.