April 21, 2026 News
Meta to Harvest Employee Keystroke Data to Train AI Models
Meta plans to use data from its employees' mouse movements and keystrokes as training data for its AI models, according to a Reuters report. The practice highlights the AI industry's growing hunger for new training data sources and raises significant privacy concerns as internal corporate activity becomes raw material for AI development. The trend extends beyond Meta, with reports of defunct startups' internal communications being harvested for AI training purposes.
Skynet Chance (+0.04%): The willingness to harvest employee data without clear boundaries demonstrates weakening privacy norms and oversight in AI development, which correlates with reduced safety constraints. This erosion of ethical guardrails in the pursuit of training data suggests companies may increasingly prioritize capability advancement over alignment and control considerations.
Skynet Date (+0 days): While concerning from a privacy perspective, employee keystroke data does not represent a qualitative breakthrough in AI capabilities or control mechanisms. The practice affects data sourcing methods but doesn't materially accelerate or decelerate the timeline toward potential loss of control scenarios.
AGI Progress (+0.01%): Access to diverse human interaction data (keystrokes and mouse movements) provides marginal additional training signal for AI models to better understand human work patterns. However, this represents incremental data augmentation rather than a fundamental breakthrough in capabilities or understanding required for AGI.
AGI Date (+0 days): The trend of exploiting previously untapped internal data sources (employee activity, corporate communications) provides modest acceleration by expanding the available training data pool. This could slightly speed up model improvements, though the impact on AGI timeline is minimal compared to algorithmic or architectural breakthroughs.
Anthropic's Mythos Cybersecurity AI Tool Reportedly Accessed by Unauthorized Group
An unauthorized group has allegedly gained access to Anthropic's Mythos, a powerful AI cybersecurity tool designed for enterprise security but potentially dangerous in the wrong hands. The group reportedly accessed the tool through a third-party vendor on the same day the tool was announced, exploiting knowledge of Anthropic's model naming conventions. Anthropic is investigating but has found no evidence of system compromise so far.
Skynet Chance (+0.04%): This incident demonstrates vulnerabilities in controlling access to powerful dual-use AI systems, showing that security measures can be circumvented even for tools explicitly designed with safety concerns. The breach highlights real-world challenges in preventing AI capabilities from reaching unauthorized actors who could weaponize them.
Skynet Date (+0 days): The successful unauthorized access suggests that AI safety barriers may be more porous than anticipated, potentially accelerating the timeline for dangerous AI capabilities to spread beyond intended controls. However, the group's stated benign intentions and Anthropic's rapid investigation response provide some counterbalancing mitigation factors.
AGI Progress (+0.01%): The development of Mythos itself represents progress in creating sophisticated AI tools with advanced reasoning capabilities for complex cybersecurity tasks. However, this news primarily concerns access control rather than fundamental capability advancement.
AGI Date (+0 days): This security incident does not meaningfully affect the pace of AGI development itself, as it involves unauthorized access to an existing tool rather than breakthroughs in AI capabilities or resources. The incident may lead to more cautious rollouts but won't significantly slow technical progress.
NeoCognition Raises $40M to Develop Self-Learning AI Agents with Human-Like Specialization
NeoCognition, a startup spun out of Ohio State University, has emerged from stealth with $40 million in seed funding to build AI agents that can autonomously learn and specialize in any domain, much as humans do. The company aims to address the roughly 50% reliability rate of existing AI agents by developing systems that build domain-specific "world models" through continuous self-learning. NeoCognition plans to sell its agent technology primarily to enterprises and SaaS companies looking to deploy autonomous agent-workers.
Skynet Chance (+0.04%): The development of autonomous agents that can self-learn and specialize without human intervention introduces potential alignment challenges, as the agents' self-directed learning process could lead to unpredictable behaviors or goal divergence. However, the focus on reliability and controlled enterprise deployment provides some mitigation.
Skynet Date (-1 days): The $40M funding and focus on autonomous self-learning agents accelerates development of systems that can operate independently with minimal oversight. The enterprise deployment strategy could rapidly scale autonomous agent adoption across multiple domains.
AGI Progress (+0.03%): Self-learning agents that autonomously build domain-specific world models and specialize like humans represent a significant step toward general intelligence, addressing a key limitation in current AI systems: the inability to adapt and learn independently. The approach of combining broad generalist capabilities with rapid specialization mirrors a fundamental aspect of human-level intelligence.
AGI Date (-1 days): Substantial seed funding ($40M) and a team of PhD researchers focused specifically on autonomous learning capabilities could accelerate progress toward AGI by addressing the critical gap between narrow AI and adaptable general intelligence. The backing from major tech investors and Vista's enterprise network enables rapid scaling and testing of self-learning systems.