October 22, 2025 News
Adaption Labs Challenges AI Scaling Paradigm with Real-Time Learning Approach
Sara Hooker, former VP of AI Research at Cohere, has launched Adaption Labs with the thesis that scaling large language models has reached diminishing returns. The startup aims to build AI systems that can continuously adapt and learn from real-world experiences more efficiently than current scaling-focused approaches. This reflects growing skepticism in the AI research community about whether simply adding more compute power will lead to superintelligent systems.
Skynet Chance (-0.08%): The shift away from pure scaling toward more adaptive, efficient learning approaches could improve AI controllability and alignment by making systems more interpretable and less dependent on massive, opaque compute clusters. If adaptive learning proves successful, it may enable more targeted safety interventions during real-time operation.
Skynet Date (+1 day): Growing recognition that scaling has limitations and requires fundamental breakthroughs in learning approaches suggests near-term progress toward powerful AI may be slower than scaling optimists predicted. The need to develop entirely new methodologies for adaptive learning adds research time before highly capable systems can be reached.
AGI Progress (-0.03%): The acknowledgment that current scaling approaches may have hit diminishing returns represents a modest setback for AGI progress, as it suggests the straightforward path of adding more compute may not be sufficient. However, the pursuit of adaptive learning from real-world experience could supply a complementary capability needed for AGI.
AGI Date (+1 day): The recognition that scaling LLMs faces fundamental limitations and that new breakthroughs in adaptive learning are needed suggests AGI development may take longer than scaling enthusiasts expected. The industry must now invest in developing and validating entirely new approaches rather than simply scaling existing methods.
Meta Reduces Superintelligence Lab Staff by 600 in Efficiency-Driven Restructuring
Meta is cutting approximately 600 jobs from its superintelligence lab as part of an ongoing reorganization aimed at streamlining decision-making. The company's chief AI officer stated that a smaller team will require fewer conversations to reach each decision and give remaining staff members greater scope and impact. Most affected employees are expected to find other positions within Meta, suggesting a redistribution of talent rather than an overall headcount reduction.
Skynet Chance (-0.03%): Reducing the size of a superintelligence lab could marginally slow the development of potentially dangerous advanced AI systems by decreasing research capacity and velocity. However, the talent redistribution within Meta and continued competition among major AI labs limits the actual risk reduction.
Skynet Date (+0 days): The reorganization may temporarily slow Meta's superintelligence research through disruption and reduced lab capacity, potentially delaying dangerous capability development. However, the impact is minimal given talent remains within the company and competitor labs continue full speed.
AGI Progress (-0.02%): Cutting roughly 600 roles from a dedicated superintelligence lab represents a reduction in focused AGI research capacity at one of the major AI companies. While the talent may be redistributed internally, diluting the lab's concentrated effort on superintelligence suggests a near-term setback for Meta's AGI ambitions.
AGI Date (+0 days): The lab downsizing and reorganization will likely cause some delays in Meta's AGI research timeline due to disrupted teams and reduced dedicated capacity. However, the overall impact on the industry timeline is minimal since other companies, including OpenAI, Anthropic, and Google, continue aggressive development.