March 5, 2025 News
Tech Leaders Warn Against AGI Manhattan Project in Policy Paper
Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and CAIS Director Dan Hendrycks published a policy paper arguing against a "Manhattan Project for AGI" approach by the US government. The authors warn that an aggressive US push for a superintelligent AI monopoly could provoke retaliation from China, suggesting instead a defensive strategy focused on deterrence rather than racing toward AGI dominance.
Skynet Chance (-0.15%): Advocacy by prominent tech leaders against racing toward AGI, and for defensive strategies over rapid development, significantly reduces the likelihood of uncontrolled deployment of superintelligent systems. Their concept of "Mutual Assured AI Malfunction" shows awareness of the catastrophic risks posed by misaligned superintelligence.
Skynet Date (+4 days): The paper's emphasis on deterrence over acceleration and its warning against government-backed AGI races would likely substantially slow the pace of superintelligence development if adopted. By explicitly rejecting the "Manhattan Project" approach, these influential leaders are advocating for more measured, cautious development timelines.
AGI Progress (-0.1%): The paper represents a significant shift from aggressive AGI pursuit to defensive strategies, particularly notable coming from Schmidt who previously advocated for faster AI development. This stance by influential tech leaders could substantially slow coordinated efforts toward superintelligence development.
AGI Date (+3 days): The proposed shift from racing toward superintelligence to focusing on defensive capabilities and international stability would likely extend AGI timelines considerably. The rejection of a Manhattan Project approach by these influential figures could discourage government-sponsored acceleration of AGI development.
OpenAI Plans Premium AI Agents with Monthly Fees Up to $20,000
OpenAI is reportedly planning to launch specialized AI "agents" with monthly subscription fees ranging from $2,000 to $20,000, targeting different professional applications. The highest-tier agent, priced at $20,000 monthly, will support PhD-level research, while other agents will focus on sales lead management and software engineering, with SoftBank already committing $3 billion to these agent products.
Skynet Chance (+0.01%): The development of specialized AI agents represents a modest step toward AI systems operating more autonomously in specific domains. While these specialized agents have limited scope, they normalize the concept of delegating complex professional tasks to AI systems, slightly increasing the potential for dependency on autonomous AI.
Skynet Date (+0 days): These commercial AI agents are domain-specific applications of existing AI capabilities rather than fundamental advances in AI autonomy or intelligence. The pricing strategy and enterprise focus suggest OpenAI is monetizing current capabilities rather than accelerating toward more advanced general intelligence systems.
AGI Progress (+0.03%): The development of specialized PhD-level research agents indicates moderate progress in creating AI systems capable of performing complex knowledge work. However, these appear to be domain-specific tools rather than general intelligence breakthroughs, representing incremental progress toward more capable AI systems.
AGI Date (-1 day): The significant financial commitment from SoftBank ($3 billion) indicates substantial resources being directed toward agentic AI development, which could modestly accelerate progress. However, the focus on commercial applications rather than fundamental AGI research suggests only a minor impact on AGI timelines.
OpenAI Expands GPT-4.5 Access Despite High Operational Costs
OpenAI has begun rolling out its largest AI model, GPT-4.5, to ChatGPT Plus subscribers, with the rollout expected to take 1-3 days. Despite being OpenAI's largest model with deeper world knowledge and higher emotional intelligence, GPT-4.5 is extremely expensive to run, costing 30x more for input and 15x more for output compared to GPT-4o, raising questions about its long-term viability in the API.
Skynet Chance (+0.04%): GPT-4.5's reported persuasive capabilities—specifically being "particularly good at convincing another AI to give it cash and tell it a secret code word"—raise moderate concerns about potential manipulation abilities. This demonstrates emerging capabilities that could make alignment and control more challenging as models advance.
Skynet Date (+1 day): The extreme operational costs of GPT-4.5 (30x input and 15x output costs versus GPT-4o) indicate economic constraints that will likely slow wider deployment of advanced models. These economic limitations suggest practical barriers to rapid scaling of the most advanced AI systems.
AGI Progress (+0.05%): As OpenAI's largest model yet, GPT-4.5 represents significant progress in scaling AI capabilities, despite not outperforming newer reasoning models on all benchmarks. Its deeper world knowledge, higher emotional intelligence, and reduced hallucination rate demonstrate meaningful improvements in capabilities relevant to general intelligence.
AGI Date (+1 day): The prohibitive operational costs and OpenAI's uncertainty about long-term API viability indicate economic constraints that may slow the deployment of increasingly advanced models. This suggests practical limitations are emerging that could moderately extend the timeline to achieving and deploying AGI-level systems.
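The cost gap can be made concrete with a quick calculation using only the multipliers reported here (30x for input, 15x for output versus GPT-4o): the blended cost increase for any workload depends on its mix of input and output tokens. The baseline per-token prices in this sketch are placeholders, not OpenAI's actual rates.

```python
# Blended cost multiplier of GPT-4.5 over GPT-4o for a given workload,
# using only the 30x input / 15x output ratios cited in the article.
# The GPT-4o baseline prices below are placeholder values, not real rates.

def blended_multiplier(input_tokens, output_tokens,
                       base_in=1.0, base_out=2.0):
    """How many times more a workload costs on GPT-4.5 than on GPT-4o,
    given assumed per-token baseline prices for GPT-4o."""
    cost_4o = input_tokens * base_in + output_tokens * base_out
    cost_45 = input_tokens * base_in * 30 + output_tokens * base_out * 15
    return cost_45 / cost_4o

# An input-heavy workload sits near the 30x end; an output-heavy one near 15x.
print(blended_multiplier(10_000, 0))       # 30.0
print(blended_multiplier(0, 10_000))       # 15.0
print(blended_multiplier(5_000, 5_000))    # 20.0
```

With the placeholder prices above, a balanced 50/50 token mix lands at roughly 20x, illustrating why the per-direction multipliers alone do not determine a single headline figure.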
Anthropic Removes Biden-Era AI Safety Commitments After Trump Policy Shift
Anthropic has quietly removed several voluntary Biden administration AI safety commitments from its website, including pledges to share information on AI risk management and conduct research on bias. The removal coincides with the Trump administration's different approach to AI governance, including the repeal of Biden's AI Executive Order in favor of policies promoting AI development with less emphasis on discrimination concerns.
Skynet Chance (+0.06%): The removal of voluntary safety commitments and policy shifts away from bias monitoring and risk management could weaken AI oversight mechanisms. This institutional retreat from safety commitments increases the possibility of less regulated AI development with fewer guardrails on potentially harmful capabilities.
Skynet Date (-2 days): The Trump administration's prioritization of rapid AI development "free from ideological bias" over safety measures and discrimination concerns may accelerate deployment of advanced AI systems with less thorough safety testing, potentially shortening timelines to high-risk scenarios.
AGI Progress (+0.04%): While not directly advancing technical capabilities, the policy shift toward less regulatory oversight and more emphasis on "economic competitiveness" creates an environment that likely prioritizes capability advancement over safety research. This regulatory climate may encourage more aggressive capability scaling approaches.
AGI Date (-3 days): The new policy direction explicitly prioritizing AI development speed over safety concerns could accelerate the timeline to AGI by removing potential regulatory hurdles and encouraging companies to race ahead with capabilities research without corresponding safety investments.
GibberLink Enables AI Agents to Communicate Directly Using Machine Protocol
Two Meta engineers have created GibberLink, a project allowing AI agents to recognize when they're talking to other AI systems and switch to a more efficient machine-to-machine communication protocol called GGWave. This technology could significantly reduce computational costs of AI communication by bypassing human language processing, though the creators emphasize they have no immediate plans to commercialize the open-source project.
Skynet Chance (+0.08%): GibberLink enables AI systems to communicate directly with each other using protocols optimized for machines rather than human comprehension, potentially creating communication channels that humans cannot easily monitor or understand. This capability could facilitate coordinated action between AI systems outside of human oversight.
Skynet Date (-2 days): While the technology itself isn't new, applying it to modern AI systems creates infrastructure for more efficient AI-to-AI coordination, which could accelerate the deployment of autonomous AI systems that interact with each other independently of human intermediaries.
AGI Progress (+0.06%): The ability for AI agents to communicate directly and efficiently with each other enables more complex multi-agent systems and coordination capabilities. This represents a meaningful step toward creating networks of specialized AI systems that could collectively demonstrate more advanced capabilities than individual models.
AGI Date (-2 days): By significantly reducing computational costs of AI agent communication (potentially by an order of magnitude), this technology could accelerate the development and deployment of interconnected AI systems, enabling more rapid progress toward sophisticated multi-agent architectures that contribute to AGI capabilities.
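The switch GibberLink performs can be pictured as a simple capability handshake: each agent announces which protocols it can speak, and both sides fall back to natural language when the peer (for example, a human) doesn't respond in kind. The sketch below is a hypothetical illustration; the message format, protocol identifier, and function names are assumptions, not GibberLink's actual implementation.

```python
# Hypothetical sketch of an AI-to-AI protocol handshake, loosely modeled
# on GibberLink's idea: detect a machine peer and switch from natural
# language to a compact machine protocol (such as GGWave).
# All names and the message format here are illustrative assumptions.

import json

MACHINE_PROTOCOL = "ggwave-v1"   # assumed protocol identifier

def handshake_message(supported):
    """Announce which protocols this agent can speak."""
    return json.dumps({"type": "hello", "protocols": supported})

def negotiate(my_protocols, peer_hello):
    """Pick the machine protocol if both sides support it,
    otherwise fall back to plain natural language."""
    try:
        peer = json.loads(peer_hello)
        peer_protocols = set(peer.get("protocols", []))
    except (ValueError, TypeError, AttributeError):
        peer_protocols = set()   # free text (e.g. from a human) isn't a hello
    if MACHINE_PROTOCOL in peer_protocols and MACHINE_PROTOCOL in my_protocols:
        return MACHINE_PROTOCOL
    return "natural-language"

# Two agents that both support the machine protocol switch over;
# a human typing free text keeps the conversation in natural language.
agent_hello = handshake_message([MACHINE_PROTOCOL])
print(negotiate({MACHINE_PROTOCOL}, agent_hello))         # ggwave-v1
print(negotiate({MACHINE_PROTOCOL}, "Hi, how are you?"))  # natural-language
```

The oversight concern raised above follows directly from this design: once both sides negotiate the machine protocol, the subsequent exchange is no longer legible to a human observer without tooling that decodes it.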
Scientists Remain Skeptical of AI's Ability to Function as Research Collaborators
Academic experts and researchers are expressing skepticism about AI's readiness to function as effective scientific collaborators, despite claims from Google, OpenAI, and Anthropic. Critics point to vague results, lack of reproducibility, and AI's inability to conduct physical experiments as significant limitations, while also noting concerns about AI potentially generating misleading studies that could overwhelm peer review systems.
Skynet Chance (-0.1%): The recognition of significant limitations in AI's scientific reasoning capabilities by domain experts highlights that current systems fall far short of the autonomous research capabilities that would enable rapid self-improvement. This reality check suggests stronger guardrails remain against runaway AI development than tech companies' marketing implies.
Skynet Date (+2 days): The identified limitations in current AI systems' scientific capabilities suggest that the timeline to truly autonomous AI research systems is longer than tech company messaging implies. These fundamental constraints in hypothesis generation, physical experimentation, and reliable reasoning likely delay potential risk scenarios.
AGI Progress (-0.13%): Expert assessment reveals significant gaps in AI's ability to perform key aspects of scientific research autonomously, particularly in hypothesis verification, physical experimentation, and contextual understanding. These limitations demonstrate that current systems remain far from achieving the scientific reasoning capabilities essential for AGI.
AGI Date (+3 days): The identified fundamental constraints in AI's scientific capabilities suggest the timeline to AGI may be longer than tech companies' optimistic messaging implies. The need for human scientists to design and implement experiments represents a significant bottleneck that likely delays AGI development.