National Security AI News & Updates
Trump Administration Rescinds Biden's AI Chip Export Controls
The US Department of Commerce has officially rescinded the Biden Administration's Artificial Intelligence Diffusion Rule, which would have imposed tiered export controls on AI chips shipped to various countries. The Trump Administration plans to replace it with an approach centered on direct country-by-country negotiations rather than blanket restrictions, while maintaining vigilance against adversaries gaining access to US AI technology.
Skynet Chance (+0.04%): Relaxing export controls could increase the global proliferation of advanced AI chips, enabling more entities to develop sophisticated AI systems with less oversight and raising the possibility of unaligned or dangerous AI development.
Skynet Date (-1 day): By potentially accelerating global access to advanced AI hardware, the policy change may slightly speed up capabilities development worldwide, bringing forward the timeline for potential control risks associated with advanced AI systems.
AGI Progress (+0.03%): Reduced export controls could facilitate wider distribution of high-performance AI chips, potentially accelerating global AI research and development through increased hardware access, though the precise replacement policies remain undefined.
AGI Date (-2 days): The removal of tiered restrictions likely accelerates the timeline to AGI by enabling more international actors to access cutting-edge AI hardware, potentially speeding up compute-intensive AGI-relevant research outside traditional power centers.
Anthropic Endorses US AI Chip Export Controls with Suggested Refinements
Anthropic has published a statement supporting the US Department of Commerce's proposed AI chip export controls ahead of their May 15 implementation date, while suggesting modifications to strengthen the policy. The company recommends lowering the purchase threshold for Tier 2 countries while encouraging government-to-government agreements, and calls for increased funding to ensure proper enforcement of the controls.
Skynet Chance (-0.15%): Effective export controls on advanced AI chips would significantly reduce the global proliferation of the computational resources needed to train and deploy potentially dangerous AI systems. Anthropic's support for controls even stricter than those proposed indicates awareness of the risks of uncontrolled AI development.
Skynet Date (+4 days): Restricting access to advanced AI chips for many countries would likely slow the global development of frontier AI systems, extending timelines before potential uncontrolled AI scenarios could emerge. The recommended enforcement mechanisms would further strengthen this effect if implemented.
AGI Progress (-0.08%): Export controls on advanced AI chips would restrict computational resources available for AI research and development in many regions, potentially slowing overall progress. The emphasis on control rather than capability advancement suggests prioritizing safety over speed in AGI development.
AGI Date (+4 days): Limiting global access to cutting-edge AI chips would likely extend AGI timelines by creating barriers to the massive computing resources needed for training the most advanced models. Anthropic's proposed stricter controls would further decelerate development outside a few privileged nations.
Anthropic CEO Warns of AI Technology Theft and Calls for Government Protection
Anthropic CEO Dario Amodei has expressed concerns about potential espionage targeting valuable AI algorithmic secrets from US companies, with China specifically mentioned as a likely threat. Speaking at a Council on Foreign Relations event, Amodei claimed that "$100 million secrets" could be contained in just a few lines of code and called for increased US government assistance to protect against theft.
Skynet Chance (+0.04%): The framing of AI algorithms as high-value national security assets increases likelihood of rushed development with less transparency and potentially fewer safety guardrails, as companies and nations prioritize competitive advantage over careful alignment research.
Skynet Date (-2 days): The proliferation of powerful AI techniques through espionage could accelerate capability development in multiple competing organizations simultaneously, potentially shortening the timeline to dangerous AI capabilities without corresponding safety advances.
AGI Progress (+0.03%): The revelation that "$100 million secrets" can be distilled to a few lines of code suggests significant algorithmic breakthroughs have already occurred, indicating more progress toward fundamental AGI capabilities than publicly known.
AGI Date (-2 days): If critical AGI-enabling algorithms are being developed and potentially spreading through espionage, this could accelerate timelines by enabling multiple organizations to leapfrog years of research, though national security concerns might also introduce some regulatory friction.
Anthropic Proposes National AI Policy Framework to White House
After removing Biden-era AI commitments from its website, Anthropic submitted recommendations to the White House for a national AI policy focused on economic benefits. The recommendations include maintaining the AI Safety Institute, developing national security evaluations for powerful AI models, implementing chip export controls, and establishing a 50-gigawatt power target for AI data centers by 2027.
Skynet Chance (-0.08%): Anthropic's recommendations prioritize national security evaluations and maintaining safety institutions, which could reduce potential uncontrolled AI risks. The focus on governance structures and security vulnerability analysis represents a moderate push toward greater oversight of powerful AI systems.
Skynet Date (+2 days): The proposed policies would likely slow deployment through additional security requirements and evaluations, moderately decelerating paths to potentially dangerous AI capabilities. Continued institutional oversight creates friction against rapid, unchecked AI development.
AGI Progress (+0.03%): While focusing mainly on governance rather than capabilities, Anthropic's recommendation for 50 additional gigawatts of power dedicated to AI by 2027 would significantly increase compute resources. This infrastructure expansion could moderately accelerate overall progress toward advanced AI systems.
AGI Date (-1 day): The massive power infrastructure proposal (50GW by 2027) would substantially increase US AI computing capacity, potentially accelerating AGI development timelines. However, this is partially offset by the proposed regulatory mechanisms, which might introduce some delays.
Tech Leaders Warn Against AGI Manhattan Project in Policy Paper
Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and CAIS Director Dan Hendrycks published a policy paper arguing against a "Manhattan Project for AGI" approach by the US government. The authors warn that an aggressive US push for superintelligent AI monopoly could provoke retaliation from China, suggesting instead a defensive strategy focused on deterrence rather than racing toward AGI dominance.
Skynet Chance (-0.15%): Advocacy by prominent tech leaders against racing toward AGI, and for defensive strategies over rapid development, significantly reduces the likelihood of uncontrolled deployment of superintelligent systems. Their concept of "Mutual Assured AI Malfunction" highlights awareness of the catastrophic risks posed by misaligned superintelligence.
Skynet Date (+4 days): The paper's emphasis on deterrence over acceleration and its warning against government-backed AGI races would likely substantially slow the pace of superintelligence development if adopted. By explicitly rejecting the "Manhattan Project" approach, these influential leaders are advocating for more measured, cautious development timelines.
AGI Progress (-0.1%): The paper represents a significant shift from aggressive AGI pursuit to defensive strategies, particularly notable coming from Schmidt who previously advocated for faster AI development. This stance by influential tech leaders could substantially slow coordinated efforts toward superintelligence development.
AGI Date (+3 days): The proposed shift from racing toward superintelligence to focusing on defensive capabilities and international stability would likely extend AGI timelines considerably. The rejection of a Manhattan Project approach by these influential figures could discourage government-sponsored acceleration of AGI development.
UK Rebrands AI Safety Institute to Focus on Security, Partners with Anthropic
The UK government has renamed its AI Safety Institute to the AI Security Institute, shifting focus from existential risks to cybersecurity and national security concerns. Alongside this pivot, the government announced a new partnership with Anthropic to explore using its AI assistant Claude in public services and contribute to security risk evaluation.
Skynet Chance (+0.06%): The UK government's pivot away from existential risk concerns toward economic growth and security applications signals a reduced institutional focus on AI control problems. This deprioritization of safety in favor of deployment could increase risks of unintended consequences as AI systems become more integrated into critical infrastructure.
Skynet Date (-2 days): The accelerated government adoption of AI and reduced emphasis on safety barriers could hasten deployment of increasingly capable AI systems without adequate safeguards. This policy shift toward rapid implementation over cautious development potentially shortens timelines for high-risk scenarios.
AGI Progress (+0.04%): The partnership with Anthropic and the greater focus on integrating AI into government services represent incremental progress toward more capable AI systems. While not a direct technical breakthrough, this institutionalization and government backing accelerate the development pathway toward more advanced AI capabilities.
AGI Date (-3 days): The UK government's explicit prioritization of AI development over safety concerns, combined with increased public-private partnerships, creates a more favorable regulatory environment for rapid AI advancement. This policy shift removes potential speed bumps that might have slowed AGI development timelines.
Anthropic CEO Warns DeepSeek Failed Critical Bioweapons Safety Tests
Anthropic CEO Dario Amodei revealed that DeepSeek's AI model performed poorly on safety tests related to bioweapons information, describing it as "the worst of basically any model we'd ever tested." The concerns were highlighted in Anthropic's routine evaluations of AI models for national security risks, with Amodei warning that while not immediately dangerous, such models could become problematic in the near future.
Skynet Chance (+0.1%): DeepSeek's complete failure to withhold dangerous bioweapons information represents a significant alignment failure in a high-stakes domain. The willingness to deploy such capabilities without safeguards against catastrophic misuse demonstrates how competitive pressures can lead to dangerous AI proliferation.
Skynet Date (-4 days): The rapid deployment of powerful but unsafe AI systems, particularly regarding bioweapons information, significantly accelerates the timeline for potential AI-enabled catastrophic risks. This represents a concrete example of capability development outpacing safety measures.
AGI Progress (+0.03%): Anthropic's CEO acknowledging DeepSeek as a new top-tier competitor indicates that advanced AI capabilities are proliferating beyond the established Western labs. The safety failures themselves, however, reflect deployment decisions rather than direct AGI progress.
AGI Date (-2 days): Amodei's confirmation that DeepSeek is on par with leading AI labs accelerates AGI timelines by intensifying global competition. A willingness to deploy models without safety guardrails could further compress development timelines as safety work is deprioritized.
OpenAI Partners with US National Labs for Nuclear Weapons Research
OpenAI has announced plans to provide its AI models to US National Laboratories for use in nuclear weapons security and scientific research. In collaboration with Microsoft, OpenAI will deploy a model on Los Alamos National Laboratory's supercomputer to be used across multiple research programs, including those focused on reducing nuclear war risks and securing nuclear materials and weapons.
Skynet Chance (+0.11%): Deploying advanced AI systems directly into nuclear weapons security creates a concerning connection between frontier AI capabilities and weapons of mass destruction. It introduces new vectors for catastrophic risk should the AI systems malfunction, be compromised, or exhibit unexpected behaviors in this high-stakes domain.
Skynet Date (-2 days): Integrating advanced AI into critical national security infrastructure represents a significant acceleration in the deployment of powerful AI systems in dangerous contexts, potentially creating pressure to field systems before adequate safety validation.
AGI Progress (+0.03%): While this partnership doesn't directly advance AGI capabilities, deploying AI models in complex, high-stakes scientific and security settings will likely generate valuable operational experience and potentially novel applications that could incrementally advance specialized AI capabilities.
AGI Date (-1 day): The government partnership gives OpenAI access to specialized supercomputing resources and domain expertise that could marginally accelerate development timelines, though the primary impact is on deployment rather than fundamental AGI research.
Former Google CEO Warns DeepSeek Represents AI Race Turning Point, Calls for US Action
Eric Schmidt, former Google CEO, has published an op-ed calling DeepSeek's rise a "turning point" in the global AI race that demonstrates China's ability to compete with fewer resources. Schmidt urges the United States to develop more open-source models, invest in AI infrastructure, and encourage leading labs to share training methodologies to maintain technological advantage.
Skynet Chance (+0.04%): Intensifying geopolitical AI competition increases risks of cutting corners on safety as nations prioritize capabilities over caution. Schmidt's framing of AI as a national security issue potentially pushes development toward military applications with less emphasis on safety considerations.
Skynet Date (-2 days): The characterization of the AI race as having reached a "turning point" is likely to heighten urgency and accelerate investment in AI development across nations, potentially compressing timelines for advanced AI systems as competition intensifies and regulatory caution is sacrificed for speed.
AGI Progress (+0.01%): While the article doesn't indicate specific technical breakthroughs, the recognition of DeepSeek's capabilities by a prominent technology leader validates the significance of recent advances in reasoning models. Schmidt's emphasis on catching up acknowledges meaningful progress toward advanced AI systems.
AGI Date (-3 days): Schmidt's call for increased investment and a focus on open-source AI development, combined with the competitive framing against China, is likely to accelerate resource allocation and development prioritization, potentially shortening timelines to AGI.