National Security AI News & Updates

Anthropic Adds National Security Expert to Governance Trust Amid Defense Market Push

Anthropic has appointed national security expert Richard Fontaine to its Long-Term Benefit Trust, the body that helps govern the company and elect board members. The appointment follows Anthropic's recent announcement of AI models for U.S. national security applications and reflects the company's broader push into defense contracts alongside partnerships with Palantir and AWS.

Anthropic Launches Specialized Claude Gov AI Models for US National Security Operations

Anthropic has released custom "Claude Gov" AI models specifically designed for U.S. national security customers, featuring enhanced handling of classified materials and improved capabilities for intelligence analysis. The models are already deployed by high-level national security agencies and represent part of a broader trend of major AI companies pursuing defense contracts. This development reflects the increasing militarization of advanced AI technologies across the industry.

US Officials Probe Apple-Alibaba AI Partnership for Chinese iPhones

US government officials and congressional representatives are examining a potential deal between Apple and Alibaba that would integrate Alibaba's AI features into iPhones sold in China. The White House and the House Select Committee on China have directly questioned Apple executives about data sharing and regulatory commitments, with Rep. Raja Krishnamoorthi expressing concern about Alibaba's ties to the Chinese government. So far the deal has been confirmed only by Alibaba, not Apple.

Trump Administration Rescinds Biden's AI Chip Export Controls

The US Department of Commerce has officially rescinded the Biden Administration's Artificial Intelligence Diffusion Rule, which would have implemented tiered export controls on AI chips to various countries. The Trump Administration plans to replace it with a different approach focused on direct country-by-country negotiations rather than blanket restrictions, while continuing to guard against adversaries gaining access to US AI technology.

Anthropic Endorses US AI Chip Export Controls with Suggested Refinements

Anthropic has published a statement supporting the US Department of Commerce's proposed AI chip export controls ahead of the May 15 implementation date, while suggesting modifications to strengthen the policy. The company recommends lowering the purchase threshold for Tier 2 countries while encouraging government-to-government agreements, and calls for increased funding to ensure proper enforcement of the controls.

Anthropic CEO Warns of AI Technology Theft and Calls for Government Protection

Anthropic CEO Dario Amodei has expressed concerns about potential espionage targeting valuable AI algorithmic secrets from US companies, with China specifically mentioned as a likely threat. Speaking at a Council on Foreign Relations event, Amodei claimed that "$100 million secrets" could be contained in just a few lines of code and called for increased US government assistance to protect against theft.

Anthropic Proposes National AI Policy Framework to White House

After removing Biden-era AI commitments from its website, Anthropic submitted recommendations to the White House for a national AI policy focused on economic benefits. The recommendations include maintaining the AI Safety Institute, developing national security evaluations for powerful AI models, implementing chip export controls, and establishing a 50-gigawatt power target for AI data centers by 2027.

Tech Leaders Warn Against AGI Manhattan Project in Policy Paper

Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and CAIS Director Dan Hendrycks published a policy paper arguing against a "Manhattan Project for AGI" approach by the US government. The authors warn that an aggressive US push for superintelligent AI monopoly could provoke retaliation from China, suggesting instead a defensive strategy focused on deterrence rather than racing toward AGI dominance.

UK Rebrands AI Safety Institute to Focus on Security, Partners with Anthropic

The UK government has renamed its AI Safety Institute to the AI Security Institute, shifting focus from existential risks to cybersecurity and national security concerns. Alongside this pivot, the government announced a new partnership with Anthropic to explore using its AI assistant Claude in public services and contribute to security risk evaluation.

Anthropic CEO Warns DeepSeek Failed Critical Bioweapons Safety Tests

Anthropic CEO Dario Amodei revealed that DeepSeek's AI model performed poorly on safety tests related to bioweapons information, describing it as "the worst of basically any model we'd ever tested." The concerns surfaced in Anthropic's routine evaluations of AI models for national security risks; Amodei warned that while the model is not immediately dangerous, such models could become problematic in the near future.