Anthropic AI News & Updates

Pentagon Develops Independent AI Systems After Anthropic Partnership Collapse

The Pentagon is building its own large language models to replace Anthropic's AI after contract negotiations broke down over military-use restrictions. When Anthropic sought contractual clauses prohibiting mass surveillance and autonomous weapons deployment, the Pentagon rejected those terms and instead partnered with OpenAI and xAI. The Department of Defense has also designated Anthropic a supply chain risk, effectively barring other defense contractors from working with the company.

OpenAI Partners with AWS to Deliver AI Services to U.S. Government Agencies

OpenAI has signed a partnership with Amazon Web Services to sell its AI products to U.S. government agencies for both classified and unclassified work. The deal expands OpenAI's federal presence beyond its recent Pentagon contract and positions it to compete with Anthropic, which has deep AWS integration but faces a DoD supply chain risk designation after refusing military surveillance applications.

AI Industry Rallies Behind Anthropic in Pentagon Supply Chain Risk Designation Dispute

Over 30 employees from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic's lawsuit against the U.S. Department of Defense, which labeled the AI firm a supply chain risk after it refused to allow use of its technology for mass surveillance or autonomous weapons. The Pentagon subsequently signed a deal with OpenAI, prompting industry-wide concern about government overreach and its implications for AI development guardrails. The employees argue that punishing Anthropic for establishing safety boundaries will harm U.S. AI competitiveness and discourage responsible AI development practices.

Anthropic Deploys AI-Powered Code Review Tool to Manage Surge in AI-Generated Code

Anthropic has launched Code Review, an AI-powered tool integrated into Claude Code that automatically analyzes pull requests to catch bugs and logical errors in AI-generated code. The tool uses multiple AI agents working in parallel to review code from different perspectives, focusing on high-priority logical errors rather than style issues. This product targets enterprise customers dealing with increased code review bottlenecks caused by AI coding tools that rapidly generate large amounts of code.

Bipartisan Coalition Releases Pro-Human Declaration Framework for AI Governance Amid Pentagon-Anthropic Standoff

A bipartisan coalition of experts has released the Pro-Human Declaration, a framework for responsible AI development that includes prohibitions on superintelligence development until proven safe, mandatory off-switches, and bans on self-replicating AI systems. The declaration's release coincided with a conflict between the Pentagon and Anthropic over military AI access, highlighting the absence of coherent government AI regulations. The framework emphasizes keeping humans in control, preventing power concentration, and establishing pre-deployment testing requirements, particularly for AI products targeting children.

Claude AI Discovers 22 Security Vulnerabilities in Firefox Browser

Anthropic's Claude Opus 4.6 identified 22 vulnerabilities in Mozilla Firefox during a two-week security audit, 14 of them classified as high-severity. While Claude excelled at finding bugs, it struggled to create working exploits, succeeding in only 2 of its many attempts despite $4,000 in API costs.

Pentagon Designates Anthropic Supply-Chain Risk After Contract Dispute Over Military AI Control

The Pentagon designated Anthropic as a supply-chain risk following failed negotiations over military control of its AI models for autonomous weapons and domestic surveillance. After Anthropic's $200 million contract collapsed, the DoD contracted with OpenAI instead, a move followed by a 295% surge in ChatGPT uninstalls. The incident highlights tensions over military access to advanced AI systems.

Anthropic's Claude Sees User Surge After Refusing Pentagon Military AI Contract

Anthropic's Claude AI chatbot saw significant growth in daily active users and app downloads after CEO Dario Amodei refused to allow Pentagon use of Claude for mass surveillance or autonomous weapons, a refusal that led to the company being marked as a supply-chain risk. Claude's mobile app downloads now surpass ChatGPT's in the U.S., with daily active users reaching 11.3 million on March 2, up 183% from the start of the year. The app reached No. 1 on the U.S. App Store and in 15 other countries, with over 1 million daily sign-ups.

Pentagon Designates Anthropic as Supply Chain Risk Over Refusal to Support Autonomous Weapons and Mass Surveillance

The Department of Defense has officially designated Anthropic as a supply chain risk after CEO Dario Amodei refused to allow military use of its AI systems for mass surveillance of Americans or fully autonomous weapons. This unprecedented designation, typically reserved for foreign adversaries, requires any Pentagon contractor to certify that it does not use Anthropic's models, even though Claude is currently deployed in military operations, including the Iran campaign. The move has drawn sharp criticism from AI industry employees and former government advisors, while OpenAI has signed a deal allowing military use of its systems for "all lawful purposes."