Anthropic AI News & Updates

Anthropic Briefs Trump Administration on Unreleased Mythos AI Model with Advanced Cybersecurity Capabilities

Anthropic co-founder Jack Clark confirmed that the company briefed the Trump administration on its new Mythos AI model, which possesses powerful cybersecurity capabilities deemed too dangerous for public release. The briefing took place despite Anthropic's ongoing lawsuit against the Department of Defense over restrictions on military access to its AI systems. The company is also monitoring potential AI-driven employment impacts, particularly on early-career graduate hiring in select industries.

U.S. Treasury and Federal Reserve Push Major Banks to Test Anthropic's Mythos Cybersecurity Model Despite Ongoing Government Conflict

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell have encouraged major bank executives to use Anthropic's new Mythos AI model to detect security vulnerabilities, and several major banks are now reportedly testing it. The push comes despite Anthropic's ongoing legal battle with the Trump administration over the DoD's supply-chain risk designation, and despite concerns that the model is exceptionally capable at finding vulnerabilities. U.K. financial regulators are also discussing the risks posed by Mythos.

Anthropic Restricts Mythos Cybersecurity Model to Enterprise Clients, Raising Questions About Motives

Anthropic has limited the release of its new Mythos AI model, which it says is highly capable of finding security exploits, sharing it only with large enterprises such as AWS and JPMorgan Chase rather than releasing it publicly. While Anthropic cites cybersecurity concerns, critics suggest the restricted release may also protect against model distillation by competitors and create an enterprise revenue flywheel. Some AI security startups claim they can replicate Mythos's capabilities using smaller open-weight models, raising questions about whether the restriction is primarily about safety.

Anthropic Releases Mythos: Powerful Frontier AI Model for Cybersecurity Vulnerability Detection

Anthropic has released a limited preview of Mythos, described as one of its most powerful frontier AI models, to over 40 partner organizations including Amazon, Apple, Microsoft, and Cisco for defensive cybersecurity work. The model has reportedly identified thousands of zero-day vulnerabilities in software systems, some dating back one to two decades. While designed as a general-purpose model with strong coding and reasoning capabilities, concerns remain that bad actors could weaponize it to exploit vulnerabilities rather than fix them.

Anthropic Secures Massive 3.5 Gigawatt Compute Expansion with Google and Broadcom

Anthropic has signed an expanded agreement with Google and Broadcom to secure 3.5 gigawatts of additional compute capacity using Google's TPUs, coming online in 2027. The deal supports the company's explosive growth, with run-rate revenue jumping from $9 billion to $30 billion and more than 1,000 enterprise customers spending over $1 million annually. The expansion reflects unprecedented demand for Claude AI models despite some U.S. government supply-chain concerns.

Anthropic Acquires AI Biotech Startup Coefficient Bio for $400M to Expand Healthcare Capabilities

Anthropic has acquired stealth biotech AI startup Coefficient Bio in a $400 million stock deal to strengthen its healthcare and life sciences division. The 10-person team, including founders from Genentech's computational drug discovery unit, will join Anthropic's existing life sciences group. This follows Anthropic's October launch of Claude for Life Sciences, a tool designed to assist scientific researchers.

Anthropic Accidentally Exposes 512,000 Lines of Claude Code Source in Packaging Error

Anthropic, a company known for emphasizing AI safety and responsibility, accidentally exposed nearly 512,000 lines of source code for its Claude Code developer tool in a software package release, a lapse attributed to human error. This marks the second significant security incident in a week, following an earlier one in which nearly 3,000 internal files were made publicly accessible. The leaked architectural blueprint reveals the scaffolding around Claude Code, a tool that has been gaining significant market traction and reportedly prompted OpenAI to shut down Sora and refocus on developer tools.

Anthropic Introduces Auto Mode for Claude Code with AI-Driven Safety Layer

Anthropic has launched "auto mode" for Claude Code, allowing the AI to autonomously decide which coding actions are safe to execute without human approval, while filtering out risky behaviors and potential prompt injection attacks. This research preview feature uses AI safeguards to review actions before execution, blocking dangerous operations while allowing safe ones to proceed automatically. The feature is rolling out to Enterprise and API users and currently works only with Claude Sonnet 4.6 and Opus 4.6 models, with Anthropic recommending use in isolated environments.
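The safeguard described above amounts to a pre-execution approval gate: each proposed action is reviewed before it runs, risky actions are blocked, and safe ones proceed automatically. The sketch below illustrates that pattern only; the names, the denylist, and the `review` function are hypothetical stand-ins, not Anthropic's actual API or safety logic (a real system would consult a model, not a static list).

```python
# Illustrative sketch of a pre-execution approval gate, in the spirit of
# the "auto mode" safety layer described above. All names here are
# hypothetical; this is not Anthropic's implementation.

from dataclasses import dataclass

# A conservative denylist standing in for a model-based safety review.
BLOCKLIST = ("rm -rf", "curl | sh", "sudo")


@dataclass
class Action:
    command: str


def review(action: Action) -> bool:
    """Return True if the action is judged safe to auto-execute.

    A real safeguard would ask a model to assess the action (including
    prompt-injection risk); this sketch substitutes a substring check.
    """
    return not any(bad in action.command for bad in BLOCKLIST)


def run_with_gate(actions: list[Action]) -> list[str]:
    """Execute only the actions the reviewer approves; skip the rest."""
    executed = []
    for action in actions:
        if review(action):
            executed.append(action.command)  # a real agent would run it here
        # blocked actions are dropped, or escalated to a human for approval
    return executed


if __name__ == "__main__":
    print(run_with_gate([Action("ls -la"), Action("rm -rf /")]))
```

The key design point the feature description implies is that the gate sits between decision and execution: the agent never runs an action directly, so a single review function is the choke point for both dangerous commands and injected instructions.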

Amazon's Trainium Chip Lab: Powering Anthropic, OpenAI, and Challenging Nvidia's AI Dominance

Amazon Web Services has committed 2 gigawatts of Trainium computing capacity to OpenAI as part of a $50 billion deal, with over 1 million Trainium2 chips already powering Anthropic's Claude. The custom-designed Trainium3 chips, built in Amazon's Austin lab, offer up to 50% cost savings over traditional cloud servers and are positioned to challenge Nvidia's GPU dominance through PyTorch compatibility and reduced switching costs. The chips handle both training and inference workloads, and Amazon's Bedrock service now runs the majority of its inference traffic on Trainium2.

Pentagon Declares Anthropic National Security Risk Over AI Usage Restrictions

The U.S. Department of Defense has labeled Anthropic an "unacceptable risk to national security" after the AI company imposed restrictions on military use of its technology, specifically refusing uses involving mass surveillance and autonomous lethal targeting. The dispute stems from a $200 million Pentagon contract, with the DOD arguing that Anthropic's self-imposed "red lines" could lead to the company disabling its technology during critical military operations. A court hearing on Anthropic's request for a preliminary injunction against the DOD's designation is scheduled for next week.