Anthropic AI News & Updates

Anthropic Restricts Mythos Cybersecurity Model to Enterprise Clients, Raising Questions About Motives

Anthropic has restricted the release of its new AI model, Mythos, which it says is highly capable of finding security exploits, and will share it only with large enterprises such as AWS and JPMorgan Chase rather than releasing it publicly. While Anthropic cites cybersecurity concerns, critics suggest the restricted release may also protect the model from distillation by competitors and feed an enterprise revenue flywheel. Some AI security startups claim they can replicate Mythos's capabilities with smaller open-weight models, raising the question of whether the restriction is primarily about safety.

Anthropic Releases Mythos: Powerful Frontier AI Model for Cybersecurity Vulnerability Detection

Anthropic has released a limited preview of Mythos, described as one of its most powerful frontier AI models, to more than 40 partner organizations, including Amazon, Apple, Microsoft, and Cisco, for defensive cybersecurity work. The model has reportedly identified thousands of zero-day vulnerabilities in software systems, some dating back one to two decades. Although Mythos is designed as a general-purpose model with strong coding and reasoning capabilities, there are concerns that bad actors could weaponize it to exploit vulnerabilities rather than fix them.
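
Mythos itself is available only to preview partners, but the basic workflow described, walking a source tree and asking a model to flag exploitable patterns, can be sketched against Anthropic's public Messages API. A minimal sketch follows, assuming the anthropic Python SDK; the model ID and prompt are illustrative placeholders, not the restricted Mythos model:

```python
# Hypothetical sketch of a defensive code review loop using Anthropic's
# Messages API. The model ID is a placeholder; Mythos itself is available
# only to preview partners.
from pathlib import Path

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env

client = anthropic.Anthropic()

def review_file(path: Path) -> str:
    """Ask the model to flag potentially exploitable patterns in one file."""
    source = path.read_text(errors="replace")
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder, not the restricted Mythos model
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "You are performing a defensive security review. "
                       f"List any potentially exploitable patterns:\n\n{source}",
        }],
    )
    return response.content[0].text

for f in Path("src").rglob("*.c"):
    print(f"== {f} ==\n{review_file(f)}")
```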

Anthropic Secures Massive 3.5 Gigawatt Compute Expansion with Google and Broadcom

Anthropic has signed an expanded agreement with Google and Broadcom to secure 3.5 gigawatts of additional compute capacity built on Google's TPUs, coming online in 2027. The deal supports the company's explosive growth: run-rate revenue has jumped from $9 billion to $30 billion, and more than 1,000 enterprise customers now spend over $1 million annually. The expansion reflects unprecedented demand for Claude AI models despite some supply-chain concerns raised by the U.S. government.

Anthropic Acquires AI Biotech Startup Coefficient Bio for $400M to Expand Healthcare Capabilities

Anthropic has acquired stealth biotech AI startup Coefficient Bio in a $400 million stock deal to strengthen its healthcare and life sciences division. The 10-person team, including founders from Genentech's computational drug discovery unit, will join Anthropic's existing life sciences group. This follows Anthropic's October launch of Claude for Life Sciences, a tool designed to assist scientific researchers.

Anthropic Accidentally Exposes 512,000 Lines of Claude Code Source in Packaging Error

Anthropic, a company known for emphasizing AI safety and responsibility, accidentally exposed nearly 512,000 lines of source code for its Claude Code developer tool through a human error in a software package release. This marks the second significant security lapse in a week, following an earlier incident in which nearly 3,000 internal files were made publicly accessible. The leaked code reveals the architectural scaffolding around Claude Code, which has been gaining significant market traction and reportedly prompted OpenAI to shut down Sora to refocus on developer tools.
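
Anthropic has not said exactly how the files ended up in the package, but leaks of this kind typically happen when a build pipeline bundles unintended files into a published artifact. One common guard is a pre-publish check that inspects the built tarball; the disallowed patterns below are illustrative, not Anthropic's actual layout:

```python
# Illustrative pre-publish check: fail the release if the built package
# tarball contains files that look like unminified source or internal docs.
# The disallowed patterns are examples, not Anthropic's actual layout.
import fnmatch
import sys
import tarfile

DISALLOWED = ["*/src/*", "*.ts", "*.map", "*/internal/*"]

def check_tarball(path: str) -> int:
    """Return a nonzero exit code if any member matches a disallowed pattern."""
    leaked = []
    with tarfile.open(path) as tar:
        for member in tar.getnames():
            if any(fnmatch.fnmatch(member, pat) for pat in DISALLOWED):
                leaked.append(member)
    for name in leaked:
        print(f"unexpected file in package: {name}", file=sys.stderr)
    return 1 if leaked else 0

if __name__ == "__main__":
    sys.exit(check_tarball(sys.argv[1]))
```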

Anthropic Introduces Auto Mode for Claude Code with AI-Driven Safety Layer

Anthropic has launched an "auto mode" for Claude Code that lets the AI decide autonomously which coding actions are safe to execute without human approval while filtering out risky behaviors and potential prompt-injection attacks. The research-preview feature uses AI safeguards to review each action before execution, blocking dangerous operations and letting safe ones proceed automatically. Auto mode is rolling out to Enterprise and API users, currently works only with the Claude Sonnet 4.6 and Opus 4.6 models, and Anthropic recommends running it in isolated environments.
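
Anthropic has not published the internals of the safety layer, but the pattern it describes, a reviewer that classifies each proposed action before execution and escalates anything risky to a human, can be sketched as follows. The rules and function names here are hypothetical; the production system presumably uses a model-based classifier rather than regexes:

```python
# Minimal sketch of an action-gating safety layer. This is not Anthropic's
# implementation; a production system would use a model-based classifier
# rather than these illustrative regexes.
import re
import subprocess

RISKY_PATTERNS = [
    r"\brm\s+-rf\b",          # destructive deletes
    r"\bcurl\b.*\|\s*sh\b",   # piping remote content into a shell
    r"\bgit\s+push\s+--force\b",
]

def is_risky(command: str) -> bool:
    """Flag commands matching known-dangerous patterns."""
    return any(re.search(p, command) for p in RISKY_PATTERNS)

def run_with_gate(command: str) -> None:
    """Execute safe actions automatically; escalate risky ones to a human."""
    if is_risky(command):
        answer = input(f"Risky action proposed: {command!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            print("Blocked.")
            return
    subprocess.run(command, shell=True, check=False)

run_with_gate("ls -la")          # safe: runs without approval
run_with_gate("rm -rf build/")   # risky: requires explicit approval
```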

Amazon's Trainium Chip Lab: Powering Anthropic, OpenAI, and Challenging Nvidia's AI Dominance

Amazon Web Services has committed 2 gigawatts of Trainium computing capacity to OpenAI as part of a $50 billion deal, and more than 1 million Trainium2 chips already power Anthropic's Claude. The custom Trainium3 chips, designed in Amazon's Austin lab, offer up to 50% cost savings over conventional cloud servers and are built to challenge Nvidia's GPU dominance through PyTorch compatibility and reduced switching costs. The chips handle both training and inference workloads, and Amazon's Bedrock service now runs the majority of its inference traffic on Trainium2.
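
The PyTorch-compatibility claim refers to AWS's Neuron SDK, which compiles ordinary PyTorch models to run on Trainium and Inferentia hardware. A minimal inference sketch, assuming the torch-neuronx package on a Neuron-enabled instance:

```python
# Minimal sketch of compiling a stock PyTorch model for AWS Trainium /
# Inferentia with torch-neuronx. Requires a Neuron-enabled instance
# (e.g. trn1 or inf2) with the AWS Neuron SDK installed.
import torch
import torch_neuronx
from torchvision.models import resnet50

model = resnet50().eval()
example = torch.rand(1, 3, 224, 224)

# Ahead-of-time compile the traced graph for NeuronCores; the returned
# module is then called like any other PyTorch module.
neuron_model = torch_neuronx.trace(model, example)

output = neuron_model(example)
print(output.shape)  # torch.Size([1, 1000])

# The compiled module can be saved and reloaded like TorchScript:
torch.jit.save(neuron_model, "resnet50_neuron.pt")
```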

Pentagon Declares Anthropic National Security Risk Over AI Usage Restrictions

The U.S. Department of Defense has labeled Anthropic an "unacceptable risk to national security" after the AI company imposed restrictions on military use of its technology, specifically refusing uses involving mass surveillance and autonomous lethal targeting. The dispute stems from a $200 million Pentagon contract, with the DOD arguing that Anthropic's self-imposed "red lines" could lead to the company disabling its technology during critical military operations. A court hearing on Anthropic's request for a preliminary injunction against the DOD's designation is scheduled for next week.

Pentagon Develops Independent AI Systems After Anthropic Partnership Collapse

The Pentagon is building its own large language models to replace Anthropic's AI after their contract broke down over military-use restrictions. When Anthropic sought contractual clauses prohibiting mass surveillance and autonomous weapons deployment, the Pentagon rejected the terms and partnered with OpenAI and xAI instead. The Department of Defense has also designated Anthropic a supply-chain risk, effectively barring other defense contractors from working with the company.

OpenAI Partners with AWS to Deliver AI Services to U.S. Government Agencies

OpenAI has signed a partnership with Amazon Web Services to sell its AI products to U.S. government agencies for both classified and unclassified work. The deal expands OpenAI's federal presence beyond its recent Pentagon contract and positions it to compete with Anthropic, which has deep AWS integration but now carries a DOD supply-chain-risk designation after refusing military surveillance applications.