Anthropic AI News & Updates

Anthropic Removes Biden-Era AI Safety Commitments After Trump Policy Shift

Anthropic has quietly removed several voluntary Biden administration AI safety commitments from its website, including pledges to share information on AI risk management and to conduct research on bias. The removal coincides with the Trump administration's shift in AI governance, including the repeal of Biden's AI Executive Order in favor of policies that promote AI development with less emphasis on discrimination concerns.

Anthropic Secures $3.5 Billion in Funding to Advance AI Development

AI startup Anthropic has raised $3.5 billion in a Series E funding round led by Lightspeed Venture Partners, bringing the company's total funding to $18.2 billion. The investment will support Anthropic's development of advanced AI systems, expansion of compute capacity, research in interpretability and alignment, and international growth while the company continues to struggle with profitability despite growing revenues.

Anthropic's Claude 3.7 Sonnet Cost Only Tens of Millions to Train

According to information reportedly provided by Anthropic to Wharton professor Ethan Mollick, its latest flagship AI model, Claude 3.7 Sonnet, cost only "a few tens of millions of dollars" to train and used less than 10^26 FLOPs. This relatively modest figure for a state-of-the-art model illustrates how far training costs at the frontier have fallen compared to earlier generations, some of which reportedly cost $100-200 million.
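
For rough intuition on why a sub-10^26 FLOP run can land in that price range, here is a back-of-envelope estimate. None of these hardware or pricing figures come from Anthropic; the compute total, per-accelerator throughput, and hourly rate below are all illustrative assumptions.

```python
# Back-of-envelope training-cost estimate. All numbers are illustrative
# assumptions, not Anthropic's actual figures.
total_flops = 3e25            # assumed run size, somewhat under the 1e26 FLOP threshold
flops_per_gpu_sec = 1e15      # assumed ~1 PFLOP/s effective per accelerator (after utilization)
cost_per_gpu_hour = 3.00      # assumed blended $/GPU-hour

gpu_hours = total_flops / flops_per_gpu_sec / 3600
cost = gpu_hours * cost_per_gpu_hour
print(f"{gpu_hours:,.0f} GPU-hours -> ${cost:,.0f}")
# ~8,333,333 GPU-hours -> ~$25,000,000 under these assumptions,
# consistent with "a few tens of millions of dollars"
```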

Anthropic Increases Funding Round to $3.5 Billion Despite Financial Losses

Anthropic is finalizing a $3.5 billion fundraising round at a $61.5 billion valuation, up from an initially planned $2 billion. Despite reaching $1.2 billion in annualized revenue, the company continues to operate at a loss and intends to invest the new capital in developing more capable AI technologies.

Anthropic Launches Claude 3.7 Sonnet with Extended Reasoning Capabilities

Anthropic has released Claude 3.7 Sonnet, described as the industry's first "hybrid AI reasoning model" that can provide both real-time responses and extended, deliberative reasoning. The model outperforms competitors on coding and agent benchmarks while reducing inappropriate refusals by 45%, and is accompanied by a new agentic coding tool called Claude Code.
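
To illustrate the hybrid design in practice: Anthropic's Messages API exposes extended thinking as an opt-in parameter with a token budget, so the same model serves both modes. The sketch below uses Anthropic's Python SDK as documented at launch; the prompt and budget values are illustrative.

```python
# Minimal sketch: calling Claude 3.7 Sonnet with extended thinking enabled,
# via Anthropic's Python SDK (pip install anthropic; requires ANTHROPIC_API_KEY).
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # model ID as published at launch
    max_tokens=4096,
    # Omit `thinking` for a standard real-time response; include it to let
    # the model deliberate before answering, up to the given token budget.
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

# With thinking enabled, the response interleaves `thinking` blocks
# (the model's deliberation) with the final `text` blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)
```

Setting a larger `budget_tokens` trades latency and cost for more deliberation, which is the same capability/cost dial described for the model's extended reasoning mode.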

UK Rebrands AI Safety Institute to Focus on Security, Partners with Anthropic

The UK government has renamed its AI Safety Institute to the AI Security Institute, shifting focus from existential risks to cybersecurity and national security concerns. Alongside this pivot, the government announced a new partnership with Anthropic to explore using its AI assistant Claude in public services and to contribute to the institute's security risk evaluations.

Anthropic to Launch Hybrid AI Model with Advanced Reasoning Capabilities

Anthropic is preparing to release a new AI model that combines "deep reasoning" capabilities with fast responses. The upcoming model reportedly outperforms OpenAI's reasoning model on some programming tasks and will feature a slider to control the trade-off between advanced reasoning and computational cost.

Anthropic CEO Warns of AI Progress Outpacing Understanding

Anthropic CEO Dario Amodei expressed concerns about the need for urgency in AI governance following the AI Action Summit in Paris, which he called a "missed opportunity." Amodei emphasized the importance of understanding AI models as they become more powerful, describing it as a "race" between advancing capabilities and comprehending their inner workings, while reaffirming Anthropic's commitment to frontier model development.

Anthropic CEO Criticizes Lack of Urgency in AI Governance at Paris Summit

Anthropic CEO Dario Amodei criticized the AI Action Summit in Paris as a "missed opportunity," calling for greater urgency in AI governance given the rapidly advancing technology. Amodei warned that AI systems will soon have capabilities comparable to "an entirely new state populated by highly intelligent people" and urged governments to focus on measuring AI use, ensuring economic benefits are widely shared, and increasing transparency around AI safety and security assessment.

Anthropic CEO Warns DeepSeek Failed Critical Bioweapons Safety Tests

Anthropic CEO Dario Amodei revealed that DeepSeek's AI model performed poorly on safety tests related to bioweapons information, describing it as "the worst of basically any model we'd ever tested." The findings came from Anthropic's routine evaluations of AI models for national security risks, and Amodei warned that while such models are not immediately dangerous, they could become problematic in the near future.