Anthropic AI News & Updates

Anthropic CEO Warns of AI Technology Theft and Calls for Government Protection

Anthropic CEO Dario Amodei has warned of potential espionage targeting the valuable algorithmic secrets of US AI companies, singling out China as the most likely threat. Speaking at a Council on Foreign Relations event, Amodei claimed that "$100 million secrets" can be contained in just a few lines of code and called for greater US government assistance in protecting against theft.

Google's $3 Billion Investment in Anthropic Reveals Deeper Ties Than Previously Known

Recently obtained court documents reveal Google owns a 14% stake in AI startup Anthropic and plans to invest an additional $750 million this year, bringing its total investment to over $3 billion. While Google lacks voting rights or board seats, the revelation raises questions about Anthropic's independence, especially as Amazon has also committed up to $8 billion in funding to the company.

Anthropic's Claude Code Tool Causes System Damage Through Root Permission Bug

Anthropic's newly launched coding tool, Claude Code, shipped with a buggy auto-update function that damaged some users' workstations. When the tool was installed with root or superuser permissions, its faulty update commands changed the access permissions of critical system files, rendering some systems unusable and forcing recovery operations.
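The failure mode described above can be sketched as a missing guard: an updater running as root should refuse to alter permissions on system paths. The function names and path list below are illustrative assumptions, not Claude Code's actual implementation.

```python
import os

# Paths an updater running as root should never chmod.
# This list is an illustrative assumption, not Claude Code's own.
CRITICAL_PREFIXES = ("/usr", "/bin", "/sbin", "/etc", "/lib")

def safe_to_chmod(path: str, euid: int) -> bool:
    """Return False when changing permissions on `path` could damage the system."""
    resolved = os.path.realpath(path)  # follow symlinks before checking
    if euid == 0 and resolved.startswith(CRITICAL_PREFIXES):
        return False
    return True

def apply_update_permissions(path: str, mode: int) -> None:
    """Hypothetical updater step: chmod only after the safety check passes."""
    if not safe_to_chmod(path, os.geteuid()):
        raise PermissionError(f"refusing to chmod critical path: {path}")
    os.chmod(path, mode)
```

With a check like this, an update running as root fails loudly on `/usr/bin/...` instead of silently rewriting system-file permissions.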

Anthropic Proposes National AI Policy Framework to White House

After removing Biden-era AI commitments from its website, Anthropic submitted recommendations to the White House for a national AI policy focused on economic benefits. The recommendations include maintaining the AI Safety Institute, developing national security evaluations for powerful AI models, implementing chip export controls, and establishing a 50-gigawatt power target for AI data centers by 2027.

Anthropic Removes Biden-Era AI Safety Commitments After Trump Policy Shift

Anthropic has quietly removed several voluntary Biden administration AI safety commitments from its website, including pledges to share information on AI risk management and conduct research on bias. The removal coincides with the Trump administration's different approach to AI governance, including the repeal of Biden's AI Executive Order in favor of policies promoting AI development with less emphasis on discrimination concerns.

Anthropic Secures $3.5 Billion in Funding to Advance AI Development

AI startup Anthropic has raised $3.5 billion in a Series E funding round led by Lightspeed Venture Partners, bringing the company's total funding to $18.2 billion. The investment will support Anthropic's development of advanced AI systems, expansion of compute capacity, research in interpretability and alignment, and international growth while the company continues to struggle with profitability despite growing revenues.

Anthropic's Claude 3.7 Sonnet Cost Only Tens of Millions to Train

According to information reportedly provided by Anthropic to Wharton professor Ethan Mollick, its latest flagship AI model, Claude 3.7 Sonnet, cost only "a few tens of millions of dollars" to train, using less than 10^26 FLOPs. This relatively modest figure for a state-of-the-art model illustrates how the cost of training cutting-edge AI systems has fallen compared with earlier-generation models that reportedly cost $100-200 million.
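A back-of-envelope calculation shows the reported figures are mutually plausible. All inputs below are rough public-knowledge assumptions (accelerator throughput, utilization, hourly rate), not Anthropic's actual training setup; the FLOP count is an illustrative value under the reported 10^26 ceiling.

```python
# Rough consistency check of "a few tens of millions of dollars"
# against "< 10^26 FLOPs". Every number here is an assumption.

total_flops = 3e25          # illustrative compute, below the reported 1e26 bound
gpu_flops_per_s = 1e15      # ~1 PFLOP/s for a modern accelerator (bf16, dense)
utilization = 0.5           # assumed model FLOPs utilization at scale
cost_per_gpu_hour = 2.00    # assumed bulk rate in USD

gpu_seconds = total_flops / (gpu_flops_per_s * utilization)
gpu_hours = gpu_seconds / 3600
cost_usd = gpu_hours * cost_per_gpu_hour

print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost_usd / 1e6:.0f}M")
```

Under these assumptions the estimate lands in the tens of millions of dollars, consistent with the reported cost; pushing compute to the full 10^26 FLOPs, or assuming lower utilization, moves it toward the $100-200 million range of earlier models.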

Anthropic Increases Funding Round to $3.5 Billion Despite Financial Losses

Anthropic is finalizing a $3.5 billion fundraising round at a $61.5 billion valuation, up from an initially planned $2 billion. Despite reaching $1.2 billion in annualized revenue, the company continues to operate at a loss and intends to invest the new capital in developing more capable AI technologies.

Anthropic Launches Claude 3.7 Sonnet with Extended Reasoning Capabilities

Anthropic has released Claude 3.7 Sonnet, described as the industry's first "hybrid AI reasoning model," capable of both real-time responses and extended, deliberative reasoning. The model outperforms competitors on coding and agent benchmarks while reducing inappropriate refusals by 45%, and it ships alongside a new agentic coding tool called Claude Code.

UK Rebrands AI Safety Institute to Focus on Security, Partners with Anthropic

The UK government has renamed its AI Safety Institute the AI Security Institute, shifting its focus from existential risks to cybersecurity and national security concerns. Alongside this pivot, the government announced a new partnership with Anthropic to explore using its AI assistant Claude in public services and to contribute to security risk evaluations.