Anthropic AI News & Updates

Anthropic CSO Jared Kaplan to Discuss Hybrid Reasoning Models at Tech Conference

Anthropic co-founder and Chief Science Officer Jared Kaplan will speak at TechCrunch Sessions: AI on June 5 at UC Berkeley. He will discuss hybrid reasoning models and Anthropic's risk-governance framework, bringing insights from his background as a theoretical physicist and his work developing Claude AI assistants.

Databricks and Anthropic CEOs to Discuss Collaboration on Domain-Specific AI Agents

Databricks CEO Ali Ghodsi and Anthropic CEO Dario Amodei are hosting a virtual fireside chat to discuss their collaboration on advancing domain-specific AI agents. The event will include three additional sessions exploring this partnership between two major AI industry players.

Google Adopts Anthropic's Model Context Protocol for AI Data Connectivity

Google has announced it will support Anthropic's Model Context Protocol (MCP) in its Gemini models and SDK, following OpenAI's similar adoption. MCP enables two-way connections between AI models and external data sources, allowing models to access and interact with business tools, software, and content repositories to complete tasks.
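
To ground the description, the sketch below shows what the server side of such a connection can look like, written against the open-source MCP Python SDK (the mcp package). The lookup_order tool, its in-memory order data, and the file name minimal_server.py are hypothetical stand-ins for the kind of business system a connected model could query; this is an illustration under those assumptions, not Google's or Anthropic's implementation.

```python
# minimal_server.py -- illustrative MCP server exposing one business-data tool.
# Requires the open-source MCP Python SDK: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-lookup")

# Hypothetical in-memory stand-in for a real order-management system.
ORDERS = {
    "A-1001": {"status": "shipped", "eta_days": 2},
    "A-1002": {"status": "processing", "eta_days": 5},
}

@mcp.tool()
def lookup_order(order_id: str) -> dict:
    """Return the status of an order, or an error if the ID is unknown."""
    order = ORDERS.get(order_id)
    if order is None:
        return {"error": f"unknown order {order_id}"}
    return order

if __name__ == "__main__":
    # Serve over stdio so any MCP-aware client can launch and connect to it.
    mcp.run(transport="stdio")
```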

OpenAI Adopts Anthropic's Model Context Protocol for Data Integration

OpenAI has announced it will support Anthropic's Model Context Protocol (MCP) across its products, including the ChatGPT desktop app. MCP is an open standard that lets AI models connect to external data sources and systems, enabling more relevant, context-aware responses through two-way connections between data sources and AI applications.
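
A complementary sketch of the application side of that two-way connection, again using the open-source MCP Python SDK and assuming the hypothetical minimal_server.py and lookup_order tool from the earlier example: the client launches the server, discovers the tools it exposes, and invokes one on the model's behalf.

```python
# Illustrative MCP client: connect to a local server over stdio, list and call tools.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the hypothetical server from the previous sketch as a subprocess.
server_params = StdioServerParameters(command="python", args=["minimal_server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # handshake with the server
            tools = await session.list_tools()  # discover the tools it exposes
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(   # invoke a tool on the model's behalf
                "lookup_order", arguments={"order_id": "A-1001"}
            )
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```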

Anthropic Introduces Web Search Capability to Claude AI Assistant

Anthropic has added web search capabilities to its Claude AI chatbot, initially available to paid US users on the Claude 3.7 Sonnet model. The feature, which includes direct source citations, brings Claude closer to feature parity with competitors like ChatGPT and Gemini, though concerns remain about potential hallucinations and citation errors.

Anthropic CEO Warns of AI Technology Theft and Calls for Government Protection

Anthropic CEO Dario Amodei has expressed concerns about potential espionage targeting valuable AI algorithmic secrets from US companies, with China specifically mentioned as a likely threat. Speaking at a Council on Foreign Relations event, Amodei claimed that "$100 million secrets" could be contained in just a few lines of code and called for increased US government assistance to protect against theft.

Google's $3 Billion Investment in Anthropic Reveals Deeper Ties Than Previously Known

Recently obtained court documents reveal Google owns a 14% stake in AI startup Anthropic and plans to invest an additional $750 million this year, bringing its total investment to over $3 billion. While Google lacks voting rights or board seats, the revelation raises questions about Anthropic's independence, especially as Amazon has also committed up to $8 billion in funding to the company.

Anthropic's Claude Code Tool Causes System Damage Through Root Permission Bug

Anthropic's newly launched coding tool, Claude Code, shipped with a buggy auto-update function that damaged some workstations. When the tool was installed with root or superuser permissions, the faulty update commands changed the access permissions of critical system files, rendering some systems unusable and forcing affected users to run recovery operations.

Anthropic Proposes National AI Policy Framework to White House

After removing Biden-era AI commitments from its website, Anthropic submitted recommendations to the White House for a national AI policy focused on economic benefits. The recommendations include maintaining the AI Safety Institute, developing national security evaluations for powerful AI models, implementing chip export controls, and establishing a 50-gigawatt power target for AI data centers by 2027.

Anthropic Removes Biden-Era AI Safety Commitments After Trump Policy Shift

Anthropic has quietly removed several voluntary Biden-era AI safety commitments from its website, including pledges to share information on AI risk management and to conduct research on bias. The removal coincides with the Trump administration's shift in AI governance, including the repeal of Biden's AI Executive Order in favor of policies that promote AI development with less emphasis on discrimination concerns.