Policy and Regulation AI News & Updates

Anthropic Proposes National AI Policy Framework to White House

After removing Biden-era AI commitments from its website, Anthropic submitted recommendations to the White House for a national AI policy focused on economic benefits. The recommendations include maintaining the AI Safety Institute, developing national security evaluations for powerful AI models, implementing chip export controls, and establishing a 50-gigawatt power target for AI data centers by 2027.

Tech Leaders Warn Against AGI Manhattan Project in Policy Paper

Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and CAIS Director Dan Hendrycks published a policy paper arguing against a "Manhattan Project for AGI" approach by the US government. The authors warn that an aggressive US push for superintelligent AI monopoly could provoke retaliation from China, suggesting instead a defensive strategy focused on deterrence rather than racing toward AGI dominance.

Anthropic Removes Biden-Era AI Safety Commitments After Trump Policy Shift

Anthropic has quietly removed several voluntary Biden administration AI safety commitments from its website, including pledges to share information on AI risk management and conduct research on bias. The removal coincides with the Trump administration's different approach to AI governance, including the repeal of Biden's AI Executive Order in favor of policies promoting AI development with less emphasis on discrimination concerns.

Trump Administration Cuts Threaten Critical US AI Research Funding

The Trump administration has fired key AI experts at the National Science Foundation, jeopardizing government funding for artificial intelligence research. The layoffs have led to postponed or canceled review panels, stalling funding for AI projects; critics, including AI pioneer Geoffrey Hinton, have condemned the cuts to scientific grant-making.

California Senator Introduces New AI Safety Bill with Whistleblower Protections

California State Senator Scott Wiener has introduced SB 53, a new AI bill that would protect employees at leading AI labs who speak out about potential critical risks to society. The bill also proposes creating CalCompute, a public cloud computing cluster to support AI research, following Governor Newsom's veto of Wiener's more controversial SB 1047 bill last year.

US AI Safety Institute Faces Potential Layoffs and Uncertain Future

Reports indicate the National Institute of Standards and Technology (NIST) may terminate up to 500 employees, significantly impacting the U.S. Artificial Intelligence Safety Institute (AISI). The institute, created under Biden's executive order on AI safety, which Trump recently repealed, was already facing uncertainty after its director departed earlier in February.

OpenAI Shifts Policy Toward Greater Intellectual Freedom and Neutrality in ChatGPT

OpenAI has updated its Model Spec policy to embrace intellectual freedom, enabling ChatGPT to answer more questions, offer multiple perspectives on controversial topics, and refuse to engage less often. The company's new guiding principle emphasizes truth-seeking and neutrality, though some speculate the changes may be aimed at appeasing the Trump administration or may reflect a broader industry shift away from content moderation.

EU Abandons AI Liability Directive, Denies Trump Pressure

The European Union has scrapped its proposed AI Liability Directive, which would have made it easier for consumers to sue over AI-related harms. EU digital chief Henna Virkkunen denied this decision was due to pressure from the Trump administration, instead citing a focus on boosting competitiveness by reducing bureaucracy and limiting reporting requirements.

UK Rebrands AI Safety Institute to Focus on Security, Partners with Anthropic

The UK government has renamed its AI Safety Institute to the AI Security Institute, shifting focus from existential risks to cybersecurity and national security concerns. Alongside this pivot, the government announced a new partnership with Anthropic to explore using its AI assistant Claude in public services and contribute to security risk evaluation.

Anthropic CEO Criticizes Lack of Urgency in AI Governance at Paris Summit

Anthropic CEO Dario Amodei criticized the AI Action Summit in Paris as a "missed opportunity," calling for greater urgency in AI governance given how rapidly the technology is advancing. Amodei warned that AI systems will soon have capabilities comparable to "an entirely new state populated by highly intelligent people," and he urged governments to focus on measuring AI use, ensuring economic benefits are widely shared, and increasing transparency around AI safety and security assessments.