Policy and Regulation AI News & Updates

California AI Policy Group Advocates Anticipatory Approach to Frontier AI Safety Regulations

A California policy group co-led by AI pioneer Fei-Fei Li released a 41-page interim report calling for AI safety laws that anticipate future risks, including those not yet observed. The report recommends greater transparency from frontier AI labs through mandatory safety test reporting, third-party verification, and stronger whistleblower protections. It acknowledges that evidence for extreme AI threats remains inconclusive but argues that the stakes of inaction are too high to wait for harms to materialize.

Chinese Government Increases Oversight of AI Startup DeepSeek

The Chinese government has reportedly placed homegrown AI startup DeepSeek under closer supervision following the company's successful launch of its open-source reasoning model R1 in January. New restrictions include travel limitations for some employees, with passports being held by DeepSeek's parent company, and government screening of potential investors, signaling China's strategic interest in protecting its AI technology from foreign influence.

OpenAI Advocates for US Restrictions on Chinese AI Models

OpenAI has submitted a proposal to the Trump administration recommending bans on "PRC-produced" AI models, specifically targeting Chinese AI lab DeepSeek, which it describes as "state-subsidized" and "state-controlled." The proposal claims DeepSeek's models pose privacy and security risks because the Chinese government could access user data, though OpenAI later issued a statement that partially walked back its original, stronger position.

EU Softens AI Regulatory Approach Amid International Pressure

The EU has released a third draft of the Code of Practice for general-purpose AI (GPAI) providers that appears to relax certain requirements compared with earlier versions. The draft uses hedged language such as "best efforts" and "reasonable measures" for compliance with copyright and transparency obligations, while also narrowing safety requirements for the most powerful models following criticism from industry and US officials.

Judge Signals Concerns About OpenAI's For-Profit Conversion Despite Denying Musk's Injunction

A federal judge denied Elon Musk's request for a preliminary injunction to halt OpenAI's transition to a for-profit structure, but expressed significant concerns about the conversion. Judge Rogers indicated that using funds raised for a nonprofit to finance its conversion to a for-profit entity could cause "irreparable harm," and offered an expedited trial in 2025 to resolve the corporate restructuring disputes.

Anthropic Proposes National AI Policy Framework to White House

After removing Biden-era AI commitments from its website, Anthropic submitted recommendations to the White House for a national AI policy focused on economic benefits. The recommendations include maintaining the AI Safety Institute, developing national security evaluations for powerful AI models, implementing chip export controls, and establishing a 50-gigawatt power target for AI data centers by 2027.

Tech Leaders Warn Against AGI Manhattan Project in Policy Paper

Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and CAIS Director Dan Hendrycks published a policy paper arguing against a "Manhattan Project for AGI" approach by the US government. The authors warn that an aggressive US push for superintelligent AI monopoly could provoke retaliation from China, suggesting instead a defensive strategy focused on deterrence rather than racing toward AGI dominance.

Anthropic Removes Biden-Era AI Safety Commitments After Trump Policy Shift

Anthropic has quietly removed several voluntary Biden administration AI safety commitments from its website, including pledges to share information on AI risk management and conduct research on bias. The removal coincides with the Trump administration's different approach to AI governance, including the repeal of Biden's AI Executive Order in favor of policies promoting AI development with less emphasis on discrimination concerns.

Trump Administration Cuts Threaten Critical US AI Research Funding

The Trump administration has fired key AI experts at the National Science Foundation, jeopardizing critical government funding for artificial intelligence research. The layoffs have forced the postponement or cancellation of review panels, stalling funding for AI projects; critics, including AI pioneer Geoffrey Hinton, have condemned the cuts to scientific grant-making.

California Senator Introduces New AI Safety Bill with Whistleblower Protections

California State Senator Scott Wiener has introduced SB 53, a new AI bill that would protect employees at leading AI labs who speak out about potential critical risks to society. The bill also proposes creating CalCompute, a public cloud computing cluster to support AI research, following Governor Newsom's veto of Wiener's more controversial SB 1047 bill last year.