Policy and Regulation AI News & Updates

U.S. May Permit Export of Nvidia H200 AI Chips to China Despite Congressional Opposition

The U.S. Department of Commerce is reportedly planning to allow Nvidia to export H200 AI chips to China, though exports would be limited to chip models roughly 18 months old. The decision conflicts with bipartisan Congressional efforts to block advanced AI chip exports to China on national security grounds, including the proposed SAFE Chips Act, which would impose a 30-month export ban. The move marks another shift in the Trump administration's stance, which has oscillated between restricting and enabling chip exports as part of broader trade negotiations.

Trump Plans Executive Order to Override State AI Regulations Despite Bipartisan Opposition

President Trump announced plans to sign an executive order blocking states from enacting their own AI regulations, arguing that a unified national framework is necessary for the U.S. to maintain its competitive edge in AI development. The proposal faces strong bipartisan pushback from Congress and state leaders who argue it represents federal overreach and removes important local protections for citizens against AI harms. The order would create an AI Litigation Task Force to challenge state laws and consolidate regulatory authority under White House AI czar David Sacks.

Federal Attempt to Block State AI Regulation Fails Amid Bipartisan Opposition

Republican leaders' attempt to include a ban on state AI regulation in the annual defense bill was rejected after bipartisan pushback. The proposal, backed by Silicon Valley and President Trump, would have preempted states from enacting their own AI laws; critics argued that, in the absence of federal AI regulation, it would eliminate oversight altogether. House Majority Leader Steve Scalise indicated that Republicans will seek alternative legislative vehicles for the ban.

OpenAI Lobbies Trump Administration for Expanded Tax Credits to Fund Massive AI Infrastructure Buildout

OpenAI has sent a letter to the Trump administration requesting that the Chips Act's Advanced Manufacturing Investment Credit be expanded to cover AI data centers, servers, and electrical grid components, with the aim of reducing capital costs for infrastructure development. The company is also asking for accelerated permitting processes and a strategic reserve of raw materials needed for AI infrastructure. OpenAI projects surpassing $20 billion in annualized revenue by the end of 2025 and has made $1.4 trillion in capital commitments over eight years.

California Enacts First-in-Nation AI Safety Transparency Law Requiring Large Labs to Disclose Catastrophic Risk Protocols

California Governor Gavin Newsom signed SB 53 into law, requiring large AI labs to publicly disclose their safety and security protocols for preventing catastrophic risks such as cyberattacks on critical infrastructure or bioweapon development. The bill mandates that companies adhere to these protocols, with enforcement by the Office of Emergency Services; youth advocacy group Encode AI argues the law demonstrates that regulation can coexist with innovation. The law comes amid industry pushback against state-level AI regulation, with major tech companies and VCs funding efforts to preempt state laws through federal legislation.

California Enacts First-in-Nation AI Safety Transparency Law Requiring Disclosure from Major Labs

California Governor Newsom signed SB 53 into law, making it the first state to require major AI companies like OpenAI and Anthropic to disclose and adhere to their safety protocols. The legislation includes whistleblower protections and safety incident reporting requirements, representing a "transparency without liability" approach that succeeded where the more stringent SB 1047 failed.

California Enacts First State-Level AI Safety Transparency Law Requiring Major Labs to Disclose Protocols

California Governor Newsom signed SB 53 into law, making California the first state to mandate AI safety transparency from major AI laboratories like OpenAI and Anthropic. The law requires these companies to publicly disclose and adhere to their safety protocols, marking a significant shift in AI regulation after its predecessor, SB 1047, was vetoed last year.

California Enacts First-in-Nation AI Transparency and Safety Bill SB 53

California Governor Gavin Newsom signed SB 53, establishing transparency requirements for major AI labs including OpenAI, Anthropic, Meta, and Google DeepMind regarding safety protocols and critical incident reporting. The bill also provides whistleblower protections and creates mechanisms for reporting AI-related safety incidents to state authorities. This represents the first state-level frontier AI safety legislation in the U.S., though it received mixed industry reactions with some companies lobbying against it.

South Korea Invests $390 Million in Domestic AI Companies to Challenge OpenAI and Google

South Korea has launched a ₩530 billion ($390 million) sovereign AI initiative, funding five local companies to develop large-scale foundation models capable of competing with global AI giants. The government will review progress every six months and narrow the field to two frontrunners, with companies including LG AI Research, SK Telecom, Naver Cloud, and Upstage developing Korean-language-optimized models.

California Senator Scott Wiener Pushes New AI Safety Bill SB 53 After Previous Legislation Veto

California Senator Scott Wiener has introduced SB 53, a new AI safety bill requiring major AI companies to publish safety reports and disclose testing methods, after his previous bill, SB 1047, was vetoed in 2024. The new legislation focuses on transparency and reporting requirements for AI systems that could cause catastrophic harms such as cyberattacks, the creation of bioweapons, or deaths. Unlike its predecessor, SB 53 has received support from some tech companies, including Anthropic, and partial support from Meta.