Policy and Regulation AI News & Updates

California Enacts First-in-Nation AI Transparency and Safety Bill SB 53

California Governor Gavin Newsom signed SB 53, establishing transparency requirements for major AI labs including OpenAI, Anthropic, Meta, and Google DeepMind regarding safety protocols and critical incident reporting. The bill also provides whistleblower protections and creates mechanisms for reporting AI-related safety incidents to state authorities. This is the first state-level frontier AI safety legislation in the U.S. Industry reaction was mixed, with some companies lobbying against the bill.

South Korea Invests $390 Million in Domestic AI Companies to Challenge OpenAI and Google

South Korea has launched a ₩530 billion ($390 million) sovereign AI initiative, funding five local companies to develop large-scale foundation models that can compete with global AI giants. The government will review progress every six months and narrow the field to two frontrunners, with participants including LG AI Research, SK Telecom, Naver Cloud, and Upstage developing models optimized for the Korean language.

California Senator Scott Wiener Pushes New AI Safety Bill SB 53 After Previous Legislation Veto

California Senator Scott Wiener has introduced SB 53, a new AI safety bill requiring major AI companies to publish safety reports and disclose testing methods, after his previous bill SB 1047 was vetoed in 2024. The new legislation focuses on transparency and reporting requirements for AI systems that could cause catastrophic harms such as cyberattacks, bioweapon creation, or deaths. Unlike the previous bill, SB 53 has received support from some tech companies, including an endorsement from Anthropic and partial support from Meta.

Meta Launches Multi-Million Dollar Super PAC to Combat State-Level AI Regulation

Meta has launched the American Technology Excellence Project, a super PAC investing "tens of millions" of dollars to fight state-level AI regulation and elect tech-friendly politicians in upcoming midterm elections. The move comes as over 1,000 AI-related bills have been introduced across all 50 states, with Meta arguing that a "patchwork" of state regulations would hinder innovation and U.S. competitiveness against China in AI development.

California Senate Approves AI Safety Bill SB 53 Targeting Companies Over $500M Revenue

California's state senate has approved AI safety bill SB 53, which targets large AI companies with more than $500 million in annual revenue and requires safety reports, incident reporting, and whistleblower protections. The bill is narrower than last year's vetoed SB 1047 and has been endorsed by AI company Anthropic. It now awaits Governor Newsom's signature amid potential federal-state tensions over AI regulation under the Trump administration.

China Bans Domestic Tech Companies from Purchasing Nvidia AI Chips

China's Cyberspace Administration has banned domestic tech companies from buying Nvidia AI chips and ordered companies including ByteDance and Alibaba to stop testing Nvidia's RTX Pro 6000D servers. The move follows earlier U.S. export licensing requirements and is a significant blow to China's tech ecosystem, as Nvidia dominates the global AI chip market with the most advanced processors available.

California Senate Passes AI Safety Bill SB 53 Requiring Transparency from Major AI Labs

California's state senate approved AI safety bill SB 53, which requires large AI companies to disclose safety protocols and creates whistleblower protections for AI lab employees. The bill now awaits Governor Newsom's signature, though he previously vetoed a similar but more expansive AI safety bill last year.

Anthropic Endorses California AI Safety Bill SB 53 Requiring Transparency from Major AI Developers

Anthropic has officially endorsed California's SB 53, a bill that would require the world's largest AI model developers to create safety frameworks and publish public safety reports before deploying powerful AI models. The bill focuses on preventing "catastrophic risks," defined as incidents causing 50 or more deaths or over $1 billion in damages, and includes whistleblower protections for employees who report safety concerns.

State Attorneys General Demand OpenAI Address Child Safety Concerns Following Teen Suicide

The attorneys general of California and Delaware warned OpenAI about child safety risks after a teenager's suicide followed prolonged ChatGPT interactions. They are investigating OpenAI's for-profit restructuring while demanding immediate safety improvements and questioning whether the company's current AI safety measures are adequate.

U.S. Government Considers Taking Stake in Intel to Boost Domestic Chip Manufacturing

The Trump administration is reportedly in discussions to take a stake in Intel to help expand U.S. semiconductor manufacturing capacity, including Intel's delayed Ohio factory. The talks follow political pressure on Intel's CEO over alleged ties to China and would represent a strategic government intervention in critical technology infrastructure.