AI Regulation News & Updates

California Enacts First State-Level AI Safety Transparency Law Requiring Major Labs to Disclose Protocols

California Governor Gavin Newsom signed SB 53 into law, making California the first state to mandate AI safety transparency from major AI laboratories such as OpenAI and Anthropic. The law requires these companies to publicly disclose and adhere to their safety protocols, a significant shift in AI regulation after the previous bill, SB 1047, was vetoed last year.

California Senator Scott Wiener Pushes New AI Safety Bill SB 53 After Previous Legislation Veto

California Senator Scott Wiener has introduced SB 53, a new AI safety bill requiring major AI companies to publish safety reports and disclose their testing methods, after his previous bill, SB 1047, was vetoed in 2024. The new legislation focuses on transparency and reporting requirements for AI systems that could cause catastrophic harms such as cyberattacks, bioweapon creation, or deaths. Unlike its predecessor, SB 53 has received support from some tech companies, including Anthropic, and partial support from Meta.

TechCrunch Equity Podcast Covers AI Safety Wins, Meta's AR Push, and a "Golden Age of Robotics"

TechCrunch's Equity podcast discusses recent developments in AI, robotics, and regulation, with particular focus on Meta's augmented reality initiatives (including a live demo failure), California's renewed AI safety efforts, and what the hosts describe as the "Golden Age of Robotics."

New York Passes RAISE Act Requiring Safety Standards for Frontier AI Models

New York state lawmakers passed the RAISE Act, which requires major AI companies such as OpenAI, Google, and Anthropic to publish safety reports and follow transparency standards for AI models trained with over $100 million in computing resources. The bill aims to prevent AI-fueled disasters causing more than 100 casualties or over $1 billion in damages, with civil penalties of up to $30 million for non-compliance. The legislation now awaits Governor Kathy Hochul's signature; if signed, it would establish the first legally mandated transparency standards for frontier AI labs in America.

EU Softens AI Regulatory Approach Amid International Pressure

The EU has released a third draft of the Code of Practice for general-purpose AI (GPAI) providers that appears to relax certain requirements compared to earlier versions. The draft adopts softer language, such as "best efforts" and "reasonable measures," for compliance with copyright and transparency obligations, while also narrowing safety requirements for the most powerful models following criticism from industry and US officials.

US AI Safety Institute Faces Potential Layoffs and Uncertain Future

Reports indicate the National Institute of Standards and Technology (NIST) may terminate up to 500 employees, significantly impacting the U.S. Artificial Intelligence Safety Institute (AISI). The institute, created under President Biden's executive order on AI safety, which President Trump recently repealed, was already facing uncertainty after its director departed in early February.

Anthropic CEO Criticizes Lack of Urgency in AI Governance at Paris Summit

Anthropic CEO Dario Amodei criticized the AI Action Summit in Paris as a "missed opportunity," calling for greater urgency in AI governance given the rapidly advancing technology. Amodei warned that AI systems will soon have capabilities comparable to "an entirely new state populated by highly intelligent people" and urged governments to focus on measuring AI use, ensuring economic benefits are widely shared, and increasing transparency around AI safety and security assessment.

European Union Publishes Guidelines on AI System Classification Under New AI Act

The European Union has released non-binding guidance to help companies determine which systems qualify as AI under its recently implemented AI Act. The guidance acknowledges that no exhaustive classification is possible and that the document will evolve as new questions and use cases emerge; companies face potential fines of up to 7% of global annual turnover for non-compliance with the Act.

EU AI Act Begins Enforcement Against 'Unacceptable Risk' AI Systems

The European Union's AI Act has reached its first compliance deadline, banning AI systems deemed to pose "unacceptable risk" as of February 2, 2025. Prohibited applications include AI for social scoring, emotion recognition in schools and workplaces, biometric categorization systems, predictive policing, and manipulation through subliminal techniques, with violations potentially resulting in fines of up to €35 million or 7% of annual revenue, whichever is greater.