Catastrophic Risk AI News & Updates
California Enacts First-in-Nation AI Safety Transparency Law Requiring Large Labs to Disclose Catastrophic Risk Protocols
California Governor Gavin Newsom signed SB 53 into law, requiring large AI labs to publicly disclose their safety and security protocols for preventing catastrophic risks such as cyberattacks on critical infrastructure or bioweapon development. The law mandates that companies adhere to these protocols, with enforcement by the Office of Emergency Services; youth advocacy group Encode AI argues this demonstrates that regulation and innovation can coexist. The law arrives amid industry pushback against state-level AI regulation, with major tech companies and venture capital firms funding efforts to preempt state laws through federal legislation.
Skynet Chance (-0.08%): Mandating transparency and adherence to safety protocols for catastrophic risks (cyberattacks, bioweapons) creates accountability mechanisms that reduce the likelihood of uncontrolled AI deployment and of companies cutting safety corners under competitive pressure. The enforcement structure provides institutional oversight that did not previously exist in binding legal form.
Skynet Date (+0 days): While the law introduces safety requirements that could marginally slow deployment timelines for high-risk systems, it largely codifies practices companies already claim to follow, suggesting minimal actual deceleration. The enforcement mechanism may create some procedural delays but is unlikely to significantly alter the pace toward potentially catastrophic scenarios.
AGI Progress (0%): This policy focuses on transparency and safety documentation for catastrophic risks rather than imposing technical constraints on AI capability development itself. The law doesn't restrict research directions, model architectures, or compute scaling that drive AGI progress.
AGI Date (+0 days): The bill codifies existing industry practices around safety testing and model cards without imposing new technical barriers to capability advancement. Companies can continue AGI research at the same pace while meeting transparency requirements that are already part of their workflows.
California Senator Scott Wiener Pushes New AI Safety Bill SB 53 After Veto of His Previous Legislation
California Senator Scott Wiener has introduced SB 53, a new AI safety bill requiring major AI companies to publish safety reports and disclose their testing methods, after his previous bill, SB 1047, was vetoed in 2024. The new legislation focuses on transparency and reporting requirements for AI systems that could cause catastrophic harms such as cyberattacks, bioweapon creation, or deaths; it also includes whistleblower protections for AI lab employees and would create CalCompute, a state-operated cloud computing cluster. Unlike its predecessor, SB 53 has received support from some tech companies, including Anthropic, and partial support from Meta.
Skynet Chance (-0.08%): The bill mandates transparency and safety reporting for AI systems, with a particular focus on catastrophic risks such as cyberattacks and bioweapon creation, which could help identify and mitigate potentially uncontrollable AI scenarios. The whistleblower protections for AI lab employees also create channels to surface safety concerns before they become critical threats.
Skynet Date (+1 day): By requiring detailed safety reporting and creating regulatory oversight mechanisms, the bill introduces procedural hurdles that may slow deployment of the most capable AI systems. Its focus on transparency over liability suggests a more measured approach to AI development that could extend the timeline for reaching potentially dangerous capability levels.
AGI Progress (-0.01%): The bill primarily focuses on safety reporting rather than restricting core AI research and development, so it has minimal direct impact on AGI progress. The creation of CalCompute, a state-operated cloud computing cluster, could even provide additional research resources that slightly benefit AGI development.
AGI Date (+0 days): The reporting requirements and regulatory compliance processes may create administrative overhead for major AI labs, potentially slowing their development cycles slightly. However, because the bill targets only companies with over $500 million in revenue and focuses on transparency rather than restricting capabilities, the impact on the AGI timeline is minimal.
Meta Establishes Framework to Limit Development of High-Risk AI Systems
Meta has published its Frontier AI Framework, which outlines policies for handling powerful AI systems that pose significant safety risks. The company commits to limiting internal access to "high-risk" systems and implementing mitigations before release, and to halting development of "critical-risk" systems altogether when they could enable catastrophic attacks or weapons development.
Skynet Chance (-0.2%): Meta's explicit framework for identifying and restricting development of high-risk AI systems represents a significant institutional safeguard against uncontrolled deployment of potentially dangerous systems. It establishes concrete governance mechanisms tied to specific risk categories.
Skynet Date (+1 day): By creating formal processes to identify and restrict high-risk AI systems, Meta introduces safety-oriented friction into its development pipeline, likely slowing the deployment of advanced systems until appropriate safeguards are in place.
AGI Progress (-0.01%): While not directly impacting technical capabilities, Meta's framework represents a potential constraint on AGI development by establishing governance processes that may limit certain research directions or delay deployment of advanced capabilities.
AGI Date (+1 day): Meta's commitment to halt development of critical-risk systems and to mitigate high-risk systems before release suggests a more cautious, safety-oriented approach that will likely extend timelines for deploying the most advanced AI capabilities.