AI Regulation News & Updates
EU Softens AI Regulatory Approach Amid International Pressure
The EU has released a third draft of the Code of Practice for general-purpose AI (GPAI) providers that appears to relax certain requirements compared to earlier versions. The draft uses softened language like "best efforts" and "reasonable measures" for compliance with copyright and transparency obligations, while also narrowing safety requirements for the most powerful models following criticism from industry and US officials.
Skynet Chance (+0.06%): The weakening of AI safety and transparency regulations in the EU, particularly for the most powerful models, reduces oversight and accountability mechanisms that could help prevent misalignment or harmful capabilities, potentially increasing risks from advanced AI systems deployed with inadequate safeguards or monitoring.
Skynet Date (-2 days): The softening of regulatory requirements reduces friction for AI developers, potentially accelerating the deployment timeline for powerful AI systems with fewer mandatory safety evaluations or risk mitigation measures in place.
AGI Progress (+0.03%): While this regulatory shift doesn't directly advance AGI capabilities, it creates a more permissive environment for AI companies to develop and deploy increasingly powerful models with fewer constraints, potentially enabling faster progress toward advanced capabilities without commensurate safety measures.
AGI Date (-3 days): The dilution of AI regulations in response to industry and US pressure creates a more favorable environment for rapid AI development with fewer compliance burdens, potentially accelerating the timeline for AGI by reducing regulatory friction and oversight requirements.
US AI Safety Institute Faces Potential Layoffs and Uncertain Future
Reports indicate the National Institute of Standards and Technology (NIST) may lay off up to 500 employees, significantly impacting the U.S. Artificial Intelligence Safety Institute (AISI). The institute, created under Biden's executive order on AI safety, which Trump recently repealed, was already facing uncertainty after its director departed earlier in February.
Skynet Chance (+0.1%): The gutting of a federal AI safety institute substantially increases Skynet risk by removing critical government oversight and expertise dedicated to researching and mitigating catastrophic AI risks at precisely the time when advanced AI development is accelerating.
Skynet Date (-3 days): The elimination of safety guardrails and regulatory mechanisms significantly accelerates the timeline for potential AI risk scenarios by creating a more permissive environment for rapid, potentially unsafe AI development with minimal government supervision.
AGI Progress (+0.04%): Reduced government oversight will likely allow AI developers to pursue more aggressive capability advancements with fewer regulatory hurdles or safety requirements, potentially accelerating technical progress toward AGI.
AGI Date (-3 days): The dismantling of safety-focused institutions will likely encourage AI labs to pursue riskier, faster development trajectories without regulatory barriers, potentially bringing AGI timelines significantly closer.
Anthropic CEO Criticizes Lack of Urgency in AI Governance at Paris Summit
Anthropic CEO Dario Amodei criticized the AI Action Summit in Paris as a "missed opportunity," calling for greater urgency in AI governance given the rapidly advancing technology. Amodei warned that AI systems will soon have capabilities comparable to "an entirely new state populated by highly intelligent people" and urged governments to focus on measuring AI use, ensuring economic benefits are widely shared, and increasing transparency around AI safety and security assessment.
Skynet Chance (+0.06%): Amodei's explicit warning about advanced AI presenting "significant global security dangers" and his comparison of AI systems to "an entirely new state populated by highly intelligent people" increases awareness of control risks, though his call for action hasn't yet resulted in concrete safeguards.
Skynet Date (-2 days): The failure of international governance bodies to agree on meaningful AI safety measures, as highlighted by Amodei calling the summit a "missed opportunity," suggests defensive measures are falling behind technological advancement, potentially accelerating the timeline to control problems.
AGI Progress (+0.03%): While focused on policy rather than technical breakthroughs, Amodei's characterization of AI systems becoming like "an entirely new state populated by highly intelligent people" suggests frontier labs like Anthropic are making significant progress toward human-level capabilities.
AGI Date (-2 days): Amodei's urgent call for faster and clearer action, coupled with his statement about "the pace at which the technology is progressing," suggests AI capabilities are advancing more rapidly than previously expected, potentially shortening the timeline to AGI.
European Union Publishes Guidelines on AI System Classification Under New AI Act
The European Union has released non-binding guidance to help determine which systems qualify as AI under its recently implemented AI Act. The guidance acknowledges that no exhaustive classification is possible and that the document will evolve as new questions and use cases emerge, with companies facing potential fines of up to 7% of global annual turnover for non-compliance.
Skynet Chance (-0.15%): The EU's implementation of a structured risk-based regulatory framework decreases the chances of uncontrolled AI development by establishing accountability mechanisms and prohibitions on dangerous applications. By formalizing governance for AI systems, the EU creates guardrails that make unchecked AI proliferation less likely.
Skynet Date (+4 days): The implementation of regulatory requirements with substantial penalties likely delays the timeline for potential uncontrolled AI risks by forcing companies to invest time and resources in compliance, risk assessment, and safety mechanisms before deploying advanced AI systems.
AGI Progress (-0.08%): The EU's regulatory framework introduces additional compliance hurdles for AI development that may modestly slow technical progress toward AGI by diverting resources and attention toward regulatory concerns. Companies may need to modify development approaches to ensure compliance with the risk-based requirements.
AGI Date (+2 days): The compliance requirements and potential penalties introduced by the AI Act are likely to extend development timelines for advanced AI systems in Europe, as companies must navigate regulatory uncertainty and implement additional safeguards before deploying capabilities that could contribute to AGI.
EU AI Act Begins Enforcement Against 'Unacceptable Risk' AI Systems
The European Union's AI Act has reached its first compliance deadline, banning AI systems deemed to pose "unacceptable risk" as of February 2, 2025. These prohibited applications include AI for social scoring, emotion recognition in schools/workplaces, biometric categorization systems, predictive policing, and manipulation through subliminal techniques, with violations potentially resulting in fines of up to €35 million or 7% of annual revenue, whichever is higher.
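The penalty structure described above can be sketched in a few lines; this is an illustrative calculation based only on the figures quoted in this article (the function name and simplification are my own, not part of the regulation's text):

```python
def ai_act_max_penalty(global_annual_turnover_eur: float) -> float:
    """Maximum fine for a prohibited-practice violation under the EU AI Act:
    the higher of a fixed cap (EUR 35 million) or 7% of worldwide annual
    turnover, per the figures cited above."""
    FIXED_CAP = 35_000_000
    TURNOVER_RATE = 0.07
    return max(FIXED_CAP, TURNOVER_RATE * global_annual_turnover_eur)

# A company with EUR 1 billion in turnover faces up to EUR 70 million,
# since 7% of turnover exceeds the fixed cap.
print(ai_act_max_penalty(1_000_000_000))
```

For smaller firms whose 7% figure falls below €35 million, the fixed cap dominates, which is why the "whichever is higher" formulation matters.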
Skynet Chance (-0.2%): The EU AI Act establishes significant guardrails against potentially harmful AI applications, creating a comprehensive regulatory framework that reduces the probability of unchecked AI development leading to uncontrolled or harmful systems, particularly by preventing manipulative and surveillance-oriented applications.
Skynet Date (+4 days): The implementation of substantial regulatory oversight and prohibition of certain AI applications will likely slow the deployment of advanced AI systems in the EU, extending the timeline for potentially harmful AI by requiring thorough risk assessments and compliance protocols before deployment.
AGI Progress (-0.08%): While not directly targeting AGI research, the EU's risk-based approach creates regulatory friction that may slow certain paths to AGI, particularly those involving human behavioral manipulation, mass surveillance, or other risky capabilities that might otherwise contribute to broader AI advancement.
AGI Date (+2 days): The regulatory requirements for high-risk AI systems will likely increase development time and compliance costs, potentially pushing back AGI timelines as companies must dedicate resources to ensuring their systems meet regulatory standards rather than focusing solely on capability advancement.