AI Policy and Regulation News & Updates
Anthropic Proposes National AI Policy Framework to White House
After removing Biden-era AI commitments from its website, Anthropic submitted recommendations to the White House for a national AI policy focused on economic benefits. The recommendations include maintaining the AI Safety Institute, developing national security evaluations for powerful AI models, implementing chip export controls, and establishing a 50-gigawatt power target for AI data centers by 2027.
Skynet Chance (-0.08%): Anthropic's recommendations prioritize national security evaluations and the preservation of safety institutions, which could reduce the risk of uncontrolled AI. The focus on governance structures and security vulnerability analysis represents a moderate push toward greater oversight of powerful AI systems.
Skynet Date (+2 days): The proposed policies would likely slow deployment through additional security requirements and evaluations, moderately decelerating paths to potentially dangerous AI capabilities. Continued institutional oversight creates friction against rapid, unchecked AI development.
AGI Progress (+0.03%): While the recommendations focus mainly on governance rather than capabilities, the proposed 50-gigawatt power target for AI data centers by 2027 would significantly increase compute resources. This infrastructure expansion could moderately accelerate overall progress toward advanced AI systems.
AGI Date (-1 day): The massive power infrastructure proposal (50 GW by 2027) would substantially increase AI computing capacity in the US, potentially accelerating AGI development timelines. However, this is partially offset by the proposed regulatory mechanisms, which might introduce some delays.
Tech Leaders Warn Against AGI Manhattan Project in Policy Paper
Former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and CAIS Director Dan Hendrycks published a policy paper arguing against a "Manhattan Project for AGI" approach by the US government. The authors warn that an aggressive US push for superintelligent AI monopoly could provoke retaliation from China, suggesting instead a defensive strategy focused on deterrence rather than racing toward AGI dominance.
Skynet Chance (-0.15%): Advocacy by prominent tech leaders against racing toward AGI, and for defensive strategies over rapid development, significantly reduces the likelihood of uncontrolled deployment of superintelligent systems. Their concept of "Mutual Assured AI Malfunction" highlights awareness of catastrophic risks from misaligned superintelligence.
Skynet Date (+4 days): The paper's emphasis on deterrence over acceleration and its warning against government-backed AGI races would likely substantially slow the pace of superintelligence development if adopted. By explicitly rejecting the "Manhattan Project" approach, these influential leaders are advocating for more measured, cautious development timelines.
AGI Progress (-0.10%): The paper represents a significant shift from aggressive AGI pursuit to defensive strategies, particularly notable coming from Schmidt, who previously advocated faster AI development. This stance by influential tech leaders could substantially slow coordinated efforts toward superintelligence development.
AGI Date (+3 days): The proposed shift from racing toward superintelligence to focusing on defensive capabilities and international stability would likely extend AGI timelines considerably. The rejection of a Manhattan Project approach by these influential figures could discourage government-sponsored acceleration of AGI development.
Anthropic Removes Biden-Era AI Safety Commitments After Trump Policy Shift
Anthropic has quietly removed several voluntary Biden administration AI safety commitments from its website, including pledges to share information on AI risk management and conduct research on bias. The removal coincides with the Trump administration's different approach to AI governance, including the repeal of Biden's AI Executive Order in favor of policies promoting AI development with less emphasis on discrimination concerns.
Skynet Chance (+0.06%): The removal of voluntary safety commitments and policy shifts away from bias monitoring and risk management could weaken AI oversight mechanisms. This institutional retreat from safety commitments increases the possibility of less regulated AI development with fewer guardrails on potentially harmful capabilities.
Skynet Date (-2 days): The Trump administration's prioritization of rapid AI development "free from ideological bias" over safety measures and discrimination concerns may accelerate deployment of advanced AI systems with less thorough safety testing, potentially shortening timelines to high-risk scenarios.
AGI Progress (+0.04%): While not directly advancing technical capabilities, the policy shift toward less regulatory oversight and more emphasis on "economic competitiveness" creates an environment that likely prioritizes capability advancement over safety research. This regulatory climate may encourage more aggressive capability scaling approaches.
AGI Date (-3 days): The new policy direction explicitly prioritizing AI development speed over safety concerns could accelerate the timeline to AGI by removing potential regulatory hurdles and encouraging companies to race ahead with capabilities research without corresponding safety investments.
Trump Administration Cuts Threaten Critical US AI Research Funding
The Trump administration has fired key AI experts at the National Science Foundation, jeopardizing important government funding for artificial intelligence research. The layoffs have forced the postponement or cancellation of review panels, stalling funding for AI projects; critics, including AI pioneer Geoffrey Hinton, have condemned the cuts to scientific grant-making.
Skynet Chance (+0.06%): The dismantling of government oversight and funding structures for AI research reduces coordinated public-interest supervision of AI development, potentially allowing commercial interests to dominate the field with less emphasis on safety and alignment concerns.
Skynet Date (+4 days): Reduced government funding for diverse, safety-oriented research may slow AI development overall, including work on robust safety measures and alignment techniques, potentially extending the timeline before advanced systems, and the high-risk scenarios they could enable, arrive.
AGI Progress (-0.08%): The disruption to government funding pipelines will likely slow certain academic and public-sector AI research initiatives, potentially delaying breakthrough work that could contribute to AGI development. The cuts represent a modest setback to overall research momentum.
AGI Date (+2 days): The reduction in government-sponsored AI research funding will likely slow some foundational research efforts, particularly in academia, potentially extending timelines for reaching key AGI-enabling breakthroughs that rely on diverse research approaches.
California Senator Introduces New AI Safety Bill with Whistleblower Protections
California State Senator Scott Wiener has introduced SB 53, a new AI bill that would protect employees at leading AI labs who speak out about potential critical risks to society. The bill also proposes creating CalCompute, a public cloud computing cluster to support AI research, following Governor Newsom's veto of Wiener's more controversial SB 1047 bill last year.
Skynet Chance (-0.10%): The bill's whistleblower protections could increase transparency and safety oversight at frontier AI companies, potentially reducing the chance of dangerous AI systems being developed in secret. Creating mechanisms for employees to report risks without retaliation establishes an important safety valve for dangerous AI development.
Skynet Date (+2 days): The bill's regulatory framework would likely slow the pace of high-risk AI system deployment by requiring greater internal accountability and preventing companies from silencing safety concerns. However, the limited scope of the legislation and uncertain political climate mean the deceleration effect is modest.
AGI Progress (+0.03%): The proposed CalCompute cluster would increase compute resources available to researchers and startups, potentially accelerating certain aspects of AI research. However, the impact is modest because the bill focuses more on safety and oversight than on directly advancing capabilities.
AGI Date (-1 day): While CalCompute would expand compute access, slightly accelerating some AI research paths, the increased regulatory oversight and whistleblower protections may create modest delays in frontier model development. The net effect is a very slight acceleration toward AGI.
US AI Safety Institute Faces Potential Layoffs and Uncertain Future
Reports indicate the National Institute of Standards and Technology (NIST) may terminate up to 500 employees, significantly impacting the U.S. Artificial Intelligence Safety Institute (AISI). The institute was created under Biden's executive order on AI safety, which Trump recently repealed, and was already facing uncertainty after its director departed earlier in February.
Skynet Chance (+0.10%): The gutting of a federal AI safety institute substantially increases Skynet risk by removing critical government oversight and expertise dedicated to researching and mitigating catastrophic AI risks, at precisely the time when advanced AI development is accelerating.
Skynet Date (-3 days): The elimination of safety guardrails and regulatory mechanisms significantly accelerates the timeline for potential AI risk scenarios by creating a more permissive environment for rapid, potentially unsafe AI development with minimal government supervision.
AGI Progress (+0.04%): Reduced government oversight will likely allow AI developers to pursue more aggressive capability advancements with fewer regulatory hurdles or safety requirements, potentially accelerating technical progress toward AGI.
AGI Date (-3 days): The dismantling of safety-focused institutions will likely encourage AI labs to pursue riskier, faster development trajectories without regulatory barriers, potentially bringing AGI timelines significantly closer.
OpenAI Shifts Policy Toward Greater Intellectual Freedom and Neutrality in ChatGPT
OpenAI has updated its Model Spec policy to embrace intellectual freedom, enabling ChatGPT to answer more questions, offer multiple perspectives on controversial topics, and refuse fewer requests. The company's new guiding principle emphasizes truth-seeking and neutrality, though some speculate the changes may be aimed at appeasing the incoming Trump administration or may reflect a broader industry shift away from content moderation.
Skynet Chance (+0.06%): Reducing safeguards and guardrails around controversial content increases the risk of AI systems being misused or manipulated toward harmful ends. The shift toward presenting all perspectives without editorial judgment weakens alignment mechanisms that previously constrained AI behavior within safer boundaries.
Skynet Date (-2 days): The deliberate relaxation of safety constraints and the removal of warning systems accelerate the timeline toward potential AI risks by prioritizing capability deployment over safety considerations. This industry-wide shift away from content moderation reflects market pressure toward fewer restrictions that could hasten unsafe deployment.
AGI Progress (+0.04%): While not directly advancing technical capabilities, the removal of guardrails and constraints enables broader deployment and usage of AI systems in previously restricted domains. The policy change expands the operational scope of ChatGPT, effectively increasing its functional capabilities across more contexts.
AGI Date (-1 day): This industry-wide movement away from content moderation and toward fewer restrictions accelerates deployment and mainstream acceptance of increasingly powerful AI systems. The reduced emphasis on safety guardrails reflects a prioritization of capability deployment over cautious, measured advancement.
EU Abandons AI Liability Directive, Denies Trump Pressure
The European Union has scrapped its proposed AI Liability Directive, which would have made it easier for consumers to sue over AI-related harms. EU digital chief Henna Virkkunen denied this decision was due to pressure from the Trump administration, instead citing a focus on boosting competitiveness by reducing bureaucracy and limiting reporting requirements.
Skynet Chance (+0.08%): Abandoning the AI Liability Directive significantly reduces accountability mechanisms for AI systems and weakens consumer protections against AI harms. This regulatory retreat signals a shift toward prioritizing AI development speed over safety guardrails, potentially increasing risks of harmful AI deployment without adequate oversight.
Skynet Date (-3 days): The EU's pivot away from strong AI liability rules represents a major shift toward regulatory permissiveness that will likely accelerate AI development and deployment. By reducing potential legal consequences for harmful AI systems, companies face fewer incentives to implement robust safety measures.
AGI Progress (+0.04%): The reduction in liability concerns and reporting requirements will likely accelerate AI development by reducing legal barriers and compliance costs. Companies will have greater freedom to deploy advanced AI systems without extensive safety testing or concerns about legal liability for unintended consequences.
AGI Date (-2 days): The EU's policy shift toward deregulation and reduced reporting requirements will likely accelerate AI development timelines by removing significant regulatory barriers. This global trend toward regulatory permissiveness could compress AGI timelines as companies face fewer external constraints on deployment speed.
UK Rebrands AI Safety Institute to Focus on Security, Partners with Anthropic
The UK government has renamed its AI Safety Institute the AI Security Institute, shifting its focus from existential risks to cybersecurity and national security concerns. Alongside this pivot, the government announced a new partnership with Anthropic to explore using its AI assistant Claude in public services and to contribute to security risk evaluation.
Skynet Chance (+0.06%): The UK government's pivot away from existential risk concerns toward economic growth and security applications signals a reduced institutional focus on AI control problems. This deprioritization of safety in favor of deployment could increase risks of unintended consequences as AI systems become more integrated into critical infrastructure.
Skynet Date (-2 days): The accelerated government adoption of AI and reduced emphasis on safety barriers could hasten deployment of increasingly capable AI systems without adequate safeguards. This policy shift toward rapid implementation over cautious development potentially shortens timelines for high-risk scenarios.
AGI Progress (+0.04%): The partnership with Anthropic and the greater focus on integrating AI into government services represent incremental progress toward more capable AI systems. While not a direct technical breakthrough, this institutionalization and government backing accelerate the development pathway toward more advanced AI capabilities.
AGI Date (-3 days): The UK government's explicit prioritization of AI development over safety concerns, combined with increased public-private partnerships, creates a more favorable regulatory environment for rapid AI advancement. This policy shift removes potential speed bumps that might have slowed AGI development timelines.
Anthropic CEO Criticizes Lack of Urgency in AI Governance at Paris Summit
Anthropic CEO Dario Amodei criticized the AI Action Summit in Paris as a "missed opportunity," calling for greater urgency in AI governance given the rapidly advancing technology. Amodei warned that AI systems will soon have capabilities comparable to "an entirely new state populated by highly intelligent people" and urged governments to focus on measuring AI use, ensuring economic benefits are widely shared, and increasing transparency around AI safety and security assessment.
Skynet Chance (+0.06%): Amodei's explicit warning that advanced AI presents "significant global security dangers", and his comparison of AI systems to "an entirely new state populated by highly intelligent people", signal that frontier developers themselves see rising control risks, while his call for action has not yet resulted in concrete safeguards.
Skynet Date (-2 days): The failure of international governance bodies to agree on meaningful AI safety measures, as highlighted by Amodei calling the summit a "missed opportunity," suggests defensive measures are falling behind technological advancement, potentially accelerating the timeline to control problems.
AGI Progress (+0.03%): While focused on policy rather than technical breakthroughs, Amodei's characterization of AI systems becoming like "an entirely new state populated by highly intelligent people" suggests frontier labs like Anthropic are making significant progress toward human-level capabilities.
AGI Date (-2 days): Amodei's urgent call for faster and clearer action, coupled with his statement about "the pace at which the technology is progressing," suggests AI capabilities are advancing more rapidly than previously expected, potentially shortening the timeline to AGI.