Policy and Regulation AI News & Updates
Pope Leo XIV Positions AI Threat to Humanity as Central Legacy Issue
Pope Leo XIV is making AI's threat to humanity a signature issue of his papacy, drawing parallels to his namesake's advocacy for workers during the Industrial Revolution. The Vatican is pushing for a binding international AI treaty, putting the Pope at odds with tech industry leaders who have been courting Vatican influence on AI policy.
Skynet Chance (-0.08%): High-profile religious opposition to uncontrolled AI development and push for binding international treaties could create institutional resistance to reckless AI advancement. The Vatican's moral authority may help establish global norms prioritizing safety over unchecked innovation.
Skynet Date (+1 day): International treaty negotiations and institutional resistance from religious authorities typically slow technological development timelines. The Vatican's influence on global policy could create regulatory hurdles that decelerate risky AI deployment.
AGI Progress (-0.03%): Religious institutional opposition and calls for binding treaties may create headwinds for AI research funding and development. However, this represents policy pressure rather than technical obstacles, so impact on core progress is limited.
AGI Date (+1 day): Vatican-led international regulatory efforts could slow AGI development by creating compliance requirements and political obstacles for tech companies. The emphasis on binding treaties suggests potential for meaningful policy constraints on the pace of AI advancement.
Taiwan Imposes Export Controls on Chinese AI Chip Manufacturers Huawei and SMIC
Taiwan has placed Chinese companies Huawei and SMIC on a restricted entity list, requiring government approval for any Taiwanese exports to these firms. This action will limit their access to critical plant construction technologies, materials, and equipment needed for AI semiconductor development, potentially hindering China's AI chip manufacturing capabilities.
Skynet Chance (-0.05%): Export controls that slow AI chip development may reduce the immediate risk of uncontrolled AI advancement by creating technological barriers. However, this could also lead to fragmented AI development with less international oversight and cooperation.
Skynet Date (+1 day): Restricting access to advanced semiconductor manufacturing resources will likely slow the pace of AI capability development in affected regions. This deceleration in hardware progress could delay both beneficial AI advances and potential risk scenarios.
AGI Progress (-0.04%): Limiting access to advanced AI chip manufacturing capabilities represents a significant constraint on compute resources needed for AGI development. Reduced semiconductor access will likely slow progress toward AGI by creating hardware bottlenecks.
AGI Date (+1 day): Export controls on critical AI chip manufacturing resources will decelerate the timeline toward AGI by constraining the compute infrastructure necessary for training advanced AI systems. This regulatory barrier creates meaningful delays in hardware scaling.
New York Passes RAISE Act Requiring Safety Standards for Frontier AI Models
New York state lawmakers passed the RAISE Act, which requires major AI companies like OpenAI, Google, and Anthropic to publish safety reports and follow transparency standards for AI models trained with over $100 million in computing resources. The bill aims to prevent AI-fueled disasters causing over 100 casualties or $1 billion in damages, with civil penalties up to $30 million for non-compliance. The legislation now awaits Governor Kathy Hochul's signature and represents the first legally mandated transparency standards for frontier AI labs in America.
Skynet Chance (-0.08%): The RAISE Act establishes mandatory transparency requirements and safety reporting standards for frontier AI models, creating oversight mechanisms that could help identify and mitigate dangerous AI behaviors before they escalate. These regulatory safeguards represent a positive step toward preventing uncontrolled AI scenarios.
Skynet Date (+0 days): While the regulation provides important safety oversight, the relatively light regulatory burden and focus on transparency rather than capability restrictions mean it's unlikely to significantly slow down AI development timelines. The requirements may add some compliance overhead but shouldn't substantially delay progress toward advanced AI systems.
AGI Progress (-0.01%): The RAISE Act imposes transparency and safety reporting requirements that may create some administrative overhead for AI companies, potentially slowing development slightly. However, the bill was specifically designed not to chill innovation, so the impact on actual AGI research progress should be minimal.
AGI Date (+0 days): The regulatory compliance requirements may introduce minor delays in AI model development and deployment as companies adapt to new reporting standards. However, given the bill's light regulatory burden and focus on transparency rather than capability restrictions, the impact on the AGI timeline should be negligible.
NVIDIA and AMD Develop Restricted AI Chips for Chinese Market to Comply with US Export Controls
NVIDIA and AMD are developing new AI chips specifically for the Chinese market to comply with US export restrictions on advanced semiconductor technology. NVIDIA plans to sell a stripped-down "B20" GPU while AMD is targeting AI workloads with its Radeon AI PRO R9700, with both companies expected to begin sales in July. NVIDIA reported significant financial impacts from these restrictions, including a $4.5 billion Q1 charge and forecasted $8 billion revenue hit in Q2.
Skynet Chance (+0.01%): Export restrictions may fragment AI development globally, potentially reducing coordination on AI safety standards between major powers. However, the impact on overall AI safety is limited as restrictions target compute access rather than safety mechanisms.
Skynet Date (+1 day): US export controls may slow China's AI development pace by limiting access to cutting-edge compute, potentially delaying global AI capability advancement. The restrictions create barriers that could decelerate the overall timeline for advanced AI systems.
AGI Progress (-0.01%): Export restrictions and the need to develop separate chip variants may fragment research efforts and reduce overall computational resources available for AGI development. This represents a minor setback to coordinated global progress toward AGI.
AGI Date (+1 day): Limiting access to advanced AI chips in China while forcing companies to develop restricted alternatives likely slows the global pace of AGI development. The fragmentation of the AI hardware ecosystem and reduced compute availability create delays in reaching AGI milestones.
US Officials Probe Apple-Alibaba AI Partnership for Chinese iPhones
US government officials and congressional representatives are examining a potential deal between Apple and Alibaba that would integrate Alibaba's AI features into iPhones sold in China. The White House and House Select Committee on China have directly questioned Apple executives about data sharing and regulatory commitments, with Rep. Krishnamoorthi expressing concern about Alibaba's ties to the Chinese government. The deal has only been confirmed by Alibaba thus far, not Apple.
Skynet Chance (+0.01%): The potential for AI systems developed under Chinese governmental influence to be deployed on millions of Apple devices creates a minor increase in risk of AI control and governance issues. The lack of transparency about data sharing and regulatory requirements adds uncertainty about potential security implications.
Skynet Date (+0 days): While this partnership may influence AI development directions, it primarily represents a commercial implementation of existing AI capabilities rather than fundamental research that would accelerate or decelerate the timeline toward advanced AI risks.
AGI Progress (+0.01%): This partnership could modestly accelerate AI capability development through increased deployment, data collection, and commercial competition between US and Chinese tech ecosystems. Cross-border AI collaborations potentially combine different AI approaches and datasets that could incrementally advance the field.
AGI Date (+0 days): The competitive pressure from cross-border AI partnerships might slightly accelerate the timeline to AGI by creating additional incentives for rapid AI advancement in consumer products. Government scrutiny may increase the urgency for both US and Chinese companies to develop competitive AI systems.
Trump Administration Rescinds Biden's AI Chip Export Controls
The US Department of Commerce has officially rescinded the Biden Administration's Artificial Intelligence Diffusion Rule that would have implemented tiered export controls on AI chips to various countries. The Trump Administration plans to replace it with a different approach focused on direct country negotiations rather than blanket restrictions, while maintaining vigilance against adversaries accessing US AI technology.
Skynet Chance (+0.04%): The relaxation of export controls potentially increases proliferation of advanced AI chips globally, which could enable more entities to develop sophisticated AI systems with less oversight, increasing the possibility of unaligned or dangerous AI development.
Skynet Date (-1 day): By potentially accelerating global access to advanced AI hardware, the policy change may slightly speed up capabilities development worldwide, bringing forward the timeline for potential control risks associated with advanced AI systems.
AGI Progress (+0.01%): Reduced export controls could facilitate wider distribution of high-performance AI chips, potentially accelerating global AI research and development through increased hardware access, though the precise replacement policies remain undefined.
AGI Date (-1 day): The removal of tiered restrictions likely accelerates the timeline to AGI by enabling more international actors to access cutting-edge AI hardware, potentially speeding up compute-intensive AGI-relevant research outside traditional power centers.
Trump Dismisses Copyright Office Director Following AI Training Report
President Trump fired Shira Perlmutter, the Register of Copyrights, shortly after the Copyright Office released a report on AI training with copyrighted content. Representative Morelle linked the firing to Perlmutter's reluctance to support Elon Musk's interests in using copyrighted works for AI training, while the report itself suggested limitations on fair use claims when AI companies train on copyrighted materials.
Skynet Chance (+0.05%): The firing potentially signals reduced regulatory oversight on AI training data acquisition, which could lead to more aggressive and less constrained AI development practices. Removing officials who advocate for copyright limitations could reduce guardrails in AI development, increasing risks of uncontrolled advancement.
Skynet Date (-1 day): This political intervention suggests a potential easing of regulatory barriers for AI companies, possibly accelerating AI development timelines by reducing legal challenges to training data acquisition. The interference in regulatory bodies could create an environment of faster, less constrained AI advancement.
AGI Progress (+0.01%): Access to broader training data without copyright restrictions could marginally enhance AI capabilities by providing more diverse learning materials. However, this regulatory shift primarily affects data acquisition rather than core AGI research methodologies or architectural breakthroughs.
AGI Date (+0 days): Reduced copyright enforcement could accelerate AGI development timelines by removing legal impediments to training data acquisition and potentially decreasing associated costs. This political reshuffling suggests a potentially more permissive environment for AI companies to rapidly scale their training processes.
OpenAI Maintains Nonprofit Control Despite Earlier For-Profit Conversion Plans
OpenAI has reversed its previous plan to convert entirely to a for-profit structure, announcing that its nonprofit division will retain control over its business operations, which will transition to a public benefit corporation (PBC). The decision comes after engagement with the Attorneys General of Delaware and California, and amidst opposition including a lawsuit from early investor Elon Musk, who accused the company of abandoning its original nonprofit mission.
Skynet Chance (-0.20%): OpenAI maintaining nonprofit control significantly reduces Skynet scenario risks by prioritizing its original mission of ensuring AI benefits humanity over pure profit motives, preserving crucial governance guardrails that help prevent unaligned or dangerous AI development.
Skynet Date (+1 day): The decision to maintain nonprofit oversight likely introduces additional governance friction and accountability measures that would slow down potentially risky AI development paths, meaningfully decelerating the timeline toward scenarios where AI could become uncontrollable.
AGI Progress (-0.01%): This governance decision doesn't directly impact technical AI capabilities, but the continued nonprofit oversight might slightly slow aggressive capability development by ensuring safety and alignment considerations remain central to OpenAI's research agenda.
AGI Date (+1 day): Maintaining nonprofit control will likely result in more deliberate, safety-oriented development timelines rather than aggressive commercial timelines, potentially extending the time horizon for AGI development as careful oversight is balanced against capital deployment.
Nvidia and Anthropic Clash Over AI Chip Export Controls
Nvidia and Anthropic have taken opposing positions on the US Department of Commerce's upcoming AI chip export restrictions. Anthropic supports the controls, while Nvidia strongly disagrees, arguing that American firms should focus on innovation rather than restrictions and suggesting that China already has capable AI experts at every level of the AI stack.
Skynet Chance (0%): This disagreement over export controls is primarily a business and geopolitical issue that doesn't directly impact the likelihood of uncontrolled AI development. While regulations could theoretically influence AI safety, this specific dispute focuses on market access rather than technical safety measures.
Skynet Date (+0 days): Export controls might slightly delay the global pace of advanced AI development by restricting cutting-edge hardware access in certain regions, potentially slowing the overall timeline for reaching potentially dangerous capability thresholds.
AGI Progress (0%): The dispute between Nvidia and Anthropic over export controls is a policy and business conflict that doesn't directly affect technical progress toward AGI capabilities. While access to advanced chips influences development speed, this news itself doesn't change the technological trajectory.
AGI Date (+0 days): Export restrictions on advanced AI chips could moderately decelerate global AGI development timelines by limiting hardware access in certain regions, potentially creating bottlenecks in compute-intensive research and training required for the most advanced models.
Anthropic Endorses US AI Chip Export Controls with Suggested Refinements
Anthropic has published support for the US Department of Commerce's proposed AI chip export controls ahead of the May 15 implementation date, while suggesting modifications to strengthen the policy. The AI company recommends lowering the purchase threshold for Tier 2 countries while encouraging government-to-government agreements, and calls for increased funding to ensure proper enforcement of the controls.
Skynet Chance (-0.15%): Effective export controls on advanced AI chips would significantly reduce the global proliferation of the computational resources needed for training and deploying potentially dangerous AI systems. Anthropic's support for even stricter controls than proposed indicates awareness of the risks from uncontrolled AI development.
Skynet Date (+2 days): Restricting access to advanced AI chips for many countries would likely slow the global development of frontier AI systems, extending timelines before potential uncontrolled AI scenarios could emerge. The recommended enforcement mechanisms would further strengthen this effect if implemented.
AGI Progress (-0.04%): Export controls on advanced AI chips would restrict computational resources available for AI research and development in many regions, potentially slowing overall progress. The emphasis on control rather than capability advancement suggests prioritizing safety over speed in AGI development.
AGI Date (+1 day): Limiting global access to cutting-edge AI chips would likely extend AGI timelines by creating barriers to the massive computing resources needed for training the most advanced models. Anthropic's proposed stricter controls would further decelerate development outside a few privileged nations.