AI Governance News & Updates
Bipartisan Coalition Releases Pro-Human Declaration Framework for AI Governance Amid Pentagon-Anthropic Standoff
A bipartisan coalition of experts has released the Pro-Human Declaration, a framework for responsible AI development that includes prohibitions on superintelligence development until proven safe, mandatory off-switches, and bans on self-replicating AI systems. The declaration's release coincided with a conflict between the Pentagon and Anthropic over military AI access, highlighting the absence of coherent government AI regulations. The framework emphasizes keeping humans in control, preventing power concentration, and establishing pre-deployment testing requirements, particularly for AI products targeting children.
Skynet Chance (-0.13%): The Pro-Human Declaration's provisions for mandatory off-switches, bans on self-replicating and autonomously self-improving AI systems, and prohibition on superintelligence development until proven safe directly address key loss-of-control scenarios. These proposed guardrails, if implemented, would significantly reduce risks of uncontrollable AI systems.
Skynet Date (+1 days): The framework's prohibition on superintelligence development until there is scientific consensus on safety and democratic buy-in would create regulatory barriers that delay the development of potentially dangerous advanced AI systems. However, this remains a proposal without legal force, limiting its immediate decelerating effect.
AGI Progress (-0.01%): While the declaration proposes regulations that could slow certain AI development paths, it represents a policy framework rather than a technical setback. The focus is on responsible development rather than halting progress entirely, resulting in minimal impact on overall AGI trajectory.
AGI Date (+0 days): If enacted, the framework's requirements for pre-deployment testing, prohibition on superintelligence development, and mandatory safety consensus would introduce regulatory friction that slows the pace toward AGI. The bipartisan support suggests potential legislative action that could create meaningful delays in advanced AI development timelines.
Pentagon Designates Anthropic Supply-Chain Risk After Contract Dispute Over Military AI Control
The Pentagon designated Anthropic a supply-chain risk following failed negotiations over military control of its AI models for autonomous weapons and domestic surveillance. After Anthropic's $200 million contract collapsed, the DoD contracted with OpenAI instead, a move followed by a 295% surge in ChatGPT uninstalls. The incident highlights tensions over military access to advanced AI systems.
Skynet Chance (-0.08%): Anthropic's refusal to grant unrestricted military control over its AI models demonstrates corporate resistance to potentially dangerous applications like autonomous weapons, slightly reducing risks of uncontrolled military AI deployment. However, OpenAI's acceptance of similar terms partially offsets this positive signal.
Skynet Date (+0 days): The dispute and the subsequent supply-chain-risk designation create friction and delays in military AI integration, slightly decelerating the timeline for deploying advanced AI in autonomous weapons systems. Corporate pushback may slow adoption of less constrained military AI applications.
AGI Progress (0%): This is a contractual and governance dispute rather than a technical development, with no direct impact on underlying AI capabilities or progress toward general intelligence. The disagreement concerns deployment constraints, not fundamental research or capability advancement.
AGI Date (+0 days): Military contract disputes do not materially affect the pace of AGI research or development timelines, as this concerns application constraints rather than fundamental research velocity. Both companies continue their core AGI development work regardless of Pentagon relationships.
OpenAI and Anthropic Navigate Turbulent Government Contracts Amid Pentagon Pressure
OpenAI CEO Sam Altman faced public backlash after accepting a Pentagon contract that Anthropic rejected due to concerns over mass surveillance and automated weaponry. The U.S. Defense Secretary threatened to designate Anthropic as a supply chain risk for refusing to change contract terms, creating unprecedented pressure on AI companies working with government. The situation highlights how leading AI labs are unprepared for the political complexities of becoming national security contractors.
Skynet Chance (+0.04%): The normalization of AI companies providing capabilities for mass surveillance and automated weaponry to government agencies increases risks of misuse and loss of control over powerful AI systems. The political pressure forcing companies to choose between survival and ethical constraints weakens safety guardrails.
Skynet Date (-1 days): The government's aggressive push to integrate AI into defense infrastructure and willingness to destroy non-compliant companies accelerates the deployment of powerful AI systems in high-stakes military contexts. This bypasses careful safety considerations and rushes advanced AI into operational use.
AGI Progress (+0.01%): While the article focuses on governance rather than technical capabilities, the integration of frontier AI models into national security infrastructure indicates these systems are becoming sufficiently capable for critical applications. However, this is primarily about deployment of existing capabilities rather than fundamental research progress.
AGI Date (+0 days): Massive government investment and prioritization of AI development for national security purposes will likely increase funding and urgency around AI capabilities research. The competitive dynamics between companies seeking government contracts may accelerate capability development, though this is a secondary effect.
Trump Administration Executive Order Seeks Federal Preemption of State AI Laws, Creating Legal Uncertainty for Startups
President Trump signed an executive order directing federal agencies to challenge state AI laws and establish a national framework, arguing that the current state-by-state patchwork creates burdens for startups. The order directs the DOJ to create a task force to challenge state laws, instructs the Commerce Department to compile a list of "onerous" state regulations, and asks federal agencies to explore preemptive standards. Legal experts warn the order will create prolonged legal battles and uncertainty rather than immediate clarity, potentially harming startups more than the current patchwork while favoring large tech companies that can absorb legal risks.
Skynet Chance (+0.03%): Weakening regulatory oversight through federal preemption without establishing clear alternatives reduces accountability mechanisms for AI systems. The executive order appears designed to benefit large tech companies over consumer protection, potentially enabling less constrained AI development.
Skynet Date (+0 days): Removing state-level regulatory barriers accelerates AI deployment timelines by reducing compliance requirements, though legal uncertainty may create temporary slowdowns. The administration's pro-AI deregulation stance signals reduced friction for rapid AI advancement.
AGI Progress (+0.01%): Reduced regulatory friction may accelerate AI research and deployment by lowering compliance costs, though the relationship between regulation and technical progress is indirect. The focus on removing barriers suggests faster iteration cycles for AI development.
AGI Date (+0 days): Deregulation and federal preemption of restrictive state laws removes friction from AI development and deployment, particularly benefiting well-funded companies. The administration's explicit pro-AI innovation stance combined with reduced oversight accelerates the timeline toward more advanced AI systems.
California Senate Approves AI Safety Bill SB 53 Targeting Companies Over $500M Revenue
California's state senate has approved AI safety bill SB 53, which targets large AI companies making over $500 million annually and requires safety reports, incident reporting, and whistleblower protections. The bill is narrower than last year's vetoed SB 1047 and has received endorsement from AI company Anthropic. It now awaits Governor Newsom's signature amid potential federal-state tensions over AI regulation under the Trump administration.
Skynet Chance (-0.08%): The bill creates meaningful oversight mechanisms including mandatory safety reports, incident reporting, and whistleblower protections for large AI companies, which could help identify and mitigate risks before they escalate. These transparency requirements and accountability measures represent steps toward better control and monitoring of advanced AI systems.
Skynet Date (+0 days): While the bill provides safety oversight, it only applies to companies over $500M revenue and focuses on reporting rather than restricting capabilities development. The regulatory framework may slightly slow deployment timelines but doesn't significantly impede the underlying pace of AI advancement.
AGI Progress (-0.01%): The legislation primarily focuses on safety reporting and transparency rather than restricting core AI research and development capabilities. While it may create some administrative overhead for large companies, it doesn't fundamentally alter the technical trajectory toward AGI.
AGI Date (+0 days): The bill's compliance requirements may introduce modest delays in model deployment and development cycles for affected companies. However, the narrow scope targeting only large revenue-generating companies limits broader impact on the overall AGI development timeline.
California Senate Passes AI Safety Bill SB 53 Requiring Transparency from Major AI Labs
California's state senate approved AI safety bill SB 53, which requires large AI companies to disclose safety protocols and creates whistleblower protections for AI lab employees. The bill now awaits Governor Newsom's signature, though he previously vetoed a similar but more expansive AI safety bill last year.
Skynet Chance (-0.08%): The bill creates transparency requirements and whistleblower protections that could help identify and prevent dangerous AI developments before they become uncontrollable. These safety oversight mechanisms reduce the likelihood of unchecked AI advancement leading to loss of control scenarios.
Skynet Date (+0 days): Regulatory requirements for safety disclosures and compliance protocols may slightly slow down AI development timelines as companies allocate resources to meet transparency obligations. However, the impact is modest since the bill focuses on disclosure rather than restricting capabilities research.
AGI Progress (-0.01%): The bill primarily addresses safety transparency rather than advancing AI capabilities or research. While it doesn't directly hinder technical progress, compliance requirements may divert some resources from core AGI development.
AGI Date (+0 days): Safety compliance and reporting requirements will likely add administrative overhead that could marginally slow AGI development timelines. Companies will need to allocate engineering and legal resources to meet transparency obligations rather than focusing solely on capability advancement.
Lattice CEO Advocates for Human-Centric AI Implementation with Proper Oversight
Lattice CEO Sarah Franklin emphasizes the importance of maintaining human oversight and "checks and balances" when implementing AI in business operations. She argues that companies should prioritize people and customers over AI efficiency, stressing that trust, transparency, and human accountability are essential for successful AI adoption. Franklin believes that human connection cannot be replaced and that the most successful companies will be those that put people first while using AI as an augmentation tool rather than a replacement.
Skynet Chance (-0.08%): The emphasis on human oversight, accountability, and "checks and balances" for AI systems represents a positive approach to AI safety that could reduce risks of uncontrolled AI deployment. The focus on keeping humans "in service" rather than serving AI suggests better alignment practices.
Skynet Date (+0 days): The advocacy for human oversight and responsible AI implementation may slow down reckless AI deployment, potentially delaying scenarios where AI systems operate without adequate human control. However, the impact on overall timeline is modest as this represents one company's philosophy rather than industry-wide policy.
AGI Progress (-0.01%): While Lattice is developing AI agents for HR tasks, the focus is on narrow, human-supervised applications rather than advancing toward general intelligence. The emphasis on human oversight may actually constrain AI capability development in favor of safety.
AGI Date (+0 days): The conservative approach to AI development with heavy human oversight and narrow application focus may slow progress toward AGI by prioritizing safety and human control over pushing capability boundaries. However, this represents a single company's approach rather than a broad industry shift.
Meta Automates 90% of Product Risk Assessments Using AI Systems
Meta plans to use AI-powered systems to automatically evaluate potential harms and privacy risks for up to 90% of updates to its apps like Instagram and WhatsApp, replacing human evaluators. The new system would provide instant decisions on AI-identified risks through questionnaires, allowing faster product updates but potentially creating higher risks according to former executives.
Skynet Chance (+0.04%): Automating risk assessment reduces human oversight of AI systems' safety evaluations, potentially allowing harmful features to pass through automated filters that lack nuanced understanding of complex risks.
Skynet Date (+0 days): The acceleration of product deployment through automated reviews could lead to faster iteration and deployment of AI features, slightly accelerating the timeline for advanced AI systems.
AGI Progress (+0.01%): This represents practical application of AI for complex decision-making tasks like risk assessment, demonstrating incremental progress in AI's ability to handle sophisticated evaluations previously requiring human judgment.
AGI Date (+0 days): Meta's investment in automated decision-making systems reflects continued industry push toward AI automation, contributing marginally to the pace of AI development across practical applications.
Netflix Co-Founder Reed Hastings Joins Anthropic Board to Guide AI Company's Growth
Netflix co-founder Reed Hastings has been appointed to Anthropic's board of directors by the company's Long-Term Benefit Trust. The appointment brings experienced tech leadership to the AI safety-focused company as it competes with OpenAI and grows from startup to major corporation.
Skynet Chance (-0.03%): The appointment emphasizes Anthropic's governance structure focused on long-term benefit of humanity, potentially strengthening AI safety oversight. However, the impact is minimal as this is primarily a business leadership change rather than a technical safety breakthrough.
Skynet Date (+0 days): Adding experienced business leadership doesn't significantly alter the technical pace of AI development or safety research. This is a governance move that maintains the existing trajectory rather than accelerating or decelerating progress.
AGI Progress (+0.01%): Experienced tech leadership drawn from the boards of Netflix, Microsoft, and Meta could help Anthropic scale operations and compete more effectively with OpenAI. This may marginally accelerate Anthropic's AI development capabilities through better resource management and strategic guidance.
AGI Date (+0 days): Hastings' experience scaling major tech companies could help Anthropic grow faster and compete more effectively in the AI race. However, the impact on actual AGI timeline is minimal since this addresses business execution rather than core research capabilities.
Anthropic CSO Jared Kaplan to Discuss Hybrid Reasoning Models at Tech Conference
Anthropic co-founder and Chief Science Officer Jared Kaplan will speak at TechCrunch Sessions: AI on June 5 at UC Berkeley. He will discuss hybrid reasoning models and Anthropic's risk-governance framework, bringing insights from his background as a theoretical physicist and his work developing Claude AI assistants.
Skynet Chance (+0.01%): Anthropic's focus on risk-governance frameworks and having a dedicated responsible scaling officer indicates some institutional commitment to AI safety, but the continued rapid development of more capable models like Claude still increases overall risk potential slightly.
Skynet Date (+1 days): Anthropic's emphasis on responsible scaling and risk governance suggests a more measured approach to AI development, potentially slowing the timeline toward uncontrolled AI scenarios while still advancing capabilities.
AGI Progress (+0.02%): Anthropic's development of hybrid reasoning models that balance quick responses with deeper processing for complex problems represents a meaningful step toward more capable AI systems that can handle diverse cognitive tasks, a key component of AGI progress.
AGI Date (+0 days): The rapid advancement of Anthropic's Claude models, including hybrid reasoning capabilities and autonomous research features, suggests accelerated development toward AGI-like systems, particularly with their $61.5 billion valuation fueling further research.