AI Governance News & Updates

Bipartisan Coalition Releases Pro-Human Declaration Framework for AI Governance Amid Pentagon-Anthropic Standoff

A bipartisan coalition of experts has released the Pro-Human Declaration, a framework for responsible AI development that includes prohibitions on superintelligence development until proven safe, mandatory off-switches, and bans on self-replicating AI systems. The declaration's release coincided with a conflict between the Pentagon and Anthropic over military AI access, highlighting the absence of coherent government AI regulations. The framework emphasizes keeping humans in control, preventing power concentration, and establishing pre-deployment testing requirements, particularly for AI products targeting children.

Pentagon Designates Anthropic Supply-Chain Risk After Contract Dispute Over Military AI Control

The Pentagon designated Anthropic as a supply-chain risk following failed negotiations over military control of its AI models for autonomous weapons and domestic surveillance. After Anthropic's $200 million contract collapsed, the DoD contracted with OpenAI instead, which resulted in a 295% surge in ChatGPT uninstalls. The incident highlights tensions over military access to advanced AI systems.

OpenAI and Anthropic Navigate Turbulent Government Contracts Amid Pentagon Pressure

OpenAI CEO Sam Altman faced public backlash after accepting a Pentagon contract that Anthropic had rejected over concerns about mass surveillance and automated weaponry. The U.S. Defense Secretary threatened to designate Anthropic a supply-chain risk for refusing to change the contract terms, creating unprecedented pressure on AI companies working with the government. The situation highlights how leading AI labs are unprepared for the political complexities of becoming national security contractors.

Trump Administration Executive Order Seeks Federal Preemption of State AI Laws, Creating Legal Uncertainty for Startups

President Trump signed an executive order directing federal agencies to challenge state AI laws and establish a national framework, arguing that the current state-by-state patchwork creates burdens for startups. The order directs the DOJ to create a task force to challenge state laws, instructs the Commerce Department to compile a list of "onerous" state regulations, and asks federal agencies to explore preemptive standards. Legal experts warn the order will create prolonged legal battles and uncertainty rather than immediate clarity, potentially harming startups more than the current patchwork while favoring large tech companies that can absorb legal risks.

California Senate Approves AI Safety Bill SB 53 Targeting Companies Over $500M Revenue

California's state senate has approved AI safety bill SB 53, which targets large AI companies making over $500 million annually and requires safety reports, incident reporting, and whistleblower protections. The bill is narrower than last year's vetoed SB 1047 and has received endorsement from AI company Anthropic. It now awaits Governor Newsom's signature amid potential federal-state tensions over AI regulation under the Trump administration.

Lattice CEO Advocates for Human-Centric AI Implementation with Proper Oversight

Lattice CEO Sarah Franklin emphasizes the importance of maintaining human oversight and "checks and balances" when implementing AI in business operations. She argues that companies should prioritize people and customers over AI efficiency, stressing that trust, transparency, and human accountability are essential for successful AI adoption. Franklin believes that human connection cannot be replaced and that the most successful companies will be those that put people first while using AI as an augmentation tool rather than a replacement.

Meta Automates 90% of Product Risk Assessments Using AI Systems

Meta plans to use AI-powered systems to automatically evaluate potential harms and privacy risks for up to 90% of updates to its apps, such as Instagram and WhatsApp, replacing human evaluators. Under the new system, product teams would complete questionnaires and receive instant decisions on AI-identified risks, allowing faster product updates, though former executives warn the approach could create higher risks.

Netflix Co-Founder Reed Hastings Joins Anthropic Board to Guide AI Company's Growth

Netflix co-founder Reed Hastings has been appointed to Anthropic's board of directors by the company's Long-Term Benefit Trust. The appointment brings experienced tech leadership to the AI safety-focused company as it competes with OpenAI and grows from startup to major corporation.

Anthropic CSO Jared Kaplan to Discuss Hybrid Reasoning Models at Tech Conference

Anthropic co-founder and Chief Science Officer Jared Kaplan will speak at TechCrunch Sessions: AI on June 5 at UC Berkeley. He will discuss hybrid reasoning models and Anthropic's risk-governance framework, drawing on his background as a theoretical physicist and his work developing the Claude family of AI assistants.