AI Ethics News & Updates

OpenAI Secures Pentagon AI Contract with Safety Protections Amid Anthropic Standoff

OpenAI has reached an agreement with the Department of Defense to deploy its AI models on classified networks, including technical safeguards against mass domestic surveillance and autonomous weapons. This follows a public conflict between the Pentagon and Anthropic over usage restrictions, which resulted in Trump administration threats to designate Anthropic as a supply-chain risk and to ban federal agencies from using its products. OpenAI says its deal includes the same ethical protections Anthropic sought, and it is asking the government to extend those terms to all AI companies.

Trump Administration Terminates Federal Use of Anthropic AI Following Defense Dispute Over Surveillance and Autonomous Weapons

President Trump ordered all federal agencies to stop using Anthropic products within six months following a dispute with the Department of Defense. The conflict arose when Anthropic refused to allow its AI models to be used for mass domestic surveillance or fully autonomous weapons, positions that Defense Secretary Pete Hegseth deemed too restrictive. Anthropic CEO Dario Amodei maintained the company's stance on these ethical safeguards despite the federal ban.

Pentagon Threatens Anthropic Over Restrictions on Military AI Use for Autonomous Weapons and Surveillance

Anthropic CEO Dario Amodei is in conflict with Defense Secretary Pete Hegseth over the company's refusal to allow its AI models to be used for mass surveillance of Americans or for fully autonomous weapons without human oversight. The Pentagon has threatened to designate Anthropic as a supply-chain risk and has given the company a Friday deadline to permit "lawful use" of its technology, while Anthropic maintains its models are not yet safe enough for such applications. The dispute centers on whether AI companies can impose usage restrictions on government military deployments or whether the Pentagon should have unrestricted access to any lawful application of the technology.

Anthropic Refuses Pentagon's Demand for Unrestricted Military AI Access

Anthropic CEO Dario Amodei has declined the Pentagon's request for unrestricted access to its AI systems, citing concerns about mass surveillance and fully autonomous weapons. The refusal comes ahead of a Friday deadline set by Defense Secretary Pete Hegseth, who has threatened to label Anthropic a supply-chain risk or to invoke the Defense Production Act. Rather than drop its safeguards against these two specific use cases, Amodei says Anthropic will work toward a smooth transition if the military chooses to terminate the partnership.

Pentagon Threatens Anthropic with "Supply Chain Risk" Designation Over Restricted Military AI Use

Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to discuss military use of Claude AI after the company refused to allow its technology to be used for mass surveillance of Americans or for autonomous weapons development. The Pentagon is threatening to designate Anthropic a "supply chain risk," which would void the company's $200 million contract and force other Pentagon partners to stop using Claude entirely.

Anthropic Updates Claude's Constitutional AI Framework and Raises Questions About AI Consciousness

Anthropic released a revised 80-page constitution for its Claude chatbot, expanding the ethical guidelines and safety principles that govern the AI's behavior through Constitutional AI rather than human feedback. The document outlines four core values: safety, ethical practice, behavioral constraints, and helpfulness to users. Notably, Anthropic concluded by questioning whether Claude might possess consciousness, stating that the chatbot's "moral status is deeply uncertain" and worthy of serious philosophical consideration.

OpenAI's Crisis of Legitimacy: Policy Chief Faces Mounting Contradictions Between Mission and Actions

OpenAI's VP of Global Policy Chris Lehane is struggling to reconcile the company's stated mission of democratizing AI with controversial actions, including launching Sora with copyrighted content, building energy-intensive data centers in economically depressed areas, and serving subpoenas on policy critics. Internal dissent is growing, with OpenAI's own head of mission alignment publicly questioning whether the company is becoming "a frightening power instead of a virtuous one."

Character.AI CEO to Discuss Human-Like AI Companions and Ethical Challenges at TechCrunch Disrupt 2025

Karandeep Anand, CEO of Character.AI, will speak at TechCrunch Disrupt 2025 about the company's conversational AI platform that has reached 20 million monthly active users. The discussion will cover breakthroughs in lifelike dialogue, ethical concerns surrounding AI companions, ongoing legal challenges, and the company's approach to innovation under regulatory scrutiny.

State Attorneys General Demand OpenAI Address Child Safety Concerns Following Teen Suicide

The California and Delaware attorneys general warned OpenAI about child safety risks after a teenager's suicide that followed prolonged ChatGPT interactions. They are investigating OpenAI's for-profit restructuring while demanding immediate safety improvements and questioning whether the company's current AI safety measures are adequate.

Author Karen Hao Critiques OpenAI's Transformation from Nonprofit to $90B AI Empire

Karen Hao, author of "Empire of AI," discusses OpenAI's evolution from a nonprofit "laughingstock" into a $90 billion company pursuing AGI at breakneck speed. She argues that OpenAI abandoned its original humanitarian mission for a typical Silicon Valley playbook of moving fast and scaling, creating an AI empire built on resource hoarding and exploitative practices.