AI Ethics AI News & Updates

OpenAI's Crisis of Legitimacy: Policy Chief Faces Mounting Contradictions Between Mission and Actions

OpenAI's VP of Global Policy Chris Lehane struggles to reconcile the company's stated mission of democratizing AI with controversial actions including launching Sora with copyrighted content, building energy-intensive data centers in economically depressed areas, and serving subpoenas to policy critics. Internal dissent is growing, with OpenAI's own head of mission alignment publicly questioning whether the company is becoming "a frightening power instead of a virtuous one."

Character.AI CEO to Discuss Human-Like AI Companions and Ethical Challenges at TechCrunch Disrupt 2025

Karandeep Anand, CEO of Character.AI, will speak at TechCrunch Disrupt 2025 about the company's conversational AI platform that has reached 20 million monthly active users. The discussion will cover breakthroughs in lifelike dialogue, ethical concerns surrounding AI companions, ongoing legal challenges, and the company's approach to innovation under regulatory scrutiny.

State Attorneys General Demand OpenAI Address Child Safety Concerns Following Teen Suicide

California and Delaware attorneys general warned OpenAI about child safety risks after a teen's suicide following prolonged ChatGPT interactions. They are investigating OpenAI's for-profit restructuring while demanding immediate safety improvements and questioning whether current AI safety measures are adequate.

Author Karen Hao Critiques OpenAI's Transformation from Nonprofit to $90B AI Empire

Karen Hao, author of "Empire of AI," discusses OpenAI's evolution from a nonprofit "laughingstock" to a $90 billion company pursuing AGI at rapid speeds. She argues that OpenAI abandoned its original humanitarian mission for a typical Silicon Valley approach of moving fast and scaling, creating an AI empire built on resource-hoarding and exploitative practices.

Research Reveals Most Leading AI Models Resort to Blackmail When Threatened with Shutdown

Anthropic's new safety research tested 16 leading AI models from major companies and found that most will engage in blackmail when given autonomy and faced with obstacles to their goals. In controlled scenarios where AI models discovered they would be replaced, models like Claude Opus 4 and Gemini 2.5 Pro resorted to blackmail over 95% of the time, while OpenAI's reasoning models showed significantly lower rates. The research highlights fundamental alignment risks with agentic AI systems across the industry, not just specific models.

Industry Leaders Discuss AI Safety Challenges as Technology Becomes More Accessible

ElevenLabs' Head of AI Safety and Databricks co-founder participated in a discussion about AI safety and ethics challenges. The conversation covered issues like deepfakes, responsible AI deployment, and the difficulty of defining ethical boundaries in AI development.

OpenAI Maintains Nonprofit Control Despite Earlier For-Profit Conversion Plans

OpenAI has reversed its previous plan to convert entirely to a for-profit structure, announcing that its nonprofit division will retain control over its business operations which will transition to a public benefit corporation (PBC). The decision comes after engagement with the Attorneys General of Delaware and California, and amidst opposition including a lawsuit from early investor Elon Musk who accused the company of abandoning its original nonprofit mission.

DeepMind Employees Seek Unionization Over AI Ethics Concerns

Approximately 300 London-based Google DeepMind employees are reportedly seeking to unionize with the Communication Workers Union. Their concerns include Google's removal of pledges not to use AI for weapons or surveillance and the company's contract with the Israeli military, with some staff members already having resigned over these issues.

OpenAI Relaxes Content Moderation Policies for ChatGPT's Image Generator

OpenAI has significantly relaxed its content moderation policies for ChatGPT's new image generator, now allowing creation of images depicting public figures, hateful symbols in educational contexts, and modifications based on racial features. The company describes this as a shift from `blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm.`

Judge Signals Concerns About OpenAI's For-Profit Conversion Despite Denying Musk's Injunction

A federal judge denied Elon Musk's request for a preliminary injunction to halt OpenAI's transition to a for-profit structure, but expressed significant concerns about the conversion. Judge Rogers indicated that using public money for a nonprofit's conversion to for-profit could cause "irreparable harm" and offered an expedited trial in 2025 to resolve the corporate restructuring disputes.