AI Ethics: AI News & Updates

OpenAI Maintains Nonprofit Control Despite Earlier For-Profit Conversion Plans

OpenAI has reversed its earlier plan to convert entirely to a for-profit structure, announcing that its nonprofit division will retain control over its business operations, which will transition to a public benefit corporation (PBC). The decision follows engagement with the Attorneys General of Delaware and California and comes amid opposition that includes a lawsuit from early investor Elon Musk, who accused the company of abandoning its original nonprofit mission.

DeepMind Employees Seek Unionization Over AI Ethics Concerns

Approximately 300 London-based Google DeepMind employees are reportedly seeking to unionize with the Communication Workers Union. Their concerns include Google's removal of pledges not to use AI for weapons or surveillance and the company's contract with the Israeli military, with some staff members already having resigned over these issues.

OpenAI Relaxes Content Moderation Policies for ChatGPT's Image Generator

OpenAI has significantly relaxed its content moderation policies for ChatGPT's new image generator, now allowing creation of images depicting public figures, hateful symbols in educational contexts, and modifications based on racial features. The company describes this as a shift from "blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm."

Judge Signals Concerns About OpenAI's For-Profit Conversion Despite Denying Musk's Injunction

A federal judge denied Elon Musk's request for a preliminary injunction to halt OpenAI's transition to a for-profit structure, but expressed significant concerns about the conversion. Judge Yvonne Gonzalez Rogers indicated that using public money to fund a nonprofit's conversion to a for-profit could cause "irreparable harm" and offered an expedited trial in 2025 to resolve the corporate restructuring disputes.

OpenAI Reduces Warning Messages in ChatGPT, Shifts Content Policy

OpenAI has removed warning messages in ChatGPT that previously indicated when content might violate its terms of service. The company describes the change as reducing "gratuitous/unexplainable denials" while still maintaining restrictions on objectionable content, and some observers suggest it is a response to political pressure over alleged censorship of certain viewpoints.

Musk Offers Conditional Withdrawal of $97.4B OpenAI Nonprofit Bid

Elon Musk has offered to withdraw his $97.4 billion bid to acquire OpenAI's nonprofit if the board agrees to preserve its charitable mission and halt conversion to a for-profit structure. The offer comes amid Musk's ongoing lawsuit against OpenAI and CEO Sam Altman, with OpenAI's attorneys characterizing Musk's bid as an improper attempt to undermine a competitor.

US and UK Decline to Sign Paris AI Summit Declaration as 61 Countries Commit to Ethical AI Development

At the Artificial Intelligence Action Summit in Paris, 61 countries, including China and India, signed a declaration focused on ensuring AI is "open, inclusive, transparent, ethical, safe, secure and trustworthy," but the US and UK declined to sign. US Vice President JD Vance emphasized America's commitment to maintaining AI leadership and avoiding "ideological bias," while European Commission President Ursula von der Leyen defended the EU AI Act as providing unified safety rules while acknowledging the need to reduce red tape.

Google Removes Ban on AI for Weapons and Surveillance from Its Principles

Google has quietly removed a pledge not to build AI for weapons or surveillance from its website, replacing it with language about supporting "national security." The change comes amid ongoing employee protests over Google's contracts with the U.S. and Israeli militaries, with the Pentagon's AI chief recently confirming that AI models from some companies are accelerating the military's "kill chain."

EU AI Act Begins Enforcement Against 'Unacceptable Risk' AI Systems

The European Union's AI Act has reached its first compliance deadline, banning AI systems deemed to pose "unacceptable risk" as of February 2, 2025. Prohibited applications include AI for social scoring, emotion recognition in schools and workplaces, biometric categorization systems, predictive policing, and manipulation through subliminal techniques, with violations potentially resulting in fines of up to €35 million or 7% of annual revenue, whichever is greater.
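For readers wanting a sense of how the penalty ceiling scales, here is a minimal sketch of the "€35 million or 7% of annual revenue, whichever is greater" rule described above; the function name and revenue figure are illustrative assumptions, not terms from the Act itself.

```python
def max_ai_act_penalty(annual_revenue_eur: float) -> float:
    """Upper bound of the fine for a prohibited-practice violation:
    €35 million or 7% of annual revenue, whichever is greater."""
    FIXED_CAP_EUR = 35_000_000   # fixed €35 million ceiling
    REVENUE_SHARE = 0.07         # 7% of annual revenue
    return max(FIXED_CAP_EUR, REVENUE_SHARE * annual_revenue_eur)


# A firm with €1 billion in annual revenue faces a ceiling of €70 million,
# since 7% of revenue exceeds the fixed €35 million cap.
print(max_ai_act_penalty(1_000_000_000))  # 70000000.0
```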

Microsoft Establishes Advanced Planning Unit to Study AI's Societal Impact

Microsoft is creating a new Advanced Planning Unit (APU) within its Microsoft AI division to study the societal, health, and work implications of artificial intelligence. The unit will operate out of the office of Microsoft AI CEO Mustafa Suleyman and will combine research exploring future AI scenarios with product recommendations and published reports.