Policy and Regulation AI News & Updates

US and UK Decline to Sign Paris AI Summit Declaration as 61 Countries Commit to Ethical AI Development

At the Artificial Intelligence Action Summit in Paris, 61 countries, including China and India, signed a declaration committing to AI that is 'open, inclusive, transparent, ethical, safe, secure and trustworthy,' but the US and UK declined to sign. US Vice President JD Vance emphasized America's commitment to maintaining AI leadership and avoiding 'ideological bias,' while European Commission President Ursula von der Leyen defended the EU AI Act as providing unified safety rules, though she acknowledged the need to reduce red tape.

Trump Administration Prioritizes US AI Dominance Over Safety Regulations in Paris Summit Speech

At the AI Action Summit in Paris, US Vice President JD Vance delivered a speech prioritizing American AI dominance and deregulation over safety concerns. Vance outlined the Trump administration's focus on maintaining US AI supremacy, warning that excessive regulation could kill innovation and suggesting that AI safety discussions are sometimes pushed by incumbents to preserve market advantage rather than to serve the public.

AI Pioneer Andrew Ng Endorses Google's Reversal on AI Weapons Pledge

AI researcher and Google Brain founder Andrew Ng expressed support for Google's decision to drop its 7-year pledge not to build AI systems for weapons. Ng criticized the original Project Maven protests, arguing that American companies should assist the military, and emphasized that AI drones will "completely revolutionize the battlefield" while suggesting that America's AI safety depends on technological competition with China.

European Union Publishes Guidelines on AI System Classification Under New AI Act

The European Union has released non-binding guidance to help determine which systems qualify as AI under its recently implemented AI Act. The guidance acknowledges that no exhaustive classification is possible and that the document will evolve as new questions and use cases emerge, with companies facing potential fines of up to 7% of global annual turnover for non-compliance.

Google Removes Ban on AI for Weapons and Surveillance from Its Principles

Google has quietly removed its pledge not to build AI for weapons or surveillance from its website, replacing it with language about supporting "national security." The change comes amid ongoing employee protests over Google's contracts with the US and Israeli militaries, and the Pentagon's AI chief recently confirmed that some companies' AI models are accelerating the military's kill chain.

EU AI Act Begins Enforcement Against 'Unacceptable Risk' AI Systems

The European Union's AI Act has reached its first compliance deadline, banning AI systems deemed to pose "unacceptable risk" as of February 2, 2025. The prohibited applications include AI for social scoring, emotion recognition in schools and workplaces, biometric categorization systems, predictive policing, and manipulation through subliminal techniques, with violations potentially resulting in fines of up to €35 million or 7% of global annual revenue, whichever is higher.

OpenAI Partners with US National Labs for Nuclear Weapons Research

OpenAI has announced plans to provide its AI models to US National Laboratories for use in nuclear weapons security and scientific research. In collaboration with Microsoft, OpenAI will deploy a model on Los Alamos National Laboratory's supercomputer to be used across multiple research programs, including those focused on reducing nuclear war risks and securing nuclear materials and weapons.

India to Host Chinese DeepSeek AI Models on Local Servers Despite Historical Tech Restrictions

India's IT minister Ashwini Vaishnaw has announced plans to host Chinese AI lab DeepSeek's models on domestic servers, marking a rare allowance for Chinese technology in a country that has banned over 300 Chinese apps since 2020. The arrangement appears contingent on data localization, with DeepSeek's models to be hosted on India's new AI Compute Facility equipped with nearly 19,000 GPUs.

Anthropic CEO Calls for Stronger AI Export Controls Against China

Anthropic CEO Dario Amodei argues that US export controls on AI chips are effectively slowing Chinese AI progress, noting that DeepSeek's models match US models from 7-10 months earlier but do not represent a fundamental breakthrough. Amodei advocates strengthening export restrictions to prevent China from obtaining millions of chips for AI development, warning that without such controls, China could redirect resources toward military AI applications.