Regulatory Compliance AI News & Updates

xAI Fails to Deliver Promised AI Safety Framework by Self-Imposed Deadline

Elon Musk's AI company xAI has missed its self-imposed May 10 deadline to publish a finalized AI safety framework, a commitment it made in February at the AI Seoul Summit. The company's initial draft framework was criticized for applying only to unspecified future models and for lacking concrete risk-mitigation measures, and watchdog organizations have ranked xAI poorly for its weak risk management practices compared to industry peers.

Google's Gemini 2.5 Pro Safety Report Falls Short of Transparency Standards

Google published a technical safety report for its Gemini 2.5 Pro model several weeks after the model's public release, and experts have criticized the report for lacking critical safety details. The sparse document omits information about Google's Frontier Safety Framework and its dangerous-capability evaluations, raising concerns about the company's commitment to AI safety transparency despite prior promises to regulators.

Google Accelerates AI Model Releases While Delaying Safety Documentation

Google has sharply accelerated its AI model releases, launching Gemini 2.5 Pro just three months after Gemini 2.0 Flash, yet it has not published safety reports for these latest models. Although Google was among the first companies to propose model cards for responsible AI development and has made transparency commitments to governments, it has not released a model card in over a year, prompting concerns that it is prioritizing speed over safety.