AI Safety News & Updates

Sutskever's Safe Superintelligence Startup Seeking Funding at $20B Valuation

Safe Superintelligence, founded by former OpenAI chief scientist Ilya Sutskever, is reportedly seeking funding at a valuation of at least $20 billion, four times the $5 billion valuation it reached in September. The startup, which has already raised $1 billion from investors including Sequoia Capital and Andreessen Horowitz, has yet to generate revenue and has revealed little about its technical work.

Meta Establishes Framework to Limit Development of High-Risk AI Systems

Meta has published its Frontier AI Framework, which outlines policies for handling powerful AI systems that pose significant safety risks. The company commits to limiting internal access to "high-risk" systems and implementing mitigations before release, and to halting development altogether on "critical-risk" systems that could enable catastrophic attacks or weapons development.

Microsoft Deploys DeepSeek's R1 Model Despite OpenAI IP Concerns

Microsoft has announced the availability of DeepSeek's R1 reasoning model on its Azure AI Foundry service, despite concerns that DeepSeek may have violated OpenAI's terms of service and misused Microsoft's services. Microsoft says the model has undergone rigorous safety evaluations and will soon be available on Copilot+ PCs, even as tests show R1 giving inaccurate answers on news topics and apparently censoring China-related content.