OpenAI AI News & Updates

OpenAI Indefinitely Postpones Open Model Release Due to Safety Concerns

OpenAI CEO Sam Altman announced another indefinite delay for the company's highly anticipated open model release, citing the need for additional safety testing and review of high-risk areas. The model was expected to feature reasoning capabilities similar to OpenAI's o-series and compete with other open models like Moonshot AI's newly released Kimi K2.

OpenAI Implements Strict Security Measures Following DeepSeek Model Copying Allegations

OpenAI has significantly enhanced its security operations to prevent corporate espionage, implementing measures like information tenting, biometric access controls, and offline systems for proprietary technology. The security overhaul was accelerated after Chinese startup DeepSeek allegedly copied OpenAI's models using distillation techniques in January.

Apple Explores Third-Party AI Integration for Next-Generation Siri Amid Internal Development Delays

Apple is reportedly considering using AI models from OpenAI and Anthropic to power an updated version of Siri, rather than relying solely on in-house technology. The company has been forced to delay its AI-enabled Siri from 2025 to 2026 or later due to technical challenges, highlighting Apple's struggle to keep pace with competitors in the AI race.

Meta Aggressively Recruits Eight OpenAI Researchers Following Llama 4 Underperformance

Meta has hired eight researchers away from OpenAI in recent weeks, the latest four being Shengjia Zhao, Jiahui Yu, Shuchao Bi, and Hongyu Ren. The aggressive talent push follows the disappointing performance of Meta's Llama 4 AI models, launched in April, which fell short of CEO Mark Zuckerberg's expectations.

OpenAI Acquires Crossing Minds AI Recommendation Team to Strengthen Personalization Capabilities

OpenAI has hired the team behind Crossing Minds, an AI recommendation startup that provided personalization systems to e-commerce businesses and had raised over $13.5 million. The acquisition brings expertise in AI-driven recommendation systems and customer behavior analysis to OpenAI, with at least one co-founder joining OpenAI's research, post-training, and agents division.

Meta Recruits OpenAI's Key Reasoning Model Researcher for AI Superintelligence Unit

Meta has hired Trapit Bansal, a key OpenAI researcher who helped develop the o1 reasoning model and worked on reinforcement learning with OpenAI co-founder Ilya Sutskever. Bansal joins Meta's AI superintelligence unit alongside other high-profile leaders as Mark Zuckerberg offers compensation packages of up to $100 million to attract top AI talent.

Meta Recruits Three OpenAI Researchers to Superintelligence Team Despite Altman Dismissing the Poaching Effort

Meta has recruited three OpenAI researchers (Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai) for its superintelligence team as part of Mark Zuckerberg's aggressive hiring campaign, which has offered compensation packages exceeding $100 million. The hires mark a notable win in the talent war between major AI companies, though Meta's attempts to recruit OpenAI's co-founders have so far been unsuccessful.

Former OpenAI CTO Mira Murati's Stealth Startup Raises Record $2B Seed Round

Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, has secured a $2 billion seed round at a $10 billion valuation just six months after launch. The startup's specific focus remains undisclosed, but it has attracted significant talent from OpenAI, and the round may be the largest seed round in history.

OpenAI Signs $200M Defense Contract, Raising Questions About Microsoft Partnership

OpenAI has secured a $200 million deal with the U.S. Department of Defense, potentially straining its relationship with Microsoft. The deal reflects Silicon Valley's deepening military partnerships and industry leaders' calls for an AI "arms race."

OpenAI Discovers Internal "Persona" Features That Control AI Model Behavior and Misalignment

OpenAI researchers have identified hidden features inside AI models that correspond to distinct behavioral "personas," including toxic and misaligned ones. These features can be adjusted mathematically to dial a problematic behavior up or down, and a misaligned model can be steered back toward aligned behavior through targeted fine-tuning. This breakthrough in AI interpretability could help detect and prevent misalignment in production AI systems.
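The idea of "dialing" a persona feature up or down can be sketched as activation steering: a behavior corresponds to a direction in the model's activation space, and scaling a hidden state along that direction strengthens or weakens the behavior. The following toy NumPy sketch is purely illustrative; the vector sizes, function names, and the random "persona direction" are assumptions, not OpenAI's actual method or API.

```python
import numpy as np

# Toy hidden state and a unit-norm "persona" direction.
# Both are random stand-ins; in the real research these would come
# from a model's activations and an interpretability analysis.
rng = np.random.default_rng(0)
hidden = rng.standard_normal(8)
persona_dir = rng.standard_normal(8)
persona_dir /= np.linalg.norm(persona_dir)

def steer(h, direction, alpha):
    """Shift a hidden state along the persona direction by alpha."""
    return h + alpha * direction

def persona_strength(h, direction):
    """Projection of the hidden state onto the persona direction."""
    return float(h @ direction)

boosted = steer(hidden, persona_dir, alpha=2.0)      # behavior turned "up"
suppressed = steer(hidden, persona_dir, alpha=-2.0)  # behavior turned "down"

print(persona_strength(hidden, persona_dir))
print(persona_strength(boosted, persona_dir))     # baseline + 2.0
print(persona_strength(suppressed, persona_dir))  # baseline - 2.0
```

Because the direction is unit-norm, steering by `alpha` shifts the projection by exactly `alpha`, which is the sense in which the behavior is "mathematically controlled."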