OpenAI AI News & Updates
OpenAI Co-founder John Schulman Joins Mira Murati's New AI Venture
John Schulman, an OpenAI co-founder who briefly joined Anthropic, is reportedly joining former OpenAI CTO Mira Murati's secretive new startup. Murati, who left OpenAI in September, has also recruited other former OpenAI talent, including Christian Gibson from the supercomputing team, and was reportedly seeking over $100 million in funding for her venture in October.
Skynet Chance (+0.01%): Schulman's explicit interest in AI alignment and his move to join Murati suggest the creation of another well-resourced lab focused on advanced AI development, potentially with safety considerations. However, the proliferation of well-funded AI labs with top talent increases the likelihood of competitive dynamics that could prioritize capabilities over safety concerns.
Skynet Date (-1 days): The concentration of elite AI talent in a new venture with substantial funding will likely accelerate development timelines for advanced AI systems. Schulman's expertise in reinforcement learning and Murati's leadership experience at OpenAI create a formidable team that could make rapid progress on key technical challenges.
AGI Progress (+0.02%): The formation of a new AI company led by two highly accomplished AI leaders with hands-on experience building state-of-the-art systems at OpenAI represents a meaningful addition to the AGI development landscape. Their combined expertise in reinforcement learning, large language models, and scaling AI systems will likely contribute to significant technical advances.
AGI Date (-1 days): The concentration of elite AI talent (including a ChatGPT architect and former OpenAI supercomputing team member) in a new well-funded venture will likely accelerate progress toward AGI. Their combined experience with cutting-edge AI systems gives them a significant head start in pursuing advanced capabilities.
Key ChatGPT Architect John Schulman Departs Anthropic After Brief Five-Month Tenure
John Schulman, an OpenAI co-founder and significant contributor to ChatGPT, has left AI safety-focused company Anthropic after only five months. Schulman had joined Anthropic from OpenAI in August 2024, citing a desire to focus more deeply on AI alignment research and technical work.
Skynet Chance (+0.03%): Schulman's rapid movement between leading AI labs suggests potential instability in AI alignment research leadership, which could subtly increase risks of unaligned AI development. His unexplained departure from a safety-focused organization may signal challenges in implementing alignment research effectively within commercial AI development contexts.
Skynet Date (+0 days): While executive movement could theoretically impact development timelines, there's insufficient information about Schulman's reasons for leaving or his next steps to determine if this will meaningfully accelerate or decelerate potential AI risk scenarios. Without knowing the impact on either organization's alignment work, this appears neutral for timeline shifts.
AGI Progress (+0.01%): The movement of key technical talent between leading AI organizations may marginally impact AGI progress through knowledge transfer and potential disruption to ongoing research programs. However, without details on why Schulman left or what impact this will have on either organization's technical direction, the effect appears minimal.
AGI Date (+0 days): The departure itself doesn't provide clear evidence of acceleration or deceleration in AGI timelines, as we lack information about how this affects either organization's research velocity or capabilities. Without understanding Schulman's next steps or the reasons for his departure, this news has negligible impact on AGI timeline expectations.
Figure AI Abandons OpenAI Partnership for In-House AI Models After 'Major Breakthrough'
Figure AI has terminated its partnership with OpenAI to focus on developing in-house AI models following what it describes as a "major breakthrough" in embodied AI. CEO Brett Adcock claims vertical integration is necessary for solving embodied AI at scale, promising to demonstrate unprecedented capabilities on their humanoid robot within 30 days.
Skynet Chance (+0.06%): Figure's pursuit of fully integrated, embodied AI for humanoid robots increases risk by creating more autonomous physical systems that might act independently in the real world, potentially with less oversight than when using external AI providers.
Skynet Date (-1 days): The claimed "major breakthrough" and vertical integration approach could accelerate development of more capable embodied AI systems, potentially bringing forward the timeline for advanced autonomous robots that can operate independently in complex environments.
AGI Progress (+0.04%): Figure's claimed breakthrough in embodied AI represents significant progress toward systems that can understand and interact with the physical world, a crucial capability for AGI that extends beyond language and image processing.
AGI Date (-1 days): The shift to specialized in-house AI models optimized for robotics suggests companies are finding faster paths to advanced capabilities through vertical integration, potentially accelerating the timeline to embodied intelligence components of AGI.
OpenAI's Operator Agent Shows Promise But Still Requires Significant Human Oversight
OpenAI's new AI agent Operator, which can perform tasks independently on the internet, shows promise but falls short of true autonomy. During testing, the system successfully navigated websites and completed basic tasks but required frequent human intervention, permissions, and guidance, demonstrating that fully autonomous AI agents remain out of reach.
Skynet Chance (-0.13%): Operator's significant limitations and need for constant human supervision demonstrate that autonomous AI systems remain far from acting independently, requiring explicit permissions and facing many basic operational challenges that reduce concerns about uncontrolled AI action.
Skynet Date (+2 days): The revealed limitations of Operator suggest that truly autonomous AI agents are further away than industry hype suggests, as even a cutting-edge system from OpenAI struggles with basic web navigation tasks without frequent human intervention.
AGI Progress (+0.02%): Despite limitations, Operator demonstrates meaningful progress in AI systems that can perceive visual web interfaces, navigate complex environments, and take actions over extended sequences, showing advancement toward more general-purpose AI capabilities.
AGI Date (+0 days): The significant human supervision still required by this advanced agent system suggests that practical, reliable AGI capabilities in real-world environments are further away than optimistic timelines might suggest, despite incremental progress.
OpenAI Trademark Filing Reveals Plans for Humanoid Robots and AI Hardware
OpenAI has filed a new trademark application with the USPTO that hints at ambitious future product lines including AI-powered hardware and humanoid robots. The filing mentions headphones, smart glasses, jewelry, humanoid robots with communication capabilities, custom AI chips, and quantum computing services, though the company's timeline for bringing these products to market remains unclear.
Skynet Chance (+0.06%): OpenAI's intent to develop humanoid robots with 'communication and learning functions' signals a significant step toward embodied AI that can physically interact with the world, increasing autonomous capabilities that could eventually lead to control issues if alignment isn't prioritized alongside capabilities.
Skynet Date (-1 days): The parallel development of hardware (including humanoid robots), custom AI chips, and quantum computing resources suggests OpenAI is building comprehensive infrastructure to accelerate AI embodiment and processing capabilities, potentially shortening the timeline to advanced AI systems.
AGI Progress (+0.03%): The integrated approach of combining advanced hardware, specialized chips, embodied robotics, and quantum computing optimization represents a systematic attempt to overcome current AI limitations, particularly in real-world interaction and computational efficiency.
AGI Date (-1 days): Custom AI chips targeted for 2026 release and quantum computing optimization suggest OpenAI is strategically addressing the computational barriers to AGI, potentially accelerating the timeline by enhancing both model training efficiency and real-world deployment capabilities.
OpenAI Launches 'Deep Research' Agent for Complex Information Analysis
OpenAI has introduced 'deep research,' a new AI agent for ChatGPT designed to conduct comprehensive, in-depth research across multiple sources. Powered by a specialized version of the o3 reasoning model, the system can analyze text, images, and PDFs from the internet, create visualizations, and provide fully documented outputs with citations, though it still faces limitations in distinguishing authoritative information and conveying uncertainty.
Skynet Chance (+0.04%): The development of AI systems capable of autonomous multi-step research, information analysis, and reasoning increases the likelihood of AIs operating with greater independence and less human oversight, potentially introducing unexpected behaviors when tasked with complex objectives.
Skynet Date (-1 days): The introduction of specialized reasoning agents capable of complex research tasks accelerates the path toward AI systems that can operate autonomously on knowledge-intensive problems, shortening the timeline to highly capable AI that can make independent judgments.
AGI Progress (+0.04%): Deep research represents significant progress toward AGI by demonstrating advanced reasoning capabilities, autonomous information gathering, and the ability to analyze diverse data sources across modalities, outperforming competing models on complex academic evaluations like Humanity's Last Exam.
AGI Date (-1 days): The specialized o3 reasoning model's ability to outperform other models on expert-level questions (26.6% accuracy on Humanity's Last Exam compared to single-digit scores from competitors) suggests reasoning capabilities are advancing faster than expected, accelerating the timeline to AGI.
Altman Admits OpenAI Falling Behind, Considers Open-Sourcing Older Models
In a Reddit AMA, OpenAI CEO Sam Altman acknowledged that Chinese competitor DeepSeek has reduced OpenAI's lead in AI and admitted that OpenAI has been "on the wrong side of history" regarding open source. Altman suggested the company might reconsider its closed-source strategy, potentially releasing older models, while also revealing his growing belief that recursive self-improvement in AI could lead to a "fast takeoff" scenario.
Skynet Chance (+0.09%): Altman's acknowledgment that a "fast takeoff" through recursive self-improvement is more plausible than he previously believed represents a concerning shift in risk assessment from one of the most influential AI developers, suggesting key industry leaders now see rapid uncontrolled advancement as increasingly likely.
Skynet Date (-2 days): The increased competitive pressure from Chinese companies like DeepSeek is accelerating development timelines and potentially reducing safety considerations as OpenAI feels compelled to maintain its market position, while Altman's belief in a possible "fast takeoff" suggests timelines could compress unexpectedly.
AGI Progress (+0.03%): The revelation of intensifying competition between major AI labs and OpenAI's potential shift toward more open-source strategies will likely accelerate overall progress by distributing advanced AI research more widely and creating stronger incentives for rapid capability advancement.
AGI Date (-1 days): The combination of heightened international competition, OpenAI's potential open sourcing of models, continued evidence that more compute leads to better models, and Altman's belief in recursive self-improvement suggest AGI timelines are compressing due to both technical and competitive factors.
OpenAI Launches Affordable Reasoning Model o3-mini for STEM Problems
OpenAI has released o3-mini, a new AI reasoning model specifically fine-tuned for STEM problems including programming, math, and science. The model offers improved performance over previous reasoning models while running faster and costing less, with OpenAI claiming a 39% reduction in major mistakes on tough real-world questions compared to o1-mini.
Skynet Chance (+0.06%): The development of more reliable reasoning models represents significant progress toward AI systems that can autonomously solve complex problems and check their own work. While safety measures are mentioned, the focus on competitive performance suggests capability development is outpacing alignment research.
Skynet Date (-1 days): The accelerating competition in reasoning models with rapidly decreasing costs suggests faster-than-expected progress toward autonomous problem-solving AI. The combination of improved accuracy, reduced costs, and faster performance indicates an acceleration in the timeline for advanced AI reasoning capabilities.
AGI Progress (+0.05%): Self-checking reasoning capabilities represent a significant step toward AGI, as they demonstrate improved reliability in domains requiring precise logical thinking. The model's ability to fact-check itself and perform competitively on math, science, and programming benchmarks shows meaningful progress in key AGI components.
AGI Date (-1 days): The rapid improvement cycle in reasoning models (o1 to o3 series) combined with increasing cost-efficiency suggests an acceleration in the development timeline for AGI. OpenAI's ability to deliver specialized reasoning at lower costs indicates that the economic barriers to AGI development are falling faster than anticipated.
OpenAI in Talks for $40 Billion Funding at $340 Billion Valuation
OpenAI is reportedly negotiating a massive funding round of up to $40 billion that would value the company at $340 billion, with SoftBank reportedly contributing $15 billion to $25 billion as lead investor. The capital would help fund OpenAI's money-losing operations, which reportedly lost $5 billion against $3.7 billion in revenue in 2024, and support its ambitious Stargate data center project.
Skynet Chance (+0.08%): The unprecedented scale of investment in a company developing frontier AI systems dramatically increases the resources available for advanced AI research with minimal oversight, potentially enabling development paths that prioritize capabilities over safety considerations.
Skynet Date (-1 days): The massive capital influx would accelerate OpenAI's ability to build immense computational infrastructure through the Stargate project, potentially dramatically shortening timelines for developing increasingly powerful and potentially uncontrollable AI systems.
AGI Progress (+0.04%): While not a direct technical advancement, this extraordinary level of funding represents a step-change in the resources available to overcome remaining barriers to AGI, particularly through massive computational scaling via the Stargate project.
AGI Date (-1 days): The combination of $40 billion in new funding and the explicit focus on building out massive AI compute infrastructure through Stargate would significantly accelerate OpenAI's capability to train increasingly powerful models, potentially shortening AGI timelines by years.
OpenAI Partners with US National Labs for Nuclear Weapons Research
OpenAI has announced plans to provide its AI models to US National Laboratories for use in nuclear weapons security and scientific research. In collaboration with Microsoft, OpenAI will deploy a model on Los Alamos National Laboratory's supercomputer to be used across multiple research programs, including those focused on reducing nuclear war risks and securing nuclear materials and weapons.
Skynet Chance (+0.11%): Deploying advanced AI systems directly into nuclear weapons security creates a concerning connection between frontier AI capabilities and weapons of mass destruction, introducing new vectors for catastrophic risk if the AI systems malfunction, get compromised, or exhibit unexpected behaviors in this high-stakes domain.
Skynet Date (-1 days): The integration of advanced AI into critical national security infrastructure represents a significant acceleration in the deployment of powerful AI systems in dangerous contexts, potentially creating pressure to deploy insufficiently safe systems ahead of adequate safety validation.
AGI Progress (+0.01%): While this partnership doesn't directly advance AGI capabilities, the deployment of AI models in complex, high-stakes scientific and security domains will likely generate valuable operational experience and potentially novel applications that could incrementally advance AI capabilities in specialized domains.
AGI Date (+0 days): The government partnership provides OpenAI with access to specialized supercomputing resources and domain expertise that could marginally accelerate development timelines, though the primary impact is on deployment rather than fundamental AGI research.