OpenAI AI News & Updates
Altman Considers "Compute Budget" Concept, Warns of AI's Unequal Benefits
OpenAI CEO Sam Altman proposed a "compute budget" concept to ensure AI benefits are widely distributed, acknowledging that technological progress doesn't inherently lead to greater equality. Altman claims AGI is approaching but will require significant human supervision, and suggests that while pushing AI boundaries remains expensive, the cost to access capable AI systems is falling rapidly.
Skynet Chance (+0.03%): Altman's admission that advanced AI systems may be "surprisingly bad at some things" and require extensive human supervision suggests ongoing control challenges. His acknowledgment of potential power imbalances indicates awareness of risks but doesn't guarantee effective mitigations.
Skynet Date (-4 days): OpenAI's plans to spend hundreds of billions on computing infrastructure, combined with Altman's explicit statement that AGI is near and the company's shift toward profit-maximization, strongly accelerate the timeline toward potentially unaligned powerful systems.
AGI Progress (+0.06%): Altman's confidence in approaching AGI, backed by OpenAI's massive infrastructure investments and explicit revenue targets, indicates significant progress in capabilities. His specific vision of millions of hyper-capable AI systems suggests concrete technical pathways.
AGI Date (-5 days): The combination of OpenAI's planned $500 billion investment in computing infrastructure, Altman's explicit statement that AGI is near, and the company's aggressive $100 billion revenue target by 2029 points to a significantly accelerated AGI timeline.
Figure AI and Others Moving Away from OpenAI Dependencies
Humanoid robotics company Figure has announced it's ending its partnership with OpenAI to develop its own in-house AI models, with CEO Brett Adcock hinting at a significant breakthrough. This move reflects a potential shift in the industry as other organizations, including academic researchers who recently demonstrated training a capable reasoning model for under $50, explore alternatives to OpenAI's offerings.
Skynet Chance (+0.04%): The decentralization of advanced AI development away from major labs like OpenAI increases the risk of less safety-conscious approaches being implemented, particularly in robotics systems like Figure's humanoids. Having multiple independent robotics companies developing their own advanced AI models with fewer oversight mechanisms could increase the likelihood of unforeseen consequences.
Skynet Date (-3 days): The claimed breakthrough in Figure's in-house AI development alongside the demonstrated ability to train capable reasoning models at dramatically lower costs could significantly accelerate the development timeline for advanced autonomous systems. The democratization of AI development capabilities removes barriers that previously slowed development of potentially risky applications.
AGI Progress (+0.03%): While not directly advancing core AGI capabilities, the trend toward more companies building their own AI systems rather than relying on OpenAI suggests broader industry capability and knowledge diffusion. This decentralization of AI development could lead to more diverse approaches to solving AGI-relevant problems and accelerate innovation through increased competition.
AGI Date (-2 days): The demonstration that capable reasoning models can be trained for under $50 in cloud computing costs dramatically lowers the resource barrier to AI development. Combined with Figure's claimed breakthrough in robotics AI, this suggests the pace of advancement is accelerating as AI development becomes more accessible to a wider range of organizations.
OpenAI Co-founder John Schulman Joins Mira Murati's New AI Venture
John Schulman, an OpenAI co-founder who briefly joined Anthropic, is reportedly joining former OpenAI CTO Mira Murati's secretive new startup. Murati, who left OpenAI in September, has also recruited other former OpenAI talent, including Christian Gibson from the supercomputing team, and as of October was reportedly seeking over $100 million in funding for her venture.
Skynet Chance (+0.01%): Schulman's explicit interest in AI alignment and his move to join Murati suggest the creation of another well-resourced lab focused on advanced AI development, potentially with safety considerations. However, the proliferation of well-funded AI labs with top talent increases the likelihood of competitive dynamics that could prioritize capabilities over safety concerns.
Skynet Date (-1 day): The concentration of elite AI talent in a new venture with substantial funding will likely accelerate development timelines for advanced AI systems. Schulman's expertise in reinforcement learning and Murati's leadership experience at OpenAI create a formidable team that could make rapid progress on key technical challenges.
AGI Progress (+0.04%): The formation of a new AI company led by two highly accomplished AI leaders with hands-on experience building state-of-the-art systems at OpenAI represents a meaningful addition to the AGI development landscape. Their combined expertise in reinforcement learning, large language models, and scaling AI systems will likely contribute to significant technical advances.
AGI Date (-2 days): The concentration of elite AI talent (including a ChatGPT architect and a former OpenAI supercomputing team member) in a new, well-funded venture will likely accelerate progress toward AGI. Their combined experience with cutting-edge AI systems gives them a significant head start in pursuing advanced capabilities.
Key ChatGPT Architect John Schulman Departs Anthropic After Brief Five-Month Tenure
John Schulman, an OpenAI co-founder and significant contributor to ChatGPT, has left AI safety-focused company Anthropic after only five months. Schulman had joined Anthropic from OpenAI in August 2024, citing a desire to focus more deeply on AI alignment research and technical work.
Skynet Chance (+0.03%): Schulman's rapid movement between leading AI labs suggests potential instability in AI alignment research leadership, which could subtly increase risks of unaligned AI development. His unexplained departure from a safety-focused organization may signal challenges in implementing alignment research effectively within commercial AI development contexts.
Skynet Date (+0 days): While executive movement could theoretically impact development timelines, there's insufficient information about Schulman's reasons for leaving or his next steps to determine if this will meaningfully accelerate or decelerate potential AI risk scenarios. Without knowing the impact on either organization's alignment work, this appears neutral for timeline shifts.
AGI Progress (+0.01%): The movement of key technical talent between leading AI organizations may marginally impact AGI progress through knowledge transfer and potential disruption to ongoing research programs. However, without details on why Schulman left or what impact this will have on either organization's technical direction, the effect appears minimal.
AGI Date (+0 days): The departure itself doesn't provide clear evidence of acceleration or deceleration in AGI timelines, as we lack information about how this affects either organization's research velocity or capabilities. Without understanding Schulman's next steps or the reasons for his departure, this news has negligible impact on AGI timeline expectations.
Figure AI Abandons OpenAI Partnership for In-House AI Models After 'Major Breakthrough'
Figure AI has terminated its partnership with OpenAI to focus on developing in-house AI models following what it describes as a "major breakthrough" in embodied AI. CEO Brett Adcock claims vertical integration is necessary for solving embodied AI at scale, promising to demonstrate unprecedented capabilities on their humanoid robot within 30 days.
Skynet Chance (+0.06%): Figure's pursuit of fully integrated, embodied AI for humanoid robots increases risk by creating more autonomous physical systems that might act independently in the real world, potentially with less oversight than when using external AI providers.
Skynet Date (-2 days): The claimed "major breakthrough" and vertical integration approach could accelerate development of more capable embodied AI systems, potentially bringing forward the timeline for advanced autonomous robots that can operate independently in complex environments.
AGI Progress (+0.09%): Figure's claimed breakthrough in embodied AI represents significant progress toward systems that can understand and interact with the physical world, a crucial capability for AGI that extends beyond language and image processing.
AGI Date (-2 days): The shift to specialized in-house AI models optimized for robotics suggests companies are finding faster paths to advanced capabilities through vertical integration, potentially accelerating the timeline to embodied intelligence components of AGI.
OpenAI's Operator Agent Shows Promise But Still Requires Significant Human Oversight
OpenAI's new AI agent Operator, which can perform tasks independently on the internet, shows promise but falls short of true autonomy. During testing, the system successfully navigated websites and completed basic tasks but required frequent human intervention, permissions, and guidance, demonstrating that fully autonomous AI agents remain out of reach.
Skynet Chance (-0.13%): Operator's significant limitations and need for constant human supervision demonstrate that autonomous AI systems remain far from acting independently, requiring explicit permissions and facing many basic operational challenges that reduce concerns about uncontrolled AI action.
Skynet Date (+3 days): The revealed limitations of Operator suggest that truly autonomous AI agents are further away than industry hype suggests, as even a cutting-edge system from OpenAI struggles with basic web navigation tasks without frequent human intervention.
AGI Progress (+0.04%): Despite limitations, Operator demonstrates meaningful progress in AI systems that can perceive visual web interfaces, navigate complex environments, and take actions over extended sequences, showing advancement toward more general-purpose AI capabilities.
AGI Date (+1 day): The significant human supervision still required by this advanced agent system suggests that practical, reliable AGI capabilities in real-world environments are further away than optimistic timelines imply, despite incremental progress.
OpenAI Trademark Filing Reveals Plans for Humanoid Robots and AI Hardware
OpenAI has filed a new trademark application with the USPTO that hints at ambitious future product lines including AI-powered hardware and humanoid robots. The filing mentions headphones, smart glasses, jewelry, humanoid robots with communication capabilities, custom AI chips, and quantum computing services, though the company's timeline for bringing these products to market remains unclear.
Skynet Chance (+0.06%): OpenAI's intent to develop humanoid robots with 'communication and learning functions' signals a significant step toward embodied AI that can physically interact with the world, increasing autonomous capabilities that could eventually lead to control issues if alignment isn't prioritized alongside capabilities.
Skynet Date (-2 days): The parallel development of hardware (including humanoid robots), custom AI chips, and quantum computing resources suggests OpenAI is building comprehensive infrastructure to accelerate AI embodiment and processing capabilities, potentially shortening the timeline to advanced AI systems.
AGI Progress (+0.05%): The integrated approach of combining advanced hardware, specialized chips, embodied robotics, and quantum computing optimization represents a systematic attempt to overcome current AI limitations, particularly in real-world interaction and computational efficiency.
AGI Date (-3 days): Custom AI chips targeted for 2026 release and quantum computing optimization suggest OpenAI is strategically addressing the computational barriers to AGI, potentially accelerating the timeline by enhancing both model training efficiency and real-world deployment capabilities.
OpenAI Launches 'Deep Research' Agent for Complex Information Analysis
OpenAI has introduced 'deep research,' a new AI agent for ChatGPT designed to conduct comprehensive, in-depth research across multiple sources. Powered by a specialized version of the o3 reasoning model, the system can analyze text, images, and PDFs from the internet, create visualizations, and provide fully documented outputs with citations, though it still faces limitations in distinguishing authoritative information and conveying uncertainty.
Skynet Chance (+0.04%): The development of AI systems capable of autonomous multi-step research, information analysis, and reasoning increases the likelihood of AIs operating with greater independence and less human oversight, potentially introducing unexpected behaviors when tasked with complex objectives.
Skynet Date (-1 day): The introduction of specialized reasoning agents capable of complex research tasks accelerates the path toward AI systems that can operate autonomously on knowledge-intensive problems, shortening the timeline to highly capable AI that can make independent judgments.
AGI Progress (+0.08%): Deep research represents significant progress toward AGI by demonstrating advanced reasoning capabilities, autonomous information gathering, and the ability to analyze diverse data sources across modalities, outperforming competing models on complex academic evaluations like Humanity's Last Exam.
AGI Date (-3 days): The specialized o3 reasoning model's ability to outperform other models on expert-level questions (26.6% accuracy on Humanity's Last Exam compared to single-digit scores from competitors) suggests reasoning capabilities are advancing faster than expected, accelerating the timeline to AGI.
Altman Admits OpenAI Falling Behind, Considers Open-Sourcing Older Models
In a Reddit AMA, OpenAI CEO Sam Altman acknowledged that Chinese competitor DeepSeek has reduced OpenAI's lead in AI and admitted that OpenAI has been "on the wrong side of history" regarding open source. Altman suggested the company might reconsider its closed source strategy, potentially releasing older models, while also revealing his growing belief that AI recursive self-improvement could lead to a "fast takeoff" scenario.
Skynet Chance (+0.09%): Altman's acknowledgment that a "fast takeoff" through recursive self-improvement is more plausible than he previously believed represents a concerning shift in risk assessment from one of the most influential AI developers, suggesting key industry leaders now see rapid uncontrolled advancement as increasingly likely.
Skynet Date (-3 days): The increased competitive pressure from Chinese companies like DeepSeek is accelerating development timelines and potentially reducing safety considerations as OpenAI feels compelled to maintain its market position, while Altman's belief in a possible "fast takeoff" suggests timelines could compress unexpectedly.
AGI Progress (+0.06%): The revelation of intensifying competition between major AI labs and OpenAI's potential shift toward more open source strategies will likely accelerate overall progress by distributing advanced AI research more widely and creating stronger incentives for rapid capability advancement.
AGI Date (-4 days): The combination of heightened international competition, OpenAI's potential open sourcing of models, continued evidence that more compute leads to better models, and Altman's belief in recursive self-improvement suggest AGI timelines are compressing due to both technical and competitive factors.
OpenAI Launches Affordable Reasoning Model o3-mini for STEM Problems
OpenAI has released o3-mini, a new AI reasoning model specifically fine-tuned for STEM problems including programming, math, and science. The model offers improved performance over previous reasoning models while running faster and costing less, with OpenAI claiming a 39% reduction in major mistakes on tough real-world questions compared to o1-mini.
Skynet Chance (+0.06%): The development of more reliable reasoning models represents significant progress toward AI systems that can autonomously solve complex problems and check their own work. While safety measures are mentioned, the focus on competitive performance suggests capability development is outpacing alignment research.
Skynet Date (-2 days): The accelerating competition in reasoning models with rapidly decreasing costs suggests faster-than-expected progress toward autonomous problem-solving AI. The combination of improved accuracy, reduced costs, and faster performance indicates an acceleration in the timeline for advanced AI reasoning capabilities.
AGI Progress (+0.1%): Self-checking reasoning capabilities represent a significant step toward AGI, as they demonstrate improved reliability in domains requiring precise logical thinking. The model's ability to fact-check itself and perform competitively on math, science, and programming benchmarks shows meaningful progress in key AGI components.
AGI Date (-4 days): The rapid improvement cycle in reasoning models (o1 to o3 series) combined with increasing cost-efficiency suggests an acceleration in the development timeline for AGI. OpenAI's ability to deliver specialized reasoning at lower costs indicates that the economic barriers to AGI development are falling faster than anticipated.