Anthropic AI News & Updates
OpenAI Co-founder John Schulman Joins Mira Murati's New AI Venture
John Schulman, an OpenAI co-founder who briefly joined Anthropic, is reportedly joining former OpenAI CTO Mira Murati's secretive new startup. Murati, who left OpenAI in September, has also recruited other former OpenAI talent, including Christian Gibson from the supercomputing team, and in October was reportedly seeking over $100 million in funding for her venture.
Skynet Chance (+0.01%): Schulman's explicit interest in AI alignment and his move to join Murati suggest the creation of another well-resourced lab focused on advanced AI development, potentially with safety considerations. However, the proliferation of well-funded AI labs with top talent increases the likelihood of competitive dynamics that could prioritize capabilities over safety concerns.
Skynet Date (-1 day): The concentration of elite AI talent in a new venture with substantial funding will likely accelerate development timelines for advanced AI systems. Schulman's expertise in reinforcement learning and Murati's leadership experience at OpenAI create a formidable team that could make rapid progress on key technical challenges.
AGI Progress (+0.02%): The formation of a new AI company led by two highly accomplished AI leaders with hands-on experience building state-of-the-art systems at OpenAI represents a meaningful addition to the AGI development landscape. Their combined expertise in reinforcement learning, large language models, and scaling AI systems will likely contribute to significant technical advances.
AGI Date (-1 day): The concentration of elite AI talent (including a ChatGPT architect and a former OpenAI supercomputing team member) in a new, well-funded venture will likely accelerate progress toward AGI. Their combined experience with cutting-edge AI systems gives them a significant head start in pursuing advanced capabilities.
Key ChatGPT Architect John Schulman Departs Anthropic After Brief Five-Month Tenure
John Schulman, an OpenAI co-founder and significant contributor to ChatGPT, has left AI safety-focused company Anthropic after only five months. Schulman had joined Anthropic from OpenAI in August 2024, citing a desire to focus more deeply on AI alignment research and technical work.
Skynet Chance (+0.03%): Schulman's rapid movement between leading AI labs suggests potential instability in AI alignment research leadership, which could subtly increase risks of unaligned AI development. His unexplained departure from a safety-focused organization may signal challenges in implementing alignment research effectively within commercial AI development contexts.
Skynet Date (+0 days): While executive movement could theoretically impact development timelines, there is insufficient information about Schulman's reasons for leaving or his next steps to determine whether this will meaningfully accelerate or decelerate potential AI risk scenarios. Without knowing the impact on either organization's alignment work, this appears neutral for timeline shifts.
AGI Progress (+0.01%): The movement of key technical talent between leading AI organizations may marginally impact AGI progress through knowledge transfer and potential disruption to ongoing research programs. However, without details on why Schulman left or what impact this will have on either organization's technical direction, the effect appears minimal.
AGI Date (+0 days): The departure itself doesn't provide clear evidence of acceleration or deceleration in AGI timelines, as we lack information about how this affects either organization's research velocity or capabilities. Without understanding Schulman's next steps or the reasons for his departure, this news has negligible impact on AGI timeline expectations.
Anthropic CEO Calls for Stronger AI Export Controls Against China
Anthropic's CEO Dario Amodei argues that U.S. export controls on AI chips are effectively slowing Chinese AI progress, noting that DeepSeek's models match U.S. models from 7-10 months earlier but don't represent a fundamental breakthrough. Amodei advocates for strengthening export restrictions to prevent China from obtaining millions of chips for AI development, warning that without such controls, China could redirect resources toward military AI applications.
Skynet Chance (+0.03%): Amodei's advocacy for limiting advanced AI development capabilities in countries with different value systems could reduce risks of misaligned AI being developed without adequate safety protocols, though his focus appears more on preventing military applications than on existential risks from advanced AI.
Skynet Date (+1 day): Stronger export controls of the kind Amodei advocates could significantly slow the global proliferation of advanced AI capabilities, potentially extending timelines for high-risk AI development by constraining access to the computational resources necessary for training frontier models.
AGI Progress (-0.01%): While the article mainly discusses policy rather than technical breakthroughs, Amodei's analysis suggests DeepSeek's models represent expected efficiency improvements rather than fundamental advances, implying current AGI progress is following predictable trajectories rather than accelerating unexpectedly.
AGI Date (+1 day): The potential strengthening of export controls advocated by Amodei, and apparently supported by Trump's commerce secretary nominee, could moderately slow global AGI development by restricting the computational resources available to some major AI developers, extending timelines for achieving AGI capabilities.