February 18, 2025 News
Sutskever's Safe Superintelligence Startup Nearing $1B Funding at $30B Valuation
Ilya Sutskever's AI startup, Safe Superintelligence, is reportedly close to raising over $1 billion at a $30 billion valuation, with VC firm Greenoaks Capital Partners leading the round with a $500 million investment. The company, co-founded by former OpenAI and Apple AI leaders, has no immediate plans to sell AI products; the round would bring its total funding to approximately $2 billion.
Skynet Chance (-0.13%): A substantial investment in a company explicitly focused on AI safety, founded by respected AI leaders with deep technical expertise, represents meaningful progress toward reducing existential risks. The company's focus on safety over immediate product commercialization suggests a serious commitment to addressing superintelligence risks.
Skynet Date (-1 day): While substantial funding could accelerate AI development timelines, the explicit focus on safety by key technical leaders suggests they anticipate superintelligence arriving sooner than commonly expected, potentially leading to earlier development of crucial safety mechanisms.
AGI Progress (+0.08%): The massive valuation and investment signal extraordinary confidence in Sutskever's technical approach to advancing AI capabilities. Given Sutskever's pivotal role in breakthrough AI technologies at OpenAI, this substantial backing will likely accelerate progress toward more advanced systems approaching AGI.
AGI Date (-3 days): The extraordinary $30 billion valuation for a pre-revenue company led by a key architect of modern AI suggests investors believe transformative AI capabilities are achievable on a much shorter timeline than previously expected. This massive capital infusion will likely significantly accelerate development toward AGI.
Former OpenAI Leaders Launch Thinking Machines Lab to Build More Customizable AI
Former OpenAI CTO Mira Murati has launched Thinking Machines Lab, a startup focused on developing more customizable and capable AI systems that address key gaps in current AI technologies. The company, which includes OpenAI co-founder John Schulman and other high-profile AI researchers, aims to build frontier multimodal systems for applications in science and programming while emphasizing AI safety.
Skynet Chance (+0.01%): The emphasis on building highly capable frontier models increases potential risks, but the explicit focus on customizability, safety practices, and sharing alignment knowledge provides some counterbalance. Their stated commitment to understanding systems indicates awareness of control issues.
Skynet Date (-2 days): The formation of another highly credentialed team pursuing frontier AI capabilities, particularly multimodal systems for science and programming, will likely accelerate development timelines toward more advanced systems with potentially unpredictable emergent behaviors.
AGI Progress (+0.06%): The assembly of key technical leaders from OpenAI, including researchers who helped develop ChatGPT and other breakthrough systems, represents a significant concentration of talent. Their explicit focus on frontier multimodal models will likely drive substantial technical progress toward more AGI-like capabilities.
AGI Date (-3 days): The emergence of another well-funded company founded by architects of today's most advanced AI systems, explicitly focused on frontier capabilities in domains like science and programming, will likely accelerate development timelines through additional competitive pressure and parallel research efforts.
OpenAI Plans Special Voting Rights to Safeguard Board Against Takeover Attempts
OpenAI is considering giving its nonprofit board special voting rights that would allow it to overrule major investors, protecting against hostile takeovers like the recent $97.4 billion offer from Elon Musk and investors. This move comes as OpenAI transitions from a capped-profit structure to a public benefit corporation by late 2026, with plans to separate its nonprofit arm with its own staff and leadership team.
Skynet Chance (-0.1%): This governance structure would preserve the nonprofit board's power to potentially prioritize safety over profit motives, reducing the likelihood of purely commercial interests driving risky AI development decisions. The board's ability to overrule investors could serve as a safeguard against misaligned AI development.
Skynet Date (+2 days): The proposed governance structure introduces additional constraints on rapid development decisions, potentially slowing the pace of capabilities deployment in favor of more deliberate oversight. This structured approach to corporate governance likely adds time to any pathway toward uncontrolled AI.
AGI Progress (0%): The news focuses exclusively on corporate structure and governance rather than research or technical capabilities, having negligible direct impact on AGI development progress. This reorganization affects who controls OpenAI but doesn't directly accelerate or decelerate technical capabilities.
AGI Date (+1 day): The proposed governance structure creates additional decision-making layers that could marginally slow the pace of aggressive capability deployment. A board with special voting rights might introduce more deliberation around major research directions, potentially extending timelines slightly.
xAI Launches Grok 3 Model Suite with Enhanced Reasoning Capabilities
Elon Musk's xAI has released its latest flagship AI model, Grok 3, trained with approximately 10 times more computing power than its predecessor using 200,000 GPUs. The release includes a family of models including Grok 3 Reasoning and Grok 3 mini, featuring specialized reasoning capabilities for mathematics, science, and programming, alongside a new DeepSearch feature for internet research.
Skynet Chance (+0.08%): Grok 3's significant scaling of compute resources (10x over predecessor, 200,000 GPUs) and emphasis on being "maximally truth-seeking" even when "at odds with political correctness" indicates reduced safety guardrails and increased autonomous reasoning capabilities. These developments push the frontier of LLM autonomy and reduce human oversight controls.
Skynet Date (-3 days): The massive compute investment (200,000 GPUs) and rapid advancement in reasoning capabilities demonstrate accelerating technical progress and compute scaling beyond expectations. The aggressive development timeline and reasoning capabilities being commercialized faster than anticipated suggest advancement toward AI risk scenarios is accelerating.
AGI Progress (+0.11%): The 10x increase in compute, claimed benchmark performance surpassing competitors such as GPT-4o, and specialized reasoning capabilities represent substantial progress toward advanced AI capabilities. The reported results on challenging mathematics and science problems suggest meaningful improvements in core reasoning abilities central to AGI development.
AGI Date (-4 days): The rapid scaling of compute (200,000 GPUs), demonstrated improvements on reasoning benchmarks, and integration of reasoning with internet search indicate AI capabilities are advancing more quickly than previously expected. This massive investment and accelerated capabilities development suggest AGI timelines are compressing significantly.