July 25, 2025 News
Meta Appoints Former OpenAI Researcher as Chief Scientist of New AI Superintelligence Unit
Meta has named Shengjia Zhao, a former OpenAI researcher who contributed to ChatGPT, GPT-4, and the o1 reasoning model, as Chief Scientist of Meta Superintelligence Labs (MSL). The company has been aggressively recruiting top AI talent with eight- and nine-figure compensation packages and is building a one-gigawatt computing cluster called Prometheus to support frontier AI model development. This represents Meta's major push to compete directly with OpenAI and Google in developing superintelligent AI systems.
Skynet Chance (+0.04%): The explicit focus on "superintelligence" and aggressive scaling of AI capabilities increases potential risks from more powerful AI systems. However, this represents expected competitive dynamics rather than a fundamental shift in safety approaches.
Skynet Date (-1 days): Meta's massive investment in computing infrastructure and talent acquisition from leading AI labs significantly accelerates the pace of frontier AI development. The one-gigawatt Prometheus cluster and recruitment of key researchers behind GPT-4 and o1 will likely speed up the timeline for advanced AI capabilities.
AGI Progress (+0.03%): Hiring a key researcher behind OpenAI's o1 reasoning model and building massive compute infrastructure represents significant progress toward AGI capabilities. The focus on reasoning models, which are considered a key step toward general intelligence, particularly advances this goal.
AGI Date (-1 days): The combination of top-tier talent from multiple leading AI labs and unprecedented computing resources will likely accelerate AGI development timelines. Meta's aggressive recruiting and infrastructure investments suggest they aim to compress development cycles significantly.
Trump's AI Action Plan Reduces Regulatory Oversight and Environmental Barriers for Tech Companies
President Trump unveiled an AI Action Plan that was shaped by Silicon Valley allies and is being celebrated by major AI companies. The plan aims to reduce environmental regulatory barriers for data center construction, limit state government oversight of AI development and safety, and prevent tech companies from developing what conservatives consider "woke" AI.
Skynet Chance (+0.04%): Limiting state government oversight of AI development and safety weakens regulatory guardrails against uncontrolled AI development, increasing the probability of inadequately governed AI systems.
Skynet Date (-1 days): Easier data center construction and reduced regulatory barriers will likely accelerate AI development timelines. However, the impact is moderate since the core technological challenges remain unchanged.
AGI Progress (+0.01%): The policy changes don't directly advance AGI capabilities but create a more favorable environment for AI research and development. The impact on actual technical progress toward AGI is minimal.
AGI Date (-1 days): Reduced environmental and regulatory barriers for data center construction will accelerate infrastructure development needed for large-scale AI training. This could meaningfully speed up the timeline for achieving AGI by removing bureaucratic bottlenecks.