AGI timeline AI News & Updates
OpenAI Targets Fully Autonomous AI Researcher by 2028, Superintelligence Within a Decade
OpenAI CEO Sam Altman announced that the company is on track to achieve an intern-level AI research assistant by September 2026 and a fully automated "legitimate AI researcher" by 2028. Chief Scientist Jakub Pachocki stated that deep learning systems could reach superintelligence within a decade, with OpenAI planning massive infrastructure investments, including 30 gigawatts of compute capacity costing $1.4 trillion, to support these goals.
Skynet Chance (+0.09%): The explicit goal of creating autonomous AI researchers capable of independent scientific breakthroughs, coupled with pursuit of superintelligence "smarter than humans across critical actions," represents significant progress toward systems that could act beyond human control or oversight. The massive infrastructure commitment ($1.4 trillion) suggests these aren't aspirational goals but funded development plans.
Skynet Date (-2 days): OpenAI's concrete timeline (intern-level by 2026, full researcher by 2028, superintelligence within a decade) with massive financial backing ($1.4 trillion infrastructure) significantly accelerates the pace toward potentially uncontrollable advanced AI. The restructuring to remove non-profit limitations explicitly enables faster scaling and capital raising for these ambitious timelines.
AGI Progress (+0.06%): OpenAI's chief scientist publicly stating superintelligence is "less than a decade away" with concrete intermediate milestones (2026, 2028) represents a major assertion of rapid progress toward AGI. The technical approach combining algorithmic innovation with massive test-time compute scaling, plus demonstrated success matching top human performance in mathematics competitions, suggests tangible advancement.
AGI Date (-2 days): The specific timeline placing autonomous AI researchers at 2028 and superintelligence within a decade, backed by $1.4 trillion in committed infrastructure spending, dramatically accelerates expected AGI arrival compared to previous estimates. The corporate restructuring to enable unlimited capital raising removes a key constraint that previously slowed progress.
Anthropic CEO Claims AI Models Hallucinate Less Than Humans, Sees No Barriers to AGI
Anthropic CEO Dario Amodei stated that AI models likely hallucinate less than humans and that hallucinations are not a barrier to achieving AGI. He maintains his prediction that AGI could arrive as soon as 2026, claiming there are no hard blocks preventing AI progress. This contrasts with other AI leaders who view hallucinations as a significant obstacle to AGI.
Skynet Chance (+0.06%): Dismissing hallucination as a barrier to AGI suggests willingness to deploy systems that may make confident but incorrect decisions, potentially leading to misaligned actions. However, this represents an optimistic assessment rather than a direct increase in dangerous capabilities.
Skynet Date (-2 days): Amodei's aggressive 2026 AGI timeline and his assertion that no barriers exist suggest much faster progress than previously expected. The confidence in overcoming current limitations implies accelerated development toward potentially dangerous AI systems.
AGI Progress (+0.04%): The CEO's confidence that current limitations like hallucination are not fundamental barriers suggests continued steady progress toward AGI. His observation that "the water is rising everywhere" indicates broad advancement across AI capabilities.
AGI Date (-2 days): Maintaining a 2026 AGI timeline and asserting that no fundamental barriers exist significantly accelerates expected AGI arrival compared to more conservative estimates. This represents one of the most aggressive timelines from a major AI company leader.