Current AI Risk Assessment

24.65%

Chance of AI Control Loss

November 30, 2035

Estimated Date of Control Loss

AGI Development Metrics

75.59%

AGI Progress

December 16, 2029

Estimated Date of AGI

Risk Trend Over Time

Latest AI News (Last 3 Days)

February 28, 2026
+0.06% Risk

OpenAI Secures Pentagon AI Contract with Safety Protections Amid Anthropic Standoff

OpenAI has reached an agreement with the Department of Defense to deploy its AI models on classified networks, including technical safeguards against mass domestic surveillance and autonomous weapons. This follows a public conflict between the Pentagon and Anthropic over usage restrictions, which resulted in Trump administration threats to designate Anthropic as a supply-chain risk and ban federal agencies from using its products. OpenAI says its deal addresses the same ethical concerns Anthropic raised, and it is asking the government to extend these terms to all AI companies.

February 27, 2026
-0.19% Risk

Trump Administration Terminates Federal Use of Anthropic AI Following Defense Dispute Over Surveillance and Autonomous Weapons

President Trump ordered all federal agencies to stop using Anthropic products within six months following a dispute with the Department of Defense. The conflict arose when Anthropic refused to allow its AI models to be used for mass domestic surveillance or fully autonomous weapons, positions that Defense Secretary Pete Hegseth deemed too restrictive. Anthropic CEO Dario Amodei maintained the company's stance on these ethical safeguards despite the federal ban.

Pentagon Threatens Anthropic Over Restrictions on Military AI Use for Autonomous Weapons and Surveillance

Anthropic CEO Dario Amodei is in conflict with Defense Secretary Pete Hegseth over the company's refusal to allow its AI models to be used for mass surveillance of Americans or for fully autonomous weapons without human oversight. The Pentagon has threatened to designate Anthropic a supply-chain risk and set a Friday deadline for the company to permit "lawful use" of its technology, while Anthropic maintains its models are not yet safe enough for such applications. The dispute centers on whether AI companies can impose usage restrictions on government military deployments or whether the Pentagon should have unrestricted access to any lawful application of the technology.

OpenAI Secures $110B Funding Round as ChatGPT User Base Reaches 900M Weekly Active Users

OpenAI announced that ChatGPT has reached 900 million weekly active users and 50 million paying subscribers, with January and February 2026 projected to be record months for new subscriptions. The company simultaneously disclosed a massive $110 billion private funding round led by Amazon ($50B), Nvidia ($30B), and SoftBank ($30B), valuing OpenAI at $730 billion pre-money. The funding round remains open for additional investors.

State Legislator Faces Silicon Valley Backlash Over AI Safety Regulation Efforts

New York State Assemblymember Alex Bores sponsored the RAISE Act, New York's first AI safety law, and became a target of a Silicon Valley lobbying group spending $125 million on attack ads. The fight reflects a broader regulatory battle, as communities block data center construction and the debate polarizes into "doomers versus boomers." Bores is attempting to navigate a middle path on AI regulation while running for U.S. Congress.

AI Industry Employees Rally Behind Anthropic's Resistance to Pentagon Demands for Unrestricted Military AI Access

Anthropic is resisting Pentagon demands for unrestricted access to its AI technology, specifically opposing use for domestic mass surveillance and autonomous weaponry. Over 300 Google and 60 OpenAI employees have signed an open letter supporting Anthropic's stance, urging their companies to maintain these boundaries. The Pentagon has threatened to invoke the Defense Production Act or label Anthropic a supply chain risk if the company doesn't comply by Friday's deadline.

OpenAI Secures Historic $110B Funding Round, Led by Amazon, Nvidia, and SoftBank

OpenAI announced a $110 billion private funding round with investments from Amazon ($50B), Nvidia ($30B), and SoftBank ($30B), against a $730 billion pre-money valuation. The funding includes major infrastructure partnerships with Amazon and Nvidia, with significant portions likely provided as compute services rather than cash. The round remains open for additional investors, with $35 billion of Amazon's investment potentially contingent on OpenAI achieving AGI or completing an IPO by year-end.

February 26, 2026
-0.06% Risk

Anthropic Refuses Pentagon's Demand for Unrestricted Military AI Access

Anthropic CEO Dario Amodei has declined the Pentagon's request for unrestricted access to its AI systems, citing concerns about mass surveillance and fully autonomous weapons. The refusal comes ahead of a Friday deadline set by Defense Secretary Pete Hegseth, who has threatened to label Anthropic a supply-chain risk or invoke the Defense Production Act. Amodei says Anthropic will work toward a smooth transition if the military chooses to end the partnership rather than accept safeguards on these two use cases.

Trace Secures $3M to Enable Enterprise AI Agent Deployment Through Context Engineering

Trace, a Y Combinator-backed startup, has raised $3 million to solve AI agent adoption challenges in enterprises by building knowledge graphs that provide agents with necessary context about corporate environments and processes. The platform maps existing tools like Slack and email to create workflows that delegate tasks between AI agents and human workers. The company positions its approach as "context engineering" rather than prompt engineering, aiming to become the infrastructure layer for AI-first companies.

Figma Integrates OpenAI's Codex to Bridge Design and Development Workflows

Figma has partnered with OpenAI to integrate Codex, an AI coding tool, allowing users to seamlessly transition between design and code environments. This follows a similar integration with Anthropic's Claude Code and aims to enable both designers and engineers to work more fluidly across visual and code-based interfaces. OpenAI reports over a million weekly Codex users, with its MacOS app downloaded a million times in its first week.


AI Risk Assessment Methodology

Our risk assessment methodology combines continuous news monitoring with a probabilistic impact model to evaluate AI development and its potential implications:

Data Collection

We continuously monitor and aggregate AI news from leading research institutions, tech companies, and policy organizations worldwide. Our system analyzes hundreds of developments daily across multiple languages and sources.

Impact Analysis

Each news item undergoes rigorous assessment through:

  • Technical Evaluation: Analysis of computational advancements, algorithmic breakthroughs, and capability improvements
  • Safety Research: Progress in alignment, interpretability, and containment mechanisms
  • Governance Factors: Regulatory developments, industry standards, and institutional safeguards
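As a minimal sketch of how these three dimensions might combine into one signed score per news item (the weights, sign convention, and function name here are illustrative assumptions, not the production model):

```python
# Hypothetical sign convention: capability advances push risk up,
# while safety and governance progress push it down.
DIMENSION_WEIGHTS = {
    "technical": +1.0,    # computational/algorithmic capability gains
    "safety": -0.8,       # alignment, interpretability, containment progress
    "governance": -0.5,   # regulation, standards, institutional safeguards
}

def net_impact(scores):
    """Combine per-dimension scores (each in [0, 1]) into one signed
    impact value; positive means risk-increasing overall."""
    return sum(DIMENSION_WEIGHTS[d] * scores.get(d, 0.0)
               for d in DIMENSION_WEIGHTS)
```

Under this convention, a pure capability breakthrough yields a positive score, while a safety or governance development offsets it toward negative.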

Indicator Calculation

Our indicators are updated using a Bayesian probabilistic model that:

  • Assigns weighted impact scores to each analyzed development
  • Calculates cumulative effects on control loss probability and AGI timelines
  • Accounts for interdependencies between different technological trajectories
  • Maintains historical trends to identify acceleration or deceleration patterns
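A Bayesian-style update of this kind is often performed additively in log-odds space, which keeps the probability bounded in (0, 1). A minimal sketch, assuming a hypothetical per-item weight and signed impact score (not the site's actual model):

```python
import math

def update_probability(prior, impact_score, weight=1.0):
    """Shift a probability by a weighted impact score in log-odds space;
    positive scores raise the probability, negative scores lower it,
    and the result always stays strictly between 0 and 1."""
    log_odds = math.log(prior / (1.0 - prior)) + weight * impact_score
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: folding a small risk-increasing development into the
# current control-loss estimate (24.65%).
p_new = update_probability(0.2465, impact_score=0.0032)
```

Daily risk deltas like the "+0.06% Risk" annotations above would then be the difference between successive updated probabilities.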

This methodology enables data-driven forecasting while acknowledging the inherent uncertainties in predicting transformative technological change.