March 6, 2026 News
Claude AI Discovers 22 Security Vulnerabilities in Firefox Browser
Anthropic's Claude Opus 4.6 identified 22 vulnerabilities in Mozilla Firefox during a two-week security audit, 14 of them classified as high-severity. While Claude excelled at finding bugs, it struggled to produce working exploits, succeeding in only 2 of many attempts despite $4,000 in API costs.
Skynet Chance (+0.04%): Demonstrates AI's ability to autonomously discover security vulnerabilities in complex codebases, a dual-use capability: beneficial for defense, but potentially exploitable for finding attack vectors. The limited exploit-generation capability offers some reassurance, though it still reflects advancing offensive security capabilities.
Skynet Date (+0 days): The successful vulnerability discovery shows practical AI capabilities advancing in security domains, slightly accelerating the timeline for AI systems that could autonomously identify and potentially exploit system weaknesses. However, the poor exploit-generation performance suggests significant technical barriers remain.
AGI Progress (+0.03%): Demonstrates meaningful progress in AI's ability to understand and analyze complex, real-world codebases autonomously, finding subtle bugs that human testers missed. This represents advancement in reasoning, code comprehension, and systematic analysis capabilities relevant to AGI.
AGI Date (+0 days): Shows commercial AI models achieving practical utility in complex cognitive tasks like security auditing of production systems, indicating faster-than-expected progress in real-world problem-solving capabilities. The successful application to one of the most secure open-source projects suggests robust generalization abilities.
Pentagon Designates Anthropic Supply-Chain Risk After Contract Dispute Over Military AI Control
The Pentagon designated Anthropic as a supply-chain risk following failed negotiations over military control of its AI models for autonomous weapons and domestic surveillance. After Anthropic's $200 million contract collapsed, the DoD contracted with OpenAI instead, a shift followed by a 295% surge in ChatGPT uninstalls. The incident highlights tensions over military access to advanced AI systems.
Skynet Chance (-0.08%): Anthropic's refusal to grant unrestricted military control over its AI models demonstrates corporate resistance to potentially dangerous applications like autonomous weapons, slightly reducing risks of uncontrolled military AI deployment. However, OpenAI's acceptance of similar terms partially offsets this positive signal.
Skynet Date (+0 days): The dispute and subsequent designation as a supply-chain risk create friction and delays in military AI integration, slightly decelerating the timeline for deployment of advanced AI in autonomous weapons systems. Corporate pushback may slow adoption of less constrained military AI applications.
AGI Progress (0%): This is a contractual and governance dispute rather than a technical development, with no direct impact on underlying AI capabilities or progress toward general intelligence. The disagreement concerns deployment constraints, not fundamental research or capability advancement.
AGI Date (+0 days): Military contract disputes do not materially affect the pace of AGI research or development timelines, as this concerns application constraints rather than fundamental research velocity. Both companies continue their core AGI development work regardless of Pentagon relationships.
Anthropic's Claude Sees User Surge After Refusing Pentagon Military AI Contract
Anthropic's Claude AI chatbot saw significant growth in daily active users and app downloads after CEO Dario Amodei refused to allow Pentagon use of Claude for mass surveillance or autonomous weapons, a refusal that led to the company being designated a supply-chain risk. Claude's mobile app downloads now surpass ChatGPT's in the U.S., with daily active users reaching 11.3 million on March 2, up 183% from the start of the year. The app reached No. 1 on the U.S. App Store and in 15 other countries, with over 1 million daily sign-ups.
Skynet Chance (-0.08%): Anthropic's refusal to enable military applications like mass surveillance and autonomous weapons, coupled with the positive consumer response, suggests market forces can reward AI safety principles and responsible deployment. The commercial success of this ethical stance could encourage similar restraint across the industry, slightly reducing the risk of unchecked militarization.
Skynet Date (+0 days): The company's decision to forgo Pentagon contracts may slow development of autonomous military AI systems and surveillance capabilities, potentially delaying scenarios involving loss of control in high-stakes military contexts. However, this deceleration is modest as other companies may fill the gap.
AGI Progress (+0.01%): The news demonstrates Claude's competitive AI capabilities and growing market adoption, indicating continued progress in useful AI systems. However, this is primarily a market share story rather than a fundamental capability breakthrough, representing incremental rather than transformative progress toward AGI.
AGI Date (+0 days): While Claude's commercial success may provide more funding for Anthropic's research, the news primarily reflects user preferences rather than technical acceleration or deceleration. The Pentagon contract rejection doesn't materially change the pace of AGI research timelines.