cybersecurity AI News & Updates

Anthropic's Mythos AI Model Revolutionizes Firefox Vulnerability Detection

Anthropic's Mythos model has significantly enhanced Firefox's cybersecurity by discovering thousands of high-severity bugs, including some over a decade old, with Mozilla reporting a 13x increase in bug fixes compared to the previous year. The AI system excels at finding complex sandbox vulnerabilities that traditionally commanded $20,000 bounties, though human engineers are still required to write the actual patches. The advancement marks a turning point for AI security tools, which previously suffered from high false positive rates.

OpenAI Restricts Access to GPT-5.5 Cyber Tool Despite Criticizing Anthropic's Similar Approach

OpenAI is limiting access to its new cybersecurity tool, GPT-5.5 Cyber, releasing it only to "critical cyber defenders" through an application process, despite CEO Sam Altman previously criticizing Anthropic for taking the same approach with its Mythos tool. The tool can perform penetration testing, vulnerability identification, and malware reverse engineering, with concerns about potential misuse by malicious actors. OpenAI is consulting with the U.S. government to eventually expand access to verified cybersecurity professionals.

Anthropic's Mythos Cybersecurity AI Tool Reportedly Accessed by Unauthorized Group

An unauthorized group has allegedly gained access to Anthropic's Mythos, a powerful AI cybersecurity tool designed for enterprise security but potentially dangerous in wrong hands. The group reportedly accessed the tool through a third-party vendor on the same day it was announced, using knowledge of Anthropic's model naming conventions. Anthropic is investigating but has found no evidence of system compromise so far.

Anthropic Briefs Trump Administration on Unreleased Mythos AI Model with Advanced Cybersecurity Capabilities

Anthropic co-founder Jack Clark confirmed the company briefed the Trump administration on its new Mythos AI model, which possesses powerful cybersecurity capabilities deemed too dangerous for public release. This engagement occurs despite Anthropic's ongoing lawsuit against the Department of Defense over restrictions on military access to its AI systems. The company is also monitoring potential AI-driven employment impacts, particularly in early graduate employment across select industries.

U.S. Treasury and Federal Reserve Push Major Banks to Test Anthropic's Mythos Cybersecurity Model Despite Ongoing Government Conflict

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell encouraged major bank executives to use Anthropic's new Mythos AI model for detecting security vulnerabilities, with several major banks now reportedly testing it. This comes despite Anthropic's ongoing legal battle with the Trump administration over DoD supply-chain risk designation and concerns about the model being exceptionally capable at finding vulnerabilities. U.K. financial regulators are also discussing risks posed by Mythos.

Anthropic Restricts Mythos Cybersecurity Model to Enterprise Clients, Raising Questions About Motives

Anthropic has limited the release of its new AI model Mythos, claiming it is highly capable of finding security exploits, and will only share it with large enterprises like AWS and JPMorgan Chase rather than releasing it publicly. While Anthropic cites cybersecurity concerns, critics suggest the restricted release may also serve to protect against model distillation by competitors and create an enterprise revenue flywheel. Some AI security startups claim they can replicate Mythos's capabilities using smaller open-weight models, questioning whether the restriction is primarily about safety.

Anthropic Releases Mythos: Powerful Frontier AI Model for Cybersecurity Vulnerability Detection

Anthropic has released a limited preview of Mythos, described as one of its most powerful frontier AI models, to over 40 partner organizations including Amazon, Apple, Microsoft, and Cisco for defensive cybersecurity work. The model has reportedly identified thousands of zero-day vulnerabilities in software systems, some dating back one to two decades. While designed as a general-purpose model with strong coding and reasoning capabilities, concerns exist about potential weaponization by bad actors to exploit rather than fix vulnerabilities.

OpenAI Seeks New Head of Preparedness Amid Growing AI Safety Concerns

OpenAI is hiring a new Head of Preparedness to manage emerging AI risks, including cybersecurity vulnerabilities and mental health impacts. The position comes after the previous head was reassigned and follows updates to OpenAI's safety framework that may relax protections if competitors release high-risk models. The move reflects increasing concerns about AI capabilities in security exploitation and the psychological effects of AI chatbots.

AI Browser Agents Face Critical Security Vulnerabilities Through Prompt Injection Attacks

New AI-powered browsers from OpenAI and Perplexity feature agents that can perform tasks autonomously by navigating websites and filling forms, but they introduce significant security risks. Cybersecurity experts warn that these agents are vulnerable to "prompt injection attacks" where malicious instructions hidden on webpages can trick agents into exposing user data or performing unauthorized actions. While companies have introduced safeguards, researchers note that prompt injection remains an unsolved security problem affecting the entire AI browser category.

OpenAI Launches Atlas: AI-Powered Browser with Autonomous Agent Mode Debuts Despite Security Vulnerabilities

OpenAI has released Atlas, a ChatGPT-powered web browser that enables natural language navigation and features an autonomous "agent mode" for completing tasks independently. The launch represents a significant entry into the browser market but is marred by an unresolved security vulnerability that could potentially expose user passwords, emails, and other sensitive information.