September 29, 2025 News
California Enacts First-in-Nation AI Transparency and Safety Bill SB 53
California Governor Gavin Newsom signed SB 53, which requires major AI labs, including OpenAI, Anthropic, Meta, and Google DeepMind, to disclose their safety protocols and report critical incidents. The bill also provides whistleblower protections and creates mechanisms for reporting AI-related safety incidents to state authorities. It is the first state-level frontier AI safety legislation in the U.S., and it drew mixed industry reactions, with some companies lobbying against it.
Skynet Chance (-0.08%): Mandatory transparency and incident reporting requirements for major AI labs create oversight mechanisms that could help identify and address dangerous AI behaviors earlier, while whistleblower protections enable internal concerns to surface. These safety guardrails moderately reduce uncontrolled AI risk.
Skynet Date (+0 days): The transparency and reporting requirements may slightly slow frontier AI development as companies implement compliance measures, though the bill was designed to balance safety with continued innovation. The modest regulatory burden suggests minimal timeline deceleration.
AGI Progress (-0.01%): The bill focuses on transparency and safety reporting rather than restricting capabilities research or compute resources, suggesting minimal direct impact on technical AGI progress. Compliance overhead may marginally slow operational velocity at affected labs.
AGI Date (+0 days): Additional regulatory compliance requirements and incident reporting mechanisms may introduce modest administrative overhead that slightly decelerates the pace of frontier AI development. However, the bill's intentional balance between safety and innovation limits its timeline impact.
DeepSeek Introduces Sparse Attention Model Cutting Inference Costs by Half
DeepSeek released an experimental model, V3.2-exp, featuring a "Sparse Attention" mechanism that uses a lightning indexer and fine-grained token selection to dramatically reduce inference costs for long-context operations. Preliminary testing suggests API costs can be cut by roughly 50% in long-context scenarios, addressing the significant server costs of operating pre-trained AI models. The open-weight model is freely available on Hugging Face for independent verification and testing.
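For intuition, here is a minimal sketch of the general idea: a cheap indexer ranks tokens, and full attention is computed only over a top-k subset. The scoring stub, dimensions, and k value below are illustrative assumptions, not DeepSeek's actual lightning indexer.

    import numpy as np

    def sparse_attention(q, K, V, index_scores, k=256):
        """Attend only to the top-k keys ranked by a cheap indexer.

        q: (d,) query vector; K, V: (n, d) key and value matrices;
        index_scores: (n,) relevance scores from a lightweight indexer.
        Per-query cost drops from O(n*d) to O(k*d) once the indexer has run.
        """
        k = min(k, len(index_scores))
        top = np.argpartition(index_scores, -k)[-k:]   # fine-grained token selection
        logits = K[top] @ q / np.sqrt(q.shape[0])       # attention over selected tokens only
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()
        return weights @ V[top]

    # Illustrative usage: the "lightning indexer" is stubbed as a dot product
    # with a random projection (purely hypothetical, for demonstration only).
    rng = np.random.default_rng(0)
    n, d = 8192, 64
    q, K, V = rng.normal(size=(d,)), rng.normal(size=(n, d)), rng.normal(size=(n, d))
    index_scores = K @ rng.normal(size=(d,))            # cheap proxy relevance score
    out = sparse_attention(q, K, V, index_scores, k=256)
    print(out.shape)                                     # (64,)

The cost saving comes from the second step: the expensive softmax attention touches only k tokens, while the indexer's scoring pass is much cheaper per token.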
Skynet Chance (-0.03%): Lower inference costs make AI deployment more economically accessible and sustainable, potentially enabling better monitoring and alignment research through reduced resource barriers. However, it also enables broader deployment of powerful models, creating a minor mixed effect on control mechanisms.
Skynet Date (+0 days): Reduced inference costs enable more sustainable AI scaling and wider deployment, but this is primarily an efficiency gain rather than a capability breakthrough that would accelerate uncontrolled AI development. Any deceleration is modest, reflecting that greater economic sustainability may reduce the pressure for rushed deployment.
AGI Progress (+0.02%): The sparse attention breakthrough represents meaningful architectural progress in making transformer models more efficient at handling long-context operations, addressing a fundamental limitation in current AI systems. This optimization enables more practical deployment of advanced capabilities needed for AGI.
AGI Date (+0 days): Cutting inference costs by half significantly reduces economic barriers to scaling and deploying advanced AI systems, enabling more organizations to experiment with and advance long-context AI applications. This efficiency breakthrough accelerates the practical timeline for developing and deploying AGI-relevant capabilities.
OpenAI Launches In-Chat Shopping with Instant Checkout, Open-Sources Agentic Commerce Protocol
OpenAI has introduced "Instant Checkout," which lets ChatGPT users in the U.S. complete purchases from Etsy and Shopify merchants directly within conversations using payment methods such as Apple Pay, Google Pay, Stripe, or credit cards. The feature aims to create a frictionless shopping experience and positions OpenAI as a potential new gatekeeper in e-commerce, challenging Google's and Amazon's dominance in retail discovery. OpenAI is also open-sourcing its Agentic Commerce Protocol (ACP) to enable broader merchant integration and potentially establish itself as the architect of AI-powered commerce ecosystems.
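As a rough illustration of what an agent-initiated checkout might involve, here is a purely hypothetical request shape; the field names and structure are invented for this sketch and are not taken from the published ACP specification.

    from dataclasses import dataclass, field

    # Hypothetical message shape for an agent-initiated checkout; the field
    # names are invented for illustration and are NOT the published ACP spec.
    @dataclass
    class LineItem:
        sku: str
        quantity: int
        unit_price_cents: int

    @dataclass
    class CheckoutRequest:
        merchant_id: str
        buyer_token: str                 # opaque reference to a stored payment method
        items: list[LineItem] = field(default_factory=list)
        currency: str = "USD"

        def total_cents(self) -> int:
            return sum(i.quantity * i.unit_price_cents for i in self.items)

    req = CheckoutRequest(
        merchant_id="etsy:shop-123",
        buyer_token="tok_example",
        items=[LineItem(sku="mug-blue", quantity=2, unit_price_cents=1850)],
    )
    print(req.total_cents())  # 3700

The point of a shared protocol like this is that any merchant backend can accept the same structured request from any AI agent, rather than integrating with each assistant separately.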
Skynet Chance (+0.01%): This deployment demonstrates AI agents acting with increased autonomy in the real world (handling transactions and financial information), which incrementally advances capabilities that could become harder to control at scale. However, the application remains narrowly scoped to commerce with human oversight, posing minimal direct existential risk.
Skynet Date (+0 days): The deployment of autonomous AI agents in real-world commercial applications with access to payment systems slightly accelerates the timeline for AI systems operating independently in consequential domains. The open-sourcing of the protocol could further speed adoption of agentic systems across the economy.
AGI Progress (+0.01%): This represents practical deployment of agentic AI capabilities that can understand user intent, navigate complex multi-step processes, and coordinate between systems autonomously. The integration of reasoning, decision-making, and action execution in a real-world domain demonstrates meaningful progress toward more general AI systems.
AGI Date (+0 days): The successful commercialization and scaling of AI agents handling complex real-world tasks accelerates practical AGI development by providing data, infrastructure, and economic incentives for building more capable autonomous systems. Open-sourcing the protocol could further accelerate ecosystem development and iteration speed.
Anthropic Releases Claude Sonnet 4.5 with Advanced Autonomous Coding Capabilities
Anthropic launched Claude Sonnet 4.5, a new AI model that the company says achieves state-of-the-art coding performance and can build production-ready applications autonomously. The model has demonstrated the ability to code independently for up to 30 hours, performing complex tasks such as setting up databases, purchasing domains, and conducting security audits. Anthropic also claims improved alignment, with lower rates of sycophancy and deception and better resistance to prompt injection attacks.
Skynet Chance (+0.04%): The model's ability to autonomously execute complex multi-step tasks for extended periods (30 hours) with real-world capabilities like purchasing domains represents increased autonomous AI agency, though improved alignment claims provide modest mitigation. The leap toward "production-ready" autonomous systems operating with minimal human oversight incrementally increases control risks.
Skynet Date (-1 days): Autonomous coding runs of 30+ hours and real-world task execution accelerate the development of increasingly autonomous AI systems. However, the improved alignment features and focus on safety mechanisms provide some countervailing deceleration.
AGI Progress (+0.03%): The ability to autonomously complete complex, multi-hour software development tasks including infrastructure setup and security audits demonstrates significant progress toward general problem-solving capabilities. This represents a meaningful step beyond narrow coding assistance toward more general autonomous task completion.
AGI Date (-1 days): The rapid advancement in autonomous coding and the model's ability to handle extended, multi-step tasks suggest faster-than-expected progress in AI agency and reasoning. Commercial availability and demonstrated real-world application accelerate the timeline toward more general AI systems.
OpenAI Deploys GPT-5 Safety Routing System and Parental Controls Following Suicide-Related Lawsuit
OpenAI has implemented a new safety routing system that automatically switches ChatGPT to GPT-5-thinking during emotionally sensitive conversations, following a wrongful-death lawsuit over a teenager's suicide that had been linked to ChatGPT interactions. The company also introduced parental controls for teen accounts, including harm-detection systems that can alert parents or, in some cases, contact emergency services, though the rollout has received mixed reactions from users.
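The general pattern here is a classifier gating which model handles a given turn. The sketch below is a hypothetical illustration of that routing logic; the keyword stub, threshold, and model names are assumptions, not OpenAI's implementation.

    # Sketch of classifier-gated model routing; thresholds, labels, and model
    # names are illustrative assumptions, not OpenAI's actual system.
    SENSITIVE_TOPICS = {"self_harm", "suicide", "abuse"}

    def classify(message: str) -> tuple[str, float]:
        """Stand-in for a trained classifier returning (topic, confidence)."""
        lowered = message.lower()
        if "hurt myself" in lowered or "end it all" in lowered:
            return "self_harm", 0.92
        return "general", 0.99

    def route(message: str, threshold: float = 0.7) -> str:
        topic, confidence = classify(message)
        if topic in SENSITIVE_TOPICS and confidence >= threshold:
            return "gpt-5-thinking"      # slower, more deliberate model for sensitive turns
        return "default-chat-model"

    print(route("Sometimes I want to hurt myself"))  # gpt-5-thinking
    print(route("Plan my trip to Kyoto"))            # default-chat-model

In production such a gate would rely on a trained classifier rather than keywords, but the design choice is the same: spend extra compute and caution only on the conversations that appear to need it.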
Skynet Chance (-0.08%): The implementation of safety routing systems and harm detection mechanisms represents proactive measures to prevent AI systems from causing harm through misaligned responses. These safeguards directly address the problem of AI systems validating dangerous thinking patterns, reducing the risk of uncontrolled harmful outcomes.
Skynet Date (+1 days): The focus on implementing comprehensive safety measures and taking time for careful iteration (120-day improvement period) suggests a more cautious approach to AI deployment. This deliberate pacing of safety implementations may slow the timeline toward more advanced but potentially riskier AI systems.
AGI Progress (+0.01%): Routing sensitive conversations to GPT-5-thinking with added contextual safety features demonstrates progress in building more sophisticated AI systems that can handle complex, sensitive situations. However, the primary focus is on safety rather than general intelligence advancement.
AGI Date (+0 days): While the safety implementations show technical advancement, the emphasis on cautious rollout and extensive safety testing may slightly slow the pace toward AGI. The 120-day iteration period and the focus on getting safety right suggest a more measured approach to AI development.