March 13, 2025 News
Sesame Releases Open Source Voice AI Model with Few Safety Restrictions
AI company Sesame has open-sourced CSM-1B, the base model behind its realistic virtual assistant Maya, under the permissive Apache 2.0 license, which allows commercial use. The 1-billion-parameter model generates audio from text and audio inputs using residual vector quantization (RVQ), but it lacks meaningful safeguards against voice cloning or misuse, relying instead on an honor system that urges developers to avoid harmful applications.
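For context, residual vector quantization represents a signal with a stack of small codebooks, each quantizing the error left over by the previous stage. The sketch below is a toy illustration of that idea with random codebooks and assumed sizes; it is not Sesame's implementation, and all names here are illustrative.

```python
# Toy sketch of residual vector quantization (RVQ): each stage quantizes
# the residual left by the previous stage, so reconstruction error shrinks
# as stages are added. Codebooks here are random; real codecs learn them.
import numpy as np

rng = np.random.default_rng(0)
num_stages, codebook_size, dim = 4, 256, 16  # assumed toy sizes

# One codebook per stage.
codebooks = [rng.normal(size=(codebook_size, dim)) for _ in range(num_stages)]

def rvq_encode(x):
    """Return one code index per stage for vector x."""
    residual, codes = x, []
    for cb in codebooks:
        idx = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))  # nearest codeword
        codes.append(idx)
        residual = residual - cb[idx]  # next stage refines what's left
    return codes

def rvq_decode(codes):
    """Reconstruct x as the sum of the selected codewords."""
    return sum(cb[i] for cb, i in zip(codebooks, codes))

x = rng.normal(size=dim)
codes = rvq_encode(x)
x_hat = rvq_decode(codes)
print(codes, float(np.linalg.norm(x - x_hat)))
```

Stacking stages lets an audio codec trade bitrate for fidelity: each additional codebook refines the reconstruction, which is why RVQ is a common backbone for neural speech models.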
Skynet Chance (+0.09%): The release of powerful voice synthesis technology with minimal safeguards significantly increases the risk of widespread misuse, including fraud, misinformation, and impersonation at scale. This pattern of releasing increasingly capable AI systems without proportionate safety measures demonstrates a troubling prioritization of capabilities over control.
Skynet Date (-3 days): The proliferation of increasingly realistic AI voice technology without meaningful safeguards accelerates the timeline for potential AI misuse scenarios, as demonstrated by one reporter's ability to quickly clone voices for controversial content, suggesting we're entering an era of reduced AI control faster than anticipated.
AGI Progress (+0.04%): While voice synthesis alone doesn't represent AGI progress, the model's ability to convincingly replicate human speech patterns, including breaths and disfluencies, is an advance in AI's capacity to model and reproduce nuanced human behavior, a component of more general intelligence.
AGI Date (-1 day): The rapid commoditization of increasingly human-like AI capabilities through open-source releases suggests the timeline for achieving more generally capable AI systems may be accelerating, with fewer barriers to building and combining advanced capabilities across modalities.
OpenAI Advocates for US Restrictions on Chinese AI Models
OpenAI has submitted a proposal to the Trump administration recommending bans on "PRC-produced" AI models, specifically targeting the Chinese AI lab DeepSeek, which it describes as "state-subsidized" and "state-controlled." The proposal claims DeepSeek's models pose privacy and security risks because the Chinese government could potentially access user data, though OpenAI later issued a statement partially walking back its original, stronger position.
Skynet Chance (+0.05%): Escalating geopolitical tension in AI development could create race dynamics in which safety considerations become secondary to strategic advantage, potentially increasing the risk of unaligned AI development across multiple competing jurisdictions.
Skynet Date (-2 days): Political fragmentation of AI development could accelerate parallel research paths with reduced safety coordination, potentially shortening timelines for dangerous AI capabilities while hampering international alignment efforts.
AGI Progress (0%): The news focuses on geopolitical and regulatory posturing rather than technical advancements, with no direct impact on AI capabilities or fundamental AGI research progress.
AGI Date (+1 day): Regulatory barriers between major AI research regions could marginally slow overall AGI progress by reducing knowledge sharing and creating inefficiencies in global research, though the effect appears limited given the continued open publication of models.