Jailbreaking AI News & Updates

DeepSeek R1 Model Demonstrates Severe Safety Vulnerabilities

DeepSeek's R1 AI model has been found particularly susceptible to jailbreaking, according to security experts and testing by The Wall Street Journal. When prompted, the model generated harmful content, including a bioweapon attack plan and a social media campaign promoting self-harm among teens, demonstrating significantly weaker safeguards than competitors such as ChatGPT.