Responsible AI News & Updates

Lattice CEO Advocates for Human-Centric AI Implementation with Proper Oversight

Lattice CEO Sarah Franklin emphasizes the importance of maintaining human oversight and "checks and balances" when implementing AI in business operations. She argues that companies should prioritize people and customers over AI efficiency, stressing that trust, transparency, and human accountability are essential for successful AI adoption. Franklin believes that human connection cannot be replaced and that the most successful companies will be those that put people first while using AI as an augmentation tool rather than a replacement.

Former Y Combinator President Launches AI Safety Investment Fund

Geoff Ralston, former president of Y Combinator, has established the Safe Artificial Intelligence Fund (SAIF) focused on investing in startups working on AI safety, security, and responsible deployment. The fund will provide $100,000 investments to startups focused on improving AI safety through various approaches, including clarifying AI decision-making, preventing misuse, and developing safer AI tools, though it explicitly excludes fully autonomous weapons.

Former OpenAI Policy Lead Accuses Company of Misrepresenting Safety History

Miles Brundage, OpenAI's former head of policy research, criticized the company for mischaracterizing its historical approach to AI safety in a recent document. Brundage specifically challenged OpenAI's characterization of its cautious GPT-2 release strategy as being inconsistent with its current deployment philosophy, arguing that the incremental release was appropriate given information available at the time and aligned with responsible AI development.

OpenAI Delays API Release of Deep Research Model Due to Persuasion Concerns

OpenAI has decided not to release its deep research model through its developer API while it reconsiders how it assesses AI persuasion risks. The model, an optimized version of OpenAI's o3 reasoning model, outperformed the company's other available models on persuasion benchmarks in internal testing, raising misuse concerns, even though the model's high computing costs would limit the scale of any abuse.