March 19, 2025 News

OpenAI Releases Premium o1-pro Model at Record-Breaking Price Point

OpenAI has released o1-pro, an enhanced version of its reasoning-focused o1 model, to a select group of API developers. The model costs $150 per million input tokens and $600 per million output tokens, making it OpenAI's most expensive model to date, with prices far exceeding those of GPT-4.5 and the standard o1 model.
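
To make the pricing concrete, here is a minimal sketch of the per-request cost arithmetic implied by those rates; the token counts in the example are illustrative assumptions, not measured figures from OpenAI.

```python
# Minimal sketch: estimating the cost of a single o1-pro API call from the
# published per-token prices ($150 per 1M input tokens, $600 per 1M output tokens).
# The example token counts are assumptions chosen for illustration.

INPUT_PRICE_PER_M = 150.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 600.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

if __name__ == "__main__":
    # Example: a 2,000-token prompt with a 10,000-token reasoning-heavy reply.
    print(f"${estimate_cost(2_000, 10_000):.2f}")  # -> $6.30
```

At these rates, a handful of long reasoning-style completions can cost more than many thousands of calls to a smaller model, which is why the price point drew so much attention.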

OpenAI's Noam Brown Claims Reasoning AI Models Could Have Existed Decades Earlier

OpenAI's AI reasoning research lead Noam Brown suggested at Nvidia's GTC conference that certain reasoning AI models could have been developed 20 years earlier if researchers had used the right approach. Brown, who previously worked on game-playing AI systems such as the poker bot Pluribus and helped create OpenAI's reasoning model o1, also addressed the challenges academia faces in competing with frontier AI labs, identifying AI benchmarking as an area where academics could make significant contributions despite their compute limitations.

California AI Policy Group Advocates Anticipatory Approach to Frontier AI Safety Regulations

A California policy group co-led by AI pioneer Fei-Fei Li released a 41-page interim report advocating for AI safety laws that anticipate future risks, even those not yet observed. The report recommends increased transparency from frontier AI labs through mandatory safety test reporting, third-party verification, and enhanced whistleblower protections, while acknowledging that evidence for extreme AI threats remains inconclusive but emphasizing that the stakes of inaction are high.

Researchers Propose "Inference-Time Search" as New AI Scaling Method with Mixed Expert Reception

Google and UC Berkeley researchers have proposed "inference-time search" as a potential new AI scaling method: generate many candidate answers to a query, then select the best one. The researchers claim this approach can elevate the performance of older models like Google's Gemini 1.5 Pro above newer reasoning models like OpenAI's o1-preview on certain benchmarks, though AI experts are skeptical that it applies broadly beyond problems with clear evaluation metrics.
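
The core idea resembles best-of-N selection with a verifier. Below is a minimal sketch of that pattern, assuming hypothetical `generate` and `score` functions standing in for a model call and an evaluation metric; the paper's actual sampling and selection procedure may differ in detail.

```python
# Minimal sketch of the inference-time search idea described above:
# sample several candidate answers for the same query, score each with a
# verifier, and return the highest-scoring one. `generate` and `score` are
# hypothetical stand-ins, not the researchers' actual implementation.
import random
from typing import Callable

def inference_time_search(
    query: str,
    generate: Callable[[str], str],      # produces one candidate answer
    score: Callable[[str, str], float],  # verifier; higher is better
    num_samples: int = 16,
) -> str:
    candidates = [generate(query) for _ in range(num_samples)]
    return max(candidates, key=lambda answer: score(query, answer))

if __name__ == "__main__":
    # Toy demo with a stubbed "model": candidates closest to 42 score best.
    generate = lambda q: str(random.randint(0, 100))
    score = lambda q, a: -abs(int(a) - 42)
    print(inference_time_search("What is the answer?", generate, score))
```

The experts' caveat maps directly onto the `score` function: the method only helps when a reliable, cheap way to judge candidate answers exists, which many open-ended tasks lack.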

AI Researchers Challenge AGI Timelines, Question LLMs' Path to Human-Level Intelligence

Several prominent AI leaders, including Hugging Face's Thomas Wolf, Google DeepMind's Demis Hassabis, Meta's Yann LeCun, and former OpenAI researcher Kenneth Stanley, are expressing skepticism about near-term AGI predictions. They argue that current large language models (LLMs) face fundamental limitations, particularly in creativity and in posing original questions rather than merely producing answers, and suggest that new architectural approaches may be needed to reach true human-level intelligence.