Scaling Laws AI News & Updates

Researchers Propose "Inference-Time Search" as New AI Scaling Method with Mixed Expert Reception

Google and UC Berkeley researchers have proposed "inference-time search" as a potential new AI scaling method, in which a model generates many candidate answers to a query and the best one is selected. The researchers claim the approach lets older models like Google's Gemini 1.5 Pro outperform newer reasoning models like OpenAI's o1-preview on certain benchmarks, though outside experts are skeptical that it generalizes beyond problems with clear-cut evaluation metrics.
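The paper's method is only described at a high level here; as a rough illustration, the sketch below shows a generic best-of-N selection loop of the kind "inference-time search" refers to. The `generate`, `score`, and `n_candidates` names are hypothetical stand-ins for a sampling call and a verifier, not the researchers' actual implementation.

```python
import random
from typing import Callable, List


def inference_time_search(
    prompt: str,
    generate: Callable[[str], str],      # hypothetical: samples one candidate answer
    score: Callable[[str, str], float],  # hypothetical: verifier that rates a candidate
    n_candidates: int = 16,
) -> str:
    """Best-of-N style inference-time search: sample many candidate
    answers and return the one the scorer ranks highest."""
    candidates: List[str] = [generate(prompt) for _ in range(n_candidates)]
    return max(candidates, key=lambda answer: score(prompt, answer))


if __name__ == "__main__":
    # Toy stand-ins (no real model calls) just to show the control flow.
    def toy_generate(prompt: str) -> str:
        # Stand-in for sampling from an LLM at nonzero temperature.
        return f"answer-{random.randint(0, 9)}"

    def toy_score(prompt: str, answer: str) -> float:
        # Stand-in for a verifier; here it simply prefers higher trailing digits.
        return float(answer.rsplit("-", 1)[-1])

    print(inference_time_search("What is 2 + 2?", toy_generate, toy_score, n_candidates=8))
```

The experts' caveat maps directly onto the `score` function: the scheme only helps when a reliable way to rank candidates exists, which is why its applicability is questioned for problems without clear evaluation metrics.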

OpenAI Launches GPT-4.5 Orion with Diminishing Returns from Scale

OpenAI has released GPT-4.5 (codenamed Orion), its largest and most compute-intensive model to date, but its release shows signs that the gains from traditional scaling are diminishing. Although it outperforms previous GPT models in areas such as factual accuracy and creative tasks, it falls short of newer AI reasoning models on difficult academic benchmarks, suggesting the industry may be approaching the limits of unsupervised pre-training.