Benchmarking: AI News & Updates

New Benchmark Reveals AI Agents Still Far From Replacing White-Collar Workers

A new benchmark called Apex-Agents tests leading AI models on real white-collar tasks drawn from consulting, investment banking, and law, and finds that even the best models achieve only about 24% accuracy. The models struggle primarily with tracking information across multiple domains, tools, and platforms, a core requirement of professional knowledge work. Despite these limitations, researchers note rapid year-over-year progress, with accuracy roughly quintupling compared with previous years' models.

Laude Institute Launches Slingshots Grant Program to Accelerate AI Research and Evaluation

The Laude Institute announced its first Slingshots grants program, providing fifteen AI research projects with funding, compute resources, and engineering support. The initial cohort focuses heavily on AI evaluation, including projects such as Terminal Bench and ARC-AGI, as well as new benchmarks for code optimization and white-collar AI agents.

OpenAI's GPT-5 Shows Near-Human Performance Across Professional Tasks in New Economic Benchmark

OpenAI released GDPval, a new benchmark testing AI models against human professionals across 44 occupations spanning nine major industries. GPT-5 performed at or above human expert level 40.6% of the time, while Anthropic's Claude Opus 4.1 led at 49%, marking significant progress over GPT-4o's 13.7% score just 15 months earlier.

Former Intel CEO Pat Gelsinger Launches Flourishing AI Benchmark for Human Values Alignment

Former Intel CEO Pat Gelsinger has partnered with faith tech company Gloo to launch the Flourishing AI (FAI) benchmark, designed to test how well AI models align with human values. The benchmark is based on The Global Flourishing Study from Harvard and Baylor University and evaluates AI models across seven categories: character, relationships, happiness, meaning, health, financial stability, and faith.