Mixture-of-Experts AI News & Updates

DeepSeek Updates Prover V2 for Advanced Mathematical Reasoning

Chinese AI lab DeepSeek has released an upgraded version of its mathematics-focused AI model, Prover V2. It is built on the company's V3 model, a 671-billion-parameter system that uses a mixture-of-experts architecture. DeepSeek, which previously offered Prover for formal theorem proving and mathematical reasoning, is reportedly considering raising outside funding for the first time while continuing to update its model lineup.

Alibaba Launches Qwen3 Models with Advanced Reasoning Capabilities

Alibaba has released Qwen3, a family of AI models ranging from 0.6 billion to 235 billion parameters, claiming performance competitive with top models from Google and OpenAI. The models feature hybrid reasoning capabilities, support 119 languages, and use a mixture-of-experts (MoE) architecture for computational efficiency.
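The efficiency gain these MoE models claim comes from sparse activation: a gating network picks only a few experts per token, so compute scales with the number of experts selected rather than the total parameter count. A minimal sketch of top-k MoE routing (illustrative only; all sizes and weights here are made up, and this is not the actual Qwen3, DeepSeek, or Llama implementation):

```python
# Minimal top-k mixture-of-experts routing sketch (hypothetical sizes;
# not any lab's actual implementation). A gating network scores all
# experts for a token, but only the top-k experts actually run.
import numpy as np

rng = np.random.default_rng(0)

D, H = 16, 32            # hidden size and expert MLP width (made-up values)
NUM_EXPERTS, TOP_K = 8, 2

# Each "expert" is a tiny two-layer MLP; W_gate is the router.
W1 = rng.standard_normal((NUM_EXPERTS, D, H)) * 0.1
W2 = rng.standard_normal((NUM_EXPERTS, H, D)) * 0.1
W_gate = rng.standard_normal((D, NUM_EXPERTS)) * 0.1

def moe_forward(x):
    """Route one token vector x (shape [D]) through its top-k experts."""
    logits = x @ W_gate                    # one gating score per expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected k only
    out = np.zeros(D)
    for w, e in zip(weights, top):
        h = np.maximum(x @ W1[e], 0.0)     # expert MLP with ReLU
        out += w * (h @ W2[e])             # weighted sum of expert outputs
    return out

token = rng.standard_normal(D)
y = moe_forward(token)
print(y.shape)
```

Only `TOP_K` of the `NUM_EXPERTS` expert MLPs execute per token, which is why a model like DeepSeek V3 can hold 671 billion parameters while activating only a fraction of them on each forward pass.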

Meta Launches Advanced Llama 4 AI Models with Multimodal Capabilities and Trillion-Parameter Variant

Meta has released its new Llama 4 family of AI models, including Scout, Maverick, and the as-yet-unreleased Behemoth, featuring multimodal capabilities and a more efficient mixture-of-experts architecture. The models boast improvements in reasoning, coding, and document processing with expanded context windows, and Meta has also tuned them to refuse fewer controversial questions and to be more politically balanced.