Inference Optimization AI News & Updates

DeepSeek Introduces Sparse Attention Model Cutting Inference Costs by Half

DeepSeek released an experimental model, V3.2-exp, featuring a "Sparse Attention" mechanism that pairs a lightning indexer with fine-grained token selection to sharply reduce the cost of long-context inference. Preliminary testing shows API costs falling by roughly 50% in long-context scenarios, targeting one of the largest expenses in operating pre-trained AI models: serving them. The open-weight model is freely available on Hugging Face for independent verification and testing.
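DeepSeek's materials describe the mechanism only at a high level. As an illustration of the general idea rather than DeepSeek's implementation, here is a minimal PyTorch sketch in which a low-dimensional "indexer" cheaply scores all key positions and full attention then runs only over each query's top-k selections; all shapes, names, and the plain top-k rule are invented for the example:

```python
import torch
import torch.nn.functional as F

def sparse_attention(q, k, v, iq, ik, top_k):
    """Attend each query only to the top_k keys picked by a cheap indexer.

    q, k, v: (T, d) full projections for one attention head.
    iq, ik:  (T, r) low-dimensional indexer projections, r << d.
    """
    T, d = q.shape
    neg_inf = float("-inf")

    # 1) Cheap indexer pass: score every causal (query, key) pair in r dims.
    scores = iq @ ik.T                                    # (T, T)
    causal = torch.tril(torch.ones(T, T, dtype=torch.bool))
    scores = scores.masked_fill(~causal, neg_inf)

    # 2) Fine-grained token selection: keep the top_k keys per query.
    k_eff = min(top_k, T)
    top_idx = scores.topk(k_eff, dim=-1).indices          # (T, k_eff)

    # 3) Full attention restricted to the selected keys.
    k_sel, v_sel = k[top_idx], v[top_idx]                 # (T, k_eff, d)
    logits = (q.unsqueeze(1) * k_sel).sum(-1) / d ** 0.5  # (T, k_eff)
    # Drop selections that landed on masked positions (early queries
    # have fewer than top_k valid keys).
    logits = logits.masked_fill(scores.gather(-1, top_idx) == neg_inf, neg_inf)
    weights = F.softmax(logits, dim=-1)                   # (T, k_eff)
    return (weights.unsqueeze(-1) * v_sel).sum(dim=1)     # (T, d)

torch.manual_seed(0)
T, d, r = 512, 64, 8
q, k, v = torch.randn(3, T, d).unbind(0)
iq, ik = torch.randn(2, T, r).unbind(0)
out = sparse_attention(q, k, v, iq, ik, top_k=64)
print(out.shape)  # torch.Size([512, 64])
```

Even in this toy form, the expensive d-dimensional attention touches only top_k keys per query instead of all preceding tokens; the indexer pass is still quadratic in sequence length but runs in a much smaller dimension, which is where long-context savings would come from.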

Spanish Startup Raises $215M for AI Model Compression Technology Reducing LLM Size by 95%

Spanish startup Multiverse Computing raised €189 million ($215M) in Series B funding for its CompactifAI technology, which uses quantum-computing-inspired compression to shrink LLMs by up to 95% while, the company says, preserving performance. It offers compressed versions of open-source models such as Llama and Mistral that run 4x-12x faster and cut inference costs by 50%-80%, enabling deployment on hardware ranging from PCs to a Raspberry Pi. Founded by quantum physics professor Román Orús and former banking executive Enrique Lizaso Olmos, the company claims 160 patents and 100 customers globally.
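Multiverse has not published CompactifAI's internals; coverage describes it as tensor-network ("quantum-inspired") compression, whose simplest relative is low-rank factorization of weight matrices. The NumPy toy below uses truncated SVD on a synthetic, deliberately redundant layer (all sizes invented) purely to show how factorization can remove roughly 95% of a matrix's parameters while approximately preserving its behavior:

```python
import numpy as np

def compress_linear(W, rank):
    """Replace a dense weight matrix W (m x n) with thin factors
    A (m x rank) and B (rank x n) via truncated SVD, so W ~= A @ B."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # absorb singular values into the left factor
    B = Vt[:rank]
    return A, B

rng = np.random.default_rng(0)
m = n = 2048
# A synthetic, deliberately redundant "layer": low-rank signal plus noise.
W = rng.standard_normal((m, 48)) @ rng.standard_normal((48, n))
W += 0.01 * rng.standard_normal((m, n))

A, B = compress_linear(W, rank=48)
orig, comp = W.size, A.size + B.size
print(f"params: {orig:,} -> {comp:,} ({100 * (1 - comp / orig):.1f}% fewer)")
print(f"relative error: {np.linalg.norm(W - A @ B) / np.linalg.norm(W):.4f}")
```

Real tensor-network methods decompose weights into chains of small cores (e.g. matrix product operators) rather than a single rank cut, typically followed by a brief fine-tune to recover accuracy, but the parameter-count arithmetic behind the headline 95% figure works the same way.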