Diffusion Models AI News & Updates

Inception Raises $50M to Develop Faster Diffusion-Based AI Models for Code Generation

Inception, a startup led by Stanford professor Stefano Ermon, has raised $50 million in seed funding to develop diffusion-based AI models for code and text generation. Unlike autoregressive models such as GPT, Inception's approach uses iterative refinement, similar to image generation systems; the company claims throughput of over 1,000 tokens per second with lower latency and compute costs. Inception has released its Mercury model for software development, which is already integrated into several development tools.

Stanford Professor's Startup Develops Revolutionary Diffusion-Based Language Model

Inception, a startup founded by Stanford professor Stefano Ermon, has developed a new type of AI model called a diffusion-based language model (DLM), which the company claims matches traditional LLM capabilities while being 10 times faster and 10 times less expensive. Unlike sequential LLMs, which produce one token at a time, these models generate and refine large blocks of text in parallel, potentially transforming how language models are built and deployed.
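The parallel-refinement idea described above can be illustrated with a toy sketch. This is not Inception's actual architecture; it is a minimal, hypothetical demonstration of the masked-denoising style of text diffusion, where generation starts from a fully masked sequence and each step fills in many positions at once, rather than one token per step as in autoregressive decoding. All names and the hard-coded "target" sequence here are invented for illustration.

```python
import random

MASK = "<mask>"
# Toy stand-in for what a trained denoiser would predict at each position.
TARGET = ["the", "cat", "sat", "on", "the", "mat"]

def denoise_step(seq, confidence):
    """One parallel refinement step: every masked position may be filled.

    A real diffusion LM would score all positions in a single forward
    pass; here we simply reveal each masked token with some probability.
    """
    return [
        TARGET[i] if tok == MASK and random.random() < confidence else tok
        for i, tok in enumerate(seq)
    ]

def diffusion_generate(length, steps=4, confidence=0.6):
    """Generate a sequence by iterative parallel denoising."""
    seq = [MASK] * length
    for _ in range(steps):
        seq = denoise_step(seq, confidence)
    # Final step: resolve any positions still masked.
    return [TARGET[i] if tok == MASK else tok for i, tok in enumerate(seq)]

random.seed(0)
result = diffusion_generate(len(TARGET))
```

The key contrast with autoregressive decoding is the number of model invocations: generating the six tokens above takes five denoising passes regardless of sequence length, whereas a sequential LLM would need one forward pass per token. That fixed (or slowly growing) step count is the source of the claimed throughput advantage.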