Multimodal AI News & Updates

Google Deploys Veo 3 Video Generation AI Model to Global Gemini Users

Google has rolled out its Veo 3 video generation model to Gemini users in over 159 countries, allowing paid subscribers to create 8-second videos from text prompts. The service is limited to 3 videos per day for AI Pro plan subscribers, with image-to-video capabilities planned for future release.

Google Launches Real-Time Voice Conversations with AI-Powered Search

Google has introduced Search Live, enabling back-and-forth voice conversations with its AI Mode search feature using a custom version of Gemini. Users can now engage in free-flowing voice dialogues with Google Search, receiving AI-generated audio responses and exploring web links conversationally. The feature supports multitasking and background operation, with plans to add real-time camera-based queries in the future.

Google Integrates Project Astra's Real-Time Multimodal AI Across Search and Developer APIs

Google announced Project Astra will power new real-time, multimodal AI experiences across Search, Gemini, and developer tools through its Live API. The technology enables low-latency voice and visual interactions, with plans for smart glasses partnerships with Samsung and Warby Parker, though no launch date is set.

Amazon Releases Nova Premier: High-Context AI Model with Mixed Benchmark Performance

Amazon has launched Nova Premier, its most capable AI model in the Nova family, which can process text, images, and videos with a context length of 1 million tokens. While it performs well on knowledge retrieval and visual understanding tests, it lags behind competitors like Google's Gemini on coding, math, and science benchmarks and lacks reasoning capabilities found in models from OpenAI and DeepSeek.

OpenAI Releases Advanced AI Reasoning Models with Enhanced Visual and Coding Capabilities

OpenAI has launched o3 and o4-mini, new AI reasoning models designed to pause and think through questions before responding, with significant improvements in math, coding, reasoning, science, and visual understanding capabilities. The models outperform previous iterations on key benchmarks, can integrate with tools like web browsing and code execution, and uniquely can "think with images" by analyzing visual content during their reasoning process.
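Multimodal prompts for such models typically pair text with image references in a single message. As a minimal sketch, the following builds a request body in the widely used OpenAI chat-completions message format; the model name and image URL are illustrative placeholders, not details confirmed by the announcement.

```python
# Hypothetical sketch: constructing a text-plus-image request body in the
# OpenAI chat-completions message format. The model name "o3" and the
# image URL are illustrative assumptions for this example.
def build_image_reasoning_request(model: str, question: str, image_url: str) -> dict:
    """Return a request body pairing a text question with an image."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_image_reasoning_request(
    "o3",  # illustrative model name from the announcement
    "What does this diagram show?",
    "https://example.com/diagram.png",  # placeholder image
)
```

A client library or raw HTTP call would then send this body to the provider's chat-completions endpoint; the structure itself is what lets the model reason over both modalities in one turn.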

Google Plans to Combine Gemini Language Models with Veo Video Generation Capabilities

Google DeepMind CEO Demis Hassabis announced plans to eventually merge the company's Gemini AI models with its Veo video-generation models to create more capable multimodal systems with a better understanding of the physical world. This aligns with the broader industry trend toward "omni" models that can understand and generate multiple forms of media, with Hassabis noting that Veo's physical-world understanding comes largely from training on YouTube videos.

Meta Launches Advanced Llama 4 AI Models with Multimodal Capabilities and Trillion-Parameter Variant

Meta has released its new Llama 4 family of AI models, including Scout, Maverick, and the unreleased Behemoth, featuring multimodal capabilities and a more efficient mixture-of-experts architecture. The models boast improvements in reasoning, coding, and document processing with expanded context windows, while Meta has also adjusted them to refuse fewer controversial questions and achieve better political balance.

Microsoft Enhances Copilot with Web Browsing, Action Capabilities, and Improved Memory

Microsoft has significantly upgraded its Copilot AI assistant with new capabilities including performing actions on websites, remembering user preferences, analyzing real-time video, and creating podcast-like content summaries. These features, similar to those offered by competitors like OpenAI's Operator and Google's Gemini, allow Copilot to complete tasks such as booking tickets and reservations across partner websites.

Elon Musk's xAI Acquires Hotshot to Accelerate Video Generation Capabilities

Elon Musk's AI company, xAI, has acquired Hotshot, a startup specializing in AI-powered video generation technologies similar to OpenAI's Sora. The acquisition positions xAI to integrate video generation capabilities into its Grok platform, with Musk previously indicating that a "Grok Video" model could be released within months.

Baidu Unveils Ernie 4.5 and Ernie X1 Models with Multimodal Capabilities

Chinese tech giant Baidu has launched two new AI models: Ernie 4.5, featuring enhanced emotional intelligence for understanding memes and satire, and Ernie X1, a reasoning model claimed to match DeepSeek R1's performance at half the cost. Both models offer multimodal capabilities for processing text, images, video, and audio, with plans for a more advanced Ernie 5 model later this year.