March 19, 2026

AI Briefing

Mistral just made GitHub Copilot sweat, the EU started handing out real fines, and a Berkeley paper might let you run 70B models on your laptop.

Mistral Releases Codex 2 as Fully Open-Source Model
Open Source · TechCrunch

Mistral unveiled Codex 2, a 32B-parameter code generation model released under the Apache 2.0 license. The model matches GPT-4o on the HumanEval benchmark while running on a single A100 GPU. This matters because it gives companies a production-grade code assistant they can self-host without vendor lock-in, directly challenging GitHub Copilot's dominance.

Google Announces Project Astra Integration in Android Studio
Product · The Verge

Google revealed that Project Astra, its multimodal AI assistant, will be deeply integrated into Android Studio by Q3 2026. Developers will be able to point their camera at a whiteboard sketch and have Astra generate the corresponding Jetpack Compose UI code. The integration signals Google's strategy to make AI-first development the default workflow, not an add-on.

EU AI Act Enforcement Begins for High-Risk Systems

The first enforcement wave of the EU AI Act took effect today, requiring companies deploying high-risk AI systems in healthcare, law enforcement, and employment to register their systems and complete conformity assessments. Non-compliance carries fines up to 3% of global revenue. This marks the world's first binding AI regulation with real enforcement teeth.

Research · ArXiv

Researchers Achieve 10x Inference Speedup with Speculative Decoding Breakthrough

A team from UC Berkeley published a paper demonstrating a new speculative decoding technique that achieves 10x faster inference on 70B+ parameter models without quality degradation. The technique uses a cascade of draft models with adaptive verification, making real-time LLM responses feasible on consumer hardware. The research is already being integrated into vLLM and llama.cpp.
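The core idea behind speculative decoding can be sketched without the paper's cascade-and-adaptive-verification machinery: a cheap draft model proposes a run of tokens, and the expensive target model verifies the whole run in a single pass, so several tokens can be accepted per target call. A toy, deterministic Python illustration (the `TARGET`/`DRAFT` dictionaries standing in for models, and the draft length `k`, are invented for this sketch and are not the Berkeley method):

```python
# Toy "models": each maps the current token to the next one.
# The draft model is cheap but sometimes wrong; the target is ground truth.
TARGET = {"the": "quick", "quick": "brown", "brown": "fox", "fox": "jumps"}
DRAFT  = {"the": "quick", "quick": "brown", "brown": "dog", "fox": "jumps"}

def speculative_decode(prompt, n_tokens, k=3):
    """Generate n_tokens after prompt, counting expensive target passes."""
    out = [prompt]
    target_calls = 0
    while len(out) - 1 < n_tokens:
        ctx = out[-1]
        # --- draft phase: cheap model proposes up to k tokens ---
        proposal, c = [], ctx
        while len(proposal) < k and c in DRAFT:
            c = DRAFT[c]
            proposal.append(c)
        # --- verify phase: one target pass scores the whole proposal ---
        target_calls += 1
        before = len(out)
        for tok in proposal:
            if TARGET.get(ctx) == tok:      # draft agreed with target: accept
                out.append(tok)
                ctx = tok
                if len(out) - 1 == n_tokens:
                    break
            else:                           # first disagreement: emit the
                fix = TARGET.get(ctx)       # target's own token and stop
                if fix is not None:
                    out.append(fix)
                break
        if not proposal:                    # draft had nothing: plain step
            fix = TARGET.get(ctx)
            if fix is not None:
                out.append(fix)
        if len(out) == before:              # no progress: sequence ended
            break
    return out, target_calls

tokens, calls = speculative_decode("the", 4)
print(" ".join(tokens), f"({calls} target passes vs 4 autoregressive)")
# the quick brown fox jumps (2 target passes vs 4 autoregressive)
```

Here two target passes produce four tokens, versus one pass per token for plain autoregressive decoding; the speedup grows with how often the draft agrees with the target.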

Anthropic Partners with Thomson Reuters for Legal AI
Industry · TechCrunch

Anthropic announced a partnership with Thomson Reuters to build AI tools for legal professionals using Claude. The collaboration will embed Claude into Westlaw and Practical Law, enabling lawyers to research case law, draft documents, and analyze contracts with AI assistance. This is notable as one of the largest enterprise AI deployments in a regulated industry.

Test Your Understanding

Why is Mistral's Codex 2 release significant for enterprises?

Today's AI Briefing (5 stories)
May 8, 2026

Summary

Anthropic just quietly became the highest-revenue AI lab on the planet, hitting $30B ARR while spending a fraction of what OpenAI burns on training. Google is internally testing a 24/7 personal AI agent called Remy that can make purchases on your behalf, Nvidia bet half a billion on fiber optics with Corning to wire up the next wave of AI data centers, and Arm's earnings reveal that agentic AI is about to quadruple the CPU demand nobody saw coming. Oh, and the Senate unanimously agreed on something for once: banning AI companions for kids.

Top Stories

Anthropic Hits $30B ARR, Officially Surpasses OpenAI in Revenue While Spending 4x Less on Training

Google Internally Tests 'Remy,' a 24/7 Personal AI Agent That Can Make Purchases on Your Behalf

Nvidia Invests $500M in Corning, Plans 3 New US Factories and Up to $3.2B Total to Wire AI Data Centers with Fiber Optics

Senate Unanimously Advances GUARD Act to Ban AI Companion Chatbots for Minors, House Files Companion Bill

Arm Posts Record FY2026 Results, Launches AGI CPU as Agentic AI Set to Quadruple Data Center CPU Demand
