Hello, and happy Sunday! This week, Every’s head of platform Willie Williams kicks off a new section—Jagged Frontier—where he goes further out on the AI frontier than we usually venture, returning to a few big ideas from fresh angles each time. First, though, a mini-Vibe Check on OpenAI’s warp-speed Codex-Spark. New models are coming out so quickly that sometimes it’s hard even for us to keep pace. We’re off on Monday for Presidents’ Day in the U.S.—we’ll be back in your inbox on Tuesday.—Kate Lee
Mini-Vibe Check: OpenAI’s Codex-Spark is so fast it’ll blow your hair back
GPT-5.3-Codex-Spark was slinging code so fast on our livestream on Thursday that Cora general manager Kieran Klaassen and Every CEO Dan Shipper couldn't get a word in edgewise.
OpenAI’s new model generates roughly 1,000 tokens per second. For context, Anthropic’s latest heavy-duty model, Opus 4.6, runs at about 95 tokens per second.
The AI industry has spent the last year optimizing for intelligence—smarter models, deeper reasoning, longer thinking chains. Spark goes in the other direction. It’s not as sharp as Opus 4.6 or GPT-5.3 Codex on reasoning, so it’s not as reliable on complex tasks. But then again, how smart does a model need to be if it gets you what you need before you lose your train of thought?
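If you want to feel the speed difference yourself, here's a minimal sketch of how you might clock a streaming response with the OpenAI Python SDK. The model identifier `gpt-5.3-codex-spark` is an assumption on our part (use whatever id OpenAI actually exposes), and counting streamed chunks is only a rough proxy for tokens, so treat the number as a ballpark, not a benchmark.

```python
# Rough throughput check for a streaming completion.
# Assumptions: openai>=1.0 installed, OPENAI_API_KEY set, and the model id
# "gpt-5.3-codex-spark" is a placeholder -- substitute the real identifier.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
chunk_count = 0

stream = client.chat.completions.create(
    model="gpt-5.3-codex-spark",  # assumed model id
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    stream=True,
)

for chunk in stream:
    # Each streamed chunk usually carries about one token of text,
    # so chunk count is a crude stand-in for token count.
    if chunk.choices and chunk.choices[0].delta.content:
        chunk_count += 1

elapsed = time.perf_counter() - start
print(f"~{chunk_count / elapsed:.0f} chunks/sec (rough proxy for tokens/sec)")
```

Run it twice, once against Spark and once against a heavier model, and the gap in the printed rate tells the same story as the numbers above.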
What it is