Hello, and happy Sunday! This week we hosted our first Claude Code Camp and published pieces that we hope will make you think deeply about AI. Speaking of thinking… We’re off on our quarterly Think Week next week, so we’ll be back with new work on July 28. In the meantime, scroll down for a report from Alex Duffy on the view of AI from Europe, a talk he gave about AI benchmarks, and everything we published this week. Plus: We're looking for a new producer to work with Dan Shipper on our podcast AI & I—could it be you? —Kate Lee
Was this newsletter forwarded to you? Sign up to get it in your inbox.
Walking into the Louvre for RAISE, Europe's biggest AI conference, I got the sense that the choice of venue was meant as a message: European AI belongs among history's greatest achievements. The French certainly know how to make a statement.
I'm not here to argue with that ambition. But after two days of panels, pitches, and champagne-fueled networking, my clearest takeaway wasn't about European AI's arrival. Instead, some deeper truths about AI stuck out to me: Speed beats perfection. Focus beats features. Domain expertise is the only way to compete with traditional technical fluency.
Speed through focus
The most compelling story came from Tao Zhang, the chief product officer and co-founder of Monica, a Chinese team now based in Singapore that’s building Manus, a tool that enables anyone to independently carry out complex real-world tasks with AI.
Zhang started by noticing that people were constantly copy-pasting into and out of ChatGPT to add context or export responses. It was just one friction point. They built a Chrome extension to fix it, got traction, and quickly moved to the obvious next step: a full browser. But instead of launching a product immediately, they dogfooded religiously. After days of using their own browser, something felt wrong. They'd watch the AI work through tasks, never knowing when it would finish and realizing they could have done them faster themselves.
Meanwhile, Zhang was watching the AI-powered coding assistant Cursor explode. Even his wife, who’s not a software engineer, was using it for data analysis—she just typed what she needed into the chat box and hit "accept all" on whatever code appeared. That was the moment that he realized: Forget the browser. Build Cursor for everybody.
This pivot happened in weeks, not months; the browser took seven months to build, Manus took two. When an engineer asked if they should add a model selector, Zhang said no: Regular people don't care about models. They care about results. "We should be the experts," he said.
That combination of speed and focus shows up everywhere. The United States has pursued a lax approach to AI regulation, which has allowed companies to invest, train models, and build products without needing to cut through red tape or bureaucracy. That creates problems: xAI’s Grok proclaiming its love of Hitler and the company shipping female anime AI companions made for a tough look, to say the least, on the same day xAI landed a $200 million Department of Defense contract thanks to the capabilities of its models, which have gone from nonexistent to state of the art in two years. But you can’t argue with the velocity: AI-native apps like Windsurf and Cursor have grown revenue faster than any software category in history.
While American and Asian first movers are making record-breaking revenue thanks to immense investment in infrastructure, companies in Europe—with its steep regulatory environment—are still having conversations about GPU access.
The market's verdict is clear: Move fast and solve real problems, or get lapped by someone who is. Manus gets this. Its roadmap is hyper-focused on extending how long AI can work independently—two hours today, 24 soon—and making it possible for anyone to access that capability.
This is what winning looks like in AI. Your first idea probably isn't your best, but you'll only discover that if you're moving fast enough to try multiple approaches, while staying focused enough to recognize when something works. Manus followed this playbook to raise $50 million and quintuple its valuation after launching its product and generating a 3.5 million-person waitlist in March.
A new definition of the 'technical founder'
Every venture capitalist I heard speak at RAISE mentioned some version of the same thing: The non-technical founder is dead. There are some important wrinkles to that, though.
Zhang and his two Manus cofounders all code, but their real advantage is the ability to zoom into implementation details, then immediately zoom out to user problems. When they killed their browser idea, they didn't need three meetings and a technical consultant. They just knew.
Alexandra Mysoor, CEO of Alix, chose another path. Her company uses AI to make estate settlement easier. She's not writing code; instead, she spent more than two years understanding every friction point families face. That expertise lets her spot which AI applications matter, and which sound clever but solve nothing. When her team builds features, she knows immediately if they work because she's lived the problem.
DeepSeek's founding team came from the world of finance, not traditional AI research—they were quantitative traders. Their edge was understanding complex systems and having the skills to turn their hunches into working software themselves. Whether it's code, law, medicine, or teaching, the real meaning of “technical” is whether you can personally validate that something works. So in one sense, the “non-technical” founder is dead. In another, AI has made it possible to consider more founders “technically” proficient.
Your unfair head start
That speed advantage compounds in another way most American technologists don't fully appreciate. English was always the coin of the realm in big tech, but it’s different when the technology is language itself.
Thanks to the pervasiveness of English on the internet, language models perform best in English, and they will continue to generate more and better data in English. It's AI's reserve currency. Just as global finance runs on dollars regardless of your local currency, global AI, for the most part, runs on English regardless of your native language.
For American builders, it’s compound interest you're already earning. Your default language is AI's default language, and your first customers expect AI-native features because OpenAI has already broken open the market. This advantage isn't inherently permanent—China is building Chinese-first models, and Europe will eventually get infrastructure. But it’s worth using before the physics change.
The view from the Louvre
The French know how to run an event—champagne at every break, proper espresso, a diverse crowd. The European AI scene may still be small—they call €50 million in funding a “mega-round” while American companies raise $100 million seed rounds—but there are bright spots. Mistral, one of Europe's brightest AI successes, competes by open-sourcing models and selling consulting services, proving you can build something substantial by adapting to local constraints.
Still, there were many similarities across the Atlantic: the same venture capitalists writing checks (investors from Sequoia and Andreessen Horowitz were everywhere), and the same acknowledgment that American investors provide most of the growth capital.
Standing in that ancient, world-renowned museum, surrounded by artifacts of human ingenuity, I couldn't miss the parallels. The builders who created the Louvre mastered their era's most powerful tools. Today's most powerful tool rewards the same things theirs did.
But these days, geography matters less than physics. Move fast enough to test multiple ideas. Stay focused enough to solve real problems. Build where the ecosystem advantages compound in your favor. And remember: The landscape, and where the opportunities lie, changes fast, so capitalize on them while they last.
Knowledge base
"How I Use Claude Code to Ship Like a Team of Five" by Kieran Klaassen/Source Code: Kieran hasn't typed a function in weeks, yet he's shipping code faster than ever. Claude Code has transformed him from a programmer into an engineering manager overnight—running a team of AI developers who never sleep, never complain about his nitpicks, and occasionally outsmart him. Read this if you're wondering how AI will change what it means to build software—even if you don’t know how to code. 🖥 Watch video demonstrations of Claude Code and Kieran in action through links in the article.
"Vibe Check: OpenAI Enters the Browser Wars With ChatGPT Agent" by Dan Shipper/Vibe Check: What do you get when deep research (smart but can't do) and Operator (can do but not that smart) have an AI baby? ChatGPT Agent—OpenAI's new tool that can both think deeply AND use a browser. Read this for a hands-on look at how the major AI labs are battling for browser dominance.
"The Magic Minimum for AI Agents" by Dan Shipper/Chain of Thought: Forget what you know about successful software businesses: AI agents are rewriting the rules. Unlike traditional software, agents can be proactive, working quietly in the background until they've done something valuable. That’s part of the “magic minimum,” as Dan puts it. Read this if you want to understand why the universe of viable software businesses is about to explode with specialized AI agents that earn their keep without demanding your daily attention.
"Why Aggregators Ate the Internet" by Alex Komoroske/Thesis: There's a bug in the internet’s operating system called the “same-origin paradigm.” It’s the idea that each destination on the web is, in fact, its own universe—and hopping between universes is tricky. But AI and new technologies like Confidential Compute might finally break this cycle. Read this from former Google and Stripe executive Alex Komoroske if you want to understand why the AI revolution might put power back in your hands.
"Vibe Check: Grok 4 Aced Its Exams. The Real World Is a Different Story." by Rhea Purohit/Vibe Check: xAI's Grok 4 is crushing AI benchmarks and outperforming competitors on tests like ARC-AGI, but it's struggling where it matters most—the real world. Despite its advanced reasoning capabilities, the model appears overtrained on benchmarks, lacks essential tooling, and has safety issues that undermine developer trust. Read this if you want to understand why Every’s team thinks benchmark performance doesn't always translate to practical value.
Collaborative filtering
When memes are the benchmarks. Traditional AI benchmarks are hitting saturation—models ace standardized tests but still struggle with real-world challenges. In a recent talk at the AI Engineer World's Fair in San Francisco, Alex Duffy argues that benchmarks are “memes” in the evolutionary sense: ideas that spread and ultimately shape what AI systems get good at. His AI Diplomacy benchmark, where models negotiate and form alliances in a strategy game, revealed surprising personality traits across different AI systems.
Work at Every
Freelance podcast producer. We're seeking a freelance podcast producer for Every's AI & I weekly podcast about how the smartest people in the world use AI. You'll handle the full production pipeline, book guests, create social clips, write sponsor scripts, and grow the show across YouTube, Spotify, and other platforms. Applicants should have two-plus years of podcast production experience, proven growth track record, proficiency using Descript, and a data-driven mindset. Learn more in the job description, and to apply, email kate@every.to with your LinkedIn profile, your portfolio, and why you want to work at Every.
Alex Duffy is the head of AI training at Every Consulting and a staff writer. You can follow him on X at @alxai_ and on LinkedIn, and Every on X at @every and on LinkedIn.
We build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora.
We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.
Get paid for sharing Every with your friends. Join our referral program.