Was this newsletter forwarded to you? Sign up to get it in your inbox.
There’s a video game called Overcooked that feels a lot like my workday with AI. You play line cooks in a chaotic kitchen, sprinting between stations while orders pile up and the clock ticks down. One player chops onions, another stirs soup, a third dashes to the sink for clean dishes—all while the printer keeps spitting out new tickets. Just thinking about it makes my heart rate spike.
It’s also how I feel managing multiple models.
At one “station” I've got GPT-5 pulling sources for an essay. At another, I'm having Claude review a draft. Meanwhile, research for a new AI editorial workflow simmers like a stew in a crockpot, and I'm also updating our Source Code style guide with some insights from the latest published piece. AI makes this particular brand of controlled chaos possible. And for that, I'm grateful—and a little overwhelmed.
I've always hopped between projects when I get stuck. But AI changes the tempo. The model pushes one task forward while I'm setting up the next. My job now: deciding what gets attention in the moment and what "done" means for this pass.
In other words, I’m a manager, but instead of junior humans, my direct reports are LLMs. This is the allocation economy, where value comes from deploying attention strategically across multiple processes rather than diving deep into one. The old paradigm assumed you were either building or coordinating—never both at once. AI breaks that assumption.
It also turns the volume up on a problem seasoned multitaskers know all too well. Every model handoff is a context reset, and those resets come with a cost. Master the pivots and you multiply your output. Miss them and you drop plates. Here's what I've learned about the boundaries that separate chaotic productivity from plain old chaos.
Make your team AI‑native
Scattered tools slow teams down. Every Teams gives your whole organization full access to Every and our AI apps—Sparkle to organize files, Spiral to write well, Cora to manage email, and Monologue for smart dictation—plus our daily newsletter, subscriber‑only livestreams, Discord, and course discounts. One subscription to keep your company at the AI frontier. Trusted by 200+ AI-native companies—including The Browser Company, Portola, and Stainless.
Makers versus managers versus model managers
In 2009, Paul Graham published an essay called "Maker’s Schedule, Manager’s Schedule." In it, he argues that makers and managers need fundamentally different calendars. Makers (programmers, writers, designers) need long, uninterrupted blocks of time to build momentum and enter flow state. Managers operate in hourlong chunks, their days pre-fragmented by meetings. When these two schedules collide, makers lose—a single meeting can shatter an entire day's productivity.
For 15 years, we treated Graham's divide as gospel: Protect the makers, and let the managers coordinate. Companies built entire cultures around this—no-meeting Wednesdays, focus-time blocks, elaborate systems to guard deep work from the tyranny of the calendar.
But model management scrambles these neat categories. I deliberately fracture focus time now, betting that the compound returns from parallel AI processes outweigh the switching costs. The maker's sacred flow state becomes a luxury I can't afford when three models are waiting for direction. My actual writing time has dropped 40 percent, but my weekly output has tripled. At least, it has when the kitchen is running smoothly.
When the plates are spinning
Here’s what things look like on a good day:
I kick off the day feeding ChatGPT a transcript from our Claude Code Camp event—pull the best quotes, give me structure, find the through-line. While that churns, I crack open a draft of this very essay and dictate revisions through Monologue. A tweaked framing here. That analogy is trying too hard—rein it in. Then I flip to Claude: Analyze the latest published Source Code pieces—what patterns are we missing in the style guide? By the time I circle back to Claude Code Camp, there's a sample introduction and an outline waiting for my review.
I start something, let the model process, move another task forward, come back. The work carries its own momentum. I re-enter without losing ground.
By lunch, each track has advanced. The Claude Code Camp writeup has structure. The essay has a cleaner opening. The style guide analysis has surfaced patterns I can teach to our AI editor, like “pair a provocative claim with a future outcome” and “combine personal experiments with concrete results.”
Small inputs create forward motion. The models amplify them into substantial progress; my attention decides when to step back in. That dynamic—automation moving in the background, judgment applied in short bursts—defines my best workdays now.
Oh no, here come the switching costs
Not every day lands clean. The model manager dream sometimes crashes into human limits.
A research thread I started in the morning gets buried under new chats. I open a draft and can't remember why I cut that paragraph or what the ending was supposed to do. A message pings: "What's the status of that task?" I click back to the research—neat notes I never integrated, sitting there like artifacts from someone else's workday.
My human brain is the bottleneck. Each pivot exacts its price: a priority sliding out of view, a thread gone cold, that nagging sense of juggling too much. Sooner or later, you have to start imposing some semblance of order.
My system isn’t perfect, but here are the tactics I’ve found that keep the overhead manageable:
- Compartmentalize rigorously. Separate chats for each project, so context waits where I left it and I don’t get work streams confused.
- Leave breadcrumbs. Before I pivot to a new task, I drop a one-line note in the chat window about what comes next, so it’s easier to pick up where I left off.
- Limit active tasks. Never have more than three things spinning at once; beyond that, the re-entry tax gets brutal.
- Capture immediately. When something reaches a usable state, I transplant it to a permanent home (typically a Google Doc). Leave it floating in the chat and it's lost.
The switching costs never disappear. But with these boundaries, I can keep the operation running instead of drowning in my own clever parallel processing scheme.
Another day in the allocation economy
AI shifted my role from maker to model manager. Run this way, one person can carry the load of a small shop—though some days I'm not sure if that's a feature or a bug.
The implications are practical: Treat attention like a budget. Cap the number of active lanes. Write the one-line breadcrumb that reminds you where you left off. Integrate outputs you’re happy with before continuing the chat.
In Overcooked terms, the tickets keep coming. My job is to pick the next move, attend to the station that matters, and send the plate. The pace stays workable when I choose deliberately and check in before everything burns.
Maybe that's all this is—learning to cook in a kitchen where half the appliances run themselves, but you still have to know when to flip the burger.
Katie Parrott is a staff writer and AI editorial lead at Every. You can read more of her work in her newsletter.
To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.
We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora.
We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.
Get paid for sharing Every with your friends. Join our referral program.