
Source Code

Inside the AI Workflows of Every’s Six Engineers

Each person on the team has tailored their stack to their individual tastes

by Rhea Purohit

Midjourney/Every illustration.



Working alongside the engineers at Every, I sometimes wonder: What do they actually do all day? Building software, like writing, is a creative act, which by definition means the process is messy. When I write, I bounce between Google Docs and whichever LLM I’m leaning on at the moment (currently, GPT-5). But what does that look like for the people building software?

Sure, I hear about the products they’re shipping in standup, and I get snippets of their workflows when we run Vibe Checks. But those moments are always in isolation, scattered whispers of a bigger conversation.

So I asked: What does the workflow of each of our engineers really look like? What stack have they built that makes it possible for six people to run four AI products, a consulting business, and a daily newsletter read by more than 100,000 people?

Experimenting at the edge: Yash Poojary, general manager of Sparkle

Yash Poojary used to be the kind of developer who insisted on doing everything from one lone laptop. A few weeks ago, he caved and added a Mac Studio—Apple’s high-performance desktop—to his setup. “I wanted to use my laptop for everything,” he admits, “but I felt bottleneck[ed] for testing things faster.”

The upgrade has paid off. Now he runs Claude Code on one machine and Codex on the other, feeding them the same prompt and codebase to see how they respond. He’s finding that the two models have distinct personalities. Claude Code is the “friendly developer,” great at breaking things down and explaining its reasoning, while Codex is the “technical developer,” more literal, more precise, and often able to land the right solution on the first try.



Yash also recently launched a new version of Sparkle, our AI file organizer, complete with a redesigned interface that he worked on in Figma. Back in the dark ages (aka five months ago), Yash would take screenshots of the design and paste them into Claude so it could write the code. Now, with a Figma MCP integration, Claude can plug directly into the Figma file so it can read the design system itself—the colors, spacing, components—and translate that into working code. It saves steps and keeps Claude working from the real source of truth.

Outside of agents, Yash leans on Warp—a modern version of the developer’s command line, the text-based interface developers use to control their computers. Every time he pushes code, he jots down two lines about what he learned in a “learnings doc” and stores them in the cloud. After a few days, he has a rolling memory of recent context to feed back into his AI tools.
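The format of Yash’s learnings doc isn’t public, but the habit is easy to reproduce. A minimal Python sketch, with a hypothetical file name and entry format:

```python
from datetime import date
from pathlib import Path

# Hypothetical location; Yash stores his doc in the cloud.
LEARNINGS = Path("learnings.md")

def log_learning(note: str, doc: Path = LEARNINGS) -> None:
    """Append a short, dated learning entry after each push."""
    with doc.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {note}\n")

def recent_context(n_entries: int, doc: Path = LEARNINGS) -> str:
    """Return the last few entries, ready to paste into an AI tool's prompt."""
    if not doc.exists():
        return ""
    lines = doc.read_text(encoding="utf-8").splitlines()
    return "\n".join(lines[-n_entries:])
```

After a few days of two-line entries, `recent_context` gives exactly the rolling memory he describes.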

Even with all this experimentation, Yash emphasizes the importance of guardrails. He structures his day around one big task and a handful of smaller background ones, and he’s careful not to let AI-generated suggestions derail him. As he puts it: “The problem with CLIs [command line interfaces] is it’s easy to get derailed and lose focus on what you’re actually trying to build… so building guardrails into the system is essential.”

One way he’s doing that is with AgentWatch, an app he built that pings him when a Claude Code session finishes, letting him run multiple sessions simultaneously without losing track of them. Yash and a smattering of others have been using it lately; if you want to try it, DM him.

He’s also split his day into two modes: Mornings are for focused execution—just Codex and Claude Code, no new tools allowed—so shipping doesn’t stall. Afternoons are for exploration, when he experiments with new agents, apps, and features. That separation between “build” and “discover” has removed the productivity drag he used to feel when testing new tools.

Every illustrations.

Orchestrating the loop: Kieran Klaassen, general manager of Cora

For Kieran, everything with Cora starts with a plan generated in Claude Code with a set of custom agents and workflows. He scopes programming plans at three levels, depending on the feature:

  1. Small features: simple enough to one-shot
  2. Medium features: span a few files and go through a review step (usually by Kieran)
  3. Large features: complex builds that require manual typing, deeper research, and lots of back-and-forth

The point of planning, he says, is to ground the work in truth—best practices, known solutions online, and reliable context pulled in through Context7 MCP, a tool that pulls up-to-date, version-specific documentation and code examples straight from the official source and places them directly into your prompt.

Once the plan is set, it gets sent to GitHub. From there, he uses a work command—a prompt that takes the plan and turns it into coding tasks for the AI agent. For most projects, Claude Code is his go-to, because it gives him more control and autonomy. But he’ll sometimes turn to Codex or the agentic coding tool Amp for more traditional or “nerdier” features.
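Kieran’s actual prompt isn’t published, but Claude Code supports custom slash commands as markdown files under `.claude/commands/`. A hypothetical sketch of what a `work` command file might look like:

```markdown
<!-- .claude/commands/work.md — illustrative only; the real command isn't public -->
Read the plan referenced in $ARGUMENTS.

1. Break the plan into small, ordered coding tasks.
2. Implement each task, running the test suite after every change.
3. Follow existing patterns in the codebase; don't add new dependencies.
4. When everything passes, summarize what changed and open a pull request.
```

Saved this way, the file becomes available as `/work <plan>` inside a Claude Code session.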

After the work is done, he has a command that reviews the code. Here, too, Claude often leads, though he also uses a mix of other AI tools, including Cursor and Charlie. The process loops until Kieran decides that the feature is ready to ship.


Turning complexity into milestones: Danny Aziz, general manager of Spiral

Danny Aziz’s current workflow runs almost entirely inside Droid—the command line interface from Factory, a startup building coding agents—which lets him use Anthropic and OpenAI models side by side. About 70 percent of his work happens there: He relies on GPT-5 Codex for the big feature builds, then switches to Anthropic models to refine and nail down the details.

During his planning phase, Danny spends time talking with GPT-5 Codex to make implementation plans concrete and specific—asking it about second- and third-order consequences of his choices, and having it turn those insights into milestones for the project. For example, if the agent implements a feature, but in a way that slows the app down because of how it pulls data from the database, Danny wants to catch that in advance.

Droid was instrumental in helping Danny build the brand-new version of Spiral. Other tools have largely fallen away. “I don’t use Cursor anymore,” he says. “I haven’t opened it in months.” Instead, his main interface is Warp, where he can split the screen into different views and switch quickly between tasks. Behind it, he uses Zed—a fast, lightweight code editor—for reviewing plan files and specific bits of code.

As for his physical work setup, Danny keeps it simple: A majority of the time he’s on a single monitor or just his laptop. The only time he adds a second display is when he’s deep in the throes of implementing a design, and having the Figma file side-by-side with the build makes it easier to lock the visuals in.


Making process the source of truth: Naveen Naidu, general manager of Monologue

For Naveen, everything begins with the project management tool Linear. Feature requests come in from everywhere—Discord, email, Featurebase, live user calls—but they all end up in the same place. “If it’s not in Linear, it doesn’t exist,” he says. Every ticket carries links back to the original source, so he can always trace who asked and why.

Over the past few weeks, Naveen has migrated from Claude Code to Codex for his day-to-day work.

From there, Naveen shifts into planning mode, which he runs in two different ways. For small bugs or quick improvements, he adds context directly to the Linear ticket and then copies it into Codex Cloud to kick off an agent task—no fancy MCP integration, just manual copy-paste. For bigger features, though, he steps outside Linear and into Codex CLI, where he writes a local plan.md—a simple text file that serves as the blueprint for the project. It lays out the steps, scope, and decisions, and becomes the authoritative spec he iterates on with agents as the work unfolds.
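The article doesn’t show the structure of Naveen’s plan.md, but a blueprint of this kind typically reads something like the following (every heading here is illustrative):

```markdown
# Plan: attachment previews

## Scope
What ships in this feature, and what is explicitly out of scope.

## Steps
1. Add the preview component to the message view.
2. Wire it to the existing file-storage service.
3. Cover the new paths with tests.

## Decisions
- Chosen approach, alternatives considered, and why.

## Open questions
- Anything to resolve with the agent as the work unfolds.
```

Because it’s a plain text file in the repo, both Naveen and the agent can edit it as the spec evolves.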

Execution also happens on two tracks. In Codex Cloud, he brainstorms approaches and generates draft pull requests, usually not to merge, but to explore ideas, surface edge cases, and get potential fixes in parallel. He prefers the cloud because it lets him kick off background tasks asynchronously, whether from the iOS ChatGPT app or on the web.

Once he’s confident in a direction, he moves to Codex CLI for the real build, refining plan.md and letting the agent drive file edits step by step in Ghostty, his terminal of choice, all the while keeping a close eye on the agent’s work. Along the way, he uses Xcode for native macOS development and Cursor for backend work. MCP integrations with Linear, Figma, and Sentry keep issues, designs, and error tracking wired into the loop.

Review is its own discipline for Naveen. First, he runs Codex’s built-in /review command, which gives him an automated scan for obvious bugs or issues. Then he double-checks the changes himself by comparing the “before” and “after” versions of the code side by side. And when it’s a bug fix, he goes one step further: looking at the error logs in Sentry both before and after the change, to make sure the problem is happening less often.
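The Sentry step boils down to comparing event counts for the same issue across two windows. A toy sketch of that comparison—the counts are stand-ins for numbers that would come from Sentry’s UI or API, and the 50 percent threshold is an arbitrary illustration:

```python
def fix_reduced_errors(before_counts, after_counts, min_drop=0.5):
    """Return True if total error events dropped by at least min_drop (e.g. 50%)."""
    before, after = sum(before_counts), sum(after_counts)
    if before == 0:
        # Nothing was failing before; any new errors mean a regression.
        return after == 0
    return (before - after) / before >= min_drop
```

The point isn’t the arithmetic—it’s that “fixed” is defined by observed error rates, not by the diff looking right.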

One tool woven through Naveen’s stack is Monologue, a speech-to-text app he built himself, incubated at Every, and launched just last month. He uses it to dictate prompts, write ticket descriptions, and update his plans—turning his thoughts into context for his agents. You can give it a try.


Perfecting what works: Andrey Galko, engineering lead

Andrey Galko keeps his workflow simple. He’s not the kind of developer who chases every shiny new tool—and in AI, there are a lot. If something works, he sticks with it. For a long time, that meant using Cursor, which he still calls the best user experience out there. But when the company changed its pricing, he started hitting the monthly usage limit in just a week, and was forced to look elsewhere.

He found his answer in Codex (and would’ve probably kept paying for Cursor if the former hadn’t been released). For quite some time, Andrey says, OpenAI’s models generated suboptimal code. They’d produce snippets that technically worked but weren’t consistent with the existing codebase, skipped steps, and felt “lazy.” Then came GPT-4.5 and GPT-5, and things changed: The models started to read code and could complete tasks all the way to a functional MVP.

Codex was always good at non-visual logic—the behind-the-scenes rules and processes that make software run, as opposed to the user interface you click on—and when GPT-5-Codex arrived, it finally got good at the user interface, too. Claude might still produce more creative (and sometimes too creative) UIs, but Andrey finds little need to switch between the two anymore. “I applaud the people at OpenAI for becoming a real menace to Anthropic’s code generation reign,” he says.


Focusing on one thing: Nityesh Agarwal, engineer at Cora

Nityesh Agarwal likes to keep things tight, focused, and clean. His entire agentic stack runs on a MacBook Air M1—no big monitors necessary. “I’m the kind of developer who doesn’t like changing my tools often,” he says. “I like to focus on one thing at a time.”

That one thing is Claude Code. He runs it on the Max plan and uses it for all of his AI-assisted coding. Before he writes a single line, he spends hours researching the codebase and sketching out a detailed plan for how everything should work—with Claude’s help. Once he starts coding he stays in a single terminal, laser-focused on the task at hand. “I’ve realized that what works best for me is to give 100 percent attention to the one thing that Claude is working on,” he says. If a research question pops up, he might spin up a quick session in a separate tab, but as a rule, he avoids juggling multiple agents. He prefers to watch Claude’s work “like a hawk,” finger on the Escape key, ready to step in the moment something looks off.

Lately, he’s actually shortened Claude’s leash, often interrupting it mid-process to ask for explanations. It slows things down, but it pays off in two ways: Claude hallucinates less, and Nityesh feels like he’s sharpening his own developer skills. “I realize that I’ve placed too much of my trust in Anthropic, which leaves me vulnerable,” he admits. When Claude glitched for two days, he tried other tools, but none of them matched what he was used to. “Claude Code has spoiled me,” he says. “So now I just pray it never goes rogue again.”

Another key part of Nityesh’s workflow is GitHub, which has become an interface for how he works with Claude Code. For Cora, the AI email assistant that Nityesh works on, the engineering team reviews pull requests that Claude Code creates. They leave line-by-line comments in GitHub, then have Claude Code fetch and read those comments into the terminal so the team (which includes both the human engineers and Claude Code) can make the required fixes together.
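The piece doesn’t say exactly how Claude Code pulls those comments in; one plausible mechanism is GitHub’s pull-request review-comments endpoint (`GET /repos/{owner}/{repo}/pulls/{number}/comments`), fetched with the `gh` CLI or the REST API. A sketch of the formatting half—turning comment objects into terminal-friendly fix tasks—using the field names from GitHub’s REST API:

```python
def format_review_comments(comments):
    """Render GitHub review-comment objects as a terminal-friendly task list.

    Each dict is assumed to use GitHub REST API fields:
    path, line, user.login, and body.
    """
    tasks = []
    for c in comments:
        where = f"{c['path']}:{c.get('line', '?')}"
        tasks.append(f"[{c['user']['login']}] {where}\n    {c['body']}")
    return "\n".join(tasks)
```

An agent given output like this has file, line, reviewer, and request in one place, which is roughly what the loop Nityesh describes requires.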

In terms of other tools, Nityesh calls Cursor and Warp “solid nice-to-haves,” though he wouldn’t mind if they disappeared tomorrow.



Rhea Purohit is a contributing writer for Every focused on research-driven storytelling in tech. You can follow her on X at @RheaPurohit1 and on LinkedIn, and Every on X at @every and on LinkedIn.

We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue.

We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.

