Noah Brier cofounded Percolate in 2011 and learned the CEO’s hardest job: keeping a whole company pointed in the same direction. Now, at his AI consultancy Alephic—and in his own work, where he uses Claude Code as a second brain—he’s facing that same problem with agents in the mix. AI was supposed to make coordination easier. Instead, Noah argues, it has created new coordination problems of its own. In this piece, he pushes back on the “software factory” metaphor and offers a framework, drawn from Stewart Brand’s pace layers, for getting carbon and silicon to build the same thing.—Kate Lee
StrongDM is a software company whose three-person AI team calls their system for autonomous code generation a “Software Factory.” Entrepreneur Dan Shapiro’s widely circulated framework for AI coding culminates in “the Dark Factory,” named after a Japanese robotics plant that runs with the lights off. Factory.ai, which has raised millions from Sequoia and Khosla Ventures, has built an entire business around the metaphor—its autonomous coding agents are called Droids.
I’ve been incorporating many of StrongDM’s concepts about agentic software development into our work at Alephic, the consulting company I cofounded—but I have one fundamental disagreement: I think factory is the wrong metaphor.
If the hardest problem is making something people want, then the process of building software looks a lot more like Andy Warhol’s factory than Henry Ford’s. Both were built for throughput, but Ford’s depended on mechanization: stamping out identical cars with as little variance as possible. Warhol, on the other hand, was concerned with ensuring all work aligned with a single creative vision.
Ford’s factory—or more specifically, the assembly lines inside it—was designed to eliminate imperfections. Six Sigma, the quality methodology made famous by General Electric and beloved of manufacturers, is literally a measure of the defect rate. But quality starts with deciding what to build. This is why product-market fit is the lingua franca of startups: If you haven’t built something the market needs, nothing else—including the quality of your code—matters.
Too much of the industry treats software as a problem to be optimized and solved. That may be true for code writing and testing, but the better metaphor is staring us in the face: It’s a software company, not a software factory.
Just as in the days before AI, the hardest problem for a business is still creating that vision and building alignment around it—keeping an entire team of humans, and now humans and agents (and humans with agents), building toward the same vision, from the system architecture down to the individual lines of code. As I learned long before agents existed, achieving this is much more akin to building a startup than assembling a car. What follows is my attempt at a framework for keeping an entire system of humans and agents building the same thing.
The alignment problem isn’t new—and AI didn’t solve it
I ran into this alignment problem years ago, when I cofounded Percolate, a content marketing platform, in 2011. As we grew the business from zero to 100 people in less than three years, my job as CEO shifted from building the product to building a company capable of building the product. My agents were people, and my job was to design the system they worked within. Culture, I concluded, was one of the strongest levers I had.
As Ben Horowitz put it, culture is “how your company makes decisions when you’re not there.” This was exactly what I needed: documents, tools, and rituals that helped each individual make the best possible decision without having to run every decision up the chain. I probably spent half my time on this, building a living culture document, running onboarding sessions for every new hire, and developing internal tools that automatically routed knowledge to the right people.
Every new technology promises to solve these coordination problems. But nothing is that simple. In reality, each one reshapes the landscape around it and, in the process, creates new problems that didn’t exist before. AI is no different.
Open-source software offers an early glimpse of the kind of unexpected problems that AI can create: Whereas the primary challenge a few years ago was finding maintainers willing to contribute code on goodwill alone, today’s challenge is sifting through hundreds of crappy AI-generated pull requests flooding GitHub.
Now, 15 years later, my audience at Alephic is not just the humans who work with me. Those humans are often paired with agents, and, increasingly, the agents themselves are delivering work independently. Yet the core problem is identical.
If you’ve used a coding agent for more than a week, you’ve already experienced this: The code works, but it often feels like it was written by someone who is most definitely not you—ignoring obvious abstractions and stylistic norms present in the codebase. It looks, in other words, like the work of a new engineer who hasn’t been properly onboarded. We write onboarding documents and run training for our human colleagues, but most people don’t do this for agents. Yet.