by Stephan Schmidt

Happy 🌞 Sunday, welcome to my opinionated newsletter. This week’s insights:

- ⚡ Why Working Fast Changes Everything
- 💀 Engineering As We Know It Is Already Dead
- 🔍 Will AI Make Formal Verification Finally Matter?
- 🎖️ What the Military Knows About Dangerous Software
- 🧹 Just Try HTMX Already
- 🤖 Should AI Workers Pay Taxes Too?
- 🐧 The Linux Kernel Is Just a Program (Mind = Blown)
- 😩 Why Vibe Coding Feels So Depressing
- 🧠 The Go Feature That Got Away: Memory Arenas
- 📉 Did Software Just Get 90% Cheaper?
- 🛑 AI Should Only Move As Fast As We Can Verify
- 🎪 NVIDIA’s Circular Money Machine
- 😱 Why Your CTO Might Start Coding Again
- 🪟 Forget Servant Leadership — Try Transparency
Good reading, have a nice Sunday ❤️ and a great week,

Stephan
CTO coach and CTO veteran

PS: I had an article where the URL worked yesterday but does not work today. I removed my opinion on it from the newsletter, but wanted to leave you with a quote from that article:
“Jobs are a spiritual accident. I don’t mean work. [..] I mean jobs.”

| Need support as an engineering manager? Thought about coaching? Let’s talk: I’ve helped many CTOs and engineering leaders with growth and with making the right decisions under pressure, and I can help you too. |
If you only read one thing

Speed matters: Why working quickly is more important than it seems (6 minute read) “If you work quickly, the cost of doing something new will seem lower in your mind.”
Must read. If you write faster email answers, others will reply faster too. You might even be able to pull up a whole team by being fast yourself.
“If customers find out that you take two months to frame photos, they’ll go to another frame shop.”
Or if candidates find out you take a month to schedule an interview …
“As for writing, well, I have been working on this little blog post, on and off, no joke, for six years.”
:-) https://jsomers.net/blog/speed-matters
Stories I’ve enjoyed this week

Believe the Checkbook (3 minute read) "‘AI will write all the code; engineering as you know it is finished.’
Boards repeat it. CFOs love it. [..] The Bun acquisition blows a hole in that story."
Interesting but wrong. The core of the fallacy: as long as we need at least one engineer, AI is supposedly “not replacing engineers.” That is irrelevant for most engineers; if 90% of the jobs are lost, 90% of engineers will need to look for a new profession. Just this week I talked to a CEO who sees 80% of his engineers going in the near future. At least the article doesn’t mention the Jevons paradox; the next time someone does, I’m crying.
Some good advice in the article though,
“Treat AI as force multiplication for your highest-judgment people. [..] and smell risk before it hits.”
But engineering is over. Yesterday I told an AI to write code for an agent
that should write code. Build the machine that builds the machine.
Turtles all the way down. https://robertgreiner.com/believe-the-checkbook/
Prediction: AI will make formal verification go mainstream — Martin Kleppmann’s blog (4 minute read) Proponents of formal verification claim that AI will drive its adoption, because
“AI-generated code needs formal verification so that we can skip human review and still be sure that it works”
In their narrative, formal verification of code used to be too time-consuming and too complicated; with AI, both problems go away. (It seems everyone sees their moment coming with AI, including, of course, the Lisp people again; that bunch resurfaces every ten years, without success. And don’t be mad at me for this, I did Lisp before most people here were born.)
I’m not sure about that narrative. Yes, it could be useful for the AI to have a second thinking process, a second artifact, double-entry bookkeeping, as it currently has with tests. (Are you in the AI-TDD camp? Why? The TDD people also think their moment is back with AI, sigh.) On the other hand, who verifies the AI-generated formal specification? Other AIs? Against fuzzy requirements? I rather believe people will trust AI-generated code more and more over time, until it becomes a non-issue. https://martin.kleppmann.com/2025/12/08/ai-formal-verification.html
Military Standard on Software Control Levels (3 minute read) “The mil-std-882e standard specifies levels of software control, i.e. how dangerous the software can be based on what it is responsible for. [..]
The most alarming case is when the software has direct control of something that can be immediately dangerous if the software does the wrong thing.”
Might be interesting for AI. Also an interesting, short read, and something to name-drop in discussions. https://entropicthoughts.com/mil-std-882e-software-control
Please Just Fucking Try HTMX (2 minute read) Pardon the language. Once again I’m in a project, lending a helping hand, where we’re discussing whether we need React. On the one hand, AI can auto-generate most of a React frontend by matching an API plus a Figma design. On the other hand, HTMX works very well with AI because it’s one code base. Not sure where this is going; for now I’m still clinging to HTMX. https://pleasejusttryhtmx.com/
If AI replaces workers, should it also pay taxes? (8 minute read) Should it? https://english.elpais.com/technology/2025-11-30/if-ai-replaces-workers-should-it-also-pay-taxes.html
The Linux kernel is just a program (9 minute read) This one amazed me. After 35 years of working with Linux, the kernel had always been a magical being to me. The article shows it’s just a program. And you can write and then execute your own init program; see mum, no Linux! (In Go, for my pleasure:) fmt.Println("Hello from Go init!")
https://serversfor.dev/linux-inside-out/the-linux-kernel-is-just-a-program/
Vibe Coding is Mad Depressing (3 minute read) “And the last thing that made me snapped was, all this vibed source code were located inside one file ContentView.”
This says more about your code and your prompting than about AIs. AIs will add code that most closely matches your existing code; for a greenfield project they will keep adding to that one file. The author just added another data point to my Meta Token Theory (must read!). In short: the AI is constrained by the existing code, the prompts (yours plus the agent system prompts), and its training. It then pops the most probable code into existence, like quantum particles. It’s not an engineer that makes decisions based on the code and their training, and it will not add good code to bad code. https://law.gmnz.xyz/vibe-coding-is-mad-depressing/
Golang’s Big Miss on Memory Arenas (7 minute read) If you haven’t read about arenas for memory management, here is your chance. The one thing in Go that would be nice (I’ve dropped all my other wishes, like Rust-style errors; Go is just fine) is arenas: you allocate memory in an arena and then drop all of that memory at once, at a convenient time. Excellent for web applications (or after each iteration of your game loop). You create a new arena at the beginning of a request, the whole request (your code and system libs) allocates through the arena, and at the end you just free all the memory. No memory leaks, and no garbage-collection overhead like in Go, without the complicated borrow system of Rust. Zig has arenas, but one would need to change all APIs in an incompatible way to introduce them in Go: Go anathema, rightly so (and stable, simple APIs are a reason Go is great for AIs to write code in; I don’t want to lose that just for arenas).

```zig
const std = @import("std");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();

    // loop (one iteration per request or game tick)
    while (true) {
        const alloc = arena.allocator();
        _ = try alloc.alloc(u8, 64); // request-scoped allocation

        // at the end of the loop or request, just reset; see mum, no GC!
        // and no borrow checker! ❤️
        _ = arena.reset(.retain_capacity);
        break; // demo: single iteration
    }
}
```
https://avittig.medium.com/golangs-big-miss-on-memory-arenas-f1375524cc90
Has the cost of building software just dropped 90%? (7 minute read) Jevons paradox: no0ooooooooooooooooooooooo. I’m crying now. But the article makes good points. Except for “A project that would have taken a month now takes a week. The thinking time is roughly the same - the implementation time collapsed”: thinking time with Claude Code collapsed too, at least for me. Perhaps because I use CC a lot for thinking, or because I’m a slow thinker (just kidding!). https://martinalderson.com/posts/has-the-cost-of-software-just-dropped-90-percent/
AI should only run as fast as we can catch up (8 minute read) “it’s the problem of reliable engineering - how can we make AI work reliably.”
No, the problem is how we can get AI to a level where we can live with the results. But one point of the article is important: for now, engineers are better AI drivers because they know what can go wrong with code. This will no longer be a problem in the future, perhaps in five years, and engineers will lose another advantage they currently enjoy. Most of the stuff about “Verification Engineering” in the article: yes, it might hold for now; find ways to verify AI-generated code so that you can live with the results AIs produce. But it’s also transitory, like context engineering. (That said, an important part of a current AI project of mine is verifying with an AI, not humans, that the AI is working as expected. I advise you to do this too; as a side effect it trains engineers in using AIs, and in trusting them.) https://higashi.blog/2025/12/07/ai-verification/
Deep Dive into NVIDIA’s Virtuous Cycle (8 minute read) Why this will collapse:
- NVIDIA pledges billions to OpenAI.
- OpenAI signs a massive $300 billion cloud contract with Oracle.
- Oracle turns around and places a $40 billion order for NVIDIA’s GB200 GPUs.
Creating demand in circles. Like Baron Munchausen who pulled himself out of the swamp
by grabbing his hair. https://philippeoger.com/pages/deep-dive-into-nvidias-virtuous-cycle
Why Your CTO Might Start Coding Again (12 minute read) “Managers who had once been developers made a rational trade: they gave up the craft of coding for the leverage of orchestration. One good manager could make ten developers more effective. One good CTO could make a hundred or a thousand developers more effective. [..] Then agentic AI made code production nearly free, and a funny thing happened: all that “overhead” turned out to be the actual work.”
Amazing, I know! Managers are not the evil in the machine but the real work; coding was accidental. “I’ve never looked at every line of code my developers wrote. I’ve never validated every test case. I trusted the process — PRs, code review, QA, test coverage. That’s how I knew the work was good. Cool, do that with bots.”
Nothing much to add to that thought, except: as CTO I also didn’t look at the code, but I was accountable to the CEO. With AI, nothing changed. Zing! https://davegriffith.substack.com/p/why-your-cto-might-start-coding-again
Transparent Leadership Beats Servant Leadership (3 minute read) First, I don’t like servant leadership. I recently told a client that I have a 0.2% chance of a heart attack whenever I hear that term. Too many engineering managers, ex-engineers, think: “Oh, I’m a servant, so now I’m a leader. Easy. Doubly so since I’m an introvert.” NO! Leadership is hard. But it is simple to describe: you tell people where everyone needs to go, then you lead them there. That’s it. Transparent leadership, as the article describes it, is better than servant leadership. BUT: why tie all the good things in the article to leadership? Why not just tie them to being a manager? Coaching? Manager! Training? Manager! I don’t know why everything needs to be leadership. It’s not like: leader good, manager bad. That only leads (ha!) to managers who are not leading. Don’t muddy what leadership means. You point out where everyone needs to go, then you lead people there. Simple. https://entropicthoughts.com/transparent-leadership-beats-servant-leadership
What is Stephan doing?

I’m working fractionally with several clients, helping them transition into software engineering with AIs. Sometimes someone from the outside has more clout saying the same things the CTO is already saying. And I can leverage what I’ve learned from many CTO clients. It’s also nice working with teams again; I really miss that in periods when I’m only coaching and running workshops.