TL;DR: Why does AI writing still sound like AI writing, even as the models get smarter? In his first piece since joining Every as Spiral’s general manager, Marcus Moretti explains why the answer is more complicated than you’d think. The most reliable fingerprints of your personal style come from the words you write subconsciously: articles, pronouns, and function words that emerge in a distinctive pattern as you focus on the meaning of a sentence. His piece explores what new research in machine learning and stylometry—the study of style—means for the future of writing tools like Spiral. If you want to go deeper, Spiral has several updates, including creating a writing style from your website or X account (even taking post engagement into account) and a cleaner, faster editor.—Kate Lee
Was this newsletter forwarded to you? Sign up to get it in your inbox.
OpenAI models demonstrate Ph.D.-level knowledge across physics, biology, and chemistry. Anthropic staff have claimed that the company’s Opus 4.5 model has “largely solved coding.”
Yet AI writing remains stubbornly detectable: “It’s not an idea. It’s a breakthrough.” “Delve.” Lists of threes with no “and.”
If you’re a regular Every reader, you may already know why this is. LLMs are trained on an unfathomable number of words and learn, in general terms, how to speak. Post-training, which refines a model after initial training on large datasets, makes the models friendlier and safer, so they end up speaking in a kind of generic politeness. Ted Chiang’s description from a few years ago remains apt: “ChatGPT is a blurry JPEG of the web”—a tool that approximates human insight without ever landing on the mark.
I’m interested in the relationship between LLMs and writing style because I’m the general manager of Spiral, Every’s AI co-writer. Writing sessions in Spiral begin as a chat: You describe what you intend to write, and Spiral helps you hone your message and gather relevant research. Then it produces one or more drafts, offering several approaches for your piece.
Our aim is for Spiral’s written output to reflect your personal writing style, not the generic politeness of the foundational model. To this end, I’ve been reading papers on natural language processing, linguistic forensics, and stylometry—the study of writing styles. It wasn’t until I started working on Spiral that I became aware of the century-plus history of stylometry, or of the fastidiousness with which researchers have catalogued the elements of style. In recent years, researchers in these fields have flocked to LLMs, finding new ways to expand our understanding of human writing. Here are some findings that I found interesting and even counterintuitive, and that provide a hint as to where AI writing might be headed.
Looking for an AI notetaker for your meetings?
Granola is a lot more. Most AI note-takers just transcribe what was said and send you a summary after the call. Granola is an AI notepad. And that difference matters. You start with a clean, simple notepad. You jot down what matters to you and, in the background, Granola transcribes the meeting. When the meeting ends, Granola uses your notes to generate clearer summaries, action items, and next steps, all from your point of view.
Then comes the powerful part: You can chat with your notes. Use Recipes (pre-made prompts) to write follow-up emails, pull out decisions, prep for your next meeting, or turn conversations into real work in seconds. Think of it as a super-smart notes app that actually understands your meetings.
Download Granola and try it for your next meeting. Three months’ free with the code EVERY.
Subconscious decisions define writing styles
Stylometry has had a few moments of glory. In the 1800s, stylometrists gave sold-out lectures about whether William Shakespeare wrote his plays. In the 1960s, two stylometrists isolated Alexander Hamilton’s contributions to The Federalist Papers based largely on the presence of the word “upon.”
In the 2020s, LLMs have introduced new ways of studying style. Last year, two Cornell University researchers systematically manipulated text snippets to see how the changes affected LLMs’ ability to guess their authors. They removed attributes of the text one at a time—such as proper nouns or capitalization—and measured the effect on attribution accuracy.
They found that removing the more functional features of the text caused the models to misattribute authorship more often, showing that those features are the most helpful for attribution. In particular, removing “stop words” made it much harder to guess who wrote something. In natural language processing, stop words are common, functional words like articles (“a,” “the”) or pronouns (“I,” “she”). These words are often filtered out of text analysis because they don’t convey much meaning, but it turns out that they appear in patterns that can help identify who wrote something. This is why Hamilton’s use of “upon” tipped off those researchers to his Federalist contributions.
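The stop-word idea can be sketched in a few lines of Python. Everything below is invented for illustration: the short word list, the sample texts, and the similarity measure (cosine similarity over relative frequencies) are stand-ins for what real stylometric systems do with hundreds of function words and much longer samples.

```python
import math
import re

# A tiny, illustrative list of function words; real systems use hundreds.
STOP_WORDS = ["the", "a", "an", "of", "to", "in", "and", "upon", "i", "she"]

def function_word_profile(text):
    """Relative frequency of each stop word, ignoring content words entirely."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return [words.count(w) / total for w in STOP_WORDS]

def cosine_similarity(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

# Invented samples: one leans on "upon" the way Hamilton did, one does not.
hamilton_like = "Upon reflection, the power of the union rests upon the consent of the states."
other = "I think she said the plan would work, and I agreed to help in the morning."
unknown = "The judgment must rest upon the evidence, and upon the reason of the law."

sim_h = cosine_similarity(function_word_profile(unknown), function_word_profile(hamilton_like))
sim_o = cosine_similarity(function_word_profile(unknown), function_word_profile(other))
print("closer to Hamilton-like sample" if sim_h > sim_o else "closer to other sample")
```

Note that the profile never looks at what the texts are about; it only counts the little words an author reaches for without thinking, which is exactly why these features survive changes of topic.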
Things like stop words and word order turn out to be some of the most distinctive markers of someone’s writing style. These purely functional aspects of writing mostly reflect subconscious decisions. When we write, we focus on choosing meaningful words, and our subconscious tends to fill in the rest. But the way our subconscious fills in our sentences turns out to be distinctive...
Become a paid subscriber to Every to unlock this piece and learn about:
- Why human writers are measurably twice as unpredictable as AI—and what that means for AI writing
- How AI is rewriting the language of academic scholarship
- Whether or not Marcus wrote this piece with AI
What is included in a subscription?
Daily insights from AI pioneers + early access to powerful AI tools
Front-row access to the future of AI
In-depth reviews of new models on release day
Playbooks and guides for putting AI to work
Prompts and use cases for builders
Bundle of AI software
