
Chain of Thought

Toward a Definition of AGI

AGI is intelligence too valuable to shut off

by Dan Shipper

Midjourney/Every illustration.



When an infant is first born, they are completely dependent on their caregivers to survive. They can’t eat, move, or play on their own. As they grow, they learn to tolerate increasingly longer separations.

Gradually, the caregiver occasionally and intentionally fails to meet their needs: The baby cries in their crib at night, but the parent waits to see if they’ll self-soothe. The toddler wants attention, but the parent is on the phone. These small, manageable disappointments—the hallmark of what the psychologist D.W. Winnicott called "good-enough parenting"—teach the child that they can survive brief periods of independence.

Over months and years, these periods extend from seconds to minutes to hours, until eventually the child is able to function independently.

AI is following the same pattern.

Today we treat AI like a static tool we pick up when needed and set aside when done. We turn it on for specific tasks—writing an email, analyzing data, answering questions—then close the tab. But as these systems become more capable, we'll find ourselves returning to them more frequently, keeping sessions open longer, and trusting them with more continuous workflows. We already are.

So here’s my definition of AGI:

AGI (artificial general intelligence) is achieved when it makes economic sense to keep your agent running continuously.

In other words, we’ll have AGI when we have persistent agents that continue thinking, learning, and acting autonomously between your interactions with them—like a human being does.

I like this definition because it’s empirically observable: Either people decide it’s better to never turn off their agents or they don’t. It avoids the philosophical rigmarole inherent in trying to define what true general intelligence is. And it avoids the problems of the Turing Test and OpenAI’s definition of AGI.


In the Turing Test, a system counts as AGI when it can fool a human judge into thinking it is human. The problem with the Turing Test is that it sets up movable goalposts: If I had interacted with GPT-4 10 years ago, I would have thought it was human. Today, I’d simply ask it to build a website for me from scratch, and I’d know instantly that it was not human.

OpenAI’s definition of AGI—which is AI that can outperform humans at most economically valuable work—suffers from the same problem. What constitutes economically valuable work constantly changes. We will invent new economically valuable work that we perform in conjunction with AI. These hybrid roles then become the new benchmark AI must clear before it counts as AGI. So the definition is an ever-receding target.

By contrast, the definition I proposed—AGI is achieved when it makes economic sense to keep your agent running continuously—is a binary, irreversible, and immovable threshold: Once we are running our agents 24/7, we’ve hit it, and there’s no going back. (After all, we can’t uninvent it.)

I also like this definition because, in order to meet it, we will need to develop many of the necessary but hard-to-define components of AGI:

  1. Continuous learning: The agent must learn from experience without explicit user prompting.
  2. Memory management: The agent needs sophisticated ways to store, retrieve, and forget information efficiently over extended periods.
  3. Generating, exploring, and achieving goals: The agent requires the open-ended ability to define new, useful goals and maintain them across days, weeks, or months, while adapting to changing circumstances.
  4. Proactive communication: The agent should reach out when it has updates, questions, or requires input, rather than only responding when summoned. It must also be able to be interrupted and redirected by the user.
  5. Trust and reliability: The agent must be safe and reliable. Users will not keep agents running unless they are confident the system will not cause harm or make costly errors autonomously.

While I've described these capabilities, I'm deliberately avoiding the trap of trying to specify exact technical criteria for each one. What precisely constitutes “continuous learning” or “trust” is difficult to pin down.

Instead, my AGI definition entails that all of these capabilities are present to some extent.
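
To make this concrete without pretending to specify exact criteria, here’s a deliberately toy sketch, in Python, of what an always-on agent loop combining the five capabilities above might look like. Every class, method, and variable name is an assumption invented for illustration, not a reference to any real product or API.

import time
from datetime import datetime

class PersistentAgent:
    """Toy sketch of an always-on agent. All names are illustrative."""

    def __init__(self):
        self.memory = []                         # 2. memory: a naive append-only store
        self.goals = ["keep the inbox triaged"]  # 3. goals maintained across days or weeks
        self.outbox = []                         # 4. proactive messages surfaced to the user

    def observe(self):
        # Placeholder: pull in new events (email, calendar, user instructions).
        return []

    def step(self):
        events = self.observe()
        self.memory.extend(events)               # 1. learn from experience without being prompted
        for goal in self.goals:
            # 5. stay within bounds the user trusts: queue an update rather than
            # taking an irreversible action.
            self.outbox.append(f"{datetime.now():%H:%M} update on: {goal}")

def run_forever(agent, interval_seconds=3600):
    # The threshold in this essay's framing: the moment it is worth never breaking this loop.
    while True:
        agent.step()
        time.sleep(interval_seconds)

The point isn’t the code; it’s that once something shaped like this is genuinely worth leaving on around the clock, the threshold has been crossed.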

And these capabilities already are present in limited ways: ChatGPT, for example, has rudimentary forms of memory and proactive communication.

The length of time AI can run on its own is increasing gradually and consistently. When GPT-3 first came out, the primary use case for AI was autocomplete in GitHub Copilot—the best it could do was finish the line of code you were already writing.

ChatGPT lengthened the amount of time the AI could run: from the instant it takes you to press “tab” and complete a line of code to the time it takes to deliver a full response in a chat conversation. Now, agentic tools like Claude Code, deep research, and Codex can run for between 5 and 20 minutes at a stretch.

The trajectory is clear: from seconds to minutes to hours, and eventually to days and beyond.

Eventually, the cognitive and economic costs of starting fresh each time will outweigh the benefits of turning AI off.


Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.



