This week in AI: Inside Elon's multi-billion compute deal with Anthropic, why investors are pricing Claude as infrastructure, and the quiet networking play behind OpenAI's busiest release week.

SpaceXAI's Spice Trade, Anthropic Targets the Trillion, and OpenAI's Stack Sweep

Alex Banks
May 10
 

Hey friends 👋 Happy Sunday.

Here’s your weekly dose of AI and insight.


Every Wednesday, Signal Pro members get a step-by-step AI workflow they can apply immediately. No fluff, just practical guides to upskill you and your team. If you’re only reading the Sunday issue, you’re getting half the picture. Upgrade to Pro today.

Upgrade to Pro


AI Highlights

My top-3 picks of AI news this week.

CEO of SpaceXAI, Elon Musk and CEO of Anthropic, Dario Amodei / The Signal Newsletter Graphic
SpaceXAI

1. SpaceXAI’s Spice Trade

In February 2026, SpaceX officially acquired and absorbed Elon Musk’s artificial intelligence startup, xAI, to form SpaceXAI. This week, the company signed a multi-billion-dollar agreement with Anthropic, handing over the full compute capacity of their Colossus 1 data centre to serve Claude inference.

  • Anthropic gets the keys: SpaceXAI is giving Anthropic every NVIDIA GPU at Colossus 1: 220,000+ chips and 300+ megawatts of capacity, online within the month. Claude Code’s five-hour rate limits have doubled across paid plans, peak-hour throttling on Pro and Max is gone, and Opus API limits jumped considerably.

  • Colossus 1 was idle: SpaceXAI moved frontier training to Colossus 2, a 1.5GW cluster roughly 3x the power and 2.5x the GPUs of Colossus 1. Leasing Colossus 1 monetises hardware that would otherwise depreciate in place while funding the next buildout.

  • The unit economics: Jamin Ball at Altimeter modelled Colossus 1 at standard rental rates as roughly $5 billion of annual revenue for SpaceXAI. Applying the training-inference framework Dario Amodei laid out on the Dwarkesh podcast, Anthropic could turn that $5B compute spend into ~$15B of inference revenue at 60-70% margins.
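A quick back-of-envelope check of that margin claim (my own arithmetic, using only the figures above):

```python
# Colossus 1 unit economics, using the numbers quoted above.
compute_cost = 5e9        # annual rental revenue for SpaceXAI = Anthropic's compute spend
inference_revenue = 15e9  # ~3x multiple, per the training-inference framing

gross_margin = 1 - compute_cost / inference_revenue
print(f"Implied gross margin: {gross_margin:.1%}")  # 66.7%, inside the 60-70% band
```

So the ~$15B figure and the 60-70% margin band are consistent with each other: turning $5B of compute into $15B of revenue implies a gross margin of about two-thirds.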

Alex’s take: Three months ago Elon called Anthropic "misanthropic"; last week, after meeting their senior team, he called them "highly competent". It’s wild to think that demand for Claude is so far ahead of supply that renting from a rival becomes the rational move. I honestly see it as a big win-win: SpaceXAI keeps Colossus 1 earning while it pours everything into Colossus 2, and Anthropic gets the inference capacity to lift rate limits and shore up uptime. Perhaps my favourite insight from this announcement came from one commentator who, borrowing from Frank Herbert's Dune, put it: "he who controls the spice controls the universe." Compute IS the spice now, and SpaceXAI is in an incredibly strong position for the long game, especially with the platform for orbital data centres up next.


Anthropic

2. Anthropic Targets the Trillion

Anthropic used Code with Claude this week to ship a wave of product updates, pushing Claude deeper into Microsoft’s productivity suite, expanding its agent platform, and rolling out templates aimed at financial services workflows.

  • Microsoft 365 GA: Claude for Excel, PowerPoint, and Word hit general availability, with Outlook moving into public beta, meaning Claude now carries full conversation context across Microsoft apps.

  • Dreaming in Managed Agents: Live from Code with Claude, dreaming launched as a research preview, with outcomes, multi-agent orchestration, and webhooks all moving into public beta.

  • Financial services templates: Ready-to-run agent templates for pitches, valuation reviews, and month-end close, installable as plugins in Cowork and Claude Code, or runnable in production as Managed Agents.

Alex’s take: Interestingly, these updates landed the same week the Financial Times reported Anthropic is in talks to raise up to $50 billion at a near-$1 trillion valuation, which would put it above OpenAI's $852 billion mark. The raise is about compute: Anthropic plans to direct that capital almost entirely toward expanding capacity after supply constraints dented service quality in recent weeks, and the SpaceXAI, Google, Broadcom, and AWS deals signed in the past two months already commit hundreds of billions in future costs. Revenue has jumped 5x, from a $9 billion run-rate at the end of last year to an expected $45 billion imminently.


OpenAI

3. OpenAI’s Stack Sweep

OpenAI ran an unusually busy release week, with daily announcements stretching across its consumer, developer, and infrastructure offerings.

  • Smarter ChatGPT replies: GPT-5.5 Instant is rolling out across ChatGPT with clearer, more concise responses in a warmer tone, addressing one of the most common complaints about answer length.

  • Voice agents that reason: GPT-Realtime-2 brings GPT-5-class reasoning to voice in the API, with companion models GPT-Realtime-Translate handling live translation across 70+ languages and GPT-Realtime-Whisper for streaming transcription.

  • Codex moves into the browser: Codex now runs directly in Chrome on macOS and Windows through a plugin, working in parallel across tabs without taking over the browser.

Alex’s take: Beyond the consumer launches, OpenAI also co-developed Multipath Reliable Connection (MRC) with NVIDIA, AMD, Broadcom, Intel, and Microsoft this week, then open-sourced it through the Open Compute Project. Practically, this means GPU traffic now spreads across hundreds of paths and reroutes around failures in microseconds, keeping thousands of GPUs in lockstep through congestion that used to stall training runs. An open protocol like this gives OpenAI portability across hardware vendors and weakens NVIDIA's grip. NVIDIA still wins on the hardware today, but OpenAI is making sure no single supplier controls the layer frontier training depends on.
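To make the multipath idea concrete, here's a toy sketch of my own (not the actual MRC protocol, whose details aren't covered here): spread each flow across many paths by hashing its ID, and rehash around any path marked failed.

```python
import hashlib

def pick_path(flow_id: str, paths: list[str], failed: set[str]) -> str:
    """Toy multipath selection: hash each flow onto a healthy path,
    so traffic spreads evenly and reroutes when a path fails."""
    healthy = [p for p in paths if p not in failed]
    if not healthy:
        raise RuntimeError("no healthy paths available")
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return healthy[digest % len(healthy)]

paths = [f"path-{i}" for i in range(8)]
route = pick_path("gpu42->gpu77", paths, failed=set())
# Simulate that path failing: the same flow lands on a different healthy path.
rerouted = pick_path("gpu42->gpu77", paths, failed={route})
print(route, rerouted)
```

The real protocol reroutes in microseconds at the transport layer rather than per-flow in software, but the core trick is the same: no single path is ever a point of failure.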

POLL
Which highlight caught your attention?
SpaceXAI
Anthropic
OpenAI

Content I Enjoyed

Microsoft 2026 Work Trend Index Annual Report and Microsoft CEO Satya Nadella / The Signal Newsletter Graphic

The Transformation Paradox

Microsoft just released its 2026 Work Trend Index Annual Report, drawing on a survey of 20,000 AI users across 10 countries. What immediately stood out is that the “AI gap” is really an “organisational gap”: workers are very much ready for AI, yet their companies are not.

Organisational factors like culture, manager support, and talent practices account for twice the AI impact of individual mindset and behaviour (67% vs 32%), and organisational AI culture alone is a 2.5x stronger predictor than any individual factor. Practically, this means the company you work for matters far more than how good you are with AI.

Microsoft calls this the Transformation Paradox. Only 19% of AI users sit in the Frontier zone, where individual capability and organisational readiness reinforce each other. 65% fear falling behind if they don’t adapt, yet 45% say it feels safer to focus on current goals than redesign their work. Just 13% say they’re rewarded for reinvention when results don’t immediately follow.

What separates the top 16%, the Frontier Professionals, is judgment. Importantly, 86% of respondents treat AI output as a starting point rather than a final answer. These are the individuals who refuse to outsource their thinking—honing critical, independent thought, informed by the AI, not taking the AI’s answer verbatim.

Another interesting point to highlight is that the firms pulling ahead are building what the report calls “Owned Intelligence”. This takes the form of institutional know-how that quietly compounds over time, is unique to each firm, is especially hard to replicate, and, most importantly, provides rich context for the AI systems you use.


Idea I Learned

Sam Altman / The Most Interesting Thing in AI podcast

Sam Altman No Longer Believes in UBI

In 2019, Sam Altman put $14 million of his own money into a study on universal basic income (UBI). He helped raise $60 million for the largest experiment of its kind: $1,000 a month for three years, paid to low-income Americans. At the time, he said it was impossible to have equality of opportunity without some form of guaranteed income.

Recently, in an interview with The Atlantic’s Nicholas Thompson, he changed his position: “I no longer believe in universal basic income as much as I once did.” Cash payments may be useful, he argues, but they don’t get at what society is going to need next.

Looking back at the data from his own experiment, recipients actually spent more overall, but there was no direct evidence of better healthcare access or improvements to physical and mental health. Three years of guaranteed income produced no measurable lift in wellbeing.

In place of UBI, Altman is now backing collective ownership. Rather than fixed cash, he wants people to hold a slice of AI compute they could use, sell, or trade. His reasoning is that AI is shifting the balance between labour and capital, and a fixed cheque only addresses one side of that shift. OpenAI’s recent industrial policy white paper goes further, proposing a Public Wealth Fund that would give every citizen a stake in AI-driven economic growth.

Elon Musk has gone the opposite way. In April, he posted on X calling for “universal HIGH income via checks issued by the Federal government,” arguing inflation won’t follow because AI and robotics will produce goods and services far in excess of any new money supply. His broader pitch is abundance—a future where productivity is so high that money becomes, in his framing, like oxygen, still there but not something you have to think about.

What’s undeniable, even from Altman's own data, is that work delivers structure, identity, and the feeling of being needed, and a cheque can’t replicate any of that. AI displacement is set to hit white-collar and blue-collar work simultaneously, faster than any prior shift. Whatever form the redistribution takes, it might keep the lights on. The harder question is what fills the hours when work no longer does, and where meaning comes from once a job stops providing it.


Quote to Share

Jim Fan on robotics’ endgame:

Jim Fan
@DrJimFan
I promise this will be the best 20 min you spend today! Robotics: Endgame, the sequel to my last year's Sequoia AI Ascent talk, "Physical Turing Test". I laid out the roadmap for solving Physical AGI as a simple parallel to the LLM success story. Be a good scientist, copy
2:32 PM · May 8, 2026

Jim Fan, who leads the embodied AI research group at NVIDIA, is one of my favourite voices on robotics. His Sequoia AI Ascent talk last week is 20 minutes and well worth a watch.

There’s one particular finding from the talk that deserves the spotlight. Dexterity now has its own scaling law. In plain English, that means the more human video you feed in, the better the robot’s hands get, on a clean, predictable curve. Language models cracked this same pattern six years ago, and it’s what unlocked the leap from clunky next-token prediction tools to useful, conversational forms via ChatGPT.

If that holds for robotics, it changes everything. NVIDIA’s lab trained their latest model on 21,000 hours of footage shot from a human’s point of view, with only four hours of someone actually puppeteering a robot in the mix. That’s less than 0.1% of the training data coming from a robot. The rest involves humans wearing cameras just doing things.

Tesla ran the same playbook for self-driving. Instead of paying people to collect driving data, every Tesla on the road quietly feeds the system. The data collection runs in the background, and robotics is now trying to pull off the same trick.

There is a bit of a catch, though. Language models had the entire internet to learn from. Robots have to deal with the messy physical world, where things slip, break, and behave in ways no simulation fully captures. Even Jim puts a robot that moves indistinguishably from a human two to three years out, which isn’t that far away when you think about it.

Still, if dexterity really follows a scaling law, the timeline for useful robots compresses insanely fast. Whoever captures the most first-person human video holds the flywheel and a serious advantage.

Source: Jim Fan on X


Question to Ponder

“Why is Chinese AI so much cheaper than American AI, and should that worry US providers?”

I had a great discussion with a subscriber this week about why Chinese AI is so much cheaper than American AI.

Following DeepSeek's V4 release at the end of April, the South China Morning Post reported that the price per conversation on GPT-5.5 is now roughly 32x that of DeepSeek-V4. It makes for a great headline, but the reality underneath is far messier than the number suggests.

The key difference is that price and cost are not the same thing.

SemiAnalysis found that Huawei's CloudMatrix 384, the rack system powering most Chinese AI workloads, draws 4.1x the electricity of Nvidia’s equivalent, the GB200 NVL72, to deliver the same amount of compute. In practical terms, every token generated on Chinese hardware costs more in energy than the same token on Nvidia. The cheap consumer pricing is being propped up by state subsidies and near-zero margins to compensate.

Capacity is the other constraint. The Institute for Progress, drawing on SemiAnalysis and Bernstein projections, puts US production at 6.89 million B300-equivalents in 2026, while Huawei stays between 62,000 and 160,000. That has China's chip stack operating at roughly 1-2% of American output next year. The aggressive pricing is what you'd expect from a much smaller producer with every incentive to gain market share quickly.
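Putting the capacity gap in numbers (a quick arithmetic sketch of my own, using only the IFP figures quoted above):

```python
# Huawei's projected 2026 output as a share of US B300-equivalent production.
us_output = 6_890_000                      # Institute for Progress projection
huawei_low, huawei_high = 62_000, 160_000  # Huawei's projected range

low_share = huawei_low / us_output
high_share = huawei_high / us_output
print(f"China's share of output: {low_share:.1%} to {high_share:.1%}")  # 0.9% to 2.3%
```

Even at the top of Huawei's range, China is producing a low-single-digit percentage of American chip output next year.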

Who buys at that price? Western governments will not touch Chinese AI for sensitive contracts, but the global south, the Middle East, and Southeast Asia almost certainly will. Whichever stack those regions adopt captures the next billion users—this is the next frontier of competition.


Already a subscriber? Get your whole team on board. Signal Pro group subscriptions give everyone access to weekly AI workflows and tutorials, practical upskilling that pays for itself. It’s the kind of thing L&D budgets were made for. Share this with your manager today.

Get a group subscription

POLL
What was your favourite part of The Signal this week?
AI Highlights
Content
Idea
Quote
Question

💡 If you enjoyed this issue, share it with a friend.

Refer a friend

See you next week,
Alex Banks

P.S. South Korea’s first robot pledges to Buddhism.


 
 

© 2026 Alex Banks
548 Market Street PMB 72296, San Francisco, CA 94104