Every Wednesday, Signal Pro members get a step-by-step AI workflow they can apply immediately. No fluff, just practical guides to upskill you and your team. If you’re only reading the Sunday issue, you’re getting half the picture. Upgrade to Pro today.

AI Highlights
My top-3 picks of AI news this week.

1. SpaceXAI’s Spice Trade
In February 2026, SpaceX officially acquired and absorbed Elon Musk’s artificial intelligence startup, xAI, to form SpaceXAI. This week, the company signed a multi-billion-dollar agreement with Anthropic, handing over the full compute capacity of its Colossus 1 data centre to serve Claude inference.
2. Anthropic Targets the Trillion
Anthropic used Code with Claude this week to ship a wave of product updates, pushing Claude deeper into Microsoft’s productivity suite, expanding its agent platform, and rolling out templates aimed at financial services workflows.
3. OpenAI’s Stack Sweep
OpenAI ran an unusually busy release week, with daily announcements stretching across its consumer, developer, and infrastructure offerings.
Content I Enjoyed
The Transformation Paradox
Microsoft just released its 2026 Work Trend Index Annual Report, drawing on a survey of 20,000 AI users across 10 countries. What immediately stuck out like a sore thumb was that the “AI gap” is really an “organisational gap”: workers are very much ready for AI, yet their companies are not. Organisational factors like culture, manager support, and talent practices account for twice the AI impact of individual mindset and behaviour (67% vs 32%), and organisational AI culture alone is 2.5x stronger than any individual factor. Practically, this means the company you work at matters far more than how good you are with AI.

Microsoft calls this the Transformation Paradox. Only 19% of AI users sit in the Frontier zone, where individual capability and organisational readiness reinforce each other. 65% fear falling behind if they don’t adapt, yet 45% say it feels safer to focus on current goals than to redesign their work. Just 13% say they’re rewarded for reinvention when results don’t immediately follow.

What separates the top 16%, the Frontier Professionals, is judgment. Notably, 86% of respondents treat AI output as a starting point rather than a final answer. These are the individuals who refuse to outsource their thinking, honing critical, independent thought that is informed by the AI rather than taking its answers verbatim.

Another point worth highlighting is that the firms pulling ahead are building what the report calls “Owned Intelligence”: institutional know-how that quietly compounds over time, is unique to each firm, is especially hard to replicate, and, most importantly, provides rich context for the AI systems you use.

Idea I Learned
Sam Altman No Longer Believes in UBI
In 2019, Sam Altman put $14 million of his own money into a study on universal basic income (UBI). He helped raise $60 million for the largest experiment of its kind: $1,000 a month for three years, paid to low-income Americans.
At the time, he said it was impossible to have equality of opportunity without some form of guaranteed income. Recently, in an interview with The Atlantic’s Nicholas Thompson, he changed his position: “I no longer believe in universal basic income as much as I once did.” Cash payments may be useful, he argues, but they don’t get at what society is going to need next. Looking back at the data from his own experiment, recipients did spend more overall, but there was no direct evidence of better healthcare access or improvements to physical and mental health. Three years of guaranteed income produced no measurable lift in wellbeing.

In place of UBI, Altman is now backing collective ownership. Rather than fixed cash, he wants people to hold a slice of AI compute they could use, sell, or trade. His reasoning is that AI is shifting the balance between labour and capital, and a fixed cheque only addresses one side of that shift. OpenAI’s recent industrial policy white paper goes further, proposing a Public Wealth Fund that would give every citizen a stake in AI-driven economic growth.

Elon Musk has gone the opposite way. In April, he posted on X calling for “universal HIGH income via checks issued by the Federal government,” arguing inflation won’t follow because AI and robotics will produce goods and services far in excess of any new money supply. His broader pitch is abundance: a future where productivity is so high that money becomes, in his framing, like oxygen, still there but not something you have to think about.

What’s undeniable, even from Altman’s own data, is that work delivers structure, identity, and the feeling of being needed, and a cheque can’t replicate any of that. AI displacement is set to hit white-collar and blue-collar work simultaneously, faster than any prior shift. Whatever form the redistribution takes, it might keep the lights on.
The harder question is what fills the hours when work no longer does, and where meaning comes from once a job stops providing it.

Quote to Share
Jim Fan on robotics’ endgame
Jim Fan, who leads the embodied AI research group at NVIDIA, is one of my favourite voices on robotics. His Sequoia AI Ascent talk last week is 20 minutes long and well worth a watch. One finding from the talk deserves the spotlight: dexterity now has its own scaling law. In plain English, the more human video you feed in, the better the robot’s hands get, on a clean, predictable curve. Language models cracked this same pattern six years ago, and it’s what unlocked the leap from clunky next-token prediction tools to useful, conversational ones via ChatGPT. If the same holds for robotics, it changes everything.

NVIDIA’s lab trained its latest model on 21,000 hours of footage shot from a human’s point of view, with only four hours of someone actually puppeteering a robot in the mix. That’s less than 0.1% of the training data coming from a robot; the rest is humans wearing cameras, simply doing things. Tesla ran the same playbook for self-driving: instead of paying people to collect driving data, every Tesla on the road quietly feeds the system. The data collection runs in the background, and robotics is now trying to pull off the same trick.

There is a catch, though. Language models had the entire internet to learn from. Robots have to deal with the messy physical world, where things slip, break, and behave in ways no simulation fully captures. Even Jim puts a robot that moves indistinguishably from a human two to three years out, which isn’t that far away when you think about it. Still, if dexterity really follows a scaling law, the timeline for useful robots compresses dramatically. Whoever captures the most first-person human video holds the flywheel and a serious advantage.
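As a quick sanity check on the data split quoted above (figures as given in the talk), the teleoperated-robot share really does come out well under 0.1%:

```python
# Back-of-envelope check of the training-data split quoted above.
human_video_hours = 21_000   # egocentric footage of humans doing tasks
robot_teleop_hours = 4       # actual robot puppeteering

robot_share = robot_teleop_hours / (human_video_hours + robot_teleop_hours)
print(f"Robot data share: {robot_share:.3%}")  # roughly 0.019%
```

In other words, about one part in five thousand of the data touches a real robot, which is what makes the human-video flywheel so attractive.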
Source: Jim Fan on X

Question to Ponder
“Why is Chinese AI so much cheaper than American AI, and should that worry US providers?”

I had a great discussion with a subscriber this week about why Chinese AI is so much cheaper than American AI. Following DeepSeek’s V4 release at the end of April, the South China Morning Post reported that the cost per conversation on GPT-5.5 is now roughly 32x that of DeepSeek-V4. This makes for a great headline, but the reality underneath is far messier than the number suggests.

The key difference is that price and cost are not the same thing. SemiAnalysis found that Huawei’s CloudMatrix 384, the rack system powering most Chinese AI workloads, draws 4.1x the electricity of Nvidia’s equivalent, the GB200 NVL72, to deliver the same amount of compute. In practical terms, every token generated on Chinese hardware costs more in energy than the same token on Nvidia. The cheap consumer pricing is being propped up by state subsidies and near-zero margins to compensate.

Capacity is the other constraint. The Institute for Progress, drawing on SemiAnalysis and Bernstein projections, puts US production at 6.89 million B300-equivalents in 2026, while Huawei stays between 62,000 and 160,000. China’s chip stack will be operating at roughly 1% of American output next year. The aggressive pricing is what you would expect from a smaller producer with a strong incentive to gain market share quickly.

Who buys at that price? Western governments will not touch Chinese AI for sensitive contracts, but the global south, the Middle East, and Southeast Asia almost certainly will. Whichever stack those regions adopt captures the next billion users, and that is the next frontier of competition.

Already a subscriber? Get your whole team on board. Signal Pro group subscriptions give everyone access to weekly AI workflows and tutorials, practical upskilling that pays for itself. It’s the kind of thing L&D budgets were made for. Share this with your manager today.
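Returning to the capacity numbers in this week’s Question to Ponder: a quick back-of-envelope (using the Institute for Progress estimates quoted above; B300-equivalents normalise different chips into one unit of compute) shows where the “roughly 1%” figure sits within Huawei’s projected range:

```python
# Back-of-envelope on 2026 chip-production projections quoted above.
us_output = 6_890_000                        # US B300-equivalents
huawei_low, huawei_high = 62_000, 160_000    # Huawei projected range

low_share = huawei_low / us_output
high_share = huawei_high / us_output
print(f"Huawei output: {low_share:.1%} to {high_share:.1%} of US production")
```

The quoted “roughly 1%” lines up with the bottom of that range; even the optimistic end of Huawei’s projection stays under 3% of US output.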
💡 If you enjoyed this issue, share it with a friend.
Invite your friends and earn rewards
If you enjoy The Signal, share it with your friends and earn rewards when they subscribe.