Every Wednesday, Signal Pro members get a step-by-step AI workflow they can apply immediately. No fluff, just practical guides to upskill you and your team. If you’re only reading the Sunday issue, you’re getting half the picture. Upgrade to paid today.

AI Highlights

My top-3 picks of AI news this week.

1. OpenAI Is Cooking

OpenAI packed four major launches into a single week, reasserting momentum on every surface after months of narrative pressure from Anthropic and Google.
2. The Anthropic Sweep

Reach was the theme of Anthropic’s week. Claude showed up in new corners of the workplace, picked up a proper memory layer for developers building production agents, and landed in a new set of consumer apps outside of work.
3. SpaceX Courts Cursor

SpaceX announced a deep partnership with Cursor, pairing Cursor’s product and distribution with SpaceX’s training compute to build the world’s best coding and knowledge work AI.
Content I Enjoyed

Altman’s Antidote

Sam Altman recently took the stage in San Francisco to unveil World ID 4.0, the most ambitious iteration yet of his iris-scanning identity venture (formerly Worldcoin). The pitch: in a world drowning in AI-generated content, we need “full-stack proof of human” infrastructure to tell real people apart from machines. Altman says we’re already heading there.

The integrations announced alongside it bring this to life. Tinder is rolling out verified-human badges in the US. Zoom built “Deep Face,” which cross-checks three things on every call: the iris-scanned image from the original Orb verification, a live selfie on the participant’s device, and the video frame everyone else sees. DocuSign is attaching proof-of-human to digital signatures. Shopify, Okta, AWS, Vercel and Visa are all building on top. 18 million people across 160 countries have already scanned their irises at an Orb.

Sitting underneath the consumer announcements is AgentKit, a system that lets AI agents carry cryptographic proof they’re acting on behalf of a verified human. Vercel’s “human in the loop” workflow is already live, and Okta is planning a product called Human Principal that lets API builders enforce policies based on whether a real person stands behind an agent. Once autonomous agents start executing transactions at scale, they’ll need exactly that kind of proof. Altman wants World ID to be that rail, taking a toll on every bot-executed transaction made on a human’s behalf.

The obvious tension is that the person selling humanity’s verification layer runs the company that did more than anyone to contaminate the internet with synthetic content in the first place.

And on a somewhat related front, Mythos, Anthropic’s model that was “too dangerous to release,” was accessed on day one by four people in a private Discord. They guessed the endpoint URL from Anthropic’s naming conventions, worked the pattern out from a leak in the Mercor breach three weeks earlier, and used a contractor’s legitimate evaluation credentials to log in. They have reportedly been using the supposedly world-ending model to build simple websites. Frontier AI security is still being held together with string.

Idea I Learned

What Was Anthropic Thinking?

This week, Anthropic quietly removed Claude Code from its $20 Pro plan, with the pricing page pushing agentic coding up to Max at $100/month as the new entry point. Gergely Orosz spotted the change and posted the updated page. Within hours, critics called it “borderline suicidal” for a company whose reputation is built on coding.

The economic logic makes sense: inference costs for coding agents are severe, and reallocating compute from $20 users to higher-LTV customers improves unit economics overnight. Every major lab has a capacity problem, and rationing at the entry tier is one way to solve it.

Interestingly, Claude Code reappeared on the Pro pricing page within 24 hours. This suggests the change was a pricing elasticity test rather than a permanent cut, a “fake door” probe to see who would churn and who would pay up. But that creates its own problem. Anthropic markets safety and integrity as core product differentiators. A silent test on the tier where most developers begin their journey sits awkwardly against that positioning and continues to erode developer relations.
Hours later, OpenAI's Rohan Varma parodied the move by posting that OpenAI was running "a small test for ~100% of Codex users" with top models unlocked across every plan, free and paid, mocking Anthropic's fake-door language.

This is the vibe shift playing out. Whilst the company has continued to ship consistently, the last few months have been a slow accumulation of bruises for Anthropic’s brand. Opus 4.6 was quietly nerfed in February, and users called the company out publicly when response quality dropped. Trust is supposed to be the moat, yet quietly downgrading models and then quietly testing whether developers will swallow a 5x price hike runs against the brand Anthropic has spent the last three years building. And it lands in the week OpenAI has its cleanest counter-narrative in some time.

Quote to Share

Auguste Compt on the UK's Tech Town announcement:

The UK government named Barnsley its first official Tech Town back in February: £500,000 in seed funding over 18 months, with a brief to act as a national blueprint for AI across schools, the NHS, and local businesses. Spread across the borough’s ~250,000 residents, that is £2 a head, or roughly 11p per person per month over the life of the programme.

Anthropic, on the other hand, are paying £630K as a base salary for a single Research Engineer in London. Stock pushes total comp close to a million pounds per engineer. The company is fitting out an 800-person King’s Cross office while OpenAI doubles its London headcount past 500.

Britain has two AI economies running in parallel. King’s Cross is a frontier lab cluster funded by US capital. Barnsley is a public service rollout coordinated with Microsoft, Cisco, Google, and Adobe, who supply the actual AI underneath.

I want this to work. Britain has every ingredient to be a serious AI player. But we’re in a world right now where one AI engineer’s salary is more than an entire town’s tech investment. What’s missing is policy ambition matched to the scale of what’s actually happening in King’s Cross.

Source: Auguste Compt on X

Question to Ponder

“I keep hearing that AI competitors are quietly using each other’s models and infrastructure to build their own products. Is this actually happening?”

Yes, and the evidence is stacking up quickly.

The Information reported this week that Google has assembled a strike team, co-led by returning co-founder Sergey Brin and DeepMind CTO Koray Kavukcuoglu, to close the gap on coding models. Brin told staff that every Gemini engineer must use internal agents for complex, multi-step tasks, and that the eventual goal is what he calls “AI takeoff”: AI that can improve itself.

Steve Yegge’s follow-up post, corroborated by anonymous Googlers, highlights that DeepMind engineers reportedly use Claude as a daily tool, whereas most of the rest of Google does not. When leadership floated removing Claude across the board, DeepMind objected hard enough that some engineers threatened to walk. That dependence is a clear forcing function to build a better in-house coding product: paying for thousands of engineers to have enterprise access to a competitor’s tool is a tall ask, yet that’s exactly the situation at the lab building Google’s frontier model.

The same pattern repeats across the ecosystem. Anthropic runs Claude on Google’s TPUs in a multi-year, multi-billion-dollar deal. AWS poured billions into Anthropic and built Kiro (a competitor to Cursor and Claude Code) around Claude. Microsoft funded OpenAI, then opened GitHub Copilot to multiple models, Claude included.

If the people building these models reach for whichever tool works best (even a competitor’s), there’s little point in being loyal to just one as a consumer. Use what works, and keep your workflows fluid between models. The labs are already doing this.
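If you want that fluidity in your own tooling, the cheapest insurance is keeping the provider behind one thin seam. Here’s a minimal sketch, assuming the official openai and anthropic Python SDKs are installed and API keys are set in the environment; the ask() helper and the model names are illustrative placeholders, not anything from this issue:

```python
# Minimal provider-agnostic chat helper (illustrative sketch, not a
# recommendation of specific models). Assumes `pip install openai anthropic`
# and OPENAI_API_KEY / ANTHROPIC_API_KEY in the environment.
from openai import OpenAI
import anthropic

# Placeholder model names: swap in whatever is current for you.
MODELS = {
    "openai": "gpt-4o",
    "anthropic": "claude-3-5-sonnet-latest",
}

def ask(prompt: str, provider: str = "openai") -> str:
    """Send one prompt to the chosen provider and return plain text."""
    if provider == "openai":
        client = OpenAI()
        resp = client.chat.completions.create(
            model=MODELS["openai"],
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""
    if provider == "anthropic":
        client = anthropic.Anthropic()
        resp = client.messages.create(
            model=MODELS["anthropic"],
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    raise ValueError(f"Unknown provider: {provider}")

# Switching labs is now a one-argument change, not a rewrite:
print(ask("Summarise this week's AI news in one line.", provider="anthropic"))
```

The specifics matter less than the shape: once the provider is a parameter, switching models becomes a config change rather than a rewrite.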
Already a subscriber? Get your whole team on board. Signal Pro group subscriptions give everyone access to weekly AI workflows and tutorials: practical upskilling that pays for itself. It’s the kind of thing L&D budgets were made for. Share this with your manager today.
💡 If you enjoyed this issue, share it with a friend.
Invite your friends and earn rewards

If you enjoy The Signal, share it with your friends and earn rewards when they subscribe.