| In partnership with |  |
Kevin O'Leary (yeah, the Shark Tank guy) just got approved to build a 40,000-acre data center in Utah's Box Elder County. That's 2.5x the size of Manhattan, eventually drawing 9 gigawatts of power (more than double Utah's current statewide electricity use), running entirely off-grid via a private natural gas pipeline. | Projected effect on the state's total carbon emissions: roughly a 50% increase. | About 1,100 locals filled the county fairgrounds in opposition, citing water scarcity, grid strain, doubled greenhouse emissions, and damage to the Great Salt Lake. The commission voted yes anyway. The meeting got so loud the commissioners walked out, then projected the rest of the proceedings back into the room from a separate space. | When 1,800 written objections came in on the water-rights change, Commissioner Boyd Bingham told the public, "for hell's sake, grow up." The audience yelled "cowards" and "people over profit" back at him. | O'Leary's response on X: most of the protesters were "paid activists" bused in from out of state, and some of the online opposition was being amplified "by AI." Locals: "Why yes, sir, the AI that YOU are building is amplifying our anger!" | What's a tech optimist to think? Data centers are pretty much public enemy #1 right now for politically charged anti-AI folks… and TBH, powering them entirely with methane-spewing natural gas turbines ain't helping! | But as we've written elsewhere, there are alternatives. Colossus, the most methane-spewing data center in question, is trying to replace its gas turbines with solar power paired with its batteries, for example. Right now it runs on gas turbines plus batteries, but stored solar could eventually replace the turbine part, with the batteries covering the hours when the sun isn't shining.
| Here’s what happened in AI today: | 😺 Hermes Agent v0.13.0 ships as ~30% of OpenClaw users switch 📰 Moonshot AI raises $2B at $20B+ valuation, Kimi maker 📰 Pentagon briefly blacklists Alibaba/Baidu, then retracts 🍪 Prime Intellect Lab moves to GA for self-improving agents 🌟 Three governments adjusted AI oversight stance this week; they're converging
| Hey: Want to reach 700,000+ AI-hungry readers? Advertise with us! | P.S: Love robots? We're starting a new robotics newsletter! Sign up early here. | | 😺 The agent quietly leaving OpenClaw in the dust | Quick Sunday vibes check: a 78-year-old marketing exec who had never written code shipped a working robotics app this week. | That story comes from Clement Delangue's thread on Hugging Face's Reachy Mini app store, which crossed 300+ live apps and roughly 10,000 robots deployed worldwide. The exec used natural language to build it, no Python or special robot software required; in the time it would have taken you to install ROS, this person built a thing the robot now does. | These stories show a pattern: AI is becoming a tool that lets people build things they couldn't before. Anthropic's new Dreaming feature lets agents process past sessions overnight and write themselves new memory while you sleep. And Nous Research's Hermes Agent shipped a major release this week, pushing the same idea further with a persistent personal agent that learns your specific work over time. | Speaking of Hermes: if you don't know, Hermes is kinda like the successor to OpenClaw (the personal AI assistant that defined this category over Christmas). Yesterday it shipped v0.13.0 "The Tenacity Release", with 864 commits from 295 contributors in one week and 8 critical security holes closed. (One was a Discord bug that let bots message users across servers they shouldn't reach.) About 30% of OpenClaw users have switched, per Reddit sentiment surveys, citing easier setup, better memory defaults, and a self-improving learning loop. | Here's what happened: | Hermes Agent launched in February 2026 from Nous Research, the lab behind the Hermes model family. 135K+ GitHub stars, MIT licensed, ships with 40+ bundled skills (modular instruction packs the agent reuses). The architecture is built around a closed learning loop.
After a complex task, Hermes enters a "Reflective Phase": it analyzes what worked, extracts reusable patterns, and writes a new skill file encoding the solution. Next time a similar task arrives, it queries its own skill library instead of reasoning from scratch. Three memory layers (session, episodic via SQLite, procedural skills). Runs on a $5 VPS, a GPU cluster, or serverless. Model-agnostic: works with OpenRouter, Anthropic, OpenAI, Nous Portal, Kimi, MiniMax, GLM, or your own endpoint. Talk to it via Telegram, Discord, Slack, WhatsApp, Signal, Email, or CLI. Yesterday's release added Google Chat as the 20th platform, plus durable multi-agent Kanban with heartbeat, zombie-worker reclaim, retry budgets, and a hallucination gate. It also added persistent /goal for long-running tasks, post-write file linting on every edit, and session auto-resume when the gateway restarts mid-task. Installation is a one-line curl command that auto-handles all dependencies (Python 3.11, Node.js, ripgrep, ffmpeg). Run hermes setup and the wizard auto-detects ~/.openclaw, offering to import settings, memories, skills, and API keys. (Ask your regular AI chatbot to help you set it up if that's confusing.)
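The "episodic via SQLite" layer is easy to picture. Here's a minimal sketch of what such a memory layer could look like; it's illustrative only, with invented table and function names, not Hermes internals:

```python
# Hypothetical sketch of an SQLite-backed "episodic memory" layer.
# All names (episodes, remember, recall) are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real agent would use a file on disk
conn.execute(
    "CREATE TABLE episodes (id INTEGER PRIMARY KEY, ts TEXT, summary TEXT)"
)

def remember(summary):
    """Store a one-line summary of something the agent learned."""
    conn.execute(
        "INSERT INTO episodes (ts, summary) VALUES (datetime('now'), ?)",
        (summary,),
    )
    conn.commit()

def recall(keyword, limit=5):
    """Retrieve the most recent episodes matching a keyword."""
    rows = conn.execute(
        "SELECT summary FROM episodes WHERE summary LIKE ? "
        "ORDER BY id DESC LIMIT ?",
        (f"%{keyword}%", limit),
    )
    return [r[0] for r in rows]

remember("Refactored the billing cron; root cause was a timezone bug.")
remember("User prefers PRs under 300 lines.")
print(recall("billing"))  # → ['Refactored the billing cron; root cause was a timezone bug.']
```

The appeal of SQLite here is that the memory survives restarts, is queryable, and lives in one inspectable file, which fits the self-hosted, runs-on-a-$5-VPS ethos.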
| Why this matters: OpenClaw built the category by organizing everything around a messaging hub; Hermes flipped the design and put the agent's learning loop at the center. Both agents can have AI-written skills, but Hermes's loop is automatic. OpenClaw skills are runbooks you (or an AI you prompt) write up-front. Hermes pauses every ~15 tool calls and after complex tasks, reflects on what just worked, writes a Markdown skill file capturing the pattern, then refines it the next time. Compounding is built in. | Our take: Hermes isn't strictly better. OpenClaw still has 24+ messaging integrations (vs. Hermes's 20), more security scrutiny, and transparent file-per-memory you can inspect. Many power users run both, with OpenClaw as the orchestrator and Hermes as the learning loop. But if you want one self-hosted agent that gets better at your work the more you use it, Hermes is becoming the answer. | |
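The reflect-then-reuse loop can be sketched in a few lines. Everything here is hypothetical (file layout, function names, skill format); it illustrates the pattern, not Hermes's actual implementation:

```python
# Hypothetical sketch of a "reflective" skill loop: check the skill
# library first, and after solving something new, write the pattern
# down as a Markdown skill file for next time. Names are invented.
import tempfile
from pathlib import Path

SKILLS_DIR = Path(tempfile.mkdtemp()) / "skills"

def find_skill(task_keywords, skills_dir=SKILLS_DIR):
    """Query the skill library before reasoning from scratch."""
    if not skills_dir.exists():
        return None
    for skill_file in skills_dir.glob("*.md"):
        text = skill_file.read_text().lower()
        if all(kw.lower() in text for kw in task_keywords):
            return skill_file  # reuse an existing pattern
    return None

def write_skill(name, pattern, skills_dir=SKILLS_DIR):
    """'Reflective phase': persist what just worked as a skill file."""
    skills_dir.mkdir(parents=True, exist_ok=True)
    path = skills_dir / f"{name}.md"
    path.write_text(f"# Skill: {name}\n\n## Pattern\n{pattern}\n")
    return path

# First encounter: no skill exists, so the agent solves it, then reflects.
assert find_skill(["deploy", "staging"]) is None
write_skill("deploy-staging",
            "Run tests, build image, deploy to staging, smoke-test.")

# Next similar task: the library answers instead of re-reasoning.
assert find_skill(["deploy", "staging"]) is not None
```

The compounding effect comes from the second half: every solved task leaves behind an artifact that makes the next similar task cheaper.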
|
Turn AI into Your Income Engine | | Ready to transform artificial intelligence from a buzzword into your personal revenue generator? | HubSpot's groundbreaking guide "200+ AI-Powered Income Ideas" is your gateway to financial innovation in the digital age. | Inside you'll discover:
- A curated collection of 200+ profitable opportunities spanning content creation, e-commerce, gaming, and emerging digital markets, each vetted for real-world potential
- Step-by-step implementation guides designed for beginners, making AI accessible regardless of your technical background
- Cutting-edge strategies aligned with current market trends, ensuring your ventures stay ahead of the curve
| Download your guide today and unlock a future where artificial intelligence powers your success. Your next income stream is waiting. | Get Your Guide | | | |
|
If you've run a long AI session, you know the moment: two hours in, the model is finally good, and then you hit the context limit. Most people manually copy a "where I left off" note into a fresh session. It works, but it's lossy and tedious. | Matt Pocock built and open-sourced a /handoff skill that automates this. It's a SKILL.md (a reusable instruction set you attach to a Claude project) that compacts the session into a clean handoff doc: context, goals, artifacts produced, suggested next steps. A fresh agent or human picks up exactly where you left off. | How to use it:
1. Grab the SKILL.md from Matt's skills repo.
2. Add it to your Claude project (Settings → Skills → Add).
3. Type /handoff when you're about to run out of context.
4. Copy the resulting Markdown into a fresh session.
| /handoff
Compact this session into a clean handoff document. Include:
- Current goal and sub-goals
- Context that took us multiple turns to establish
- Artifacts produced so far (with links/paths)
- Decisions made and why
- What's blocking, if anything
- Suggested next steps for whoever picks this up
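If you'd rather hand-roll the file than grab Matt's, a minimal SKILL.md pairs a short YAML frontmatter block with instructions like the ones above. A sketch (the `name`/`description` frontmatter fields follow the common skills convention; Matt's actual file may differ):

```markdown
---
name: handoff
description: Compact a long session into a clean handoff document before the context limit hits.
---

When the user types /handoff, produce a Markdown handoff doc covering:
the current goal and sub-goals, context that took multiple turns to
establish, artifacts produced (with links/paths), decisions made and why,
anything blocking, and suggested next steps for whoever picks this up.
```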
| Works for any long task, not just coding: research, writing, strategy. Anywhere value compounds across turns and you don't want to lose it. | Total AI beginner? Start here (goes with this video). | Have a specific skill you want to learn? Request it here. | | |
|
Our awesome interview with Isomorphic Labs, the team trying to use AI to make drugs for previously untreatable diseases, and our LIVE AI starter kit session where we answer in-demand questions on how to get started with AI (and what to skip). | | 📰 Around the Horn |  | Okay… I'm on board. This is cool. Now can he make the actual movie (not just a trailer) using only AI? |
|
- Microsoft is in talks to delay or abandon its 2030 100%-renewable-energy pledge, citing AI data-center power demand outrunning renewable supply; it's the first major hyperscaler to publicly walk back a climate goal because of compute scaling. This will not help the public backlash cited above, ppl!!
- IREN signed an AI infrastructure deal with NVIDIA to deploy up to 5 GW of DSX AI infrastructure across global sites; NVIDIA gets a 5-year option to buy up to 30M IREN shares at $70 (a potential $2.1B equity stake).
- Trump is taking a U.S. CEO delegation to China next week, including the heads of NVIDIA, Apple, Exxon, Boeing, Qualcomm, Blackstone, Citigroup, and Visa, with Treasury Secretary Scott Bessent leading the talks.
- Anthropic donated Petri v3.0 to Meridian Labs; the open-source alignment toolkit (deception evals, sycophancy testing, model property checks) now lives at an independent third party, a meaningful "third-party audits" data point as the AI vetting EO debate continues.
- Moonshot AI raised about $2B at a $20B+ post-money valuation in a Meituan-led round; the Kimi chatbot maker is at $200M+ ARR (April), as Chinese frontier labs continue narrowing the gap with Western open-weights leaders.
- The Pentagon briefly added Alibaba and Baidu to its list of Chinese military companies before quietly retracting the addition, exposing internal administration tensions on China policy.
| |
🌟 Sunday Special: Three governments adjusted their AI oversight stance this week. They're converging. | The pattern: limited pre-deployment review focused on cyber and bio capability, plus targeted bans on harmful applications. Here's what moved:
🇺🇸 The U.S.: White House officials briefed AI labs on a working group that would require pre-release review of frontier models. CAISI is already running pre-deployment evaluations for Google, Microsoft, and xAI. The trigger: Anthropic's new Mythos model.
🇪🇺 The EU agreed to simplify parts of the AI Act under SME pressure, with a new ban on nudification apps and adult deepfakes, plus rolled-back compliance costs for smaller AI companies.
🇨🇳 China is opening direct AI risk talks with the U.S. ahead of Trump's May 14-15 Beijing summit with Xi. Topics include unpredictable model behavior, autonomous military systems, and non-state actors obtaining frontier capability via open-source distillation. The 2023 Biden-Xi version stalled out, but perhaps Mythos-class demos changed the calculation.
| Worth noting: The UK's AI Security Institute has become the reference architecture. The U.S. just needs to reinvent it under a different name, and the EU is editing toward it. This is good. | For ALL of the top stories & tools from this week, read our weekly Digest. | | |
| | | That’s all for now. | | What'd you think of today's email? | |
|
| P.P.S: Love the newsletter, but only want to get it once per week? Don’t unsubscribe—update your preferences here. |
|