In partnership with
So apparently someone on X posted a real Monet painting and told people it was AI, which immediately turned the internet into an emergency Art Criticism symposium.

The comments did exactly what you would expect: people found "slop," "no soul," weird textures, fake passion, and all the usual signs of a machine-made image. One Redditor summed it up perfectly: "All of a sudden everyone's an expert on impressionism."

Another pointed out the best part: the original poster basically prompted people to "describe in detail" why it was inferior… and they returned the most likely tokens.

The real lesson: AI has made people suspicious of everything, but confidence still seems to ship with no fact-checking layer. We have invented the world's first reverse Turing test, where humans prove they are human by hallucinating they are art critics.

Here's what happened in AI today:

• U.S.-China AI talks started as Trump met Xi.
• OpenAI put Codex inside the ChatGPT mobile app.
• OpenAI's Apple partnership reportedly started souring.
• Marvel layoffs pointed to Hollywood's AI shift.
• Agent hooks and goals showed where real AI work is heading.
…and a whole lot more that you can read about here.

Hey: Want to reach 700,000+ AI-hungry readers? Advertise with us!

Quick plug: Corey made another guest appearance on The AI Fix, where he attempts to answer the age-old question: "Will AI keep us as pets or turn us into batteries?" Good luck with that one! Although if Corey were a battery, we bet he'd be a solid-state one!

Thursday in an AI Nutshell: Trump and Xi Talked AI Rules While Codex Went Mobile

Two stories caught our attention today besides all the cybersecurity stuff (there's… a lot) that we're going to deep-dive on this Sunday (def tune in for that!).

In Beijing, President Trump and President Xi kicked off a two-day summit covering trade, Taiwan, rare earths, tariffs, high-tech exports, and, most importantly (to this outlet, anyway): AI.

Meanwhile, back in the US, OpenAI announced Codex is coming to the ChatGPT mobile app, so developers can monitor, steer, and approve coding agents from their phones.

Here's what happened in the Trump/Xi meeting:

• Treasury Secretary Scott Bessent told CNBC that the U.S. and China plan to set up a safety protocol for AI, which he said the U.S. can do because it is "in the lead."
• The protocol focuses on "best practices" for frontier models, especially preventing powerful AI from reaching nonstate actors.
• Bessent pointed to Anthropic's new Mythos model as one reason Washington is nervous about AI cyber capabilities.
• He also predicted a "step-function jump" from upcoming Google Gemini and OpenAI model releases.
• Nvidia CEO Jensen Huang joined Trump's China delegation as U.S. H200 chip sales to major Chinese firms remain stuck in "a lot of back-and-forth."
• Importantly, Xi warned Trump that mishandling Taiwan could put U.S.-China relations in "great jeopardy."
• One key thing: the summit could help stabilize last year's one-year trade truce around rare earths, tariffs, agricultural purchases, fentanyl cooperation, and high-tech export controls.
Now here's what happened with Codex:

• OpenAI brought Codex to iOS and Android in preview across ChatGPT plans.
• More than 4M people now use Codex every week.
• Codex mobile lets you review active threads, screenshots, terminal output, code diffs, test results, and approvals from your phone.
• The work runs remotely in the environment where the project lives, like a laptop, Mac mini, or managed remote workspace.
• OpenAI also added Remote SSH, Hooks, programmatic access tokens for Business and Enterprise, and HIPAA-compliant local use for eligible Enterprise healthcare workspaces.
Why this matters: The diplomacy story is about preventing the Thucydides Trap. That is the political-science idea that war risk rises when a rising power threatens to displace an established power. In AI, the trap looks like this: one side fears falling behind, so both sides race harder, share less, and treat safety coordination as weakness.

That is why even boring-sounding "best practices" talks matter. A safety protocol will not solve Taiwan, chips, rare earths, or military competition. But it could still create one narrow lane where both governments agree that autonomous weapons, model misuse, and surprise capability jumps need guardrails.

Now, on the Codex side, why should YOU (a non-engineer) care about this? Because developers are the preview for how normal people will use agents 12-18 months later. Dev tools get the weird power-user features first, and then they become everyday buttons for normies.

Hooks are the best example:

A /hook is a rule that runs at the right moment. Today, a developer can use one to scan for secrets, run validators, create memories, or customize Codex behavior by project. Tomorrow, a normal person could use the same pattern for just-in-time skills: "Before sending an email, check whether I sound too harsh." "Before booking travel, check my budget." "Before filing an expense, attach the right receipt and categorize it correctly."
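If you're curious what that pattern looks like in code, here is a minimal sketch in Python. To be clear: this is not the actual Codex hooks API, just an illustration of the general shape, and the `tone_hook` function and its list of "harsh" words are made up for the example.

```python
# Illustrative only: NOT the Codex hooks API, just the general hook pattern.
# A hook is a small rule that runs automatically right before a risky action.

def tone_hook(draft_email: str) -> list[str]:
    """Runs right before an email goes out; returns warnings instead of sending."""
    warnings = []
    harsh_words = ["unacceptable", "ridiculous", "incompetent"]  # hypothetical word list
    if any(word in draft_email.lower() for word in harsh_words):
        warnings.append("This draft may sound too harsh.")
    if len(draft_email) > 2000:
        warnings.append("This draft is long; consider trimming it.")
    return warnings

def send_email(draft: str) -> None:
    # The hook gates the action: warn (or block) before anything is sent.
    issues = tone_hook(draft)
    if issues:
        print("Hold on:", "; ".join(issues))
    else:
        print("Sending:", draft[:60])

send_email("This plan is ridiculous and the timeline is unacceptable.")
```

The point is the shape, not the specifics: a small check that fires at exactly the right moment, so you never have to remember to run it yourself.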
Goal-style workflows point the same way:

A /goal tells an agent what "done" means, then lets it work for longer without needing a prompt every five minutes. For normal people, that looks like: "Plan the family trip, compare three options, watch prices for a week, and only ask me when it is time to book." "Clean up my inbox, draft replies, and stop when anything sounds legally sensitive."
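Here's the same kind of purely illustrative sketch for a goal-style workflow (the `Goal` class and `plan_trip` function are invented for this example, not any product's /goal feature): a concrete "done" definition, some unattended work, and a checkpoint where the agent stops and asks a human before doing anything irreversible.

```python
# Illustrative only: a "goal" is a done-condition plus checkpoints where the
# agent pauses and asks a human instead of acting on its own.

from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    done: bool = False
    questions_for_human: list = field(default_factory=list)

def plan_trip(goal: Goal, days_to_watch_prices: int = 7) -> Goal:
    options = ["Option A", "Option B", "Option C"]  # stand-in for real search results
    for day in range(days_to_watch_prices):
        # The agent keeps working between check-ins instead of prompting you every step.
        print(f"Day {day + 1}: re-checking prices for {len(options)} options")
    # Checkpoint: booking is expensive and irreversible, so stop and ask.
    goal.questions_for_human.append("Prices watched for a week. Ready to book?")
    return goal

trip = plan_trip(Goal("Plan the family trip; only ask me when it is time to book."))
print(trip.questions_for_human)
```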
Our take: The big AI shift is continuity. Governments need continuity so rivalry doesn't turn every new model into a crisis. Workers need continuity so we can trust our agents as we hand over more control to the machines.

That makes Thursday's stories feel connected. One was about keeping the AI race from running off the road. The other was about letting an agent keep working while you are away from your desk. Different stakes, same question: how do humans stay meaningfully in the loop, but at a comfortable arm's length?
Stop forgetting what you agreed to

You know that feeling when you leave a meeting and immediately forget half of what you promised?

That's not a memory problem. It's a meetings problem.

Granola helps you become the person who actually follows through. Take quick notes during the call; nothing formal. Granola transcribes in the background and turns those notes into clear summaries with real next steps.

After the call, share notes with the team so everyone's aligned. Or chat with them to pull out exactly what needs to happen next.

No more dropped balls. Just clarity and follow-through.

Download Granola free
The best agent workflows have two ingredients: a goal that defines "done" and hooks that keep the agent consistent while it works.

A goal is the finish line. A hook is a just-in-time rule that runs at the right moment. Developers use these for code checks, tests, and project rules. You can use the same pattern for normal work: trip planning, research, inbox cleanup, budget reviews, meeting prep, and anything else that requires multiple steps.

Copy this:

Help me turn this task into a long-running agent workflow:
[TASK]
Define:
1. The goal: what counts as done, in concrete terms.
2. The checkpoints: where the agent should stop and ask me.
3. The hooks: rules that should run automatically at the right moments.
4. The risky steps: anything public, expensive, irreversible, sensitive, or customer-facing.
5. The progress updates: what I should see while the work is running.
Make it useful for a normal person, not a developer.
Give examples of hooks I can use for this exact task.
Favorite insight: agents get useful when they know when to continue, when to check themselves, and when to bother you.
Want more tips like this? Check out our AI Skill of the Day Digest for May.

Have a specific skill you want to learn? Request it here.
In our latest podcast episode, Corey and Grant sit down with Wen Sang, co-founder and COO of Genspark, to unpack how AI is moving from "answer my question" to "finish the work," with agents that can build decks, research customers, operate software, remember preferences, and run on a cloud computer for you.

Watch and/or Listen: YouTube | Spotify | Apple Podcasts

Around the Horn

These robots from Figure are kinda mesmerizing to watch… it's also funny when it sometimes just chucks one off the conveyor belt for no good reason; been there!
• The Information reported OpenAI is considering legal action against Apple after Apple allegedly limited ChatGPT's iOS role, buried the feature, then started getting cozy with Google and Anthropic for similar AI features. Bold strategy: invite OpenAI to the group project, make it do the bibliography, then ask Gemini to present.
• Cerebras (the AI chip company) minted two new billionaires in its IPO, with shares priced at $185, opening at $350, and closing up 68% at $311.07, a debut that could kick off a wider wave of AI companies racing to public markets.
• Marvel layoffs reflect Hollywood's broader shift toward freelancers, AI tools, and less predictable creative pipelines, according to former Disney animator Tom Bancroft (video).
• PwC expanded its Anthropic alliance and plans to train 30K U.S. employees in Claude Code.
• A tire-changing robot showed how robotics is starting with the most annoying errands first. Respectfully, let it have the lug nuts.
The IT strategy every team needs for 2026

2026 will redefine IT as a strategic driver of global growth. Automation, AI-driven support, unified platforms, and zero-trust security are becoming standard, especially for distributed teams. This toolkit helps IT and HR leaders assess readiness, define goals, and build a scalable, audit-ready IT strategy for the year ahead. Learn what's changing and how to prepare.

Download the Toolkit
Intelligent Insights:

• Brandon Stewart shows that government-controlled media already shapes LLM outputs through training data, with models showing stronger pro-regime bias in languages from countries with lower freedom of the press.
• Lujain Ibrahim finds that sycophantic AI makes real human interaction feel more effortful and less satisfying over time, because validation from chatbots quietly changes what people expect from friends and family.
• UK AISI reports that frontier models' autonomous cyber-task horizons are doubling every few months, with recent models beating earlier trendlines and completing longer tasks than expected.
• Microsoft Research demonstrated that "whimsical" adversarial strategies can break AI agents because weird, out-of-distribution tactics expose failure modes standard safety tests miss.
• Ryan Greenblatt proposed that AI labs run deliberate misalignment training experiments now, including pessimization runs and clean chain-of-thought baselines, so they can study dangerous failure modes before models get much stronger.
• Gallup found 7 in 10 Americans oppose AI data centers in their local area, including 48% who are strongly opposed, making data centers even less popular nearby than nuclear plants as compute demand becomes a local political fight.
That's all for now.

What'd you think of today's email?

P.P.S.: Love the newsletter, but only want to get it once per week? Don't unsubscribe; update your preferences here.