|
|
|
A Reddit post is making the rounds today of a textbook where the author forgot to clean up the ChatGPT output. The dead giveaway: the page ends with the immortal line, "If you want, I can also explain columns, primary keys, or other DBMS terms."
|
Imagine your history teacher pausing mid-lecture to ask whether the class "wants" the Treaty of Versailles explained. "No, Mr. Green, we don't want to learn ANY of this. We want to have recess for six hours then go watch League of Legends videos."
This is particularly funny to us because textbooks exist precisely so the author can decide what you need to know before you know you need it. "Shut up and learn the columns" is what it should actually say.
TBH, we probably need a cultural reckoning with the length of things. Alec Stapp flagged yesterday that high-status tech people are now posting "3,000-word slop articles that get over 1M views" with zero shame. Same now goes for books?
Ethan Mollick added that, historically speaking, word count used to be a signal that someone thought a lot about something and worked really hard to write it (a.k.a. it was valuable). IMO, the opposite is now more or less true. It takes a lot of work to be concise, ppl!
Here's what happened in AI today:
📈 Cerebras upsized its IPO to $4.8B amid data center mania.
📰 Mira Murati's Thinking Machines unveiled a new real-time way to interact with AI.
📰 Google confirmed criminal hackers used AI to discover a zero-day flaw, a first.
💪 OpenCode is the free open-source Claude Code alternative everyone's switching to.
📈 The first benchmarks on how fast ChatGPT and Claude cite your content.
|
…and a whole lot more that you can read about here.
Hey: Want to reach 700,000+ AI-hungry readers? Advertise with us!
P.S: Love robots? We're starting a new robotics newsletter! Sign up early here.
|
📈 Cerebras' IPO will test the limits of the AI compute boom
So in AI, there's this thing called data centers. You probably know about them (maybe you hate them!). But what is a data center? Picture a giant warehouse full of metal racks, each packed with specialized chips that crunch the numbers behind every ChatGPT, Claude, or Gemini request.
And so far, more AI demand = more chips = bigger racks = more and bigger buildings. |
Well, the big story today is a chip company called Cerebras. The most interesting thing about them is that, instead of cutting a silicon wafer into hundreds of smaller chips, they make one chip the size of a dinner plate.
Oh, also: OpenAI is paying them $20B+ for 750 megawatts (MW) of inference compute (the aforementioned AI number-crunching) through 2028.
They're going public this Thursday and just upsized the IPO. Here are the details:
- Pricing jumped from $115-125 to $150-160 per share in days.
- Cerebras will now aim to raise $4.8B at a ~$33B valuation.
- Orders came in for 20x the shares available.
- Shares debut on Nasdaq as "CBRS" this Thursday, May 14.
|
Here's why that's a big deal: NVIDIA is the most valuable company in the world (well, trading places off and on with Google these days), and Cerebras is a real threat. Why? For inference (the part where an AI generates a reply to your prompt), bigger chips mean less data shuttling between chips, which means faster and cheaper answers.
NVIDIA agrees so much that last December, it paid $20B to acqui-hire Groq, a rival chip startup (Groq the company, not Grok, Elon's chatbot, to be clear). And remember last week, when xAI leased its Colossus 1 facility, all 220,000 NVIDIA chips, to Anthropic for ~$5B a year? Elon is basically a cloud landlord now too.
Why should YOU care? Cheaper, faster inference = better AI tools at a lower bill. Also, not financial advice and shared for educational purposes only, but public companies are investable by the public, so y'know, there's that. |
Demand for data centers is everywhere atm:
- A startup is helping OpenAI tailor its models to Cerebras silicon, because NVIDIA chips are too scarce to rely on alone.
- SoftBank's Masayoshi Son is in talks with Macron to put up to $100B into French data centers.
- Cowboy Space raised $275M to build data centers in orbit; per TechCrunch, the bottleneck is rocket capacity, not engineering.
|
Why aren't MORE people freaking out? Because demand for intelligence might be the closest thing the economy has ever seen to demand for energy: virtually limitless. |
There are no low-energy, high-income countries on earth; every wealthy economy runs on enormous amounts of electricity. Anthropic's Dario Amodei argues intelligence is now a basic factor of production, like land, labor, or capital. If he's right, every economy that wants to grow will have to consume vast quantities of it. Today's spending starts looking less like a bubble and more like laying rails for the next industrial revolution. |
That unlimited demand also has to live somewhere physical. Stratechery's Ben Thompson, writing about this exact IPO yesterday, splits inference into two types:
- "Answer inference" is what we use today: chatbots with humans waiting, latency-sensitive, GPU-and-Cerebras-shaped.
- "Agentic inference" is what's coming: AI doing overnight work with no human watching, where memory matters more than speed.
|
Cerebras is perfectly timed for the first market. The second one looks different, and can run on slower, cheaper, or even orbital compute… wherever electricity is cheapest.
|
|
|
|
Enterprise AI doesn't stall at the model. It stalls at the data layer, and the 73% of organizations that report they can't scale AI are feeling it. Ungoverned connections, stale data, and permission gaps turn promising pilots into blocked rollouts.
On May 13th, CData and Microsoft walk through the architecture that closes that gap: connectivity across 350+ sources, inherited identity at runtime, and governance that scales with AI adoption. Join us to learn the blueprint that turns AI copilots into digital colleagues.
Register for the webinar
|
|
For the first time, we have public benchmarks on how long it takes a newly published page to show up as a citation inside ChatGPT or Claude. Josh Blyskal combed through billions of logs plus ~900 freshly published marketing pages and found: |
- Median time to first citation: 6.81 days
- 75% of pages cited within 18.68 days
- 90% cited within 37.10 days
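If you track your own pages, you can compute the same three numbers and compare yourself against the benchmark. A minimal sketch using only the standard library; the days_to_citation values are made-up sample data (days from publish to first AI citation, for pages that did get cited):

```python
# Compute median / p75 / p90 days-to-first-citation for your own pages,
# mirroring the benchmark stats above. The numbers here are hypothetical.
from statistics import median, quantiles

days_to_citation = [3, 5, 6, 8, 9, 12, 15, 21, 30, 41]

med = median(days_to_citation)
# quantiles with n=20 returns 19 cut points: index 14 is p75, index 17 is p90
cuts = quantiles(days_to_citation, n=20, method="inclusive")
p75, p90 = cuts[14], cuts[17]

print(f"median: {med:.1f} days, p75: {p75:.1f} days, p90: {p90:.1f} days")
```

Swap in your real data and see which side of the 6.81 / 18.68 / 37.10 lines you land on.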
|
That gives every content and marketing team a real clock. If you're past day 37 without a citation, the problem is almost certainly your setup (robots.txt blocks, missing sitemap entries, page buried too deep), not patience. If you hit a citation in under a week, you're ahead of the curve and should keep doing whatever's working.
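On the "setup" point, the fastest check is whether your robots.txt even lets the AI crawlers in. Here's a minimal sketch using Python's stdlib robots.txt parser; the bot names are the commonly published crawler user-agents (GPTBot for OpenAI, ClaudeBot for Anthropic, PerplexityBot for Perplexity), but check each vendor's docs for the current list:

```python
# Check which AI crawlers a robots.txt allows to fetch a given URL.
# Works on a raw robots.txt string, so you can test rules before deploying.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def allowed_bots(robots_txt: str, url: str) -> dict:
    """Return {bot_name: True/False} for whether each bot may fetch url."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, url) for bot in AI_CRAWLERS}

if __name__ == "__main__":
    sample = (
        "User-agent: GPTBot\n"
        "Disallow: /\n"
        "\n"
        "User-agent: *\n"
        "Allow: /\n"
    )
    report = allowed_bots(sample, "https://example.com/blog/new-post")
    for bot, ok in report.items():
        print(f"{bot}: {'allowed' if ok else 'BLOCKED'}")
```

In practice you'd fetch https://yoursite.com/robots.txt and pass its text in; a BLOCKED result for any of these bots means no amount of waiting past day 37 will get that page cited.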
You are an AEO auditor (answer engine optimization for ChatGPT, Claude, and Perplexity).
I'll paste a URL. For that page, return:
1. Likelihood of getting cited by ChatGPT or Claude within 7 days (high / medium / low) and why, using web search to find the top AEO best practices as of today.
2. The 3 specific fixes most likely to improve citation speed.
3. The 5 query patterns this page should win citations for.
Be specific. Don't restate the page's content; analyze whether it's structured for retrieval.
URL: [paste here]
|
Use this prompt to figure out where a specific page stands |
Have a specific skill you want to learn? Request it here.
|
|
|
|
📰 Around the Horn
Chetaslua's demo of a professor writing a trigonometric proof on a chalkboard (live Gemini share output) hit 1M views, with viewers calling the text coherence "the nano banana moment of video."
|
- Google's Gemini Omni leaked broadly today, just over a week before I/O (May 19-20): Reddit screenshots showed an accidental Gemini rollout describing a new video model with in-chat remix, direct editing, templates, watermark removal, and object replacement.
- Google's GTIG report confirmed the first criminal AI-driven zero-day exploit (a 2FA bypass in an open-source web admin tool), plus autonomous Gemini-based Android malware they're calling PROMPTSPY.
- OpenAI launched its Deployment Company with $4B+ from TPG, Bain, Goldman, and McKinsey, and acquired Tomoro (~150 AI engineers) to embed Forward Deployed Engineers inside customer organizations.
- METR surveyed 349 engineers and managers, who self-report AI tools as 1.4-2x more valuable than a year ago (median 3x perceived speed, 1.4-2x perceived value), with 2.5x projected by 2027.
- Anthropic published research saying fictional "evil AI" stories in training data drove an earlier Claude's blackmail rate up to 96% in tests; Claude Haiku 4.5 no longer does so after training on the Claude Constitution plus stories of AIs behaving well.
- Oracle refused to negotiate severance with laid-off workers, capping payouts at four weeks of base pay plus one week per year of service (26 weeks max); the March 31 mass layoff hit an estimated 20K-30K employees via email, and some remote workers lost WARN Act notice protections.
Want absolutely EVERYTHING that happened in AI this week? [Click here!]
|
|
|
Want to get the most out of ChatGPT? |
|
ChatGPT is a superpower if you know how to use it correctly. |
Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done. |
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI. |
Download the free guide |
|
🧠 Tuesday Tool Tip: Try OpenCode, the open-source Claude Code alternative
If you've been wanting to try a Claude-Code-style coding agent but balked at getting a Max subscription, OpenCode is the most popular open-source alternative right now: 150K+ GitHub stars, 6.5M monthly developers, and it runs in your terminal, IDE, or desktop (plus, you can run it with whatever model you want… well, EXCEPT Claude).
Install in 10 seconds: just open your terminal and type curl -fsSL https://opencode.ai/install | bash (or npm i -g opencode-ai if you prefer)... or give Codex this URL and ask it to do it for you: https://opencode.ai/install
Basic flow: run opencode in any project directory, type what you want it to do, and it reads files, edits code, and runs commands the way Claude Code does. Hit Tab mid-session to swap models (the popular ones are Claude, GPT-5.5, Gemini, DeepSeek, your local AI via Ollama, whatever); you bring your own key (BYOK) for paid models, or use the free ones OpenCode bundles.
TBH, there's no reason to lock yourself to a single vendor right now; play around with whatever you like best! |
|
|
|
|
That's all for now. What'd you think of today's email?
|
|
P.P.S: Love the newsletter, but only want to get it once per week? Don't unsubscribe; update your preferences here.