OpenAI fumbles?
As I noted last week, GPT-5 continues the steady, incremental model improvement we’ve seen over the last 2-3 years, rather than being a step change. However, because it’s called ‘5’ rather than, say, 4.6, and because of the way Sam Altman and other people at OpenAI had talked about it, casually tossing around (essentially meaningless) terms like ‘super-intelligence’, there’s now something of a backlash because the model isn’t a big further jump. It’s about time for the hype cycle to turn, or at any rate people are trying.
Slightly more interesting: the most significant part of GPT-5 is the ‘router’ that auto-selects which model to send your query to, based on its complexity, and as part of that OpenAI got rid of the model picker entirely and retired the 4o model. But it turned out that a small but very loud group of early adopters preferred the tone of voice of 4o, so OpenAI brought it back. This is a classic step in the maturing of a space: as you build out the product, you simplify and abstract, and remove access to the nuts and bolts at the lower levels, but your earliest users like twiddling the nuts and bolts themselves. So you either keep that complexity and then have to maintain it (Windows, Android) or hide it (the web, Mac, iOS).
Finally, Sam Altman did a PR dinner with a group of SF tech journalists, with many maybe-casual remarks, such as that the new head of apps, Instacart’s Fidji Simo, will be launching multiple consumer apps outside of ChatGPT. That gets to the core of all LLM debates: is this one universal UI and one universal product, or an enabling layer and API for many other things? Fewer and fewer people really believe the former. SLOWDOWN, OLD MODELS, DINNER
Chips to China
Chinese open-model labs are doing well, but they’d be doing better with access to Nvidia’s latest and best compute. DeepSeek’s latest model is apparently delayed by attempts to use Huawei’s chips instead, and there’s a lot of smuggling to fill the gap: the US has been hiding location trackers in shipments and two people were just arrested in California.
Trump’s position, on this as on so much else, has been spinning like a weather vane, but the latest idea is that Nvidia (and AMD) will pay the US government 15% of revenue from sales in China - for Nvidia this applies to the H20 product that was designed to comply with a previous set of export restrictions. This defeats the point of export controls - if you don’t want China to have the chips, the price is irrelevant. DEEPSEEK, TRACKERS, ARRESTS, FIFTEEN PERCENT
Bailing out Intel?
After Trump (briefly) claimed that Intel’s new turnaround CEO is too Chinese, apparently the White House is now considering buying a stake in Intel, presumably to support the investment it needs to get its next-generation process to market and stay in the game. As I’ve written before, you don’t have to be Trump to think it would be a major strategic problem for the USA not to have its own cutting-edge chip manufacturing capability. LINK
Perplexity pumps
Perplexity claimed to be bidding $35bn for Chrome, which Google may be obliged to sell as part of one of the ongoing antitrust cases it’s involved in. This was another of the publicity stunts that this company favours (so transparent that literally everyone saw through it - giving an ‘exclusive’ to both the WSJ and Bloomberg didn’t help), but it prompts two thoughts. First, we should remember that while antitrust cases are very boring and rumble on for ever, Chrome really might be subject to a forced sale, with its billions of eyeballs but no revenue of its own, and no-one really knows what that would look like. And second, no-one really knows what the UX of LLMs will be (note OpenAI above talking about spinning off apps), but memory, user data and distribution all seem important. LINK