Dear friends,
There is now a path for China to surpass the U.S. in AI. Even though the U.S. is still ahead, China has tremendous momentum with its vibrant open-weights model ecosystem and aggressive moves in semiconductor design and manufacturing. In the startup world, we know momentum matters: Even if a company is small today, a high rate of growth compounded for a few years quickly becomes an unstoppable force. This is why a small, scrappy team with high growth can threaten even behemoths. While both the U.S. and China are behemoths, China’s hypercompetitive business landscape and rapid diffusion of knowledge give it tremendous momentum. The White House’s AI Action Plan released last week, which explicitly champions open source (among other things), is a very positive step for the U.S., but by itself it won’t be sufficient to sustain the U.S. lead.
Because many U.S. companies have taken a secretive approach to developing foundation models — a reasonable business strategy — the leading companies spend huge sums to recruit key team members from each other who might know the “secret sauce” that enabled a competitor to develop certain capabilities. So knowledge does circulate, but slowly and at high cost. In contrast, in China’s open AI ecosystem, many advanced foundation model companies undercut each other on pricing, make bold PR announcements, and poach each other’s employees and customers. This Darwinian life-or-death struggle will lead to the demise of many of the existing players, but the intense competition breeds strong companies.
Keep building! Andrew
A MESSAGE FROM DEEPLEARNING.AI
Build AI applications that access tools, data, and prompt templates using Model Context Protocol (MCP), an open standard developed by Anthropic. In “MCP: Build Rich-Context AI Apps with Anthropic,” you’ll build and deploy an MCP server, make an MCP-compatible chatbot, and connect applications to multiple third-party servers. Sign up now
News
White House Resets U.S. AI Policy
President Trump set forth principles of an aggressive national AI policy, and he moved to implement them through an action plan and executive orders.
What’s new: In “Winning the Race: America’s AI Action Plan,” the White House outlines a trio of near-term goals for AI in the United States: (i) stimulate innovation, (ii) build infrastructure, and (iii) establish global leadership. As initial steps in these directions, the president directed the federal government to (a) procure only “ideologically neutral” AI models, (b) accelerate permitting of data-center construction, and (c) promote exports of AI technology.
How it works: Rather than advocating for legislation or legal challenges, the plan focuses on actions the executive branch of government can take on its own. President Trump had ordered technology advisor Michael Kratsios, AI advisor David Sacks, and national security advisor Marco Rubio to make a plan to “sustain and enhance America’s global AI dominance” within days of starting his current term. Senior policy advisors Dean Ball and Sriram Krishnan, among others, also played key roles.
Behind the news: In contrast to President Trump’s emphasis on U.S. dominance in AI, the previous Biden administration focused on limiting perceived risks.
Why it matters: The Trump administration’s action plan sets the stage for U.S. AI developers to do their best work and share their accomplishments with the world. It aims to avoid the European Union’s risk-averse regulatory approach and counter China’s rising power and influence in AI development. To those ends, it prioritizes a unified national AI policy, streamlines the building of infrastructure, facilitates distributing models and hardware abroad, supports the development of datasets and open-source models, and refrains from defining arbitrary thresholds of theoretical risk.
We’re thinking: This plan is a positive step toward giving the U.S. the infrastructure, global reach, and freedom from bureaucratic burdens that it needs to continue — and possibly accelerate — the rapid pace of innovation. However, the executive order in support of models that are “objective and free from top-down ideological bias” is wrong-headed. The president complains that some AI models are “woke,” and he wants to discourage references to climate change, diversity, and misinformation. But putting those requirements into an executive order, even if it clears some roadblocks to AI development, risks emphasizing some of Trump’s own ideological preferences.
Qwen3’s Agentic Advance
Less than two weeks after Moonshot’s Kimi K2 bested other open-weights, non-reasoning models in tests related to agentic behavior, Alibaba raised the bar yet again.
How it works: The updated Qwen3 models underwent pretraining and reinforcement learning (RL) phases, but the company has not yet published details. During RL, the team used a modified version of Group Relative Policy Optimization (GRPO) that it calls Group Sequence Policy Optimization (GSPO).
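Alibaba hasn't published full training details, but GSPO's core modification to GRPO (replacing per-token importance ratios with a single length-normalized, sequence-level ratio that is clipped PPO-style) can be sketched roughly as follows. The function, shapes, and toy values are illustrative assumptions, not Alibaba's code:

```python
import numpy as np

def gspo_objective(logp_new, logp_old, lengths, rewards, eps=0.2):
    """Rough sketch of a GSPO-style objective for one group of G responses.

    logp_new, logp_old: summed token log-probs per response under the current
    and old policies, shape (G,). lengths: token counts per response.
    rewards: scalar reward per response.
    """
    # Group-relative advantage, as in GRPO: normalize rewards within the group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Sequence-level, length-normalized importance ratio, in contrast to
    # GRPO's per-token ratios: s_i = exp((logp_new_i - logp_old_i) / |y_i|).
    s = np.exp((logp_new - logp_old) / lengths)
    # PPO-style clipping, applied to the whole sequence at once.
    return np.mean(np.minimum(s * adv, np.clip(s, 1 - eps, 1 + eps) * adv))
```

When the current and old policies agree, every ratio is 1 and the group-normalized advantages average to zero, so the objective is zero; updates are driven by responses whose rewards deviate from the group mean.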
Performance: The authors compared Qwen3-235B-A22B-Instruct-2507 and Qwen3-235B-A22B-Thinking-2507 to both open and proprietary models across tasks that involved knowledge, reasoning, coding, and tool use. They compared Qwen3-Coder to open and proprietary models on agentic tasks (coding, tool use, and browser use).
Why it matters: Developers of open-weights models are adjusting their approaches to emphasize performance in agentic tasks (primarily involving coding and tool use). These models open doors to a vast range of applications that, given a task, can plan an appropriate series of actions and interact with other computer systems to execute them. That the first wave of such models was built by teams in China is significant: U.S. developers like Anthropic, Google, and OpenAI continue to lead the way with proprietary models, but China’s open-weights community is hot on their heels, while the U.S. open-weights champion, Meta, may step away from this role.
Learn More About AI With Data Points!
AI is moving faster than ever. Data Points helps you make sense of it just as fast. Data Points arrives in your inbox twice a week with six brief news stories. This week, we covered Z.ai’s launch of a low-cost, agentic GLM-4.5 model and Cursor’s release of Bugbot for automated code reviews. Subscribe today!
U.S. Lifts Ban on AI Chips for China
Nvidia will resume sales of H20 processors in China.
What’s new: Nvidia and AMD said they’ll resume supplying China with graphics processing units (GPUs) tailored to comply with U.S. export restrictions, including Nvidia’s H20 and AMD’s MI308. The Trump administration, which had blocked the sales, assured the companies it now would allow them.
How it works: In April, the White House announced that shipments to China of Nvidia H20s, AMD MI308s, and equivalent chips would require export licenses, which apparently would not be forthcoming. That requirement effectively shut both companies out of China, which in 2024 accounted for 13 percent of Nvidia’s revenue and 24 percent of AMD’s. The White House’s decision to grant the licenses follows months of lobbying by Nvidia CEO Jensen Huang.
Behind the news: U.S. lawmakers of both major parties aim to protect U.S. economic interests and prevent China from using advanced chip technology for military applications.
Why it matters: AI presents geopolitical opportunities for technological and economic dominance as well as challenges to military power. The U.S. export restrictions are intended to balance these elements, yet they have been largely ineffective so far. This year, DeepSeek developed DeepSeek-R1, which delivers high performance for a low development cost. H20s were among the hardware used to train that model, TechCrunch reported. Alibaba, Moonshot, Tencent, and other Chinese companies also have produced high-performance foundation models, while China has accelerated its own semiconductor industry to avoid relying on U.S. suppliers. Relaxing the restrictions may balance U.S. interests more effectively.
We’re thinking: Ensuring national security is crucial, but so is enabling the free flow of ideas and innovation. We applaud the relaxation of trade restrictions and look forward to further contributions by developers in China and around the world.
People With AI Friends Feel Worse
People who turn to chatbots for companionship show indications of lower self-reported well-being, researchers found.
What’s new: Yutong Zhang, Dora Zhao, Jeffrey T. Hancock, and colleagues at Stanford and Carnegie Mellon examined correlations between users’ chatbot usage and psychological health. The more frequently users chatted, shared personal information, and went without human social relationships, the lower they rated their own well-being, the authors found.
Key insight: Chatbot users may not report the subject matter of their conversations accurately, but LLMs can identify and summarize topics in chat histories. This makes it possible to correlate the intensity and depth of chats with self-reported measures of well-being, such as loneliness and satisfaction.
How it works: The authors surveyed 1,131 users of the chatbot service Character.AI, which provides chatbots for purposes like roleplay, conversation, and education. In addition, they gathered 413,509 messages from 4,363 conversations with 244 participants who agreed to share their chat logs.
Results: The authors computed correlations among the various signals and six measures of well-being. They found that most users turned to chatbots for companionship, whether or not they selected companionship as a motivation for their chats. Furthermore, reliance on chatbots for companionship indicated lower well-being.
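The study's correlational analysis can be illustrated in miniature with a Pearson correlation between one usage signal and one well-being score. The variable names and toy numbers below are hypothetical, not data from the paper:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation between a usage signal and a well-being measure.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical per-user signals: daily chatbot messages vs. a self-reported
# loneliness score (higher = lonelier).
messages = [5, 40, 12, 60, 25, 8]
loneliness = [2, 6, 3, 7, 5, 2]
r = pearson_r(messages, loneliness)  # strongly positive for this toy data
```

A positive r here would mirror the paper's finding that heavier companionship-oriented use goes with lower well-being, though, as the authors note, correlation alone says nothing about which way the causation runs.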
Yes, but: The authors found a consistent correlation between chatbot companionship and lower well-being, but they didn’t establish causation. The data shows that people who sought companionship from chatbots likely struggled with loneliness or a lack of close social connections. It remains unclear whether loneliness caused the users to use chatbots for companionship or vice versa, or whether using chatbots relieved or exacerbated their loneliness.
Behind the news: AI companions have been shown to bring both benefit and harm. Some studies report short-term benefits like reduced loneliness and emotional relief. Users say chatbots are nonjudgmental and easy to talk to. But other work has found emotional overdependence, distorted relationship expectations, and harmful behavior encouraged by unmoderated bots.
Why it matters: Increasingly, people converse with chatbots as an alternative to human conversation. Chatbot builders must be aware of the potential pitfalls of using their products and conduct research sufficient to enable them to build more beneficial bots. Of course, society also has a role to play by fostering social support through access to community, care infrastructure, and mental-health services.
We’re thinking: Whether it’s beneficial or not, developers are building chatbots that aim to form relationships with people. Such relationships appear to fulfill many of the same needs as human relationships, and they do so in ways that many people, for a wide variety of reasons, find more practical or comfortable. Some developers may be tempted to exploit such needs for profit, but we urge them to design apps that focus on strengthening human-to-human relationships.
Work With Andrew Ng
Join the teams that are bringing AI to the world! Check out job openings at DeepLearning.AI, AI Fund, and Landing AI.
Subscribe and view previous issues here.
Thoughts, suggestions, feedback? Please send to thebatch@deeplearning.ai. Avoid our newsletter ending up in your spam folder by adding our email address to your contacts list.