In the last few weeks, three big developments have starkly demonstrated both the priorities and the limits of national AI strategies. First, two governments of very different hues have announced ambitious national AI strategies. The UK’s AI Opportunities Action Plan was presented by Prime Minister Keir Starmer on 13 January. On 21 January, President Trump, who had invited the biggest tech companies to take front-row seats at his inauguration, launched the Stargate AI initiative, with a focus on new investment in infrastructure. And just a few days ago, the world of AI was upended by the launch of the Chinese DeepSeek app, challenging what we thought we knew about AI and giving the established companies their worst day on global markets in quite some time.

The DeepSeek story is still playing out. But if the UK and US announcements were about putting their respective countries ahead in the supposed AI race we are all apparently running, the DeepSeek launch showed how fragile any national leadership role might be.

The US has used trade restrictions on chips, the key infrastructure requirement for AI, to safeguard national security and protect its national companies. But according to DeepSeek executives, these restrictions were precisely what drove the innovations that have shaken the market, pushing developers to build models that achieve high performance with less compute.

Avoiding a race to the bottom

This week’s developments in the industry suggest that a policy stance based on locking other countries out of the global race for AI might not, in the end, serve national champions very well. It’s not just that trade protections may incentivize rather than suppress innovation. Like any race that pits individual athletes against each other, the contest only works if it takes place within a framework of common rules, and if lessons are shared. The AI race between nations is real, but countries need collaboration to realize their national ambitions.

The UK’s Action Plan sets out an ambition to be an ‘AI maker and not an AI taker’. But it also recognizes that collaboration is necessary if the UK is to secure the compute capacity it needs, attract international talent and, most critically, help create the global rules and infrastructure for data sharing that will be vital to power models which continue to consume data faster than it is produced. Whether it takes the form of data-sharing agreements, partnerships on skills or compute capacity, or global regulatory standards so that everyone plays by the same rules, collaboration will free countries to focus on what they do best, growing the global AI pool for the benefit of all.

Global cooperation helps humanity win 

Global collaboration is about more than supporting national strategies. There are also collective issues that cannot be tackled alone, from the risk of deepfakes poisoning politics to the dangers posed by autonomous weapons. Risks driven by a global industry are too big for any single country to address without cooperation from others.

Previous waves of technological change, from the development of the oil industry to the rise of social media, show that collaboration early on can prevent potentially existential risks later. From oil companies downplaying the impact of fossil fuels to social media titans allowing harmful content that drives clicks and revenue, history has shown us the dangers of technological innovation led solely by corporate interest, without global standards to guide development from an early stage.

There is also a balance to strike between the interests of countries set on winning a global contest and the public interest. If only US and Chinese athletes were able to compete in marathons, that might serve them well, but it would deprive the world of the thrill of watching Eliud Kipchoge. Competition can spur ambition and innovation, but left unchecked it can work against the public interest, and the same is true of AI at a global level. Collaboration to level the playing field, even a little, will give us the as-yet-unknown breakthroughs still to come from people we haven’t yet heard of in Mozambique, in Mexico, in Mongolia and beyond.

The characterization of AI as a global race, and the ensuing competition to win it, bring a significant risk of fragmented national efforts that undermine the global collaboration and fair competition essential for addressing shared challenges. Usain Bolt, Mo Farah, and Faith Kipyegon all rely on a global set of rules and norms to keep them safe in the race, to stop unscrupulous trainers or race organizers from injecting them with harmful drugs or forcing them to run dangerous distances. They also learn from and inspire each other. To stay safe in the AI race, and to allow national champions to do their very best to make the world a better place, global cooperation is vital to set the rules of the road, control harmful effects, and make sure that competition pushes everyone towards the public good.

What does a safer race look like?

There is no shortage of international organizations hoping to carve out a role in convening, norm setting, capacity building, or rule making on AI. Competition between them for a piece of the action is as acute as it is between national governments. This is inevitable, and there will never be one single approach for all countries, regions, and purposes. But some global institutions, such as the UN, have the reach and legitimacy to provide the umbrella under which the division of regulatory labour between entities is negotiated and worked out. When it comes to the big risks, such as climate change, nuclear weapons, and the management of pandemics, the UN is always brought in as the only globally representative and legitimate body. That needs to be the case here too.

The UN High-Level Panel report, ‘Governing AI for Humanity’, sets out an agenda for the AI rules of the road. Appropriately for a fast-moving technology, this is a mixture of process agreements and outcome agreements. It proposes forums within which governments can collaborate on specific dimensions of AI as the technology changes, including evaluating scientific developments and reviewing progress on national and regional regulations to encourage coherence and interoperability. It also suggests specific mechanisms for collaboration towards outcomes in areas of common interest, such as standards development, data sharing, and key investments to support lower-income countries. It is a good first draft of the global agenda, and one which UN member states endorsed as part of the Pact for the Future, agreed in September 2024.

Ensuring that AI serves the public interest will be a marathon, not a sprint. It is in all of our interests, and it is the mission of the Global Partnership for Sustainable Development Data, to make sure the race is run within a set of rules under which the finishing post serves the whole of humanity, and not just the lucky few.