After the AI Action Summit in February 2025, my main takeaway was this:
- While top-down international coordination could benefit safe AI development, it has become increasingly difficult in the current political climate.
- What seems to matter more now are bottom-up efforts, where a few key players or smaller coalitions do meaningful, pragmatic work that can diffuse through adoption simply because it’s useful.
- That’s the vibe I get from the recently released Singapore Consensus on Global AI Safety Research Priorities.
The past AI Safety and AI Action Summit series led to the International AI Safety Reports, commissioned by the UK: the first major comprehensive scientific report on frontier AI safety, backed by 30+ governments and international organizations. When I saw the Singapore Consensus come out, another long report about AI safety with similar expert profiles among the authors, my first reaction was: how is this any different? But digging deeper, I found that it builds on top of the AI Safety Reports by offering a research roadmap for each risk area identified. It’s less about prescriptive findings and more about doing things. In a nutshell:
| | Singapore Consensus | International AI Safety Reports |
|---|---|---|
| Audience | AI researchers, companies, labs, funding bodies | Policymakers, regulators |
| Focus | A research agenda for frontier AI safety | Risk and policy analysis for frontier AI models |
| Tone | Pragmatic and action-oriented | Analytical and prescriptive |
| Title Strategy | Probably wanted to avoid “safety” in the title | Used “safety” to signal focus clearly |
I find it useful and strategic that the Singapore Consensus gets more into the weeds by offering concrete technical research proposals. That said, just like the previous reports, it doesn’t assign responsibilities or provide funding. So it’s a call to action, not enforcement. In this setting, it becomes more important to ask who attended. In the bigger picture, the attendees seemed to include people who probably left the AI Action Summit a bit disappointed, given the lack of tangible progress on international coordination. But still, the attendee list can tell us more:
Companies in the room:
There was attendance from OpenAI, Anthropic, Google DeepMind, Meta, and Microsoft – mostly technical folks, framing this event more as a “technical and scientific collaboration” than any formal commitment on risks or safety. I’d be curious to know whether any Chinese developers, especially DeepSeek, were invited, or whether their presence would have jeopardized some parties’ attendance (or if they were even able to travel). I’m not surprised that Mistral wasn’t there; they haven’t shown any public interest in safety or risk discussions (despite being quite promising as an open-weight developer).
The AISIs in the room:
There were five AISIs represented: the UK, the US, Singapore, Japan, and Korea (plus the EUAIO, if we count that here). It was interesting to see the US AISI represented by Paul Christiano, their Head of Safety (though I’m not sure if he personally attended). It’s encouraging that he remains active in this role, as someone deeply knowledgeable about frontier AI risks. Elizabeth Kelly, the former US AISI director who had previously spoken highly of the International AI Safety Network the US would lead, stepped down after Trump took office and has since joined Anthropic. Perhaps the US AISI’s current position is something like: “We’re building fast, and we need our best scientists to keep our people safe and globally competitive.”
The UK AISI’s participation, with multiple staff members present, reaffirms its ongoing commitment to frontier AI safety research, also evident in its continued projects. The UK was the first country to openly bring catastrophic risks from AI into multilateral policy discussions, and to launch a summit and report series dedicated to safety. But after the most recent AI Action Summit, the UK, along with the US, did not sign the summit declaration and also renamed its AI Safety Institute to the AI Security Institute. Many viewed this shift as the UK aligning with the US trend: “safety is out, security is in.”
So, is Singapore trying to take over the UK’s role? I don’t think so. Singapore may be a trusted international actor, but it doesn’t carry the same geopolitical weight that the UK does in the Western bloc. That said, its focus seems far more pragmatic than the UK’s original aspirations. One key shift stood out to me, though: the Singaporean government investing in an effort that is not “pan-AI” but focused squarely on frontier AI risks and safety. When I attended AI Safety Connect back in February 2025 in Paris, Wan Sie Lee, who represented the Singapore AISI, noted that they don’t focus on existential risks—not because they aren’t important, but because others already do (referring to other AISIs). I should note that Wan Sie Lee may have since changed roles and may have been speaking about her specific unit at the time. The Singapore AISI was represented by someone else at this most recent event, so it’s possible this pivot had already been in motion. Still, it’s a notable move: a targeted, government-backed event explicitly focused on frontier AI safety. It is not surprising that Singapore positions itself as a bridge between Western and Asian actors, and even China. What remains unclear to me is whether the Singapore AISI will have meaningful influence in steering collaboration in the industry and across AISIs specifically on frontier AI risks, or whether this was a one-off event.
What this event reminded me of, though, is that international coordination is still possible, especially when it offers clear, mutual benefit. Past efforts like the AI Action Summit and the International AI Safety Reports were incredibly valuable in elevating safety to a global policy concern. And I think diplomatic efforts will continue, though likely in smaller, more targeted formats going forward, focused on specific risks or shared infrastructure, hopefully supported by academic collaboration that might still influence frontier AI development and safety to some degree.
I wouldn’t have expected to find myself quoting JD Vance, but even in his speech at the AI Action Summit, which famously started with “I’m not here to talk about AI safety… I’m here to talk about AI opportunity”, there was still a line my inner Pollyanna held onto: “Now, this doesn’t mean, of course, that all concerns about safety go out the window. But focus matters, and we must focus now on the opportunity to catch lightning in a bottle, unleash our most brilliant innovators, and use AI to improve the well-being of our nations and their peoples.”


