Breakthroughs, anyone?
It’s almost 2026, and AI hasn’t fully cured cancer or solved climate change yet. It would probably be better to have those before we get videos of ourselves skydiving with Margot Robbie (which, I guess, is also a privacy and copyright issue).
We need bigger scientific breakthroughs, and we needed them yesterday. What’s more contested is whether AI can actually help deliver them, and if so, how.
The US announced the Genesis Mission as its big bet to make America the leader in AI-driven science. The patriotism around the launch is a bit of showmanship, and maybe some of the incentive is purely political: uniting the US public around “solving humanity’s greatest challenges” ahead of a future anti-AI stance by the Democrats. But I think the main motivation is simpler: unlike AI’s promised gains in other areas (where we’re still waiting for an AI CEO in the Fortune 500 or a robot picking up your kid from daycare), AI’s benefits in science are already demonstrable. AlphaFold earned its creators a Nobel Prize. AI-assisted research is on the rise and can help in many different ways. For many in evidence-driven policy circles, the gap between the current state and the promised potential seems smaller in science than anywhere else. And since R&D drives substantial economic returns, going big on this makes sense.
What’s actually in the way?
So back to it: cancer, climate change, AI?
The US Government announced a public request for information tied to the launch of the Genesis Mission, and AI in science is only part of it. It is really about how to redesign scientific infrastructure: processes, systems, habits, institutions, talent…
This makes a lot of sense, because AI is not a band-aid for every problem in science. Let me tell you a relevant story. A friend at one of Europe’s best academic institutions spent months last year unable to take measurements because an expensive machine was broken and repairs were slow. I can easily imagine him saying: “AI, AI, AI… AI can’t solve everything! There are real-life problems we scientists deal with.”
He’s right that many problems in science are about resources, workflows, coordination, bureaucracy… AlphaFold can’t fix a broken spectrometer. But AI offers ways to do many things differently, such as predicting that a machine will fail before it does, and it can be an invitation to ask: which parts of how we do things should fundamentally change? Back to that machine: what caused the failure? How common is this across labs in that institution, or across a whole country? How much research time gets sacrificed to machine downtime? And if this is a systemic problem, should investments (public or private) support AI-assisted alternatives? Which fields already have such equipment, and which would benefit most from hardware investment here? The question shifts from “when can we get this machine fixed, if ever?” to “can we make machine downtime a thing of the past?”
So why does this matter now?
These aren’t new questions. Call it metascience or not, people have long been offering insights into where the bottlenecks in science lie, alongside more out-of-the-box questions, like whether scientific breakthroughs are getting marginally harder to come by. And some of the more policy-relevant recommendations are landing, even if slowly. The OECD published a great compilation of articles on AI in science back in 2023, and its point about academics needing abundant public compute access is now being acted on.
But the stakes have shifted. AI now sits at the center of public debate, and AI in science is framed as a sacred mission. If politicians and investors set the stage that way, the stakes are higher.
And let’s remember: AI is not a band-aid. These promises only materialize through systemic change in how science gets done.
This is why moments like the European Union designing FP10, its science and R&D funding strategy for the next decade, or the US Government seeking public input on Genesis and its broader science funding strategy matter so much. Genesis and FP10 are definitely not an apples-to-apples comparison, but there are not many public initiatives at mega-billion scale, so one tends to compare the two. What stands out to me in the Genesis Mission is its concrete operational targets: “build this infrastructure, hit these milestones in x-many days.” Again, that’s easier said than done if you are a nation state. But will FP10 lay out a strategy that it executes and rigorously oversees as well? The Horizon Program, the EU’s current science investment initiative wrapping up in 2027, declared specific goals, but its main instrument seems to be funding the many projects that apply to it, in the hope that they add up to the broader mission. Will FP10 also rely mostly on coordinated action across many actors (countries, companies, institutions…)? I hope Horizon’s track record will offer FP10 insights into how far to adopt a similar strategy.
Over to you
From a distance, scientific breakthroughs seem more possible than ever. Fly closer, and the scene changes: tangled branches, spider webs, messy bushes, hidden thorns…
As a citizen of the world, I believe:
- Science is one of the few things that can unify across ideological lines, and increasingly, it’s where public expectation and pressure will land.
- Getting the strategy wrong means years of lost prosperity, health, and wellbeing, and again, it’s where public expectation and pressure will land.
As a citizen of the world, I also wonder: is it both exciting and daunting to be a scientist right now? Publishing becomes a numbers game, you’re tired of a hurdle that everybody knows about but nobody has time to solve, convenience and ambition feel like competing priorities… And on top of that, you have to adapt to new ways of doing things while being held accountable for them. I stumbled upon a recent survey, a small but targeted sample, which found that the share of scientists citing lack of skills as a barrier to AI adoption actually increased from 2024 to 2025, and that more than half asked for guidance on how to use these tools. I’d bet science policymakers would say the same: they need clearer signals on what scientists actually need.
One question I’d guess everybody involved (scientists, policymakers, funders) is on the same page about is this: can AI reliably revolutionize science so that we answer harder questions, not just publish faster? And these years matter for that question more than ever. What you do day-to-day as a scientist matters: how intentional you are about what works and what could work better. What funders, public or private, prioritize matters: whether the new things we cheer for are trustworthy or not. And the information flow between all the layers in between is probably more important now than it has been in a long time.
Whether you’re in academia, a startup, an FRO, a corporation, philanthropy, government, or civil society, whatever altitude you’re at, I’m curious what questions you’re sitting with. Happy to chat, or point me to someone I should talk to.


