Security & Governance

When Global AI Governance Stalls, Scientists and Civil Society Take the Lead

AI Data Press - News Team | March 9, 2026

Fadi Daou, Executive Director of Globethics, explains why global AI governance has stalled and how scientists and civil society are stepping in to fill the leadership void.

Credit: Outlever

Key Points

  • Global efforts to govern artificial intelligence have stalled amid a geopolitical contest between the U.S. and China, leaving a leadership vacuum that multilateral declarations have so far failed to fill.

  • Fadi Daou, Executive Director of Globethics, argues that this top-down gridlock has rendered global governance mostly symbolic, as the most powerful players are unwilling to accept limits.

  • Daou points to a growing bottom-up coalition of scientists, civil society organizations, and enterprises as the most credible path toward governance frameworks that can hold frontier AI accountable.

Right now, AI governance is defined by fragmentation. Everyone talks about global coordination, but the power players aren’t willing to align.

Fadi Daou

Executive Director
Globethics


Many global leaders publicly call for coordination on artificial intelligence, but behind the scenes, fragmentation and gridlock define how AI is actually governed. The push for a unified framework has stalled as the world's two biggest AI players, the U.S. and China, treat leadership as a zero-sum game. That stalemate, entrenched by industry profit incentives and a U.S. drive against data sovereignty, has created a governance gap that no major power is rushing to fill. What's emerging instead is a bottom-up governance model driven by coalitions of scientists, civil society, and responsible enterprises.

Fadi Daou is the Executive Director of Globethics, an international NGO working to equip individuals and institutions for ethical thinking and responsible governance, headquartered in Geneva with centers across five continents. An award-winning thought leader and former CEO of Adyan Foundation, which he scaled from a volunteer group into a globally recognized institution, Daou has spent his career advising organizations like UNESCO and the UN on building frameworks for a more just world. He believes the current top-down approach to AI governance has already failed, yet notes that the gridlock is forcing a new kind of leadership to emerge from the ground up.

"Right now, AI governance is defined by fragmentation. Everyone talks about global coordination, but the power players aren’t willing to align. If 90% of the industry is controlled by a few actors who don’t want limits, global governance becomes mostly symbolic," says Daou. The players with the most influence over any global framework are the same ones competing to dominate it, and they have no interest in slowing down.

That division was on full display at the recent India AI Impact Summit 2026. Billed as a forum to foster global dialogue, the event instead highlighted the deep divisions between the superpowers and the efforts of "middle power" nations to find a third way. The varying readiness of different governments to engage has created a fractured environment where the incentives for the most powerful players are often not aligned.

  • A summit of symptoms: "The recent India AI Impact Summit was a clear symptom of our fragmented and problematic situation. The event featured a near-total absence of official representation from China, while the U.S. signaled it doesn't want global governance for AI. Together, these two players represent 90% of the industry," says Daou. When the dominant players opt out, any framework that emerges represents, at most, the remaining 10%.

  • Déjà vu with a deadline: To explain the dynamic, Daou draws a parallel to the nuclear race that began in the late 1930s, in which the major players were unwilling to unilaterally limit their capacity because each was determined to win. "The talk now is, when will the AI Hiroshima happen? Then we will start to slow down, talk seriously, and put some boundaries in place," he adds.

The geopolitical race has left a patchwork of policies in its wake. Existing tools, like UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence, offer established frameworks, but they are non-binding and largely toothless against national and commercial interests. Further complicating matters, the scientific community that could provide clarity is itself divided. Leading AI safety researchers align on the urgency of the risk, while other prominent voices maintain those fears are exaggerated. As Daou notes, the divide is often interest-based: many of these scientists work for the same companies competing in the commercial race.

  • Passing the buck: In the corporate world, that fragmentation is amplified. Many industry leaders openly acknowledge the risks but continue to prioritize speed, as seen when Anthropic reportedly dropped a key safety pledge just days ago. For every $280 put toward innovation, just $1 goes to safety. "There is a subversive debate about providing AI agents with a legal personality. The motive is to create an escape from accountability for corporate leadership by allowing them to argue that the AI agent itself is responsible. It's a vicious tactic," says Daou.

  • The accountability boomerang: The internal conflict of knowing the risks but proceeding anyway creates a clear line of future liability. Daou predicts this accountability will be reactive, arriving only after significant problems emerge. "In the short term, we will see big problems happen, and then accountability will come. Those responsible will not be able to escape their legal responsibility for the harm caused, especially when they knew their actions could lead to it," he observes.

That forecast of a future reckoning is a key factor galvanizing a new form of leadership from the ground up. It is an inclusive movement, centering the voices and capabilities of the global majority, especially the Global South. As Daou sees it, the hope for progress is being driven by the growing momentum of cross-sector alliances stepping in to act where governments have stalled. In the same venues that showcase the global stalemate, a bottom-up movement is taking shape, driven by those unwilling to wait for a crisis.

The networks now forming across scientific communities, civil society, and responsible enterprises are beginning to shift the narrative. "What we're seeing now is the formation of new coalitions that bring together leading authorities, active civil society organizations, and scientists," says Daou. "In the coming months, these groups will become more vocal and will be instrumental in shaping the narrative." Their goal is to change the balance between risk and responsibility, creating a world where the investment in safety begins to match the power of the innovation itself.