Data & Infrastructure

Why Sovereign AI Could Be the Next Enterprise Horizon Beyond the Cloud

AI Data Press - News Team | September 28, 2025
Credit: Outlever

Key Points

  • AI adoption now involves balancing public cloud speed with sovereign system control, according to Brian Ray, Head of Data Analytics & AI at Atos.

  • He describes how a lack of regulation and planning leads to high failure rates in AI projects, with some firms already leaving public cloud services over cost and control concerns.

  • Ray explains why sovereign AI is quickly gaining traction, even as it faces challenges such as proprietary training data and the prohibitive cost of reproducing models.

  • He concludes by highlighting the need for adaptable sovereign strategies, citing as an example political shifts that can render AI frameworks non-compliant overnight.

"AI without regulation is like playing tennis without a net. It's like riding a motorcycle without a helmet. The idea that you'll move quicker and better without constraints is a fallacy. In fact, standards don't slow you down. They actually make you better."

Brian Ray

Head of Data Analytics & AI for the US and Canada
Atos

For enterprise leaders, the decision to adopt AI no longer means choosing between the public cloud for speed and sovereign systems for control. Instead, a more nuanced reality is emerging, one where sovereignty is a spectrum and organizations must judge where each workload belongs. Misjudge the balance, however, and the result can be wasted investment, compliance penalties, security failures, or unnecessary limitations.

Brian Ray is the Head of Data Analytics & AI for the US and Canada at IT services and consulting giant Atos. A technology executive with 25 years of experience in AI and data innovation, Ray draws on a career that includes serving as CEO of the Chicago Python User Group (ChiPy) and building data science teams at Deloitte. For Ray, sovereign AI boils down to one thing: control over data and systems.

  • The illusion of speed: A lack of disciplined planning for risk and ROI is creating an illusion of speed that ends in failure 95% of the time, Ray said. “AI without regulation is like playing tennis without a net. It's like riding a motorcycle without a helmet. The idea that you'll move quicker and better without constraints is a fallacy. In fact, standards don't slow you down. They actually make you better.”

  • A multi-layered approach: Exercising this control means navigating regulations across distinct layers, Ray continued. “Your region, your industry, and your organization. It can also get down to a personal level.” A multi-layered reality like this one forces organizations to make deliberate choices about governance, he explained. For example, the University of Michigan recently decided to give individual professors the authority to govern AI use on a per-classroom basis.

Already, the shift toward sovereignty is prompting some companies to exit the public cloud over cost and control concerns. Ray acknowledged the short-term risk of losing access to APIs, but he called it a temporary gap, noting that model creators are already adapting their technology for sovereign environments to meet rising demand. Control doesn't end with infrastructure, however, according to Ray.

Even within a sovereign system, the term "open source" is a misnomer that creates major blind spots, he explained. Two obstacles to true AI sovereignty arise as a result: hidden training data and a lack of reproducibility.

  • Hidden training data: The first major blind spot is the proprietary nature of the data used to train foundation models. This "black box" problem means organizations can never have true sovereignty over the technology, even when deployed in a secure, controlled system. "The training data isn't open, so you don't know what's inside the model. Even in a controlled system, you aren't truly sovereign because you don't have control over that core data."

  • A lack of reproducibility: The second blind spot is a lack of reproducibility, which breaks the traditional promise of open-source software. The immense computational and financial costs required to train or modify a model create a new barrier that code access alone cannot solve. "The original promise of open source was, 'If I find something wrong, I can fix it.' You can't do that with AI. The computational requirements are so high, you would need a garage full of GPUs."

Today, those internal risks are compounded by external volatility. Ray described “political whiplash,” where sudden policy shifts can render an AI framework non-compliant overnight. For an enterprise with AI in production, such a reversal can mean fines and shutdowns. Giving firms the control to adapt, he said, makes a compelling case for a sovereign strategy that acts as a buffer.

Navigating this landscape means thinking like a gardener tending a bonsai tree, favoring deliberate design over chaotic growth, Ray concluded. Put off that structure, and imposing it becomes an expensive, complex retrofit. "Building that bonsai framework, where you're controlling the process while still letting it grow, has to be set up from day one. If it isn't, you'll be in trouble later when you have to apply those standards with no basis for how to do it."