Data & Infrastructure

Kubernetes Proves Its Adaptability Again As AI Workloads Reshape Cloud Native Infrastructure

AI Data Press - News Team | March 19, 2026

Andrea Giardini, Founder and Lead Consultant at KundoLabs and CNCF Ambassador, explains how Kubernetes is becoming the orchestration layer for AI workloads by abstracting GPU constraints, enabling multi-cloud portability, and letting teams assemble modular infrastructure stacks.


Key Points

  • Recent Kubernetes releases add AI-centric features such as flexible GPU scheduling, and hyperscalers like Google Cloud now let teams specify a range of acceptable GPUs rather than a single device, insulating workloads from hardware shortages and cost volatility.

  • Andrea Giardini, Founder and Lead Consultant at KundoLabs and CNCF Ambassador, says Kubernetes is the platform where the AI infrastructure stack will run, even as the surrounding tooling remains experimental and modular.

  • Kubernetes' ubiquitous API gives enterprises portability across clouds and sovereign providers, a feature that grows more valuable as data sovereignty requirements tighten across Europe and beyond.

Kubernetes has repeatedly proven its adaptability. Now it's showing it can be the backbone for AI infrastructure, ready for whatever the next industry trend demands.

Andrea Giardini

Founder
KundoLabs

Kubernetes keeps adding AI-specific capabilities with each release, and hyperscalers are building on top of them. Google Cloud's Autopilot clusters, for example, now allow teams to specify a range of acceptable GPUs for a workload rather than a single device. If one GPU type is unavailable or expensive in a given zone, the scheduler moves to the next option. In an environment where GPU shortages remain a real constraint, that kind of abstraction turns Kubernetes into the layer that makes AI workloads economically viable at scale.
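In core Kubernetes terms, that kind of GPU flexibility can be sketched with preferred node affinity: the scheduler favors one accelerator type but can fall back to another. The manifest below is a minimal sketch, not GKE's actual range API; the `cloud.google.com/gke-accelerator` label key and GPU model names follow GKE conventions, and the container image is hypothetical.

```yaml
# Sketch: prefer an NVIDIA L4 node, but tolerate a T4 node when no L4
# capacity is schedulable. Label key/values assume GKE; other providers
# expose accelerator types under different node labels.
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100                          # first choice
          preference:
            matchExpressions:
              - key: cloud.google.com/gke-accelerator
                operator: In
                values: ["nvidia-l4"]
        - weight: 50                           # fallback
          preference:
            matchExpressions:
              - key: cloud.google.com/gke-accelerator
                operator: In
                values: ["nvidia-tesla-t4"]
  containers:
    - name: model-server
      image: registry.example.com/model-server:latest  # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1                    # one GPU of whichever type lands
```

Because the affinity terms are preferences rather than hard requirements, the workload stays schedulable even when the first-choice GPU is scarce or priced out in a zone.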

Andrea Giardini is the Founder and Lead Consultant at KundoLabs, a boutique consultancy that specializes in cloud infrastructure for data platforms and AI applications. He is also a CNCF Ambassador, a Kubernetes instructor, and the Co-Founder of Renao, a community organization connecting cloud native professionals across Europe. His client work spans startups and enterprises building production AI and data systems on Kubernetes.

"Kubernetes has repeatedly proven its adaptability. Now it's showing it can be the backbone for AI infrastructure, ready for whatever the next industry trend demands," Giardini says. The abstraction Kubernetes provides between software and hardware is what makes it useful for AI. Rather than pinning workloads to specific GPU models in specific regions, teams can let the scheduler optimize for cost and availability across whatever resources exist. Giardini sees this as the same pattern Kubernetes has followed for a decade: absorb a new category of workload by extending its orchestration model.

  • Proven pattern: "Over the past ten years, Kubernetes has shown over and over its ability to adapt to the market. We saw it when we switched to containers, when we moved to the cloud, and now we're seeing it with AI," Giardini says. More tools are being developed on the platform to run AI efficiently and at lower cost, and recent releases include features designed specifically for GPU-intensive and data-heavy workloads.

  • No turnkey stack: The cloud native ecosystem still operates like a set of puzzle pieces, and Giardini does not expect that to change. "I don't think there is a good AI stack that is so plug-and-play that you just deploy it on your Kubernetes cluster and everything works. But Kubernetes is becoming the platform where these stacks will run," he says. The pluggable nature of cloud native tooling means organizations assemble what fits their requirements, and projects that stay modular and composable are the ones gaining traction.

Running stateful workloads on Kubernetes, particularly databases, remains a strategic trade-off. Giardini acknowledges it requires more operational expertise than stateless deployments, but says the maturity gap is closing, and some organizations have made that commitment deliberately and stand by it.

  • A deliberate trade-off: "I know companies that have made a conscious decision to run all their databases in Kubernetes, and they are very happy with it. But it comes with extra effort, development time, management, updates, and security considerations," Giardini says. Running databases on Kubernetes is increasingly viable thanks to mature operator patterns, but teams need to weigh that operational investment against managed alternatives.

  • Platform as product: For Giardini, the biggest factor in whether a platform initiative succeeds has nothing to do with technology choice. "A platform needs a product manager. You need to serve your developers as if they are your clients, because they are the clients of your platform," he says. Enterprises that treat platforms as internal products that evolve through user feedback avoid the pattern he sees too often: finishing one platform only to start building the next one from scratch.
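As one example of the operator patterns behind the database trade-off above, a cluster definition under the open source CloudNativePG operator looks roughly like this. This is a sketch assuming the operator is already installed in the cluster; the name and storage size are illustrative.

```yaml
# Sketch: a three-instance PostgreSQL cluster managed by the
# CloudNativePG operator, which handles replication, failover, and
# rolling updates. The "extra effort" Giardini describes lives in
# operating, upgrading, and securing this layer over time.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db            # illustrative name
spec:
  instances: 3            # one primary, two replicas
  storage:
    size: 20Gi            # illustrative; size per workload
```

A ten-line spec hides a lot of machinery, which is exactly the point of the trade-off: the operator absorbs routine database operations, while the team takes on responsibility for the operator itself.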

The feature that Giardini thinks deserves more attention than it gets is Kubernetes' ubiquitous API. A deployment manifest on Azure looks the same as one on Google Cloud or Scaleway. Open source projects distribute Kubernetes manifests that target the API, not a specific provider. As data sovereignty pressures grow and organizations evaluate sovereign cloud providers alongside hyperscalers, that portability becomes a strategic asset.

"A deployment in Azure is the same as a deployment in Scaleway, which is the same as a deployment in Google Cloud. Having the same API everywhere makes moving between providers much easier than it was a few years ago using VMs or bare metal," Giardini says. "That's one of the key features that sometimes doesn't get highlighted enough."
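The portability Giardini highlights falls directly out of the core API. The Deployment below uses only vendor-neutral fields, so the same file applies unchanged on any conformant cluster; the container image name is hypothetical.

```yaml
# Sketch: a provider-agnostic Deployment. Nothing here references a
# specific cloud, so the identical manifest works on Azure, Google
# Cloud, Scaleway, or a bare-metal cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0  # hypothetical image
          ports:
            - containerPort: 8080
```

Provider differences do surface at the edges (load balancers, storage classes, node labels), but the workload definition itself targets the Kubernetes API rather than any single cloud.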