Security & Governance

AI needs global governance standards to safely scale, not a policy patchwork

AI Data Press - News Team | November 7, 2025

Vivek Kumar, AVP of Data Protection and AI Governance at AI and data strategy firm EXL, discusses the importance of international standards to guide AI governance and mitigate risks.

Source: Outlever.com

Key Points

  • AI governance is becoming a strategic necessity, demanding a focus on ethics, transparency, and accountability in AI systems.

  • Vivek Kumar, AVP of Data Protection and AI Governance at EXL, discusses the importance of international standards to guide AI governance and mitigate risks.

  • The shortage of skilled AI governance professionals is a major hurdle for companies aiming to implement robust frameworks.

  • Balancing AI innovation with risk management requires global cooperation and alignment with emerging regulations.

Every country is trying to create its own AI guidelines and codes of conduct. The challenge is: how do we unify these into a standardized governance model?

Vivek Kumar

AVP, Data Protection & AI Governance
EXL

The AI governance playbook is being frantically rewritten. As countries race to draft their own rules, what’s needed isn’t just more policy, but a mindset shift: one that embeds ethics, transparency, and accountability directly into AI’s design.

For Vivek Kumar, AVP of Data Protection and AI Governance at AI and data strategy firm EXL, pushing AI governance from afterthought to cornerstone is less about checking boxes and more about rethinking the rules entirely.

Governance gone global: AI's regulatory landscape is a global patchwork. "Every country is trying to create its own AI guidelines and codes of conduct. The challenge is: how do we unify these into a standardized governance model?" Kumar says. That external fragmentation makes internal alignment even more critical. "How do we build ethically aligned processes and guidelines that bring transparency and data disclosure, not just for clients, but for employees too?" he asks.

That challenge isn't just structural; it's human. There’s a glaring hole in the AI governance toolkit. "Limited availability of qualified AI governance professionals and skillsets is really, really challenging right now," Kumar says.

Let go of control: "The challenge with current AI risk frameworks is they're often built just thinking about controls," says Kumar. But the tech has outpaced the rulebook. "What if we approached it differently? Consider AI as another human you’re chatting with—that’s how you need to build your risk framework."

Forget vague ethical hand-waving; what AI needs now is practical standardization. It’s not just about adding terms like "LLM observability risk," though monitoring behavior and ensuring transparency matter. It’s about rethinking AI’s purpose and classification from the ground up to expose and eliminate insidious biases.

Setting the standard: Most companies, he says, have the basics in place. "However, there is a strategic shift in terms of how the governance structure and vision should be worked upon." That shift moves past surface-level safeguards and toward embedding ethics—transparency, fairness, and accountability—directly into AI’s core. And this isn’t a solo effort. Kumar sees global coordination as essential: "We need strong cross-country and industry standards to keep AI on the right path."

Pedal to the metal: Rather than slowing innovation, Kumar sees governance as the accelerator—if it’s built right. His call to action: invest in internal R&D for ethical AI. "Let’s not wait for someone else to build a standard and tell us what to do. We have unique problems," he says. The solution lies in cross-functional collaboration. "You need a more collaborative effort so people understand you, and you’re able to convey the right message."

The identity theft era: Privacy risks are escalating as AI systems generate answers that can be biased, incomplete—or eerily invasive. "We're really getting into an identity theft era where you don’t know what information about you is being processed by whom," Kumar warns. He’s seen firsthand how AI can infer deeply personal insights from just a sliver of context. "Even limited context can now be used to generate more data—your behavior, your likes, your dislikes."

Balancing act: So, what’s the endgame? Not a utopian AI free-for-all, and not a hard stop, but a carefully calibrated balance. "There has to be a balance between how rapidly this innovation is evolving and how we avoid the associated threats," Kumar says. Achieving that balance means aligning mindsets, skillsets, and ethics, and doing it all at a global scale. "We need to keep a close eye on how geopolitical collaboration is developing, which countries are introducing AI regulations and standards, and how we can keep pace to adopt them."