The governance models most organizations rely on for AI were designed for a different era. Policy documents reviewed quarterly, approval committees that meet after development is complete, and compliance checks disconnected from delivery workflows all produce the same result: teams work around them. Shadow AI appears. Risk decisions lose consistency. And the leadership confidence required to scale AI erodes precisely when organizations need it most.
Akshata Udiavar is a Partner and Principal at EY, where she leads EY Studio+ for Banking and Capital Markets. She drives agentic AI transformation across financial services, combining AI-led decisioning with human-centered client experiences for major banking and wealth management institutions. A veteran of Goldman Sachs, she has spent her career at the intersection of financial services strategy, regulatory compliance, and enterprise transformation, with a focus on turning emerging technology into measurable impact across the client experience and product lifecycle. Udiavar points to execution as the dividing line between governance that exists on paper and governance that works in practice.
"Governance only works when it's embedded directly into how teams design, build, and deploy AI rather than added after the fact," Udiavar says. The distinction she draws is operational, not philosophical. Organizations that treat governance as something that happens alongside development, with automated risk checks, reusable guardrails, and proportional controls tied to defined risk tiers, move faster than those that treat it as a gate at the end. The reason is straightforward: standardized governance eliminates repeated decision-making. Teams that do not have to reinvent their approach to risk every time they build something new can focus on the work itself.
Standardize to accelerate: "The fastest organizations don't loosen controls. They standardize and automate them so teams can innovate without reinventing decisions every time," Udiavar says. Reusable guardrails, clear risk tiers, and defined accountability prevent the ambiguity that slows delivery. As enterprise AI investment accelerates, the organizations scaling fastest are those that have made governance part of the development workflow rather than a review after it.
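The idea of proportional, automated controls can be made concrete with a small policy-as-code sketch. The tier names, required controls, and manifest format below are illustrative assumptions, not EY's methodology or any regulator's taxonomy; the point is only that a delivery pipeline can gate deployment on a defined risk tier instead of an ad hoc late-stage review:

```python
# Illustrative sketch of a risk-tier gate a CI/CD pipeline could run before
# an AI use case ships. Tier names, control names, and the manifest schema
# are hypothetical examples, not a specific framework.

RISK_TIERS = {
    "low": {"model_card"},
    "medium": {"model_card", "bias_monitoring"},
    "high": {"model_card", "bias_monitoring", "human_oversight", "data_lineage"},
}

def check_use_case(manifest: dict) -> list[str]:
    """Return the controls still missing; an empty list means the gate passes."""
    tier = manifest.get("risk_tier", "high")          # unknown tier -> strictest
    required = RISK_TIERS.get(tier, RISK_TIERS["high"])
    present = set(manifest.get("controls", []))
    return sorted(required - present)

# A medium-tier use case missing bias monitoring fails the gate with a
# specific, actionable reason rather than a committee escalation.
missing = check_use_case({"risk_tier": "medium", "controls": ["model_card"]})
print(missing)  # ['bias_monitoring']
```

Because the required controls live in one shared table, teams inherit the same guardrails everywhere and never re-litigate what "approved" means for a given tier.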
Shadow AI as symptom: "When governance lives in static policies or late-stage committees, teams work around it and shadow AI starts to appear," she continues. The problem is not that teams are being reckless. It is that disconnected governance creates friction that feels arbitrary, so people route around it. The result is inconsistent risk decisions, fragmented adoption, and difficulty demonstrating the control needed to satisfy regulators and boards.
Trust, in Udiavar's view, is not a sentiment. It is a measurable condition that determines whether AI moves from pilots into core operations. When trust is present, leaders deploy AI in regulated and customer-critical journeys, approvals move faster with fewer late-stage objections, and teams stop debating whether to use AI and start focusing on how to scale it responsibly. When trust is absent, adoption stalls even where the technology is ready.
Transparency over perfection: "Trust increases when organizations prioritize transparency, observability, and traceability over perfection," Udiavar says. The practices that build it include clear disclosure of when and how AI is used, documented model intent and data lineage, continuous monitoring for bias and performance, and human oversight for consequential decisions. These measures give internal teams the confidence to use AI responsibly and give customers a reason to accept AI-driven outcomes.
Non-negotiable guardrails: Some controls cannot flex regardless of speed pressure. Udiavar identifies four: clear data rights and consent, named accountability for AI outcomes, transparency and explainability where AI affects customers or financial results, and continuous oversight rather than one-time approvals. "When these guardrails are clear and consistently applied, governance stops being a roadblock and becomes the foundation for bold, responsible innovation," she says.
The shift Udiavar describes requires leaders to change how they measure governance itself. Instead of evaluating it by how much risk it blocks, effective organizations measure how quickly teams can launch approved use cases, how confidently leadership can stand behind AI-driven decisions, and how effectively the organization responds when something goes wrong. By those metrics, governance is not a cost center. It is what makes enterprise-wide AI adoption possible.
"Innovation without governance can move quickly in isolated pockets, but it rarely scales," she concludes. "Governance is what allows organizations to move fast and safely across the enterprise."