Public Sector AI Will Advance When Leaders Overcome Governance Paralysis

AI Data Press - News Team | February 2, 2026

Alan Shark, Associate Professor at George Mason University, argues that risk-averse public sector leaders are stalling AI progress by demanding perfect policies before taking action.

Credit: diane555 (edited)

Key Points

  • Government AI adoption lags not because the technology falls short, but because leadership waits for perfect governance while employees already use AI informally and without oversight.

  • Alan Shark, Associate Professor at George Mason University’s Schar School of Policy and Government, explains how this leadership hesitation creates a widening gap between personal use and enterprise action.

  • Progress comes from leaders building AI literacy, piloting low-risk, data-driven use cases, and managing risk through measured experimentation instead of delaying deployment.

We have too many people—especially in the public sector—huddled around the start line, asking if the guidelines and governance are perfect. Those are good questions, but not good enough to hold up progress.

Alan Shark

Senior Fellow
Center for Digital Government

The AI deployment gap in government has little to do with model capability. It's a leadership problem. While individual workers quietly adopt AI for personal productivity, large public sector organizations remain frozen, demanding perfect governance frameworks before taking a single step forward.

That's the view of Alan Shark, Associate Professor at George Mason University's Schar School of Policy and Government and Senior Fellow at the Center for Digital Government. With over 30 years of experience in public technology and a new book on AI leadership for public sector organizations, Shark has spent years watching government agencies struggle to move from pilots to production.

"We have too many people—especially in the public sector—huddled around the start line, asking if the guidelines and governance are perfect. Those are good questions, but not good enough to hold up progress," says Shark. The gap between personal and enterprise adoption, he says, is stark.

  • Raise your hands: When Shark asks rooms full of government employees how many use AI, hands go up in greater numbers each year. Many stay down anyway. "A lot of times in public forums, I see people not raising their hand who I know are using it because they're afraid their coworkers will accuse them of cheating or using AI in an unfair manner." For summarizing meetings, organizing documents, and improving presentations, adoption is already widespread. The enterprise side, however, is a different story.

  • A leadership vacuum: Organizations remain tethered to vendor roadmaps and leadership cultures that frame experimentation as exposure, not progress. Shark describes a vacuum at the top, where hesitation replaces direction. "There is a lack of leadership and a lack of understanding of what the technology can actually do," he says. "I hear leaders say they don't want to be first, they want to be last, they want others to try it and report back." That reluctance is reinforced, not challenged, by the very professional associations meant to guide them. "The organizations these leaders rely on have been very slow to introduce AI in any meaningful way. There is a leadership gap, and they are not getting reinforcement or practical help from their national or regional associations."

But when leaders stay frozen, their employees do not. Shark has written about the emergence of "shadow AI," where frustrated workers start experimenting independently without oversight. The danger is real: staff may inadvertently expose personally identifiable information or build systems that lack accountability. But Shark understands the frustration. "A lot of the younger employees see what AI can do and they want to play with it."

  • Agentic possibilities: The solution, Shark says, is not more waiting but informed action. Leaders need a working grasp of AI fundamentals and the confidence to pilot practical, data-driven use cases. "Anything built on data collection and analysis is low-hanging fruit," he says. Citizen-facing chatbots are an obvious starting point. "Chatbots have already proven successful in augmenting human teams by answering complex questions 24/7 and in multiple languages," Shark says. From there, agentic systems unlock deeper value. "Agentic AI can proactively identify eligibility and help people complete forms using existing information. That's where the impact becomes very real."

  • Pattern recognition at scale: Data is the key, he says. Anything in the public sector that depends on data collection and analysis is ripe for AI. "When you start to look at data, whether it's 311 systems, crime statistics, or benefit programs, AI can look for patterns that humans may not recognize. Most of your calls are coming from this one block? That could be enormously helpful to urban planners." A rough sketch of that kind of hotspot analysis appears below.
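
To make the 311 example concrete, here is a minimal sketch of a hotspot check in Python with pandas. The file and column names (311_calls.csv, block_id) are hypothetical stand-ins for a city's actual service-request export, not a reference to any particular system.

```python
# Hypothetical sketch: flag city blocks whose 311 call volume is far
# above the citywide norm. File and column names are placeholders.
import pandas as pd

calls = pd.read_csv("311_calls.csv")  # one row per service request

# Count requests per block, then flag blocks more than two standard
# deviations above the mean -- candidate "hotspots" for planners.
per_block = calls.groupby("block_id").size().rename("call_count")
threshold = per_block.mean() + 2 * per_block.std()
hotspots = per_block[per_block > threshold].sort_values(ascending=False)

print(hotspots.head(10))
```

Even a simple aggregation like this surfaces the "most of your calls come from this one block" pattern Shark describes; a production system would add anomaly detection and richer features on top.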

Some cities are already moving. Seattle recently published a comprehensive AI plan outlining responsible deployment strategies. The GSA's AI guide for government provides practical frameworks. And the UK just announced a Meta-backed AI team to upgrade public services. The path forward exists for those willing to take it.

The risk-averse mindset, Shark argues, ultimately creates more risk than it prevents. Delaying deployment does not eliminate problems; it just ensures they happen without preparation or institutional learning. Progress comes from measured experimentation, continuous monitoring, and leadership accountability. "It's not about avoiding risk," Shark says. "It's managing risk. Take steps and measure. Take steps and measure. Even after you feel it's good, you still continue to monitor. If it goes wrong, it could cause so much harm in terms of public trust. But you have to start."