AI governance is drifting toward the same failure mode that hollowed out a generation of cybersecurity compliance. Organizations are bolting on automated report generators, watching green check marks populate dashboards, and calling the work done. The problem is that the people who bear the actual risk of an AI system are not the ones reading the reports.
Sandra Luksic is an Associate in AI Governance at Booz Allen Hamilton, the management and technology consultancy that advises U.S. federal and commercial clients on AI policy and responsible deployment. A former Stanford Responsible AI Fellow, they have built AI inventories, risk assessments, and governance tooling across both sectors, and they are vocal about a gap they see widening inside the discipline itself.
"Risks are borne by people, not organizations. If governance forgets that, it is just compliance theater," says Luksic. The goal of risk management should be to protect people," they say. "When we talk about an organization bearing risk, we might think of data leakage or a cyber breach. But the organization is not a human being. It cannot feel or be accountable or be responsible at the end of the day for the harms these risks cause."
A disconnection problem: For Luksic, the deeper issue is not that governance is being automated but that the thing being automated was already disconnected from people. "Most risk management platforms do not make that connection, and the risks are out of context. It's not so much that they're just being automated. They're disconnected from human beings in the first place."
Kicking the ball away: The automated piece, they say, tends to be the paperwork itself. "Generating reports, sending emails to stakeholders, asking for evidence. That piece is great to automate. But you're just kicking the ball of the hard work down the line, and you accrue a lot of governance debt." That debt, they warn, tracks the same pattern as prior compliance regimes where SOC 2 or ISO certifications eventually produced binders of paper that meant very little in practice.
Automation bias: The LLM is often the wrong tool for the job, they continue. "LLMs are next-word predictors. They don't think. They don't reason. We cannot let ourselves be fooled by the automation bias. As AI governance experts, we have to be aware that we're susceptible to the same bias we warn our customers about."
Without an industry-normed standard equivalent to SOC 2, the field risks converging on audit theater rather than substantive oversight. Many major institutions, they note, have attempted to own that standard. It's unclear if any have yet succeeded.
Evaluate use cases, not vendors: Luksic's working hypothesis is that the same handful of use cases keeps delivering value across industries: chatbots, retrieval-augmented generation (RAG) systems, generative AI assistants, and analytic models. That narrowness is an opportunity. "If you build an AI risk framework starting from the five most common use cases you're actually seeing in your organization, you get 70% of the way toward predicting the risks and the controls."
They evaluate every use case across four traits: end user (internal or external), data sensitivity, model scope, and model type. "With those four traits in combination, we have the literature, the research, and the case studies to predict what the risks and necessary mitigations are going to be."
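To make the four-trait triage concrete, here is a minimal sketch in Python. The trait names (end user, data sensitivity, model scope, model type) come from the interview; the specific enum values, the example risks, and the lookup rules are illustrative assumptions, not Luksic's or Booz Allen's actual framework.

```python
# Minimal sketch: map a use case's four traits to a starter list of risks
# and controls. Trait names are from the interview; everything else
# (enum values, rules, example risks) is an illustrative assumption.
from dataclasses import dataclass
from enum import Enum


class EndUser(Enum):
    INTERNAL = "internal"
    EXTERNAL = "external"


class DataSensitivity(Enum):
    PUBLIC = "public"
    CONFIDENTIAL = "confidential"
    REGULATED = "regulated"  # e.g. PII, PHI


class ModelScope(Enum):
    NARROW = "narrow"    # single, well-bounded task
    GENERAL = "general"  # open-ended generation or broad decision support


class ModelType(Enum):
    ANALYTIC = "analytic"      # classical ML / statistical models
    GENERATIVE = "generative"  # LLMs, RAG chatbots, etc.


@dataclass
class UseCase:
    name: str
    end_user: EndUser
    data_sensitivity: DataSensitivity
    model_scope: ModelScope
    model_type: ModelType


def predict_risks(uc: UseCase) -> list[str]:
    """Turn a trait combination into a first-pass list of risks and controls."""
    risks: list[str] = []
    if uc.end_user is EndUser.EXTERNAL:
        risks += ["harm to external users from erroneous output",
                  "disclosure and consent controls for end users"]
    if uc.data_sensitivity is DataSensitivity.REGULATED:
        risks += ["privacy breach (PII/PHI)",
                  "access controls and data-minimization review"]
    if uc.model_type is ModelType.GENERATIVE:
        risks += ["hallucination / ungrounded claims",
                  "prompt injection",
                  "human-in-the-loop review of outputs"]
    if uc.model_scope is ModelScope.GENERAL:
        risks += ["scope creep beyond validated behavior"]
    if uc.model_type is ModelType.ANALYTIC and uc.end_user is EndUser.EXTERNAL:
        risks += ["disparate-impact testing (ADA / Civil Rights Act exposure)"]
    return risks


if __name__ == "__main__":
    chatbot = UseCase(
        name="customer-support RAG chatbot",
        end_user=EndUser.EXTERNAL,
        data_sensitivity=DataSensitivity.CONFIDENTIAL,
        model_scope=ModelScope.GENERAL,
        model_type=ModelType.GENERATIVE,
    )
    for risk in predict_risks(chatbot):
        print("-", risk)
```

The point of the sketch is the shape of the approach, not the rules themselves: a small, fixed set of traits per use case is enough to pre-populate most of the risk register, leaving human judgment for the remainder.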
Community-led, not vendor-led: The audit question also deserves a rethink. "Why are we leaving evaluations of how trustworthy an AI system is to a third-party auditor? There's really interesting work being done around community-led AI evaluations and community-led audits. If it's just being done to create a secondary market where we all pay each other off to make checkboxes, that's deeply disappointing."
Names on the plate: Accountability has to land on individuals, Luksic argues, echoing a theme surfacing across enterprise AI workflows. "At some point, individuals need to put their name on things. Executives need to put their names on things. When it's your head on the plate, you are incentivized to work harder. There's no incentive to do that right now if it exists in a system that feels totally separate from you."
The expertise gap: In cybersecurity, thousands of experts can spot a hollow SOC 2 report on sight. AI governance is not there yet. "There's a meaningful lack of expertise in being able to distinguish bullshit from real meaningful control work."
Yet they push back on the idea that AI is somehow outpacing the rules. The Americans with Disabilities Act has not disappeared. Neither has the Civil Rights Act. The NIST AI Risk Management Framework, unwieldy as it is, exists. "There's an AI exceptionalism that ignores the very real existing regulations on the books. AI is software. We have tons of tools to regulate software, cybersecurity included. We have the expertise, the policy, and the incentives to make this work. We just need to use them."
The governance field, in other words, does not need a new vocabulary so much as the discipline to apply the one it already has to the humans standing on the other side of the report.