*The views and opinions expressed by Son-U Michael Paik are his own and do not necessarily represent those of any former or current employers.*
The adoption of AI by multinational corporations is introducing complex governance challenges for leaders. When a single algorithm functions across global operations, it can create inconsistencies in risk and compliance management. To ensure operational stability and consistent standards, one approach gaining traction is building a unified governance framework from within.
For a practitioner's take on the subject, we spoke with Son-U Michael Paik, a New York attorney and AI governance architect. With a perspective shaped by 25 years of designing risk systems for global institutions, Paik's experience extends from Wall Street to Korean industrial giants in the tire, shipbuilding, and steel sectors. Now General Counsel for BABL AI, an Advisory Board Member of the Institute of Health & Management, and Global Ambassador & Responsible AI Governor for South Korea, Paik sees how the trade work of a decade ago provides a critical toolkit for governance, risk, and compliance (GRC) in the modern era.
With a reactive culture facing an unpredictable technology, Paik's solution is a unified model. "As a lawyer, I advise a unified model. You cannot have lax standards in one jurisdiction and high standards in another. That inconsistency will come back to bite you. A unified plan is essential because people move, and the work is interconnected. It cannot be siloed in one office."
'Tomorrow's paper' test: A unified model also forces leaders to face public accountability, Paik says. For him, a simple gut-check often provides the best navigation tool. "Centering on human rights helps, because it provides a point of consensus. The simple test is, 'What do you not want to be known for in tomorrow's paper?' Clear lines and fundamental principles need to be discussed upfront, not decided on the fly. That is the governance point."
Making it matter: To be successful, a model must move beyond policy and into the mechanics of culture, Paik explains. "At the individual and team level, you must create KPIs and incentives. Give people a reason to watch for non-compliant activity and to recognize how it harms the business. It will cost you one way or the other."
Eventually, Paik says, risk awareness must become a measured part of everyone's job. "In many companies, risk management is broken. The culture often tolerates risk as long as it doesn't happen on an individual's watch. In that environment, risk isn't managed. It's simply pushed away or covered up." That weak foundation, he explains, makes adopting an unpredictable technology dangerous.
Enter the 'demon cat': Unlike traditional software, AI operates on probability, not fixed rules, and that has introduced a new level of unpredictability into the system. "This technology is not deterministic. It is probabilistic," Paik explains. "My favorite analogy is the demon cat from Adventure Time, a character with approximate knowledge of many things. You never know when that cat will pop up. Your risk process must account for it appearing in your workflows."
But the 'demon cat' of risk is not just a technical problem, Paik explains. It represents the organizational risk created when human wisdom is overlooked. "When people leave, their tacit knowledge walks out the door with them," he says. "I remember a pre-AI merger where the buyer fired an entire unit, only to realize no one else knew what that unit did. They had to go back and hire them all again."
The indispensable expert: Managing this risk requires a transparent bargain with indispensable employees, Paik continues. "The ideal is to upskill or reskill workers in exchange for 'downloading' their wisdom. This bargain enables you to utilize AI more effectively while retraining employees for other tasks. But you cannot fool them."
Navigating this new era means professionals must evolve, turning their deep knowledge into a powerful tool. "If you've been in an industry for 30 years, you have deep wisdom. AI can help you deploy that wisdom across adjacent industries, extending your career and making you far more marketable," Paik says.
Ultimately, the push for AI-driven efficiency will fail if it ignores the human expertise that fuels it, Paik explains. For him, realizing the promise of automation depends on a foundation of trust, cooperation, and respect for invaluable human knowledge. "Mid-level and senior people should be excited about AI. What they do and what they know is incredibly hard to capture. They are the ones who will determine whether AI succeeds in their organization. Without their cooperation and wisdom, it will not work."
In conclusion, Paik offers his own profession as a final case study. "In-house lawyers must evolve from individual experts into legal risk managers. They are capable, but they have to understand that AI is not a personal productivity tool. It is a system that must be owned and managed within the organization's governance framework."