All articles

Data & Infrastructure

In multinational implementations, AI's 'Demon Cat' can't be tamed by a single set of rules

AI Data Press - News Team
|
November 7, 2025

General Counsel Son-U Paik discusses the complexity of integrating AI into existing risk management systems due to varied local regulations.

Credit: Outlever

Key Points

  • Multinational corporations face a strategic paradox as AI technology promises efficiency, but is challenged by global compliance and geopolitical tensions.
  • General Counsel Son-U Paik highlights the complexity of integrating AI into existing risk management systems due to varied local regulations.
  • Paik advocates for a unified corporate standard to manage AI risks effectively across different jurisdictions.


For enterprises, generative and agentic AI is relatively new, but it still needs to be integrated into their existing risk management systems. It could be one company like OpenAI that's behaving one way in Europe, another way in California, and another way entirely in Japan. It's varied and complicated for multinationals.

Son-U Paik

GC and AI Governance Architect
BABL AI | Counsel for Responsible AI (South Korea)


For multinational corporations, the era of AI presents a strategic paradox. The technology promises seamless, borderless efficiency, yet it is being deployed into a world fractured by competing rules, sovereign data complexities, and intensifying geopolitics. A single AI model, for example, touching HR in India, marketing in the United States, and product design in Germany creates a global governance headache, forcing companies to navigate a chaotic web of compliance obligations.

We spoke with General Counsel and AI Governance Architect Son-U Paik to discuss risk management and response. As GC at BABL AI, an organization dedicated to algorithmic audits, and a member of the Global Counsel for Responsible AI (South Korea), Paik's perspective is uniquely borderless. With a German mother and a Korean father, he was educated in the U.S., practicing law on Wall Street and in Silicon Valley before returning to Asia to build risk management systems for industrial conglomerates in tires, shipbuilding, and steel. Paik also contributed to drafting the first EU General-Purpose AI Code of Practice. This rare blend of experience gives him a ground-level view of how the public sector and enterprises must navigate this new era of risk.

  • The global governance headache: "For enterprises, generative and agentic AI is relatively new, but it still needs to be integrated into their existing risk management systems," Paik said. One core problem is that while AI workflows are global, regulations are fiercely local. This forces companies into a complex dance of compliance. "It could be one company like OpenAI that's behaving one way in Europe, another way in California, and another way entirely in Japan," he said. "It's varied and complicated for multinationals."

  • Geopolitics in play: This legal fragmentation is layered on top of a geopolitical landscape fraught with tension over the very components that power AI, from the chip war between the U.S. and China to export controls and tariffs that have become front-page news. For global companies, AI risk is now inextricably linked to trade and supply chain compliance.

Beyond the geopolitical chaos lies a fundamental technical challenge: modern AI is inherently unpredictable. Paik used a surprising analogy from the popular cartoon Adventure Time to explain this reality to executives. In the show, an animated cat called "Demon Cat," who claims to know almost everything, pops up unexpectedly and frequently to antagonize the protagonists.

  • The 'Demon Cat' in the workflow: "AI technology is not deterministic; it's stochastic or probabilistic," he explained. "My favorite analogy involves Demon Cat from Adventure Time that has 'approximate knowledge of many things.' You don't know when that cat will pop up. Similarly, risk management processes need to account for those Demon Cats popping up in your workflows."

  • A unified standard or bust: Faced with this legal and technical uncertainty, how should a corporation respond? For Paik, the answer requires confronting an uncomfortable truth about corporate behavior. "Risk management in a lot of companies is terrible," he said bluntly. "Many leaders say, 'As long as it doesn't happen on my watch, it's OK.' The risk is not managed; it's just pushed away or covered up." The antidote to this inertia, he argued, is a clear, unified corporate standard that holds the entire organization to its highest ethical and legal bar, regardless of location. "As a lawyer, if I were the general counsel of an organization, I would go for a unified model. You cannot have lax standards in one jurisdiction and high standards in another, because that will come back to bite you. It will always find its way back to you."

Paik warned that the greatest risk lies in devaluing the human wisdom that AI models are meant to augment. He recalled a pre-AI merger where a buyer fired an entire unit, only to discover that "nobody knew what they did," forcing the company to hire them all back. The same danger exists today if companies view AI simply as a tool for workforce reduction. To counter this, Paik championed a radically transparent approach.

In a recent training with young, skeptical employees at a client company, he addressed their fears head-on. "What I can promise you is that by going through this training, you'll be ready for the workforce in another capacity and be compensated more in a workplace of your choosing," he told them. "It's up to this company to hang on to you." This method builds trust by empowering employees, turning a moment of anxiety into an opportunity for growth.


*All opinions expressed in this piece are those of Son-U Paik, and do not necessarily reflect the views of his employer.