
Enterprise AI

Exploring the patchwork trap of AI in manufacturing and the need for core-first strategy

AI Data Press - News Team | November 7, 2025

Francisco Almada Lobo, CEO of Critical Manufacturing, highlights the importance of aligning AI capabilities with practical business needs.

Credit: criticalmanufacturing.com (edited)

Key Points

  • The rush to implement AI in manufacturing without clear goals or data integrity can undermine operational resilience.
  • Francisco Almada Lobo, CEO of Critical Manufacturing, highlights the importance of aligning AI capabilities with practical business needs.
  • Human oversight and structured approval processes are crucial for building trust in AI-driven manufacturing systems.
  • The concept of a fully autonomous factory is considered a fantasy, with human roles evolving alongside intelligent systems.


If you just focus on the platform, forgetting the actual real-world use cases, then it becomes problematic, difficult to explain, and difficult to communicate. It's very important to explain the use cases, the purpose, and the risks and how to contain them.

Francisco Almada Lobo

CEO
Critical Manufacturing

Manufacturers are rushing to deploy AI agents and advanced automation without always asking the most critical question: why? The leap from cumbersome classical machine learning to the seemingly effortless power of generative AI has created an almost irresistible temptation to adopt new tools. But without a clear purpose, clean data, and disciplined guardrails, these projects risk becoming costly distractions that undermine operational resilience instead of improving it.

We spoke with Francisco Almada Lobo, the CEO of Critical Manufacturing, a leader whose expertise was forged over decades on the front lines of IT and production automation in the high-stakes semiconductor industry at firms like Infineon and Qimonda. Lobo argued that the industry's current enthusiasm is masking a significant danger.

The problem, he said, begins with a powerful illusion of ease. "The temptation is so high that people are just saying, ‘Okay, let's go deep into that without thinking it through’—without thinking not only of the purpose, or lack of purpose for some of these things, but also the consequences if things aren't done the right way." The antidote to this hype-driven implementation, Lobo insisted, is a return to fundamentals. Before any agent can deliver value, its data foundation must be solid, a lesson many are learning the hard way.

  • No shortcuts to readiness: "If you feed a genAI model incomplete, scattered, or inconsistent information, then you're most likely going to have hallucinations," Lobo warned. "The LLMs will create incorrect responses, and worse, they will create them with such confidence that it becomes difficult to distinguish what's right from wrong."
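
To make the data-readiness point concrete, here is a minimal sketch of the kind of pre-ingestion audit this warning implies. The record schema (lot_id, step, result, and so on) and the audit_records helper are illustrative assumptions, not part of any Critical Manufacturing product; the idea is simply that records missing required fields or contradicting one another get flagged before they ever reach a genAI model.

```python
# Illustrative pre-ingestion audit: field names are assumptions,
# not a real MES schema.
REQUIRED_FIELDS = {"lot_id", "step", "equipment_id", "timestamp", "result"}

def audit_records(records: list[dict]) -> dict:
    """Flag incomplete or mutually inconsistent records before indexing."""
    incomplete, inconsistent = [], []
    seen = {}  # (lot_id, step) -> result, to catch contradictory entries
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            incomplete.append((i, sorted(missing)))
            continue
        key = (rec["lot_id"], rec["step"])
        if key in seen and seen[key] != rec["result"]:
            inconsistent.append((i, key))  # same lot and step, conflicting results
        seen[key] = rec["result"]
    return {
        "incomplete": incomplete,
        "inconsistent": inconsistent,
        "ready": not incomplete and not inconsistent,
    }

records = [
    {"lot_id": "L100", "step": "etch", "equipment_id": "E7",
     "timestamp": "2025-11-07T10:00Z", "result": "pass"},
    {"lot_id": "L100", "step": "etch", "equipment_id": "E7",
     "timestamp": "2025-11-07T10:05Z", "result": "fail"},  # contradicts the first
    {"lot_id": "L101", "step": "etch"},  # missing required fields
]
report = audit_records(records)
if not report["ready"]:
    print("Do not index yet:", report)
```

None of this is exotic AI plumbing; it is exactly the unglamorous data hygiene Lobo says has no shortcut.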

This foundational work presents manufacturers with a stark strategic choice. They can either try to bolt AI onto legacy systems through a complex "patchwork" of data lakes and reconstruction, or they can pursue a "core-first" strategy by redesigning base applications like modern MES platforms with data and AI at their center.

  • The patchwork trap: "I cannot tell you which one is easier, but I can tell you that one is much more of a patchwork being done after the fact, while the other involves considering the data aspects and your strategy right from the start," he explained. "I think that's the right path because the patchwork can only take you so far. After the first patch comes a second, then a third."

Even with a solid architecture, AI initiatives are destined to fail if they remain abstract technological pursuits. Lobo stressed that value is only created when platform capabilities are tied directly to solving real-world business problems.

  • From platform to purpose: "If you just focus on the platform, forgetting the actual real-world use cases, then it becomes problematic, difficult to explain, and difficult to communicate," he said. "It's very important to explain the use cases, the purpose, and the risks and how to contain them."

Those risks escalate dramatically as AI evolves from providing information to taking action. The danger is no longer just a misleading chart, but a physical error on the factory floor with tangible consequences.

  • From wrong answers to wrong actions: "When you're talking about MCPs, it's not just about getting wrong information, but actually performing wrong actions," Lobo cautioned. "You have to think that if an LLM can really perform actions, can modify configurations, can run jobs, can take decisions on dispatching material—these things are incredibly risky."

  • Building in the guardrails: To manage this, Lobo argued for a disciplined, step-by-step approach built on human oversight. The goal isn't to eliminate human involvement but to build systems that earn trust over time. "We need to use approval flows, have humans in the loop, and implement guardrails and explainability layers," he stated. "It's critical because factories need to build confidence in the decisions being made by these systems. That's how they can, over time, be given more autonomy to deliver the best possible results and value."
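
As a rough illustration of what such an approval flow might look like, the sketch below gates agent-proposed actions by risk tier. The tiers, action names, and gate_action function are assumptions made for this example, not Critical Manufacturing's implementation or part of the MCP specification.

```python
from enum import Enum

class Risk(Enum):
    READ_ONLY = 0     # e.g., querying lot status: safe to auto-approve
    REVERSIBLE = 1    # e.g., rescheduling a job: execute, but log for audit
    IRREVERSIBLE = 2  # e.g., changing equipment config: human sign-off required

# Illustrative risk map; a real MES would derive this from its own action model.
ACTION_RISK = {
    "query_lot_status": Risk.READ_ONLY,
    "reschedule_job": Risk.REVERSIBLE,
    "modify_equipment_config": Risk.IRREVERSIBLE,
    "dispatch_material": Risk.IRREVERSIBLE,
}

def gate_action(action: str, params: dict, approver=None) -> bool:
    """Decide whether an agent-proposed action may run.

    Unknown actions default to the most restrictive tier, and every
    proposal is logged so humans can audit what the agent attempted.
    """
    risk = ACTION_RISK.get(action, Risk.IRREVERSIBLE)
    print(f"[audit] agent proposed {action}({params}), risk={risk.name}")
    if risk is Risk.IRREVERSIBLE:
        if approver is None or not approver(action, params):
            print(f"[blocked] {action} awaits human approval")
            return False
    return True

# Read-only passes through; dispatching material is held for sign-off.
gate_action("query_lot_status", {"lot_id": "L100"})
gate_action("dispatch_material", {"lot_id": "L100"})
gate_action("dispatch_material", {"lot_id": "L100"},
            approver=lambda a, p: True)  # stand-in for a real approval step
```

The explainability layer Lobo mentions would sit alongside this gate: the audit log is what lets operators see, and over time trust, what the agent is attempting before granting it more autonomy.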

This human-centric vision extends to his view on the future of manufacturing. The long-held dream of the fully autonomous "lights-out" factory is a fantasy, he argued. "I don't think that's possible. I don't think that's achievable. I don't think that's desirable. As more and more intelligent systems come, humans will start morphing, shifting into different areas, again making sure that the entire thing works as intended."