As executives head into 2026, many are finally ditching AI adopted for the sake of optics and demanding actual return on investment. This leadership pivot toward practical workflow integration exposes a strange paradox on the ground: eager employees trying to save time with new tools are actually doubling their work because they lack guidance.
An AI Solutions Developer at St. Paul's School, Roney Nascimento sees the paradox play out daily. He bridges the gap between theoretical machine learning and daily user workflows, building production-level systems, including the AI Teachers platform, which serves more than 15,000 educators monthly. Nascimento views many of today’s adoption hurdles as user operations issues, amplified by the absence of clear corporate guardrails. "Half of the people want to use AI, and the other half don't like it at all. Even inside the same company, you see completely different attitudes toward the same technology," he says.
Double the trouble: As companies rush to deploy new tools, accountability in enterprise workflows is fracturing. In the absence of clear internal guidelines, some employees turn to unvetted applications on their own. That, in turn, reinforces skepticism among colleagues who are hesitant to use the technology at all. Unguided experimentation, according to Nascimento, frequently creates a dynamic where some of the heaviest adopters end up taking on more work, not less. "Half of the people who are using AI heavily right now are not reducing the workload," he says.
Nascimento blames the lack of structure. Teams without a shared approach to AI governance tend to split into distinct camps: some experimenting heavily, others watching from the sidelines, and many unsure what is actually allowed. Risk-averse departments are freezing outright, risking long-term competitiveness. Without top-down clarity, those holding back often see caution as the safer option.
Red tape reality: Nascimento views the hesitation as a pragmatic data sovereignty concern that requires a formal policy, not just a mandate to innovate. "There is no one in HR using AI. No one. They think that if they start using AI and something happens, they could face potential legal problems." That specific hesitation usually traces back to the C-suite: leaders who champion AI in the abstract remain reluctant to draw practical lanes between human and machine labor, so firm policies never get written.
Caught in the crossfire: For many boards and executives, the drive to unlock superagency in the workplace is currently competing with difficult questions about team structure and job impact. As Nascimento says, "The leadership is not trying to replace people with AI, but they are also not trying to tell the people how to use AI."
The phantom pink slip: He also acknowledges the anxiety that keeps leadership frozen, noting the unspoken calculus behind every AI rollout. "The worst-case scenario could fire half of the employees and say, 'We don't need you anymore because, see, now the AI can run 24/7. I just need to pay for one cloud service, and that's it.'"
Acknowledging the tension is a first step, but it must be followed by the documentation of an AI policy that defines the boundaries between human and machine work. At the same time, adoption hurdles are often tied to hard technical constraints. As boards and upper management work to evolve their decision-making, many leaders still conflate conversational chatbots with traditional machine learning.
Math over magic: While generative AI serves as an accessible interface, Nascimento points out that analyzing large datasets typically requires well-designed machine learning models. "If you have more than a million lines of code or data, the AI is not able to take more than a billion tokens. You need to know how to use the machine learning systems driving our decisions."
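The scale constraint Nascimento is gesturing at can be made concrete with a rough back-of-envelope check. The sketch below is illustrative only: the four-characters-per-token heuristic and the 128,000-token context window are assumptions for the example, not figures from the article.

```python
# Rough sketch: why million-row datasets don't fit in a chat model's prompt.
# Assumptions (not from the article): ~4 characters per token, and a
# 128,000-token context window as a stand-in for a typical chat model.

def estimate_tokens(num_rows: int, avg_chars_per_row: int,
                    chars_per_token: float = 4.0) -> int:
    """Crude token estimate for pasting a dataset into a prompt."""
    return int(num_rows * avg_chars_per_row / chars_per_token)

def fits_in_context(num_rows: int, avg_chars_per_row: int,
                    context_window: int = 128_000) -> bool:
    """True if the estimated prompt would fit in the assumed context window."""
    return estimate_tokens(num_rows, avg_chars_per_row) <= context_window

# A million rows of ~100 characters each is roughly 25 million tokens,
# hundreds of times beyond the assumed window.
print(estimate_tokens(1_000_000, 100))   # 25000000
print(fits_in_context(1_000_000, 100))   # False
```

The point of the arithmetic matches Nascimento's: at that scale the data has to be handled by a trained machine learning model or a database, not pasted into a conversational prompt.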
That technical boundary also shapes how he thinks about AI in education and other sectors. Because the tech still hallucinates and exhibits bias, HR and other cautious departments are actually right to wait for an internal expert to build a secure, finely tuned system. For organizations trying to move beyond experimentation, a more sustainable path involves appointing that expert, clarifying boundaries, and matching specific machine learning systems to specific roles.
Without leadership clarity, AI adoption will remain fragmented, inefficient, and potentially counterproductive—a patchwork of overwork and avoidance rather than a coherent strategy. Nascimento sees the path forward as straightforward, even if the organizational politics are not. Start with an internal expert, establish a policy, and give people structured room to experiment. "Companies need an AI policy first," he says. "Without clear guidance, people don't know how to use AI safely or effectively. That's the starting point you can't come back from."