Data & Infrastructure

The Model Is The Least Of Your Concern Once Enterprise AI Moves Past The Demo

AI Data Press - News Team
|
May 10, 2026

Michael Benedict, Client Partner at Slalom, explains why production AI advantage lives in the architecture around the model, and why companies that make employees' lives better through AI will outcompete those that simply bolt it on.

The model is kind of the least of your concern. Any system that you build should be interchangeable for different models. All the surrounding infrastructure is the piece that's not going to change.

Michael Benedict

Client Partner
Slalom

Enterprise teams are still debating which AI model to use, while the actual determinant of success sits in everything around the model: data pipelines, routing logic, reliability thresholds, and whether anyone redesigned the workflow before bolting on an LLM. The industry's obsession with benchmarks is a useful starting point, but once a company moves past experimentation, the model becomes the most interchangeable component in the stack.

Michael Benedict is a Client Partner at Slalom, focusing on systems integration and data modernization. Before Slalom, he spent nearly nine years at Deloitte, where he orchestrated a 12-system ecosystem at a top-three U.S. telecom provider to automate a critical customer journey with zero human touchpoints.

"The model is kind of the least of your concern. Any system that you build should be interchangeable for different models. All the surrounding infrastructure is the piece that's not going to change," says Benedict.

Benchmarks are a starting point, not a strategy

Benedict sees model selection as a narrowing exercise, not a differentiator. The real technical moat lives in what surrounds the model.

"It's hard to create a moat around one model that's sustainable in any way," Benedict says. He describes enterprises moving toward model routing systems where one model decides which model to send a request to, with a closed feedback loop refining the routing over time. "You're paying for the most cost-effective model that still gets you your output."
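The routing pattern Benedict describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the model names, costs, and acceptance threshold are all assumptions, and the feedback loop is a simple observed-success-rate tally.

```python
# Hypothetical sketch of a model-routing loop: pick the cheapest model whose
# observed success rate still clears a bar, and feed outcomes back in so the
# routing sharpens over time. Model names and costs are illustrative.
MODELS = [
    {"name": "small-model", "cost": 1},      # cheapest, tried first
    {"name": "mid-model", "cost": 5},
    {"name": "frontier-model", "cost": 25},  # most capable fallback
]

# Closed feedback loop: per-model tallies of accepted vs. total outputs.
outcomes = {m["name"]: {"ok": 0, "total": 0} for m in MODELS}

def route(min_success: float = 0.9) -> str:
    """Return the cheapest model whose observed success rate clears the bar."""
    for m in MODELS:
        stats = outcomes[m["name"]]
        observed = stats["ok"] / stats["total"] if stats["total"] else 1.0
        if observed >= min_success:
            return m["name"]
    return MODELS[-1]["name"]  # nothing qualifies: use the most capable model

def record_outcome(model_name: str, accepted: bool) -> None:
    """Feed the result back so future routing reflects real performance."""
    outcomes[model_name]["total"] += 1
    outcomes[model_name]["ok"] += int(accepted)
```

In practice the router itself may be a model, as Benedict notes; the tally above just stands in for whatever signal closes the loop.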

"If you call Claude through an API versus through one of their actual products, it behaves differently," Benedict says. "Benchmarks are the best proxy we have. But once you get into the real world, there's just too much nuance in data and systems." He compares it to choosing between Snowflake, Databricks, and Redshift. "People say, which one's the best? And it depends. Are you concerned about performance, cost, long-term maintainability, or stability?"

From POC to production is a reliability problem

Benedict frames the core enterprise challenge as a reliability problem that most teams underestimate. Anybody can put together a POC and get it to work in a cherry-picked situation. Getting to production-grade reliability is a different engineering discipline entirely.

"Nobody really appreciates the difference between 99% and 99.9%," Benedict says. "If you have a production workload, each order of magnitude is just as important." Previous enterprise systems operated at very high availability. AI systems are nowhere near that, and the gap widens as the business process becomes more critical.
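The cost of each lost nine is easy to quantify as annual downtime, and a quick back-of-envelope calculation makes the gap concrete.

```python
# Annual downtime implied by each availability level: 99% available means
# roughly 87.6 hours of downtime per year; 99.9% cuts that to about 8.8 hours.
HOURS_PER_YEAR = 24 * 365

for availability in (0.99, 0.999, 0.9999):
    downtime = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} available -> {downtime:5.1f} hours down/year")
```

Each added nine cuts downtime by a factor of ten, which is why Benedict treats each order of magnitude as equally hard-won.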

Benedict sees enterprises moving away from frontier models for most production work. "What businesses are going to is: what's the smallest model I can use, and what's the scaffolding I can put around it to make sure it performs this narrowly scoped task 100% of the time?" Using smaller models with strong guardrails for execution and reserving larger models for reasoning is where cost discipline meets reliability.
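One way to read that scaffolding concretely: validate the small model's output against a hard check, retry, and escalate to a larger model only when the check keeps failing. Everything below is a hypothetical sketch, with `call_model` standing in for whatever API a team actually uses and a JSON schema check standing in for the guardrail.

```python
import json

def run_with_guardrails(prompt: str, call_model, retries: int = 2):
    """Hypothetical scaffolding: run a narrow task on a small model, accept
    the output only if it parses as JSON with the required keys, and escalate
    to a larger model after repeated failures. `call_model(model, prompt)` is
    a stand-in for a real model API and returns a string."""
    required = {"status", "result"}
    attempts = ["small-model"] * (retries + 1) + ["frontier-model"]
    for model in attempts:
        raw = call_model(model, prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # guardrail: malformed output is rejected, not repaired
        if isinstance(parsed, dict) and required <= parsed.keys():
            return model, parsed
    raise RuntimeError("no model produced output that passed the guardrails")
```

The guardrail here is a schema check, but the same shape holds for unit tests, regex validators, or business rules: the small model handles the narrow task, and the expensive model is only a fallback.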

Bolting AI on makes everything worse

The sharpest operational point Benedict makes is about what happens when teams add AI to a process without rethinking the process itself.

Benedict describes AI-enabled SDLC work where an LLM generates code but the review process stays the same. "Somebody's got to review this now 10x, 20x volume of code. You're going to make the person's life miserable," he says. The fix: write tests, validate against those tests, and only involve the human when something fails. "You have to account for the fact that there's always going to be a human in the loop."
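The redesigned workflow Benedict sketches, generate, validate against tests, and surface only the failures, amounts to a simple gate in code. A minimal illustration, with `run_tests` standing in for whatever automated checks a team applies:

```python
def triage_for_review(changes, run_tests):
    """Route only failing changes to a human reviewer. `changes` maps a
    change id to its generated code; `run_tests` is a stand-in for the
    automated checks (unit tests, linters, policy rules) that gate the
    review queue. Both names are assumptions for illustration."""
    return [change_id for change_id, code in changes.items()
            if not run_tests(code)]
```

The point of the gate is volume: the human sees the handful of changes that failed, not the 10x or 20x stream of everything the model generated.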

"Even if the individual tasks get automated, it doesn't really mean the job gets automated," Benedict says. "Back in the day, people calculated numbers by hand. Then we had calculators. Then spreadsheets. Then reports." Coding follows the same pattern: you stop writing code by hand but start managing the intent of the agents working on it. Each wave pushes the role up one layer of abstraction.

Benedict returns to where enterprise AI consistently breaks down: the data layer. Agents accessing databases at scale and parallelizing sub-tasks amplify the same problems companies faced during cloud migrations. "Having that solid foundational structure, parallelizable, high-read, high-write infrastructure, is the key," he says.

The companies that build a durable AI advantage will not be the ones chasing whichever model tops a leaderboard that week. They will be the ones who build modular systems, design for reliability, and improve the experience of the people doing the work. "Anybody who figures this out and really makes their employees' lives more enjoyable through technology is going to have everybody wanting to work for them," Benedict says. "Versus the companies that don't. It's going to get a reputation and become a self-fulfilling prophecy."