All articles

Data & Infrastructure

How One CTO Built His Own Framework To Stop AI Code Tools From Producing Unmanageable Systems

AI Data Press - News Team
|
April 5, 2026

Julien Groselle, a CTO and AI architect, explains how he moved from AI-generated code chaos to production-ready output by enforcing extreme programming discipline, and why the real engineering skill is now systems thinking, not writing code.

Credit: AI Data Press

Key Points

  • AI code generation tools produce working output but introduce hidden technical debt through code bloat, lack of reuse, and unpredictable architecture, requiring strict frameworks to make them production-ready.

  • Julien Groselle, CTO and AI architect, says the value of engineers is shifting from writing code to understanding architecture, integration, and systems design, because AI tools cannot hold the full picture of how a system fits together at scale.

  • European data sovereignty requirements prevent some companies from using leading American AI tools entirely, forcing organizations toward open-source internal models and creating a fragmented adoption landscape.

Today, coding itself is no longer the hard part. The real skill is understanding how to build the system around it. AI can write the code, but it cannot hold the full architecture in its head.

Julien Groselle

CTO
AI Architect


AI coding tools can generate a working function in two seconds that would have taken a developer seven hours. That productivity gain is real. What's also real is the hundred-line, thousand-line output that never reuses existing functions, hallucinates logic, and produces a codebase that no one can maintain at scale. The gap between AI-generated code that compiles and AI-generated code that belongs in a production system is where most teams are getting stuck.

Julien Groselle is a CTO and AI architect based in Switzerland who previously built DigNow.io, an AI-powered due diligence platform for crypto and Web3. With a career spanning infrastructure engineering at organizations including SICPA, the Council of Europe, and France Télécom, Groselle now uses AI-assisted development tools for roughly 80% of his daily work. He built his product almost entirely with Claude Code, and the early experience nearly derailed the project. "Today, coding itself is no longer the hard part," Groselle says. "The real skill is understanding how to build the system around it. AI can write the code, but it cannot hold the full architecture in its head."

  • From chaos to control: When Groselle first tried building his product with Claude Code from scratch, the results were unusable. "It was hallucinating and creating too much code. For simple pages or simple functions, it was generating hundreds, even a thousand lines. It never reused existing functions." He tried an existing project management tool for AI-assisted development but found it buggy, so he built his own framework called CCXP, based on extreme programming principles. "I forced the model to reuse code and follow guidelines. Now it works, and I can build production-ready tools. It changed my life."

  • Everyone is a builder now: The speed advantage cuts both ways. Groselle sees AI compressing the build-fail-learn cycle from months to hours. "With AI, you can fail in half a day. You can build something, just fail, and do it again the second part of the day." That acceleration is powerful for experienced engineers who understand systems architecture, but it also means anyone with access to an AI coding tool can ship something that touches production data. When builders lack the systems knowledge to implement proper access controls or credential management, the database becomes the blast radius. In practice, this can mean AI-generated queries that bypass existing access patterns, duplicate business logic, or operate on production data without the constraints engineers would normally enforce.

As AI-generated code moves closer to production, the challenge shifts to ensuring it aligns with existing architecture, respects data boundaries, and integrates safely into live systems. Teams adopting AI-assisted development need clear validation layers to keep hidden risks away from critical infrastructure, especially at the database level. Strict access controls, such as Row Level Security and scoped credentials, ensure agents interact only with the data they are explicitly allowed to access, and review becomes the primary control layer between generated output and production impact.
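The scoped-credential idea can be shown with a minimal sketch using SQLite's authorizer hook, which vets every table a statement touches before it runs. The table names and allow-list are invented for illustration; a production deployment would use the database's own mechanisms, such as Postgres Row Level Security policies.

```python
import sqlite3

# Tables this agent credential is explicitly allowed to read.
ALLOWED_TABLES = {"public_docs"}

def scoped_authorizer(action, arg1, arg2, db_name, trigger):
    """Deny any read outside the allow-list and all writes."""
    if action == sqlite3.SQLITE_READ and arg1 not in ALLOWED_TABLES:
        return sqlite3.SQLITE_DENY
    if action in (sqlite3.SQLITE_INSERT, sqlite3.SQLITE_UPDATE,
                  sqlite3.SQLITE_DELETE):
        return sqlite3.SQLITE_DENY  # read-only credential
    return sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE public_docs (title TEXT)")
conn.execute("CREATE TABLE customer_pii (email TEXT)")
conn.execute("INSERT INTO public_docs VALUES ('handbook')")
conn.set_authorizer(scoped_authorizer)

# An in-scope query succeeds.
rows = conn.execute("SELECT title FROM public_docs").fetchall()

# An out-of-scope query is blocked before it touches the data.
try:
    conn.execute("SELECT email FROM customer_pii")
    blocked = False
except sqlite3.Error:
    blocked = True
```

The point is that the constraint lives in the data layer, not in the agent's prompt: even a hallucinated query cannot reach tables outside its scope.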

  • Human-in-the-loop review: Groselle treats code review as the critical human checkpoint for AI-assisted development. Every developer reviews their own output first. Then he runs the code through a second model for automated review. Then a human reviews the review. "Code review is the human in the loop for AI-assisted coding now," he says. His biggest concern at this stage is security. "We need a security audit from outside, red team, blue team. Someone with a stamp that can say these guys have done a good job and there are no big security problems."

  • Architecture as the new core skill: For younger engineers who learned to code primarily through AI, Groselle sees a real risk. The skill that matters now is not writing functions but understanding how systems fit together. "AI for coding is like a junior developer. You can ask it to do things and it does them well. But you cannot ask it to understand how everything will work at the end." Load balancing, external integrations, production constraints: these require big-picture thinking that AI cannot assemble on its own. "Young people must learn and understand architecture and how all the computing pieces work together."

The adoption challenge goes beyond technical discipline. Groselle recommends that conservative organizations start with AI in the review layer, not the generation layer. "Your developers continue to work as always. You do not change anything. Then you add small pieces, like having AI do all the reviews. Then you evaluate the output." The incremental approach is slower but sustainable.
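The layered review Groselle describes, author self-review, then an automated model pass, then a human final call, can be sketched as a gate chain. This is an illustrative outline, not his actual tooling: the model stage is stubbed with a keyword check where a real pipeline would call an LLM API.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    stage: str
    approved: bool
    notes: list[str] = field(default_factory=list)

def self_review(diff: str) -> ReviewResult:
    """Stage 1: the author checks their own output first."""
    notes = []
    if "TODO" in diff:
        notes.append("unresolved TODO left in diff")
    return ReviewResult("self", not notes, notes)

def model_review(diff: str) -> ReviewResult:
    """Stage 2: a second model reviews the code (stubbed here;
    a real pipeline would call an LLM API at this point)."""
    notes = []
    if "password" in diff.lower():
        notes.append("possible hardcoded credential")
    return ReviewResult("model", not notes, notes)

def human_review(results: list[ReviewResult]) -> bool:
    """Stage 3: a human reviews the reviews and makes the final call."""
    return all(r.approved for r in results)

def review_pipeline(diff: str) -> bool:
    results = [self_review(diff), model_review(diff)]
    return human_review(results)

clean_ok = review_pipeline("def add(a, b):\n    return a + b")
risky_ok = review_pipeline("PASSWORD = 'hunter2'  # TODO remove")
```

Structured this way, the AI review slots in without changing how developers work, which is exactly the incremental adoption path described above.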

  • Sovereignty as a hard constraint: For Swiss and European companies, adoption hits a wall that American organizations rarely encounter. "I have many companies here that cannot use American AI tools. They cannot. The files and prompts cannot cross the border," Groselle says. Swiss regulatory frameworks and data residency requirements mean that even the most capable tools are simply unavailable. His recommendation: internal open-source models. "For such companies, it could be interesting to have internal open-source coding agents. Small ones, big ones, it depends on the budget."

  • Right-sizing the model: Groselle also pushes back on the default toward the largest available model. "Using really big models to do everything is a bad idea. It could be interesting to evaluate small or medium models to do one task but do it well." The economics reinforce the point. Token-based pricing makes costs unpredictable at company scale, while SaaS-model pricing at least allows forecasting.
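The forecasting point can be made concrete with back-of-the-envelope arithmetic. All prices and usage figures below are invented for illustration; real vendor pricing varies.

```python
# Hypothetical numbers for illustration only.
TOKEN_PRICE_PER_M = 15.00      # dollars per million tokens
SEAT_PRICE_PER_MONTH = 30.00   # flat per-developer SaaS plan

def token_cost(devs, tokens_per_dev_per_day, workdays=21):
    """Monthly cost under usage-based pricing: scales with activity."""
    total_tokens = devs * tokens_per_dev_per_day * workdays
    return total_tokens / 1_000_000 * TOKEN_PRICE_PER_M

def seat_cost(devs):
    """Monthly cost under flat pricing: scales only with headcount."""
    return devs * SEAT_PRICE_PER_MONTH

# A quiet month vs a heavy refactoring month for a 50-person team.
quiet = token_cost(50, 200_000)     # -> 3150.0 dollars
heavy = token_cost(50, 1_500_000)   # -> 23625.0 dollars
flat = seat_cost(50)                # -> 1500.0 dollars, every month
```

Under these assumed figures, the usage-based bill swings 7.5x between a quiet and a busy month while the seat-based bill stays constant, which is the unpredictability Groselle is pointing at.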

Groselle's guidance for teams is simple: "Never trust AI blindly. Always review. It is text generated by statistics, not the truth. If you decide to trust it, you have to know what you're trusting." Every output must be reviewed, validated, and understood in the context of the system it touches, especially when it reaches production data. AI-assisted development brings real benefits, but it cannot guarantee correctness, security, or architectural integrity. From his perspective, the teams that succeed will not be the ones that move fastest, but the ones that build the discipline to control what they ship.