Enterprise AI

From pilot to practice: scaling AI in healthcare with ethical oversight

AI Data Press - News Team | November 7, 2025

Fractional CAIO Brian M. Green discusses the need for distinct governance and ethics in healthcare AI implementations.

Credit: gorodenkoff (edited)

Key Points

  • AI adoption in healthcare faces challenges beyond technology, focusing on ethical integration and change management.

  • Fractional CAIO Brian M. Green discusses the need for distinct governance and ethics in AI, warning against conflating the two.

  • Green advocates for embedding ethical frameworks in AI products to ensure patient-centricity and safety.

The biggest part of AI right now is change management, particularly in healthcare. It can be disruptive if it's not done correctly.

Brian M. Green

Fractional Chief AI Officer, Founder
Health-Vision

It's clear by now that AI adoption in any regulated enterprise setting requires much more than a "sum of its parts" approach. Healthcare applications specifically promise revolutionary efficiency and life-saving insights, but often stall due to complex change management challenges. Often the greatest barrier to successful AI adoption isn't the technology stack, but the intricate process of communicating its safe and ethical integration into daily workflows.

Brian M. Green is a fractional Chief AI Officer and founder of the consultancy Health-Vision. With a background that includes serving as Vice President of Commercial Planning and Strategy at Health Union, Green has a deep understanding of the health communications landscape. He now advises healthcare organizations and startups on a challenge he believes is dangerously misunderstood, arguing that the industry's focus on technology has come at the expense of the two things that matter most: deeply human governance and ethics.

  • An industry-wide afterthought: "The biggest part of AI right now is change management, particularly in healthcare. It can be disruptive if it's not done correctly," says Green.

  • Governance ≠ ethics: For Green, the problem begins with a fundamental misunderstanding of terms. While the EU has moved aggressively on AI regulation, he sees the US market as being in its "early days," where crucial safety concepts are often dangerously conflated. "Ethical AI is kind of like almost an afterthought," he says. "People lump it in with governance, but you can ensure some ethical uses by having robust governance without ensuring the AI itself is ethical. It's a separate process."

It’s a principle Green is putting into practice across multiple high-profile implementations, as well as in an AI-powered application he is building for patients with rare diseases. Whether it’s deep enterprise integration or an application-layer startup, embedding an ethical framework directly into the product’s design allows Green to "measure patient-centricity" rather than settle for the table stakes of simply avoiding harm.

The need for this layered approach becomes undeniable with the rise of agentic AI, where swarms of intelligent agents perform complex, coordinated tasks. While the potential is immense, Green warns that the newest technologies significantly expand the threat matrix. He points to a real-world use case in remote patient monitoring where a team of 40 different agents could work in concert to review patient data, anonymize records, perform quality assurance, and surface life-saving recommendations to a care team in hours instead of weeks. This power, he argues, demands a level of scrutiny far beyond that of a typical tech product.
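To make the shape of such a system concrete, here is a minimal sketch of a staged agent pipeline of the kind Green describes for remote patient monitoring. The stage names, data fields, and thresholds are hypothetical illustrations, not Green's actual architecture; a production system would run many agents per stage with governance checks between them.

```python
# Hypothetical staged pipeline: review -> anonymize -> QA -> recommend.
# All field names and thresholds are invented for illustration.

def review(record):
    # Flag records whose vitals fall outside a (hypothetical) safe range.
    record["flagged"] = record["heart_rate"] > 120
    return record

def anonymize(record):
    # Strip direct identifiers before downstream stages see the data.
    return {k: v for k, v in record.items() if k not in ("name", "ssn")}

def quality_check(record):
    # Reject records missing fields the care team depends on.
    assert "heart_rate" in record, "missing vitals"
    return record

def recommend(record):
    # Surface only flagged, anonymized, validated records to clinicians.
    return "escalate to care team" if record["flagged"] else "continue monitoring"

PIPELINE = [review, anonymize, quality_check, recommend]

def run(record):
    for stage in PIPELINE:
        record = stage(record)
    return record

print(run({"name": "Jane", "ssn": "000-00-0000", "heart_rate": 135}))
# → escalate to care team
```

The point of the staging is that each agent's output passes through a privacy and quality gate before the next agent, or a human, ever sees it.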

  • Beyond pilots: "My focus is on helping businesses develop something beyond a pilot, and helping them get scalable AI up and running that can be integrated within the workforce," Green says. "What we need to do from a governance perspective is very different. I'm not selling shoes."

To make his point, he notes that even if he were selling shoes, the risks wouldn't be zero, citing the example of a simple chatbot inadvertently leaking private data in subtle ways because of the interconnected data systems powering it. In healthcare, the stakes are infinitely higher. His solution is an organizational playbook modeled on a system designed for the highest of stakes: he advises companies to create ongoing multi-stakeholder review committees. It's a model he knows intimately, having served on an Institutional Review Board himself.

Green's insistence on precision extends to the very language used to describe AI. He argues that the popular term for AI-generated falsehoods is not only inaccurate but philosophically harmful.

  • The language of risk: "I hate the term 'hallucination,'" he says. "I prefer to say 'AI error' or a 'spectrum of errors' because it's more accurate." This isn't just a semantic debate. For Green, these errors represent concrete security vulnerabilities. An attacker can mimic an error to inject a malicious command that lies dormant within a system, waiting to be activated later. "It's like the Manchurian Candidate," he warns, "except it's in the code."

  • Risk-adjusted ROI: Getting buy-in for these robust governance structures requires speaking the language of the C-suite. Green recalls the shocking tendency of some companies to simply budget for future ransomware payments rather than investing in prevention. To counter this short-term thinking, he frames the conversation around a metric the CFO can understand. "I focus the ROI discussion around risk-adjusted ROI," he explains. "I say, 'Here are all the potential risks to your business. We can't quantify all of them, but a lot of them we can.'"
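Green's framing can be illustrated with a back-of-the-envelope calculation: subtract the expected cost of the risks you *can* quantify (probability times impact) from the projected return. The figures and the `risk_adjusted_roi` helper below are invented for the example; the article does not specify Green's actual methodology.

```python
# Hypothetical illustration of a risk-adjusted ROI calculation.
# All figures are invented; this is not Green's actual model.

def risk_adjusted_roi(gain, cost, risks):
    """ROI after subtracting the expected cost of quantifiable risks.

    risks: list of (probability, impact_in_dollars) pairs.
    """
    expected_loss = sum(p * impact for p, impact in risks)
    return (gain - expected_loss - cost) / cost

# Example: a $500k AI deployment projected to return $900k, with two
# quantifiable risks: a 5% chance of a $2M compliance fine and a
# 10% chance of $300k in remediation costs.
roi = risk_adjusted_roi(
    gain=900_000,
    cost=500_000,
    risks=[(0.05, 2_000_000), (0.10, 300_000)],
)
print(f"Risk-adjusted ROI: {roi:.0%}")
# → Risk-adjusted ROI: 54%
```

Presented this way, a governance investment that shrinks the probabilities in the risk list shows up directly as improved ROI, which is the CFO-legible argument Green describes.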

For companies navigating the complex regulatory landscape, he stresses that perfection isn't the initial goal. "Regulators don't expect you to go from 0 to 100," he advises. "They expect you to do due diligence. Everything you do has to be documented." Ultimately, his strategy is about empowerment. "My goal is to arm an internal champion, a CISO or a CTO, with the data and information they need so that their CFO says yes." As this technology becomes a default layer in every system, a natural question arises: Will there be a day when this level of intense ethical oversight is no longer needed? For Green, the answer is a definitive no.