A new Wolters Kluwer Health survey reveals that medical staff are widely using unapproved AI tools to speed up their work, creating major patient safety and data privacy risks, as first reported by Healthcare Dive. The survey found that while over 40% of staff know their colleagues are using rogue AI, nearly 20% admit to using it themselves, with one in ten using it for direct patient care.
The need for speed: When asked why they go off-book, clinicians gave a consistent answer: the unsanctioned tools are simply faster and more capable than anything their employers provide, if an official alternative exists at all. That chase for efficiency is driving clinicians to adopt tools that haven't been vetted for safety or accuracy.
A policy paradox: The problem stems from a major governance issue. While administrators are three times more likely than providers to be involved in drafting AI rules, providers are paradoxically more aware of the policies that do exist. This suggests that rules may be focused on specific tools like AI scribes, while a cohesive, high-level institutional strategy has yet to materialize.
Maturity over machines: Scott Simeone, CIO at Tufts Medicine, said in a statement that scaling AI in healthcare "depends less on the technology and more on the maturity of organizational governance." He stressed the need for "enterprise-grade controls, transparency, and literacy" to ensure everyone understands how AI is being used and "where human judgment remains essential."
The widespread use of shadow AI shows a clear and urgent demand from clinicians for better tools. But without robust governance and safe, vetted alternatives, hospitals are letting unmonitored software become a central part of patient care, exposing themselves and their patients to enormous risk.
The push for AI in healthcare isn't all risk; a separate survey shows physicians are highly enthusiastic about using vetted GenAI tools they can trust. The dangers of shadow AI, however, are real: its rise has been linked to a doubling of healthcare data breaches.