Research: 57% of healthcare workers have used unauthorized AI tools at work

There's a quiet revolution happening behind the scenes in American hospitals and clinics. This isn't a story about fancy new surgical robots or machine learning beating doctors at their own game. It's subtler. It's clinicians, nurses, and admin staff quietly using artificial intelligence tools that haven't been signed off by their health systems. The kind of tech that leaders don't officially endorse. The kind they call shadow AI. And according to one study, it's bigger than you might think.
About four in ten healthcare professionals have spotted these tools in their workplace, and nearly one in five have used them themselves. Some have even applied them in patient care. That's not just a curiosity; it's a red flag.
This is the story of why unsanctioned AI is spreading, why it matters, and how the system's failure to keep up might be exposing patients to risk.
How does it work?
Shadow AI isn't a single smart machine hidden in a closet. It's everyday software and online tools that staff bring into their daily routines without formal approval: chatbots, automated medical scribes, or any AI that helps with tasks like drafting notes, analyzing data, or handling patient information.
Here's how it plays out:
- A busy physician wants to summarize a patient's chart fast. The approved system is slow or non-existent. They turn to a public AI chatbot that spits out a tidy summary.
- An administrator is buried in paperwork. They find an AI tool that automates scheduling or fills in forms more quickly.
- A nurse uses an app to generate insights on treatment options without asking permission because official AI tools aren't available or don't do the job well enough.
It's not chaos. It's efficiency mixed with desperation. More than 50 percent of healthcare workers reported frequent use of some AI tools in their work, and nearly 90 percent said they believe AI will improve healthcare in the next five years. But much of this use is happening in the shadows, where no one is checking safety or compliance.
Why does it matter?
On paper, this might sound like doctors and staff are embracing innovation. But there's a downside — and it's a big one.
First, there's patient safety. Many of these AI tools aren't vetted for accuracy or suitability in clinical care. We already know from research that people tend to trust AI-generated medical advice even when it's wrong, which can lead to misdiagnoses or inappropriate treatments if taken at face value.
Then there's privacy. Health data is some of the most sensitive on the planet. Unauthorized tools may not follow strict healthcare privacy laws or secure data properly, leaving the door open for breaches. Studies show more than 80 percent of healthcare data breaches stem from unauthorized access violations.
And then there's governance. Hospitals have policies, and leaders are trying to set rules, but they're often behind the curve. Administrators are three times more likely to be involved in policy development than clinicians, yet many clinicians aren't even aware of the policies that exist. That disconnect creates blind spots: if staff don't know what's allowed, they're just guessing at what's safe.
One expert told Healthcare Dive the big questions aren't just whether these tools are useful but "what is their safety, what is their efficacy and what are the risks associated with that?"
The context
We're at a tipping point. Healthcare systems are overloaded, and clinicians spend more time on paperwork than ever. The promise of AI feels like a lifesaver: it can sift through reams of data, spot patterns, and automate dull tasks. That's the upside that makes shadow AI so tempting.
But technology is evolving faster than policies. Many health systems have yet to roll out vetted AI platforms. And even when they do, the tools might be clunky, slow, or poorly integrated. That gap between what staff need and what institutions provide creates room for workarounds.
Studies show unauthorized AI use isn't unique to medicine. Other industries face it too. But in healthcare, the stakes are uniquely high. Errors affect human lives. And breaching patient privacy can destroy trust faster than almost anything else.
Add in mounting cybersecurity threats, and it becomes clear that shadow AI isn't a tech fad; it's a governance challenge that has landed squarely on the doorstep of healthcare leaders.
Most professionals believe AI will be a force for good. But making it safe requires closing the policy gap, training teams on the risks, and offering tools clinicians can trust. Until then, the shadow won't disappear. It will just keep growing.

