Survey: Patients don’t trust health systems to handle AI with care

Patients don't trust health systems to handle AI with care. That's the blunt takeaway from new research published in JAMA Network Open. Most patients fear AI misuse, a concern heightened by regulatory uncertainty and shifting policies. With AI rapidly integrating into healthcare, trust — or the lack of it — could determine how well these technologies serve the public.
"Low trust in healthcare systems to use AI indicates a need for improved communication and investments in organizational trustworthiness," researchers wrote. Their findings reveal a significant gap between the promises of AI and patient confidence in how those promises will be kept.
How does it work?
The study, led by researchers at the University of Minnesota School of Public Health, surveyed 2,039 people via the National Opinion Research Center's AmeriSpeak Panel between June and July 2023. The goal? To gauge patient trust in AI use by health systems.
Key findings:
- On a scale of 0 to 12 (with 12 being the highest trust), the average trust score was just 5.38.
- 65.8% of respondents doubted that their health system would use AI responsibly.
- 57.7% feared AI tools might cause them harm.
The study also uncovered disparities in trust. Patients who had faced discrimination in healthcare were even less likely to trust AI. Women, too, were more skeptical than men. However, neither health literacy nor AI knowledge seemed to influence trust levels — suggesting that this skepticism isn't just about a lack of understanding but a deeper issue with trust in institutions.
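To make the 0–12 scale concrete, here is a minimal sketch of how a composite trust score like this is typically computed. The study's actual survey items and scoring are not reproduced here; the sketch assumes a hypothetical four-item instrument where each item is answered 0–3 and the items are summed, giving a 0–12 composite that is then averaged across respondents.

```python
# Illustrative only: assumes a hypothetical 4-item trust scale (each item 0-3),
# not the exact instrument used in the JAMA Network Open study.
from statistics import mean

# Each inner list is one respondent's answers to four trust items,
# e.g. "My health system will use AI responsibly."
responses = [
    [1, 2, 1, 2],
    [0, 1, 1, 0],
    [3, 3, 2, 3],
]

def composite_trust(items: list[int]) -> int:
    """Sum a respondent's item scores into a 0-12 composite trust score."""
    return sum(items)

scores = [composite_trust(r) for r in responses]
# The study reported a mean of 5.38 across 2,039 respondents.
print(f"Mean composite trust score: {mean(scores):.2f}")
```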
Why does it matter?
AI in healthcare isn't some distant concept — it's already reshaping how diagnoses are made, treatments are recommended, and administrative tasks are handled. Yet, if patients don't trust AI, they may be less willing to engage with these tools or follow AI-assisted medical advice.
That mistrust isn't unfounded. AI bias, misdiagnoses, and ethical concerns have all made headlines. Healthcare stakeholders, recognizing these fears, are stepping up efforts to self-regulate through initiatives like the Trustworthy & Responsible AI Network (TRAIN). But self-regulation alone may not be enough.
Public sentiment echoes this concern. A 2024 Athenahealth/Dynata poll found that while 40% of respondents were open to AI assisting doctors in diagnostics, only 17% supported AI taking over patient-provider interactions. And more than half of those surveyed (57%) believed government regulations should guide AI use in healthcare.
The context
AI regulation is at a crossroads. President Donald Trump's decision to rescind an executive order on AI safety, coupled with AI staff reductions at the FDA, has left a regulatory vacuum. Without clear federal guidelines, healthcare institutions are largely left to their own devices.
Meanwhile, AI adoption continues at full speed. Hospitals and clinics are rolling out AI-powered tools, from predictive analytics to automated diagnostics. But as the technology outpaces regulation, trust issues persist.
Patients want reassurance. They want to know that AI won't replace their doctors but will instead act as a tool for better care. They want guardrails — rules that ensure AI is being used ethically and safely. And above all, they want to believe that their health system has their best interests at heart.
Trust in AI isn't just about the technology — it's about the people and institutions behind it. And right now, that trust is in short supply.
🛠️Featured tool
Easy-Peasy
An all-in-one AI tool offering the ability to build no-code AI Bots, create articles & social media posts, convert text into natural speech in 40+ languages, create and edit images, generate videos, and more.
