Elsevier survey finds clinicians' AI usage and optimism grow despite concerns

The white coats have spoken — and they're cautiously hopeful. In the whirlwind of healthcare demands, clinician burnout, and system bottlenecks, Elsevier's Clinician of the Future 2025 survey reveals a profession eyeing artificial intelligence not as a magic wand, but as a potential life raft. Drawing insights from over 2,000 doctors and nurses across 109 countries, this annual report shows that while AI adoption is picking up steam, its true promise lies just beyond the bend — if institutions can keep up.
Highlights
AI usage nearly doubles: Nearly half (48%) of clinicians have now used an AI tool for work — almost twice as many as last year's 26%. China leads the pack with 71% reporting usage, while the US (36%) and UK (34%) lag behind.
ChatGPT over clinical tools: A striking 97% of AI-using clinicians rely on generalist tools like ChatGPT, while 76% use clinical-specific AI. The gap between consumer-grade and medical-grade AI is still wide.
Optimism vs. reality:
- 70% say AI can save them time.
- 58% believe it speeds up diagnostics.
- 54% trust it to boost diagnostic accuracy.
- But only 16% actually use AI for clinical decision-making today.
Trust hinges on transparency:
- 68% want AI tools to cite their sources.
- 65% favor AI trained on peer-reviewed content.
- 64% expect tools to tap the latest medical literature.
- In the UK and US, over 75% prioritize factual accuracy.
Access and training fall short: Only a third of clinicians feel they have adequate institutional access to AI tech. Training and governance are scarcer still: just 30% report receiving proper training, and only 29% say AI oversight is in place.
Burnout and patient loads are real: 28% of clinicians say they simply don't have the time to deliver quality care. Nearly 70% report seeing more patients now than two years ago, and almost half admit fatigue affects their performance.
Why does it matter?
Because the math doesn't lie: rising patient loads plus shrinking time equals burnout. Clinicians need help, and AI is right there — waiting in the wings. The problem? It's still more promise than practice.
Doctors and nurses believe AI could transform care delivery. But without trust in the tools and support from institutions, that belief stalls at the door. "As the healthcare industry continues to grapple with increased demands and limited resources," says Jan Herzhoff, President of Elsevier Health, "clinicians have identified many opportunities for AI... to help improve patient outcomes."
What's more, there's a growing recognition that how AI is built matters just as much as what it does. Transparency, quality data, and clear sourcing aren't bells and whistles — they're lifelines for building confidence in clinical use.
The context
This optimism — laced with realism — lands at a critical moment in healthcare. Post-pandemic strain, workforce shortages, and the relentless pace of care have created a system under pressure.
In many ways, AI is already here. But in too many settings, it's showing up uninvited, unvetted, and unsupported. Clinicians want more than tools — they want trust, training, and governance. Without those ingredients, even the smartest algorithm won't stand a chance.
The findings echo a global inflection point. Generative AI, once the plaything of tech circles, is elbowing into hospitals, clinics, and exam rooms. Whether it becomes a trusted partner or just another dashboard depends on how the next few years unfold — and whether healthcare leaders answer the call.
🛠️Featured tool
Easy-Peasy
An all-in-one AI tool offering the ability to build no-code AI Bots, create articles & social media posts, convert text into natural speech in 40+ languages, create and edit images, generate videos, and more.
👉 Click here to learn more
