American Medical Association calls for explainable AI

The American Medical Association (AMA) just threw down the gauntlet on AI in medicine. At its latest House of Delegates meeting, the group passed a bold new policy that zeroes in on what many of the AI tools showing up in clinics have been missing: transparency. As AI tools race into exam rooms and decision trees, the AMA is drawing a line in the sand, saying, in effect, "If AI's gonna help treat patients, we need to know how it works."

"Physicians must feel more confident that the clinical tools they use are safe, based on sound science," said AMA Board Member Dr. Alexander Ding. Because in medicine, murky answers just don't cut it.

How does it work?

The new policy doesn't just ask for transparency — it demands explainability. That means any AI tool used in a medical setting must be able to tell a human (yes, a qualified one) how it came to its conclusions.

In practice, that looks like:

  • Clinical AI tools providing clear, understandable explanations for their output.
  • Decisions that doctors can interpret, vet, and talk through with patients — especially when lives are on the line.
  • Evaluations carried out not by the developers themselves but by neutral third parties like regulatory agencies or medical societies.

It's not just a nice-to-have; it's the linchpin for safe, shared decision-making.
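
To make the idea concrete, here is a minimal, purely illustrative sketch (not part of the AMA policy, and not any specific vendor's tool) of what an "explainable" output could look like for a toy risk model: alongside the predicted risk, the model reports how much each patient feature pushed the score up or down. The feature names, weights, and data below are all hypothetical.

```python
# Illustrative sketch only: a toy "explainable" clinical risk model whose score
# can be traced back to individual patient features. Features, weights, and
# data are hypothetical, for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical inputs: age (years), systolic blood pressure (mmHg), HbA1c (%)
feature_names = ["age", "systolic_bp", "hba1c"]
X = np.column_stack([
    rng.normal(60, 10, 500),    # age
    rng.normal(130, 15, 500),   # systolic blood pressure
    rng.normal(6.5, 1.0, 500),  # HbA1c
])

# Synthetic outcome loosely driven by the features (demonstration only)
logits = 0.04 * (X[:, 0] - 60) + 0.03 * (X[:, 1] - 130) + 0.8 * (X[:, 2] - 6.5)
y = (logits + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = X.mean(axis=0)  # an "average patient" to compare against

def explain(patient: np.ndarray):
    """Return the predicted risk plus each feature's additive contribution
    to the log-odds relative to the average patient, so a clinician can see
    *why* the score is high or low."""
    risk = model.predict_proba([patient])[0, 1]
    contributions = model.coef_[0] * (patient - baseline)
    return risk, dict(zip(feature_names, contributions.round(3)))

risk, reasons = explain(np.array([72.0, 155.0, 8.2]))
print(f"Predicted risk: {risk:.2f}")
for name, value in reasons.items():
    print(f"  {name}: {value:+.3f} to the log-odds vs. an average patient")
```

A linear model is used here precisely because its score decomposes into per-feature contributions a human can read; the AMA's point is that whatever the underlying technique, the tool should be able to surface this kind of reviewable reasoning to a qualified clinician.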

Why does it matter?

Because patients aren't algorithms. And trust — between doctor and patient, and doctor and tool — is everything.

"The need for explainable AI tools in medicine is clear, as these decisions can have life or death consequences," Dr. Ding pointed out. When a machine recommends a course of treatment, a physician shouldn't be left shrugging or blindly trusting a black box.

When AI outputs can't be explained:

  • Clinicians lose their ability to apply training and judgment.
  • There's a risk of following faulty advice without knowing it.
  • Patients miss out on informed consent.

The AMA's stance? If it can't be explained, it shouldn't be used to guide care.

The context

AI isn't new to healthcare, but it's accelerating fast — and not always with a clear roadmap. Augmented intelligence and machine learning tools are already shaping diagnoses, treatments, and resource decisions. But as adoption surges, guardrails are lagging.

The AMA's policy builds on growing calls for oversight. Its Council on Science and Public Health raised red flags around opaque AI tools, especially when proprietary algorithms are kept hush-hush for business reasons. The report emphasized that:

  • Intellectual property claims shouldn't trump patient rights.
  • Safety and efficacy still need to be backed by old-school rigor, like randomized clinical trials.
  • Clear definitions and shared standards are overdue — and vital.

By calling for a glossary of key terms and a collaborative approach to regulation, the AMA is signaling it won't sit by while AI transforms care without accountability. It's staking a claim: real intelligence — whether artificial or human — starts with clarity.
