Korean researchers unveil privacy-preserving AI for medical imaging

Artificial intelligence is changing how we look at the human body, one scan at a time. But there's always been a catch — privacy. How can doctors use powerful algorithms to diagnose patients without handing sensitive medical data to machines that could be hacked? A team from Korea's Asan Medical Center seems to have cracked that code.

Professors Sang-Wook Lee and Jungyo Suh have created an AI that can read kidney CT scans while they're still encrypted. In short, the model never sees what it's looking at — yet it still gets the diagnosis right.

How does it work?

The secret sauce is homomorphic encryption — a bit of cryptographic wizardry that lets computers perform calculations on locked data. Picture a robot arm operating inside a sealed safe, manipulating objects without ever opening the door.

  • The team used 12,446 kidney CT images — 5,077 normal, 3,709 cysts, and 2,283 tumors — to train a deep learning model.
  • Then they applied the Cheon-Kim-Kim-Song (CKKS) scheme, a Korean-developed method that supports approximate arithmetic on real numbers, essential for medical imaging.
  • Partnering with CryptoLab, the original creators of CKKS, they fine-tuned the system to handle complex real-number math even while encrypted.
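The CKKS scheme used in the study is too involved for a short snippet, but the core trick behind all homomorphic encryption, doing arithmetic directly on ciphertexts, can be sketched with the much simpler Paillier scheme, which is additively homomorphic. This is a toy with deliberately tiny keys, purely illustrative and not secure, and it is not the team's actual system:

```python
import random
from math import gcd

# Toy Paillier cryptosystem: multiplying two ciphertexts yields an
# encryption of the SUM of the plaintexts, without ever decrypting.
p, q = 10007, 10009          # tiny primes, illustration only
n = p * q
n2 = n * n
g = n + 1                    # standard simple choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)         # with g = n+1, mu is just lam^-1 mod n

def encrypt(m):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext from ciphertext c."""
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

c1, c2 = encrypt(42), encrypt(58)
c_sum = (c1 * c2) % n2       # homomorphic addition on ciphertexts
print(decrypt(c_sum))        # 100
```

CKKS goes much further than this sketch: it packs vectors of real numbers into one ciphertext and supports both addition and multiplication, which is what lets an entire neural network run on encrypted CT scans.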

The result? A model with AUC values between 0.97 and 0.99, near-perfect classification performance (an AUC of 1.0 is a perfect score). Even though encryption ballooned the images to roughly 500 times their original size, high-performance GPUs kept analysis times to a couple of minutes.
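AUC has a simple interpretation: the probability that the model ranks a randomly chosen diseased scan above a randomly chosen healthy one. It can be computed directly from that pairwise ranking. A minimal sketch, using made-up labels and scores rather than the study's data:

```python
def auc(labels, scores):
    """AUC = fraction of (positive, negative) pairs where the
    positive case gets the higher score; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0]               # 1 = tumor, 0 = normal (toy data)
scores = [0.9, 0.8, 0.4, 0.5, 0.1]     # model's predicted probabilities
print(auc(labels, scores))             # 0.8333... (5 of 6 pairs ranked correctly)
```

An AUC of 0.97 to 0.99 therefore means the encrypted model almost always scores an abnormal kidney above a normal one.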

As Professor Lee noted, "With future advancements in high-performance graphics processing units and further optimization of algorithms, this encrypted model is expected to become the standard for privacy-preserving medical image analysis."

Why does it matter?

In healthcare, privacy isn't just paperwork — it's a matter of ethics, trust, and law. AI models need mountains of patient data to learn, but that data is often locked behind firewalls for good reason. This research shows that hospitals don't have to choose between innovation and confidentiality.

  • Zero data leakage means doctors can harness AI's diagnostic precision without exposing personal details.
  • Regulatory confidence grows, since encrypted workflows sidestep legal gray zones around patient consent and data sharing.
  • Global implications are huge: similar systems could extend to X-rays, MRIs, and even genomic data.

Professor Suh put it simply: "This encrypted model ensures the secure protection of sensitive patient information, making it a technology that can promote the wider use of AI diagnostics while minimizing legal and ethical concerns."

The context

Homomorphic encryption isn't new, but until recently it was too slow and too heavy for real-time use. What makes this work stand out is that it brings post-quantum cryptography — tech designed to survive future quantum hacks — into the hospital. The study, published in Radiology: Artificial Intelligence (impact factor 13.2), reflects a broader shift: medical AI is maturing from flashy prototypes to privacy-aware tools ready for deployment.

Supported by the Asan Institute for Life Sciences, the Ministry of Science and ICT, and Korea's National Research Foundation, this project marks a milestone. It suggests that the next generation of AI in medicine won't just be smarter — it'll be safer, too.
