AI chatbot helps doctors learn difficult end-of-life conversations

Hong Kong will make advance medical directives legally binding in May. But there's a problem: the city doesn't have nearly enough trained doctors to handle the complex conversations this will require.
Enter artificial intelligence. Researchers at the University of Hong Kong are building an AI chatbot that trains healthcare workers to discuss end-of-life care with patients and families. The tool simulates realistic patient responses and gives doctors immediate feedback on their communication skills.
How does it work?
The chatbot runs practice scenarios where doctors can rehearse advance care planning conversations. It plays the role of patients and family members, responding naturally to what the doctor says.
"This is a conversational AI simulator that enables students and clinicians to practice ACP discussions in realistic scenarios with real-time feedback," says Dr Jacqueline Yuen Kwan-yuk, who leads the research at HKU's medical faculty.
The system judges communication quality on several factors: how well doctors share information, explore patient preferences, respond to emotions, and involve patients in decisions. This assessment framework was tested on 137 real conversations between doctors, patients, and families across five hospitals and a community hospice.
The AI trains on actual conversation transcripts that have been stripped of identifying information. The team expects to spend 24-36 months perfecting the technology, testing it first with students, then with practicing doctors.
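The practice-and-feedback loop described above can be sketched in miniature. This is a hypothetical illustration only: the function names, rubric domains' wording, and keyword cues below are assumptions modelled on the four factors the article names, not the HKU team's actual implementation (which would use a language model rather than keyword matching).

```python
# Toy sketch of rubric-based feedback on a single doctor turn.
# The four domains mirror those named in the article; everything
# else (names, cues, phrasing) is illustrative.

RUBRIC = {
    "information_sharing": "Share prognosis and options clearly",
    "eliciting_preferences": "Explore the patient's values and goals",
    "responding_to_emotion": "Acknowledge and respond to emotional cues",
    "shared_decision_making": "Involve the patient in the decision",
}

# Stand-in cues; a real system would score these with an LLM.
CUES = {
    "information_sharing": ["options", "prognosis"],
    "eliciting_preferences": ["important to you", "your values"],
    "responding_to_emotion": ["sounds difficult", "i hear"],
    "shared_decision_making": ["decide together", "your decision"],
}

def score_turn(transcript: str) -> dict:
    """Mark each rubric domain as addressed or not."""
    text = transcript.lower()
    return {domain: any(cue in text for cue in cues)
            for domain, cues in CUES.items()}

def feedback(scores: dict) -> list:
    """Turn unmet rubric items into coaching prompts."""
    return [f"Consider: {RUBRIC[d]}" for d, met in scores.items() if not met]

turn = ("There are a few options for your care. "
        "Can you tell me what is most important to you? "
        "I know this sounds difficult.")
print(feedback(score_turn(turn)))  # flags the one domain the turn missed
```

The point of the sketch is the shape of the loop, not the scoring: the doctor speaks, the system checks the turn against a fixed rubric, and unmet items come back immediately as coaching prompts.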
Why does it matter?
Hong Kong's healthcare system faces a crisis. The city has roughly 40 to 50 palliative medicine specialists serving 7.5 million people: one for every 150,000 to 180,000 residents. International standards call for at least 1.5 specialists per 100,000 people; Hong Kong has about 0.6.
The shortage of trained conversation facilitators is even worse, according to Dr Yuen. Medical schools offer limited training in these difficult discussions. Few postgraduate programs exist to build these skills among working doctors.
When the new legislation takes effect, this gap could create serious problems. Poorly trained doctors might rush through conversations or miss important details, and legal and ethical issues could arise if advance directives don't truly reflect what patients want.
The research found troubling quality gaps in current practice: doctors discussed backup decision-makers in fewer than 5% of cases, explored non-medical priorities in only 30% of conversations, and connected treatment plans to patient values in under one-third of discussions.
The context
Traditional training methods can't scale fast enough. Apprenticeship models and faculty workshops are too slow and expensive to train thousands of doctors quickly. The AI approach offers 24/7 practice access without needing human supervisors.
"AI platforms could provide standardised training across institutions and personalised feedback at scale," Dr Yuen explains. The system gives doctors a safe space to practice emotionally complex conversations without real-world consequences.
But Dr Yuen warns of risks if training doesn't keep pace with the new law. The result could be "superficial compliance": advance directives that are legally valid but don't really capture patient wishes. Unprepared doctors might also suffer moral distress from difficult conversations they're not equipped to handle.
The team plans to expand beyond education. Hospitals could use their assessment framework for quality improvement and peer reviews. The system might integrate with electronic health records through structured documentation templates.
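To make the "structured documentation template" idea concrete, here is one minimal sketch of what such a record might look like. Every field name and value below is an assumption for illustration; the team's actual schema has not been published, and a production system would likely map onto an EHR standard such as HL7 FHIR.

```python
# Hypothetical structured ACP documentation record, serialized as JSON
# so it could be stored in or exchanged with an electronic health record.
import json

acp_record = {
    "patient_id": "ANON-001",  # de-identified, as the article describes
    "conversation_date": "2025-05-01",
    "goals_of_care": "Comfort-focused; wishes to remain at home",
    "surrogate_decision_maker": {       # the 'backup decision-maker'
        "relationship": "spouse",       # discussed in <5% of cases today
        "contact_recorded": True,
    },
    "non_medical_priorities": ["family gatherings", "religious observance"],
    "advance_directive_discussed": True,
    "follow_up_needed": False,
}

print(json.dumps(acp_record, indent=2))
```

A template like this would make the quality gaps the research identified auditable: a hospital could query how often the surrogate decision-maker or non-medical priorities fields are actually filled in.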
Other regions are tackling similar challenges. Singapore launched myACP, a free online tool for documenting end-of-life care preferences, available to residents 21 and older without serious medical conditions. Tokyo-based Jolly Good partnered with Harvard-affiliated Brigham and Women's Hospital in 2023 to create virtual reality training for palliative care communication.
Dr Yuen stresses that AI should support, not replace, human judgment in these deeply personal conversations. Any system would need strong privacy protections and informed consent under Hong Kong's data protection laws.

