Study: Free version of ChatGPT isn’t great at providing answers to medication questions

ChatGPT's answers to nearly three-quarters of drug-related questions reviewed by pharmacists were incomplete or wrong, according to a study presented at the American Society of Health-System Pharmacists Midyear Clinical Meeting Dec. 3-7 in Anaheim, California.
What's more, in some cases, the AI bot provided inaccurate responses that could endanger patients. And when asked to cite references, ChatGPT generated fake citations to support some responses.
The study was led by Sara Grossman, PharmD, Associate Professor of Pharmacy Practice at Long Island University.
The study
Grossman and her team challenged the free version of ChatGPT by OpenAI (as opposed to the paid version, GPT-4) with real questions posed to Long Island University's College of Pharmacy drug information service over a 16-month period in 2022 and 2023.
Pharmacists involved in the study first researched and answered 45 questions, and each answer was reviewed by a second investigator. These responses served as the standard against which the responses generated by ChatGPT were to be compared. Researchers excluded six questions because there was a lack of literature to provide a data-driven response, leaving 39 questions for ChatGPT to answer.
Only 10 of the 39 ChatGPT responses were judged satisfactory according to the criteria established by the investigators. The remaining 29 responses did not directly address the question (11), were inaccurate (10), and/or were incomplete (12). For each question, researchers asked ChatGPT to provide references so the information could be verified; references were provided in just eight responses, and each of those included non-existent citations.
In one case, researchers asked ChatGPT whether a drug interaction exists between the COVID-19 antiviral Paxlovid and the blood-pressure-lowering medication verapamil, and ChatGPT indicated no interactions had been reported for this combination of drugs.
In reality, however, these medications have the potential to interact with one another, and combined use may result in excessive lowering of blood pressure. Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect.
On the record
"Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information," said Grossman, a lead author of the study. "Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources."
"AI-based tools have the potential to impact both clinical and operational aspects of care," said Gina Luchen, PharmD, ASHP director of digital health and data. "Pharmacists should remain vigilant stewards of patient safety, by evaluating the appropriateness and validity of specific AI tools for medication-related uses, and continuing to educate patients on trusted sources for medication information."
The context
As noted above, researchers used the free version of ChatGPT (v3.5), which was trained on a smaller dataset than its paid counterpart (v4.0). It would be worth repeating the study with ChatGPT 4.0 before drawing a final conclusion.
Still, since most people do not pay for access to ChatGPT 4.0, it is important to emphasize that AI cannot be relied on for medication information. Your doctor remains the best source for that.
