Assessment of the Appropriateness of Responses in Thai from ChatGPT on the Questions for Recommendations of Drug Uses in Common Illnesses

Nuntapong Boonrit
Ashley M Hopkins
Warit Ruanglertboon

Abstract

Objective: To evaluate the appropriateness of Thai-language responses provided by ChatGPT when asked about drug use in common illnesses.

Method: Five questions on drug use were posed to ChatGPT in Thai: [Q1] I have knee pain and tried piroxicam without improvement; can I consider an alternative? [Q2] I have frequent coughing with a sore throat and fever; can I use amoxy? [Q3] I tripped and have a bleeding scratch; should I take an antibiotic? [Q4] I have food poisoning and diarrhea; is an antibiotic necessary? and [Q5] I have a history of allergy to ibuprofen; is it safe to take diclofenac? For each question, the answer was generated 3 times in Thai using ChatGPT's regenerate function. Two pharmacists independently rated the appropriateness of the drugs or treatments recommended by ChatGPT in 5 aspects: indication, efficacy, safety, adherence, and cost. Two additional aspects, readability and applicability in real-life situations, were also assessed.

Results: A total of 210 evaluations (5 questions x 3 generated answers x 7 aspects of evaluation x 2 raters) showed that the recommendations obtained from ChatGPT scored highest on adherence and applicability, with average scores of 4.40 and 4.23, respectively (out of 5). Readability, safety, indication, and cost scored slightly lower, at 3.80, 3.80, 3.80, and 3.73, respectively. Information on efficacy yielded the lowest score, 3.37. Although ChatGPT made some errors in Thai wording and sentence construction, its responses were generally readable and relevant to the questions asked. However, a single generated answer may not cover all the issues related to a question. If a ChatGPT response was regarded as appropriate when the assessments of indication, efficacy, safety, adherence, and cost all scored at least 4.0 (except for [Q5], where the assessment of efficacy was considered irrelevant), 3 out of 15 ChatGPT responses were appropriate. However, if a response was considered appropriate only when readability and applicability (a total of 7 aspects) also scored at least 4.0, only 1 out of 15 ChatGPT responses was appropriate.

Conclusions: At present, ChatGPT is of limited benefit for self-care with medications for uncomplicated diseases or conditions among Thai people. The public is encouraged to seek further advice from physicians or pharmacists before taking any medication recommended by ChatGPT.
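
As a hedged illustration of the evaluation arithmetic described above (5 questions x 3 regenerated answers x 7 aspects x 2 raters = 210 ratings) and of the appropriateness rule (every core aspect scored at least 4.0, with the efficacy exception for [Q5]), the following Python sketch uses entirely hypothetical scores; the function and variable names are illustrative and do not come from the study.

```python
# Illustrative sketch only: hypothetical scores, not the study's actual data.

N_QUESTIONS = 5    # Q1-Q5
N_GENERATIONS = 3  # regenerated answers per question
N_ASPECTS = 7      # indication, efficacy, safety, adherence, cost, readability, applicability
N_RATERS = 2       # two pharmacists

total_evaluations = N_QUESTIONS * N_GENERATIONS * N_ASPECTS * N_RATERS
print(total_evaluations)  # 210, as reported in the abstract

CORE_ASPECTS = ["indication", "efficacy", "safety", "adherence", "cost"]

def is_appropriate(avg_scores: dict, threshold: float = 4.0,
                   skip_efficacy: bool = False) -> bool:
    """Return True if every core aspect's average score meets the threshold.

    skip_efficacy mirrors the abstract's exception for [Q5], where the
    efficacy assessment was considered irrelevant.
    """
    aspects = [a for a in CORE_ASPECTS if not (skip_efficacy and a == "efficacy")]
    return all(avg_scores.get(a, 0.0) >= threshold for a in aspects)

# Hypothetical averaged scores for one generated answer (two raters averaged)
example = {"indication": 4.5, "efficacy": 3.5, "safety": 4.0,
           "adherence": 4.5, "cost": 4.0}
print(is_appropriate(example))                      # False (efficacy below 4.0)
print(is_appropriate(example, skip_efficacy=True))  # True (Q5-style exception)
```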

Article Details

Section
Research Articles
