Research claims ChatGPT more empathetic than actual medics

Monitoring Desk

CALIFORNIA: A recent study suggests that OpenAI's ChatGPT may be just as good as medics, if not better, at replying to patients' medical queries.

A team of researchers from the University of California, San Diego; Johns Hopkins; and other universities posed 195 medical questions to ChatGPT and compared both the quality and the compassion of the chatbot's answers to responses from actual physicians on Reddit.

A group of healthcare professionals, including specialists working in internal medicine, paediatrics, oncology, and infectious disease, scored both the bot's and the physicians' answers on a five-point scale, evaluating the "quality of information" and "the compassion or bedside manner" provided.

In the study, the clinicians preferred the chatbot's answer to the physician's in 78.6% of the 585 evaluations. The chatbot's answers were rated "good" or "very good" in quality 3.6 times more often, and "empathetic" or "very empathetic" 9.8 times more often, than the doctors' replies.

Medics keep it short, while ChatGPT answers in detail

A major reason ChatGPT won out in the study is that the bot's replies were longer and more personable than the doctors' brief, time-pressed answers.

For example, when asked whether it is possible to go blind after getting bleach in your eye, ChatGPT responded, "I'm sorry to hear that you got bleach splashed in your eye," and offered four more sentences of explanation, with clear instructions on how to rinse it out.

The doctor simply said, "Sounds like you will be fine," and briefly instructed the patient to "flush the eye" or call poison control.

ChatGPT isn’t capable of diagnosing on its own

However, readers should not let ChatGPT’s performance in this study mislead them. It’s still not a physician.