ChatGPT as 'doctor': When AI leads to dangerous self-medication

At a time when artificial intelligence is entering every aspect of our lives, many people are entrusting their health to ChatGPT. But can AI replace a real doctor? From mothers treating their children according to a chatbot's advice to the global rise of digital self-medication, this phenomenon is raising serious public-safety concerns.

Ida Ismail

With the development of artificial intelligence, more and more patients are skipping medical visits, relying instead on ChatGPT or online platforms to interpret test results and obtain medical advice. According to doctors, this trend is becoming a serious public-health risk, as it can lead to misdiagnoses and irreparable harm to patients.

Pediatrician Dr. Loreta Gjoni describes concrete cases of young mothers who gather information online and treat their children according to artificial intelligence advice. One mother refused to give her child fever-reducing medication, such as paracetamol, for a temperature of 38.5°C after ChatGPT had told her about the drug's side effects.

"The child had been running a high fever for several hours and came home exhausted and sleepy. The mother wouldn't let us apply a fever reducer, she was very against using paracetamol. Only after explanations was she convinced that she had acted wrongly and we stabilized the child's condition," she says.

The mother said she had relied entirely on ChatGPT's advice, which had made alarming claims about the effects of paracetamol; in another case, the same happened with aspirin.

According to experts, ChatGPT can help as an information tool for translating terms, summarizing data, or providing general clarifications, but not as a diagnostic or treatment tool. Dr. Gjoni emphasizes that parents often bring her questions that stem from misinterpretations of artificial intelligence:

"It is important that doctor-patient communication is based on solid foundations. Only the doctor knows how to interpret symptoms and decide on the appropriate treatment. ChatGPT does not know the patient, has no clinical context, and bears no responsibility."

The case is not isolated. In many countries, reports show a rise in “digital self-medication” through AI. An international study published in JMIR Human Factors by Stanford University researchers analyzed the use of ChatGPT for self-diagnosis and the reasons people trust the tool for health issues. Of the 607 respondents, 78% said they would be willing to use ChatGPT for medical advice or to interpret symptoms.

The authors attribute this trust to an impression of high performance and the feeling that ChatGPT helps with decision-making. However, the study warned of serious risks and recommended that artificial intelligence be regulated and adapted for safe, supervised use in healthcare.

Even OpenAI itself has warned that ChatGPT "does not replace healthcare professionals," but many users see it as an authoritative source due to the convincing way it formulates answers.

Experts warn that artificial intelligence cannot distinguish clinical nuances such as patient history, the co-occurrence of symptoms, or drug interactions. Furthermore, AI models often “hallucinate,” that is, they invent false information and present it as fact.

Another problem is that ChatGPT is not built to provide personalized advice. It relies on generic text from the internet, which may be outdated, taken out of context, or lack verified medical sources.

According to the World Medical Association (WMA), the use of AI in healthcare requires clear protocols, transparency, and professional oversight. Otherwise, patients risk making dangerous decisions, delaying treatment, or misusing medications. In an era of rapid technological change, the greatest danger remains replacing accurate information from doctors with that of artificial intelligence. /acqj.al