Using AI tools like ChatGPT for medical diagnosis can be dangerous, study warns

A new Canadian medical study has issued a stark warning about the dangers of using artificial intelligence tools — such as ChatGPT — for self-diagnosing health conditions.
While these tools may offer quick answers, researchers found they can produce inaccurate and potentially life-threatening results, especially in complex or critical medical situations.
The study centred on an experiment assessing the diagnostic accuracy of ChatGPT-4 based on user-reported symptoms. While the AI model correctly identified simple, non-urgent conditions, it failed to recognize serious, life-threatening issues such as aortic dissection, a condition that requires urgent medical intervention and, if misdiagnosed, can lead to delayed treatment and a higher risk of death.
Researchers stressed that these tools lack the clinical context and nuanced understanding that physicians apply when diagnosing and treating patients. As a result, relying solely on AI for health decisions may give users a false sense of security and deter them from seeking proper medical care.
The study also highlighted a growing trend: many individuals use AI-driven platforms for health advice due to anxiety or to avoid doctor visits, unknowingly putting their health at risk.
In light of these findings, the researchers recommended public awareness campaigns to educate users on the limitations of AI tools in medical diagnosis. They also called for clear regulations governing the medical application of AI technologies and urged developers to continue enhancing the accuracy and reliability of such systems before wider adoption in healthcare.
The study underscores that while AI holds great potential in medicine, it is not yet a safe substitute for professional medical evaluation and care.