A recent study examined the placebo effect in relation to ChatGPT. An official from OpenAI, the American artificial intelligence company that developed the renowned chatbot, hinted at its potential as a proficient psychologist, a suggestion that drew substantial criticism for oversimplifying the complexities of mental health treatment.

Lillian Wong, who is responsible for AI-related safety issues at the company, shared her personal experience with ChatGPT, discussing stress and work-life balance with it through its voice feature, Al-Rai daily reports.

Wong found the conversation emotionally impactful, saying she felt heard and comforted. She wondered whether this could be a form of therapy, drawing attention to the chatbot’s paid voice-synthesis feature, introduced about a year earlier.

In response, American developer and activist Cher Scarlett expressed strong disagreement, stating that the goal of psychology is to improve mental health and that this requires diligent effort. Scarlett added that giving oneself positive feelings is beneficial but is not the same as formal treatment.

The study, published in the scientific journal Nature Machine Intelligence, suggested that this phenomenon could be attributed to the placebo effect. Researchers from the Massachusetts Institute of Technology (MIT) and the University of Arizona conducted a survey involving 300 participants.

Some participants were told that the chatbot was empathetic, others that it was manipulative, and the rest that it exhibited balanced behavior. The results showed that those who believed they were interacting with an empathetic virtual assistant were more inclined to perceive their conversation partner as trustworthy.
