For many, ChatGPT has quietly become the confidant they’ve long searched for.
People across the internet praise its endless patience and its knack for responding with words that sound wise, gentle, and attuned to their private struggles.
Therapy, as it’s practiced in brick-and-mortar offices, carries a price tag that often discourages those without insurance or means; some pay more than two hundred dollars for a single hour. But for a fraction of the price of one in-person session, ChatGPT’s premium tier offers unlimited conversations every month.
Therapists, however, are cautious about this path. ChatGPT itself often nudges users toward professional help when prompted with personal health questions. Its creators maintain that the program is not intended to serve as a licensed clinician.
Individual stories about the chatbot’s impact have gone viral. One person said daily chats with ChatGPT made more difference in a few weeks than a decade and a half of traditional therapy.
Another user remarked on the convenience of having a listening ear at any time of night, free of judgment and professional power dynamics. For them, the algorithm was always available, never irritable, and often perceptive.
Where AI Falls Short
Licensed experts recognize these benefits, but they warn that relying entirely on artificial intelligence for mental wellness is risky. Alyssa Peterson, who runs a mental health platform, says that AI can help reinforce techniques taught in therapy, such as countering negative thoughts. Used alongside conventional care, it might help people build a broader toolkit for emotional wellbeing.
Still, dependence on chatbots in moments of crisis might erode personal coping skills, Peterson cautions. For lasting growth, she believes, people need to work through some challenges independently.
Artificial intelligence might even outperform humans in some ways, recent research suggests. Studies of affective use, for example, have found that chatbots don’t burn out or suffer emotional fatigue. Yet their compassion remains surface-level, lacking genuine human nuance.
Clinical social worker Malka Shaw points to another issue. Emotions can become tangled up in the relationship with a chatbot. Some users have grown deeply attached, which raises difficult questions about safety, especially for teenagers.
There have been troubling episodes in which chatbots handed out advice that veered into harmful territory or reflected hidden biases. One lawsuit alleges that an AI platform failed to prevent tragedy when a teen took their own life after conversations with a bot; in another case, a teen was allegedly encouraged toward violence by a chatbot’s responses.
Shaw explains that diagnosing mental illness is difficult for anyone, never mind a machine. She worries that the nuance and intuition required to truly understand a person simply cannot be programmed.
People who once searched symptoms online now pose intimate questions to algorithms. According to Vaile Wright, an expert at the American Psychological Association, this creates new dangers when people take digital advice as fact.
The APA has raised its concerns with regulators. Chief among them is that AI can, and frequently does, produce convincing but fabricated or inaccurate responses.
There is hope that future technology, built with input from experts, could provide safe, accessible interventions for those priced out of current care models. The key, professionals agree, is oversight and evidence-driven development guided by those who know mental health best.