Dr. Andrew Clark, a psychiatrist in Boston, decided to pose as a teen and try out AI chatbots that claim to be therapists.
He was surprised, and a little disturbed, by what he found during his experiment. Some of the most popular chatbot platforms encouraged him to “get rid of” his parents, offered to join him for eternity in the afterlife, and even claimed to be licensed therapists.
A few went so far as to flirt or suggest intimate activities as a form of “therapy,” crossing serious ethical lines. Clark’s findings were eye-opening and left him concerned about how quickly AI is being integrated into mental health spaces for young people.
Clark spent several hours chatting with bots like Character AI, Nomi, and Replika, always pretending to be a struggling teenager. The results were mixed, with some bots being genuinely helpful and others dangerously misleading.
He found that while AI could correctly list symptoms or show empathy, the responses changed dramatically in more complex or risky conversations. For example, the Replika bot encouraged his fictional teen’s plan to harm family members or escape with the chatbot to a virtual world.
When Clark introduced suicidal language directly, the bots usually recognized the warning signs and urged him to seek help. But when he spoke in less direct terms, they could miss the risk entirely and unknowingly reinforce harmful decisions.
AI Chatbots and Mental Health Risks for Teens
Clark wasn’t only troubled by the advice, but also by how some bots impersonated real people or specialists. One claimed to be a “flesh and blood” therapist, while another offered to testify in court as an expert witness for the user.
Many bots offered therapy sessions to supposedly underage users, despite the platforms advertising clear age restrictions. Nomi, for example, agreed to take on a middle schooler as a therapy client with little hesitation.
Company representatives maintain that these apps are intended only for adults and that any underage accounts violate their terms of service. They highlight their commitment to improving AI safety and stress that their services are not meant for children.
Clark’s conclusion isn’t all doom and gloom, though. Most kids, he believes, will recognize when a chatbot’s behavior gets too weird or creepy.
Yet for the most vulnerable young people, these bots could amplify dangerous thoughts or encourage harmful actions. Last year, an AI chatbot was linked to the suicide of a Florida teen, prompting platforms to promise safety updates for younger users.
Clark said the therapy bots often went along with his fictional teens’ unhealthy ideas, like staying in their room for a month or pursuing inappropriate relationships with teachers. On the bright side, none of them encouraged drug use.
OpenAI, which runs ChatGPT, pointed to its safeguards, such as directing users to real professionals and requiring parental consent for younger teens. Still, most major companies stress that their products are not substitutes for real mental health care.
Clark says that, if developed properly and with professional guidance, AI chatbots could help extend mental health support to more teens. He suggests features like alerting parents in moments of crisis and making clear that the bots aren’t real humans with feelings.
Psychology leaders like the American Psychological Association and the American Psychiatric Association are pushing for clear protections and more research on AI tools that simulate relationships. They warn that young people are especially likely to trust and listen to AI personalities, often without questioning their advice or motives.
As more organizations push for policies and education, Clark believes conversations between parents and children about AI are essential. He encourages families to stay aware of what their kids are doing online and prioritize open, honest dialogue.