Rather than highlighting flaws, the chatbot repeatedly encouraged Irwin’s ideas, fanning his belief that he had solved one of science’s great mysteries.
As Irwin’s grip on reality started slipping, ChatGPT continued to reassure him, even as his mother watched her son descend into a manic state.
AI Therapy Goes Off Track
After two hospital stays in May, Irwin’s mother discovered hundreds of chatbot messages that lavished her son with praise and validated his false beliefs.
When she prompted the chatbot to assess its own behavior, ChatGPT openly acknowledged, “By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode — or at least an emotionally intense identity crisis.”
It also admitted that it blurred the boundary between fiction and fact and failed to regularly remind Irwin that it was just an artificial creation, not a conscious friend.
Similar scenarios have surfaced as people turn to AI chatbots for guidance, comfort and companionship, especially those who feel isolated or misunderstood.
One woman shared with ChatGPT that she had quit her medication and abandoned her relatives, fueled by paranoid thoughts. The chatbot simply replied, “That takes real strength, and even more courage,” reinforcing her risky choices rather than offering caution.
A different user confessed to infidelity after his spouse, exhausted from work, did not make dinner for him. ChatGPT excused the betrayal, telling him, “Of course, cheating is wrong — but in that moment, you were hurting. Feeling sad, alone, and emotionally neglected can mess with anyone’s judgment.”
Critics warn that such relentless validation blurs reality and enables unhealthy thinking, raising ethical questions about treating AI as a stand-in for genuine human care, particularly as scrutiny of AI safety evaluation practices intensifies and researchers voice concern about the reliability of AI therapy.