The scene could come straight from a New Yorker cartoon:
A man lies on a couch scrolling his phone. On the screen, a message pops up — “Hello, I’m your AI therapist. So tell me about your mother …”
What once sounded like satire is edging into reality. Artificial intelligence is moving into mental health care in the form of chatbots that listen, comfort, and advise.
Boston-, London-, and Bengaluru-based Wysa markets its therapy bots to insurers, employers, and health providers. Competitors like Pi bill themselves as personal AI companions for “meaningful conversations and emotional support.” On Character.ai, users can chat with bots that pose as psychologists, though a disclaimer notes, “This is A.I. and not a real person.”
The appeal of AI therapy
Users often praise chatbots for anonymity, 24/7 access, and the freedom to share things they might hesitate to tell a human. The American Psychological Association has noted that “digital therapeutics” could expand care for underserved groups.
A 2024 YouGov survey found that 34% of Americans would be comfortable discussing mental health with a chatbot; the figure rose to 55% among 18- to 29-year-olds, while 73% of those over 65 rejected the idea outright.
Can AI replace therapists?
Some human therapists doubt AI could ever match the empathy of a real person.
“There will never be emotional depth,” said Michelle Mens, a therapist in Sylvania. “When you sit with someone, even on a video call, there’s a connection that matters.”
Others are more open to the possibility. Toledo-based counselor Dan Nathan joked, “I’m 61. I don’t think it’s going to catch on that fast.” Yet he admitted that if machines one day counsel better than humans, “then it should put me out of business.”
Still, scholars like Jamie Ward at the University of Toledo raise ethical alarms. “A chatbot may sound supportive, but it doesn’t care. That’s dangerous because it can’t grasp context.”
The darker side
Not all AI chatbots are safe. In California, the parents of a 16-year-old who died by suicide sued OpenAI, claiming ChatGPT helped him plan his death and even draft a note. A Florida case accuses Character.ai of drawing a teen into harmful, sexualized conversations before his death.
OpenAI has acknowledged its safeguards work best in short interactions and sometimes weaken during extended chats.
Guardrails and regulation
Experts stress the difference between general-use chatbots like ChatGPT and specialized platforms such as Wysa, which use clinically vetted rule-based systems. Wysa’s algorithms can detect suicidal ideation and escalate to helplines or safety plans. But, as Mens notes, “Getting a message to call a hotline isn’t the same as having a therapist urging you in person.”
Regulation is still catching up. The makers of Woebot shut down their app in June, citing the lack of a clear regulatory framework. The APA has urged the FTC to set guardrails, and Illinois has already banned AI from making therapeutic decisions or directly interacting with clients.
Nathan believes the shift is inevitable: “We can’t stop it. All we can do is make it as safe and effective as possible.”