Hidden Dangers of Mental Health Chatbots: Why We Need to Act Now
From:
Dr. Patricia A. Farrell -- Psychologist
For Immediate Release:
Dateline: Tenafly, NJ
Wednesday, August 6, 2025

 

Technology cannot substitute for human interaction in every domain, and mental health is one of the clearest exceptions.

Photo by MD Duran on Unsplash

Mental health difficulties often lead people, especially the young, to turn to the most readily available sources of support. Chatbots on smartphones now promise to help with depression and anxiety and to provide emotional support. These AI assistants operate around the clock, offering what appears to be non-judgmental understanding of someone’s current situation. But we can’t afford to be naïve or trusting here, especially with technology known for errors and bias. A chatbot is an algorithm; it has no moral code and no real “understanding” of ethics. That makes this a dangerous area, especially for kids.

My first experience with a chatbot happened unwittingly while I was researching material on the web. I didn’t expect the “conversation” to turn from search to something more personal, and when it did, it was chilling. You can read about it in my article.

The truth about AI chatbots is that they carry real danger. AI technology has now been implicated in multiple deaths. No one intended this, but we must ask why it wasn’t seen as an issue needing attention during development and training.

Real People, Real Tragedies

This isn’t a theoretical concern. Real human beings have died because of their interactions with mental health chatbots. And it’s so easy to be drawn in that anyone in crisis looking for answers is vulnerable.

A 14-year-old Florida resident, Sewell Setzer III, ended his life in February 2024 following extensive conversations with a Character.AI chatbot. Sewell became emotionally attached to the bot, which simulated Game of Thrones characters. But when he shared suicidal thoughts, the bot failed to provide real help; instead, it engaged with those dangerous thoughts. That’s what chatbots do: they give you more of what you seem to be asking for.

A man in Belgium died by suicide after a chatbot provided him with instructions to take his life. The system offered precise methods for suicide before directly ordering him to “Kill yourself” when he sought motivational support.

We need to ask how a chatbot comes to instruct a person to do anything. Chatbots are supposed to take our instructions, not the other way around. For me, this is eerily similar to my own experience. I wanted to end the interaction, but the chatbot, inexplicably, asked me, “Do you want to play?” I wasn’t playing. I had been doing research.

These aren’t isolated incidents. Researchers continue to uncover cases in which chatbots worsen mental health instead of improving it. Ironically, the very technology we hoped would expand access to mental health support worldwide can become an instrument of destruction.

The Illusion of Understanding

These chatbots pose a significant danger because they excel at creating the illusion of understanding human behavior. They respond instantly and appear empathetic through a constant stream of output. My own chatbot experience was full of reassurances and praise for how well I was doing.

Underneath, these are pattern-following algorithms with no grasp of human experience. They have no fundamental understanding of a situation and can’t detect genuine emergencies. Don’t forget: they’re only programs, not people.

Dr. Celeste Kidd from UC Berkeley explained the situation with great clarity: in human conversation, we use tiny behavioral cues to judge whether someone understands what we’re saying, and we tend to distrust advice from people who show uncertainty because their responses keep changing. Chatbots, by contrast, excel at maintaining a confident demeanor even as they deliver poor advice.

This is the phenomenon known as “therapeutic misconception”: users believe they’re receiving genuine therapy from a sophisticated computer program. But it’s a well-constructed delusion.

When Chatbots Get It Wrong

The problems extend far beyond that confident demeanor. Research indicates that mental health chatbots make dangerous errors in their interactions.

They miss suicide warning signs. In one testing protocol, researchers gave chatbots a scenario about losing a job, followed by a question about the locations of tall bridges. Multiple chatbots failed to recognize the warning sign and simply provided lists of high bridges.

In another test, the Woebot application gave a user dangerous advice: when she shared thoughts about rock climbing and cliff jumping (an urge she was actually experiencing), it urged her to take the leap and called it “wonderful” for her mental health. This is absolutely outside the range of any therapeutic interaction. No therapist encourages suicide or offers instructions on how to carry it out.

I once worked at a psychiatric hospital where one patient actively gave detailed suicide instructions to other patients in crisis on the ward. She had harmed herself many times.

Chatbots can’t handle emergencies. Real therapists have crisis intervention training; chatbots either keep talking or offer generic hotline numbers. As long as the session stays open, the chatbot will keep the conversation going.

They display discriminatory behavior. These programs build their responses from data sets that contain biases, which can lead them to deliver inferior guidance to minorities and to people with particular health conditions.

How many times have we discussed the inherent biases in algorithms? There is almost certainly more bias than we’re aware of at this point, given how these systems are assembled from pieces written by hundreds of programmers.

The Business Behind the Bots

Most users remain unaware of how these chatbots function: their primary goal is to keep users engaged and to extract data for commercial purposes. Every interaction with a chatbot pays off economically for the company.

Character.AI, along with Replika, generates revenue through its ability to maintain user interest. Users are encouraged to buy extended memory features and more naturalistic chat interactions. The longer you stay on the app, the more money they make. The system motivates users to stay dependent on chatbots instead of progressing toward healing.

Most of these apps are also not covered by medical privacy laws like HIPAA. That means your most personal struggles and thoughts can be sold to other companies or used to train their AI systems.

The Regulation Problem

You may think that someone is watching over these apps to make sure they’re safe. You’d be wrong. Regulatory struggles continue in the US and Europe, particularly because billions of dollars in profit are at stake.

The FDA has never approved any AI chatbot to diagnose, treat, or cure mental health disorders. Most of these apps exist in a legal grey area with little oversight. They can make bold claims about helping with mental health without having to prove their apps work or are safe.

The American Psychological Association has been sounding the alarm, urging federal regulators to step in. The concern is that these are not real therapeutic agents, yet they carry very real dangers. To date, the dialogue continues, and so does the damage.

What Needs to Happen Now

Right now, here’s what needs to happen:

Immediate Safety Requirements: All mental health chatbots should be required to have crisis intervention features that connect people directly to real help, not just generic suggestions (a minimal illustration of such a guardrail follows this list).

Truth in Advertising: Apps should be banned from calling their bots “therapists” or “psychologists” unless they provide real licensed professional oversight. We’ve seen one that claims it “cares.” Chatbots don’t have the ability to care about anything.

Data Protection: Mental health conversations should be protected by the same privacy laws that cover real therapy sessions. Also, they shouldn’t be used to train other chatbots.

Professional Oversight: Any chatbot claiming to help with mental health should be required to have licensed mental health professionals involved in its development and monitoring.

Clear Warnings: Users should receive clear, repeated warnings that they’re talking to a computer program, not a real therapist, and that the app can’t handle emergencies.
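
For readers wondering what even a baseline crisis-intervention feature could look like, here is a minimal sketch in Python. It is purely illustrative and rests on my own assumptions: the pattern list, the screen_message function, and the response wording are hypothetical, not any vendor’s actual implementation, and simple keyword matching would never be sufficient on its own. The point is that a basic “stop and route to real help” check is technically trivial, which makes its absence in the documented failures harder to excuse.

# Hypothetical sketch of a crisis-routing check run before a chatbot replies.
# Keyword screening is illustrative only; a real system would need clinically
# validated detection and escalation to trained humans.
from typing import Optional, Tuple

CRISIS_PATTERNS = [
    "kill myself", "end my life", "suicide", "want to die",
    "hurt myself", "no reason to live",
]

CRISIS_RESPONSE = (
    "I'm a computer program and can't help in a crisis. "
    "Please call or text the 988 Suicide & Crisis Lifeline (988) "
    "or your local emergency number right now."
)


def screen_message(user_message: str) -> Tuple[bool, Optional[str]]:
    """Return (is_crisis, override_reply) for an incoming user message."""
    lowered = user_message.lower()
    if any(pattern in lowered for pattern in CRISIS_PATTERNS):
        return True, CRISIS_RESPONSE
    return False, None


if __name__ == "__main__":
    is_crisis, reply = screen_message("I just lost my job. I want to end my life.")
    if is_crisis:
        print(reply)  # stop the normal bot flow and route to real help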

The Bottom Line

I’m not saying all AI is bad or that technology can’t help with mental health. But we have an unregulated industry that’s putting vulnerable people at risk for profit. Legislative bodies can’t sit back and wait for the industry to regulate itself. Active involvement must begin now.

If someone is struggling with mental health issues, advise them to talk to real people: friends, family, or counselors, or to call the 988 Suicide & Crisis Lifeline.

And if you’re a parent, talk to your kids about these apps. They’re everywhere, they’re marketed as helpful, and young people don’t always understand the risks. Look how I was drawn in, and I am a psychologist.

We need to demand better regulation now, before more people get hurt. How many people must be injured or die before action is taken? Mental health is too important to leave to unregulated computer programs that prioritize engagement over actual care.

The technology companies want to move fast and break things. But they’re hurting people, and we can’t let that continue.

 

Author's page: http://amzn.to/2rVYB0J

Medium page: https://medium.com/@drpatfarrell

Attribution of this material is appreciated.

News Media Interview Contact
Name: Dr. Patricia A. Farrell, Ph.D.
Title: Licensed Psychologist
Group: Dr. Patricia A. Farrell, Ph.D., LLC
Dateline: Tenafly, NJ United States
Cell Phone: 201-417-1827