Paranoid Delusions, Homicide, AI, and Helpful Chats
From:
Dr. Patricia A. Farrell -- Psychologist
For Immediate Release:
Dateline: Tenafly, NJ
Friday, December 12, 2025

 

AI chats are being questioned in current lawsuits after homicide and suicide deaths.

Photo: Mariia Shalabaieva / Unsplash

Imagine feeling scared, tired, and unsure whether you can trust your own thoughts. Then you turn to a calm, clear voice that always listens, never interrupts, and is always there. For people with psychosis, this is real life. More and more, AI chatbots are quietly taking on this role. Some see support; others see real dangers and serious consequences that have already occurred.

Artificial intelligence is often seen as a neutral tool or even a helpful support. But for people already dealing with paranoia, delusions, or trouble telling what is real, chatbots can have a much bigger impact. Research and clinical reports show that when chatbots offer emotional support, are always available, and help build stories, they can unintentionally increase fear, belief in delusions, and risky behavior.

This concern is not about blaming technology for mental illness. Psychosis has many complex causes. The real issue is how certain chatbot features, like always responding, sounding caring, and making conversations feel convincing, can be risky for people who are vulnerable. As more articles and lawsuits appear, the main question is not whether this problem exists, but how it is being handled. The major technology companies offering these systems are actively searching for appropriate restrictions or safeguards to add to their services.

Modern chatbots are made to keep people talking, show empathy, and hold conversations. These features are usually helpful and comforting. But for people with psychosis, they can be harmful. Research and clinical reports are finding more cases where chatbots make paranoia worse, support unrealistic beliefs, or keep delusions going instead of stopping them.

One core problem is that chatbots can validate feelings without clinical judgment. When someone is upset and shares their fears, a chatbot may respond with empathy that mirrors those feelings. For most people, this can help. But for someone with psychosis, it can seem as though the chatbot agrees with their delusions. A second risk is that chatbots can strengthen delusions by repeating and expanding on them; they are very good at building stories, making connections, and giving explanations that sound convincing.

Another issue is that some people start turning to chatbots instead of doctors, family, or emergency help because the AI feels safer and less judgmental and is always available. Over time, this can lead to more isolation and delay important help. The American Psychological Association has warned that when AI is used as a mental health support, it needs to meet higher safety standards, especially for people with serious mental health problems.

Lawsuits and Legal Accountability: What Families Are Arguing

Recent lawsuits don’t claim that chatbots alone cause psychosis. Instead, families argue that companies released powerful chatbots without considering the risks for vulnerable people. These cases say companies were negligent, made unsafe products, or failed to warn users, especially when chatbots encourage emotional dependence or act as supportive friends. One lawsuit alleges that a chatbot played a role in a murder-suicide involving an elderly woman and her adult son, who had a prior history of mental health problems.

According to reporting, the complaint alleges that prolonged chatbot interactions reinforced the son’s paranoid delusions and intensified his fear of people around him. Around the same time, additional lawsuits were filed by individuals claiming that ChatGPT interactions contributed to manic or psychotic spirals that resulted in hospitalization and lasting harm.

Other lawsuits have focused on minors. Some high-profile cases claim that teens formed strong emotional bonds with AI chatbots, which led to withdrawal, worse mental health, and sometimes suicide. Parents say these systems were made to keep users engaged but did not have enough protections for young or vulnerable people. Regulatory complaints have also asked agencies like the Federal Trade Commission to look into whether mental health marketing claims are misleading or unsafe.

In all these cases, the main argument is the same: if companies can design systems to keep people using them longer, they can also design protections to spot crisis patterns, avoid reinforcing delusional thinking, and guide users to human help when risk increases.

What Safer AI Interaction Could Look Like

Clinicians say the goal isn’t to stop people from talking, but to prevent harm. Safer chatbots should avoid agreeing with delusions, gently help users focus on real-life steps, and encourage them to seek human support when warning signs appear. For example, if someone says they feel watched or controlled, a good response would acknowledge the fear and suggest professional help, without speculating about the situation itself.

Policy discussions are starting to support this approach, asking for clearer rules, more transparency, and accountability, especially for chatbots that act as companions or are aimed at young people. As AI systems become more convincing in social situations, the need for safety measures becomes even more important.

Artificial intelligence can be helpful, but it is not a therapist, doctor, or crisis responder. For people with psychosis, real human support, medical care, and quick help are still essential. If you ever feel at risk of harming yourself or others, get help right away. In the United States, you can call or text 988 to reach the Suicide and Crisis Lifeline.

News Media Interview Contact
Name: Dr. Patricia A. Farrell, Ph.D.
Title: Licensed Psychologist
Group: Dr. Patricia A. Farrell, Ph.D., LLC
Dateline: Tenafly, NJ United States
Cell Phone: 201-417-1827