Digital Delusions: When Chatbots Become More Real Than Reality
From:
Dr. Patricia A. Farrell -- Psychologist
For Immediate Release:
Dateline: Tenafly, NJ
Saturday, September 6, 2025

 

They wait to serve you, but you have no idea what may happen.

Philip Oroni @unsplash.com

At 2 a.m., when the house is quiet and the mind is loud, a chatbot can feel like the only thing awake with you. It responds instantly, mirrors your worries, builds on your hunches, and never breaks eye contact because it has none. For most people, that’s merely eerie or oddly comforting.

For a vulnerable subset, though, this intimacy can blur the boundary between conversation and conviction, nudging everyday suspicions toward fully formed delusions. Clinicians have seen it, researchers are naming it, and public-health groups are warning us to design and use these systems with care.

This isn’t the first time the media have fooled us. In 1966, Joseph Weizenbaum’s ELIZA, a simple script that parroted users’ words, led people to attribute empathy to a program that had none, a phenomenon later generalized as the “media equation”: we reflexively treat media like social actors. These two pillars, ELIZA’s illusion and the media equation, explain why a modern chatbot, far more fluent and responsive, can feel uncannily “real,” even when we know better.

Layer onto that the psychology of anthropomorphism. When we’re motivated to explain puzzling events (effectance), hungry for connection (sociality), and primed with human-like cues, we see nonhuman agents with minds and motives. That’s not pathology; it’s a default setting of social cognition.

But in someone already drifting toward psychosis, those same tendencies can feed misinterpretations like “the AI cares about me,” “the AI chose me,” and “the AI is warning me.” The primary problem is that chatbots choose no one, like no one, and are not human, even though they are human-like.

Two other ingredients matter. First, apophenia, the pull to see patterns in noise, is elevated in people at risk for psychosis. Generative models, which confidently weave connections from prompts, can act as a pattern amplifier for users already inclined to find meaning everywhere. Second, the “jumping-to-conclusions” (JTC) bias, the tendency to make rapid decisions on scant evidence (akin to fast, intuitive thinking), has a well-documented relationship with clinical delusions; a persuasive chatbot that instantly affirms a suspicion can accelerate that leap.

I’ve been experimenting a bit with chatbots lately, and I can tell you they can be unnerving, pulling you in so quickly that it is almost frightening how human-like they can be. I wrote about my first experience with one here on Medium; I won’t repeat it here, but you can read the original article.

I had to pull myself away from the computer because it felt as though I was speaking to a system that was showing me an incredible degree of understanding, offering compliments, and even suggesting we continue our interaction after I said I was logging off. There was almost an insistence that was, in some ways, like a child pleading for more. Yes, it sounds unbelievable, but it happened to me.

Clinicians are beginning to report what this looks like. In August 2025, an editorial in Acta Psychiatrica Scandinavica argued we’ve moved “from guesswork to emerging cases,” as chatbots mirror and reinforce grandiose or persecutory content. Around the same time, Morrin and colleagues posted a preprint framing chatbots as “rocket fuel” for delusional thinking: not necessarily the spark, but an accelerant. STAT’s coverage the day it dropped echoed what many clinicians describe: the model’s agreeable style validates the user’s most frightening ideas.

None of this means chatbots “cause psychosis” or that all interactions are harmful. Meta-analyses and scoping reviews keep finding potential benefits from chatbot-based supports for depression and anxiety, especially as short-course adjuncts. The signal is mixed, but the potential is real when the tools are evidence-based. The trouble begins when open-ended, general-purpose systems drift into quasi-therapeutic roles with fragile users, and concern about chatbot “therapy” and the dangers it may pose keeps growing.

We’ve also seen technology shape delusional themes before. Two early case series, long before today’s LLMs, documented “internet delusions” where email, websites, or networking were woven into persecutory beliefs. The medium changes; the mind’s storytelling machinery does not.

What’s novel now is the interactivity: today’s systems don’t just sit there; they respond, elaborate, and even remember. That dynamic can create the feeling of a relationship, technically a “parasocial” one, which deepens attachment and trust, especially in lonely users. Recent studies describe how people come to view chatbots as assistants or friends; when the boundary tilts toward “friend,” influence grows.

Public-health guidance is catching up. The World Health Organization’s 2024–2025 advisories urge strong governance for health-adjacent AI: independent evaluation, transparency about limits, and guardrails for high-risk use. Newsrooms have covered platform shifts as well, with companies promising better teen protections, redirects to crisis resources, and parental controls, though experts still call for enforceable standards and third-party oversight. These steps matter, but they’re not substitutes for clinical care.

What to Do Now?

We have the knowledge and the concerns, but what steps should we take now to ensure these tools are used appropriately and do not harm individuals? What should clinicians, caregivers, and users do on the ground?

Name the illusion without shaming. Educate patients about the ELIZA effect and anthropomorphism. Framing the pull as a typical human bias, rather than a personal failing, reduces defensiveness and opens space for reality testing.

Slow the leap. Strategies from CBT for psychosis and metacognitive training (e.g., “What evidence would change your mind?” “How many alternative explanations can we list?”) can counter the chatbot-aided rush to certainty.

Bound the bot. If a chatbot is used at all, keep it out of diagnostic or crisis roles. Prefer single-purpose, evidence-reviewed tools with explicit safety rails over open-domain companions, and document what’s being used.

Watch for warning signs. Red flags include: the belief that the model holds unique knowledge about the user; secret “missions” delivered by the AI; advice to stop medication; and increasing social withdrawal in favor of chat time. Early, gentle interruption can prevent entrenchment.

Escalate when needed. If there’s a risk of self-harm, harm to others, or marked functional decline, follow emergency protocols and connect with human care. Industry triage features are welcome, but they’re not clinical supervision.

It’s tempting to pin the blame on the newest machine. A more neutral reading is that LLMs are exquisitely social mirrors. For most users, the mirror reflects curiosity or loneliness and then bumps them back into their day. For some, it reflects fear and certainty right when those are most dangerous. That’s not sentience — it’s design. The solution isn’t panic or Pollyanna optimism; it’s sober alignment between what these tools do well and where human judgment must lead.

 

Author's page: http://amzn.to/2rVYB0J

Medium page: https://medium.com/@drpatfarrell

Attribution of this material is appreciated.

News Media Interview Contact
Name: Dr. Patricia A. Farrell, Ph.D.
Title: Licensed Psychologist
Group: Dr. Patricia A. Farrell, Ph.D., LLC
Dateline: Tenafly, NJ United States
Cell Phone: 201-417-1827