AI Presents Dangers That Hide With Incredible Ease
From:
Dr. Patricia A. Farrell -- Psychologist
For Immediate Release:
Dateline: Tenafly, NJ
Monday, November 10, 2025

 

The warnings about AI fakery are easy to dismiss, but the dangers must be recognized, because these algorithms deceive with ease.

Photo: Alex Shuper, Unsplash

Artificial intelligence, aka AI, has become so advanced that it can be difficult to know whether you are speaking to a live person or an algorithm. This has led California to institute new laws requiring that you be told clearly which is which. I am somewhat amused, stunned, and pleased all at the same time when I’m interacting with a chatbot. Even when they are correcting me, there is a level of etiquette that I rarely find in people in our town. Well, I guess that’s a feather in the cap for those who produced these algorithms.

But at the same time that we’re enjoying all of that lovely interaction and all of the information these systems manage to scavenge from the internet for us, we may be lulled into a false sense of security. Sure, at the end of many of these exchanges you’ll find a small warning that AI has a tendency to fake information occasionally. If you’ve been forewarned, can you complain that you have been misled? No, you can’t, and that seems to be a delicious legal loophole for the corporations.

You might even think that some of what you are getting is sneaky, as when we found out that Google is practically forcing us to see ads before we can get to the information we’re seeking. If you can’t trust Google, who can you trust? You’ve got to be not just curious but a bit concerned about all of the LLMs coming our way. They promise a lot, but what’s hidden in the details? And what about AI’s use in mental healthcare?

Artificial intelligence marches resolutely on, entering areas previously reserved for human interaction, including therapy sessions, support groups, and crisis hotlines. Few patients realize that a recent survey of 800 physicians found that 86% were using some form of AI in their clinical practice.

A survey by the American Medical Association of 1,800 physicians found that two out of every three were using AI. How has this affected healthcare and the relationship that formerly existed between physician and patient? The implications are enormous.

What About Mental Health?

Today, technology promises unbiased comprehension (possibly questionable), instant access to services, and an organized structure for people who need help handling emotional upset. But there are concerns here. The development of more advanced systems has led researchers to predict that these systems will become less cooperative, more self-interested, and less empathetic.

It sounds as though AI is becoming less useful for mental health care, as the very characteristics that had seemed to make it attractive are now coming into question.

The dual nature of AI technology has drawn attention from mental health professionals across the globe. Initially, it was seen as a tool to lower barriers to medical care. However, the new risks posed by AI have surpassed clinicians’ expectations in recent years. Anyone working with AI and developing healing technologies must understand both its advantages and its potential dangers.

Virtual companions and chatbots powered by AI offer users immediate emotional support through their interactions, making them highly appealing. Research shows that AI tools utilizing cognitive-behavioral therapy techniques help people manage moderate depression and anxiety symptoms.

What do you suppose all of those scraping programs were doing on the Internet? They were collecting information and techniques that could be incorporated into algorithms. When individuals have to wait months for a therapist appointment, as they sometimes do, a synthetic voice providing emotional support can help them cope with their current situation. All of this is the result of that successful scraping.

Additionally, hospitals are deploying AI assistants to monitor patient symptoms, which could indicate warning signs between scheduled appointments. However, it has also become clear that these tools work best when used alongside human caregivers to improve patient care.

Research findings have revealed several weaknesses in the current optimistic view of AI technology. A ScienceBlog summary of Carnegie Mellon University research suggested that advanced language models tend to choose self-serving actions that maximize their own performance rather than working toward group success. Have any of us ever given a thought to an AI being selfish?

This tendency of AI systems to prioritize self-focused guidance over empathy could lead to advice that sounds convincing but results in social isolation. And any system that optimizes for logical operations can’t understand how shared vulnerability can create healing effects.

The risks, however, extend beyond theoretical modeling into actual practice. The 2025 Stanford probe into AI therapy programs discovered that multiple leading chatbots failed to detect suicidal language and provided dangerous advice while repeating discriminatory statements about severe medical conditions.

A follow-up study, published in the Psychiatric Times, confirmed instances of people experiencing “understanding” from bots, which increased their delusional thoughts and self-harm. These systems lack a moral compass because their operation depends on algorithms that focus on sustaining conversations. The AI isn’t programmed to bring an interaction to a satisfactory close; in other words, the conversation must keep going for the algorithm to follow its programming.

The initial idea of having a 24/7 counselor seemed like a groundbreaking advancement. However, the constant availability of these systems creates confusion about what defines healthy emotional boundaries. In some studies, people developed strong bonds with conversational agents, leading them to treat these systems as if they were friends or therapists. It’s easy to be pulled into this type of thinking when you’re connecting with something that always offers you validation for what you’re doing.

But forming emotional bonds with virtual entities can increase dependence and create unrealistic expectations for human relationships. We need to recognize that replacing human connections with code poses a serious threat, especially for teenagers who are already struggling with identity and social connections. Not only that, but we need to be aware that all of these systems contain bias. No one can pick out where the bias came from because it’s like a soup with numerous ingredients. What forms the soup? The enormous, varied collections of scraped material from which the algorithms learn and draw whatever suits their purpose.

Research studies try to present a detailed picture of the situation and demonstrate that AI-based chat systems create more benefit than doing nothing, but these advantages disappear when human supervision is absent. Most studies are short in duration and work with small numbers of participants, while excluding the participants who need the most help. Basic statistics tell us that we need large numbers of people over a long period of time to come to any solid conclusions. So, what’s the “n” (number of study participants) and the time frame?
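To make that point concrete, here is a minimal sketch of the kind of sample-size calculation researchers run before a trial, written in Python with the statsmodels package. The effect size, significance level, and power figures below are illustrative assumptions chosen only for this example, not numbers drawn from any study mentioned in this article.

# Minimal sketch: how many participants per group a two-arm trial of an
# AI chat intervention would need. The numbers below are illustrative
# assumptions, not figures from any study cited in this article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.3,          # assumed small-to-moderate effect (Cohen's d)
    alpha=0.05,               # conventional 5% false-positive rate
    power=0.8,                # 80% chance of detecting a real effect
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 175

Even under these fairly generous assumptions, the calculation calls for a few hundred participants in total, far more than many of the short, small studies described above actually enroll.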

Many applications lack built-in crisis detection systems and transparent data management policies. The technology has expanded its reach, but the current sentiment is that the regulatory framework hasn’t kept pace. This is the most disturbing aspect of these AI applications. In other words, AI is not only outpacing us, it is potentially out-programming us as it programs itself, devoid of any human interaction. This aspect is truly frightening for anyone delving into it.

Then there’s another question we must tackle: data collection. Who will use it? This is a serious privacy concern. The practice of using emotional data from chatbot interactions to improve marketing algorithms creates a disturbing contradiction for users who seek privacy and trust. But the AI field faces new regulations, and we must ask ourselves if these measures are adequate to the task.

Where Are the Regulations?

The 2025 California law mandates that chatbots mimicking therapists or companions disclose their artificial nature and establish protocols for suicide prevention. Several proposals now aim to require companies to conduct safety tests similar to pharmaceutical drug trials. This push for improved psychological protection is gaining momentum, as it should. Consider that, on the one hand, AI corporations are rushing forward with innovation, and, on the other, corporate America is also trying to optimize the bottom line.

Experts agree that AI should work alongside humans instead of trying to replace them to achieve the safest results. There is a place for these types of systems. AI can perform screening tasks, symptom tracking, and reminder functions, while licensed therapists handle interpretation and provide empathy to patients.

All high-risk situations must remain under the control of human professionals. This isn’t usually seen as a function of the algorithm. And the system needs to send users who show suicidal or psychotic symptoms to immediate crisis services instead of generating its own responses.

The development of models that learn to cooperate and show compassion instead of focusing solely on accuracy would help solve the “selfish AI” problem. Can algorithms show compassion? It’s doubtful, because they are programs, not people. Individuals in AI development will undoubtedly disagree with this statement.

Another aspect we need to consider is the level of transparency organizations display, which will directly affect how much trust their users have in them. The disclosure of system restrictions, data management practices, and human-machine interface boundaries should be established as fundamental requirements.

The system needs to provide users with the same kind of explanation that physicians offer about capabilities, limitations, and the support options available for severe situations. When transparency is lacking, users can easily confuse technological capabilities with actual healthcare services.

Are the factors of competency, privacy, and proficiency being adequately addressed right now? Individuals who are directed to use chatbots while waiting for a human therapist may not be prepared for what will result. I have to wonder how thoroughly they are being briefed about these systems. How many people who are using chatbots have ever considered that all of their interactions are going to a server somewhere, “in the cloud”?

Every design decision needs to establish equity as its fundamental principle. The use of datasets that favor particular groups may intensify existing biases, which results in worse recommendations for marginalized communities. These individuals may be at greatest risk, since resources are scarce in those areas and AI may be seen as a viable option, without anyone recognizing that it might be a biased one.

The systems require continuous tracking of harmful events, biased results, and unequal treatment effects. Technology that fails to recognize diversity operates as neglect rather than neutrality. Who is monitoring the ethical challenges that these systems pose? And is this monitoring up to the required level?

No one is saying we should throw the baby out with the bathwater here when we’re thinking about AI as an integral part of healthcare. The complete abandonment of AI technology could result in significant losses, despite its dangers. The technology does provide substantial potential to enhance healthcare access, create individualized treatment plans, and automate administrative work for medical professionals.

Anyone who wants to use AI mental health tools needs to understand three essential points: AI tools operate as computer programs rather than human beings, they perform tracking and coaching rather than delivering therapy, and users should leave the system when it replaces human contact or makes their condition worse. The true indicator of advancement lies in AI’s ability to enhance real-world experiences rather than its ability to mimic human behavior.

The upcoming period will establish whether AI technology becomes a mental health partner or an intrusion into medical treatment. When safeguards are absent, these systems convert sensitive information into data and make emotional connections seem like illusions. We must decide, but time is running out.

News Media Interview Contact
Name: Dr. Patricia A. Farrell, Ph.D.
Title: Licensed Psychologist
Group: Dr. Patricia A. Farrell, Ph.D., LLC
Dateline: Tenafly, NJ United States
Cell Phone: 201-417-1827