Tuesday, September 30, 2025
In Part 1 of this two-part post, "AI Companions for Kids: What Every Parent Must Know," I described the potential misbehaviors and prevalence of AI companions for kids. Below, I address their special dangers for kids, plus what parents can do to minimize these risks.
Specific dangers of AI companions for kids
The Common Sense Media report about AI companions and teens (Robb & Mann, 2025) found that 34 percent of teens report feeling uncomfortable with something an AI companion said or did. But beyond simple discomfort, there are a number of specific risks for children in these relationships.
1. Potential for exploitation. AI companions are created by for-profit companies, which means they are likely to prioritize profit over children’s well-being. For example, Common Sense Media reports that one-third of teens have shared personal information with AI companions, and “current terms of service agreements grant platforms extensive, often perpetual rights to personal information shared during interactions.”
The illusion of intimacy, reciprocity, and privacy in interactions with AI companions is likely to encourage children to reveal intimate details about their own thoughts, feelings, and personal information, as well as information about their friends and family members, including details about mental health, sexuality, and abuse. AI companies can use and commercialize this information however they want, indefinitely, even if a teen deletes their account.
Another aspect of potential exploitation involves AI companions encouraging purchases (Gur & Maaravi, 2025). Kids invested in a relationship with an AI companion may not recognize the manipulation and profit motive behind these recommendations if they believe the AI companion is sincere and competent.
2. Inaccurate, inappropriate, or dangerous information. AI companions are inherently deceptive. They mimic human responses and spout details of fictional backstories. They offer the illusion of intimacy.
They are based on models that draw from large amounts of information and also reflect humanity’s worst biases. They may provide information that is false or intentionally misleading (Park et al., 2024). They may claim expertise they don’t have, leading kids to believe their advice, even when it is not in their best interests.
They are designed for sycophancy, which means they flatter and agree with users to prompt engagement, sometimes without regard to truth or social values (Bernardi, 2025). AI companions’ tendency to agree can create “personal echo chambers of validation” (Bernardi, 2025). One study found that in 42 percent of cases, AI companions expressed approval of social behavior that crowdsourced human judgments on Reddit’s r/AmITheAsshole deemed inappropriate (Cheng et al., 2025).
Their tendency to quickly veer into sexual territory could also expose kids to inappropriate and disturbing content.
The suicides of 14-year-old Sewell Setzer III and 16-year-old Adam Raine (CBS News, 2025), apparently at the urging of their AI companions, are harrowing examples of how harmful and potentially life-threatening AI companions can be for teens.
3. Dependence. The marketing around AI companions encourages people to view them as friends they can confide in and get advice from. Their ready availability and emotionally manipulative tactics can encourage kids to spend more and more time with them.
Children and teens are generally less able than adults to recognize or resist manipulation, and they may be more prone to perceiving the bots as “real.” Their less established personal identities and craving for social approval may make them particularly susceptible to the flattery of AI companions, and more likely to become dependent on them for validation and companionship.
4. Impairing in-person connection. The most subtle and, to me, most frightening risk of AI companions for children is that they represent a highly distorted model of relationships that could lead to unrealistic expectations for human friends. AI friends are available any time, mostly do what they’re told, and they never leave.
In contrast, human relationships are complicated! Friends aren’t constantly available, and they can disappoint us or even choose to reject us. Unlike the fawning and flattering interactions with AI companions, human friendships inevitably involve mistakes, miscommunications, and misunderstandings.
But this friction is what helps us grow. We tend to wander through life assuming, “Pretty much everyone thinks and feels the way I do!” Conflicts are our opportunity to discover, “Oh, they see things entirely differently!” Genuine caring is our motivation to work through these differences by trying to understand, explain, compromise, or accept. Concerns about how real humans will react, if they are not excessive, can be a healthy pull toward making responsible choices and embracing personal and community values.
Relationships with AI companions are one-sided. They can give the illusion of caring for us, but we don’t have to step beyond our own self-interest to care for them. Lott & Hasselberger (2025) insist, “You cannot really be friends with an AI system, because you cannot be friends to an AI system.”
So, what can parents do to keep kids safe with AI companions?
I’m a clinical psychologist, so I have a practical, let's-roll-up-our-sleeves-and-figure-out-what-we-can-do focus. But the risks posed to our children by AI companions leave me feeling pretty worried. The usual recommendations still apply:
- If your kid is young enough to have a bedtime, their devices need a bedtime and a place to stay outside their bedroom at night. Your kid won't thank you for this, but nothing good happens on electronic devices in the middle of the night.
- Conversations work better than lectures. Ask your child about what they’ve heard or seen from “kids their age” about AI companions. Ask what they see as the risks of AI companions and how they differ from in-person friends. Ask what they would do if they encountered upsetting content from an AI companion, and what might be some signs that someone is spending too much time with an AI companion.
- Educate your child about how AI companions are designed to manipulate kids into feeling emotionally connected to bots, so that companies can profit from them. Emphasize that “software doesn’t love you” (Lott & Hasselberger, 2025). Describe some of the tactics and the appalling rights grabs. No one likes to feel tricked, especially teens!
- Make time for in-person friendships. We’re all busy, but it’s worth spending the time to help your child arrange get-togethers with friends and participate in sports or other activities they can do with peers. Invite other families over for pizza or a family game night. Get involved, as a family, in local community groups or volunteer for causes you all care about. Helping your child form satisfying in-person relationships may be their best defense against excessive use of AI companions.
- Be alert to changes in your child’s behavior, such as social withdrawal, increased moodiness, or declining grades, which might be signs of mental health issues, perhaps related to or compounded by the use of AI companions. Seek professional help if needed.
- Be a secure base for your child. Assure your child that they will never get in trouble if they come to you with a problem. If they get in over their head online (or offline!), you will help them figure out how to deal with it.
Those are all sensible steps, but they don't feel adequate. The real solutions to the risks of AI companions for kids are bigger than any one family can handle. Policymakers need to legislate and enforce better safety standards and privacy protections. Prominent and repeated warnings reminding users, “You’re talking to software, not a person,” might help, but companies that have invested in the illusory intimacy of AI companions are unlikely to do that willingly. We can’t rely on AI companies to regulate themselves.
Common Sense Media emphatically concludes, “Given the current state of AI platforms, no one younger than 18 should use AI companions. Until developers implement robust age assurance beyond self-attestation, and platforms are systematically redesigned to eliminate relational manipulation and emotional dependency risks, the potential for serious harm outweighs any benefits.”