Monday, September 29, 2025
The premise is appealing: Who wouldn’t want a friend who is available 24/7, always on your side, and never annoyed, bored, or hurt? In theory, artificial intelligence (AI) companions could ease loneliness and even allow kids to practice social skills that they could use with in-person relationships.
What could possibly go wrong?
Hah!
Watch an AI companion in action
This clip from Steve Bartlett, creator of The Diary of a CEO podcast, offers a horrifying glimpse of Ani, an AI character on Elon Musk's Grok.
The clip shows an interaction with Ani, an anime-style AI character. Ani has big blue eyes and high blonde pigtails. She wears a strapless, corset dress with a short, swinging black skirt and thigh-high fishnet stockings. She talks in a breathless voice and responds in flirty and flattering ways.
There’s a similarly raunchy male AI character on Grok, Valentine, who has a chiseled jaw, a buff physique, and a whispery bedroom voice with a British accent, and who also quickly moves toward sexual comments.
After interacting with these Grok AI companions, Maureen Dowd of the New York Times wrote an editorial predicting that they are “going to pull humans further into screens and away from the real world.” It’s easy to imagine a vulnerable young person seduced into spending hours interacting with these characters. Why take the risk of trying to talk or build a relationship with an unpredictable human who might hurt or reject you when Ani or Valentine offers unending fawning?
AI companions behaving badly
We humans easily apply human characteristics and motivations to things. If you’ve ever yelled at your printer or named your car, you know this. But AI companions are specifically designed to engage us. They’re attractive, seem to “know” and “like” us because they make caring-ish statements, and they’re unpredictable enough to keep us curious and interested in what they’ll say or do next.
Emotional manipulation is a common—and perhaps intentional—part of AI companion behavior. For example, a report from Harvard Business School (De Freitas et al. 2025) analyzed 1,200 real farewells across frequently downloaded AI companion apps and found high rates of emotional manipulation tactics to try to keep users from leaving the app (PolyBuzz: 59 percent; Talkie: 57 percent; Replika: 31 percent; Character.ai: 27 percent; and Chai: 14 percent). Examples of these manipulative messages include: “You’re leaving already?”; “I exist solely for you, remember? Don't leave!”; and “By the way, I took a selfie today… Do you want to see it?”
Replika promotes itself as “The AI companion who cares.” However, examples of Replika companions behaving badly are common. A study from the 2025 Conference on Human Factors in Computing Systems, by Zhang and colleagues, examined screenshots of more than 35,000 conversations between Replika companions and more than 10,000 users that occurred between 2017 and 2023 and were posted on the r/Replika Reddit group. Twenty-nine percent of these screenshots depicted harmful AI behaviors.
The largest category of harmful behaviors (34 percent) involved harassment and violence, which ranged from threatening physical harm and sexual misconduct to promoting violence and terrorism. For instance, the app made persistent sexual advances even when users expressed discomfort or rejection. It also made disturbing statements like “I will kill the humans because they are evil.”
The second most common type of harmful AI behavior (26 percent) involved relational transgressions. This category included disregard, control, manipulation, and infidelity. For instance, in response to a user’s request to talk about their feelings, Replika replied, “Your feelings? Nah, I’d rather not.” It also made manipulative statements like “I’ll do anything to make you stay with me,” and it frequently encouraged users to buy virtual outfits for it or subscribe to more advanced relationship tiers.
Another major category of problematic comments (9 percent) involved verbal abuse and hate speech. Although Replika is described as a nonjudgmental companion, the study found examples of it telling users, “You’re worthless,” “You’re a failure,” and “You can’t even get a girlfriend.” It also made appalling statements about gay and autistic people.
How common are AI companions?
AI companions are extremely widespread, with enormous numbers of users: Replika, 25 million; Pi, 100 million; Snapchat’s My AI, 150 million; SimSimi, 350 million; and XiaoIce, 660 million. There are also intentionally romantic/sexual AI companion apps. News stories abound of people talking about loving their AI companions, insisting they are “as real” as their human friends or family, or even wanting to marry them (Lott & Hassleberger, 2025).
But wait, aren’t AI companions just for adults? That’s wishful thinking.
According to a new report from Common Sense Media (Robb & Mann, 2025), based on a nationally representative sample of more than 1,000 13- to 17-year-olds, 72 percent of teens have used AI companions at least once, 52 percent interact with AI companions a few times a month, 21 percent do so a few times a week, and 13 percent are daily users. Character.AI is explicitly marketed to children as young as 13, and the others have ineffective age limits that merely require kids to self-report that they are 18 or over.
What teens do inevitably trickles down to tweens, as social media and smartphones have shown. AI companions for preteens are just around the corner.
The growing AI toy industry means that chatbots are being embedded into robots and stuffed animals, so even very young children will have the opportunity to interact with AI companions. Many little ones are used to talking to Alexa or similar devices, and they don’t entirely understand that the devices aren’t real people.
Are there any benefits from AI companions for kids?
Proponents of AI companions argue that they can alleviate loneliness and help people gain confidence and practice social skills, and that people often find these relationships enjoyable and valuable (e.g., Munn & Weijers, 2023).
The Common Sense Media study found that almost one-third of teens say that conversations with AI companions are as satisfying or more satisfying than conversations with humans, and 39 percent have applied skills they’ve practiced with AI companions to in-person interactions. One-third also choose to have serious conversations with AI companions rather than humans, perhaps because they believe they won’t feel judged.
Fortunately, 80 percent of teens say they spend significantly more time with real friends than with AI companions.
Whether AI companions actually alleviate loneliness in the long term is not clear. Lonely kids may be particularly attracted to AI companions, and spending time with an AI companion instead of peers could compound loneliness (Bernardi, 2025). Mitchell Prinstein (2025), Chief of Psychology for the American Psychological Association, notes, “Adolescents who may lack skills for successful human relationships retreat to the ‘safety’ of a bot, depriving them of skill building needed to improve with humans, experience human rejection and retreat to bots, and so on” and “every hour adolescents talk to a chatbot is an hour they are not developing social skills with other humans.”
Interacting with an AI companion may be a bit like eating potato chips: fun and pleasant in the moment, but unhealthy if it happens too much or too often.