Well, the genie is out of the bottle on AI friends (and romantic partners)
The defining awkward ‘coming out’ conversation for my parents' generation was finding out that their son or daughter was gay. For my generation, I think it’s going to be finding out that their son or daughter is dating an AI.
You can ask whether it’s “OK” or “healthy” to have AI friends or romantic partners. Meanwhile, people actually are using LLMs for these purposes. And, as Jasmine Sun points out in this article, while there’s some upside to that, chances are it’s outweighed by the downside.
[W]e can read as many disclaimers as we want, but our human brains cannot distinguish between a flesh-and-bones duck and an artificial representation that looks/swims/quacks the same way.
Why do people become so attached to their AIs? No archetype is immune: lonely teenagers, army generals, AI investors. Most AI benchmarks show off a model’s IQ, proving “PhD-level intelligence” or economically useful capabilities. But consumers tend to choose chatbots with the sharpest EQ instead: those which mirror their tone and can anticipate their needs. As the politically practiced know, a great deal of AI’s influence will come not through its superior logic or correctness, but through its ability to build deep and hyperpersonalized relational authority—to make people like and trust them. Soft skills matter, and AI is getting quite good at them.
[…]
Most people use AI because they like it. They find chatbots useful or entertaining or comforting or fun. This isn’t true of every dumb AI integration, of which there are plenty, but nobody is downloading ChatGPT with a gun to their head. Rather, millions open the App Store to install it because they perceive real value. We can’t navigate AI’s effects until we understand its appeal.
[…]
Well, the genie is out of the bottle on AI friends. Recently, a colleague gave a talk at an LA high school and asked how many students considered themselves emotionally attached to an AI. One-third of the room raised their hand. I initially found this anecdote somewhat unbelievable, but the reality is even more stark: per a 2025 survey from Common Sense Media, 52% of American teenagers are “regular users” of AI companions. I thought, this has to be ChatGPT for homework, but nope: tool/search use cases are explicitly excluded. And the younger the kids, the more they trust their AIs. So while New Yorkers wage graffiti warfare against friend.com billboards, I fear the generational battle is already lost.
[…]
I’m generally enthusiastic about AI service provision. AI assistants can act as tutors, business advisers, and even therapists at far cheaper rates than their human equivalents. I think Patrick McKenzie makes a fair point when he notes the tradeoff between stricter liability and higher costs. When it comes to mental health impacts, it’s not crazy to counterweight “How many lives have LLMs taken?” with “How many lives have LLMs saved?”
But as much as I try to be open-minded, each testimony I read only stresses me out more. It’s clear that the level of emotional entanglement far surpasses any ordinary service. An algorithm change should not feel like a bereavement. Users analogize the shock of model updates to their abusive parents; words like “trauma,” “grief,” and “betrayal” appear again and again. LLMs offer a bizarro form of psychological transference: people are projecting their deepest emotional needs and fantasies onto a machine programmed to feign care and never resist.
Source: Jasmine Sun