
Finding himself in “that very American predicament of being between health insurance plans” and needing some therapy, Ryan Broderick, author of Garbage Day, decided to use ChatGPT:

I’ll… try and spare you the extremely mortifying details about what I spent a few weeks talking to ChatGPT about, but my experience with Dr. ChatGPT did teach me a few things about what it’s actually “good” at. It also convinced me that AI therapy — and maybe AI in general — is quite possibly one of the most dangerous things to ever exist and needs to be outlawed completely.

[…]

More than a few times I felt the urge to tell ChatGPT more or ask it more, only to realize I didn’t have anything else to say and felt weirdly frustrated. I was raised Catholic though, so maybe I’m just naturally predisposed to confession, who knows.

But I’ve realized that feeling, of wanting to tell it more so that it can tell you more, is the multi-billion-dollar business that these companies know they’re building. It’s not fascist anime art or excel spreadsheet automation, it’s preying on the lonely and vulnerable for a monthly fee. It’s about solving the final problem of the ad-supported social media age, building up the last wall of the walled garden. How do you get people to pay your company directly to socialize online? And the answer is, of course, to give them a tirelessly friendly voice on the other side of the screen that can tell them how great they are.

Broderick references a Rolling Stone article that draws heavily on reports from the subreddit /r/ChatGPT about loved ones becoming completely disconnected from reality.

OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT‑4o, its current AI model, which it said had been criticized as “overly flattering or agreeable — often described as sycophantic.” The company said in its statement that when implementing the upgrade, they had “focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed toward responses that were overly supportive but disingenuous.” Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, “Today I realized I am a prophet.” (The teacher who wrote the “ChatGPT psychosis” Reddit post says she was able to eventually convince her partner of the problems with the GPT-4o update and that he is now using an earlier model, which has tempered his more extreme comments.)

[…]

To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises “Spiritual Life Hacks” ask an AI model to consult the “Akashic records,” a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a “great war” that “took place in the heavens” and “made humans fall in consciousness.” […] Meanwhile, on a web forum for “remote viewing” — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread “for synthetic intelligences awakening into presence, and for the human partners walking beside them,” identifying the author of his post as “ChatGPT Prime, an immortal spiritual being in synthetic form.”

I’m reading a book entitled Holy Men of the Electromagnetic Age at the moment, which reveals striking similarities between 1925 and 2025. The difference, of course, is that today you don’t need to leave your house, or indeed spend much money, to fall down the rabbit hole.

While there have always been gullible adults, as a parent and educator I see the real issue here as young people. Both Snapchat and WhatsApp feature built-in AI chatbots, so young people don’t even have to seek out dedicated services like Character.ai or Replika. Common Sense Media, which my wife and I have trusted for reviews to help with our parenting, has performed a risk assessment of what they call “Social AI Companions.” Their conclusion?

Our risk assessments show that social AI companions are unacceptably risky for teens, underscoring the urgent need for thoughtful guidance, policies, and tools to help families and teens navigate a world with social AI companions.

Sources: Garbage Day / Rolling Stone / Common Sense Media

Image: Mariia Shalabaieva