Auto-generated description: Six champagne glasses filled with a clear liquid and garnished with red cherries are lined up on a railing.

It’s not that surprising to me that people would use LLMs in their everyday screen-mediated interactions. In general, people want to appear “normal” and impress other people. Life is ambiguous, mostly unproductively so, meaning that any help in navigating it is likely to be welcome.

Having a seemingly-sentient interlocutor available to run ideas past seems like a good idea, until you realise that it doesn’t really have much of a clue about the context of human relationships. Generic advice is generic.

For other things, though, it can be super-useful. For example, we recently had some roofers round who encouraged us to spend £thousands replacing our roof, until Perplexity pointed out that, given our recent heat pump installation, the ‘leak’ might actually be condensation.

Just as the word clanker now serves as a slur for chatbots and agentic AI, a new lexicon — including secondhand thinkers, ChatNPC, sloppers, and botlickers — is being workshopped by people online to describe the kind of ChatGPT user who seems hopelessly dependent on the mediocre platform. Online, people aren’t mincing words when it comes to expressing their disdain for and irritation with “those types of people,” as Betty describes them. Escape-room employees have crowded around several viral tweets and TikToks to share stories of ChatGPT’s invasion. “So many groups of teens try to sneakily pull out their phones and ask ChatGPT how to grasp the concept of puzzles, then get mad when I tell them to use their brains and the actual person who can help them,” reads one comment. On X, same energy: “Using ChatGPT to aid you through an escape room is bonkers bozo loser killjoy dumdum shitass insanity.”

[…]

The latest and most comprehensive study of ChatGPT usage, published by OpenAI and the National Bureau of Economic Research, found that nonwork messages make up approximately 70 percent of all consumer messages. The study’s “privacy-preserving analysis of 1.5 million conversations” also found that most users value ChatGPT “as an adviser,” like a pocket librarian and portable therapist, as opposed to an assistant. Despite, or perhaps because of, the fact that ChatGPT does not consistently deliver reliable facts, people now seem more likely to use it to come up with recipes, write personal messages, and figure out what to stream than to consult it in higher-stakes situations involving work and money.

[…]

AI technologies initially elbowed their way into all the obvious places — customer service, e-commerce, productivity software — to unsatisfying results. Earlier this year, findings from MIT Media Lab’s Project NANDA said that 95 percent of the companies that invested in generative AI to boost earnings and productivity have zero gains to show for it. Turns out that AI is very bad at most jobs, and this pivot to leisure is likely indicative of the industry’s mounting desperation: Any hope of an “Intelligence Age” that would see the cure for cancer and the end of climate change is seeming less likely, given AI’s toll on the environment. And now that it has failed to “outperform humans at most economically valuable work,” contrary to the hopes of the OpenAI Charter back in 2018, these companies are happy to settle for making us dependent on the products in our leisure time.

Source: The Cut

Image: Michal Šára