Pixel art showing a blonde character licking a cartoon badger against a pink background.

I don’t use Google search and couldn’t get it to do this when I experimented, but apparently appending the word ‘meaning’ to any made-up phrase leads to a curious result: the AI summary will invent an explanation as if the phrase were some kind of folk wisdom.
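
If you want to experiment yourself, here’s a minimal sketch in Python that builds the kind of queries people were pasting into Google. The search URL format is the standard one; the second phrase is invented here purely for illustration.

```python
from urllib.parse import quote_plus

# Nonsense phrases to test. The badger one is the viral example;
# the second is a hypothetical phrase made up for this sketch.
phrases = [
    "you can't lick a badger twice",
    "a quiet kettle gathers no spoons",  # hypothetical
]

# Reportedly, appending the word "meaning" is what nudges the
# AI Overview into inventing an idiomatic explanation.
for phrase in phrases:
    url = "https://www.google.com/search?q=" + quote_plus(f"{phrase} meaning")
    print(url)
```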

It’s fun, but if you think about it for more than a second, it’s also a bit dangerous. Those with lower digital literacy skills are likely to see the AI summary as authoritative. I even had to point this out to my GP when he quickly looked something up during a consultation!

I’d also point out that DuckDuckGo, a search engine I’ve been using for over a decade, is much better than Google for everyday use. I spend a lot of time online, and research is kinda part of my job, so take it from me: you do not need Google search.

Note: I don’t AI-generate many images these days, but I couldn’t resist it for this post!

Last week, the phrase “You can’t lick a badger twice” unexpectedly went viral on social media. The nonsense sentence—which was likely never uttered by a human before last week—had become the poster child for the newly discovered way Google search’s AI Overviews makes up plausible-sounding explanations for made-up idioms (though the concept seems to predate that specific viral post by at least a few days).

Google users quickly discovered that typing any concocted phrase into the search bar with the word “meaning” attached at the end would generate an AI Overview with a purported explanation of its idiomatic meaning. Even the most nonsensical attempts at new proverbs resulted in a confident explanation from Google’s AI Overview, created right there on the spot.

[…]

…Google’s AI Overview suggests that “you can’t lick a badger twice” means that “you can’t trick or deceive someone a second time after they’ve been tricked once. It’s a warning that if someone has already been deceived, they are unlikely to fall for the same trick again.” As an attempt to derive meaning from a meaningless phrase—which was, after all, the user’s request—that’s not half bad. Faced with a phrase that has no inherent meaning, the AI Overview still makes a good-faith effort to answer the user’s request and draw some plausible explanation out of troll-worthy nonsense.

Contrary to the computer science truism of “garbage in, garbage out,” Google here is taking in some garbage and spitting out… well, a workable interpretation of garbage, at the very least.

[…]

The fact that Google’s AI Overview presents these completely made-up sources with the same self-assurance as its abstract interpretations is a big part of the problem here. It’s also a persistent problem for LLMs, which regularly make up news sources and cite fake legal cases. As usual, one should be very wary of trusting anything an LLM presents as objective fact.

Source: Ars Technica

Image: DeepImg