Chip on a circuit board with the letters 'AI' on it

I discovered this article via Laura, who referenced it during our co-working session as we updated AILiteracy.fyi. As a fellow Garbage Day subscriber, she’d assumed I’d already seen it mentioned in that newsletter. I hadn’t.

What I like about this piece from Casey Newton is that he points out how disingenuous much of the anti-AI sentiment is. There are people doing important, nuanced work pointing out the bullshit (hi, Audrey), but there’s also some really ill-informed, clickbaity stuff that reinforces prejudice.

Of course people will use generative AI to cheat. Of course they will use it to create awful things. But what’s new there? A lot of the hand-wringing I see is from people who have evidently never used an LLM for more than five seconds. They would have been the same people warning about the “dangers” of the internet in the late 90s because “anyone can create a website and put anything online!”

The thing is, while we can’t guarantee that any individual response from a chatbot will be honest or helpful, it’s inarguable that they are much more honest and more helpful today than they were two years ago. It’s also inarguable that hundreds of millions of people are already using them, and that millions are paying to use them.

The truth is that there are no guarantees in tech. Does Google guarantee that its search engine is honest, helpful, and harmless? Does X guarantee that its posts are? Does Facebook guarantee that its network is?

Most people know these systems are flawed, and adjust their expectations and usage accordingly. The “AI is fake and sucks” crowd is hyper-fixated on the things it can’t do — count the number of r’s in strawberry, figure out that the Onion was joking when it told us to eat rocks — and weirdly uninterested in the things it can.

[…]

Ultimately, both the “fake and sucks” and “real and dangerous” crowds agree that AI could go really, really badly. To stop that from happening though, the “fake and sucks” crowd needs to accept that AI is already more capable and more embedded in our systems than they currently admit. And while it’s fine to wish that the scaling laws do break, and give us all more time to adapt to what AI will bring, all of us would do well to spend some time planning for a world where they don’t.

Source: Platformer