<img src="https://cdn.uploads.micro.blog/139275/2024/dalle-2024-03-20-18.21.59illustrate-a-scene-where-a-large-exaggerated-bubbl.webp" width="600" height="342" alt="A figure resembling Ed Zitron pops a large, shimmering 'AI Hype' bubble against a tech city skyline, with digital particles and a palette of light gray, dark gray, bright red, yellow, and blue.">

This post has been going around my networks recently, so I’ve finally got around to giving it a read. The first thing worth pointing out is that the author, Ed Zitron, is CEO of a tech PR firm, so it’s no surprise that the piece is written to pop the AI hype bubble.

I’m not unsympathetic to Zitron’s position, but when he talks about not knowing anyone using ChatGPT, I don’t think he’s telling the truth. I’m using GPT-4 every day at this point, and now supplementing it with Perplexity.ai and Claude 3. A combination of the three can be really useful for everything from speeding up idea generation to converting a bullet point list to a mindmap.
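
To make the bullet-list-to-mindmap workflow concrete, here is a minimal sketch using the OpenAI Python SDK. The model name, the Mermaid output format, and the example bullets are my own assumptions for illustration, not anything prescribed by the tools mentioned above.

```python
# Minimal sketch: turn a bullet-point list into a Mermaid mindmap via GPT-4.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the bullets below are placeholders.
from openai import OpenAI

client = OpenAI()

bullets = """\
- Generative AI hype
- Real-world use cases
- Hallucinations
- Compute costs
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "Convert this bullet-point list into a Mermaid mindmap "
                "diagram. Return only the Mermaid code.\n\n" + bullets
            ),
        }
    ],
)

# The returned Mermaid source can be pasted into any renderer that
# supports mindmap diagrams.
print(response.choices[0].message.content)
```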

One thing I’ve found AI assistants incredibly powerful for is spotting things I might have missed and offering a different perspective. You can also feed them a list of things and have them generate recommendations based on it, for anything from music playlists through to business competitors.
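
For the recommendations-from-a-list case, here is a similarly minimal sketch, this time against the Anthropic API since Claude 3 is mentioned above. The model identifier, the seed playlist, and the prompt wording are my assumptions, not part of the original post.

```python
# Minimal sketch: ask Claude 3 for recommendations based on a seed list.
# Assumes the official Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment; the seed playlist is a placeholder.
import anthropic

client = anthropic.Anthropic()

seed_list = ["Boards of Canada", "Aphex Twin", "Four Tet"]

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=400,
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a list of artists I like: "
                + ", ".join(seed_list)
                + ". Recommend five similar artists and briefly explain "
                  "why each one fits the list."
            ),
        }
    ],
)

# Claude returns a list of content blocks; the first holds the text reply.
print(message.content[0].text)
```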

> Every time Sam Altman speaks he almost immediately veers into the world of fan fiction, talking about both the general things that “AI” could do and non-specifically where ChatGPT might or might not fit into that without ever describing a real-world use case. And he’s done so in exactly the same way for years, failing to describe any industrial or societal need for artificial intelligence beyond a vague promise of automation and “models” that will be able to do stuff that humans can, even though OpenAI’s models continually prove themselves unable to match even the dumbest human beings alive.
>
> Altman wants to talk about the big, sexy stories of Average General Intelligences that can take human jobs because the reality of OpenAI — and generative AI by extension — is far more boring, limited and expensive than he’d like you to know.
>
> […]
>
> I believe a large part of the artificial intelligence boom is hot air, pumped through a combination of executive bullshitting and a compliant media that will gladly write stories imagining what AI can do rather than focus on what it’s actually doing. Notorious boss-advocate Chip Cutter of the Wall Street Journal wrote a piece last week about how AI is being integrated in the office, spending most of the article discussing how companies “might” use tech before digressing that every company he spoke to was using these tools experimentally and that they kept making mistakes.
>
> […]
>
> Generative AI’s core problems — its hallucinations, its massive energy and unprofitable compute demands — are not close to being solved. Having now read and listened to a great deal of Murati and Altman’s interviews, I can find few cases where they’re even asked about these problems, let alone ones where they provide a cogent answer.
>
> And I believe it’s because there isn’t one.

Source: Where’s Your Ed At?