I mentioned the podcast Your Undivided Attention in a recent post. Last summer, I listened to an episode featuring Nita Farahany which I thought was excellent. I told everyone about it.
In this interview, Farahany appears alongside Aza Raskin, one of the hosts of Your Undivided Attention. I’ve focused on Raskin’s answers, but you should read the whole thing, in addition to listening to the podcast episode. Excellent stuff.
Aza Raskin: I think we can frame social media as “first contact with AI.” Where is AI in social media? Well, it’s a curation AI. It’s choosing which posts, which videos, which audio hits the retinas and eardrums of humanity. And notice, this very unsophisticated kind of AI was misaligned with what was best for humanity. Just maximizing for engagement was enough to create this whole slew of terrible outcomes, a world none of us really wants to live in. We see the dysfunction of the U.S. government—at the same time that we have runaway technology we have a walk-away governance system. We have polarization and mental health crises. We don’t know really what’s true or not. We’re all in our own little subgroups. We’ve had the death of a consensus reality, and that was with curation AI—first generation, first contact AI.
We’re now moving into what we call “second contact with AI.” This is creation AI, generative AI. And then the question to ask yourself is, have we fixed the misalignment with the first one? No! So we should expect to see all of those problems just magnified by the power of the new technology 10 times, 100 times, 1,000 times more.
I think this is the year that I’ve really felt that confusion between “Is it to utopia or dystopia that we go?” And the lesson we can learn from social media is that we can predict the future if we understand the incentives. As Charlie Munger, Warren Buffett’s business partner, said, “If you show me the incentives, I’ll show you the outcome.” The way we say it is: “If you name the market race people are in, we can name the result.” The race is the result. And Congress is still sort of blind to that.

And so we’re stuck in this question of do we get the promise? Do we get the peril? How can we just get the promise without the peril, without an acknowledgment of, well, what’s the incentive? And the incentive is: grow as fast as possible to increase your capabilities, to increase your power so you can make more money and get more compute and hire the best people. Wash, rinse, repeat without an understanding of what the externalities are. And humanity, no doubt, has created incredible technology. But we have yet to figure out a process by which we invent technology that doesn’t then have a worse externality, which we have to invent something new for. And we’re reaching the place where the externality we create will break the fragile civilization we live in if we don’t get there beforehand.