A remarkably sober look at the need for regulation, at transparency around how models are trained, and at the costs involved in AI. It makes a really good point about the UX required for machine learning to be useful at scale.

“I have learned from experience that leaving tools completely open-ended tends to confuse users more than assist,” says Kirk. “Think of it like a hall of doors that is infinite. Most humans would stand there perplexed with what to do. We have a lot of work to do to determine the optimal doors to present to users.” Mason has a similar observation, adding that “in the same way that ChatGPT was mainly a UX improvement over GPT-3, I think that we’re just at the beginning of inventing the UI metaphors we’ll need to effectively use AI models in products.”

[…]

Augmenting work with AI could be worthwhile despite these problems. This was certainly true of the computing revolution: Many people needed training to use Word and Excel, but few would propose typewriters or graph paper as a better alternative. Still, it’s clear that a future in which “we automate away all the jobs, including the fulfilling ones,” as the Future of Life Institute’s letter frets, is more than six months away. The AI revolution is unfolding right now—and will still be unfolding a decade from today.

Source: AI Can’t Take Over Everyone’s Jobs Soon (If Ever) | IEEE Spectrum