This interview with Signal president Meredith Whittaker in Slate is so awesome. She brings the AI 'doomer' narrative back, time and again, to surveillance capitalism and to the massive mismatch between the harm currently being done to marginalised people and the potential harm to very powerful people.

What we’re calling machine learning or artificial intelligence is basically statistical systems that make predictions based on large amounts of data. So in the case of the companies we’re talking about, we’re talking about data that was gathered through surveillance, or some variant of the surveillance business model, that is then used to train these systems, that are then being claimed to be intelligent, or capable of making significant decisions that shape our lives and opportunities—even though this data is often very flimsy.

[...]

We are in a world where private corporations have unfathomably complex and detailed dossiers about billions and billions of people, and increasingly provide the infrastructures for our social and economic institutions. Whether that is providing so-called A.I. models that are outsourcing decision-making or providing cloud support that is ultimately placing incredibly sensitive information, again, in the hands of a handful of corporations that are centralizing these functions with very little transparency and almost no accountability. That is not an inevitable situation: We know who the actors are, we know where they live. We have some sense of what interventions could be healthy for moving toward something that is more supportive of the public good.

[...]

My concern with some of the arguments that are so-called existential, the most existential, is that they are implicitly arguing that we need to wait until the people who are most privileged now, who are not threatened currently, are in fact threatened before we consider a risk big enough to care about. Right now, low-wage workers, people who are historically marginalized, Black people, women, disabled people, people in countries that are on the cusp of climate catastrophe—many, many folks are at risk. Their existence is threatened or otherwise shaped and harmed by the deployment of these systems.... So my concern is that if we wait for an existential threat that also includes the most privileged person in the entire world, we are implicitly saying—maybe not out loud, but the structure of that argument is—that the threats to people who are minoritized and harmed now don’t matter until they matter for that most privileged person in the world. That’s another way of sitting on our hands while these harms play out. That is my core concern with the focus on the long-term, instead of the focus on the short-term.

Source: A.I. Doom Narratives Are Hiding What We Should Be Most Afraid Of | Slate