Hey, this is Doug, and this is Microcast number 104.

Now, someone I follow on the Fediverse posted a thread recently where they lamented the widespread uncritical acceptance of new technologies by middle management and academia. They gave examples around autonomous vehicles and augmented reality. And they were arguing that this uncritical acceptance isn’t due to general enthusiasm in the population, but to a premature consensus that “this is happening”, without considering the implications or whether the choice being made, in inverted commas, is a democratic one.

In this thread, they mentioned a conversation with an academic about augmented reality, and they pointed out that there’s no billboard saying you shouldn’t use this, or arguing against the technology; you only ever get the positive rather than the negative. A bit like how Nestlé advertised formula milk in South America and it led to a decline in breastfeeding, for example. They didn’t give that example. That’s my example.

So this person I follow was expressing frustration about how there’s no critical questioning of technology adoption. And really, we should be trying harder to challenge this narrative of technological inevitability. Now, this really struck a chord with me on many levels, and I was going to write a blog post about it, but I’m also not recording a podcast with Laura today, because she’s away.

So I thought, why not just turn it into a microcast? I’m not mentioning the name of the person who wrote the thread, because they have a highly curated following and followers list. By default, they post just to their followers, and they have also expressed frustration with people who have screenshotted or shared thoughts that were intended only for the people who they follow and who follow them. So I’m not going to name names.

What’s interesting to me is that, above and beyond the example I gave there about Nestlé, this happens everywhere in society. If you think about it, what’s happening behind the scenes? There are people with very deep pockets, or who know people with very deep pockets, who want to put forward a particular worldview. And you might think, well, hang on a minute, Doug, they’re not putting forward a worldview; they’re putting forward a technology that they’re trying to make money from.

But no technology is neutral. I remember Eddie Izzard, for example, talking about the National Rifle Association of America’s slogan, “guns don’t kill people, people do”. And he jokingly said, “I think the gun helps”. The logic of a gun is to shoot a bullet out of it. And that’s usually to destroy something or kill something.

The problem here is that technology has built-in philosophies or assumptions. So we need to bear that in mind when we’re thinking about the worldview that the technology being introduced to us wants to create. You could talk about AI, for example. I find AI really useful. I’m using it to generate images for blog posts on Thought Shrapnel, probably including the one that illustrates this microcast.

It’s useful for brainstorming ideas and for filling in gaps of things that I might have missed, like an assistant. But the worldview being talked about by the people peddling AI, as in, you know, OpenAI and other large companies like that, is that it’s going to replace workers.

So really what’s happening here is that it’s a technology which could be used for lots of different things. But the narrative behind it is, well, automation is more important than, for example, people having jobs. Efficiency is more important than, for example, solidarity. There’s definitely a worldview baked in. And you may agree with it or you may disagree with it. But you can’t argue against the fact that technologies have built-in philosophies and assumptions, epistemologies, ontologies, all that kind of stuff.

Now, the difficulty is that because of the diffusion of innovation life cycle curve, the technology adoption curve, whether or not you think that exact model is true, different people in society adopt technologies at different times. Sometimes that’s because of interest: sometimes people just have an interest in a technology. It might be in technology generally, or in technology as it affects their sector or their hobby or whatever it is.

But technologies are definitely adopted at different rates. And we can use the diffusion of innovation life cycle curve to try and think about some of that. The trouble is, by the time the technology reaches the mainstream, it has got baked-in assumptions based on what’s gone before. Now, that could be because of what the venture capital funders want. It could be because of what the early adopters want.

So we end up in a world where a lot of technology isn’t very accessible, works particularly well for white men of a certain age, but doesn’t work well for everyone else. You see this all of the time, even with open source projects that start off in English and then announce: “we’re going international!”. I had another one of those emails this morning, like, “hey, we’re going international!”, as if that shouldn’t have been the plan all along. Or maybe you should have launched with that. The idea that everything starts with English and a white man’s view of it, and then goes from there, is problematic.

Now, I do follow a lot of people who critically question technology: the person who was the stimulus for this microcast, people like Helen Beetham and Audrey Watters, and there are loads of people who question what it is that we’re doing with technology. The problem is, just like Cassandra in the Greek myths, they’re destined not to be listened to, because there’s a lot more money and a lot more marketing going into this thing being new and exciting.

All of a sudden, if you’re a product company, for example, and you’re not using AI, then your board’s asking why, your shareholders are asking why. And it’s all just part of the milieu and the narrative and the kind of march towards this future, which has basically been invented by a press release, which is intensely problematic.

Now, you might say, well, academia is a kind of a bulwark against that, but increasingly it’s not. And that’s for a couple of reasons, I think. Firstly, you’ve got this kind of neoliberal capture of universities where students are now customers and consumers rather than creators and makers and co-creators of knowledge. So the whole logic of what a university is about has completely changed.

But secondly, when it comes to a lot of the newer technologies, and I use AI again as an example, the money required to play in that sandpit is so huge that, as an academic, you’d have to get into bed with one of these companies in order to get your hands on the technology, or certainly on the newest versions of, I don’t know, Midjourney or the new training datasets or whatever it’s going to be. You’re not going to get that if you say nasty things about the company or raise ethical questions, etc.

You can see this in what happened with the women at Google who went against Google’s ideas around AI and safety. They questioned lots of things about trust and disinformation and so on, and they were effectively shown the door.

So what are we to do, then, in this society when we’ve got all of these problems going on? I think we can argue against it and be an outlier, and I think that’s important and necessary. And that helps move what’s called the Overton Window in terms of the kind of debate and ideas which are seen as relevant or in currency at the moment. We can definitely do that.

But that is quite a psychologically lonely and difficult road to go down. And I think what we can do is just all of us ask questions like, “is this as good as I thought it was going to be?”

There was an article I put on Thought Shrapnel recently by Ed Zitron, whose blog is called ‘Where’s Your Ed At?’ He was arguing that if you don’t just look at the future promises of what AI companies are saying, but look at what’s available right now, it’s not that exciting. It’s the same if you think about blockchain technologies, or autonomous vehicles, or Uber and ride sharing: it’s not that exciting really.

With blockchain, we’re talking about back-end technologies that allow, I don’t know, you to see the provenance of something, or people to write to a distributed ledger. And yes, there are little innovative things, but it’s not changing the world. Even cryptocurrencies are little more than speculative gambling. And I’ve made money from that, so let’s not pretend that didn’t happen. And with Uber and Lyft and all that kind of stuff, the innovation really is that you’ve got an app, you can press a button, and a taxi can pick you up from where you are. There’s not a lot going on on top of that, really. Any taxi company could have created it. We didn’t really need a huge amount of venture capital going into that.

Apart from the fact, going back to what I said earlier, that there’s a worldview encoded inside of that. If you think that, oh well, there’s no point in investing in public transport because in five years’ time we’re all going to be going around in self-driving cars, then that has a real-world, tangible impact on the lives of people in a particular city or location. So we need to be very careful about challenging these worldviews instead of uncritically accepting press releases and what people say.

If you’re the kind of person who can do what Helen Beetham and Audrey Watters and this person I follow on the Fediverse do, and can be in the room and challenge people, then do that. But if you’re not that kind of person, if that’s not your personality or style, you can question, or at least not repeat, the press releases and the speculative future ideas about this technology, which hasn’t been proven yet.

So that’s where we are. I think it is good to have some examples of things to point to. It could be articles. It could be case studies or thought experiments. And it can be just flipping the idea on its head. Just asking the question: well, who is actually advocating against this? And if no one’s advocating against it, is it an idea that’s currently in currency, in which case you’d expect people to be arguing for and against it? Or is this a manufactured need by people with very, very deep pockets?

Anyway, I’ll leave it there. Otherwise, this won’t end up being a microcast.

This is indeed microcast number 104. And I’m Doug Belshaw for Thought Shrapnel.