Tag: Eli Pariser

We all think we are exceptional, and are surprised to find ourselves criticised just like anyone else

🏞️ To Mend a Broken Internet, Create Online Parks

👶 Babies’ random choices become their preferences

😨 Dystopia as Clickbait: Science Fiction, Doomscrolling, and Reviving the Idea of the Future

👃 Why Are the Noses Broken on Egyptian Statues?

🧙‍♀️ New online database tracks historic ‘witch marks’ carved into England’s trees

Quotation-as-title by Comtesse Diane. Image from top-linked post.

Software ate the world, so all the world’s problems get expressed in software

Benedict Evans recently posted his annual ‘macro trends’ slide deck. It’s incredibly insightful, and a work of (minimalist) art. This article’s title comes from his conclusion, and you can see below which of the 128 slides jumped out at me from the deck:

For me, what the deck as a whole does is place some of the issues I’ve been thinking about in a wider context.

My team is building a federated social network for educators, so I’m particularly tuned-in to conversations about the effect social media is having on society. A post by Harold Jarche where he writes about his experience of Twitter as a rage machine caught my attention, especially the part where he talks about how people are happy to comment based on the ‘preview’ presented to them in embedded tweets:

Research on the self-perception of knowledge shows how viewing previews without going to the original article gives an inflated sense of understanding on the subject, “audiences who only read article previews are overly confident in their knowledge, especially individuals who are motivated to experience strong emotions and, thus, tend to form strong opinions.” Social media have created a worldwide Dunning-Kruger effect. Our collective self-perception of knowledge acquired through social media is greater than it actually is.

Harold Jarche

I think our experiment with general-purpose social networks is slowly coming to an end, or at least will do over the next decade. What I mean is that, while we’ll still have places where you can broadcast anything to anyone, the digital environments in which we’ll spend more of our time will be what Venkatesh Rao calls the ‘cozyweb’:

Unlike the main public internet, which runs on the (human) protocol of “users” clicking on links on public pages/apps maintained by “publishers”, the cozyweb works on the (human) protocol of everybody cutting-and-pasting bits of text, images, URLs, and screenshots across live streams. Much of this content is poorly addressable, poorly searchable, and very vulnerable to bitrot. It lives in a high-gatekeeping slum-like space comprising slacks, messaging apps, private groups, storage services like dropbox, and of course, email.

Venkatesh Rao

That’s on a personal level. I should imagine organisational spaces will be a bit more organised. Back to Jarche:

We need safe communities to take time for reflection, consideration, and testing out ideas without getting harassed. Professional social networks and communities of practice help us make sense of the world outside the workplace. They also enable each of us to bring to bear much more knowledge and insight than we could do on our own.

Harold Jarche

…or to use Rao’s diagram which is so-awful-it’s-useful:

Image by Venkatesh Rao

Of course, blockchain/crypto could come along and solve all of our problems. Except it won’t. Humans are humans (are humans).

Ever since Eli Pariser’s TED talk urging us to beware online “filter bubbles”, people have been wringing their hands about ensuring we have ‘balance’ in our networks.

Interestingly, some recent research by the Reuters Institute at Oxford University paints a slightly different picture. The researcher, Dr Richard Fletcher, begins by investigating how people access the news.

Preferred access to news
Diagram via the Reuters Institute, Oxford University

Fletcher draws a distinction between different types of personalisation:

Self-selected personalisation refers to the personalisations that we voluntarily do to ourselves, and this is particularly important when it comes to news use. People have always made decisions in order to personalise their news use. They make decisions about what newspapers to buy, what TV channels to watch, and at the same time which ones they would avoid

Academics call this selective exposure. We know that it’s influenced by a range of different things such as people’s interest levels in news, their political beliefs and so on. This is something that has pretty much always been true.

Pre-selected personalisation is the personalisation that is done to people, sometimes by algorithms, sometimes without their knowledge. And this relates directly to the idea of filter bubbles because algorithms are possibly making choices on behalf of people and they may not be aware of it.

The reason this distinction is particularly important is because we should avoid comparing pre-selected personalisation and its effects with a world where people do not do any kind of personalisation to themselves. We can’t assume that offline, or when people are self-selecting news online, they’re doing it in a completely random way. People are always engaging in personalisation to some extent and if we want to understand the extent of pre-selected personalisation, we have to compare it with the realistic alternative, not hypothetical ideals.

Dr Richard Fletcher

Read the article for the details, but the takeaways for me were twofold. First, that we might be blaming social media for wider and deeper divisions within society, and second, that teaching people to search for information (rather than stumble across it via feeds) might be the best strategy:

People who use search engines for news on average use more news sources than people who don’t. More importantly, they’re more likely to use sources from both the left and the right. 
People who rely mainly on self-selection tend to have fairly imbalanced news diets. They either have more right-leaning or more left-leaning sources. People who use search engines tend to have a more even split between the two.

Dr Richard Fletcher

Useful as it is, what I think this research misses out is the ‘black box’ algorithms that seek to keep people engaged and consuming content. YouTube is the poster child for this. As Jarche comments:

We are left in a state of constant doubt as conspiratorial content becomes easier to access on platforms like YouTube than accessing solid scientific information in a journal, much of which is behind a pay-wall and inaccessible to the general public.

Harold Jarche

This isn’t an easy problem to solve.

We might like to pretend that human beings are rational agents, but this isn’t actually true. Let’s take something like climate change. We’re not arguing about the facts here, we’re arguing about politics. Adrian Bardon, writing in Fast Company, explains:

In theory, resolving factual disputes should be relatively easy: Just present evidence of a strong expert consensus. This approach succeeds most of the time, when the issue is, say, the atomic weight of hydrogen.

But things don’t work that way when the scientific consensus presents a picture that threatens someone’s ideological worldview. In practice, it turns out that one’s political, religious, or ethnic identity quite effectively predicts one’s willingness to accept expertise on any given politicized issue.

Adrian Bardon

This is pretty obvious when we stop to think about it for a moment; beliefs are bound up with identity, and that’s not something that’s so easy to change.

In ideologically charged situations, one’s prejudices end up affecting one’s factual beliefs. Insofar as you define yourself in terms of your cultural affiliations, information that threatens your belief system—say, information about the negative effects of industrial production on the environment—can threaten your sense of identity itself. If it’s part of your ideological community’s worldview that unnatural things are unhealthful, factual information about a scientific consensus on vaccine or GM food safety feels like a personal attack.

Adrian Bardon

So how do we change people’s minds when they’re objectively wrong? Brian Resnick, writing for Vox, suggests the best approach might be ‘deep canvassing’:

Giving grace. Listening to a political opponent’s concerns. Finding common humanity. In 2020, these seem like radical propositions. But when it comes to changing minds, they work.


The new research shows that if you want to change someone’s mind, you need to have patience with them, ask them to reflect on their life, and listen. It’s not about calling people out or labeling them fill-in-the-blank-phobic. Which makes it feel like a big departure from a lot of the current political dialogue.

Brian Resnick

This approach, it seems, works:

Diagram by Stanford University, via Vox

So it seems there is some hope of fixing the world’s problems. It’s just that the solutions point towards doing the hard work of talking to people, and not just treating them as containers for opinions to shoot down at a distance.

Enjoy this? Sign up for the weekly roundup and/or become a supporter!
