Buried towards the bottom of an update about the Breaking Smart newsletter, Venkatesh Rao includes this diagram and links to a post where he commits to longer-term work.
In an associated Twitter thread, one tweet suggests a way of telling whether a project is a new TLP (‘Top Level Project’) or part of an existing one: does it require a new name or domain name? Interesting.
I’m trying to rationalize all my activities to be simpler and easier to manage. An important first step for me was shutting down my other newsletter, Art of Gig, a month ago. Another was adopting 2 long-term rules across my projects — no new top-level projects, and minimum 10-year commitments.
Venkatesh Rao
Benedict Evans recently posted his annual ‘macro trends’ slide deck. It’s incredibly insightful, and a work of (minimalist) art. This article’s title comes from his conclusion, and you can see below which of the 128 slides jumped out at me from the deck:
For me, what the deck as a whole does is place some of the issues I’ve been thinking about in a wider context.
My team is building a federated social network for educators, so I’m particularly tuned in to conversations about the effect social media is having on society. A post by Harold Jarche about his experience of Twitter as a rage machine caught my attention, especially the part where he talks about how people are happy to comment based only on the ‘preview’ presented to them in embedded tweets:
Research on the self-perception of knowledge shows how viewing previews without going to the original article gives an inflated sense of understanding on the subject, “audiences who only read article previews are overly confident in their knowledge, especially individuals who are motivated to experience strong emotions and, thus, tend to form strong opinions.” Social media have created a worldwide Dunning-Kruger effect. Our collective self-perception of knowledge acquired through social media is greater than it actually is.
Harold Jarche
I think our experiment with general-purpose social networks is slowly coming to an end, or at least will do over the next decade. What I mean is that, while we’ll still have places where you can broadcast anything to anyone, the digital environments in which we’ll spend more time will be what Venkatesh Rao calls the ‘cozyweb’:
Unlike the main public internet, which runs on the (human) protocol of “users” clicking on links on public pages/apps maintained by “publishers”, the cozyweb works on the (human) protocol of everybody cutting-and-pasting bits of text, images, URLs, and screenshots across live streams. Much of this content is poorly addressable, poorly searchable, and very vulnerable to bitrot. It lives in a high-gatekeeping slum-like space comprising slacks, messaging apps, private groups, storage services like dropbox, and of course, email.
Venkatesh Rao
That’s on a personal level. I should imagine organisational spaces will be a bit more organised. Back to Jarche:
We need safe communities to take time for reflection, consideration, and testing out ideas without getting harassed. Professional social networks and communities of practice help us make sense of the world outside the workplace. They also enable each of us to bring to bear much more knowledge and insight than we could do on our own.
Harold Jarche
…or to use Rao’s so-awful-it’s-useful diagram:
Image by Venkatesh Rao
Of course, blockchain/crypto could come along and solve all of our problems. Except it won’t. Humans are humans (are humans).
Ever since Eli Pariser’s TED talk urging us to beware online “filter bubbles”, people have been wringing their hands about ensuring we have ‘balance’ in our networks.
Interestingly, some recent research by the Reuters Institute at Oxford University paints a slightly different picture. The researcher, Dr Richard Fletcher, begins by investigating how people access the news.
Diagram via the Reuters Institute, Oxford University
Fletcher draws a distinction between different types of personalisation:
Self-selected personalisation refers to the personalisations that we voluntarily do to ourselves, and this is particularly important when it comes to news use. People have always made decisions in order to personalise their news use. They make decisions about what newspapers to buy, what TV channels to watch, and at the same time which ones they would avoid.
Academics call this selective exposure. We know that it’s influenced by a range of different things such as people’s interest levels in news, their political beliefs and so on. This is something that has pretty much always been true.
Pre-selected personalisation is the personalisation that is done to people, sometimes by algorithms, sometimes without their knowledge. And this relates directly to the idea of filter bubbles because algorithms are possibly making choices on behalf of people and they may not be aware of it.
The reason this distinction is particularly important is because we should avoid comparing pre-selected personalisation and its effects with a world where people do not do any kind of personalisation to themselves. We can’t assume that offline, or when people are self-selecting news online, they’re doing it in a completely random way. People are always engaging in personalisation to some extent and if we want to understand the extent of pre-selected personalisation, we have to compare it with the realistic alternative, not hypothetical ideals.
Dr Richard Fletcher
Read the article for the details, but the takeaways for me were twofold: first, that we might be blaming social media for wider and deeper divisions within society; and second, that teaching people to search for information (rather than stumble across it via feeds) might be the best strategy:
People who use search engines for news on average use more news sources than people who don’t. More importantly, they’re more likely to use sources from both the left and the right. People who rely mainly on self-selection tend to have fairly imbalanced news diets. They either have more right-leaning or more left-leaning sources. People who use search engines tend to have a more even split between the two.
Dr Richard Fletcher
Useful as it is, what I think this research misses out is the ‘black box’ algorithms that seek to keep people engaged and consuming content. YouTube is the poster child for this. As Jarche comments:
We are left in a state of constant doubt as conspiratorial content becomes easier to access on platforms like YouTube than accessing solid scientific information in a journal, much of which is behind a pay-wall and inaccessible to the general public.
Harold Jarche
This isn’t an easy problem to solve.
We might like to pretend that human beings are rational agents, but this isn’t actually true. Take something like climate change: we’re not arguing about the facts, we’re arguing about politics. Adrian Bardon, writing in Fast Company, explains:
In theory, resolving factual disputes should be relatively easy: Just present evidence of a strong expert consensus. This approach succeeds most of the time, when the issue is, say, the atomic weight of hydrogen.
But things don’t work that way when the scientific consensus presents a picture that threatens someone’s ideological worldview. In practice, it turns out that one’s political, religious, or ethnic identity quite effectively predicts one’s willingness to accept expertise on any given politicized issue.
Adrian Bardon
This is pretty obvious when we stop to think about it for a moment; beliefs are bound up with identity, and that’s not something that’s so easy to change.
In ideologically charged situations, one’s prejudices end up affecting one’s factual beliefs. Insofar as you define yourself in terms of your cultural affiliations, information that threatens your belief system—say, information about the negative effects of industrial production on the environment—can threaten your sense of identity itself. If it’s part of your ideological community’s worldview that unnatural things are unhealthful, factual information about a scientific consensus on vaccine or GM food safety feels like a personal attack.
Adrian Bardon
So how do we change people’s minds when they’re objectively wrong? Brian Resnick, writing for Vox, suggests the best approach might be ‘deep canvassing’:
Giving grace. Listening to a political opponent’s concerns. Finding common humanity. In 2020, these seem like radical propositions. But when it comes to changing minds, they work.
[…]
The new research shows that if you want to change someone’s mind, you need to have patience with them, ask them to reflect on their life, and listen. It’s not about calling people out or labeling them fill-in-the-blank-phobic. Which makes it feel like a big departure from a lot of the current political dialogue.
Brian Resnick
This approach, it seems, works:
Diagram by Stanford University, via Vox
So it seems there is some hope of fixing the world’s problems. It’s just that the solutions point towards doing the hard work of talking to people, not just treating them as containers for opinions to shoot down at a distance.
So said Neil Postman (via Jay Springett). Jay is one of a small number of people whose work I find particularly thoughtful and challenging.
Another is Venkatesh Rao, who last week referenced a Twitter thread he posted earlier this year. It’s awkward to excerpt and quote the pertinent parts of such things, but I’ll give it a try:
Megatrend conclusion: if you do not build a second brain or go offline, you will BECOME the second brain.
[…]
Basically, there’s no way to actually handle the volume of information and news that all of us appear to be handling right now. Which means we are getting augmented cognition resources from somewhere. The default place is “social” media.
[…]
What those of us who are here are doing is making a deal with the devil (or an angel): in return for being 1-2 years ahead of curve, we play 2nd brain to a shared first brain. We’ve ceded control of executive attention not to evil companies, but… an emergent oracular brain.
[…]
I called it playing your part in the Global Social Computer in the Cloud (GSCITC).
[…]
Central trade-off in managing your participation in GSCITC is: The more you attempt to consciously curate your participation rather than letting it set your priorities, the less oracular power you get in return.
Venkatesh Rao
He reckons that being fully immersed in the firehose of social media is somewhat like reading the tea leaves or understanding the runes. You have to ‘go with the flow’.
Rao uses the example of the very Twitter thread he’s making. Constructing it that way, rather than as, say, a blog post or newsletter, means he’s in full-on ‘gonzo mode’ as opposed to what he calls (after Henry David Thoreau) ‘Waldenponding’.
I have been generally very unimpressed with the work people seem to generate when they go waldenponding to work on supposedly important things. The comparable people who stay more plugged in seem to produce better work.
My kindest reading of people who retreat so far it actually compromises their work is that it is a mental health preservation move because they can’t handle the optimum GSCITC immersion for their project. Their work could be improved if they had the stomach for more gonzo-nausea.
My harshest reading is that they’re narcissistic snowflakes who overvalue their work simply because they did it.
Venkatesh Rao
Well, perhaps. But as someone who has attempted to drink from that firehose for over a decade, I think the time comes when you realise something else. Who’s setting the agenda here? It’s not ‘no-one’, but neither is it any one person in particular. Rather, the whole structure of what can happen within such a network depends on decisions made by people other than you.
For example, Dan Hon pointed (in a supporter-only newsletter) to an article by Louise Matsakis in WIRED that explains that the social network TikTok not only doesn’t add timestamps to user-generated content, but actively blocks the clock on your smartphone. These design decisions affect what can and can’t happen, and also the kinds of things that do end up happening.
Writing in The Guardian, Leah McLaren reflects on being part of the last generation to really remember life before the internet.
In this age of uncertainty, predictions have lost value, but here’s an irrefutable one: quite soon, no person on earth will remember what the world was like before the internet. There will be records, of course (stored in the intangibly limitless archive of the cloud), but the actual lived experience of what it was like to think and feel and be human before the emergence of big data will be gone. When that happens, what will be lost?
Leah McLaren
McLaren is evidently a few years older than me, as I’ve been online since I was about 15. However, I regularly reflect on what being hyper-connected does to my sense of self. She cites a recent study published in the official journal of the World Psychiatric Association. Part of that study’s conclusion reads:
As digital technologies become increasingly integrated with everyday life, the Internet is becoming highly proficient at capturing our attention, while producing a global shift in how people gather information, and connect with one another. In this review, we found emerging support for several hypotheses regarding the pathways through which the Internet is influencing our brains and cognitive processes, particularly with regards to: a) the multi‐faceted stream of incoming information encouraging us to engage in attentional‐switching and “multi‐tasking”, rather than sustained focus; b) the ubiquitous and rapid access to online factual information outcompeting previous transactive systems, and potentially even internal memory processes; c) the online social world paralleling “real world” cognitive processes, and becoming meshed with our offline sociality, introducing the possibility for the special properties of social media to impact on “real life” in unforeseen ways.
Firth, J., et al. (2019). The “online brain”: how the Internet may be changing our cognition. World Psychiatry, 18: 119-129.
In her Guardian article, McLaren cites the main author, Dr Joseph Firth:
“The problem with the internet,” Firth explained, “is that our brains seem to quickly figure out it’s there – and outsource.” This would be fine if we could rely on the internet for information the same way we rely on, say, the British Library. But what happens when we subconsciously outsource a complex cognitive function to an unreliable online world manipulated by capitalist interests and agents of distortion? “What happens to children born in a world where transactive memory is no longer as widely exercised as a cognitive function?” he asked.
Leah McLaren
I think this is the problem, isn’t it? I’ve got no issue with having an ‘outboard brain’ where I store things that I want to look up instead of remember. It’s also insanely useful to have a method by which the world can join together in a form of ‘hive mind’.
What is problematic is when this ‘hive mind’ (in the form of social media) is controlled by people and organisations whose interests are orthogonal to our own.
In that situation, there are three things we can do. The first is to seek out nascent ‘hive mind’-like spaces which are not controlled by people focused on the problematic concept of ‘shareholder value’: Mastodon, for example, and other decentralised social networks.
The second is to spend time finding the voices to which you want to pay particular attention. The chances are that they won’t only share their thoughts via social networks; they are likely to have newsletters, blogs, and even podcasts.
Third, and apologies for the metaphor, but with such massive information consumption the chances are that we become ‘constipated’. If we don’t want that to happen, and we don’t want to go on an ‘information diet’, then we need to ensure better throughput. One of the best things I’ve done is to adopt a disciplined approach to writing (here on Thought Shrapnel, and elsewhere) about the things I’ve read and found interesting. That’s one way to extract the nutrients.
I’d love your thoughts on this. Do you agree with the above? What strategies do you have in place?