Category: Supporters

Posts accessible only to Patreon supporters.

Quick update!

For roughly the last decade, I’ve taken an annual hiatus from writing and social media, focusing on inputs rather than outputs. Sometimes that’s lasted a month, sometimes two.

This year, I’m going to be sending out weekly newsletters (only) during November, and then nothing at all in December. As a result, there won’t be any more posts on this site until January 2020.

I’d like to take this opportunity to thank everyone who has commented on my work this year, either publicly or privately. A special thanks goes to those who back Thought Shrapnel via Patreon. I really do appreciate your support!

Friday fablings

I couldn’t ignore these things this week:

  1. The 2010s Broke Our Sense Of Time (BuzzFeed News) — “Everything good, bad, and complicated flows through our phones, and for those not living some hippie Walden trip, we operate inside a technological experience that moves forward and back, and pulls you with it…. You can find yourself wondering why you’re seeing this now — or knowing too well why it is so. You can feel amazing and awful — exult in and be repelled by life — in the space of seconds. The thing you must say, the thing you’ve been waiting for — it’s always there, pulling you back under again and again and again. Who can remember anything anymore?”
  2. Telling Gareth Bale that Johnson is PM took away banterpocalypse’s sole survivor (The Guardian) — “The point is: it is more than theoretically conceivable that Johnson could be the shortest-serving prime minister in 100 years, and thus conceivable that Gareth Bale could have remained ignorant of his tenure in its entirety. Before there were smartphones and so on, big news events that happened while you were on holiday felt like they hadn’t truly happened. Clearly they HAD happened, in some philosophical sense or other, but because you hadn’t experienced them unfolding live on the nightly news, they never felt properly real.”
  3. Dreaming is Free (Learning Nuggets) — “When I was asked to keynote the Fleming College Fall Teaching & Learning Day, I thought it’d be a great chance to heed some advice from Blondie (Dreaming is free, after all) and drop a bunch of ideas for digital learning initiatives that we could do and see which ones that we can breath some life into. Each of these ideas are inspired by some open, networked and/or connectivist learning experiences that are already out there.”
  4. Omniviolence Is Coming and the World Isn’t Ready (Nautilus) — “The trouble is that if anyone anywhere can attack anyone anywhere else, then states will become—and are becoming—unable to satisfy their primary duty as referee. It’s a trend toward anarchy, “the war of all against all,” as Hobbes put it—in other words a condition of everyone living in constant fear of being harmed by their neighbors.”
  5. We never paid for Journalism (iDiallo) — “At the end of the day, the price that you and I pay, whether it is for the print copy or digital, it is only a very small part of the revenue. The price paid for the printed copy was by no means sustaining the newspaper business. It was advertisers all along. And they paid the price for the privilege of having as many eyeballs the newspaper could expose their ads to.”
  6. Crossing Divides: How a social network could save democracy from deadlock (BBC News) — “This was completely different from simply asking them to vote via an app. vTaiwan gave participants the agenda-setting power not just to determine the answer, but also define the question. And it didn’t aim to find a majority of one side over another, but achieve consensus across them.”
  7. Github removes Tsunami Democràtic’s APK after a takedown order from Spain (TechCrunch) — “While the Tsunami Democràtic app could be accused of encouraging disruption, the charge of “terrorism” is clearly overblown. Unless your definition of terrorism extends to harnessing the power of peaceful civil resistance to generate momentum for political change.”
  8. You Choose (inessential) — “You choose the web you want. But you have to do the work. A lot of people are doing the work. You could keep telling them, discouragingly, that what they’re doing is dead. Or you could join in the fun.”
  9. Agency Is Key (gapingvoid) — “People don’t innovate (“Thrive” mode) when they’re scared. Instead, they keep their heads down (“Survive” mode).”

Image by False Knees

Microcast #080 – Redecentralize and MozFest

This week’s microcast recaps my involvement in two events last weekend.

Show notes

We don’t receive wisdom; we must discover it for ourselves after a journey that no one can take us on or spare us

So said Marcel Proust, that famous connoisseur of les petites madeleines. While I don’t share his effete view of the world, I do like French cakes and definitely agree with his sentiments on wisdom.

Earlier this week, Eylan Ezekiel shared this Nesta Landscape of innovation approaches in our Slack channel. It’s what I would call ‘slidebait’ — carefully crafted to fit onto slide decks in keynotes around the world. That’s a smart move, because it gets people talking about your organisation.

Nesta’s Landscape of innovation approaches

In my opinion, how these things are made is more interesting than the end result. There are inevitably value judgements when creating anything like this, and, because Nesta have set it out as overlapping ‘spaces’, the most obvious takeaway from the above diagram is that those innovation approaches sitting within three overlapping spaces are the ‘most valuable’ or ‘most impactful’. Is that true?

A previous post on this topic from the Nesta blog explains:

Although this map is neither exhaustive nor definitive – and at some points it may seem perhaps a little arbitrary, personal choice and preference – we have tried to provide an overview of both commonly used and emerging innovation approaches.

Bas Leurs (formerly of Nesta)

When you’re working for a well-respected organisation, you have to be really careful, because people can take what you produce as some sort of Gospel Truth. No matter how many caveats you add, people confuse the map with the territory.

I have some experience with creating a ‘map’ for a given area, as I was Mozilla’s Web Literacy Lead from 2013 to 2015. During that time, I worked with the community to take the Web Literacy Standard Map from v0.1 to v1.5.

Digital literacies of various types are something I’ve been paying attention to for around 15 years now. And, let me tell you, I’ve seen some pretty bad ‘maps’ and ‘frameworks’.

For example, here’s a slide deck for a presentation I did for a European Commission Summer School last year, in which I attempted to take the audience on a journey to decide whether a particular example I showed them was any good:

If you have a look at Slide 14 onwards, you’ll see that the point I was trying to make is that you have no way of knowing whether or not a shiny, good-looking map is any good. The organisation that produced it didn’t ‘show their work’, so you have zero insight into how it was created and the decisions taken along the way. Did their intern knock it up on a short deadline? We’ll never know.

The problem with many think tanks and ‘innovation’ organisations is that they move on too quickly to the next thing. Instead of sitting with something and letting it mature and flourish, as soon as the next bit of funding comes in, they’re off like a dog chasing a shiny car. I’m not sure that’s how innovation works.

Before Mozilla, I worked at Jisc, which at the time funded innovation programmes on behalf of the UK government and disseminated the outcomes. I remember a very simple overview from Jisc’s Sustaining and Embedding Innovations project that focused on three stages of innovation:

Invention
This is about the generation of new ideas e.g. new ways of teaching and learning or new ICT solutions.

Early Innovation
This is all about the early practical application of new inventions, often focused in specific areas e.g. a subject discipline or speciality such as distance learning or work-based learning.

Systemic Innovation
This is where an institution, for example, will aim to embed an innovation institutionally. 

Jisc

The problem with many maps and frameworks, especially around digital skills and innovation, is that they remove any room for ambiguity. So, in an attempt not to come across as vague, they instead become ‘dead metaphors’.

Continuum of ambiguity

I don’t think I’ve ever seen an example where, without any contextualisation, an individual or organisation has taken something ‘off the shelf’ and applied it to achieve uniformly fantastic results. That’s not how these things work.

Humans are complex organisms; we’re not machines. For a given input you can’t expect the same output. We’re not lossless replicators.

So although it takes time, effort, and resources, you’ve got to put in the hard yards to see an innovation through all three of those stages outlined by Jisc. While the temptation is to nail things down from the outset, the opposite is actually the best way forward. Take people on a journey and get them to invest in what’s at stake. Embrace the ambiguity.

I’ve written more about this in a post on a five-step process for creating a sustainable digital literacies curriculum. It’s something I’ll be thinking about more as I reboot my consultancy work (through our co-op) for 2020!

For now, though, remember this wonderful African proverb:

"If you want to go fast, go alone. If you want to go far, go together." (African proverb)
CC BY-ND Bryan Mathers

Microcast #079 – information environments

This week’s microcast is about information environments, the difference between technical and ‘people’ skills, and sharing your experience.

Show notes

Friday flowerings

Did you see these things this week?

  • Happy 25th year, blogging. You’ve grown up, but social media is still having a brawl (The Guardian) — “The furore over social media and its impact on democracy has obscured the fact that the blogosphere not only continues to exist, but also to fulfil many of the functions of a functioning public sphere. And it’s massive. One source, for example, estimates that more than 409 million people view more than 20bn blog pages each month and that users post 70m new posts and 77m new comments each month. Another source claims that of the 1.7 bn websites in the world, about 500m are blogs. And WordPress.com alone hosts blogs in 120 languages, 71% of them in English.”
  • Emmanuel Macron Wants to Scan Your Face (The Washington Post) — “President Emmanuel Macron’s administration is set to be the first in Europe to use facial recognition when providing citizens with a secure digital identity for accessing more than 500 public services online… The roll-out is tainted by opposition from France’s data regulator, which argues the electronic ID breaches European Union rules on consent – one of the building blocks of the bloc’s General Data Protection Regulation laws – by forcing everyone signing up to the service to use the facial recognition, whether they like it or not.”
  • This is your phone on feminism (The Conversationalist) — “Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true. This cognitive dissonance confuses and paralyses us. And look around. Everyone has a smartphone. So it’s probably not so bad, and anyway, that’s just how things work. Right?”
  • Google’s auto-delete tools are practically worthless for privacy (Fast Company) — “In reality, these auto-delete tools accomplish little for users, even as they generate positive PR for Google. Experts say that by the time three months rolls around, Google has already extracted nearly all the potential value from users’ data, and from an advertising standpoint, data becomes practically worthless when it’s more than a few months old.”
  • Audrey Watters (Uses This) — “For me, the ideal set-up is much less about the hardware or software I am using. It’s about the ideas that I’m thinking through and whether or not I can sort them out and shape them up in ways that make for a good piece of writing. Ideally, that does require some comfort — a space for sustained concentration. (I know better than to require an ideal set up in order to write. I’d never get anything done.)”
  • Computer Files Are Going Extinct (OneZero) — “Files are skeuomorphic. That’s a fancy word that just means they’re a digital concept that mirrors a physical item. A Word document, for example, is like a piece of paper, sitting on your desk(top). A JPEG is like a painting, and so on. They each have a little icon that looks like the physical thing they represent. A pile of paper, a picture frame, a manila folder. It’s kind of charming really.”
  • Why Technologists Fail to Think of Moderation as a Virtue and Other Stories About AI (The LA Review of Books) — “Speculative fiction about AI can move us to think outside the well-trodden clichés — especially when it considers how technologies concretely impact human lives — through the influence of supersized mediators, like governments and corporations.”
  • Inside Mozilla’s 18-month effort to market without Facebook (Digiday) — “The decision to focus on data privacy in marketing the Mozilla brand came from research conducted by the company four years ago into the rise of consumers who make values-based decisions on not only what they purchase but where they spend their time.”
  • Core human values not eyeballs (Cubic Garden) — “Theres so much more to do, but the aims are high and important for not just the BBC, but all public service entities around the world. Measuring the impact and quality on peoples lives beyond the shallow meaningless metrics for public service is critical.”

Image: The why is often invisible via Jessica Hagy’s Indexed

Microcast #078 — Values-based organisations

I’ve decided to post these microcasts, which I previously made available only through Patreon, here instead.

Microcasts focus on what I’ve been up to and thinking about, and also provide a way to answer questions from supporters and other readers/listeners!

This microcast covers ethics in decision-making for technology companies and (related!) some recent purchases I’ve made.

Show notes

I am not fond of expecting catastrophes, but there are cracks in the universe

So said Sydney Smith. Let’s talk about surveillance. Let’s talk about surveillance capitalism and surveillance humanitarianism. But first, let’s talk about machine learning and algorithms; in other words, let’s talk about what happens after all of that data is collected.

Writing in The Guardian, Sarah Marsh investigates local councils using “automated guidance systems” in an attempt to save money.

The systems are being deployed to provide automated guidance on benefit claims, prevent child abuse and allocate school places. But concerns have been raised about privacy and data security, the ability of council officials to understand how some of the systems work, and the difficulty for citizens in challenging automated decisions.

Sarah Marsh

The trouble is, they’re not particularly effective:

It has emerged North Tyneside council has dropped TransUnion, whose system it used to check housing and council tax benefit claims. Welfare payments to an unknown number of people were wrongly delayed when the computer’s “predictive analytics” erroneously identified low-risk claims as high risk.

Meanwhile, Hackney council in east London has dropped Xantura, another company, from a project to predict child abuse and intervene before it happens, saying it did not deliver the expected benefits. And Sunderland city council has not renewed a £4.5m data analytics contract for an “intelligence hub” provided by Palantir.

Sarah Marsh

When I was at Mozilla, a number of my colleagues had worked on the OFA (Obama For America) campaign. I remember one of them, a DevOps guy, expressing his concern that the infrastructure being built was all well and good while someone ‘friendly’ was in the White House, but wondering what would come next.

Well, we now know what comes next, on both sides of the Atlantic, and we can’t put that genie back in its bottle. Swingeing cuts by successive Conservative governments over here, coupled with the Brexit time-and-money pit, mean that there’s no attention or cash left.

If we stop and think about things for a second, we probably wouldn’t want to live in a world where machines make decisions for us, based on algorithms devised by nerds. As Rose Eveleth discusses in a scathing article for Vox, this stuff isn’t ‘inevitable’ — nor does it constitute a process of ‘natural selection’:

Often consumers don’t have much power of selection at all. Those who run small businesses find it nearly impossible to walk away from Facebook, Instagram, Yelp, Etsy, even Amazon. Employers often mandate that their workers use certain apps or systems like Zoom, Slack, and Google Docs. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or, ‘I’m not on social media,’” says Rumman Chowdhury, a data scientist at Accenture. “You actually have to be so comfortable in your privilege that you can opt out of things.”

And so we’re left with a tech world claiming to be driven by our desires when those decisions aren’t ones that most consumers feel good about. There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the “natural evolution of technology.” But the “natural evolution of technology” was never a thing to begin with, and it’s time to question what “progress” actually means.

Rose Eveleth

I suppose the thing that concerns me the most is people in dire need being subject to impersonal technology for vital and life-saving aid.

For example, Mark Latonero, writing in The New York Times, talks about the growing dangers around what he calls ‘surveillance humanitarianism’:

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.

Mark Latonero

It’s easy to think that this is an emergency, so we should just do whatever is necessary. But Latonero explains that doing so merely shifts the risk to a later time:

If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.

Mark Latonero

There was a rather startling article in last weekend’s newspaper, which I’ve found online. Hannah Devlin, again in The Guardian (a good source for those concerned with surveillance), writes about a perfect storm of social media and improved processing speeds:

[I]n the past three years, the performance of facial recognition has stepped up dramatically. Independent tests by the US National Institute of Standards and Technology (Nist) found the failure rate for finding a target picture in a database of 12m faces had dropped from 5% in 2010 to 0.1% this year.

The rapid acceleration is thanks, in part, to the goldmine of face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the past decade. At one time, scientists would create bespoke databases by laboriously photographing hundreds of volunteers at different angles, in different lighting conditions. By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000 people harvested from search engines – they included celebrities, broadcasters, business people and anyone with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the development of software used by the Chinese state to control its Uighur population.

In parallel, hardware companies have developed a new generation of powerful processing chips, called Graphics Processing Units (GPUs), uniquely adapted to crunch through a colossal number of calculations every second. The combination of big data and GPUs paved the way for an entirely new approach to facial recognition, called deep learning, which is powering a wider AI revolution.

Hannah Devlin

Those of you who have read this far and are expecting some big reveal are going to be disappointed. I don’t have any ‘answers’ to these problems. I guess I’ve been guilty, like many of us, of the kind of ‘privacy nihilism’ mentioned by Ian Bogost in The Atlantic:

Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

Ian Bogost

The only forces we have to stop this are collective action and governmental action. My concern is that we don’t have the digital savvy for the former, and there’s definitely a lack of will when it comes to the latter. Troubling times.

Friday fawnings

On this week’s rollercoaster journey, I came across these nuggets:

  • Renata Ávila: “The Internet of creation disappeared. Now we have the Internet of surveillance and control” (CCCB Lab) — “This lawyer and activist talks with a global perspective about the movements that the power of “digital colonialism” is weaving. Her arguments are essential for preventing ourselves from being crushed by the technological world, from being carried away by the current of ephemeral divertemento. For being fully aware that, as individuals, our battle is not lost, but that we can control the use of our data, refuse to give away our facial recognition or demand that the privacy laws that protect us are obeyed.”
  • Everything Is Private Equity Now (Bloomberg) — “The basic idea is a little like house flipping: Take over a company that’s relatively cheap and spruce it up to make it more attractive to other buyers so you can sell it at a profit in a few years. The target might be a struggling public company or a small private business that can be combined—or “rolled up”—with others in the same industry.”
  • Forget STEM, We Need MESH (Our Human Family) — “I would suggest a renewed focus on MESH education, which stands for Media Literacy, Ethics, Sociology, and History. Because if these are not given equal attention, we could end up with incredibly bright and technically proficient people who lack all capacity for democratic citizenship.”
  • Connecting the curious (Harold Jarche) — “If we want to change the world, be curious. If we want to make the world a better place, promote curiosity in all aspects of learning and work. There are still a good number of curious people of all ages working in creative spaces or building communities around common interests. We need to connect them.”
  • Twitter: No, really, we’re very sorry we sold your security info for a boatload of cash (The Register) — “The social networking giant on Tuesday admitted to an “error” that let advertisers have access to the private information customers had given Twitter in order to place additional security protections on their accounts.”
  • Digital tools interrupt workers 14 times a day (CIO Dive) — “The constant chime of digital workplace tools including email, instant messaging or collaboration software interrupts knowledge workers 13.9 times on an average day, according to a survey of 3,750 global workers from Workfront.”
  • Book review – Curriculum: Athena versus the Machine (TES) — “Despite the hope that the book is a cure for our educational malaise, Curriculum is a morbid symptom of the current political and intellectual climate in English education.”
  • Fight for the planet: Building an open platform and open culture at Greenpeace (Opensource.com) — “Being as open as we can, pushing the boundaries of what it means to work openly, doesn’t just impact our work. It impacts our identity.”
  • Psychodata (Code Acts in Education) — “Social-emotional learning sounds like a progressive, child-centred agenda, but behind the scenes it’s primarily concerned with new forms of child measurement.”

Image via xkcd

People will come to adore the technologies that undo their capacities to think

So said Neil Postman (via Jay Springett). Jay is one of a small number of people whose work I find particularly thoughtful and challenging.

Another is Venkatesh Rao, who last week referenced a Twitter thread he posted earlier this year. It’s awkward to quote the pertinent parts of such things, but I’ll give it a try:

Megatrend conclusion: if you do not build a second brain or go offline, you will BECOME the second brain.

[…]

Basically, there’s no way to actually handle the volume of information and news that all of us appear to be handling right now. Which means we are getting augmented cognition resources from somewhere. The default place is “social” media.

[…]

What those of us who are here are doing is making a deal with the devil (or an angel): in return for being 1-2 years ahead of curve, we play 2nd brain to a shared first brain. We’ve ceded control of executive attention not to evil companies, but… an emergent oracular brain.

[…]

I called it playing your part in the Global Social Computer in the Cloud (GSCITC).

[…]

Central trade-off in managing your participation in GSCITC is: The more you attempt to consciously curate your participation rather than letting it set your priorities, the less oracular power you get in return.

Venkatesh Rao

He reckons that being fully immersed in the firehose of social media is somewhat like reading the tea leaves or understanding the runes. You have to ‘go with the flow’.

Rao uses the example of the very Twitter thread he’s making. Constructing it that way, rather than writing a blog post or newsletter, means he is in full-on ‘gonzo mode’ as opposed to what he calls (after Henry David Thoreau) ‘Waldenponding’.

I have been generally very unimpressed with the work people seem to generate when they go waldenponding to work on supposedly important things. The comparable people who stay more plugged in seem to produce better work.

My kindest reading of people who retreat so far it actually compromises their work is that it is a mental health preservation move because they can’t handle the optimum GSCITC immersion for their project. Their work could be improved if they had the stomach for more gonzo-nausea.

My harshest reading is that they’re narcissistic snowflakes who overvalue their work simply because they did it.

Venkatesh Rao

Well, perhaps. But as someone who has attempted to drink from that firehose for over a decade, I think the time comes when you realise something else. Who’s setting the agenda here? It’s not ‘no-one’, but neither is it any one person in particular. Rather, the whole structure of what can happen within such a network depends on decisions made by people other than you.

For example, Dan Hon pointed (in a supporter-only newsletter) to an article by Louise Matsakis in WIRED explaining that the social network TikTok not only doesn’t add timestamps to user-generated content, but actively blocks the clock on your smartphone. These design decisions affect what can and can’t happen, and also the kinds of things that do end up happening.


Writing in The Guardian, Leah McLaren reflects on being part of the last generation to really remember life before the internet.

In this age of uncertainty, predictions have lost value, but here’s an irrefutable one: quite soon, no person on earth will remember what the world was like before the internet. There will be records, of course (stored in the intangibly limitless archive of the cloud), but the actual lived experience of what it was like to think and feel and be human before the emergence of big data will be gone. When that happens, what will be lost?

Leah McLaren

McLaren is evidently a few years older than me, as I’ve been online since I was about 15. However, I do regularly reflect on what being hyper-connected does to my sense of self. She cites a recent study published in the official journal of the World Psychiatric Association. Part of the conclusion of that study reads:

As digital technologies become increasingly integrated with everyday life, the Internet is becoming highly proficient at capturing our attention, while producing a global shift in how people gather information, and connect with one another. In this review, we found emerging support for several hypotheses regarding the pathways through which the Internet is influencing our brains and cognitive processes, particularly with regards to: a) the multi‐faceted stream of incoming information encouraging us to engage in attentional‐switching and “multi‐tasking”, rather than sustained focus; b) the ubiquitous and rapid access to online factual information outcompeting previous transactive systems, and potentially even internal memory processes; c) the online social world paralleling “real world” cognitive processes, and becoming meshed with our offline sociality, introducing the possibility for the special properties of social media to impact on “real life” in unforeseen ways.

Firth, J., et al. (2019). The “online brain”: how the Internet may be changing our cognition. World Psychiatry, 18: 119-129.

In her Guardian article, McLaren cites the main author, Dr Joseph Firth:

“The problem with the internet,” Firth explained, “is that our brains seem to quickly figure out it’s there – and outsource.” This would be fine if we could rely on the internet for information the same way we rely on, say, the British Library. But what happens when we subconsciously outsource a complex cognitive function to an unreliable online world manipulated by capitalist interests and agents of distortion? “What happens to children born in a world where transactive memory is no longer as widely exercised as a cognitive function?” he asked.

Leah McLaren

I think this is the problem, isn’t it? I’ve got no issue with having an ‘outboard brain’ where I store things that I want to look up instead of remember. It’s also insanely useful to have a method by which the world can join together in a form of ‘hive mind’.

What is problematic is when this ‘hive mind’ (in the form of social media) is controlled by people and organisations whose interests are orthogonal to our own.

In that situation, there are three things we can do. The first is to seek out nascent ‘hive mind’-like spaces that are not controlled by people focused on the problematic concept of ‘shareholder value’: Mastodon, for example, and other decentralised social networks.

The second is to spend time finding the voices to which you want to pay particular attention. The chances are that they won’t only write down their thoughts via social networks. They are likely to have newsletters, blogs, and even podcasts.

Third, and apologies for the metaphor, but with such massive information consumption the chances are that we become ‘constipated’. So if we don’t want that to happen, and we don’t want to go on an ‘information diet’, then we need to ensure better throughput. One of the best things I’ve done is to take a disciplined approach to writing (here on Thought Shrapnel, and elsewhere) about the things I’ve read and found interesting. That’s one way to extract the nutrients.


I’d love your thoughts on this. Do you agree with the above? What strategies do you have in place?