Author: Doug Belshaw

Microcast #078 — Values-based organisations

I’ve decided to post these microcasts, which I previously made available only through Patreon, here instead.

Microcasts focus on what I’ve been up to and thinking about, and also provide a way to answer questions from supporters and other readers/listeners!

This microcast covers ethics in decision-making for technology companies and (related!) some recent purchases I’ve made.

Show notes

I am not fond of expecting catastrophes, but there are cracks in the universe

So said Sydney Smith. Let’s talk about surveillance. Let’s talk about surveillance capitalism and surveillance humanitarianism. But first, let’s talk about machine learning and algorithms; in other words, let’s talk about what happens after all of that data is collected.

Writing in The Guardian, Sarah Marsh investigates local councils using “automated guidance systems” in an attempt to save money.

The systems are being deployed to provide automated guidance on benefit claims, prevent child abuse and allocate school places. But concerns have been raised about privacy and data security, the ability of council officials to understand how some of the systems work, and the difficulty for citizens in challenging automated decisions.

Sarah Marsh

The trouble is, they’re not particularly effective:

It has emerged North Tyneside council has dropped TransUnion, whose system it used to check housing and council tax benefit claims. Welfare payments to an unknown number of people were wrongly delayed when the computer’s “predictive analytics” erroneously identified low-risk claims as high risk

Meanwhile, Hackney council in east London has dropped Xantura, another company, from a project to predict child abuse and intervene before it happens, saying it did not deliver the expected benefits. And Sunderland city council has not renewed a £4.5m data analytics contract for an “intelligence hub” provided by Palantir.

Sarah Marsh

When I was at Mozilla, a number of my colleagues had worked on the OFA (Obama For America) campaign. I remember one of them, a DevOps guy, expressing his concern that the infrastructure being built was all well and good while there was someone ‘friendly’ in the White House, but what about what came next?

Well, we now know what comes next, on both sides of the Atlantic, and we can’t put that genie back in its bottle. Swingeing cuts by successive Conservative governments over here, coupled with the Brexit time-and-money pit means that there’s no attention or cash left.

If we stop and think about things for a second, we probably wouldn’t want to live in a world where machines make decisions for us, based on algorithms devised by nerds. As Rose Eveleth discusses in a scathing article for Vox, this stuff isn’t ‘inevitable’ — nor does it constitute a process of ‘natural selection’:

Often consumers don’t have much power of selection at all. Those who run small businesses find it nearly impossible to walk away from Facebook, Instagram, Yelp, Etsy, even Amazon. Employers often mandate that their workers use certain apps or systems like Zoom, Slack, and Google Docs. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or, ‘I’m not on social media,’” says Rumman Chowdhury, a data scientist at Accenture. “You actually have to be so comfortable in your privilege that you can opt out of things.”

And so we’re left with a tech world claiming to be driven by our desires when those decisions aren’t ones that most consumers feel good about. There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the “natural evolution of technology.” But the “natural evolution of technology” was never a thing to begin with, and it’s time to question what “progress” actually means.

Rose Eveleth

I suppose the thing that concerns me the most is people in dire need being subject to impersonal technology for vital and life-saving aid.

For example, Mark Latonero, writing in The New York Times, talks about the growing dangers around what he calls ‘surveillance humanitarianism’:

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.

Mark Latonero

It’s easy to think that this is an emergency, so we should just do whatever is necessary. But Latonero explains that doing so merely shifts the risk to a later time:

If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.

Mark Latonero

There was a rather startling article in last weekend’s newspaper, which I’ve found online. Hannah Devlin, again writing in The Guardian (which is a good source of information for those concerned with surveillance) writes about a perfect storm of social media and improved processing speeds:

[I]n the past three years, the performance of facial recognition has stepped up dramatically. Independent tests by the US National Institute of Standards and Technology (Nist) found the failure rate for finding a target picture in a database of 12m faces had dropped from 5% in 2010 to 0.1% this year.

The rapid acceleration is thanks, in part, to the goldmine of face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the past decade. At one time, scientists would create bespoke databases by laboriously photographing hundreds of volunteers at different angles, in different lighting conditions. By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000 people harvested from search engines – they included celebrities, broadcasters, business people and anyone with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the development of software used by the Chinese state to control its Uighur population.

In parallel, hardware companies have developed a new generation of powerful processing chips, called Graphics Processing Units (GPUs), uniquely adapted to crunch through a colossal number of calculations every second. The combination of big data and GPUs paved the way for an entirely new approach to facial recognition, called deep learning, which is powering a wider AI revolution.

Hannah Devlin

Those of you who have read this far and are expecting some big reveal are going to be disappointed. I don’t have any ‘answers’ to these problems. I guess I’ve been guilty, like many of us have, of the kind of ‘privacy nihilism’ mentioned by Ian Bogost in The Atlantic:

Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

Ian Bogost

The only forces that we have to stop this are collective action and governmental action. My concern is that we don’t have the digital savvy for the former, and there’s definitely a lack of will in respect of the latter. Troubling times.

Friday fawnings

On this week’s rollercoaster journey, I came across these nuggets:

  • Renata Ávila: “The Internet of creation disappeared. Now we have the Internet of surveillance and control” (CCCB Lab) — “This lawyer and activist talks with a global perspective about the movements that the power of “digital colonialism” is weaving. Her arguments are essential for preventing ourselves from being crushed by the technological world, from being carried away by the current of ephemeral divertemento. For being fully aware that, as individuals, our battle is not lost, but that we can control the use of our data, refuse to give away our facial recognition or demand that the privacy laws that protect us are obeyed.”
  • Everything Is Private Equity Now (Bloomberg) — “The basic idea is a little like house flipping: Take over a company that’s relatively cheap and spruce it up to make it more attractive to other buyers so you can sell it at a profit in a few years. The target might be a struggling public company or a small private business that can be combined—or “rolled up”—with others in the same industry.”
  • Forget STEM, We Need MESH (Our Human Family) — “I would suggest a renewed focus on MESH education, which stands for Media Literacy, Ethics, Sociology, and History. Because if these are not given equal attention, we could end up with incredibly bright and technically proficient people who lack all capacity for democratic citizenship.”
  • Connecting the curious (Harold Jarche) — “If we want to change the world, be curious. If we want to make the world a better place, promote curiosity in all aspects of learning and work. There are still a good number of curious people of all ages working in creative spaces or building communities around common interests. We need to connect them.”
  • Twitter: No, really, we’re very sorry we sold your security info for a boatload of cash (The Register) — “The social networking giant on Tuesday admitted to an “error” that let advertisers have access to the private information customers had given Twitter in order to place additional security protections on their accounts.”
  • Digital tools interrupt workers 14 times a day (CIO Dive) — “The constant chime of digital workplace tools including email, instant messaging or collaboration software interrupts knowledge workers 13.9 times on an average day, according to a survey of 3,750 global workers from Workfront.”
  • Book review – Curriculum: Athena versus the Machine (TES) — “Despite the hope that the book is a cure for our educational malaise, Curriculum is a morbid symptom of the current political and intellectual climate in English education.”
  • Fight for the planet: Building an open platform and open culture at Greenpeace (Opensource.com) — “Being as open as we can, pushing the boundaries of what it means to work openly, doesn’t just impact our work. It impacts our identity.”
  • Psychodata (Code Acts in Education) — “Social-emotional learning sounds like a progressive, child-centred agenda, but behind the scenes it’s primarily concerned with new forms of child measurement.”

Image via xkcd

People will come to adore the technologies that undo their capacities to think

So said Neil Postman (via Jay Springett). Jay is one of a small number of people whose work I find particularly thoughtful and challenging.

Another is Venkatesh Rao, who last week referenced a Twitter thread he posted earlier this year. It’s awkward to quote the pertinent parts of such things, but I’ll give it a try:

Megatrend conclusion: if you do not build a second brain or go offline, you will BECOME the second brain.

[…]

Basically, there’s no way to actually handle the volume of information and news that all of us appear to be handling right now. Which means we are getting augmented cognition resources from somewhere. The default place is “social” media.

[…]

What those of us who are here are doing is making a deal with the devil (or an angel): in return for being 1-2 years ahead of curve, we play 2nd brain to a shared first brain. We’ve ceded control of executive attention not to evil companies, but… an emergent oracular brain.

[…]

I called it playing your part in the Global Social Computer in the Cloud (GSCITC).

[…]

Central trade-off in managing your participation in GSCITC is: The more you attempt to consciously curate your participation rather than letting it set your priorities, the less oracular power you get in return.

Venkatesh Rao

He reckons that being fully immersed in the firehose of social media is somewhat like reading the tea leaves or understanding the runes. You have to ‘go with the flow’.

Rao uses the example of the very Twitter thread he’s making. Constructing it that way versus, for example, writing a blog post or newsletter means he is in full-on ‘gonzo mode’ versus what he calls (after Henry David Thoreau) ‘Waldenponding’.

I have been generally very unimpressed with the work people seem to generate when they go waldenponding to work on supposedly important things. The comparable people who stay more plugged in seem to produce better work.

My kindest reading of people who retreat so far it actually compromises their work is that it is a mental health preservation move because they can’t handle the optimum GSCITC immersion for their project. Their work could be improved if they had the stomach for more gonzo-nausea.

My harshest reading is that they’re narcissistic snowflakes who overvalue their work simply because they did it.

Venkatesh Rao

Well, perhaps. But as someone who has attempted to drink from that firehose for over a decade, I think the time comes when you realise something else. Who’s setting the agenda here? It’s not ‘no-one’, but neither is it any one person in particular. Rather, the whole structure of what can happen within such a network depends on decisions made by people other than you.

For example, Dan Hon pointed (in a supporter-only newsletter) to an article by Louise Matsakis in WIRED explaining that the social network TikTok not only doesn’t add timestamps to user-generated content, but actively blocks the clock on your smartphone. These design decisions affect what can and can’t happen, and also the kinds of things that do end up happening.


Writing in The Guardian, Leah McLaren reflects on being part of the last generation to really remember life before the internet.

In this age of uncertainty, predictions have lost value, but here’s an irrefutable one: quite soon, no person on earth will remember what the world was like before the internet. There will be records, of course (stored in the intangibly limitless archive of the cloud), but the actual lived experience of what it was like to think and feel and be human before the emergence of big data will be gone. When that happens, what will be lost?

Leah McLaren

McLaren is evidently a few years older than me, as I’ve been online since I was about 15. However, I regularly reflect on what being hyper-connected does to my sense of self. She cites a recent study published in the official journal of the World Psychiatric Association. Part of the conclusion of that study reads:

As digital technologies become increasingly integrated with everyday life, the Internet is becoming highly proficient at capturing our attention, while producing a global shift in how people gather information, and connect with one another. In this review, we found emerging support for several hypotheses regarding the pathways through which the Internet is influencing our brains and cognitive processes, particularly with regards to: a) the multi‐faceted stream of incoming information encouraging us to engage in attentional‐switching and “multi‐tasking” , rather than sustained focus; b) the ubiquitous and rapid access to online factual information outcompeting previous transactive systems, and potentially even internal memory processes; c) the online social world paralleling “real world” cognitive processes, and becoming meshed with our offline sociality, introducing the possibility for the special properties of social media to impact on “real life” in unforeseen ways.

Firth, J., et al. (2019). The “online brain”: how the Internet may be changing our cognition. World Psychiatry, 18: 119-129.

In her Guardian article, McLaren cites the main author, Dr Joseph Firth:

“The problem with the internet,” Firth explained, “is that our brains seem to quickly figure out it’s there – and outsource.” This would be fine if we could rely on the internet for information the same way we rely on, say, the British Library. But what happens when we subconsciously outsource a complex cognitive function to an unreliable online world manipulated by capitalist interests and agents of distortion? “What happens to children born in a world where transactive memory is no longer as widely exercised as a cognitive function?” he asked.

Leah McLaren

I think this is the problem, isn’t it? I’ve got no issue with having an ‘outboard brain’ where I store things that I want to look up instead of remember. It’s also insanely useful to have a method by which the world can join together in a form of ‘hive mind’.

What is problematic is when this ‘hive mind’ (in the form of social media) is controlled by people and organisations whose interests are orthogonal to our own.

In that situation, there are three things we can do. The first is to seek out nascent ‘hive mind’-like spaces which are not controlled by people focused on the problematic concept of ‘shareholder value’. Like Mastodon, for example, and other decentralised social networks.

The second is to spend time finding the voices to which you want to pay particular attention. The chances are that they won’t only share their thoughts via social networks. They are likely to have newsletters, blogs, and even podcasts.

Third, and apologies for the metaphor, but with such massive information consumption the chances are that we become ‘constipated’. So if we don’t want that to happen, and we don’t want to go on an ‘information diet’, then we need to ensure a better throughput. One of the best things I’ve done is to take a disciplined approach to writing (here on Thought Shrapnel, and elsewhere) about the things I’ve read and found interesting. That’s one way to extract the nutrients.


I’d love your thoughts on this. Do you agree with the above? What strategies do you have in place?

Friday flexitarianism

Check these links out and tell me which one you like best:

  • The radical combination of degrowth and basic income (openDemocracy) — “One of the things you hear whenever you talk about degrowth is that, if the economy doesn’t grow, people are going to be without jobs, people will go hungry, and no one wants that. Rich countries might be able to afford slowing down their economies, but not poorer ones. You hear this argument mostly in countries from the Global South, like my own. This misses the point. Degrowth is a critique of our dependency on work. This idea that people have to work to stay alive, and thus the economy needs to keep growing for the sake of keeping people working.”
  • The hypersane are among us, if only we are prepared to look (Aeon) — “It is not just that the ‘sane’ are irrational but that they lack scope and range, as though they’ve grown into the prisoners of their arbitrary lives, locked up in their own dark and narrow subjectivity. Unable to take leave of their selves, they hardly look around them, barely see beauty and possibility, rarely contemplate the bigger picture – and all, ultimately, for fear of losing their selves, of breaking down, of going mad, using one form of extreme subjectivity to defend against another, as life – mysterious, magical life – slips through their fingers.”
  • “The Tragedy of the Commons”: how ecofascism was smuggled into mainstream thought (BoingBoing) — “We are reaching a “peak indifference” tipping point in the climate debate, where it’s no longer possible to deny the reality of the climate crisis. I think that many of us assumed that when that happened, we’d see a surge of support for climate justice, the diversion of resources from wealth extraction for the super-rich to climate remediation and defense centered on the public good. But that expectation overestimated the extent to which climate denial was motivated by mere greed.”
  • What Would It Take to Shut Down the Entire Internet? (Gizmodo) “One imaginative stumbling block, in playing out the implications of [this] scenario, was how something like that could happen in the first place. And so—without advocating any of the methods described below, or strongly suggesting that hundreds or thousands of like-minded heroes band together to take this sucker down once and for all—…we’ve asked a number of cybersecurity experts how exactly one would go about shutting down the entire internet.”
  • Earning, spending, saving: The currency of influence in open source (Opensource.com) — “Even though you can’t buy it, influence behaves like a form of virtual currency in an open source community: a scarce resource, always needed, but also always in short supply. One must earn it through contributions to an open source project or community. In contrast to monetary currency, however, influence is not transferable. You must earn it for yourself. You can neither give nor receive it as a gift.”
  • The Art of Topophilia: 7 Ways to Love the Place You Live (Art of Manliness) — “It’s not only possible to kindle this kind of topophilic love affair with “sexier” places chock full of well-hyped advantages, but also with so-called undesirable communities that aren’t on the cultural radar. Just as people who may initially appear lowly and unappealing, but have warm and welcoming personalities, come to seem more attractive the more we get to know them, so too can sleepier, less vaunted locales.”
  • A Like Can’t Go Anywhere, But a Compliment Can Go a Long Way (Frank Chimero) — “Passive positivity isn’t enough; active positivity is needed to counterbalance whatever sort of collective conversations and attention we point at social media. Otherwise, we are left with the skewed, inaccurate, and dangerous nature of what’s been built: an environment where most positivity is small, vague, and immobile, and negativity is large, precise, and spreadable.”
  • EU recognises “right to repair” in push to make appliances last longer (Dezeen) — “Not included in the EU right to repair rules are devices such as smart phones and laptops, whose irreplaceable batteries and performance-hampering software updates are most often accused of encouraging throwaway culture.”
  • I’m a Psychotherapist Who Sets 30-Day Challenges Instead of Long-Term Goals. Here’s Why (Inc.) — “Studies show our brains view time according to either “now deadlines” or “someday deadlines.” And “now deadlines” often fall within this calendar month.”

Image by Yung-sen Wu (via The Atlantic)

Technology is the name we give to stuff that doesn’t work properly yet

So said my namesake Douglas Adams. In fact, he said lots of wise things about technology, most of them too long to serve as a title.

I’m in a weird place, emotionally, at the moment, but sometimes this can be a good thing. Being taken out of your usual ‘autopilot’ can be a useful way to see things differently. So I’m going to take this opportunity to share three things that, to be honest, make me a bit concerned about the next few years…

Attempts to put microphones everywhere

Alexa-enabled EVERYTHING

In an article for Slate, Shannon Palus ranks all of Amazon’s new products by ‘creepiness’. The Echo Frames are, in her words:

A microphone that stays on your person all day and doesn’t look like anything resembling a microphone, nor follows any established social codes for wearable microphones? How is anyone around you supposed to have any idea that you are wearing a microphone?

Shannon Palus

When we’re not talking about weapons of mass destruction, it’s not the tech that concerns me, but the context in which the tech is used. As Palus points out, how are you going to be able to have a ‘quiet word’ with anyone wearing glasses ever again?

It’s not just Amazon, of course. Google and Facebook are at it, too.

Full-body deepfakes

Scary stuff

With the exception, perhaps, of populist politicians, I don’t think we’re ready for a post-truth society. Check out the video above, which shows Chinese technology that allows for ‘full body deepfakes’.

The video is embedded, along with a couple of others, in an article for Fast Company by DJ Pangburn, who also notes that AI is learning human body movements from videos. Not only will you be able to prank your friends by showing them a convincing video of your ability to do 100 pull-ups, but the fake news it engenders will mean we can’t trust anything any more.

Neuromarketing

If you clicked on the ‘super-secret link’ in Sunday’s newsletter, you will have come across STEALING UR FEELINGS, which is nothing short of incredible. As powerful as it is in showing you the kind of data that organisations have on us, it’s only the tip of the iceberg.

Kaveh Waddell, in an article for Axios, explains that brains are the last frontier for privacy:

“The sort of future we’re looking ahead toward is a world where our neural data — which we don’t even have access to — could be used” against us, says Tim Brown, a researcher at the University of Washington Center for Neurotechnology.

Kaveh Waddell

This would lead to ‘neuromarketing’, with advertisers knowing what triggers and influences you better than you know yourself. Also, it will no doubt be used for discriminatory purposes and, because it’s coming directly from your brainwaves, short of literally wearing a tinfoil hat, there’s nothing much you can do.


So there we are. Am I being too fearful here?

Friday fluctuations

Have a quick skim through these links that I came across this week and found interesting:

  • Overrated: Ludwig Wittgenstein (Standpoint) — “Wittgenstein’s reputation for genius did not depend on incomprehensibility alone. He was also “tortured”, rude and unreliable. He had an intense gaze. He spent months in cold places like Norway to isolate himself. He temporarily quit philosophy, because he believed that he had solved all its problems in his 1922 Tractatus Logico-Philosophicus, and worked as a gardener. He gave away his family fortune. And, of course, he was Austrian, as so many of the best geniuses are.”
  • EdTech Resistance (Ben Williamson) ⁠— “We should not and cannot ignore these tensions and challenges. They are early signals of resistance ahead for edtech which need to be engaged with before they turn to public outrage. By paying attention to and acting on edtech resistances it may be possible to create education systems, curricula and practices that are fair and trustworthy. It is important not to allow edtech resistance to metamorphose into resistance to education itself.”
  • The Guardian view on machine learning: a computer cleverer than you? (The Guardian) — “The promise of AI is that it will imbue machines with the ability to spot patterns from data, and make decisions faster and better than humans do. What happens if they make worse decisions faster? Governments need to pause and take stock of the societal repercussions of allowing machines over a few decades to replicate human skills that have been evolving for millions of years.”
  • A nerdocratic oath (Scott Aaronson) — “I will never allow anyone else to make me a cog. I will never do what is stupid or horrible because “that’s what the regulations say” or “that’s what my supervisor said,” and then sleep soundly at night. I’ll never do my part for a project unless I’m satisfied that the project’s broader goals are, at worst, morally neutral. There’s no one on earth who gets to say: “I just solve technical problems. Moral implications are outside my scope”.”
  • Privacy is power (Aeon) — “The power that comes about as a result of knowing personal details about someone is a very particular kind of power. Like economic power and political power, privacy power is a distinct type of power, but it also allows those who hold it the possibility of transforming it into economic, political and other kinds of power. Power over others’ privacy is the quintessential kind of power in the digital age.”
  • The Symmetry and Chaos of the World’s Megacities (WIRED) — “Koopmans manages to create fresh-looking images by finding unique vantage points, often by scouting his locations on Google Earth. As a rule, he tries to get as high as he can—one of his favorite tricks is talking local work crews into letting him shoot from the cockpit of a construction crane.”
  • Green cities of the future – what we can expect in 2050 (RNZ) — “In their lush vision of the future, a hyperloop monorail races past in the foreground and greenery drapes the sides of skyscrapers that house communal gardens and vertical farms.”
  • Wittgenstein Teaches Elementary School (Existential Comics) ⁠— “And I’ll have you all know, there is no crying in predicate logic.”
  • Ask Yourself These 5 Questions to Inspire a More Meaningful Career Move (Inc.) — “Introspection on the right things can lead to the life you want.”

Image from Do It Yurtself

It’s not a revolution if nobody loses

Thanks to Clay Shirky for today’s title. It’s true, isn’t it? You can’t claim something to be a true revolution unless someone, some organisation, or some group of people loses.

I’m happy to say that it’s the turn of some older white men to be losing right now, and particularly delighted that those who have spent decades abusing and repressing people are getting their comeuppance.

Enough has been written about Epstein and the fallout from it. You can read about comments made by Richard Stallman, founder of the Free Software Foundation, in this Washington Post article. I’ve only met RMS (as he’s known) in person once, at the Indie Tech Summit five years ago, but it wasn’t a great experience. While I’m willing to cut visionary people some slack, he mostly acted like a jerk.

RMS is a revered figure in Free Software circles, and it’s actually quite difficult not to agree with his stance on many political and technological matters. That being said, he deserves everything he gets for the comments he made about child abuse, for the way he’s treated women over the past few decades, and for his dictator-like approach to software projects.

In an article for WIRED entitled Richard Stallman’s Exit Heralds a New Era in Tech, Noam Cohen writes that we’re entering a new age. I certainly hope so.

This is a lesson we are fast learning about freedom as it is promoted by the tech world. It is not about ensuring that everyone can express their views and feelings. Freedom, in this telling, is about exclusion. The freedom to drive others away. And, until recently, freedom from consequences.

After 40 years of excluding those who didn’t serve his purposes, however, Stallman finds himself excluded by his peers. Freedom.

Maybe freedom, defined in this crude, top-down way, isn’t the be-all, end-all. Creating a vibrant inclusive community, it turns out, is as important to a software project as a coding breakthrough. Or, to put it in more familiar terms—driving away women, investing your hopes in a single, unassailable leader is a critical bug. The best patch will be to start a movement that is respectful, inclusive, and democratic.

Noam Cohen

One of the things that the next leaders of the Free Software Movement will have to address is how to take practical steps to guarantee our basic freedoms in a world where Big Tech provides surveillance to ever-more-powerful governments.

Cory Doctorow is an obvious person to look to in this regard. He has a history of understanding what’s going on and writing about it in ways that people understand. In an article for The Globe and Mail, Doctorow notes that a decline in trust of political systems and experts more generally isn’t because people are more gullible:

40 years of rising inequality and industry consolidation have turned our truth-seeking exercises into auctions, in which lawmakers, regulators and administrators are beholden to a small cohort of increasingly wealthy people who hold their financial and career futures in their hands.

[…]

To be in a world where the truth is up for auction is to be set adrift from rationality. No one is qualified to assess all the intensely technical truths required for survival: even if you can master media literacy and sort reputable scientific journals from junk pay-for-play ones; even if you can acquire the statistical literacy to evaluate studies for rigour; even if you can acquire the expertise to evaluate claims about the safety of opioids, you can’t do it all over again for your city’s building code, the aviation-safety standards governing your next flight, the food-safety standards governing the dinner you just ordered.

Cory Doctorow

What’s this got to do with technology, and in particular Free Software?

Big Tech is part of this problem… because they have monopolies, thanks to decades of buying nascent competitors and merging with their largest competitors, of cornering vertical markets and crushing rivals who won’t sell. Big Tech means that one company is in charge of the social lives of 2.3 billion people; it means another company controls the way we answer every question it occurs to us to ask. It means that companies can assert the right to control which software your devices can run, who can fix them, and when they must be sent to a landfill.

These companies, with their tax evasion, labour abuses, cavalier attitudes toward our privacy and their completely ordinary human frailty and self-deception, are unfit to rule our lives. But no one is fit to be our ruler. We deserve technological self-determination, not a corporatized internet made up of five giant services each filled with screenshots from the other four.

Cory Doctorow

Doctorow suggests breaking up these companies to end their de facto monopolies and level the playing field.

The problem of tech monopolies is something that Stowe Boyd explored in a recent article entitled Are Platforms Commons? Citing previous precedents around railroads, Boyd has many questions, including whether successful platforms should be bound by the legal principles of ‘common carriers’, and finishes with this:

However, just one more question for today: what if ecosystems were constructed so that they were governed by the participants, rather than by the hypercapitalist strivings of the platform owners — such as Apple, Google, Amazon, Facebook — or the heavy-handed regulators? Is there a middle ground where the needs of the end user and those building, marketing, and shipping products and services can be balanced, and a fair share of the profits are distributed not just through common carrier laws but by the shared economics of a commons, and where the platform orchestrator gets a fair share, as well? We may need to shift our thinking from common carrier to commons carrier, in the near future.

Stowe Boyd

The trouble is, simply establishing a commons doesn’t solve all of the problems. In fact, what tends to happen next is well known:

The tragedy of the commons is a situation in a shared-resource system where individual users, acting independently according to their own self-interest, behave contrary to the common good of all users, by depleting or spoiling that resource through their collective action.

Wikipedia

An article in The Economist outlines the usual remedies to the ‘tragedy of the commons’: either governmental regulation (e.g. airspace), or property rights (e.g. land). However, the article cites the work of Elinor Ostrom, a Nobel prizewinning economist, showing that another way is possible:

An exclusive focus on states and markets as ways to control the use of commons neglects a varied menagerie of institutions throughout history. The information age provides modern examples, for example Wikipedia, a free, user-edited encyclopedia. The digital age would not have dawned without the private rewards that flowed to successful entrepreneurs. But vast swathes of the web that might function well as commons have been left in the hands of rich, relatively unaccountable tech firms.

[…]

A world rich in healthy commons would of necessity be one full of distributed, overlapping institutions of community governance. Cultivating these would be less politically rewarding than privatisation, which allows governments to trade responsibility for cash. But empowering commoners could mend rents in the civic fabric and alleviate frustration with out-of-touch elites.

The Economist

I count myself as someone on the left of politics, if that’s how we’re measuring things today. However, I don’t think we need representation at any higher level than is strictly necessary.

In a time when technology allows you, to a great extent, to represent yourself, perhaps we need ways of demonstrating how complex and multi-faceted some issues are? Perhaps we need to try ‘liquid democracy’:

Liquid democracy lies between direct and representative democracy. In direct democracy, participants must vote personally on all issues, while in representative democracy participants vote for representatives once in certain election cycles. Meanwhile, liquid democracy does not depend on representatives but rather on a weighted and transitory delegation of votes. Liquid democracy through elections can empower individuals to become sole interpreters of the interests of the nation. It allows for citizens to vote directly on policy issues, delegate their votes on one or multiple policy areas to delegates of their choosing, delegate votes to one or more people, delegated to them as a weighted voter, or get rid of their votes’ delegations whenever they please.

Wikipedia

I think, given the state that politics is in right now, it’s well worth a try. The problem, of course, is that the losers would be the political elites, the current incumbents. But, hey, it’s not a revolution if nobody loses, right?
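The delegation mechanism described in the Wikipedia quote can be made concrete with a short sketch. This is a minimal illustration of weighted, transitive vote-counting, not any real voting system: the function name, data shapes, and example voters are all hypothetical, and it assumes each voter either casts a direct vote or delegates to exactly one other voter.

```python
# A minimal sketch of liquid-democracy tallying: direct votes count as-is,
# delegated votes follow the chain of delegation until they reach someone
# who voted directly. Cycles (delegations that never reach a direct voter)
# are treated as lost votes. All names here are illustrative.
from collections import Counter

def tally(direct_votes, delegations):
    """direct_votes maps voter -> choice; delegations maps voter -> delegate."""
    counts = Counter()
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until we hit a direct voter or a cycle.
        while current in delegations and current not in direct_votes:
            if current in seen:
                current = None  # cycle detected: this vote is lost
                break
            seen.add(current)
            current = delegations[current]
        if current in direct_votes:
            counts[direct_votes[current]] += 1
    return counts

votes = {"alice": "yes", "bob": "no"}
delegs = {"carol": "alice", "dave": "carol", "erin": "erin"}
print(tally(votes, delegs))  # carol and dave's votes flow to alice: yes=3, no=1
```

Note that revocable delegation, the feature the quote emphasises, is just a matter of removing an entry from the delegation map before the next tally.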

Saturday strikings

This week’s roundup is going out a day later than usual, as yesterday was the Global Climate Strike and Thought Shrapnel was striking too!

Here’s what I’ve been paying attention to this week:

  • How does a computer ‘see’ gender? (Pew Research Center) — “Machine learning tools can bring substantial efficiency gains to analyzing large quantities of data, which is why we used this type of system to examine thousands of image search results in our own studies. But unlike traditional computer programs – which follow a highly prescribed set of steps to reach their conclusions – these systems make their decisions in ways that are largely hidden from public view, and highly dependent on the data used to train them. As such, they can be prone to systematic biases and can fail in ways that are difficult to understand and hard to predict in advance.”
  • The Communication We Share with Apes (Nautilus) — “Many primate species use gestures to communicate with others in their groups. Wild chimpanzees have been seen to use at least 66 different hand signals and movements to communicate with each other. Lifting a foot toward another chimp means “climb on me,” while stroking their mouth can mean “give me the object.” In the past, researchers have also successfully taught apes more than 100 words in sign language.”
  • Why degrowth is the only responsible way forward (openDemocracy) — “If we free our imagination from the liberal idea that well-being is best measured by the amount of stuff that we consume, we may discover that a good life could also be materially light. This is the idea of voluntary sufficiency. If we manage to decide collectively and democratically what is necessary and enough for a good life, then we could have plenty.”
  • 3 times when procrastination can be a good thing (Fast Company) — “It took Leonardo da Vinci years to finish painting the Mona Lisa. You could say the masterpiece was created by a master procrastinator. Sure, da Vinci wasn’t under a tight deadline, but his lengthy process demonstrates the idea that we need to work through a lot of bad ideas before we get down to the good ones.”
  • Why can’t we agree on what’s true any more? (The Guardian) — “What if, instead, we accepted the claim that all reports about the world are simply framings of one kind or another, which cannot but involve political and moral ideas about what counts as important? After all, reality becomes incoherent and overwhelming unless it is simplified and narrated in some way or other.”
  • A good teacher voice strikes fear into grown men (TES) — “A good teacher voice can cut glass if used with care. It can silence a class of children; it can strike fear into the hearts of grown men. A quiet, carefully placed “Excuse me”, with just the slightest emphasis on the “-se”, is more effective at stopping an argument between adults or children than any amount of reason.”
  • Freeing software (John Ohno) — “The only way to set software free is to unshackle it from the needs of capital. And, capital has become so dependent upon software that an independent ecosystem of anti-capitalist software, sufficiently popular, can starve it of access to the speed and violence it needs to consume ever-doubling quantities of to survive.”
  • Young People Are Going to Save Us All From Office Life (The New York Times) — “Today’s young workers have been called lazy and entitled. Could they, instead, be among the first to understand the proper role of work in life — and end up remaking work for everyone else?”
  • Global climate strikes: Don’t say you’re sorry. We need people who can take action to TAKE ACTUAL ACTION (The Guardian) — “Brenda the civil disobedience penguin gives some handy dos and don’ts for your civil disobedience”

All is petty, inconstant, and perishable

So said Marcus Aurelius. Today’s short article is about what happens after you die. We’re all aware of the importance of making a will, particularly if you have dependants. But that’s primarily for your analogue, offline life. What about your digital life?

In a recent TechCrunch article, Jon Evans writes:

I really wish I hadn’t had cause to write this piece, but it recently came to my attention, in an especially unfortunate way, that death in the modern era can have a complex and difficult technical aftermath. You should make a will, of course. Of course you should make a will. But many wills only dictate the disposal of your assets. What will happen to the other digital aspects of your life, when you’re gone?

Jon Evans

The article points to a template for a Digital Estate Planning Document which you can use to list all of the places that you’re active. Interestingly, the suggestion is to appoint a ‘digital executor’, which makes sense: the more technical you are, the less likely it is that other members of your family will be able to follow your instructions.

Meanwhile, the Wikipedia article on digital wills offers some very specific advice, of which the above-mentioned document is only a part:

  1. Appoint someone as online executor
  2. State in a formal document how profiles and accounts are handled
  3. Understand privacy policies
  4. Provide online executor list of websites and logins
  5. State in the will that the online executor must have a copy of the death certificate
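Steps 2 and 4 of that checklist amount to keeping a structured inventory the executor can work through. Here is one hypothetical way to capture it; every name, site, and field below is an invented example, not a prescribed format:

```python
# An illustrative digital estate inventory, following the checklist above.
# All names, sites, and actions are hypothetical placeholders.
digital_estate = {
    "online_executor": "Jane Doe",
    "requires_death_certificate": True,  # per step 5 of the checklist
    "accounts": [
        {"site": "example.com", "username": "doug",
         "access": "shared password manager", "action": "memorialise"},
        {"site": "domain registrar", "username": "doug",
         "access": "shared password manager", "action": "renew key domains"},
        {"site": "old experiment", "username": "doug",
         "access": "shared password manager", "action": "delete"},
    ],
}

# The executor's first pass: a simple worklist of site -> action.
worklist = {a["site"]: a["action"] for a in digital_estate["accounts"]}
for site, action in worklist.items():
    print(f"{site}: {action}")
```

The point is less the code than the discipline: actual credentials stay in the password manager, and the inventory only records where things live and what should happen to them.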

I hadn’t really thought about this, but the chances of identity theft after someone has died are as great as, if not greater than, when they were alive:

An article by Magder in the newspaper The Gazette provides a reminder that identity theft can potentially continue to be a problem even after death if their information is released to the wrong people. This is why online networks and digital executors require proof of a death certificate from a family member of the deceased person in order to acquire access to accounts. There are instances when access may still be denied, because of the prevalence of false death certificates.

Wikipedia

Zooming out a bit, and thinking about this from my own perspective, it’s a good idea to insist on good security practices for your nearest and dearest. Ensure they know how to use password managers and use two-factor authentication on their accounts. If they do this for themselves, they’ll understand how to do it with your accounts when you’re gone.

One thing it’s made me think about is the length of time for which I renew domain names. I tend to just renew mine (I have quite a few) on a yearly basis. But what if the worst happened? Those payment details would be declined, and my sites would be offline in a year or less.

All of this makes me think that the important thing here is to keep things as simple as possible. As I’ve discussed in another article, the way people remember us after we’re gone is kind of important.

Most of us could, I think, divide our online life into three buckets:

  • Really important to my legacy
  • Kind of important
  • Not important

So if, for example, I died tomorrow, the domain renewal for Thought Shrapnel lapsed next year, and a scammer took it over, that would be terrible. It’s part of the reason why I still renew domains I don’t use. So this would go in the ‘really important to my legacy’ bucket.

On the other hand, my experiments with various tools and platforms I’m less bothered about. They would probably go in the ‘not important’ bucket.

Then there’s that awkward middle space. Things like the site for my doctoral thesis, given that the ‘official’ copy is in the Durham University e-Theses repository.

Ultimately, it’s a conversation to have with those close to you. For me, it’s on my mind after the death of a good friend and so something I should get to before life goes back to some version of normality. After all, figuring out someone else’s digital life admin is the last thing people want when they’re already dealing with grief.