Microcast #081 - Anarchy, Federation, and the IndieWeb

    Happy New Year! It's good to be back.

    This week's microcast answers a question from John Johnston about federation and the IndieWeb. I also discuss anarchism and left-libertarianism, for good measure.

    Show notes

    Quick update!

    For approximately the last decade, I've taken an annual hiatus from writing and social media, focusing on inputs rather than outputs. Sometimes that's lasted a month, sometimes two.

    This year, I'm going to be sending out weekly newsletters (only) during November, and then nothing at all in December. As a result, there won't be any more posts on this site until January 2020.

    I'd like to take this opportunity to thank everyone who has commented on my work this year, either publicly or privately. A special thanks goes to those who back Thought Shrapnel via Patreon. I really do appreciate your support!

    Microcast #080 - Redecentralize and MozFest

    We don’t receive wisdom; we must discover it for ourselves after a journey that no one can take us on or spare us

    So said Marcel Proust, that famous connoisseur of les petites madeleines. While I don't share his effete view of the world, I do like French cakes and definitely agree with his sentiments on wisdom.

    Earlier this week, Eylan Ezekiel shared this Nesta Landscape of innovation approaches with our Slack channel. It's what I would call 'slidebait' — carefully crafted to fit onto slide decks in keynotes around the world. It's a smart move because it gets people talking about your organisation.

    Nesta's Landscape of innovation approaches

    In my opinion, how these things are made is more interesting than the end result. There are inevitably value judgements when creating anything like this, and, because Nesta have set it out as overlapping 'spaces', the most obvious takeaway from the above diagram is that those innovation approaches sitting within three overlapping spaces are the 'most valuable' or 'most impactful'. Is that true?

    A previous post on this topic from the Nesta blog explains:

    Although this map is neither exhaustive nor definitive – and at some points it may seem perhaps a little arbitrary, personal choice and preference – we have tried to provide an overview of both commonly used and emerging innovation approaches.

    Bas Leurs (formerly of Nesta)

    When you're working for a well-respected organisation, you have to be really careful, because people can take what you produce as some sort of Gospel Truth. No matter how many caveats you add, people confuse the map with the territory.

    I have some experience with creating a 'map' for a given area, as I was Mozilla's Web Literacy Lead from 2013 to 2015. During that time, I worked with the community to take the Web Literacy Standard Map from v0.1 to v1.5.

    Digital literacies of various types are something I've been paying attention to for around 15 years now. And, let me tell you, I've seen some pretty bad 'maps' and 'frameworks'.

    For example, here's a slide deck for a presentation I did for a European Commission Summer School last year, in which I attempted to take the audience on a journey to decide whether a particular example I showed them was any good:

    If you have a look at Slide 14 onwards, you'll see that the point I was trying to make is that you have no way of knowing whether or not a shiny, good-looking map is any good. The organisation that produced it didn't 'show their work', so you have zero insight into how it was created or the decisions taken along the way. Did their intern knock it up on a short deadline? We'll never know.

    The problem with many think tanks and 'innovation' organisations is that they move on too quickly to the next thing. Instead of sitting with something and letting it mature and flourish, as soon as the next bit of funding comes in, they're off like a dog chasing a shiny car. I'm not sure that's how innovation works.

    Before Mozilla, I worked at Jisc, which at the time funded innovation programmes on behalf of the UK government and disseminated the outcomes. I remember a very simple overview from Jisc's Sustaining and Embedding Innovations project that focused on three stages of innovation:

    Invention                     
    This is about the generation of new ideas e.g. new ways of teaching and learning or new ICT solutions.

    Early Innovation
    This is all about the early practical application of new inventions, often focused in specific areas e.g. a subject discipline or speciality such as distance learning or work-based learning.

    Systemic Innovation
    This is where an institution, for example, will aim to embed an innovation institutionally. 

    Jisc

    The problem with many maps and frameworks, especially around digital skills and innovation, is that they remove any room for ambiguity. So, in an attempt not to come across as vague, they instead become 'dead metaphors'.

    Continuum of Ambiguity

    I don't think I've ever seen an example where, without any contextualisation, an individual or organisation has taken something 'off the shelf' and applied it to achieve uniformly fantastic results. That's not how these things work.

    Humans are complex organisms; we're not machines. For a given input you can't expect the same output. We're not lossless replicators.

    So although it takes time, effort, and resources, you've got to put in the hard yards to see an innovation through all three of those stages outlined by Jisc. Although the temptation is to nail things down initially, the opposite is actually the best way forward. Take people on a journey and get them to invest in what's at stake. Embrace the ambiguity.

    I've written more about this in a post I wrote about a 5-step process for creating a sustainable digital literacies curriculum. It's something I'll be thinking about more as I reboot my consultancy work (through our co-op) for 2020!

    For now, though, remember this wonderful African proverb:

    "If you want to go fast, go alone. If you want to go far, go together." (African proverb)
    CC BY-ND Bryan Mathers

    Microcast #079 - information environments

    This week's microcast is about information environments, the difference between technical and 'people' skills, and sharing your experience.

    Show notes

    Microcast #078 - Values-based organisations

    I've decided to post these microcasts, which I previously made available only through Patreon, here instead.

    Microcasts focus on what I've been up to and thinking about, and also provide a way to answer questions from supporters and other readers/listeners!

    This microcast covers ethics in decision-making for technology companies and (related!) some recent purchases I've made.

    Show notes

    I am not fond of expecting catastrophes, but there are cracks in the universe

    So said Sydney Smith. Let's talk about surveillance. Let's talk about surveillance capitalism and surveillance humanitarianism. But first, let's talk about machine learning and algorithms; in other words, let's talk about what happens after all of that data is collected.

    Writing in The Guardian, Sarah Marsh investigates local councils using "automated guidance systems" in an attempt to save money.

    The systems are being deployed to provide automated guidance on benefit claims, prevent child abuse and allocate school places. But concerns have been raised about privacy and data security, the ability of council officials to understand how some of the systems work, and the difficulty for citizens in challenging automated decisions.

    Sarah Marsh

    The trouble is, they're not particularly effective:

    It has emerged North Tyneside council has dropped TransUnion, whose system it used to check housing and council tax benefit claims. Welfare payments to an unknown number of people were wrongly delayed when the computer’s “predictive analytics” erroneously identified low-risk claims as high risk

    Meanwhile, Hackney council in east London has dropped Xantura, another company, from a project to predict child abuse and intervene before it happens, saying it did not deliver the expected benefits. And Sunderland city council has not renewed a £4.5m data analytics contract for an “intelligence hub” provided by Palantir.

    Sarah Marsh

    When I was at Mozilla, a number of my colleagues had worked on the OFA (Obama For America) campaign. I remember one of them, a DevOps guy, expressing his concern that the infrastructure being built was all well and good while someone 'friendly' was in the White House, but what about what came next?

    Well, we now know what comes next, on both sides of the Atlantic, and we can't put that genie back in its bottle. Swingeing cuts by successive Conservative governments over here, coupled with the Brexit time-and-money pit, mean that there's no attention or cash left.

    If we stop and think about things for a second, we probably wouldn't want to live in a world where machines make decisions for us, based on algorithms devised by nerds. As Rose Eveleth discusses in a scathing article for Vox, this stuff isn't 'inevitable' — nor does it constitute a process of 'natural selection':

    Often consumers don’t have much power of selection at all. Those who run small businesses find it nearly impossible to walk away from Facebook, Instagram, Yelp, Etsy, even Amazon. Employers often mandate that their workers use certain apps or systems like Zoom, Slack, and Google Docs. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or, ‘I’m not on social media,’” says Rumman Chowdhury, a data scientist at Accenture. “You actually have to be so comfortable in your privilege that you can opt out of things.”

    And so we’re left with a tech world claiming to be driven by our desires when those decisions aren’t ones that most consumers feel good about. There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the “natural evolution of technology.” But the “natural evolution of technology” was never a thing to begin with, and it’s time to question what “progress” actually means.

    Rose Eveleth

    I suppose the thing that concerns me the most is people in dire need being subject to impersonal technology for vital and life-saving aid.

    For example, Mark Latonero, writing in The New York Times, talks about the growing dangers around what he calls 'surveillance humanitarianism':

    By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.

    Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.

    Mark Latonero

    It's easy to think that this is an emergency, so we should just do whatever is necessary. But Latonero explains that doing so merely shifts the risk to a later time:

    If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.

    Mark Latonero

    There was a rather startling article in last weekend's newspaper, which I've found online. Hannah Devlin, again writing in The Guardian (a good source of information for those concerned with surveillance), describes a perfect storm of social media and improved processing speeds:

    [I]n the past three years, the performance of facial recognition has stepped up dramatically. Independent tests by the US National Institute of Standards and Technology (Nist) found the failure rate for finding a target picture in a database of 12m faces had dropped from 5% in 2010 to 0.1% this year.

    The rapid acceleration is thanks, in part, to the goldmine of face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the past decade. At one time, scientists would create bespoke databases by laboriously photographing hundreds of volunteers at different angles, in different lighting conditions. By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000 people harvested from search engines – they included celebrities, broadcasters, business people and anyone with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the development of software used by the Chinese state to control its Uighur population.

    In parallel, hardware companies have developed a new generation of powerful processing chips, called Graphics Processing Units (GPUs), uniquely adapted to crunch through a colossal number of calculations every second. The combination of big data and GPUs paved the way for an entirely new approach to facial recognition, called deep learning, which is powering a wider AI revolution.

    Hannah Devlin

    Those of you who have read this far and are expecting some big reveal are going to be disappointed. I don't have any 'answers' to these problems. I guess I've been guilty, like many of us have, of the kind of 'privacy nihilism' mentioned by Ian Bogost in The Atlantic:

    Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

    Ian Bogost

    The only forces we have to stop this are collective action and governmental action. My concern is that we don't have the digital savvy to do the former, and that there's definitely a lack of will in respect of the latter. Troubling times.

    Technology is the name we give to stuff that doesn't work properly yet

    So said my namesake Douglas Adams. In fact, he said lots of wise things about technology, most of them too long to serve as a title.

    I'm in a weird place, emotionally, at the moment, but sometimes this can be a good thing. Being taken out of your usual 'autopilot' can be a useful way to see things differently. So I'm going to take this opportunity to share three things that, to be honest, make me a bit concerned about the next few years...

    Attempts to put microphones everywhere

    Alexa-enabled EVERYTHING

    In an article for Slate, Shannon Palus ranks all of Amazon's new products by 'creepiness'. The Echo Frames are, in her words:

    A microphone that stays on your person all day and doesn’t look like anything resembling a microphone, nor follows any established social codes for wearable microphones? How is anyone around you supposed to have any idea that you are wearing a microphone?

    Shannon Palus

    When we're not talking about weapons of mass destruction, it's not the tech that concerns me, but the context in which the tech is used. As Palus points out, how are you going to be able to have a 'quiet word' with anyone wearing glasses ever again?

    It's not just Amazon, of course. Google and Facebook are at it, too.

    Full-body deepfakes

    [www.youtube.com/watch](https://www.youtube.com/watch?v=8siezzLXbNo)
    Scary stuff

    With the exception, perhaps, of populist politicians, I don't think we're ready for a post-truth society. Check out the video above, which shows Chinese technology that allows for 'full body deepfakes'.

    The video is embedded, along with a couple of others, in an article for Fast Company by DJ Pangburn, who also notes that AI is learning human body movements from videos. Not only will you be able to prank your friends by showing them a convincing video of your ability to do 100 pull-ups, but the fake news it engenders will mean we can't trust anything any more.

    Neuromarketing

    If you clicked on the 'super-secret link' in Sunday's newsletter, you will have come across STEALING UR FEELINGS, which is nothing short of incredible. As powerful as it is in showing you the kind of data that organisations have on us, it's the tip of the iceberg.

    Kaveh Waddell, in an article for Axios, explains that brains are the last frontier for privacy:

    "The sort of future we're looking ahead toward is a world where our neural data — which we don't even have access to — could be used" against us, says Tim Brown, a researcher at the University of Washington Center for Neurotechnology.

    Kaveh Waddell

    This would lead to 'neuromarketing', with advertisers knowing what triggers and influences you better than you know yourself. Also, it will no doubt be used for discriminatory purposes and, because it's coming directly from your brainwaves, short of literally wearing a tinfoil hat, there's nothing much you can do.


    So there we are. Am I being too fearful here?

    Friday fluctuations

    Have a quick skim through these links that I came across this week and found interesting:

    • Overrated: Ludwig Wittgenstein (Standpoint) — "Wittgenstein’s reputation for genius did not depend on incomprehensibility alone. He was also “tortured”, rude and unreliable. He had an intense gaze. He spent months in cold places like Norway to isolate himself. He temporarily quit philosophy, because he believed that he had solved all its problems in his 1922 Tractatus Logico-Philosophicus, and worked as a gardener. He gave away his family fortune. And, of course, he was Austrian, as so many of the best geniuses are."
    • EdTech Resistance (Ben Williamson) ⁠— "We should not and cannot ignore these tensions and challenges. They are early signals of resistance ahead for edtech which need to be engaged with before they turn to public outrage. By paying attention to and acting on edtech resistances it may be possible to create education systems, curricula and practices that are fair and trustworthy. It is important not to allow edtech resistance to metamorphose into resistance to education itself."
    • The Guardian view on machine learning: a computer cleverer than you? (The Guardian) — "The promise of AI is that it will imbue machines with the ability to spot patterns from data, and make decisions faster and better than humans do. What happens if they make worse decisions faster? Governments need to pause and take stock of the societal repercussions of allowing machines over a few decades to replicate human skills that have been evolving for millions of years."
    • A nerdocratic oath (Scott Aaronson) — "I will never allow anyone else to make me a cog. I will never do what is stupid or horrible because “that’s what the regulations say” or “that’s what my supervisor said,” and then sleep soundly at night. I’ll never do my part for a project unless I’m satisfied that the project’s broader goals are, at worst, morally neutral. There’s no one on earth who gets to say: “I just solve technical problems. Moral implications are outside my scope”."
    • Privacy is power (Aeon) — "The power that comes about as a result of knowing personal details about someone is a very particular kind of power. Like economic power and political power, privacy power is a distinct type of power, but it also allows those who hold it the possibility of transforming it into economic, political and other kinds of power. Power over others’ privacy is the quintessential kind of power in the digital age."
    • The Symmetry and Chaos of the World's Megacities (WIRED) — "Koopmans manages to create fresh-looking images by finding unique vantage points, often by scouting his locations on Google Earth. As a rule, he tries to get as high as he can—one of his favorite tricks is talking local work crews into letting him shoot from the cockpit of a construction crane."
    • Green cities of the future - what we can expect in 2050 (RNZ) — "In their lush vision of the future, a hyperloop monorail races past in the foreground and greenery drapes the sides of skyscrapers that house communal gardens and vertical farms."
    • Wittgenstein Teaches Elementary School (Existential Comics) ⁠— "And I'll have you all know, there is no crying in predicate logic."
    • Ask Yourself These 5 Questions to Inspire a More Meaningful Career Move (Inc.) — "Introspection on the right things can lead to the life you want."

    Image from Do It Yurtself

    It’s not a revolution if nobody loses

    Thanks to Clay Shirky for today's title. It's true, isn't it? You can't claim something to be a true revolution unless someone, some organisation, or some group of people loses.

    I'm happy to say that it's the turn of some older white men to be losing right now, and particularly delighted that those who have spent decades abusing and repressing people are getting their comeuppance.

    Enough has been written about Epstein and the fallout from it. You can read about comments made by Richard Stallman, founder of the Free Software Foundation, in this Washington Post article. I've only met RMS (as he's known) in person once, at the Indie Tech Summit five years ago, but it wasn't a great experience. While I'm willing to cut visionary people some slack, he mostly acted like a jerk.

    RMS is a revered figure in Free Software circles, and it's actually quite difficult not to agree with his stance on many political and technological matters. That being said, he deserves everything he gets for the comments he made about child abuse, for the way he's treated women over the past few decades, and for his dictator-like approach to software projects.

    In an article for WIRED entitled Richard Stallman’s Exit Heralds a New Era in Tech, Noam Cohen writes that we're entering a new age. I certainly hope so.

    This is a lesson we are fast learning about freedom as it is promoted by the tech world. It is not about ensuring that everyone can express their views and feelings. Freedom, in this telling, is about exclusion. The freedom to drive others away. And, until recently, freedom from consequences.

    After 40 years of excluding those who didn’t serve his purposes, however, Stallman finds himself excluded by his peers. Freedom.

    Maybe freedom, defined in this crude, top-down way, isn’t the be-all, end-all. Creating a vibrant inclusive community, it turns out, is as important to a software project as a coding breakthrough. Or, to put it in more familiar terms—driving away women, investing your hopes in a single, unassailable leader is a critical bug. The best patch will be to start a movement that is respectful, inclusive, and democratic.

    Noam Cohen

    One of the things that the next leaders of the Free Software Movement will have to address is how to take practical steps to guarantee our basic freedoms in a world where Big Tech provides surveillance to ever-more-powerful governments.

    Cory Doctorow is an obvious person to look to in this regard. He has a history of understanding what's going on and writing about it in ways that people understand. In an article for The Globe and Mail, Doctorow notes that a decline in trust of political systems and experts more generally isn't because people are more gullible:

    40 years of rising inequality and industry consolidation have turned our truth-seeking exercises into auctions, in which lawmakers, regulators and administrators are beholden to a small cohort of increasingly wealthy people who hold their financial and career futures in their hands.

    [...]

    To be in a world where the truth is up for auction is to be set adrift from rationality. No one is qualified to assess all the intensely technical truths required for survival: even if you can master media literacy and sort reputable scientific journals from junk pay-for-play ones; even if you can acquire the statistical literacy to evaluate studies for rigour; even if you can acquire the expertise to evaluate claims about the safety of opioids, you can’t do it all over again for your city’s building code, the aviation-safety standards governing your next flight, the food-safety standards governing the dinner you just ordered.

    Cory Doctorow

    What's this got to do with technology, and in particular Free Software?

    Big Tech is part of this problem... because they have monopolies, thanks to decades of buying nascent competitors and merging with their largest competitors, of cornering vertical markets and crushing rivals who won't sell. Big Tech means that one company is in charge of the social lives of 2.3 billion people; it means another company controls the way we answer every question it occurs to us to ask. It means that companies can assert the right to control which software your devices can run, who can fix them, and when they must be sent to a landfill.

    These companies, with their tax evasion, labour abuses, cavalier attitudes toward our privacy and their completely ordinary human frailty and self-deception, are unfit to rule our lives. But no one is fit to be our ruler. We deserve technological self-determination, not a corporatized internet made up of five giant services each filled with screenshots from the other four.

    Cory Doctorow

    Doctorow suggests breaking up these companies to end their de facto monopolies and level the playing field.

    The problem of tech monopolies is something that Stowe Boyd explored in a recent article entitled Are Platforms Commons? Citing precedents around railroads, Boyd has many questions, including whether successful platforms should be bound by the legal principles of 'common carriers', and finishes with this:

    However, just one more question for today: what if ecosystems were constructed so that they were governed by the participants, rather than by the hypercapitalist strivings of the platform owners — such as Apple, Google, Amazon, Facebook — or the heavy-handed regulators? Is there a middle ground where the needs of the end user and those building, marketing, and shipping products and services can be balanced, and a fair share of the profits are distributed not just through common carrier laws but by the shared economics of a commons, and where the platform orchestrator gets a fair share, as well? We may need to shift our thinking from common carrier to commons carrier, in the near future.

    Stowe Boyd

    The trouble is, simply establishing a commons doesn't solve all of the problems. In fact, what tends to happen next is well known:

    The tragedy of the commons is a situation in a shared-resource system where individual users, acting independently according to their own self-interest, behave contrary to the common good of all users, by depleting or spoiling that resource through their collective action.

    Wikipedia
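
    The dynamic is easy to see in a toy model. The sketch below is mine, with entirely made-up numbers, intended only to illustrate the quoted definition: when each herder takes as much as they individually can, the shared resource collapses, whereas a collectively agreed quota keeps it productive.

        # Toy model of the commons dynamic described above: a shared pasture
        # regrows a little each season, and each herder chooses how much to take.
        # Illustrative numbers only, not drawn from any of the cited articles.

        def simulate(take_per_herder, herders=5, stock=100.0, regrowth=0.25, seasons=20):
            """Return the stock remaining at the end of each season."""
            history = []
            for _ in range(seasons):
                stock -= min(stock, take_per_herder * herders)  # everyone extracts their share
                stock += stock * regrowth                       # whatever is left regrows
                history.append(round(stock, 1))
            return history

        print("self-interested:", simulate(take_per_herder=8)[-1])  # pasture collapses to 0.0
        print("agreed quota:   ", simulate(take_per_herder=4)[-1])  # pasture holds steady at 100.0

    How a community actually agrees on, and sticks to, the sustainable quota is the interesting question.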

    An article in The Economist outlines the usual remedies to the 'tragedy of the commons': either governmental regulation (e.g. airspace), or property rights (e.g. land). However, the article cites the work of Elinor Ostrom, a Nobel prizewinning economist, showing that another way is possible:

    An exclusive focus on states and markets as ways to control the use of commons neglects a varied menagerie of institutions throughout history. The information age provides modern examples, for example Wikipedia, a free, user-edited encyclopedia. The digital age would not have dawned without the private rewards that flowed to successful entrepreneurs. But vast swathes of the web that might function well as commons have been left in the hands of rich, relatively unaccountable tech firms.

    [...]

    A world rich in healthy commons would of necessity be one full of distributed, overlapping institutions of community governance. Cultivating these would be less politically rewarding than privatisation, which allows governments to trade responsibility for cash. But empowering commoners could mend rents in the civic fabric and alleviate frustration with out-of-touch elites.

    The Economist

    I count myself as someone on the left of politics, if that's how we're measuring things today. However, I don't think we need representation at any higher level than is strictly necessary.

    In a time when technology allows you, to a great extent, to represent yourself, perhaps we need ways of demonstrating how complex and multi-faceted some issues are? Perhaps we need to try 'liquid democracy':

    Liquid democracy lies between direct and representative democracy. In direct democracy, participants must vote personally on all issues, while in representative democracy participants vote for representatives once in certain election cycles. Meanwhile, liquid democracy does not depend on representatives but rather on a weighted and transitory delegation of votes. Liquid democracy through elections can empower individuals to become sole interpreters of the interests of the nation. It allows for citizens to vote directly on policy issues, delegate their votes on one or multiple policy areas to delegates of their choosing, delegate votes to one or more people, delegated to them as a weighted voter, or get rid of their votes' delegations whenever they please.

    Wikipedia
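
    To make that mechanism a little more concrete, here's a minimal, hypothetical sketch (mine, not from Wikipedia or any of the articles cited here) of how transitive delegation could be resolved for a single question:

        # Toy liquid-democracy tally: each voter either votes directly or
        # delegates to another voter, and delegated ballots flow transitively
        # until they reach someone who actually cast a vote.

        def tally(direct_votes, delegations):
            """direct_votes: {voter: choice}; delegations: {voter: delegate}."""

            def resolve(voter, seen):
                if voter in direct_votes:
                    return direct_votes[voter]
                if voter in seen or voter not in delegations:
                    return None  # delegation cycle or no vote cast: ballot is discarded
                return resolve(delegations[voter], seen | {voter})

            results = {}
            for voter in sorted(set(direct_votes) | set(delegations)):
                choice = resolve(voter, set())
                if choice is not None:
                    results[choice] = results.get(choice, 0) + 1
            return results

        # Alice and Bob vote directly; Carol delegates to Alice, and Dave
        # delegates to Carol, so his ballot also ends up with Alice's choice.
        print(tally({"alice": "yes", "bob": "no"}, {"carol": "alice", "dave": "carol"}))
        # -> {'yes': 3, 'no': 1}

    Real proposals layer per-topic delegation, revocation, and vote weighting on top of this, but the core idea really is that simple: a ballot flows along a chain of trust until somebody casts it.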

    I think, given the state that politics is in right now, it's well worth a try. The problem, of course, is that the losers would be the political elites, the current incumbents. But, hey, it's not a revolution if nobody loses, right?

    Saturday strikings

    This week's roundup is going out a day later than usual, as yesterday was the Global Climate Strike and Thought Shrapnel was striking too!

    Here's what I've been paying attention to this week:

    • How does a computer ‘see’ gender? (Pew Research Center) — "Machine learning tools can bring substantial efficiency gains to analyzing large quantities of data, which is why we used this type of system to examine thousands of image search results in our own studies. But unlike traditional computer programs – which follow a highly prescribed set of steps to reach their conclusions – these systems make their decisions in ways that are largely hidden from public view, and highly dependent on the data used to train them. As such, they can be prone to systematic biases and can fail in ways that are difficult to understand and hard to predict in advance."
    • The Communication We Share with Apes (Nautilus) — "Many primate species use gestures to communicate with others in their groups. Wild chimpanzees have been seen to use at least 66 different hand signals and movements to communicate with each other. Lifting a foot toward another chimp means “climb on me,” while stroking their mouth can mean “give me the object.” In the past, researchers have also successfully taught apes more than 100 words in sign language."
    • Why degrowth is the only responsible way forward (openDemocracy) — "If we free our imagination from the liberal idea that well-being is best measured by the amount of stuff that we consume, we may discover that a good life could also be materially light. This is the idea of voluntary sufficiency. If we manage to decide collectively and democratically what is necessary and enough for a good life, then we could have plenty."
    • 3 times when procrastination can be a good thing (Fast Company) — "It took Leonardo da Vinci years to finish painting the Mona Lisa. You could say the masterpiece was created by a master procrastinator. Sure, da Vinci wasn’t under a tight deadline, but his lengthy process demonstrates the idea that we need to work through a lot of bad ideas before we get down to the good ones."
    • Why can’t we agree on what’s true any more? (The Guardian) — "What if, instead, we accepted the claim that all reports about the world are simply framings of one kind or another, which cannot but involve political and moral ideas about what counts as important? After all, reality becomes incoherent and overwhelming unless it is simplified and narrated in some way or other."
    • A good teacher voice strikes fear into grown men (TES) — "A good teacher voice can cut glass if used with care. It can silence a class of children; it can strike fear into the hearts of grown men. A quiet, carefully placed “Excuse me”, with just the slightest emphasis on the “-se”, is more effective at stopping an argument between adults or children than any amount of reason."
    • Freeing software (John Ohno) — "The only way to set software free is to unshackle it from the needs of capital. And, capital has become so dependent upon software that an independent ecosystem of anti-capitalist software, sufficiently popular, can starve it of access to the speed and violence it needs to consume ever-doubling quantities of to survive."
    • Young People Are Going to Save Us All From Office Life (The New York Times) — "Today’s young workers have been called lazy and entitled. Could they, instead, be among the first to understand the proper role of work in life — and end up remaking work for everyone else?"
    • Global climate strikes: Don’t say you’re sorry. We need people who can take action to TAKE ACTUAL ACTION (The Guardian) — "Brenda the civil disobedience penguin gives some handy dos and don’ts for your civil disobedience"

    All is petty, inconstant, and perishable

    So said Marcus Aurelius. Today's short article is about what happens after you die. We're all aware of the importance of making a will, particularly if you have dependants. But that's primarily for your analogue, offline life. What about your digital life?

    In a recent TechCrunch article, Jon Evans writes:

    I really wish I hadn’t had cause to write this piece, but it recently came to my attention, in an especially unfortunate way, that death in the modern era can have a complex and difficult technical aftermath. You should make a will, of course. Of course you should make a will. But many wills only dictate the disposal of your assets. What will happen to the other digital aspects of your life, when you’re gone?

    Jon Evans

    The article points to a template for a Digital Estate Planning Document which you can use to list all of the places where you're active. Interestingly, the suggestion is to have a 'digital executor', which makes sense: the more technical you are, the less likely it is that other members of your family will be able to follow your instructions.

    The Wikipedia article on digital wills, meanwhile, has some very specific advice, of which the above-mentioned document is only a part (I've added a rough sketch of the step-4 inventory just after the list):

    1. Appoint someone as online executor
    2. State in a formal document how profiles and accounts are handled
    3. Understand privacy policies
    4. Provide the online executor with a list of websites and logins
    5. State in the will that the online executor must have a copy of the death certificate
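
    As for step 4, here's a minimal, entirely hypothetical sketch of what such an inventory could look like as structured data. The service names and URLs are placeholders, and actual credentials belong in a password manager rather than in the document itself:

        # Hypothetical digital-estate inventory an online executor could work
        # through. It records where things are and what should happen to them;
        # passwords stay in a password manager, not here.

        DIGITAL_ESTATE = [
            {"service": "Domain registrar", "url": "https://registrar.example.com",
             "bucket": "really important", "action": "renew domains for ten years"},
            {"service": "Blog hosting", "url": "https://host.example.com",
             "bucket": "really important", "action": "keep the site online and archive it"},
            {"service": "Social media", "url": "https://social.example.com",
             "bucket": "kind of important", "action": "post a final notice, then memorialise"},
            {"service": "Experimental tools", "url": "https://tool.example.com",
             "bucket": "not important", "action": "close the account"},
        ]

        # Group entries so the executor can tackle the most important ones first.
        for bucket in ("really important", "kind of important", "not important"):
            print(bucket.upper())
            for entry in DIGITAL_ESTATE:
                if entry["bucket"] == bucket:
                    print(f"  {entry['service']}: {entry['action']} ({entry['url']})")

    The 'bucket' field anticipates the three-way triage I come back to further down this post.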

    I hadn't really thought about this, but the chances of identity theft after someone has died are as great as, if not greater than, when they were alive:

    An article by Magder in the newspaper The Gazette provides a reminder that identity theft can potentially continue to be a problem even after death if their information is released to the wrong people. This is why online networks and digital executors require proof of a death certificate from a family member of the deceased person in order to acquire access to accounts. There are instances when access may still be denied, because of the prevalence of false death certificates.

    Wikipedia

    Zooming out a bit, and thinking about this from my own perspective, it's a good idea to insist on good security practices for your nearest and dearest. Ensure they know how to use password managers and use two-factor authentication on their accounts. If they do this for themselves, they'll understand how to do it with your accounts when you're gone.

    One thing it's made me think about is the length of time for which I renew domain names. I tend to just renew mine (I have quite a few) on a yearly basis. But what if the worst happened? Those payment details would be declined, and my sites would be offline in a year or less.

    All of this makes me think that the important thing here is to keep things as simple as possible. As I've discussed in another article, the way people remember us after we're gone is kind of important.

    Most of us could, I think, divide our online life into three buckets:

    • Really important to my legacy
    • Kind of important
    • Not important

    So if, for example, I died tomorrow, the domain renewal for Thought Shrapnel lapsed next year, and a scammer took it over, that would be terrible. It's part of the reason why I still renew domains I don't use. So this would go in the 'really important to my legacy' bucket.

    On the other hand, my experiments with various tools and platforms I'm less bothered about. They would probably go in the 'not important' bucket.

    Then there's that awkward middle space. Things like the site for my doctoral thesis when the 'official' copy is in the Durham University e-Theses repository.

    Ultimately, it's a conversation to have with those close to you. For me, it's on my mind after the death of a good friend and so something I should get to before life goes back to some version of normality. After all, figuring out someone else's digital life admin is the last thing people want when they're already dealing with grief.

    If you change nothing, nothing will change

    What would you do if you knew you had 24 hours left to live? I suppose it would depend on context. Is this catastrophe going to affect everyone, or only you? I'm not sure I'd know what to do in the former case, but once I'd said my goodbyes to my family, I'm pretty sure I know what I'd do in the latter.

    Yep, I would go somewhere by myself and write.

    To me, the reason both reading and writing can feel so freeing is that they allow you to mentally escape your physical constraints. It almost doesn't matter what's happening to your body or anything around you while you lose yourself in someone else's words, or you create your own.


    I came across an interesting blog recently. It had a single post, entitled Consume less, create more. In it, the author, 'Tom', explains that the 1,600 words he's shared were written over the course of a month after he realised that he was spending his life consuming instead of creating.

    A lot of ink has been spilled about the perils of modern technology. How it distracts us, how it promotes unhealthy comparisons with others, how it makes us fat, how it limits social interaction, how it spies on us. And all of these things are probably true, to some extent.

    But the real tragedy of modern technology is that it’s turned us into consumers. Our voracious consumption of media parallels our consumption of fossil fuels, corn syrup, and plastic straws. And although we’re starting to worry about our consumption of those physical goods, we seem less concerned about our consumption of information.

    We treat information as necessarily good, and comfort ourselves with the feeling that whatever article or newsletter we waste our time with is actually good for us. We equate reading with self improvement, even though we forget most of what we’ve read, and what we remember isn’t useful.

    TJCX

    I feel that, at this juncture in history, we've honed surveillance-via-smartphone into the perfect tool for maximising FOMO. For those growing up in the goldfish bowl of the modern world, this may feel as normal as the 'water' in which they are 'swimming'. But for the rest of us, it can still feel... odd.

    This is going to sound pretty amazing, but I don't think there have been many days in my adult life when I've been able to go somewhere without anyone else knowing. As a kid? Absolutely. I can vividly remember, for example, cycling to a corn field and finding a place to lie down and look at the sky, knowing that no-one could see me. It was time spent with myself, unmediated and unfiltered.

    This used not to be unusual. People had private inner lives that were manifested in private actions. In a recent column in The Guardian, Grace Dent expanded on this.

    Yes life after iPhones is marvellous, but in the 90s I ran wild across London, up to all kinds of no good, staying out for days, keeping my own counsel entirely. My parents up north would not speak to me for weeks. Sometimes, life back in the days when we had one shit Nokia and a landline between five friends seems blissful. One was permitted lost weekends and periods of secret skulduggery or just to lie about reading a paperback without the sense six people were owed a text message. Yes, things took longer, and one needed to make plans and keep them, but being off the grid was normal. Today, not replying... is a truly radical act.

    Grace Dent

    "Not replying... is a truly radical act". Wow. Let that sink in for a moment.


    Given all this, it's no wonder that, in our always-on culture, we have so much 'life admin' to concern ourselves with. Previous generations may have had 'pay the bills' on their to-do list, but it wasn't nudged down the list by 'inform a person I kind of know on Twitter that their view on Brexit is incorrect'.

    All of these things build up incrementally until they eventually become unsustainable. It's death by a thousand cuts. As I've quoted many times before, Jocelyn K. Glei's question is always worth asking: who are you without the doing?


    Realistically, most of our days are likely to involve some use of digital communication tools. We can't always be throwing off our shackles to live the life of a flâneur. To facilitate space to create, therefore, it's important to draw some red lines. This is what Michael Bernstein talks about in Sorry, we can't join your Slack.

    Saying yes to joining client Slack channels would mean that down the line we’d feel more exhausted but less accomplished. We’d have more superficial “friends,” but wouldn’t know how to deal with products much better than we did now. We’d be on the hook all the time, and have less of an opportunity to consider our responses.

    Michael Bernstein

    In other words, being more available and more 'social' takes time away from more important pursuits. After all, time is the ultimate zero-sum game.


    Ultimately, I guess it's about learning to see the world differently. There may very well be a 'new normal' that we've begun to internalise but, for now at least, we have the choice to use that 'flexibility' we hear so much about to our advantage.

    This is why self-reflection is so important, as Wanda Thibodeaux explains in an article for Inc.

    In sum, elimination of stress and the acceptance of peace comes not necessarily from changing the world, but rather from clearing away all the learned clutter that prevents us from changing our view of the world. Even the biggest systemic "realities" (e.g., work "HAS" to happen from 9 a.m. to 5 p.m.) are up for reinterpretation and rewriting, and arguably, inner calm and innovation both stem from the same challenge of perceptions.

    Wanda Thibodeaux

    To do this, you have to have already decided the purpose for which you're using your tools, including the ones provided by your smartphone.

    Need more specific advice on that? I suggest you go and read this really handy post by Ryan Holiday: A Radical Guide to Spending Less Time on Your Phone. The advice to focus on which apps you actually need on your phone is excellent; I deleted over 100!

    You may also find useful a post I wrote over on my blog a few months ago about how changing the 'launcher' on your phone can change your life.


    If you make some changes after reading this, I'd be interested in hearing how you get on. Let me know in the comments section below!


    Quotation-as-title from Rajkummar Rao.

    Friday feudalism

    Check out these things I discovered this week, and wanted to pass along:

    • Study shows some political beliefs are just historical accidents (Ars Technica) — "Obviously, these experiments aren’t exactly like the real world, where political leaders can try to steer their parties. Still, it’s another way to show that some political beliefs aren’t inviolable principles—some are likely just the result of a historical accident reinforced by a potent form of tribal peer pressure. And in the early days of an issue, people are particularly susceptible to tribal cues as they form an opinion."
    • Please, My Digital Archive. It’s Very Sick. (Lapham's Quarterly) — "An archivist’s dream is immaculate preservation, documentation, accessibility, the chance for our shared history to speak to us once more in the present. But if the preservation of digital documents remains an unsolvable puzzle, ornery in ways that print materials often aren’t, what good will our archiving do should it become impossible to inhabit the world we attempt to preserve?"
    • So You’re 35 and All Your Friends Have Already Shed Their Human Skins (McSweeney's) — "It’s a myth that once you hit 40 you can’t slowly and agonizingly mutate from a human being into a hideous, infernal arachnid whose gluttonous shrieks are hymns to the mad vampire-goddess Maggorthulax. You have time. There’s no biological clock ticking. The parasitic worms inside you exist outside of our space-time continuum."
    • Investing in Your Ordinary Powers (Breaking Smart) — "The industrial world is set up to both encourage and coerce you to discover, as early as possible, what makes you special, double down on it, and build a distinguishable identity around it. Your specialness-based identity is in some ways your Industrial True Name. It is how the world picks you out from the crowd."
    • Browser Fingerprinting: An Introduction and the Challenges Ahead (The Tor Project) — "This technique is so rooted in mechanisms that exist since the beginning of the web that it is very complex to get rid of it. It is one thing to remove differences between users as much as possible. It is a completely different one to remove device-specific information altogether."
    • What is a Blockchain Phone? The HTC Exodus explained (giffgaff) — "HTC believes that in the future, your phone could hold your passport, driving license, wallet, and other important documents. It will only be unlockable by you which makes it more secure than paper documents."
    • Debate rages in Austria over enshrining use of cash in the constitution (EURACTIV) — "Academic and author Erich Kirchler, a specialist in economic psychology, says in Austria and Germany, citizens are aware of the dangers of an overmighty state from their World War II experience."
    • Cory Doctorow: DRM Broke Its Promise (Locus magazine) — "We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittle­ness."
    • Five Books That Changed Me In One Summer (Warren Ellis) — "I must have been around 14. Rayleigh Library and the Oxfam shop a few doors down the high street from it, which someone was clearly using to pay things forward and warp younger minds."