Category: 21st Century Society

Don Norman on human-centred technologies

In this article, Don Norman (famous for his seminal work The Design of Everyday Things) takes to task our technology-centric view of the world:

We need to switch from a technology-centric view of the world to a people-centric one. We should start with people’s abilities and create technology that enhances people’s capabilities: Why are we doing it backwards?

Instead of focusing on what we as humans require, we start with what technology is able to provide. Norman argues that it is us serving technology rather than the other way around:

Just think about your life today, obeying the dictates of technology–waking up to alarm clocks (even if disguised as music or news); spending hours every day fixing, patching, rebooting, inventing work-arounds; answering the constant barrage of emails, tweets, text messages, and instant this and that; being fearful of falling for some new scam or phishing attack; constantly upgrading everything; and having to remember an unwieldy number of passwords and personal inane questions for security, such as the name of your least-liked friend in fourth grade. We are serving the wrong masters.

I particularly like his example of car accidents. We’re fed the line that autonomous vehicles will dramatically cut the number of accidents on our roads, but is that right?

Over 90% of industrial and automobile accidents are blamed on human error with distraction listed as a major cause. Can this be true? Look, if 5% of accidents were caused by human error, I would believe it. But when it is 90%, there must be some other reason, namely, that people are asked to do tasks that people should not be doing. Tasks that violate fundamental human abilities.

Consider the words we use to describe the result: human error, distraction, lack of attention, sloppiness–all negative terms, all implying the inferiority of people. Distraction, in particular, is the byword of the day–responsible for everything from poor interpersonal relationships to car accidents. But what does the term really mean?

It’s a good article, particularly at a time when we’re thinking about robots and artificial intelligence replacing humans in the jobs market. It certainly made me think about my technology choices.

Source: Fast Company


On ‘radical incompetence’

One of the reasons I’ve retreated from Twitter since May of last year is the rise of angry politics. I can’t pay attention to everything that’s happening all of the time. And I certainly haven’t got the energy to deal with problems that aren’t materially affecting me or the people I care about.

Brexit, then, is a strange one. On the one hand, I participated in a democratic election to elect a government. Subsequently, a government formed from a party I didn’t vote for called a referendum on the United Kingdom’s membership of the European Union. As we all know, the result was close, and based on lies and illegal funding. Nevertheless, perhaps as a citizen I should participate democratically and then get on with my own life.

On the other hand of course, this isn’t politics as usual. There’s been a rise in nationalistic fervour that we haven’t seen since the 1930s. It’s alarming, particularly at a time when smartphones, social media, and the ever-increasing speed of the news cycle make it difficult for citizens to pay sustained attention to anything.

This article in The New York Times zooms out from the particular issues of Trump and Brexit to look at the wider picture. It’s not mentioned specifically in the article, but documentary evidence of struggles around political power and sovereignty goes back at least to the Magna Carta in England. One way of looking at that is that King John was the Donald Trump of his time, so the barons took power from him.

It’s easy to stand for the opposite of something: you don’t have to do any of the work. All that’s necessary is to point out problems, flaws, and issues with the person, organisation, or concept that you’re attacking. So demagogues and iconoclasts such as Boris Johnson and Donald Trump, whose lack of a coherent position wouldn’t work at any other time, all of a sudden gain credibility in times of upheaval.

Like so many political metaphors, the distinction between “hard” and “soft” is misleading. Any Brexiteer wanting to perform machismo will reach for the “hard” option. But as has become increasingly plain over the past two years, and especially over recent weeks, nobody has any idea what “hard” Brexit actually means in policy terms. It is not so much hard as abstract. “Soft” Brexit might sound weak or halfhearted, but it is also the only policy proposal that might actually work.

What appear on the surface to be policy disputes over Britain’s relationship with Brussels are actually fundamental conflicts regarding the very nature of political power. In this, the arguments underway inside Britain’s Conservative Party speak of a deeper rift within liberal democracies today, which shows no sign of healing. In conceptual terms, this is a conflict between those who are sympathetic to government and those striving to reassert sovereignty.

I’m writing this on the train home from London. I haven’t participated in or seen any of the protests around Trump’s visit to the UK. I have, however, seen plenty of people holding placards and banners, obviously on their way to, or from, a rally.

My concern about getting angry in bite-sized chunks on Twitter or reducing your issues with someone like Trump or Johnson to a placard is that you’re playing them at their own game. They’ll win. They thrive on the oxygen of attention. Cut it off and they’ll wither and be forced to slink off to whatever hole they originally crawled from.

A common thread linking “hard” Brexiteers to nationalists across the globe is that they resent the very idea of governing as a complex, modern, fact-based set of activities that requires technical expertise and permanent officials.

[…]

The more extreme fringes of British conservatism have now reached the point that American conservatives first arrived at during the Clinton administration: They are seeking to undermine the very possibility of workable government. For hard-liners such as Jacob Rees-Mogg, it is an article of faith that Britain’s Treasury Department, the Bank of England and Downing Street itself are now conspiring to deny Britain its sovereignty.

What we’re talking about here is ideology. There’s always been a fundamental difference between the left and the right of politics that’s well enough understood not to need rehearsing here. But issues around sovereignty, nationalism, and self-determination actually cut across the traditional political spectrum. That’s why, for example, Jeremy Corbyn, leader of the British Labour Party, can oppose the EU for vastly different reasons to Jacob Rees-Mogg, arch-Brexiteer.

I haven’t got the energy to go into it here, but to me the crisis in confidence in expertise comes from a warping of the meritocratic system that was supposed to emancipate the working class, break down class structures, and bring forth a fairer society. What’s actually happened is that the political elites have joined with the wealthy to own the means of cultural reproduction. As a result, no-one now seems to trust them.

What happens if sections of the news media, the political classes and the public insist that only sovereignty matters and that the complexities of governing are a lie invented by liberal elites? For one thing, it gives rise to celebrity populists, personified by Mr. Trump, whose inability to engage patiently or intelligently with policy issues makes it possible to sustain the fantasy that governing is simple. What Mr. Johnson terms the “method” in Mr. Trump’s “madness” is a refusal to listen to inconvenient evidence, of the sort provided by officials and experts.

There have been many calls within my lifetime for a ‘new politics’. It’s nearly always a futile project, and just means a changing of the faces on our screens while the political elite continue their machinations. I’m not super-hopeful, but I do wonder whether our new-found connectedness, if mediated by decentralised technologies, could change that.

Source: The New York Times

On living in public

In this post, Austin Kleon, backpedaling a little from the approach he seemed to promote in Show Your Work!, talks about the problems we all face with ‘living in public’.

It seems ridiculous to say, but 2013, the year I wrote the book, was a simpler time. Social media seemed much more benign to me. Back then, the worst I felt social media did was waste your time. Now, the worst social media does is cripple democracy and ruin your soul.

Kleon quotes Warren Ellis, who writes one of my favourite newsletters (his blog is pretty good, too):

You don’t have to live in public on the internet if you don’t want to. Even if you’re a public figure, or micro-famous like me. I don’t follow anyone on my public Instagram account. No shade on those who follow me there, I’m glad you give me your time – but I need to be in my own space to get my shit done. You want a “hack” for handling the internet? Create private social media accounts, follow who you want and sit back and let your bespoke media channels flow to you. These are tools, not requirements. Don’t let them make you miserable. Tune them until they bring you pleasure.

In May 2017, after being on Twitter for over a decade, I deleted my Twitter history, and now delete tweets on a weekly basis. Now, I hang out on a social network that I co-own called social.coop and which is powered by a federated, decentralised service called Mastodon.

I still publish my work, including Thought Shrapnel posts, to Twitter, LinkedIn, etc. It’s just not where I spend most of my time. On balance, I’m happier for it.

Source: Austin Kleon

Problems with the present and future of work are of our own making

This is a long essay in which the RSA announces that, along with its partners (one of which, inevitably, is Google) it’s launching the Future Work Centre. I’ve only selected quotations from the first section here.

From autonomous vehicles to cancer-detecting algorithms, and from picking and packing machines to robo-advisory tools used in financial services, every corner of the economy has begun to feel the heat of a new machine age. The RSA uses the term ‘radical technologies’ to describe these innovations, which stretch from the shiny and much talked about, including artificial intelligence and robotics, to the prosaic but equally consequential, such as smartphones and digital platforms.

I highly recommend reading Adam Greenfield’s book Radical Technologies: The Design of Everyday Life, if you haven’t already. Greenfield isn’t beholden to corporate partners, and lets rip.

What is certain is that the world of work will evolve as a direct consequence of the invention and adoption of radical technologies — and in more ways than we might imagine. Alongside eliminating and creating jobs, these innovations will alter how workers are recruited, monitored, organised and paid. Companies like HireVue (video interviewing), Percolata (schedule setting) and Veriato (performance monitoring) are eager to reinvent all aspects of the workplace.

Indeed, and a lot of what’s going on is compliance and surveillance of workers smuggled in through the back door while people focus on ‘innovation’.

The main problems outlined with the current economy, which is being ‘disrupted’ by technology, are:

  1. Declining wages (in real terms)
  2. Economic insecurity (gig economy, etc.)
  3. Working conditions
  4. Bullshit jobs
  5. Work-life balance

Taken together, these findings paint a picture of a dysfunctional labour market — a world of work that offers little in the way of material security, let alone satisfaction. But that may be going too far. Overall, most workers enjoy what they do and relish the careers they have established. The British Social Attitudes survey found that twice as many people in 2015 as in 1989 strongly agreed they would enjoy having a job even if their financial circumstances did not require it.

The problem is not with work per se but rather with how it is orchestrated in the modern economy, and how rewards are meted out. As a society we have a vision of what work could and should look like — well paid, protective, meaningful, engaging — but the reality too often falls short.

I doubt the RSA would ever say it without huge caveats, but the problem is neoliberalism. It’s all very well looking to the past for examples of technological disruption, but that was qualitatively different from what’s going on now. Organisations can run on a skeleton staff and make obscene profits for a very few people.

I feel like warnings such as ‘the robots are coming’ and ‘be careful not to choose an easily-automated occupation!’ are a smokescreen for decisions that people are making about the kind of society they want to live in. It seems like that’s one where most of us (the ‘have nots’) are expendable, while the 0.01% (the ‘haves’) live in historically-unparalleled luxury.

In summary, the lives of workers will be shaped by more technologies than AI and robotics, and in more ways than through the loss of jobs.

Fears surrounding automation should be taken seriously. Yet anxiety over job losses should not distract us from the subtler impacts of radical technologies, including on recruitment practices, employee monitoring and people’s work-life balance. Nor should we become so fixated on AI and robotics that we lose sight of the conventional technologies bringing about change in the present moment.

Exactly. Let’s fix 2018 before we start thinking about 2040, eh?

Source: The RSA

Cory Doctorow on the corruption at the heart of Facebook

I like Cory Doctorow. He’s a gifted communicator who wears his heart on his sleeve. In this article, he talks about Facebook and how what it’s wrought is a result of the corruption at its very heart.

It’s great that the privacy-matters message is finally reaching a wider audience, and it’s exciting to think that we’re approaching a tipping point for indifference to privacy and surveillance.

But while the acknowledgment of the problem of Big Tech is most welcome, I am worried that the diagnosis is wrong.

The problem is that we’re confusing automated persuasion with automated targeting. Laughable lies about Brexit, Mexican rapists, and creeping Sharia law didn’t convince otherwise sensible people that up was down and the sky was green.

Rather, the sophisticated targeting systems available through Facebook, Google, Twitter, and other Big Tech ad platforms made it easy to find the racist, xenophobic, fearful, angry people who wanted to believe that foreigners were destroying their country while being bankrolled by George Soros.

So, for example, people seem to think that Facebook advertising caused people to vote for Trump. As if they were going to vote for someone else, and then changed their minds as a direct result of viewing ads. That’s not how it works.

Companies such as Cambridge Analytica might claim that they can rig elections and change people’s minds, but they’re not actually that sophisticated.

Cambridge Analytica are like stage mentalists: they’re doing something labor-intensive and pretending that it’s something supernatural. A stage mentalist will train for years to learn to quickly memorize a deck of cards and then claim that they can name your card thanks to their psychic powers. You never see the unglamorous, unimpressive memorization practice. Cambridge Analytica uses Facebook to find racist jerks and tell them to vote for Trump and then they claim that they’ve discovered a mystical way to get otherwise sensible people to vote for maniacs.

This isn’t to say that persuasion is impossible. Automated disinformation campaigns can flood the channel with contradictory, seemingly plausible accounts for the current state of affairs, making it hard for a casual observer to make sense of events. Long-term repetition of a consistent narrative, even a manifestly unhinged one, can create doubt and find adherents – think of climate change denial, or George Soros conspiracies, or the anti-vaccine movement.

These are long, slow processes, though, that make tiny changes in public opinion over the course of years, and they work best when there are other conditions that support them – for example, fascist, xenophobic, and nativist movements that are the handmaidens of austerity and privation. When you don’t have enough for a long time, you’re ripe for messages blaming your neighbors for having deprived you of your fair share.

Advertising and influencing works best when you provide a message that people already agree with in a way that they can easily share with others. The ‘long, slow processes’ that Doctorow refers to have been practised offline as well (think of Nazi propaganda, for example). Dark adverts on Facebook are tapping into feelings and reactions that aren’t peculiar to the digital world.

Facebook has thrived by providing ways for people to connect and communicate with one another. Unfortunately, because they’re so focused on profit over people, they’ve done a spectacularly bad job at making sure that the spaces in which people connect are healthy spaces that respect democracy.

There’s an old-fashioned word for this: corruption. In corrupt systems, a few bad actors cost everyone else billions in order to bring in millions – the savings a factory can realize from dumping pollution in the water supply are much smaller than the costs we all bear from being poisoned by effluent. But the costs are widely diffused while the gains are tightly concentrated, so the beneficiaries of corruption can always outspend their victims to stay clear.

Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters.

That last phrase is right on the money.

Source: Locus magazine

Attention scarcity as an existential threat

This post is from Albert Wenger, a partner at a New York-based early-stage VC firm focused on investing in disruptive networks. It’s taken from his book World After Capital, currently in draft form.

In this section, Wenger is concerned with attention scarcity, which he believes to be both a threat to humanity, and an opportunity for us.

On the threat side, for example, we are not working nearly hard enough on how to recapture CO2 and other greenhouse gases from the atmosphere. Or on monitoring asteroids that could strike earth, and coming up with ways of deflecting them. Or containing the outbreak of the next avian flu: we should have a lot more collective attention dedicated to early detection and coming up with vaccines and treatments.

The reason the world’s population is so high is almost entirely due to the technological progress we’ve made. We’re simply better at keeping human beings alive.

On the opportunity side, far too little human attention is spent on environmental cleanup, free educational resources, and basic research (including the foundations of science), to name just a few examples. There are so many opportunities we could dedicate attention to that over time have the potential to dramatically improve quality of life here on Earth not just for humans but also for other species.

Interestingly, he comes up with a theory as to why we haven’t heard from any alien species yet:

I am proposing this as a (possibly new) explanation for the Fermi Paradox, which famously asks why we have not yet detected any signs of intelligent life elsewhere in our rather large universe. We now even know that there are plenty of goldilocks planets available that could harbor life forms similar to those on Earth. Maybe what happens is that all civilizations get far enough to where they generate huge amounts of information, but then they get done in by attention scarcity. They collectively take their eye off the ball of progress and are not prepared when something really bad happens such as a global pandemic.

Attention scarcity, then, has the potential to become an existential threat to our species. Pay attention to the wrong things and we could either neglect to avoid a disaster, or cause one of our own making.

Source: Continuations

Our irresistible screens of splendour

Apple is touting a new feature in the latest version of iOS that helps you reduce the amount of time you spend on your smartphone. Facebook are doing something similar. As this article in The New York Times notes, that’s no accident:

There’s a reason tech companies are feeling this tension between making phones better and worrying they are already too addictive. We’ve hit what I call Peak Screen.

For much of the last decade, a technology industry ruled by smartphones has pursued a singular goal of completely conquering our eyes. It has given us phones with ever-bigger screens and phones with unbelievable cameras, not to mention virtual reality goggles and several attempts at camera-glasses.

The article even gives the example of Augmented Reality LEGO play sets which actively encourage you to stop building and spend more time on screens!

Tech has now captured pretty much all visual capacity. Americans spend three to four hours a day looking at their phones, and about 11 hours a day looking at screens of any kind.

So tech giants are building the beginning of something new: a less insistently visual tech world, a digital landscape that relies on voice assistants, headphones, watches and other wearables to take some pressure off our eyes.

[…]

Screens are insatiable. At a cognitive level, they are voracious vampires for your attention, and as soon as you look at one, you are basically toast.

It’s not enough to tell people not to do things. Technology can be addictive, just like anything else, so we need to find better ways of achieving similar ends.

But in addition to helping us resist phones, the tech industry will need to come up with other, less immersive ways to interact with digital world. Three technologies may help with this: voice assistants, of which Amazon’s Alexa and Google Assistant are the best, and Apple’s two innovations, AirPods and the Apple Watch.

All of these technologies share a common idea. Without big screens, they are far less immersive than a phone, allowing for quick digital hits: You can buy a movie ticket, add a task to a to-do list, glance at a text message or ask about the weather without going anywhere near your Irresistible Screen of Splendors.

The issue I have is that it’s going to take tightly-integrated systems to do this well, at least at first. So the chances are that Apple or Google will create an ecosystem that only works with their products, providing another way to achieve vendor lock-in.

Source: The New York Times

Inequality, anarchy, and the course of human history

Sometimes I’m reminded of the fact that I haven’t checked in with someone’s work for a few weeks, months, or even years. I’m continually impressed with the work of my near-namesake Dougald Hine. I hope to meet him in person one day.

Going back through his recent work led me to a long article in Eurozine by David Graeber and David Wengrow about how we tend to frame history incorrectly.

Overwhelming evidence from archaeology, anthropology, and kindred disciplines is beginning to give us a fairly clear idea of what the last 40,000 years of human history really looked like, and in almost no way does it resemble the conventional narrative. Our species did not, in fact, spend most of its history in tiny bands; agriculture did not mark an irreversible threshold in social evolution; the first cities were often robustly egalitarian. Still, even as researchers have gradually come to a consensus on such questions, they remain strangely reluctant to announce their findings to the public­ – or even scholars in other disciplines – let alone reflect on the larger political implications. As a result, those writers who are reflecting on the ‘big questions’ of human history – Jared Diamond, Francis Fukuyama, Ian Morris, and others – still take Rousseau’s question (‘what is the origin of social inequality?’) as their starting point, and assume the larger story will begin with some kind of fall from primordial innocence.

Graeber and Wengrow essentially argue that most people start from the assumption that we have a choice between a life that is ‘nasty, brutish, and short’ (i.e. most of human history) or one that is more civilised (i.e. today). If we want the latter, we have to put up with inequality.

‘Inequality’ is a way of framing social problems appropriate to technocratic reformers, the kind of people who assume from the outset that any real vision of social transformation has long since been taken off the political table. It allows one to tinker with the numbers, argue about Gini coefficients and thresholds of dysfunction, readjust tax regimes or social welfare mechanisms, even shock the public with figures showing just how bad things have become (‘can you imagine? 0.1% of the world’s population controls over 50% of the wealth!’), all without addressing any of the factors that people actually object to about such ‘unequal’ social arrangements: for instance, that some manage to turn their wealth into power over others; or that other people end up being told their needs are not important, and their lives have no intrinsic worth. The latter, we are supposed to believe, is just the inevitable effect of inequality, and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society.

But inequality is not the inevitable result of living in a civilised society, as they point out with some in-depth examples. I haven’t got space to go through them here, but suffice to say that it seems a classic case of historians cherry-picking their evidence.

As Claude Lévi-Strauss often pointed out, early Homo sapiens were not just physically the same as modern humans, they were our intellectual peers as well. In fact, most were probably more conscious of society’s potential than people generally are today, switching back and forth between different forms of organization every year. Rather than idling in some primordial innocence, until the genie of inequality was somehow uncorked, our prehistoric ancestors seem to have successfully opened and shut the bottle on a regular basis, confining inequality to ritual costume dramas, constructing gods and kingdoms as they did their monuments, then cheerfully disassembling them once again.

If so, then the real question is not ‘what are the origins of social inequality?’, but, having lived so much of our history moving back and forth between different political systems, ‘how did we get so stuck?’

Definitely worth a read, particularly if you think that ‘anarchy’ is the opposite of ‘civilisation’.

Source: Eurozine (via Dougald Hine)


Image CC BY-NC-SA xina

The New Octopus: going beyond managerial interventions for internet giants

This article in Logic magazine was brought to my attention by a recent issue of Ian O’Byrne’s excellent TL;DR newsletter. It’s a long read, focusing on the structural power of internet giants such as Amazon, Facebook, and Google.

The author, K. Sabeel Rahman, is an assistant professor of law at Brooklyn Law School and a fellow at the Roosevelt Institute. He uses historical analogues to make his points, while noting how different the current state of affairs is from a century ago.

As in the Progressive Era, technological revolutions have radically transformed our social, economic, and political life. Technology platforms, big data, AI—these are the modern infrastructures for today’s economy. And yet the question of what to do about technology is fraught, for these technological systems paradoxically evoke both bigness and diffusion: firms like Amazon and Alphabet and Apple are dominant, yet the internet and big data and AI are technologies that are by their very nature diffuse.

The problem, however, is not bigness per se. Even for Brandeisians, the central concern was power: the ability to arbitrarily influence the decisions and opportunities available to others. Such unchecked power represented a threat to liberty. Therefore, just as the power of the state had to be tamed through institutional checks and balances, so too did this private power have to be contested—controlled, held to account.

This emphasis on power and contestation, rather than literal bigness, helps clarify the ways in which technology’s particular relationship to scale poses a challenge to ideals of democracy, liberty, equality—and what to do about it.

I think this is the thing that concerns me most. Just as the banks were ‘too big to fail’ during the economic crisis and had to be bailed out by the taxpayer, so huge technology companies are increasingly playing that kind of role elsewhere in our society.

The problem of scale, then, has always been a problem of power and contestability. In both our political and our economic life, arbitrary power is a threat to liberty. The remedy is the institutionalization of checks and balances. But where political checks and balances take a common set of forms—elections, the separation of powers—checks and balances for private corporate power have proven trickier to implement.

These various mechanisms—regulatory oversight, antitrust laws, corporate governance, and the countervailing power of organized labor— together helped create a relatively tame, and economically dynamic, twentieth-century economy. But today, as technology creates new kinds of power and new kinds of scale, new variations on these strategies may be needed.

“Arbitrary power is a threat to liberty.” Absolutely, no matter whether the company holding that power has been problematic in the past, has a slogan promising not to do anything wrong, or is well-liked by the public.

We need more than regulatory oversight of such organisations because of how insidious their power can be — much like the image of Luks’ octopus that accompanies this and the original post.

Rahman explains three types of power held by large internet companies:

First, there is transmission power. This is the ability of a firm to control the flow of data or goods. Take Amazon: as a shipping and logistics infrastructure, it can be seen as directly analogous to the railroads of the nineteenth century, which enjoyed monopolized mastery over the circulation of people, information, and commodities. Amazon provides the literal conduits for commerce.

[…]

A second type of power arises from what we might think of as a gatekeeping power. Here, the issue is not necessarily that the firm controls the entire infrastructure of transmission, but rather that the firm controls the gateway to an otherwise decentralized and diffuse landscape.

This is one way to understand the Facebook News Feed, or Google Search. Google Search does not literally own and control the entire internet. But it is increasingly true that for most users, access to the internet is mediated through the gateway of Google Search or YouTube’s suggested videos. By controlling the point of entry, Google exercises outsized influence on the kinds of information and commerce that users can ultimately access—a form of control without complete ownership.

[…]

A third kind of power is scoring power, exercised by ratings systems, indices, and ranking databases. Increasingly, many business and public policy decisions are based on big data-enabled scoring systems. Thus employers will screen potential applicants for the likelihood that they may quit, be a problematic employee, or participate in criminal activity. Or judges will use predictive risk assessments to inform sentencing and bail decisions.

These scoring systems may seem objective and neutral, but they are built on data and analytics that bake into them existing patterns of racial, gender, and economic bias.

[…]

Each of these forms of power is infrastructural. Their impact grows as more and more goods and services are built atop a particular platform. They are also more subtle than explicit control: each of these types of power enable a firm to exercise tremendous influence over what might otherwise look like a decentralized and diffused system.

As I quote Adam Greenfield as saying in Microcast #021 (supporters only!), this infrastructural power is less obvious because of the immateriality of the world controlled by internet giants. We need more than managerial approaches to solving the problems posed by their power.

A more radical response, then, would be to impose structural restraints: limits on the structure of technology firms, their powers, and their business models, to forestall the dynamics that lead to the most troubling forms of infrastructural power in the first place.

One solution would be to convert some of these infrastructures into “public options”—publicly managed alternatives to private provision. Run by the state, these public versions could operate on equitable, inclusive, and nondiscriminatory principles. Public provision of these infrastructures would subject them to legal requirements for equal service and due process. Furthermore, supplying a public option would put competitive pressures on private providers.

[…]

We can also introduce structural limits on technologies with the goal of precluding dangerous concentrations of power. While much of the debate over big data and privacy has tended to emphasize the concerns of individuals, we might view a robust privacy regime as a kind of structural limit: if firms are precluded from collecting or using certain types of data, that limits the kinds of power they can exercise.

Some of this is already happening, thankfully, through structural limitations such as GDPR. I hope this is the first step in a more coordinated response to internet giants who increasingly have more impact on the day-to-day lives of citizens than their governments.

Moving fast and breaking things is inevitable in moments of change. The issue is which things we are willing to break—and how broken we are willing to let them become. Moving fast may not be worth it if it means breaking the things upon which democracy depends.

It’s a difficult balance. However, just as GDPR has put in place mechanisms to prevent over-reach by both governments and companies, I think we could look to organisations with non-profit status and community ownership to provide some of the infrastructure currently being built by shareholder-owned organisations.

Having just finished reading Utopia for Realists, I definitely think the left needs to think bigger than it’s currently doing, and really push that Overton window.

Source: Logic magazine (via Ian O’Byrne)

Trust and the cult of your PLN

This is a long article with a philosophical take on one of my favourite subjects: social networks and the flow of information. The author, C Thi Nguyen, is an assistant professor of philosophy at Utah Valley University and distinguishes between two things that he thinks have been conflated:

Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

Teasing things apart a bit, Nguyen gives some definitions:

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission.

[…]

An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited.

[…]

In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.

It feels like towards the end of my decade as an active user of Twitter there was a definite shift from it being an ‘epistemic bubble’ towards being an ‘echo chamber’. My ‘Personal Learning Network’ (or ‘PLN’) seemed to be a bit more militant in its beliefs.

Nguyen goes on to talk at length about fake news, sociological theories, and Cartesian epistemology. Where he ends up, however, is where I would: trust.

As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain. Ask yourself: could you tell a good statistician from an incompetent one? A good biologist from a bad one? A good nuclear engineer, or radiologist, or macro-economist, from a bad one? Any particular reader might, of course, be able to answer positively to one or two such questions, but nobody can really assess such a long chain for herself. Instead, we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.

That puts us in a double-bind. We need to make ourselves vulnerable in order to participate in a society built on trust, but that very vulnerability puts us in danger of being manipulated.

I see this in the fanatical evangelism of blockchain solutions to the ‘problem’ of operating in a trustless environment. To my mind, we need to be trusting people more, not less. Of course, there are obvious exceptions, but breaches of trust are near the top of the list of things we should punish most severely as a society.

Is there anything we can do, then, to help an echo-chamber member to reboot? We’ve already discovered that direct assault tactics – bombarding the echo-chamber member with ‘evidence’ – won’t work. Echo-chamber members are not only protected from such attacks, but their belief systems will judo such attacks into further reinforcement of the echo chamber’s worldview. Instead, we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.

So the way forward is for people to develop empathy and to show trust. Not present people with evidence that they’re wrong. That’s never worked in the past, and it won’t work now. Our problem isn’t a deficit in access to information, it’s a deficit in trust.

Source: Aeon (via Ian O’Byrne)