
What kind of world do we want? (or, why regulation matters)

I saw a thread on Mastodon recently, which included this image:

Three images with the title 'Space required to Transport 48 People'. Each image is the same, with cars backed up down a road. The caption for each image is 'Car', 'Electric Car' and 'Autonomous Car', respectively.

Someone else replied with a meme showing a series of images with the phrase “They feed us poison / so we buy their ‘cures’ / while they ban our medicine”. The poison in this case being cars burning fossil fuels, the cures being electric and/or autonomous cars, and the medicine public transport.

There’s a similar kind of thinking in the world of tech, with at least one interviewee in the documentary The Social Dilemma saying that people should be paid for their data. I’ve always been uneasy about this, so it’s good to see the EFF come out strongly against it:

Let’s be clear: getting paid for your data—probably no more than a handful of dollars at most—isn’t going to fix what’s wrong with privacy today. Yes, a data dividend may sound at first blush like a way to get some extra money and stick it to tech companies. But that line of thinking is misguided, and falls apart quickly when applied to the reality of privacy today. In truth, the data dividend scheme hurts consumers, benefits companies, and frames privacy as a commodity rather than a right.

EFF strongly opposes data dividends and policies that lay the groundwork for people to think of the monetary value of their data rather than view it as a fundamental right. You wouldn’t place a price tag on your freedom to speak. We shouldn’t place one on our privacy, either.

Hayley Tsukayama, Why Getting Paid for Your Data Is a Bad Deal (EFF)

As the EFF points out, who would get to set the price of that data, anyway? Also, individual data is useful to companies, but so is data in aggregate. Is that covered by such plans?

Facebook makes around $7 per user, per quarter. Even if they gave you all of that, is that a fair exchange?

Those small checks in exchange for intimate details about you are not a fairer trade than we have now. The companies would still have nearly unlimited power to do what they want with your data. That would be a bargain for the companies, who could then wipe their hands of concerns about privacy. But it would leave users in the lurch.

All that adds up to a stark conclusion: if where we’ve been is any indication of where we’re going, there won’t be much benefit from a data dividend. What we really need is stronger privacy laws to protect how businesses process our data—which we can, and should do, as a separate and more protective measure.

Hayley Tsukayama, Why Getting Paid for Your Data Is a Bad Deal (EFF)

As the rest of the article goes on to explain, we’re already in a world of ‘pay for privacy’ which is exacerbating the gulf between the haves and the have-nots. We need regulation and legislation to curb this before it gallops away from us.

You can’t tech your way out of problems the tech didn’t create

The Electronic Frontier Foundation (EFF) is a US-based non-profit that exists to defend civil liberties in the digital world. They’ve been around for 30 years, and I support them financially on a monthly basis.

In this article, Corynne McSherry, EFF’s Legal Director, outlines the futility of attempts by ‘Big Social’ to do content moderation at scale:

[C]ontent moderation is a fundamentally broken system. It is inconsistent and confusing, and as layer upon layer of policy is added to a system that employs both human moderators and automated technologies, it is increasingly error-prone. Even well-meaning efforts to control misinformation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

Ultimately, these monolithic social networks have a problem around false positives. It’s in their interests to be over-zealous, as they’re increasingly under the watchful eye of regulators and governments.

We have been watching closely as Facebook, YouTube, and Twitter, while disclaiming any interest in being “the arbiters of truth,” have all adjusted their policies over the past several months to try to arbitrate lies—or at least flag them. And we’re worried, especially when we look abroad. Already this year, an attempt by Facebook to counter election misinformation targeting Tunisia, Togo, Côte d’Ivoire, and seven other African countries resulted in the accidental removal of accounts belonging to dozens of Tunisian journalists and activists, some of whom had used the platform during the country’s 2011 revolution. While some of those users’ accounts were restored, others—mostly belonging to artists—were not.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

McSherry’s analysis is spot-on: it’s the algorithms that are a problem here. Social networks employ these algorithms because of their size and structure, and because of the cost of human-based content moderation. After all, these are companies with shareholders.

Algorithms used by Facebook’s Newsfeed or Twitter’s timeline make decisions about which news items, ads, and user-generated content to promote and which to hide. That kind of curation can play an amplifying role for some types of incendiary content, despite the efforts of platforms like Facebook to tweak their algorithms to “disincentivize” or “downrank” it. Features designed to help people find content they’ll like can too easily funnel them into a rabbit hole of disinformation.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)
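To see the dynamic in miniature, here’s a deliberately crude Python sketch (hypothetical numbers, and in no way any platform’s actual code) of why ranking purely by predicted engagement pushes incendiary content to the top, and why ‘downranking’ merely softens the effect:

posts = [
    {"title": "Local bake sale report",  "engagement": 0.02, "flagged": False},
    {"title": "Nuanced policy analysis", "engagement": 0.05, "flagged": False},
    {"title": "Outrage-bait conspiracy", "engagement": 0.30, "flagged": True},
]

def rank(posts, downrank_penalty=1.0):
    """Order posts by predicted engagement, penalising flagged items."""
    def score(post):
        penalty = downrank_penalty if post["flagged"] else 1.0
        return post["engagement"] * penalty
    return sorted(posts, key=score, reverse=True)

# Even with a 50% penalty, the flagged post still tops the feed:
for post in rank(posts, downrank_penalty=0.5):
    print(post["title"])
# Outrage-bait conspiracy   (0.30 * 0.5 = 0.15)
# Nuanced policy analysis   (0.05)
# Local bake sale report    (0.02)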

She includes useful questions for social networks to answer about content moderation:

  • Is the approach narrowly tailored or a categorical ban?
  • Does it empower users?
  • Is it transparent?
  • Is the policy consistent with human rights principles?

But, ultimately…

You can’t tech your way out of problems the tech didn’t create. And even where content moderation has a role to play, history tells us to be wary. Content moderation at scale is impossible to do perfectly, and nearly impossible to do well, even under the most transparent, sensible, and fair conditions.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

I’m so pleased that I don’t use Facebook products, and that I only use Twitter these days as a place to publish links to my writing.

Instead, I’m much happier on the Fediverse, a place where if you don’t like the content moderation approach of the instance you’re on, you can take your digital knapsack and decide to call another place home. You can find me here (for now!).

When people are free to do as they please, they usually imitate each other

Graphic showing a hospital, face masks, and hand washing

😷 How do pandemics end?

🙆 How I talk to the victims of conspiracy theories

🔒 The Github youtube-dl Takedown Isn’t Just a Problem of American Law

🖥️ The Raspberry Pi 400 – Teardown and Review

🐧 As a former social media analyst, I’m quitting Twitter


Quotation-as-title by Eric Hoffer. Image from top-linked post.

Using WhatsApp is a (poor) choice that you make

People often ask me about my stance on Facebook products. They can understand that I don’t use Facebook itself, but what about Instagram? And surely I use WhatsApp? Nope.

Given that I don’t usually have a single place to point people who want to read about the problems with WhatsApp, I thought I’d create one.


WhatsApp is a messaging app that was acquired by Facebook for the eye-watering amount of $19 billion in 2014. Interestingly, a BuzzFeed News article from 2018 cites confidential documents from the time leading up to the acquisition that were obtained by the UK’s Department for Culture, Media, and Sport. They show the threat WhatsApp posed to Facebook at the time.

US mobile messenger apps (iPhone) graph from August 2012 to March 2013
A document obtained by the DCMS as part of their investigations

As you can see from the above chart, Facebook executives were shown in 2013 that WhatsApp (8.6% reach) was growing rapidly and posed a huge threat to Facebook Messenger (13.7% reach).

So Facebook bought WhatsApp. But what did they buy? If, as we’re led to believe, WhatsApp is ‘end-to-end encrypted’ then Facebook don’t have access to the messages of users. So what’s so valuable?


Brian Acton, one of the founders of WhatsApp (and a man who got very rich through its sale) has gone on record saying that he feels like he sold his users’ privacy to Facebook.

Facebook, Acton says, had decided to pursue two ways of making money from WhatsApp. First, by showing targeted ads in WhatsApp’s new Status feature, which Acton felt broke a social compact with its users. “Targeted advertising is what makes me unhappy,” he says. His motto at WhatsApp had been “No ads, no games, no gimmicks”—a direct contrast with a parent company that derived 98% of its revenue from advertising. Another motto had been “Take the time to get it right,” a stark contrast to “Move fast and break things.”

Facebook also wanted to sell businesses tools to chat with WhatsApp users. Once businesses were on board, Facebook hoped to sell them analytics tools, too. The challenge was WhatsApp’s watertight end-to-end encryption, which stopped both WhatsApp and Facebook from reading messages. While Facebook didn’t plan to break the encryption, Acton says, its managers did question and “probe” ways to offer businesses analytical insights on WhatsApp users in an encrypted environment.

Parmy Olson (Forbes)

The other way Facebook wanted to make money was to sell tools to businesses allowing them to chat with WhatsApp users. These tools would also give “analytical insights” on how users interacted with WhatsApp.

Facebook was allowed to acquire WhatsApp (and Instagram) despite fears around monopolistic practices. This was because they made a promise not to combine data from various platforms. But, guess what happened next?

In 2014, Facebook bought WhatsApp for $19b, and promised users that it wouldn’t harvest their data and mix it with the surveillance troves it got from Facebook and Instagram. It lied. Years later, Facebook mixes data from all of its properties, mining it for data that ultimately helps advertisers, political campaigns and fraudsters find prospects for whatever they’re peddling. Today, Facebook is in the process of acquiring Giphy, and while Giphy currently doesn’t track users when they embed GIFs in messages, Facebook could start doing that anytime.

Cory Doctorow (EFF)

So Facebook is harvesting metadata from its various platforms, tracking people around the web (even if they don’t have an account), and buying up data about offline activities.

All of this creates a profile. So yes, because of end-to-end encryption, Facebook might not know the exact details of your messages. But they know that you’ve started messaging a particular user account around midnight every night. They know that you’ve started interacting with a bunch of stuff around anxiety. They know how the people you message most tend to vote.
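To make that concrete, here’s a minimal sketch (entirely hypothetical data) of how much the metadata alone gives away, with no message content in sight:

from collections import Counter

# Hypothetical metadata: (hour_of_day, contact) pairs. No content at all,
# which is exactly what remains visible when messages are end-to-end encrypted.
message_log = [
    (23, "alice"), (0, "alice"), (23, "alice"), (1, "alice"),
    (9, "manager"), (14, "manager"), (11, "dentist"),
]

# Who gets messaged late at night? Timing alone hints at the nature of
# a relationship without reading a single word.
late_night = Counter(
    contact for hour, contact in message_log if hour >= 22 or hour <= 2
)
print(late_night.most_common(1))  # [('alice', 4)]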


Do I have to connect the dots here? This is a company that sells targeted adverts, the kind of adverts that can influence the outcome of elections. Of course, Facebook will never admit that its platforms are the problem; it’s always the responsibility of the user to be ‘vigilant’.

Man reading a newspaper
A WhatsApp advert aiming to ‘fight false information’ (via The Guardian)

So you might think that you’re just messaging your friend or colleague on a platform that ‘everyone’ uses. But your decision to go with the flow has consequences. It has implications for democracy. It has implications on creating a de facto monopoly for our digital information. And it has implications around the dissemination of false information.

The features that would later allow WhatsApp to become a conduit for conspiracy theory and political conflict were ones never integral to SMS, and have more in common with email: the creation of groups and the ability to forward messages. The ability to forward messages from one group to another – recently limited in response to Covid-19-related misinformation – makes for a potent informational weapon. Groups were initially limited in size to 100 people, but this was later increased to 256. That’s small enough to feel exclusive, but if 256 people forward a message on to another 256 people, 65,536 will have received it.

[…]

A communication medium that connects groups of up to 256 people, without any public visibility, operating via the phones in their pockets, is by its very nature, well-suited to supporting secrecy. Obviously not every group chat counts as a “conspiracy”. But it makes the question of how society coheres, who is associated with whom, into a matter of speculation – something that involves a trace of conspiracy theory. In that sense, WhatsApp is not just a channel for the circulation of conspiracy theories, but offers content for them as well. The medium is the message.

William Davies (The Guardian)
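Davies’s arithmetic is worth spelling out. As an idealised model (assuming maximum-size groups and no overlap between them, which real networks won’t match), the reach of whole-group forwarding grows exponentially with each hop:

GROUP_SIZE = 256  # the group-size cap mentioned in the quote above

def reach(hops: int) -> int:
    """Recipients after `hops` rounds of every member forwarding to a new group."""
    return GROUP_SIZE ** hops

for hops in range(1, 4):
    print(f"{hops} hop(s): {reach(hops):,} recipients")
# 1 hop(s): 256 recipients
# 2 hop(s): 65,536 recipients
# 3 hop(s): 16,777,216 recipients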

I cannot control the decisions others make, nor have I forced my opinions on my two children, who (despite my warnings) both use WhatsApp to message their friends. But, for me, the risk to myself and society of using WhatsApp is not one I’m happy to take.

Just don’t say I didn’t warn you.


Header image by Rachit Tank

Saturday shoutings

The link I’m most enthusiastic about sharing this week is one to a free email-based course I’ve created with my co-op colleagues. It’s entitled The 7 Habits of Highly Effective Virtual Meetings and is part of a new series we’re working on.

Skills for the New Normal

The other links are slightly fewer in number this week because time, it turns out, is finite.


Clean Language: David Grove Questioning Method

Developing Questions
“(And) what kind of X (is that X)?”
“(And) is there anything else about X?”
“(And) where is X? or (And) whereabouts is X?”
“(And) that’s X like what?”
“(And) is there a relationship between X and Y?”
“(And) when X, what happens to Y?”

Sequence and Source Questions
“(And) then what happens? or (And) what happens next?”
“(And) what happens just before X?”
“(And) where could X come from?”

Intention Questions
“(And) what would X like to have happen?”
“(And) what needs to happen for X?”
“(And) can X (happen)?”

The first two questions, “What kind of X (is that X)?” and “Is there anything else about X?”, are the most commonly used.

As a general guide, these two questions account for around 50% of the questions asked in a typical Clean Language session.

BusinessBalls

I had a great chat with Kristian Still this week, for the first time in about a decade. Kristian was part of EdTechRoundUp back in the day, and early EduTwitter. Among the many things we discussed was his enthusiasm for “clean questioning”, which I’m going to investigate further.
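Since every Clean Language question is a template with a slot for the speaker’s own words, the structure lends itself to a quick illustration. A trivial sketch of my own (purely illustrative, and no part of the method itself):

# The developing questions as templates, with a slot ("{x}") for the
# speaker's exact words. Hypothetical code, purely to show the structure.
DEVELOPING_QUESTIONS = [
    "(And) what kind of {x} (is that {x})?",
    "(And) is there anything else about {x}?",
    "(And) where is {x}?",
    "(And) that's {x} like what?",
]

def clean_questions(x: str) -> list:
    """Substitute the speaker's phrase, verbatim, into each template."""
    return [template.format(x=x) for template in DEVELOPING_QUESTIONS]

for question in clean_questions("a knot in my stomach"):
    print(question)
# (And) what kind of a knot in my stomach (is that a knot in my stomach)?
# (And) is there anything else about a knot in my stomach?
# ...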


How ‘Sustainable’ Web Design Can Help Fight Climate Change

Even our throwaway habits can add up to a mountain of carbon. Consider all the little social emails we shoot back and forth—“thanks,” “got it,” “lol.” The UK energy firm Ovo examined email usage and—using data from Lancaster University professor Mike Berners-Lee, who analyzes carbon footprints—they found that if every adult in the UK just sent one less “thank you” email per day, it would cut 16 tons of carbon each year, equal to 22 round-trip flights between New York and London. They also found that 49 percent of us often send thank-you emails to people “within talking distance.” We can lower our carbon output if we’d just take the headphones off for a minute and stop behaving like a bunch of morlocks.

Clive Thompson (WIRED)

Small differences all add up. Our design choices and the decisions we make about technology all have a part to play in fighting climate change.


Apple, Big Sur, and the rise of Neumorphism

When you boil it down, neumorphism is a focus on how light moves in three-dimensional space. Its predecessor, skeuomorphism, created realism in digital interfaces by simulating textures on surfaces like felt on a poker table or the brushed metal of a tape recorder. An ancillary — though under-developed — aspect of this design style was lighting that interacted realistically with the materials that were being represented; this is why shadows and darkness were so prevalent in those early interfaces.

Jack Koloskus (Input)

The dominant design language over the last five years, without doubt, has been Google’s Material Design. Will a neumorphic approach take over? It’s certainly an interesting one.
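For the curious, the ‘light in three-dimensional space’ idea usually reduces to a pair of offset shadows: a light one for the simulated light source, a dark one opposite it, on a background the same colour as the element. A rough sketch of my own (not from the article) that generates such a CSS rule:

def shade(hex_colour: str, factor: float) -> str:
    """Lighten (factor > 1) or darken (factor < 1) a #rrggbb colour."""
    r, g, b = (int(hex_colour[i:i + 2], 16) for i in (1, 3, 5))
    clamp = lambda v: max(0, min(255, round(v * factor)))
    return f"#{clamp(r):02x}{clamp(g):02x}{clamp(b):02x}"

def neumorphic_css(base: str = "#e0e0e0", offset_px: int = 6) -> str:
    """Simulate a top-left light source with paired light/dark shadows."""
    light, dark = shade(base, 1.15), shade(base, 0.85)
    return (
        f"background: {base};\n"
        f"border-radius: 12px;\n"
        f"box-shadow: {offset_px}px {offset_px}px {2 * offset_px}px {dark},\n"
        f"           -{offset_px}px -{offset_px}px {2 * offset_px}px {light};"
    )

print(neumorphic_css())
# background: #e0e0e0;
# border-radius: 12px;
# box-shadow: 6px 6px 12px #bebebe,
#            -6px -6px 12px #ffffff;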


Snowden: Tech Workers Are Complicit in How Their Companies Hurt Society

He called on those in the tech industry to look at the bigger picture regarding their work and its implications beyond simply a project—and to think deeply and take a stronger stand with regards to who their labor actually serves.

“It’s not enough to read, it’s not enough to believe in something, it’s not enough to write something, you have to eventually stand for something if you want things to change,” he said.

Kevin Truong (Motherboard)

The tech industry is an interesting one, as it’s relatively new and immature, at least in its current guise. As a result, the ethics and the checks and balances aren’t quite there yet.

To my mind, things like unions and professional associations show maturity, and the kind of coming together that puts moral decisions on the shoulders of the whole sector rather than on individuals.


Brexit

Tea, Biscuits, and Empire: The Long Con of Britishness

[T]here is a narrative chasm between the twee and borderless dreamscape of fantasy Britain and actual, material Britain, where rents are rising and racists are running brave. The chasm is wide, and a lot of people are falling into it. The omnishambles of British politics is what happens when you get scared and mean and retreat into the fairytales you tell about yourself. When you can no longer live within your own contradictions. When you want to hold on to the belief that Britain is the land of Jane Austen and John Lennon and Sir Winston Churchill, the war hero who has been repeatedly voted the greatest Englishman of all time. When you want to forget that Britain is also the land of Cecil Rhodes and Oswald Mosley and Sir Winston Churchill, the brutal colonial administrator who sanctioned the building of the first concentration camps and condemned millions of Indians to death by starvation. These are not contradictions, even though the drive to separate them is cracking the country apart. If you love your country and don’t own its difficulties and its violence, you don’t actually love your country. You’re just catcalling it as it goes by.

Laurie Penny (Longreads)

I always find looking at my country through the lens of foreigners cringe-inducing. I suppose it’s a narrative produced for tourists but, sadly, we seem to have believed our own rhetoric, and look where it’s gotten us…


How Big Tech Monopolies Distort Our Public Discourse

The idea that Big Tech can mold discourse through bypassing our critical faculties by spying on and analyzing us is both self-serving (inasmuch as it helps Big Tech sell ads and influence services) and implausible, and should be viewed with extreme skepticism.

But you don’t have to accept extraordinary claims to find ways in which Big Tech is distorting and degrading our public discourse. The scale of Big Tech makes it opaque and error-prone, even as it makes the job of maintaining a civil and productive space for discussion and debate impossible.

Cory Doctorow (EFF)

A tour de force from Doctorow, who eviscerates the companies that make up ‘Big Tech’ and the role they have in hollowing out civic society.


Header image by Andrea Piacquadio

Most human beings have an almost infinite capacity for taking things for granted

So said Aldous Huxley. Recently, I discovered an episode of the podcast The Science of Success in which Dan Carlin was interviewed. Now Dan is the host of one of my favourite podcasts, Hardcore History, as well as one he’s recently discontinued called Common Sense.

The reason the latter is on ‘indefinite hiatus’ was discussed on The Science of Success podcast. Dan feels that, after 30 years as a journalist, if he can’t get a grip on the current information landscape, then who can? It’s shaken him up a little.

One of the quotations he just gently lobbed into the conversation was from John Stuart Mill, who at one time or another was accused by someone of being ‘inconsistent’ in his views. Mill replied:

When the facts change, I change my mind. What do you do, sir?

John Stuart Mill

Now whether or not Mill said those exact words, the sentiment nevertheless stands. I reckon human beings have always made up their minds first and then chosen ‘facts’ to support their opinions. These days, I just think that it’s easier than ever to find ‘news’ outlets and people sharing social media posts to support your worldview. It’s as simple as that.


Last week I watched a stand-up comedy routine by Kevin Bridges on BBC iPlayer as part of his 2018 tour. As a Glaswegian, he made the (hilarious) analogy of social media as being like going into a pub.

(As an aside, this is interesting, as a decade ago people would often use the analogy of using social media as being like going to a café. The idea was that you could overhear, and perhaps join in with, interesting conversations. No-one uses that analogy any more.)

Bridges pointed out that if you entered a pub, sat down for a quiet pint, and the person next to you was trying to flog you Herbalife products, constantly talking about how #blessed they felt, or talking ambiguously for the sake of attention, you’d probably find another pub.

He was doing it for laughs, but I think he was also making a serious point. Online, we tolerate people ranting on and generally being obnoxious in ways we would never do offline.

The underlying problem of course is that any platform that takes some segment of the real world and brings it into software will also bring in all that segment’s problems. Amazon took products and so it has to deal with bad and fake products (whereas one might say that Facebook took people, and so has bad and fake people).

Benedict Evans

I met Clay Shirky at an event last month, which kind of blew my mind given that it was me speaking at it rather than him. After introducing myself, we spoke for a few minutes about everything from his choice of laptop to what he’s been working on recently. Curiously, he’s not writing a book at the moment. After a couple of very well-received books (Here Comes Everybody and Cognitive Surplus), Shirky has actually only published a slightly obscure book about Chinese smartphone manufacturing since 2010.

While I didn’t have time to dig into things there and then, and it would have been a bit presumptuous of me to do so, it feels to me like Shirky may have ‘walked back’ some of his pre-2010 thoughts. This doesn’t surprise me at all, given that many of the rest of us have, too. For example, in 2014 he published a Medium article explaining why he banned his students from using laptops in lectures. Such blog posts and news articles are common these days, but it felt like his was one of the first.


The last decade from 2010 to 2019, which Audrey Watters has done a great job of eviscerating, was, shall we say, somewhat problematic. The good news is that we connected 4.5 billion people to the internet. The bad news is that we didn’t really harness that for much good. So we went from people sharing pictures of cats, to people sharing pictures of cats and destroying western democracy.

Other than the ‘bad and fake people’ problem cited by Ben Evans above, another big problem was the rise of surveillance capitalism. In a similar way to climate change, this has been repackaged as a series of individual failures on the part of end users. But, as Lindsey Barrett explains for Fast Company, it’s not really our fault at all:

In some ways, the tendency to blame individuals simply reflects the mistakes of our existing privacy laws, which are built on a vision of privacy choices that generally considers the use of technology to be a purely rational decision, unconstrained by practical limitations such as the circumstances of the user or human fallibility. These laws are guided by the idea that providing people with information about data collection practices in a boilerplate policy statement is a sufficient safeguard. If people don’t like the practices described, they don’t have to use the service.

Lindsey Barrett

The problem is that we have monopolistic practices in the digital world. Fast Company also reports that the four most downloaded apps of the 2010s were all owned by Facebook.

I don’t actually think people really understand that their data from WhatsApp and Instagram is being hoovered up by Facebook. I don’t then think they understand what Facebook then do with that data. I tried to lift the veil on this a little bit at the event where I met Clay Shirky. I know at least one person who immediately deleted their Facebook account as a result of it. But I suspect everyone else will just keep on keeping on. And yes, I have been banging my drum about this for quite a while now. I’ll continue to do so.

The truth is, and this is something I’ll be focusing on in upcoming workshops I’m running on digital literacies, that to be an ‘informed citizen’ these days means reading things like the EFF’s report into the current state of corporate surveillance. It means deleting accounts as a result. It means slowing down, taking time, and reading stuff before sharing it on platforms that you know care for the many, not the few. It means actually caring about this stuff.

All of this might just look and feel like a series of preferences. I prefer decentralised social networks and you prefer Facebook. Or I like to use Signal and you like WhatsApp. But it’s more than that. It’s a whole lot more than that. Democracy as we know it is at stake here.


As Prof. Scott Galloway has discussed from an American point of view, we’re living in times of increasing inequality. The tools we’re using exacerbate that inequality. All of a sudden you have to be amazing at your job to even be able to have a decent quality of life:

The biggest losers of the decade are the unremarkables. Our society used to give remarkable opportunities to unremarkable kids and young adults. Some of the crowding out of unremarkable white males, including myself, is a good thing. More women are going to college, and remarkable kids from low-income neighborhoods get opportunities. But a middle-class kid who doesn’t learn to code Python or speak Mandarin can soon find she is not “tracking” and can’t catch up.

Prof. Scott Galloway

I shared an article last Friday about how you shouldn’t have to be good at your job. The whole point of society is that we look after one another, not compete with one another to see which of us can ‘extract the most value’ and pile up more money than he or she can ever hope to spend. Yes, it would be nice if everyone was awesome at all they did, but the optimisation of everything isn’t the point of human existence.

So once we come down the stack from social networks, to surveillance capitalism, to economics and markets eating the world, we find the real problem behind all of this: decision-making. We’ve sacrificed stability for speed, and seem to be increasingly happy with dictator-like behaviour in both our public institutions and corporate lives.

Dictatorships can be more efficient than democracies because they don’t have to get many people on board to make a decision. Democracies, by contrast, are more robust, but at the cost of efficiency.

Taylor Pearson

A selectorate, according to Pearson, “represents the number of people who have influence in a government, and thus the degree to which power is distributed”. Aside from the fact that dictatorships tend to be corrupt and oppressive, they’re just not a good idea in terms of decision-making:

Said another way, much of what appears efficient in the short term may not be efficient but hiding risk somewhere, creating the potential for a blow-up. A large selectorate tends to appear to be working less efficiently in the short term, but can be more robust in the long term, making it more efficient in the long term as well. It is a story of the Tortoise and the Hare: slow and steady may lose the first leg, but win the race.

Taylor Pearson

I don’t think we should be optimising human beings for their role in markets. I think we should be optimising markets (if in fact we need them) for their role in human flourishing. The best way of doing that is to ensure that we distribute power and decision-making well.


So it might seem that my continual ragging on Facebook (in particular) is a small thing in the bigger picture. But it’s actually part of the whole deal. When we have super-powerful individuals whose companies have the ability to surveil us at will; who then share that data to corrupt regimes; who in turn reinforce the worst parts of the status quo; then I think we have a problem.

This year I’ve made a vow to be more radical. To speak my mind even more, and truth to power, especially when it’s inconvenient. I hope you’ll join me ✊

What the EU’s copyright directive means in practice

The EU is certainly coming out swinging against Big Tech this year. Or at least it thinks it is. Yesterday, the European Parliament voted in favour of three proposals, outlined by the EFF’s indefatigable Cory Doctorow as:

1. Article 13: the Copyright Filters. All but the smallest platforms will have to defensively adopt copyright filters that examine everything you post and censor anything judged to be a copyright infringement.

2. Article 11: Linking to the news using more than one word from the article is prohibited unless you’re using a service that bought a license from the news site you want to link to. News sites can charge anything they want for the right to quote them or refuse to sell altogether, effectively giving them the right to choose who can criticise them. Member states are permitted, but not required, to create exceptions and limitations to reduce the harm done by this new right.

3. Article 12a: No posting your own photos or videos of sports matches. Only the “organisers” of sports matches will have the right to publicly post any kind of record of the match. No posting your selfies, or short videos of exciting plays. You are the audience, your job is to sit where you’re told, passively watch the game and go home.

Music Week pointed out that Article 13 is particularly problematic for artists:

While the Copyright Directive covers a raft of digital issues, a sticking point within the music industry had been the adoption of Article 13 which seeks to put the responsibility on online platforms to police copyright in advance of posting user generated content on their services, either by restricting posts or by obtaining full licenses for copyrighted material.

The proof of the pudding, as The Verge points out, will be in the interpretation and implementation by EU member states:

However, those backing these provisions say the arguments above are the result of scaremongering by big US tech companies, eager to keep control of the web’s biggest platforms. They point to existing laws and amendments to the directive as proof it won’t be abused in this way. These include exemptions for sites like GitHub and Wikipedia from Article 13, and exceptions to the “link tax” that allow for the sharing of mere hyperlinks and “individual words” describing articles without constraint.

I can’t help but think this is a ham-fisted way of dealing with a non-problem. As Doctorow also states, part of the issue here is the assumption that competition in a free market is at the core of creativity. I’d argue that’s untrue, and that culture is built by respectfully appropriating and building on the work of others. These proposals, as they currently stand (and as I currently understand them), actively undermine internet culture.

Source: Music Week / EFF / The Verge

The security guide as literary genre

I stumbled across this conference presentation from back in January by Jeffrey Moro, “a doctoral student in English at the University of Maryland, College Park, where [he studies] the textual and material histories of media technologies”.

It’s a short, but very interesting one, taking a step back from the current state of play to ask what we’re actually doing as a society.

Over the past year, in an unsurprising response to a host of new geopolitical realities, we’ve seen a cottage industry of security recommendations pop up in venues as varied as The New York Times, Vice, and even Teen Vogue. Together, these recommendations form a standard suite of answers to some of the most messy questions of our digital lives. “How do I stop advertisers from surveilling me?” “How do I protect my internet history from the highest bidder?” And “how do I protect my privacy in the face of an invasive or authoritarian government?”

It’s all very well having a plethora of guides to secure ourselves against digital adversaries, but this isn’t something that we need to really think about in a physical setting within the developed world. When I pop down to the shops, I don’t think about the route I take in case someone robs me at gunpoint.

So Moro is thinking about these security guides as a kind of ‘literary genre’:

I’m less interested in whether or not these tools are effective as such. Rather, I want to ask how these tools in particular orient us toward digital space, engage imaginaries of privacy and security, and structure relationships between users, hackers, governments, infrastructures, or machines themselves? In short: what are we asking for when we construe security as a browser plugin?

There’s a wider issue here about the pace of digital interactions, security theatre, and most of us getting news from an industry hyper-focused on online advertising. A recent article in the New York Times was thought-provoking in that sense, comparing what it’s like going back to (or in some cases, getting for the first time) all of your news from print media.

We live in a digital world where everyone’s seemingly agitated and angry, all of the time:

The increasing popularity of these guides evinces a watchful anxiety permeating even the most benign of online interactions, a paranoia that emerges from an epistemological collapse of the categories of “private” and “public.” These guides offer a way through the wilderness, techniques by which users can harden that private/public boundary.

The problem with this ‘genre’ of security guide, says Moro, is that even the good ones, from groups like the EFF (of which I’m a member), make you feel like locking down everything. The problem with that, of course, is that it’s very limiting.

Communication, by its very nature, demands some dimension of insecurity, some material vector for possible attack. Communication is always already a vulnerable act. The perfectly secure machine, as Chun notes, would be unusable: it would cease to be a computer at all. We can then only ever approach security asymptotically, always leaving avenues for attack, for it is precisely through those avenues that communication occurs.

I’m a great believer in serendipity, but the problem with that from a technical point of view is that it increases my attack surface. It’s a source of tension that I actually feel most days.

There is no room, or at least less room, in a world of locked-down browsers, encrypted messaging apps, and verified communication for qualities like serendipity or chance encounters. Certainly in a world chock-full with bad actors, I am not arguing for less security, particularly for those of us most vulnerable to attack online… But I have to wonder how our intensive speculative energies, so far directed toward all possibility for attack, might be put to use in imagining a digital world that sees vulnerability as a value.

At the end of the day, this kind of article serves to show just how different our online, digital environment is from our physical reality. It’s a fascinating sideways look at the security guide as a ‘genre’. A recommended read in its entirety — and I really like the look of his blog!

Source: Jeffrey Moro

Platform censorship and the threat to democracy

TorrentFreak reports that Science Hub (commonly referred to as ‘Sci-Hub’) has had its account with Cloudflare terminated. Sci-Hub is sometimes known as ‘The Pirate Bay of Science’ as, in the words of Wikipedia, it “bypasses publisher paywalls by allowing access through educational institution proxies”:

Cloudflare’s actions are significant because the company previously protested a similar order. When the RIAA used the permanent injunction in the MP3Skull case to compel Cloudflare to disconnect the site, the CDN provider refused.

The RIAA argued that Cloudflare was operating “in active concert or participation” with the pirates. The CDN provider objected, but the court eventually ordered Cloudflare to take action, although it did not rule on the “active concert or participation” part.

In the Sci-Hub case “active concert or participation” is also a requirement for the injunction to apply. While it specifically mentions ISPs and search engines, ACS Director Glenn Ruskin previously stressed that companies won’t be targeted for simply linking users to Sci-Hub.

Cloudflare is a Content Delivery Network (CDN), and I use their service on my sites to improve web performance and security. They are the subject of some controversy at the moment, as the Electronic Frontier Foundation note:

From Cloudflare’s headline-making takedown of the Daily Stormer last autumn to YouTube’s summer restrictions on LGBTQ content, there’s been a surge in “voluntary” platform censorship. Companies—under pressure from lawmakers, shareholders, and the public alike—have ramped up restrictions on speech, adding new rules, adjusting their still-hidden algorithms and hiring more staff to moderate content. They have banned ads from certain sources and removed “offensive” but legal content.

It’s a big deal when intermediaries that large websites require for speed optimisation succumb to political pressure.

Given this history, we’re worried about how platforms are responding to new pressures. Not because there’s a slippery slope from judicious moderation to active censorship — but because we are already far down that slope. Regulation of our expression, thought, and association has already been ceded to unaccountable executives and enforced by minimally-trained, overworked staff, and hidden algorithms. Doubling down on this approach will not make it better. And yet, no amount of evidence has convinced the powers that be at major platforms like Facebook—or in governments around the world. Instead many, especially in policy circles, continue to push for companies to—magically and at scale—perfectly differentiate between speech that should be protected and speech that should be erased.

We live in contentious times, which are setting the course for a digitally mediated future. For every positive development (such as GDPR), there’s stuff like this…

Sources: TorrentFreak / EFF
