Tag: Cory Doctorow

Cory Doctorow on Big Tech, monopolies, and decentralisation

I’m not one to watch a 30-minute video, as it’s usually faster and more interesting to read the transcription. I’ll always make an exception, however, for Cory Doctorow, who not only speaks almost as fast as I can read, but is so enthusiastic and passionate about his work that it’s a lot more satisfying to see him speak.

You have to watch his keynote at the Decentralized Web Summit last month. It’s not only a history lesson and a warning, but he also puts things in ways that really make you see what the problem is. Inspiring stuff.

Source: Boing Boing

What the EU’s copyright directive means in practice

The EU is certainly coming out swinging against Big Tech this year. Or at least it thinks it is. Yesterday, the European Parliament voted in favour of three proposals, outlined by the EFF’s indefatigable Cory Doctorow as:

1. Article 13: the Copyright Filters. All but the smallest platforms will have to defensively adopt copyright filters that examine everything you post and censor anything judged to be a copyright infringement.

2. Article 11: Linking to the news using more than one word from the article is prohibited unless you’re using a service that bought a license from the news site you want to link to. News sites can charge anything they want for the right to quote them or refuse to sell altogether, effectively giving them the right to choose who can criticise them. Member states are permitted, but not required, to create exceptions and limitations to reduce the harm done by this new right.

3. Article 12a: No posting your own photos or videos of sports matches. Only the “organisers” of sports matches will have the right to publicly post any kind of record of the match. No posting your selfies, or short videos of exciting plays. You are the audience, your job is to sit where you’re told, passively watch the game and go home.
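To make the mechanics of Article 13 concrete: an upload filter is essentially a lookup of user content against a registry of claimed works. Below is a minimal sketch of that logic in Python, using a naive exact-match fingerprint purely for illustration; real systems (YouTube’s Content ID, for instance) use fuzzy perceptual matching, which is precisely what produces false positives on quotation and parody. Every name here is hypothetical, not any platform’s actual system.

```python
import hashlib

# Hypothetical registry of rightsholder-claimed works, keyed by a
# content fingerprint. A plain SHA-256 is illustration only: real
# filters use perceptual fingerprints that also match near-copies.
registered_works = {}

def fingerprint(data: bytes) -> str:
    """Reduce a piece of content to a comparable fingerprint."""
    return hashlib.sha256(data).hexdigest()

def register_work(title: str, data: bytes) -> None:
    """A rightsholder claims a work, adding it to the registry."""
    registered_works[fingerprint(data)] = title

def allow_upload(data: bytes) -> bool:
    """The blunt logic critics object to: any match is blocked,
    with no notion of quotation, parody, or fair dealing."""
    return fingerprint(data) not in registered_works

register_work("Hit Song", b"<audio bytes>")
print(allow_upload(b"<audio bytes>"))     # False -- blocked
print(allow_upload(b"my own recording"))  # True -- published
```

Exact hashing like this is trivially evaded, which is why real filters match fuzzily; and fuzzy matching is exactly where lawful uses get caught in the net.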

Music Week pointed out that Article 13 is particularly problematic for artists:

While the Copyright Directive covers a raft of digital issues, a sticking point within the music industry had been the adoption of Article 13 which seeks to put the responsibility on online platforms to police copyright in advance of posting user generated content on their services, either by restricting posts or by obtaining full licenses for copyrighted material.

The proof of the pudding, as The Verge points out, will be in the interpretation and implementation by EU member states:

However, those backing these provisions say the arguments above are the result of scaremongering by big US tech companies, eager to keep control of the web’s biggest platforms. They point to existing laws and amendments to the directive as proof it won’t be abused in this way. These include exemptions for sites like GitHub and Wikipedia from Article 13, and exceptions to the “link tax” that allow for the sharing of mere hyperlinks and “individual words” describing articles without constraint.

I can’t help but think this is a ham-fisted way of dealing with a non-problem. As Doctorow also states, part of the issue here is the assumption that competition in a free market is at the core of creativity. I’d argue that’s untrue: culture is built by respectfully appropriating and building on the work of others. These proposals, as they currently stand (and as I currently understand them), actively undermine internet culture.

Source: Music Week / EFF / The Verge

Cory Doctorow on the corruption at the heart of Facebook

I like Cory Doctorow. He’s a gifted communicator who wears his heart on his sleeve. In this article, he talks about Facebook and how what it has wrought is a result of the corruption at its very heart.

It’s great that the privacy-matters message is finally reaching a wider audience, and it’s exciting to think that we’re approaching a tipping point for indifference to privacy and surveillance.

But while the acknowledgment of the problem of Big Tech is most welcome, I am worried that the diagnosis is wrong.

The problem is that we’re confusing automated persuasion with automated targeting. Laughable lies about Brexit, Mexican rapists, and creeping Sharia law didn’t convince otherwise sensible people that up was down and the sky was green.

Rather, the sophisticated targeting systems available through Facebook, Google, Twitter, and other Big Tech ad platforms made it easy to find the racist, xenophobic, fearful, angry people who wanted to believe that foreigners were destroying their country while being bankrolled by George Soros.

So, for example, people seem to think that Facebook advertisements caused people to vote for Trump. As if they were going to vote for someone else, and then changed their minds as a direct result of viewing ads. That’s not how it works.
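Mechanically, what Doctorow describes is selection rather than persuasion: the platform filters its user base for people whose inferred traits already match an advertiser’s criteria. A toy sketch (all data and names here are invented for illustration):

```python
# Toy model of ad targeting: nobody's mind is changed; the platform
# simply returns the users whose inferred interests already match
# the campaign's criteria.
users = [
    {"id": "alice", "inferred_interests": {"gardening", "jazz"}},
    {"id": "bob",   "inferred_interests": {"nativist pages", "talk radio"}},
    {"id": "carol", "inferred_interests": {"nativist pages", "football"}},
]

def build_audience(users, targeting_criteria):
    """Select the users already receptive to the message."""
    return [u["id"] for u in users
            if targeting_criteria & u["inferred_interests"]]

print(build_audience(users, {"nativist pages"}))  # ['bob', 'carol']
```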

Companies such as Cambridge Analytica might claim that they can rig elections and change people’s minds, but they’re not actually that sophisticated.

Cambridge Analytica are like stage mentalists: they’re doing something labor-intensive and pretending that it’s something supernatural. A stage mentalist will train for years to learn to quickly memorize a deck of cards and then claim that they can name your card thanks to their psychic powers. You never see the unglamorous, unimpressive memorization practice. Cambridge Analytica uses Facebook to find racist jerks and tell them to vote for Trump and then they claim that they’ve discovered a mystical way to get otherwise sensible people to vote for maniacs.

This isn’t to say that persuasion is impossible. Automated disinformation campaigns can flood the channel with contradictory, seemingly plausible accounts for the current state of affairs, making it hard for a casual observer to make sense of events. Long-term repetition of a consistent narrative, even a manifestly unhinged one, can create doubt and find adherents – think of climate change denial, or George Soros conspiracies, or the anti-vaccine movement.

These are long, slow processes, though, that make tiny changes in public opinion over the course of years, and they work best when there are other conditions that support them – for example, fascist, xenophobic, and nativist movements that are the handmaidens of austerity and privation. When you don’t have enough for a long time, you’re ripe for messages blaming your neighbors for having deprived you of your fair share.

Advertising and influencing work best when you provide a message that people already agree with in a way that they can easily share with others. The ‘long, slow processes’ that Doctorow refers to have been practised offline as well (think of Nazi propaganda, for example). Dark adverts on Facebook are tapping into feelings and reactions that aren’t peculiar to the digital world.

Facebook has thrived by providing ways for people to connect and communicate with one another. Unfortunately, because they’re so focused on profit over people, they’ve done a spectacularly bad job at making sure that the spaces in which people connect are healthy spaces that respect democracy.

There’s an old-fashioned word for this: corruption. In corrupt systems, a few bad actors cost everyone else billions in order to bring in millions – the savings a factory can realize from dumping pollution in the water supply are much smaller than the costs we all bear from being poisoned by effluent. But the costs are widely diffused while the gains are tightly concentrated, so the beneficiaries of corruption can always outspend their victims to stay clear.

Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters.

That last phrase is right on the money.

Source: Locus magazine

No-one wants a single identity, online or offline

It makes sense for companies reliant on advertising not only to get as much data as they can about you, but also to make sure that you have a single identity on their platform with which to associate it.

This article by Cory Doctorow in BoingBoing reports on some research around young people and social media. As Doctorow states:

Social media has always had a real-names problem. Social media companies want their users to use their real names because it makes it easier to advertise to them. Users want to be able to show different facets of their identities to different people, because only a sociopath interacts with their boss, their kids, and their spouse in the same way.

I was talking to one of my Moodle colleagues about how, in our mid-thirties, we’re a ‘bridging’ generation between those who only went online in adulthood, and those who have only ever known a world with the internet. I got online for the first time when I was about fourteen or fifteen.

Those younger than me are well aware of the perils and pitfalls of a single online identity:

Amy Lancaster from the Journalism and Digital Communications school at the University of Central Lancashire studies the way that young people resent “the way Facebook ties them into a fixed self…[linking] different areas of a person’s life, carrying over from school to university to work.”

I think Doctorow has made an error with Amy’s surname: it’s given as ‘Binns’, not ‘Lancaster’, in both the journal article and the original post.

Binns writes:

Young people know their future employers, parents and grandparents are present online, and so they behave accordingly. And it’s not only older people that affect behaviour.

My research shows young people dislike the way Facebook ties them into a fixed self. Facebook insists on real names and links different areas of a person’s life, carrying over from school to university to work. This arguably restricts the freedom to explore new identities – one of the key benefits of the web.

The desire for escapable transience over damning permanence has driven Snapchat’s success, precisely because it’s a messaging app that allows users to capture videos and pictures that are quickly removed from the service.

This is important for the work I’m leading around Project MoodleNet. It’s not just teenagers who want “escapable transience over damning permanence”.
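In engineering terms, ‘escapable transience’ just means expiry is built into the data model rather than bolted on afterwards. Here’s a minimal sketch, purely illustrative (not how Snapchat, or anything in Project MoodleNet, is actually implemented): messages carry a time-to-live and become unreadable once it lapses.

```python
import time

class EphemeralStore:
    """Messages expire after a time-to-live. Expiry is the default
    and permanence the exception -- the inverse of a normal database."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._messages = {}  # msg_id -> (posted_at, body)

    def post(self, msg_id: str, body: str) -> None:
        self._messages[msg_id] = (time.time(), body)

    def read(self, msg_id: str):
        entry = self._messages.get(msg_id)
        if entry is None:
            return None
        posted_at, body = entry
        if time.time() - posted_at > self.ttl:
            del self._messages[msg_id]  # purge expired content on access
            return None
        return body

store = EphemeralStore(ttl_seconds=10)
store.post("selfie", "<snapshot bytes>")
print(store.read("selfie"))  # "<snapshot bytes>" while within the TTL
```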

Source: BoingBoing

Attention is an arms race

Cory Doctorow writes:

There is a war for your attention, and like all adversarial scenarios, the sides develop new countermeasures and then new tactics to overcome those countermeasures.

Using a metaphor from immunology, he notes that we become immune to certain types of manipulation over time:

When a new attentional soft spot is discovered, the world can change overnight. One day, every­one you know is signal boosting, retweeting, and posting Upworthy headlines like “This video might hurt to watch. Luckily, it might also explain why,” or “Most Of These People Do The Right Thing, But The Guys At The End? I Wish I Could Yell At Them.” The style was compelling at first, then reductive and simplistic, then annoying. Now it’s ironic (at best). Some people are definitely still susceptible to “This Is The Most Inspiring Yet Depressing Yet Hilarious Yet Horrifying Yet Heartwarming Grad Speech,” but the rest of us have adapted, and these headlines bounce off of our attention like pre-penicillin bacteria being batted aside by our 21st century immune systems.

However, the thing I’m concerned about is the kind of AI-based manipulation that is forever shape-shifting. How do we become immune to a moving target?
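One way to see why a shape-shifting adversary is different in kind: model habituation as effectiveness decaying with each repeated exposure to the same tactic. A toy calculation (the numbers are invented purely to illustrate the dynamic):

```python
# Toy habituation model: every exposure to a given tactic halves its
# effectiveness (we "develop antibodies"), but an adversary that keeps
# minting novel tactics resets everyone's exposure count to zero.
def response_rate(base_rate: float, exposures: int, decay: float = 0.5) -> float:
    return base_rate * (decay ** exposures)

# A static tactic burns out as the audience adapts:
static_tactic = [response_rate(0.10, n) for n in range(5)]
print([round(r, 5) for r in static_tactic])
# [0.1, 0.05, 0.025, 0.0125, 0.00625]

# A moving target: a new tactic every round means exposures stay at 0,
# so the response rate never decays.
shape_shifting = [response_rate(0.10, 0) for _ in range(5)]
print(shape_shifting)  # [0.1, 0.1, 0.1, 0.1, 0.1]
```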

Source: Locus magazine