What kind of world do we want? (or, why regulation matters)

I saw a thread on Mastodon recently that included this image:

Three identical images titled 'Space required to Transport 48 People', each showing cars backed up down a road, captioned 'Car', 'Electric Car' and 'Autonomous Car' respectively.

Someone else replied with a meme showing a series of images with the phrase “They feed us poison / so we buy their ‘cures’ / while they ban our medicine”. The poison in this case is cars burning fossil fuels, the cures are electric and/or autonomous cars, and the medicine is public transport.

There’s a similar kind of thinking in the world of tech, with at least one interviewee in the documentary The Social Dilemma saying that people should be paid for their data. I’ve always been uneasy about this, so it’s good to see the EFF come out strongly against it:

Let’s be clear: getting paid for your data—probably no more than a handful of dollars at most—isn’t going to fix what’s wrong with privacy today. Yes, a data dividend may sound at first blush like a way to get some extra money and stick it to tech companies. But that line of thinking is misguided, and falls apart quickly when applied to the reality of privacy today. In truth, the data dividend scheme hurts consumers, benefits companies, and frames privacy as a commodity rather than a right.

EFF strongly opposes data dividends and policies that lay the groundwork for people to think of the monetary value of their data rather than view it as a fundamental right. You wouldn’t place a price tag on your freedom to speak. We shouldn’t place one on our privacy, either.

Hayley Tsukayama, Why Getting Paid for Your Data Is a Bad Deal (EFF)

As the EFF points out, who would get to set the price of that data, anyway? Individual data is useful to companies, but so is data in aggregate. Would such plans cover that too?

Facebook makes around $7 per user, per quarter, or roughly $28 per user, per year. Even if they gave you all of that, would it be a fair exchange?

Those small checks in exchange for intimate details about you are not a fairer trade than we have now. The companies would still have nearly unlimited power to do what they want with your data. That would be a bargain for the companies, who could then wipe their hands of concerns about privacy. But it would leave users in the lurch.

All that adds up to a stark conclusion: if where we’ve been is any indication of where we’re going, there won’t be much benefit from a data dividend. What we really need is stronger privacy laws to protect how businesses process our data—which we can, and should do, as a separate and more protective measure.

Hayley Tsukayama, Why Getting Paid for Your Data Is a Bad Deal (EFF)

As the rest of the article goes on to explain, we’re already in a world of ‘pay for privacy’ which is exacerbating the gulf between the haves and the have-nots. We need regulation and legislation to curb this before it gallops away from us.

You can’t tech your way out of problems the tech didn’t create

The Electronic Frontier Foundation (EFF) is a US-based non-profit that exists to defend civil liberties in the digital world. They’ve been around for 30 years, and I support them financially on a monthly basis.

In this article, Corynne McSherry, EFF’s Legal Director, outlines the futility of attempts by ‘Big Social’ to do content moderation at scale:

[C]ontent moderation is a fundamentally broken system. It is inconsistent and confusing, and as layer upon layer of policy is added to a system that employs both human moderators and automated technologies, it is increasingly error-prone. Even well-meaning efforts to control misinformation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

Ultimately, these monolithic social networks have a problem with false positives. It’s in their interests to be over-zealous, as they’re increasingly under the watchful eye of regulators and governments.

We have been watching closely as Facebook, YouTube, and Twitter, while disclaiming any interest in being “the arbiters of truth,” have all adjusted their policies over the past several months to try to arbitrate lies—or at least flag them. And we’re worried, especially when we look abroad. Already this year, an attempt by Facebook to counter election misinformation targeting Tunisia, Togo, Côte d’Ivoire, and seven other African countries resulted in the accidental removal of accounts belonging to dozens of Tunisian journalists and activists, some of whom had used the platform during the country’s 2011 revolution. While some of those users’ accounts were restored, others—mostly belonging to artists—were not.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

McSherry’s analysis is spot-on: it’s the algorithms that are a problem here. Social networks employ these algorithms because of their size and structure, and because of the cost of human-based content moderation. After all, these are companies with shareholders.

Algorithms used by Facebook’s Newsfeed or Twitter’s timeline make decisions about which news items, ads, and user-generated content to promote and which to hide. That kind of curation can play an amplifying role for some types of incendiary content, despite the efforts of platforms like Facebook to tweak their algorithms to “disincentivize” or “downrank” it. Features designed to help people find content they’ll like can too easily funnel them into a rabbit hole of disinformation.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

She includes useful questions for social networks to answer about content moderation:

  • Is the approach narrowly tailored or a categorical ban?
  • Does it empower users?
  • Is it transparent?
  • Is the policy consistent with human rights principles?

But, ultimately…

You can’t tech your way out of problems the tech didn’t create. And even where content moderation has a role to play, history tells us to be wary. Content moderation at scale is impossible to do perfectly, and nearly impossible to do well, even under the most transparent, sensible, and fair conditions.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

I’m so pleased that I don’t use Facebook products, and that I only use Twitter these days as a place to publish links to my writing.

Instead, I’m much happier on the Fediverse, a place where, if you don’t like the content moderation approach of the instance you’re on, you can take your digital knapsack and decide to call another place home. You can find me here (for now!).

When people are free to do as they please, they usually imitate each other

Graphic showing a hospital, face masks, and hand washing

😷 How do pandemics end?

🙆 How I talk to the victims of conspiracy theories

🔒 The Github youtube-dl Takedown Isn’t Just a Problem of American Law

🖥️ The Raspberry Pi 400 – Teardown and Review

🐧 As a former social media analyst, I’m quitting Twitter


Quotation-as-title by Eric Hoffer. Image from top-linked post.