Tag: moderation

You can’t tech your way out of problems the tech didn’t create

The Electronic Frontier Foundation (EFF) is a US-based non-profit that exists to defend civil liberties in the digital world. They’ve been around for 30 years, and I support them financially on a monthly basis.

In this article, Corynne McSherry, EFF’s Legal Director, outlines the futility of attempts by ‘Big Social’ to do content moderation at scale:

[C]ontent moderation is a fundamentally broken system. It is inconsistent and confusing, and as layer upon layer of policy is added to a system that employs both human moderators and automated technologies, it is increasingly error-prone. Even well-meaning efforts to control misinformation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

Ultimately, these monolithic social networks have a problem with false positives. It’s in their interests to be over-zealous, as they’re increasingly under the watchful eye of regulators and governments.

We have been watching closely as Facebook, YouTube, and Twitter, while disclaiming any interest in being “the arbiters of truth,” have all adjusted their policies over the past several months to try arbitrate lies—or at least flag them. And we’re worried, especially when we look abroad. Already this year, an attempt by Facebook to counter election misinformation targeting Tunisia, Togo, Côte d’Ivoire, and seven other African countries resulted in the accidental removal of accounts belonging to dozens of Tunisian journalists and activists, some of whom had used the platform during the country’s 2011 revolution. While some of those users’ accounts were restored, others—mostly belonging to artists—were not.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

McSherry’s analysis is spot-on: it’s the algorithms that are a problem here. Social networks employ these algorithms because of their size and structure, and because of the cost of human-based content moderation. After all, these are companies with shareholders.

Algorithms used by Facebook’s Newsfeed or Twitter’s timeline make decisions about which news items, ads, and user-generated content to promote and which to hide. That kind of curation can play an amplifying role for some types of incendiary content, despite the efforts of platforms like Facebook to tweak their algorithms to “disincentivize” or “downrank” it. Features designed to help people find content they’ll like can too easily funnel them into a rabbit hole of disinformation.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

She includes useful questions for social networks to answer about content moderation:

  • Is the approach narrowly tailored or a categorical ban?
  • Does it empower users?
  • Is it transparent?
  • Is the policy consistent with human rights principles?

But, ultimately…

You can’t tech your way out of problems the tech didn’t create. And even where content moderation has a role to play, history tells us to be wary. Content moderation at scale is impossible to do perfectly, and nearly impossible to do well, even under the most transparent, sensible, and fair conditions.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

I’m so pleased that I don’t use Facebook products, and that I only use Twitter these days as a place to publish links to my writing.

Instead, I’m much happier on the Fediverse, a place where if you don’t like the content moderation approach of the instance you’re on, you can take your digital knapsack and decide to call another place home. You can find me here (for now!).

The shoe that fits one person pinches another; there is no recipe for living that suits all cases

Twitter, the Fediverse, and MoodleNet

In a recent blog post, Twitter made a big deal of the fact that they are testing new conversation settings.

While some people don’t necessarily think this is a good idea, I think it’s a step forward. In fact, I’ve already tried out this functionality… on the Fediverse.

The Fediverse (a portmanteau of “federation” and “universe”) is the ensemble of federated (i.e. interconnected) servers that are used for web publishing (i.e. social networking, microblogging, blogging, or websites) and file hosting, but which, while independently hosted, can intercommunicate with each other.

Wikipedia

That’s a mouthful, so let’s come back to the details in a moment and start with a concrete example instead. The screenshot below shows what Twitter has learned from Mastodon (and other federated social networks) about how to make conversations better.

Composing a ‘toot’ in Mastodon and choosing who can see it

The Fediverse feels like a very different place to Twitter. There’s a reason why you will find the marginalised, the oppressed, and very niche interests here: it’s a safe space. And, despite macho right-leaning posturing, we all need spaces online where we can be ourselves.


Of course, ‘federation’ and ‘decentralisation’ aren’t words most of us use on a day-to-day basis. It’s important to define them here, so you can see the inherent difference between using something like Twitter and using something like Mastodon.

(Note: I can pretty much guarantee that by 2030 you’ll be using a federated social network of some description. After all, in 2007 people told me Twitter would never catch on, yet a few years later pretty much everyone was using it.)

Taken from docs.joinmastodon.org

Check out the diagram above. On the left is the representation of a centralised platform. An example of that would be Facebook. You’re either on Facebook, or you’re not. I don’t use any of Facebook’s products out of a concern for privacy, civil liberties, and the threat they pose to democracy. As a result, my ethical stance means that anything posted to Facebook, Instagram, or WhatsApp is inaccessible to me. You either have an account on their servers, or you don’t.

On the right of the diagram, you can see the representation of a distributed social network. Here, every server has a copy of what is on every other server. This is how BitTorrent works, and it’s great for resilience and fault tolerance. There are a couple of examples of social networks that use this approach (e.g. Scuttlebutt), but they’re primarily used in situations where users have intermittent internet access.

Then, in the middle, is a federated social network. This is what I’m focusing on in this article. It’s kind of how email works: you can email anyone else in the world, no matter which email platform they use. Gmail users email Outlook users email Fastmail users. Only the data you exchange with the people you’re communicating with resides on each email server; you don’t have a copy of every email in the whole network!

So, just as with email, federated social networks have an underlying protocol to ensure that messages from one platform can be understood, displayed, and replied to by another. Those making the platform, of course, have to bake that functionality in; Facebook, Twitter, and the like choose not to do so.
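
To make that a little more concrete, here’s a minimal, illustrative sketch of the kind of message federated servers pass between each other, based on the ActivityPub vocabulary. The account names and URLs are hypothetical, and real servers add signatures, addressing, and plenty of other metadata; the point is simply that a post is a small, structured document any compliant server can parse.

```python
import json

# A hypothetical post (a "Note") wrapped in a "Create" activity: roughly the
# shape of document that ActivityPub servers exchange. The actor and URLs
# below are made up for illustration.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "object": {
        "type": "Note",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello from one server to another!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

# Delivery works much like email between mail servers: the sending server
# POSTs this JSON to each follower's inbox on *their* server, for example
# POST https://another.example/users/bob/inbox
print(json.dumps(activity, indent=2))
```

Because that document is defined by an open standard rather than by any one company, any platform that speaks the protocol can display it and reply to it.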

What does this mean in practice? Well, let’s take three examples. The first is from around 10 years ago, when I decided to delete my Facebook account. That means I haven’t had an account there, or been able to access any non-public information on that social network, for a decade.

The second is from about five years ago, when I ditched Gmail for ProtonMail because I wanted to improve the privacy and security of my personal email account. Leaving Gmail didn’t mean giving up having an email account.

And the third: a couple of years ago, I decided to leave my Mastodon-powered social.coop account as I was getting some hassle there. Instead of quitting the social network entirely, as I would have had to do if this had happened on Facebook, I could quickly and easily move my account to mastodon.social. All of my settings were imported, including all of the people I was following!


An aside about moderation. What Twitter is doing with its new functionality is giving its users tools to do some of their own moderation. Other than that, the only moderation possible within the Twitter network is to ‘report’ tweets for spam or abuse. Moderators, acting at a network-wide scale, then need to figure out whether the tweet contravened their guidelines. Having reported tweets before, I know this can take days and is often not resolved to anyone’s satisfaction.

Contrast that with the Fediverse, where people join instances depending on a range of factors, including their geographic location, languages spoken, political and religious beliefs, tolerance for profanity, and so on. Fediverse users access the wider network through a server moderated by people they trust. If they stop trusting those moderators, they can move their account elsewhere, or even host their own server.

This leads to much faster, more local, and more effective moderation. Instance-level blocking is common, as it should be. After all, you have the right to discuss things I find hateful with other people, but that doesn’t mean I have to see them on my timeline.


Post using PixelFed

You may be wondering how this looks and feels in practice. The above screenshot is from PixelFed, a federated social network that is a bit like Instagram. The difference, as I’m sure you’ve already guessed, is that it’s federated!

Mastodon timeline showing update from PixelFed

Check out the two posts on my Mastodon timeline above.

The top post is an example of someone on Mastodon ‘republishing’ the same thing they’ve posted on Twitter. They’ve literally had to do the manual work of separately uploading the image and entering the text on each social network, and have to maintain two separate accounts.

The bottom post, on the other hand, is my PixelFed post showing up in my Mastodon feed. No extra work was involved here: anyone’s Mastodon account can follow anyone’s PixelFed account, and it’s all down to the magic of open, federated protocols. In this case, ActivityPub.
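
If you’re curious how a Mastodon account can find a PixelFed account to follow in the first place, the discovery step is handled by WebFinger, which ActivityPub platforms use to resolve a handle like user@host into a profile URL. Here’s a rough, illustrative sketch; the handle and domain are hypothetical, and real implementations add error handling and caching.

```python
import requests

def lookup_actor(handle: str) -> str:
    """Resolve a Fediverse handle like 'alice@pixelfed.example' to the URL
    of their ActivityPub actor (profile) document via WebFinger."""
    user, host = handle.split("@")
    response = requests.get(
        f"https://{host}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{host}"},
        timeout=10,
    )
    response.raise_for_status()
    # The WebFinger response lists links; the ActivityPub actor is the one
    # marked rel="self" with an ActivityStreams media type.
    for link in response.json().get("links", []):
        if link.get("rel") == "self" and "activity+json" in link.get("type", ""):
            return link["href"]
    raise ValueError(f"No ActivityPub actor found for {handle}")

# e.g. lookup_actor("alice@pixelfed.example") might return something like
# "https://pixelfed.example/users/alice", which a Mastodon server can then
# send a Follow activity to.
```

Once the actor document has been found, following, boosting, and replying all happen by exchanging ActivityPub activities like the one sketched earlier.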

There are many federated social networks — many more, in fact, than are listed on the Wikipedia page for Fediverse. One of my favourites is Misskey, just because it’s so… Japanese. You can choose whatever suits you, and everything works together.

As the Electronic Frontier Foundation said back in 2011 when writing about federated social networks:

The best way for online social networking to become safer, more flexible, and more innovative is to distribute the ability and authority to the world’s users and developers, whose various needs and imaginations can do far more than what any single company could achieve.

Richard Esguerra (EFF)

As many people reading this will be aware, I have skin in this game, a dog in this fight, a horse in this race because of MoodleNet. The difference is that MoodleNet is not only a federated social network, but a decentralised digital commons. Educators join communities to curate collections of openly-licensed resources.

This poses additional design challenges beyond those faced by existing federated social networks. We’re pretty close now to a v1.0 beta, and have built upon the fantastic thinking and approaches of other federated social networks. In addition, we’ve added functionality that is specific (at the moment, at least) to MoodleNet and suits our target audience.

No video above? Try this!

So, not so much a ‘conclusion’ to this particular piece of writing as a screencast video to show you what I mean with MoodleNet, along with the judicious use of this emoji: 🤔


Quotation-as-title from Carl Jung. Header image by Md. Zahid Hasan Joy

Friday flowerings

Did you see these things this week?

  • Happy 25th year, blogging. You’ve grown up, but social media is still having a brawl (The Guardian) — “The furore over social media and its impact on democracy has obscured the fact that the blogosphere not only continues to exist, but also to fulfil many of the functions of a functioning public sphere. And it’s massive. One source, for example, estimates that more than 409 million people view more than 20bn blog pages each month and that users post 70m new posts and 77m new comments each month. Another source claims that of the 1.7 bn websites in the world, about 500m are blogs. And WordPress.com alone hosts blogs in 120 languages, 71% of them in English.”
  • Emmanuel Macron Wants to Scan Your Face (The Washington Post) — “President Emmanuel Macron’s administration is set to be the first in Europe to use facial recognition when providing citizens with a secure digital identity for accessing more than 500 public services online… The roll-out is tainted by opposition from France’s data regulator, which argues the electronic ID breaches European Union rules on consent – one of the building blocks of the bloc’s General Data Protection Regulation laws – by forcing everyone signing up to the service to use the facial recognition, whether they like it or not.”
  • This is your phone on feminism (The Conversationalist) — “Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true. This cognitive dissonance confuses and paralyses us. And look around. Everyone has a smartphone. So it’s probably not so bad, and anyway, that’s just how things work. Right?”
  • Google’s auto-delete tools are practically worthless for privacy (Fast Company) — “In reality, these auto-delete tools accomplish little for users, even as they generate positive PR for Google. Experts say that by the time three months rolls around, Google has already extracted nearly all the potential value from users’ data, and from an advertising standpoint, data becomes practically worthless when it’s more than a few months old.”
  • Audrey Watters (Uses This) — “For me, the ideal set-up is much less about the hardware or software I am using. It’s about the ideas that I’m thinking through and whether or not I can sort them out and shape them up in ways that make for a good piece of writing. Ideally, that does require some comfort — a space for sustained concentration. (I know better than to require an ideal set up in order to write. I’d never get anything done.)”
  • Computer Files Are Going Extinct (OneZero) — “Files are skeuomorphic. That’s a fancy word that just means they’re a digital concept that mirrors a physical item. A Word document, for example, is like a piece of paper, sitting on your desk(top). A JPEG is like a painting, and so on. They each have a little icon that looks like the physical thing they represent. A pile of paper, a picture frame, a manila folder. It’s kind of charming really.”
  • Why Technologists Fail to Think of Moderation as a Virtue and Other Stories About AI (The LA Review of Books) — “Speculative fiction about AI can move us to think outside the well-trodden clichés — especially when it considers how technologies concretely impact human lives — through the influence of supersized mediators, like governments and corporations.”
  • Inside Mozilla’s 18-month effort to market without Facebook (Digiday) — “The decision to focus on data privacy in marketing the Mozilla brand came from research conducted by the company four years ago into the rise of consumers who make values-based decisions on not only what they purchase but where they spend their time.”
  • Core human values not eyeballs (Cubic Garden) — “Theres so much more to do, but the aims are high and important for not just the BBC, but all public service entities around the world. Measuring the impact and quality on peoples lives beyond the shallow meaningless metrics for public service is critical.”

Image: The why is often invisible via Jessica Hagy’s Indexed
