Tag: social networking

It’s time to accept that centralised social media won’t change

A great blog post by Chris Trottier about actually doing something to address the problems with centralised social media: refusing to be a part of it any more.

As an aside, once you see the problem with capitalism mediating every human relationship and interest, you can’t un-see it. For example, I’m extremely hostile to advertising. I really can’t stand it these days.

Centralized social media won’t change. No regulatory bodies are coming to the rescue. If you hang around Twitter or Facebook long enough, no benevolent CEO will sprinkle magic pixie dust to make it better.

Acceptance is no small thing. If you’ve spent years on a social network, investing in relationships, it’s hard to accept that all that effort was a waste. I’m not talking about the people you build friendships with, but the companies and services that connect you. Twitter and Facebook are the nuclear ooze of the Internet, and nothing’s going to make them better.

It’s time to let go. Toxic social media doesn’t care about you, it just wants to exploit you. To them, you’re inventory, a blip in a database.

[…]

Getting rid of toxic social media is about building a future without it. There are thousands of developers working on an open web, all of whom are dedicated to building a better Internet. Still, if we want those walled gardens to be dismantled, we must let developers know it’s worthwhile to code an alternative.

Thus, it’s time to accept centralized social media for what it is: it is toxic and won’t change. Once you accept this, vote with your feet. Then vote with your wallet.

Source: What should we do about toxic social media? | Peerverse

Get off Twitter if you want to see your friends’ posts

Tyler Freeman wrote a script to analyse the tweets he’s shown in his algorithmic Twitter timeline. 90% of his friends (i.e. the people he chose to follow) never made it to the main feed.

The diagram below shows the 90% in grey, with the people he follows in orange, strangers in blue, and ads in pink. This is what happens when you have software with shareholders.

I am following over 2,000 people, so to only see tweets from 10 percent of them is disconcerting; 90 percent of the people I intentionally follow, and want to hear from, are being ignored/hidden from me. When we dig deeper, it gets even worse.

[…]

The way I see it, the centralized path via government regulation is a short-term fix which may be necessary given the amount of power our current societal structures allot to social media corporations, but the long-term fix is to put the power into the hands of each user instead—especially considering that centralized power structures are how we got into this mess in the first place. I’m eager to see what this new world of decentralization will bring us, and how it could afford us more agency in how we donate our attention and how we manage our privacy.

Source: Does Twitter’s Algorithm Hate Your Friends? | Nightingale
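
As an illustration of what such a script has to do, here’s a minimal Python sketch. It is not Freeman’s actual code; the input files, the author_id and promoted fields, and the overall data shape are hypothetical stand-ins for whatever a timeline capture produces.

import json

# Hypothetical inputs, not Freeman's format:
# timeline.json  -- every tweet shown in the algorithmic feed, as a list of
#                   {"author_id": ..., "promoted": true/false} records
# following.json -- the list of account IDs the user follows
with open("timeline.json") as f:
    timeline = json.load(f)
with open("following.json") as f:
    following = set(json.load(f))

ads = sum(1 for t in timeline if t.get("promoted"))
from_friends = sum(1 for t in timeline
                   if not t.get("promoted") and t["author_id"] in following)
from_strangers = len(timeline) - ads - from_friends

# Followed accounts that never appeared in the feed at all -- the grey 90%.
shown = {t["author_id"] for t in timeline if not t.get("promoted")}
never_shown = following - shown

print(f"{len(timeline)} tweets shown: {from_friends} from friends, "
      f"{from_strangers} from strangers, {ads} ads")
print(f"{len(never_shown)} of {len(following)} followed accounts "
      f"({len(never_shown) / len(following):.0%}) never appeared")

The telling number is the last one: the feed’s composition on its own says little until you compare it against the full following list and see how many accounts are silently filtered out.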

Reducing offensive social media messages by intervening during content-creation

Six per cent isn’t a lot, but perhaps a number of approaches working together can add up?

The proliferation of harmful and offensive content is a problem that many online platforms face today. One of the most common approaches for moderating offensive content online is via the identification and removal after it has been posted, increasingly assisted by machine learning algorithms. More recently, platforms have begun employing moderation approaches which seek to intervene prior to offensive content being posted. In this paper, we conduct an online randomized controlled experiment on Twitter to evaluate a new intervention that aims to encourage participants to reconsider their offensive content and, ultimately, seeks to reduce the amount of offensive content on the platform. The intervention prompts users who are about to post harmful content with an opportunity to pause and reconsider their Tweet. We find that users in our treatment prompted with this intervention posted 6% fewer offensive Tweets than non-prompted users in our control. This decrease in the creation of offensive content can be attributed not just to the deletion and revision of prompted Tweets — we also observed a decrease in both the number of offensive Tweets that prompted users create in the future and the number of offensive replies to prompted Tweets. We conclude that interventions allowing users to reconsider their comments can be an effective mechanism for reducing offensive content online.

Source: Reconsidering Tweets: Intervening During Tweet Creation Decreases Offensive Content | arXiv.org
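
The mechanism being tested is easy to picture in code. Here’s a minimal sketch of a reconsider-before-posting flow; the is_offensive stand-in (a toy keyword check) and the prompt wording are assumptions for illustration, not the classifier or interface Twitter actually used.

# Toy placeholder list; a real system would use a trained classifier.
OFFENSIVE_TERMS = {"idiot", "moron"}

def is_offensive(text: str) -> bool:
    # Stand-in classifier: flag drafts containing any placeholder term.
    return bool(set(text.lower().split()) & OFFENSIVE_TERMS)

def submit_post(draft: str):
    # Give the author a chance to pause, revise, or withdraw a flagged draft.
    while draft and is_offensive(draft):
        choice = input("This may be hurtful. [p]ost anyway / [e]dit / [d]elete? ")
        if choice == "p":
            break  # post anyway, unchanged
        if choice == "d":
            return None  # draft withdrawn
        draft = input("Revised draft: ")
    return draft  # posted as-is or after revision

The paper’s key finding maps onto the loop above: some prompted drafts get deleted or revised on the spot, but prompted users also go on to post fewer offensive tweets later, so the intervention’s effect is larger than the immediate edit rate alone.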