Tag: abuse

Reducing offensive social media messages by intervening during content-creation

Six per cent isn’t a lot, but perhaps a number of approaches working together can help with this?

The proliferation of harmful and offensive content is a problem that many online platforms face today. One of the most common approaches for moderating offensive content online is the identification and removal of such content after it has been posted, increasingly assisted by machine learning algorithms. More recently, platforms have begun employing moderation approaches which seek to intervene prior to offensive content being posted. In this paper, we conduct an online randomized controlled experiment on Twitter to evaluate a new intervention that aims to encourage participants to reconsider their offensive content and, ultimately, seeks to reduce the amount of offensive content on the platform. The intervention prompts users who are about to post harmful content with an opportunity to pause and reconsider their Tweet. We find that users in our treatment prompted with this intervention posted 6% fewer offensive Tweets than non-prompted users in our control. This decrease in the creation of offensive content can be attributed not just to the deletion and revision of prompted Tweets — we also observed a decrease in both the number of offensive Tweets that prompted users create in the future and the number of offensive replies to prompted Tweets. We conclude that interventions allowing users to reconsider their comments can be an effective mechanism for reducing offensive content online.

Source: Reconsidering Tweets: Intervening During Tweet Creation Decreases Offensive Content | arXiv.org

Abusing AI girlfriends

I don’t often share this kind of thing because I find it distressing. We shouldn’t be surprised, though, that the kind of people who physically, sexually, and emotionally abuse other human beings do so in virtual worlds, too.

In general, chatbot abuse is disconcerting, both for the people who experience distress from it and the people who carry it out. It’s also an increasingly pertinent ethical dilemma as relationships between humans and bots become more widespread — after all, most people have used a virtual assistant at least once.

On the one hand, users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans. On the other hand, being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic.

But it’s worth noting that chatbot abuse often has a gendered component. Although not exclusively, it seems that it’s often men creating a digital girlfriend, only to then punish her with words and simulated aggression. These users’ violence, even when carried out on a cluster of code, reflects the reality of domestic violence against women.

Source: Men Are Creating AI Girlfriends and Then Verbally Abusing Them | Futurism

Microcast #093 — Boring hot dogs

Everything from life-shortening foods to Twitter’s attempt to control feuds.

Show notes

Image via Pexels

Background music: Shimmers by Synth Soundscapes (aka Mentat)