Tag: social networks (page 1 of 2)

Our nature is such that the common duties of human relationships occupy a great part of the course of our life

Michel de Montaigne, one of my favourite writers, had a very good friend, a ‘soulmate’ in the form of Étienne de la Boétie. He seems to have been quite the character, and an early influence for anarchist thought, before dying of the plague in 1563 at the age of 32.

His main work, translated into English as The Politics of Obedience: The Discourse of Voluntary Servitude, suggests that the reason we get tyrants and other oppressors is that we, the people, allow them to have power over us. It all seems very relevant to our times, despite being written around 450 years ago!

We live in a time of what Patrick Stokes in New Philosopher calls ‘false media balance’. It’s worth quoting at length, I think:

The problem is that very often the controversy in question is over whether there even is a controversy to begin with. Some people think the world is flat: does that mean the shape of the world is a controversial topic? If you think the mere fact of disagreement means there’s a controversy there, then pretty much any topic you care to mention will turn out to be controversial if you look hard enough. But in a more substantial sense, there’s no real controversy here at all. The scientific journals aren’t full of heated arguments over the shape of the planet. The university geography departments aren’t divided into warring camps of flattists and spherists. There is no serious flat-earth research program in the geology literature.

So far, so obvious. But think about certain other scientific ‘controversies’ where competing arguments do get media time, such as climate change, or the safety and efficacy of vaccination. On the one side you have the overwhelming weight of expert opinion; on the other side amateur, bad-faith pseudoscience. In the substantial sense there aren’t even ‘two sides’ here after all.

Yet that’s not what we see; we just see two talking heads, offering competing views. The very fact both ‘heads’ were invited to speak suggests someone, somewhere has decided they are worth listening to. In other words, the very format implicitly drags every viewpoint to the same level and treats them as serious candidates for being true. That’s fine, you might reply: sapere aude! Smart and savvy viewers will see the bad arguments or shoddy claims for what they are, right? Except there’s some evidence that precisely the opposite happens. The message that actually sticks with viewers is not “the bad or pseudoscientific arguments are nonsense”, but rather that “there’s a real controversy here”.

There’s a name for this levelling phenomenon: false balance. The naïve view of balance versus bias contains no room for ‘true’ versus ‘false’ balance. Introducing a truth-value means we are not simply talking about neutrality anymore – which, as we’ve seen, nobody can or should achieve fully anyway. False balance occurs when we let in views that haven’t earned their place, or treat non-credible views as deserving the same seat at the table.

To avoid false balance, the media needs to make important and context-sensitive discriminations about what is a credible voice and what isn’t. They need balance as a verb, rather than a noun. To balance is an act, one that requires ongoing effort and constant readjustment. The risk, after all, is falling – perhaps right off the edge of the world.

Patrick Stokes

Many of us receive a good proportion of our news via social networks. This means that, instead of being filtered by the mainstream media (who are doing a pretty bad job), the news is filtered by all of us, who are extremely partisan. We share things that validate our political, economic, moral, and social beliefs, and rail against those who state the opposite.

While we can wring our hands about the free speech aspect of this, it’s important to note the point that’s being made by the xkcd cartoon that accompanies today’s article: we don’t have to listen to other people if we don’t want to.

In a great post from 2015, Audrey Watters explains how she uses some auto-blocking apps to make her continued existence on Twitter tolerable. Again, it’s worth quoting at length:

I currently block around 3800 accounts on Twitter.

By using these automated blocking tools – particularly blocking accounts with few followers – I know that I’ve blocked a few folks in error. Teachers new to Twitter are probably the most obvious example. Of course, if someone feels as though I’ve accidentally blocked them, they can still contact me through other means. (And sometimes they do. And sometimes I unblock.)

But I’m not going to give up this little bit of safety and sanity I’ve found thanks to these collaborative blocking tools for fear of upsetting a handful of people who have mistakenly ended up being blocked by me. I’m sorry. I’m just not.

And I’m not in the least bit worried that, by blocking accounts, I’m somehow trapping myself in a “filter bubble.” I don’t need to be exposed to harassment and violence to know that harassment and violence are rampant. I don’t need to be exposed to racism and misogyny to know that racism and misogyny exist. I see that shit, I live that shit already daily, whether I block accounts on social media or not.

My blocking trolls doesn’t damage civic discourse; indeed, it helps me be able to be a part of it. Despite all the talk about the Internet and democratization of ideas and voices, the architecture of many of the technologies we use is designed to amplify certain ideas and voices and silence others, protect certain voices, expose others to violence. My blocking trolls doesn’t silence anybody. But it does help me have the stamina to maintain my voice.

People need not feel bad about blocking, worry that it’s impolitic or impolite. It’s already hard work to be online. Often, it’s emotional work. (And it’s work we do for free, I might add.) People – particularly people of color, women, marginalized groups – shouldn’t have to take on the extra work of dealing with abusers and harassers and trolls. Block. Block. Block. Save your energy for other battles, ones that you choose to engage in.

Audrey Watters
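The collaborative blocking tools Watters describes typically apply simple heuristics, for instance blocking accounts below a follower threshold that you don't already follow. A minimal sketch of the idea (the threshold, field names, and function are illustrative, not any real tool's API):

```python
# Illustrative heuristic for an auto-blocking tool: block low-follower
# accounts, unless you already follow them.
FOLLOWER_THRESHOLD = 15  # arbitrary example value


def should_autoblock(account: dict) -> bool:
    """Decide whether to block an account based on simple signals."""
    return (
        account["followers"] < FOLLOWER_THRESHOLD
        and not account["followed_by_me"]
    )
```

As Watters notes, coarse heuristics like this inevitably produce false positives (teachers new to Twitter, in her example), which is why manual unblocking remains part of the workflow.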

Blocking on the individual level is one thing, but what about whole instances running social networking software blocking other instances with which they’re technically interoperable?

There are some really interesting conversations happening on the Fediverse at the moment. A ‘free speech’ social network called Gab, which was forced to shut down as a centralised service, will soon be relaunching as a fork of Mastodon.

In practice, this means that Gab can’t easily be shut down, and there are many people on Mastodon, Pleroma, Misskey, and other social networks that make up the Fediverse who are concerned about that. Those who have found a home on the Fediverse are disproportionately likely to have met with trolling, bullying, and abuse on centralised services such as Twitter.

Any service like Gab that’s technically compatible with popular Fediverse services such as Mastodon can, by default, piggyback on the latter’s existing ecosystem of apps. Some of these apps have decided to fight back. For example, Tusky has taken a stand, as can be seen in this update from its main developer:

Before I go off to celebrate Midsummer by being in bed sick (Swedish woes), I want to share a small update.

Tusky will keep blocking servers which actively promote fascism. This in particular means Gab.

We will get our next release out just in time for the 4th of July.

Don’t even try to debate us about Free Speech. This is our speech, exercising #ANTIFA views. And we will keep doing it

We will post a bigger update at a later time about what this all really means.

@Tusky@mastodon.social
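Under the hood, app-level blocking of this kind can be as simple as refusing to log in to instances whose domain appears on a blocklist. A minimal sketch, assuming a hard-coded set of domains (the list and function names are illustrative, not Tusky's actual implementation):

```python
from urllib.parse import urlparse

# Hypothetical sketch of app-level instance blocking: check the server's
# domain against a blocklist before allowing login.
BLOCKED_DOMAINS = {"gab.com", "gab.ai"}  # example entries


def is_blocked(instance_url: str) -> bool:
    """Return True if the instance's domain, or any subdomain, is blocklisted."""
    domain = urlparse(instance_url).hostname or ""
    return any(
        domain == blocked or domain.endswith("." + blocked)
        for blocked in BLOCKED_DOMAINS
    )
```

A check like this in the login flow stops the app from connecting to a blocklisted instance, while leaving server-to-server federation decisions to individual instance administrators.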

Some may wonder why, exactly, there’s such a problem here. After all, can’t individual users do what Audrey Watters is doing with Twitter, and block people on the individual level — either automatically or manually?

The problem is that, due to practices such as sealioning, certain communities ‘sniff blood’ and then pile on:

Sealioning (also spelled sea-lioning and sea lioning) is a type of trolling or harassment which consists of pursuing people with persistent requests for evidence or repeated questions, while maintaining a pretense of civility. It may take the form of “incessant, bad-faith invitations to engage in debate”.

Wikipedia

So it feels like we’re entering a time of balkanisation of the internet, partly because of geopolitics (the so-called Splinternet), but also because of a retreat into online social interactions that are more… bounded.

It’s going to be interesting to see where the next 18 months takes us, I think. I can definitely see a decline in centralised social networks, especially among certain demographics. If I’m correct, and these people end up on federated social networks, then it’s up to those of us already there to set not only the technical standards, but the moral standards, too.


Also check out:

  • The secret rules of the internet (The Verge) — “The moderators of these platforms — perched uneasily at the intersection of corporate profits, social responsibility, and human rights — have a powerful impact on free speech, government dissent, the shaping of social norms, user safety, and the meaning of privacy. What flagged content should be removed? Who decides what stays and why? What constitutes newsworthiness? Threat? Harm? When should law enforcement be involved?”
  • The New Wilderness (Idle Words) — “Ambient privacy is not a property of people, or of their data, but of the world around us. Just like you can’t drop out of the oil economy by refusing to drive a car, you can’t opt out of the surveillance economy by forswearing technology (and for many people, that choice is not an option). While there may be worthy reasons to take your life off the grid, the infrastructure will go up around you whether you use it or not.”
  • IQ rates are dropping in many developed countries and that doesn’t bode well for humanity (Think) — “Details vary from study to study and from place to place given the available data. IQ shortfalls in Norway and Denmark appear in longstanding tests of military conscripts, whereas information about France is based on a smaller sample and a different test. But the broad pattern has become clearer: Beginning around the turn of the 21st century, many of the most economically advanced nations began experiencing some kind of decline in IQ.”

Header image via xkcd

Sometimes even to live is an act of courage

Thank you to Seneca for the quotation for today’s title, which sprang to mind after reading Rosie Spinks’ claim in Quartz that we’ve reached ‘peak influencer’.

Where once the social network was basically lunch and sunsets, it’s now a parade of strategically-crafted life updates, career achievements, and public vows to spend less time online (usually made by people who earn money from social media)—all framed with the carefully selected language of a press release. Everyone is striving, so very hard.

Thank goodness for that. The selfie-obsessed influencer brigade is an insidious effect of the neoliberalism that permeates western culture:

For the internet influencer, everything from their morning sun salutation to their coffee enema (really) is a potential money-making opportunity. Forget paying your dues, or working your way up—in fact, forget jobs. Work is life, and getting paid to live your best life is the ultimate aspiration.

[…]

“Selling out” is not just perfectly OK in the influencer economy—it’s the raison d’etre. Influencers generally do not have a craft or discipline to stay loyal to in the first place, and by definition their income comes from selling a version of themselves.

As Yascha Mounk, writing in The Atlantic, explains, the problem isn’t necessarily with social networks. It’s that you care about them. Social networks flatten everything into a never-ending stream. That stream makes it very difficult to differentiate between gossip and (for example) extremely important things that are an existential threat to democratic institutions:

“When you’re on Twitter, every controversy feels like it’s at the same level of importance,” one influential Democratic strategist told me. Over time, he found it more and more difficult to tune Twitter out: “People whose perception of reality is shaped by Twitter live in a different world and a different country than those off Twitter.”

It’s easier for me to say these days that our obsession with Twitter and Instagram is unhealthy. While I’ve never used Instagram (because it’s owned by Facebook), a decade ago I was spending hours each week on Twitter. My relationship with the service has changed as I’ve grown up and it has changed — especially after it became a publicly-traded company in 2013.

Twitter, in particular, now feels like a neverending soap opera similar to EastEnders. There’s always some outrage or drama running. Perhaps it’s better, as Catherine Price suggests in The New York Times, just to put down our smartphones?

Until now, most discussions of phones’ biochemical effects have focused on dopamine, a brain chemical that helps us form habits — and addictions. Like slot machines, smartphones and apps are explicitly designed to trigger dopamine’s release, with the goal of making our devices difficult to put down.

This manipulation of our dopamine systems is why many experts believe that we are developing behavioral addictions to our phones. But our phones’ effects on cortisol are potentially even more alarming.

Cortisol is our primary fight-or-flight hormone. Its release triggers physiological changes, such as spikes in blood pressure, heart rate and blood sugar, that help us react to and survive acute physical threats.

Depending on how we use them, social networks can stoke the worst feelings in us: emotions such as jealousy, anger, and worry. This is not conducive to healthy outcomes, especially for children, for whom stress correlates directly with the take-up of addictive substances, and with heart disease in later life.

I wonder how future generations will look back at this time period?


Also check out:

Decentralisation and networked agency

I came to know of Ton Zylstra through some work I did with Jeroen de Boer and the Bibliotheekservice Fryslân team in the Netherlands last year. While I haven’t met Zylstra in person, I’m a fan of his ideas.

In a recent post he talks about the problems of generic online social networks:

Discourse disintegrates I think specifically when there’s no meaningful social context in which it takes place, nor social connections between speakers in that discourse. The effect not just stems from that you can’t/don’t really know who you’re conversing with, but I think more importantly from anyone on a general platform being able to bring themselves into the conversation, worse even force themselves into the conversation. Which is why you never should wade into newspaper comments, even though we all read them at times because watching discourse crumbling from the sidelines has a certain addictive quality. That this can happen is because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Although he goes on to talk about federation, it’s his analysis of the current problem that I’m particularly interested in here. He mentions in passing some work that he’s done on ‘networked agency’, a term that could be particularly useful. It’s akin to Nassim Nicholas Taleb’s notion of ‘skin in the game’.

Zylstra writes:

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting, you know their behaviour at that event thus far. All have skin in the game as well misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, either stemming from personal connections, or from the setting of the place/event it takes place in. Online discourse often lacks both, discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as the book I once bought by Tinkebell where she matches death threats she received against the sender’s very normal Facebook profiles.

What we’re building with MoodleNet is very intentionally focused on communities who come together to collectively curate and build. I think it’s set to be a very different environment from what we’ve (unfortunately) come to expect from social networks such as Twitter and Facebook.

Source: Ton Zylstra

Trust and the cult of your PLN

This is a long article with a philosophical take on one of my favourite subjects: social networks and the flow of information. The author, C Thi Nguyen, is an assistant professor of philosophy at Utah Valley University and distinguishes between two things that he thinks have been conflated:

Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

Teasing things apart a bit, Nguyen gives some definitions:

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission.

[…]

An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited.

[…]

In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.

It feels like towards the end of my decade as an active user of Twitter there was a definite shift from it being an ‘epistemic bubble’ towards being an ‘echo chamber’. My ‘Personal Learning Network’ (or ‘PLN’) seemed to be a bit more militant in its beliefs.

Nguyen goes on to talk at length about fake news, sociological theories, and Cartesian epistemology. Where he ends up, however, is where I would: trust.

As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain. Ask yourself: could you tell a good statistician from an incompetent one? A good biologist from a bad one? A good nuclear engineer, or radiologist, or macro-economist, from a bad one? Any particular reader might, of course, be able to answer positively to one or two such questions, but nobody can really assess such a long chain for herself. Instead, we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.

That puts us in a double-bind. We need to make ourselves vulnerable in order to participate in a society built on trust, but that very vulnerability puts us in danger of being manipulated.

I see this in fanatical evangelism of blockchain solutions to the ‘problem’ of operating in a trustless environment. To my mind, we need to be trusting people more, not less. Of course, there are obvious exceptions, but breaches of trust are near the top of the list of things we should punish most in a society.

Is there anything we can do, then, to help an echo-chamber member to reboot? We’ve already discovered that direct assault tactics – bombarding the echo-chamber member with ‘evidence’ – won’t work. Echo-chamber members are not only protected from such attacks, but their belief systems will judo such attacks into further reinforcement of the echo chamber’s worldview. Instead, we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.

So the way forward is for people to develop empathy and to show trust. Not present people with evidence that they’re wrong. That’s never worked in the past, and it won’t work now. Our problem isn’t a deficit in access to information, it’s a deficit in trust.

Source: Aeon (via Ian O’Byrne)

On your deathbed, you’re not going to wish that you’d spent more time on Facebook

As many readers of my work will know, I don’t have a Facebook account. This article uses Facebook as a proxy for something that, whether you’ve got an account on the world’s largest social network or not, will be familiar:

An increasing number of us are coming to realize that our relationships with our phones are not exactly what a couples therapist would describe as “healthy.” According to data from Moment, a time-tracking app with nearly five million users, the average person spends four hours a day interacting with his or her phone.

The trick, like anything to which you’re psychologically addicted, is to reframe what you’re doing:

Many people equate spending less time on their phones with denying themselves pleasure — and who likes to do that? Instead, think of it this way: The time you spend on your phone is time you’re not spending doing other pleasurable things, like hanging out with a friend or pursuing a hobby. Instead of thinking of it as “spending less time on your phone,” think of it as “spending more time on your life.”

The thing I find hardest is to leave my phone in a different room, or not take it with me when I go out. There’s always a reason for this (usually ‘being contactable’) but not having it constantly alongside you is probably a good idea:

Leave your phone at home while you go for a walk. Stare out of a window during your commute instead of checking your email. At first, you may be surprised by how powerfully you crave your phone. Pay attention to your craving. What does it feel like in your body? What’s happening in your mind? Keep observing it, and eventually, you may find that it fades away on its own.

There’s a great re-adjustment happening with our attitude towards devices and the services we use on them. In a separate BBC News article, Amol Rajan outlines some reasons why Facebook usage may have actually peaked:

  1. A drop in users
  2. A drop in engagement
  3. Advertiser enmity
  4. Disinformation and fake news
  5. Former executives speak out
  6. Regulatory mood is hardening
  7. GDPR
  8. Antagonism with the news industry

Interesting times.

Source: The New York Times / BBC News

Legislating against manipulated ‘facts’ is a slippery slope

In this day and age it’s hard to know who to trust. I was raised to trust in authority, but I was particularly struck, when I did a deep-dive into Vinay Gupta’s blog, by the idea that the state is special only because it holds a monopoly on (legal) violence.

As an historian, I’m all too aware of the times that the state (usually represented by a monarch) has served to repress its citizens/subjects. At least then it could pretend that it was protecting the majority of the people. As this article states:

Lies masquerading as news are as old as news itself. What is new today is not fake news but the purveyors of such news. In the past, only governments and powerful figures could manipulate public opinion. Today, it’s anyone with internet access. Just as elite institutions have lost their grip over the electorate, so their ability to act as gatekeepers to news, defining what is and is not true, has also been eroded.

So in the interaction between social networks such as Facebook, Twitter, and Instagram on the one hand, and various governments on the other hand, both are interested in power, not the people. Or even any notion of truth, it would seem:

This is why we should be wary of many of the solutions to fake news proposed by European politicians. Such solutions do little to challenge the culture of fragmented truths. They seek, rather, to restore more acceptable gatekeepers – for Facebook or governments to define what is and isn’t true. In Germany, a new law forces social media sites to take down posts spreading fake news or hate speech within 24 hours or face fines of up to €50m. The French president, Emmanuel Macron, has promised to ban fake news on the internet during election campaigns. Do we really want to rid ourselves of today’s fake news by returning to the days when the only fake news was official fake news?

We need to be vigilant. Those we trust today may not be trustworthy tomorrow.

Source: The Guardian

Designing social systems

This article is too long and written in a way that could be more direct, but it still makes some good points. Perhaps the best bit is the comparison of the iOS lock screen (left) with a redesigned one (right).

Most platforms encourage us to act against our values: less humbly, less honestly, less thoughtfully, and so on. Using these platforms while sticking to our values would mean constantly fighting their design. Unless we’re prepared for that fight, we’ll regret our choices.

When we join in with conversations online, we’re not always part of a group; sometimes we’re part of a network. It seems to me that most of the points the author is making pertain to social networks like Facebook, as opposed to those like Twitter and Mastodon.

He does, however, make a good point about a shift towards people feeling they have to act in a particular way:

Groups are held together by a particular kind of conversation, which I’ll call wisdom. It’s a kind of conversation that people are starved for right now—even amidst nonstop communication, amidst a torrent of articles, videos, and posts.

When this type of conversation is missing, people feel that no one understands or cares about what’s important to them. People feel their values are unheeded and unrecognized.

[T]his situation is easy to exploit, and the media and fake news ecosystems have done just that. As a result, conversations become ideological and polarized, and elections are manipulated.

Tribal politics in social networks are caused by people not having strong offline affinity groups, so they seek their ‘tribe’ online.

If social platforms can make it easier to share our personal values (like small town living) directly, and to acknowledge one another and rally around them, we won’t need to turn them into ideologies or articles. This would do more to heal politics and media than any “fake news” initiative. To do this, designers need to know what this kind of conversation sounds like, how to encourage it, and how to avoid drowning it out.

Ultimately, the author has no answer and (wisely) turns to the community for help. I like the way he points to exercises we can do and groups we can form. I’m not sure it’ll scale, though…

Source: Human Systems

Ethical design in social networks

I’m thinking a lot about privacy and ethical design at the moment as part of my role leading Project MoodleNet. This article gives a short but useful overview of the Ethical Design Manifesto, along with some links for further reading:

There is often a disconnect between what digital designers originally intend with a product or feature, and how consumers use or interpret it.

Ethical user experience design – meaning, for example, designing technologies in ways that promote good online behaviour and intuit how they might be used – may help bridge that gap.

There are already people (like me) making choices about the technology and social networks they use based on ethics:

User experience design and research has so far mainly been applied to designing tech that is responsive to user needs and locations. For example, commercial and digital assistants that intuit what you will buy at a local store based on your previous purchases.

However, digital designers and tech companies are beginning to recognise that there is an ethical dimension to their work, and that they have some social responsibility for the well-being of their users.

Meeting this responsibility requires designers to anticipate the meanings people might create around a particular technology.

In addition to ethical design, there are other elements to take into consideration:

Contextually aware design is capable of understanding the different meanings that a particular technology may have, and adapting in a way that is socially and ethically responsible. For example, smart cars that prevent mobile phone use while driving.

Emotional design refers to technology that elicits appropriate emotional responses to create positive user experiences. It takes into account the connections people form with the objects they use, from pleasure and trust to fear and anxiety.

This includes the look and feel of a product, how easy it is to use and how we feel after we have used it.

Anticipatory design allows technology to predict the most useful interaction within a sea of options and make a decision for the user, thus “simplifying” the experience. Some companies may use anticipatory design in unethical ways that trick users into selecting an option that benefits the company.

Source: The Conversation

A useful IndieWeb primer

I’ve followed the IndieWeb movement since its inception, but it’s always seemed a bit niche. I love (and use) the POSSE model, for example, but expecting everyone to have a domain of their own, stacked with open source software, seems a bit utopian right now.

I was surprised and delighted, therefore, to see a post on the GoDaddy blog extolling the virtues of the IndieWeb for business owners. The author explains that the IndieWeb movement was born of frustration:

Frustration from software developers who like the idea of social media, but who do not want to hand over their content to some big, unaccountable internet company that unilaterally decides who gets to see what.

Frustration from writers and content creators who do not want a third party between them and the people they want to reach.

Frustration from researchers and journalists who need a way to get their message out without depending on the whim of a big company that monitors, and sometimes censors, what they have to say.

He does a great job of explaining, with an appropriate level of technical detail, how to get started. The thing I’d really like to see in particular is people publishing details of events at a public URL instead of (just) on Facebook:

Importantly, with IndieAuth, you can log into third-party websites using your own domain name. And your visitors can log into your website with their domain name. Or, if you organize events, you can post your event announcement right on your website, and have attendees RSVP either from their own IndieWeb sites, or natively on a social site.
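Publishing an event at a public URL in an IndieWeb-friendly way usually means marking it up with the h-event microformat, so other sites can parse it and send RSVPs. A minimal sketch of generating such markup (the helper function and event details are hypothetical; the class names h-event, p-name, u-url, and dt-start are real microformats2 vocabulary):

```python
# Hypothetical helper that renders an event as h-event microformat HTML.
# The microformats2 class names (h-event, p-name, u-url, dt-start) let
# IndieWeb parsers on other sites discover the event's name, URL, and date.
def render_h_event(name: str, start: str, url: str) -> str:
    """Return an HTML fragment marking up an event with h-event."""
    return (
        '<div class="h-event">'
        f'<a class="p-name u-url" href="{url}">{name}</a> '
        f'on <time class="dt-start" datetime="{start}">{start}</time>'
        "</div>"
    )
```

Attendees' own sites would then fetch the event's URL, parse the microformats, and send their RSVPs back, typically via Webmention.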

A recommended read. I’ll be pointing people to this in future!

Source: GoDaddy

More on Facebook’s ‘trusted news’ system

Mike Caulfield reflects on Facebook’s announcement that they’re going to allow users to rate the sources of news in terms of trustworthiness. Like me, and most people who have thought about this for more than two seconds, he thinks it’s a bad idea.

Instead, he thinks Facebook should try Google’s approach:

Most people misunderstand what the Google system looks like (misreporting on it is rife) but the way it works is this. Google produces guidance docs for paid search raters who use them to rate search results (not individual sites). These documents are public, and people can argue about whether Google’s take on what constitutes authoritative sources is right — because they are public.

Facebook’s algorithms are opaque by design, whereas, Caulfield argues, Google’s approach is documented:

I’m not saying it doesn’t have problems — it does. It has taken Google some time to understand the implications of some of their decisions and I’ve been critical of them in the past. But I am able to be critical partially because we can reference a common understanding of what Google is trying to accomplish and see how it was falling short, or see how guidance in the rater docs may be having unintended consequences.

This is one of the major issues of our time, particularly now that people have access to the kind of CGI only previously available to Hollywood. And what are they using this AI-powered technology for? Fake celebrity (and revenge) porn, of course.

Source: Hapgood