Tag: social networks

Decentralisation and networked agency

I came to know of Ton Zylstra through some work I did with Jeroen de Boer and the Bibliotheekservice Fryslân team in the Netherlands last year. While I haven’t met Zylstra in person, I’m a fan of his ideas.

In a recent post he talks about the problems of generic online social networks:

Discourse disintegrates I think specifically when there’s no meaningful social context in which it takes place, nor social connections between speakers in that discourse. The effect not just stems from that you can’t/don’t really know who you’re conversing with, but I think more importantly from anyone on a general platform being able to bring themselves into the conversation, worse even force themselves into the conversation. Which is why you never should wade into newspaper comments, even though we all read them at times because watching discourse crumbling from the sidelines has a certain addictive quality. That this can happen is because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Although he goes on to talk about federation, it’s his analysis of the current problem that I’m particularly interested in here. He mentions in passing some work that he’s done on ‘networked agency’, a term that could be particularly useful. It’s akin to Nassim Nicholas Taleb’s notion of ‘skin in the game’.

Zylstra writes:

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting, you know their behaviour at that event thus far. All have skin in the game as well misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, either stemming from personal connections, or from the setting of the place/event it takes place in. Online discourse often lacks both, discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as the book I once bought by Tinkebell where she matches death threats she received against the sender’s very normal Facebook profiles.

What we’re building with MoodleNet is very intentionally focused on communities who come together to collectively curate and build. I think it’s set to be a very different environment from what we’ve (unfortunately) come to expect from social networks such as Twitter and Facebook.

Source: Ton Zylstra

Trust and the cult of your PLN

This is a long article with a philosophical take on one of my favourite subjects: social networks and the flow of information. The author, C Thi Nguyen, is an assistant professor of philosophy at Utah Valley University, and distinguishes between two things that he thinks have been conflated:

Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.

Teasing things apart a bit, Nguyen gives some definitions:

Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission.

[…]

An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited.

[…]

In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.

It feels like, towards the end of my decade as an active user of Twitter, there was a definite shift from it being an ‘epistemic bubble’ to being an ‘echo chamber’. My ‘Personal Learning Network’ (or ‘PLN’) seemed to become a bit more militant in its beliefs.

Nguyen goes on to talk at length about fake news, sociological theories, and Cartesian epistemology. Where he ends up, however, is where I would: trust.

As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain. Ask yourself: could you tell a good statistician from an incompetent one? A good biologist from a bad one? A good nuclear engineer, or radiologist, or macro-economist, from a bad one? Any particular reader might, of course, be able to answer positively to one or two such questions, but nobody can really assess such a long chain for herself. Instead, we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.

That puts us in a double-bind. We need to make ourselves vulnerable in order to participate in a society built on trust, but that very vulnerability puts us in danger of being manipulated.

I see this in the fanatical evangelism of blockchain solutions to the ‘problem’ of operating in a trustless environment. To my mind, we need to be trusting people more, not less. Of course, there are obvious exceptions, but breaches of trust should be near the top of the list of things a society punishes most severely.

Is there anything we can do, then, to help an echo-chamber member to reboot? We’ve already discovered that direct assault tactics – bombarding the echo-chamber member with ‘evidence’ – won’t work. Echo-chamber members are not only protected from such attacks, but their belief systems will judo such attacks into further reinforcement of the echo chamber’s worldview. Instead, we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.

So the way forward is for people to develop empathy and to show trust, not to present others with evidence that they’re wrong. That’s never worked in the past, and it won’t work now. Our problem isn’t a deficit in access to information; it’s a deficit in trust.

Source: Aeon (via Ian O’Byrne)

On your deathbed, you’re not going to wish that you’d spent more time on Facebook

As many readers of my work will know, I don’t have a Facebook account. This article uses Facebook as a proxy for something that, whether you’ve got an account on the world’s largest social network or not, will be familiar:

An increasing number of us are coming to realize that our relationships with our phones are not exactly what a couples therapist would describe as “healthy.” According to data from Moment, a time-tracking app with nearly five million users, the average person spends four hours a day interacting with his or her phone.

The trick, as with anything to which you’re psychologically addicted, is to reframe what you’re doing:

Many people equate spending less time on their phones with denying themselves pleasure — and who likes to do that? Instead, think of it this way: The time you spend on your phone is time you’re not spending doing other pleasurable things, like hanging out with a friend or pursuing a hobby. Instead of thinking of it as “spending less time on your phone,” think of it as “spending more time on your life.”

The thing I find hardest is leaving my phone in a different room, or not taking it with me when I go out. There’s always a reason for keeping it close (usually ‘being contactable’), but not having it constantly alongside you is probably a good idea:

Leave your phone at home while you go for a walk. Stare out of a window during your commute instead of checking your email. At first, you may be surprised by how powerfully you crave your phone. Pay attention to your craving. What does it feel like in your body? What’s happening in your mind? Keep observing it, and eventually, you may find that it fades away on its own.

There’s a great re-adjustment happening in our attitudes towards devices and the services we use on them. In a separate BBC News article, Amol Rajan outlines some reasons why Facebook usage may actually have peaked:

  1. A drop in users
  2. A drop in engagement
  3. Advertiser enmity
  4. Disinformation and fake news
  5. Former executives speak out
  6. Regulatory mood is hardening
  7. GDPR
  8. Antagonism with the news industry

Interesting times.

Source: The New York Times / BBC News

Legislating against manipulated ‘facts’ is a slippery slope

In this day and age it’s hard to know who to trust. I was raised to trust in authority, but I was particularly struck, when I did a deep-dive into Vinay Gupta’s blog, by the idea that the state is special only because it holds a monopoly on (legal) violence.

As an historian, I’m all too aware of the times the state (usually represented by a monarch) has served to repress its citizens/subjects. At least then it could pretend that it was protecting the majority of the people. As this article states:

Lies masquerading as news are as old as news itself. What is new today is not fake news but the purveyors of such news. In the past, only governments and powerful figures could manipulate public opinion. Today, it’s anyone with internet access. Just as elite institutions have lost their grip over the electorate, so their ability to act as gatekeepers to news, defining what is and is not true, has also been eroded.

So in the interaction between social networks such as Facebook, Twitter, and Instagram on the one hand, and various governments on the other, both sides are interested in power rather than people. Nor, it would seem, in any notion of truth:

This is why we should be wary of many of the solutions to fake news proposed by European politicians. Such solutions do little to challenge the culture of fragmented truths. They seek, rather, to restore more acceptable gatekeepers – for Facebook or governments to define what is and isn’t true. In Germany, a new law forces social media sites to take down posts spreading fake news or hate speech within 24 hours or face fines of up to €50m. The French president, Emmanuel Macron, has promised to ban fake news on the internet during election campaigns. Do we really want to rid ourselves of today’s fake news by returning to the days when the only fake news was official fake news?

We need to be vigilant. Those we trust today may not be trustworthy tomorrow.

Source: The Guardian

Designing social systems

This article is longer than it needs to be and could be more direct, but it still makes some good points. Perhaps the best bit is the comparison of the standard iOS lock screen with a redesigned one (shown side by side in the original article).

Most platforms encourage us to act against our values: less humbly, less honestly, less thoughtfully, and so on. Using these platforms while sticking to our values would mean constantly fighting their design. Unless we’re prepared for that fight, we’ll regret our choices.

When we join in with conversations online, we’re not always part of a group; sometimes we’re part of a network. It seems to me that most of the points the author makes pertain to social networks like Facebook, as opposed to those like Twitter and Mastodon.

He does, however, make a good point about a shift towards people feeling they have to act in a particular way:

Groups are held together by a particular kind of conversation, which I’ll call wisdom. It’s a kind of conversation that people are starved for right now—even amidst nonstop communication, amidst a torrent of articles, videos, and posts.

When this type of conversation is missing, people feel that no one understands or cares about what’s important to them. People feel their values are unheeded and unrecognized.

[T]his situation is easy to exploit, and the media and fake news ecosystems have done just that. As a result, conversations become ideological and polarized, and elections are manipulated.

Tribal politics in social networks are caused by people not having strong offline affinity groups, so they seek their ‘tribe’ online.

If social platforms can make it easier to share our personal values (like small town living) directly, and to acknowledge one another and rally around them, we won’t need to turn them into ideologies or articles. This would do more to heal politics and media than any “fake news” initiative. To do this, designers need to know what this kind of conversation sounds like, how to encourage it, and how to avoid drowning it out.

Ultimately, the author has no answer and (wisely) turns to the community for help. I like the way he points to exercises we can do and groups we can form. I’m not sure it’ll scale, though…

Source: Human Systems

Ethical design in social networks

I’m thinking a lot about privacy and ethical design at the moment as part of my role leading Project MoodleNet. This article gives a short but useful overview of the Ethical Design Manifesto, along with some links for further reading:

There is often a disconnect between what digital designers originally intend with a product or feature, and how consumers use or interpret it.

Ethical user experience design – meaning, for example, designing technologies in ways that promote good online behaviour and intuit how they might be used – may help bridge that gap.

There are already people (like me) making choices about the technology and social networks they use based on ethics:

User experience design and research has so far mainly been applied to designing tech that is responsive to user needs and locations. For example, commercial and digital assistants that intuit what you will buy at a local store based on your previous purchases.

However, digital designers and tech companies are beginning to recognise that there is an ethical dimension to their work, and that they have some social responsibility for the well-being of their users.

Meeting this responsibility requires designers to anticipate the meanings people might create around a particular technology.

In addition to ethical design, there are other elements to take into consideration:

Contextually aware design is capable of understanding the different meanings that a particular technology may have, and adapting in a way that is socially and ethically responsible. For example, smart cars that prevent mobile phone use while driving.

Emotional design refers to technology that elicits appropriate emotional responses to create positive user experiences. It takes into account the connections people form with the objects they use, from pleasure and trust to fear and anxiety.

This includes the look and feel of a product, how easy it is to use and how we feel after we have used it.

Anticipatory design allows technology to predict the most useful interaction within a sea of options and make a decision for the user, thus “simplifying” the experience. Some companies may use anticipatory design in unethical ways that trick users into selecting an option that benefits the company.

Source: The Conversation

A useful IndieWeb primer

I’ve followed the IndieWeb movement since its inception, but it’s always seemed a bit niche. I love (and use) the POSSE model (Publish on your Own Site, Syndicate Elsewhere), for example, but expecting everyone to have a domain of their own stacked with open source software seems a bit utopian right now.
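For the curious, here’s a minimal sketch of the POSSE idea in Python. It assumes you’ve already published a post at a canonical URL on your own site and want to syndicate a copy to a Mastodon instance via its statuses API; the instance URL, access token, and post details are all placeholders of my own, not anything the IndieWeb prescribes:

```python
# Minimal POSSE sketch: publish on your own site first, then syndicate
# a copy (with a link back to the canonical URL) to Mastodon.
# The instance, access token, and post details below are placeholders.
import json
from urllib.request import Request, urlopen

INSTANCE = "https://social.example"  # hypothetical Mastodon instance
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # generated in your instance's settings


def syndicate_to_mastodon(title: str, canonical_url: str) -> str:
    """Post a short status that links back to the canonical post on your own site."""
    status = f"{title}\n\n{canonical_url}"
    request = Request(
        f"{INSTANCE}/api/v1/statuses",
        data=json.dumps({"status": status}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urlopen(request) as response:
        # Mastodon returns the created status as JSON, including its URL.
        return json.load(response)["url"]


if __name__ == "__main__":
    # Hypothetical post, already published at a domain of your own.
    print(syndicate_to_mastodon("A useful IndieWeb primer", "https://example.com/indieweb-primer"))
```

A fuller setup would also store the returned Mastodon URL back on the original post, so that replies and favourites can later be pulled back to your own site.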

I was surprised and delighted, therefore, to see a post on the GoDaddy blog extolling the virtues of the IndieWeb for business owners. The author explains that the IndieWeb movement was born of frustration:

Frustration from software developers who like the idea of social media, but who do not want to hand over their content to some big, unaccountable internet company that unilaterally decides who gets to see what.

Frustration from writers and content creators who do not want a third party between them and the people they want to reach.

Frustration from researchers and journalists who need a way to get their message out without depending on the whim of a big company that monitors, and sometimes censors, what they have to say.

He does a great job of explaining, with an appropriate level of technical detail, how to get started. The thing I’d really like to see in particular is people publishing details of events at a public URL instead of (just) on Facebook:

Importantly, with IndieAuth, you can log into third-party websites using your own domain name. And your visitors can log into your website with their domain name. Or, if you organize events, you can post your event announcement right on your website, and have attendees RSVP either from their own IndieWeb sites, or natively on a social site.
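To make the ‘log into third-party websites using your own domain name’ part more concrete, here’s a minimal sketch (Python, standard library only) of the discovery step an IndieAuth client performs: fetching someone’s homepage and looking for the authorization endpoint advertised in its HTML. The domain is hypothetical, and a full client would go on to redirect the user to that endpoint and verify the response:

```python
# Minimal sketch of IndieAuth endpoint discovery: fetch a homepage and
# look for <link rel="authorization_endpoint" href="..."> in its HTML.
# A real client would also check HTTP Link headers, then redirect the
# user to the discovered endpoint to sign in.
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkRelParser(HTMLParser):
    """Collects the href of the first <link> tag with a matching rel value."""

    def __init__(self, rel: str):
        super().__init__()
        self.rel = rel
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        rels = (attributes.get("rel") or "").split()
        if tag == "link" and self.rel in rels and self.endpoint is None:
            self.endpoint = attributes.get("href")


def find_authorization_endpoint(domain: str):
    """Return the IndieAuth authorization endpoint advertised by a homepage, if any."""
    with urlopen(f"https://{domain}/") as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = LinkRelParser("authorization_endpoint")
    parser.feed(html)
    return parser.endpoint


if __name__ == "__main__":
    # Hypothetical domain, used purely for illustration.
    print(find_authorization_endpoint("example.com"))
```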

A recommended read. I’ll be pointing people to this in future!

Source: GoDaddy

More on Facebook’s ‘trusted news’ system

Mike Caulfield reflects on Facebook’s announcement that they’re going to allow users to rate the sources of news in terms of trustworthiness. Like me, and most people who have thought about this for more than two seconds, he thinks it’s a bad idea.

Instead, he thinks Facebook should try Google’s approach:

Most people misunderstand what the Google system looks like (misreporting on it is rife) but the way it works is this. Google produces guidance docs for paid search raters who use them to rate search results (not individual sites). These documents are public, and people can argue about whether Google’s take on what constitutes authoritative sources is right — because they are public.

Facebook’s algorithms are opaque by design, whereas, Caulfield argues, Google’s approach is documented:

I’m not saying it doesn’t have problems — it does. It has taken Google some time to understand the implications of some of their decisions and I’ve been critical of them in the past. But I am able to be critical partially because we can reference a common understanding of what Google is trying to accomplish and see how it was falling short, or see how guidance in the rater docs may be having unintended consequences.

This is one of the major issues of our time, particularly now that people have access to the kind of CGI previously available only to Hollywood. And what are they using this AI-powered technology for? Fake celebrity (and revenge) porn, of course.

Source: Hapgood

Facebook is under attack

This year is a time of reckoning for the world’s most popular social network. Here’s a statement from their own website (which I’ll link to via archive.org, because I don’t link to Facebook); note the use of the passive voice:

Facebook was originally designed to connect friends and family — and it has excelled at that. But as unprecedented numbers of people channel their political energy through this medium, it’s being used in unforeseen ways with societal repercussions that were never anticipated.

It’s pretty amazing that a Facebook spokesperson is saying things like this:

I wish I could guarantee that the positives are destined to outweigh the negatives, but I can’t. That’s why we have a moral duty to understand how these technologies are being used and what can be done to make communities like Facebook as representative, civil and trustworthy as possible.

What they’re careful to do is paint a picture of Facebook as somehow ‘neutral’, a platform being ‘hijacked’ by bad actors. This isn’t actually the case.

As an article in The Guardian points out, executives at Facebook and Twitter aren’t exactly heavy users of their own platforms:

It is a pattern that holds true across the sector. For all the industry’s focus on “eating your own dog food”, the most diehard users of social media are rarely those sitting in a position of power.

These sites are designed to be addictive. Just as drug dealers “don’t get high on their own supply”, those designing social networks know exactly what they’re dealing with:

These addictions haven’t happened accidentally… Instead, they are a direct result of the intention of companies such as Facebook and Twitter to build “sticky” products, ones that we want to come back to over and over again. “The companies that are producing these products, the very large tech companies in particular, are producing them with the intent to hook. They’re doing their very best to ensure not that our wellbeing is preserved, but that we spend as much time on their products and on their programs and apps as possible. That’s their key goal: it’s not to make a product that people enjoy and therefore becomes profitable, but rather to make a product that people can’t stop using and therefore becomes profitable.”

The trouble is that this advertising-fuelled medium, which is built to be addictive, is where most people get their news these days. Facebook has realised it has a problem in this regard, so it has made the decision to pass the buck. Instead of Facebook, or anyone else, deciding which news sources an individual should trust, it’s being left up to users.

While this sounds empowering and democratic, I can’t help but think it’s a bad move. As The Washington Post notes:

“They want to avoid making a judgment, but they are in a situation where you can’t avoid making a judgment,” said Jay Rosen, a journalism professor at New York University. “They are looking for a safe approach. But sometimes you can be in a situation where there is no safe route out.”

The article goes on to cite former Facebook executives who think that the problems are more than skin-deep:

They say that the changes the company is making are just tweaks when, in fact, the problems are a core feature of the Facebook product, said Sandy Parakilas, a former Facebook privacy operations manager.

“If they demote stories that get a lot of likes, but drive people toward posts that generate conversation, they may be driving people toward conversation that isn’t positive,” Parakilas said.

A final twist in the tale is that Rupert Murdoch, a guy who has no morals but certainly has a valid point here, has made a statement on all of this:

If Facebook wants to recognize ‘trusted’ publishers then it should pay those publishers a carriage fee similar to the model adopted by cable companies. The publishers are obviously enhancing the value and integrity of Facebook through their news and content but are not being adequately rewarded for those services. Carriage payments would have a minor impact on Facebook’s profits but a major impact on the prospects for publishers and journalists.

2018 is going to be an interesting year. If you want to quit Facebook and/or Twitter and be part of something better, why not join me on Mastodon via social.coop and help build Project MoodleNet?

Sources: Facebook newsroom / The Guardian / The Washington Post / News Corp

Tribal politics in social networks

I’ve started buying the Financial Times Weekend along with The Observer each Sunday. Annoyingly, while the latter doesn’t have a paywall, the FT does, which means that although I can quote from, and link to, this article by Simon Kuper about tribal politics, many of you won’t be able to read it in full.

Kuper makes the point that in a world of temporary jobs, ‘broken’ families, and declining church attendance, social networks provide a place where people can find their ‘tribe’:

Online, each tribe inhabits its own filter bubble of partisan news. To blame this only on Facebook is unfair. If people wanted a range of views, they could install both rightwing and leftwing feeds on their Facebook pages — The Daily Telegraph and The Guardian, say. Most people choose not to, partly because they like living in their tribe. It makes them feel less lonely.

There’s a lot to agree with in this article. I think we can blame people for getting their news mainly through Facebook. I think we can roll our eyes at people who don’t think carefully about their information environment.

On the other hand, social networks are mediated by technology. And technology is never neutral. For example, Facebook has gone from saying that it couldn’t possibly be blamed for ‘fake news’ (2016), to investigating the way that Russian accounts may have manipulated users (2017), to announcing that it’s going to make some changes (2018).

We need to zoom out from specific problems in our society to the wider issues that underpin them. Kuper does this to some extent in this article, but the FT isn’t the place where you’ll see a robust criticism of the problems with capitalism. Social networks can be, and have been, different; just think of what Twitter was like before it became a publicly-traded company, for example.

My concern is that we need to sort out these huge, society-changing companies before they become too large to regulate.

Source: FT Weekend