Tag: social networks (page 2 of 3)

Ethical design in social networks

I’m thinking a lot about privacy and ethical design at the moment as part of my role leading Project MoodleNet. This article gives a short but useful overview of the Ethical Design Manifesto, along with some links for further reading:

There is often a disconnect between what digital designers originally intend with a product or feature, and how consumers use or interpret it.

Ethical user experience design – meaning, for example, designing technologies in ways that promote good online behaviour and intuit how they might be used – may help bridge that gap.

There are already people (like me) making choices about the technology and social networks they use based on ethics:

User experience design and research has so far mainly been applied to designing tech that is responsive to user needs and locations. For example, commercial and digital assistants that intuit what you will buy at a local store based on your previous purchases.

However, digital designers and tech companies are beginning to recognise that there is an ethical dimension to their work, and that they have some social responsibility for the well-being of their users.

Meeting this responsibility requires designers to anticipate the meanings people might create around a particular technology.

In addition to ethical design, there are other elements to take into consideration:

Contextually aware design is capable of understanding the different meanings that a particular technology may have, and adapting in a way that is socially and ethically responsible. For example, smart cars that prevent mobile phone use while driving.

Emotional design refers to technology that elicits appropriate emotional responses to create positive user experiences. It takes into account the connections people form with the objects they use, from pleasure and trust to fear and anxiety.

This includes the look and feel of a product, how easy it is to use and how we feel after we have used it.

Anticipatory design allows technology to predict the most useful interaction within a sea of options and make a decision for the user, thus “simplifying” the experience. Some companies may use anticipatory design in unethical ways that trick users into selecting an option that benefits the company.

Source: The Conversation

A useful IndieWeb primer

I’ve followed the IndieWeb movement since its inception, but it’s always seemed a bit niche. I love (and use) the POSSE model, for example, but expecting everyone to have a domain of their own, stacked with open source software, seems a bit utopian right now.

I was surprised and delighted, therefore, to see a post on the GoDaddy blog extolling the virtues of the IndieWeb for business owners. The author explains that the IndieWeb movement was born of frustration:

Frustration from software developers who like the idea of social media, but who do not want to hand over their content to some big, unaccountable internet company that unilaterally decides who gets to see what.

Frustration from writers and content creators who do not want a third party between them and the people they want to reach.

Frustration from researchers and journalists who need a way to get their message out without depending on the whim of a big company that monitors, and sometimes censors, what they have to say.

He does a great job of explaining, with an appropriate level of technical detail, how to get started. In particular, I’d really like to see people publishing details of events at a public URL instead of (just) on Facebook:

Importantly, with IndieAuth, you can log into third-party websites using your own domain name. And your visitors can log into your website with their domain name. Or, if you organize events, you can post your event announcement right on your website, and have attendees RSVP either from their own IndieWeb sites, or natively on a social site.
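What makes the events idea practical is that details published at a public URL can be marked up with the h-event microformat, so other people’s software can read them. As a minimal sketch (the markup and the parser below are illustrative, not taken from the GoDaddy post), here’s how a consuming site might pull the key properties out of such a page using only Python’s standard library:

```python
from html.parser import HTMLParser

# Hypothetical event markup an IndieWeb site might publish at a public URL,
# using h-event microformat class names so other sites can parse the details.
EVENT_HTML = """
<div class="h-event">
  <h1 class="p-name">Homebrew Website Club</h1>
  <time class="dt-start" datetime="2018-02-07T18:30">7 Feb, 6:30pm</time>
  <p class="p-location">Example Coffee Shop</p>
</div>
"""

class HEventParser(HTMLParser):
    """Minimal parser that collects p-* and dt-* properties from an h-event."""
    def __init__(self):
        super().__init__()
        self.properties = {}
        self._current = None  # property name we're collecting text for

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        for cls in attrs.get("class", "").split():
            if cls.startswith(("p-", "dt-")):
                # dt-* properties prefer the machine-readable datetime attribute
                if cls.startswith("dt-") and "datetime" in attrs:
                    self.properties[cls] = attrs["datetime"]
                else:
                    self._current = cls

    def handle_data(self, data):
        if self._current and data.strip():
            self.properties[self._current] = data.strip()
            self._current = None

parser = HEventParser()
parser.feed(EVENT_HTML)
print(parser.properties)
```

Real IndieWeb tooling uses full microformats2 parsers (such as mf2py) rather than anything this crude, but the principle is the same: the event lives at your URL, and anyone’s software can read it.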

A recommended read. I’ll be pointing people to this in future!

Source: GoDaddy

More on Facebook’s ‘trusted news’ system

Mike Caulfield reflects on Facebook’s announcement that they’re going to allow users to rate the sources of news in terms of trustworthiness. Like me, and most people who have thought about this for more than two seconds, he thinks it’s a bad idea.

Instead, he thinks Facebook should try Google’s approach:

Most people misunderstand what the Google system looks like (misreporting on it is rife) but the way it works is this. Google produces guidance docs for paid search raters who use them to rate search results (not individual sites). These documents are public, and people can argue about whether Google’s take on what constitutes authoritative sources is right — because they are public.

Facebook’s algorithms are opaque by design, whereas, Caulfield argues, Google’s approach is documented:

I’m not saying it doesn’t have problems — it does. It has taken Google some time to understand the implications of some of their decisions and I’ve been critical of them in the past. But I am able to be critical partially because we can reference a common understanding of what Google is trying to accomplish and see how it was falling short, or see how guidance in the rater docs may be having unintended consequences.

This is one of the major issues of our time, particularly now that people have access to the kind of CGI previously available only to Hollywood. And what are they using this AI-powered technology for? Fake celebrity (and revenge) porn, of course.

Source: Hapgood

Facebook is under attack

This year is a time of reckoning for the world’s most popular social network. Take this, from their own website (which I’ll link to via archive.org, because I don’t link to Facebook directly). Note the use of the passive voice:

Facebook was originally designed to connect friends and family — and it has excelled at that. But as unprecedented numbers of people channel their political energy through this medium, it’s being used in unforeseen ways with societal repercussions that were never anticipated.

It’s pretty amazing that a Facebook spokesperson is saying things like this:

I wish I could guarantee that the positives are destined to outweigh the negatives, but I can’t. That’s why we have a moral duty to understand how these technologies are being used and what can be done to make communities like Facebook as representative, civil and trustworthy as possible.

What they are careful to do is to paint a picture of Facebook as somehow ‘neutral’ and being ‘hijacked’ by bad actors. This isn’t actually the case.

As an article in The Guardian points out, executives at Facebook and Twitter aren’t exactly heavy users of their own platforms:

It is a pattern that holds true across the sector. For all the industry’s focus on “eating your own dog food”, the most diehard users of social media are rarely those sitting in a position of power.

These sites are designed to be addictive. So, just as drug dealers “don’t get high on their own supply”, those designing social networks know exactly what they’re dealing with:

These addictions haven’t happened accidentally… Instead, they are a direct result of the intention of companies such as Facebook and Twitter to build “sticky” products, ones that we want to come back to over and over again. “The companies that are producing these products, the very large tech companies in particular, are producing them with the intent to hook. They’re doing their very best to ensure not that our wellbeing is preserved, but that we spend as much time on their products and on their programs and apps as possible. That’s their key goal: it’s not to make a product that people enjoy and therefore becomes profitable, but rather to make a product that people can’t stop using and therefore becomes profitable.

The trouble is that this advertising-fuelled medium, which is built to be addictive, is the place where most people get their news these days. Facebook has realised that it has a problem in this regard, so it has decided to pass the buck. Instead of Facebook, or anyone else, deciding which news sources an individual should trust, it’s being left to users themselves.

While this sounds empowering and democratic, I can’t help but think it’s a bad move. As The Washington Post notes:

“They want to avoid making a judgment, but they are in a situation where you can’t avoid making a judgment,” said Jay Rosen, a journalism professor at New York University. “They are looking for a safe approach. But sometimes you can be in a situation where there is no safe route out.”

The article goes on to cite former Facebook executives who think the problems are more than skin-deep:

They say that the changes the company is making are just tweaks when, in fact, the problems are a core feature of the Facebook product, said Sandy Parakilas, a former Facebook privacy operations manager.

“If they demote stories that get a lot of likes, but drive people toward posts that generate conversation, they may be driving people toward conversation that isn’t positive,” Parakilas said.

A final twist in the tale is that Rupert Murdoch, a guy who has no morals but certainly has a valid point here, has made a statement on all of this:

If Facebook wants to recognize ‘trusted’ publishers then it should pay those publishers a carriage fee similar to the model adopted by cable companies. The publishers are obviously enhancing the value and integrity of Facebook through their news and content but are not being adequately rewarded for those services. Carriage payments would have a minor impact on Facebook’s profits but a major impact on the prospects for publishers and journalists.

2018 is going to be an interesting year. If you want to quit Facebook and/or Twitter and be part of something better, why not join me on Mastodon via social.coop and help build Project MoodleNet?

Sources: Facebook newsroom / The Guardian / The Washington Post / News Corp

Tribal politics in social networks

I’ve started buying the Financial Times Weekend along with The Observer each Sunday. Annoyingly, while the latter doesn’t have a paywall, the FT does, which means that although I can quote from, and link to, this article by Simon Kuper about tribal politics, many of you won’t be able to read it in full.

Kuper makes the point that in a world of temporary jobs, ‘broken’ families, and declining church attendance, social networks provide a place where people can find their ‘tribe’:

Online, each tribe inhabits its own filter bubble of partisan news. To blame this only on Facebook is unfair. If people wanted a range of views, they could install both rightwing and leftwing feeds on their Facebook pages — The Daily Telegraph and The Guardian, say. Most people choose not to, partly because they like living in their tribe. It makes them feel less lonely.

There’s a lot to agree with in this article. I think we can blame people for getting their news mainly through Facebook. I think we can roll our eyes at people who don’t think carefully about their information environment.

On the other hand, social networks are mediated by technology. And technology is never neutral. For example, Facebook has gone from saying that it couldn’t possibly be blamed for ‘fake news’ (2016) to investigating the way that Russian accounts may have manipulated users (2017) to announcing that they’re going to make some changes (2018, NSFW language in link).

We need to zoom out from specific problems in our society to the wider issues that underpin them. Kuper does this to some extent in this article, but the FT isn’t the place where you’ll see a robust criticism of the problems with capitalism. Social networks can be, and have been, different — just think of what Twitter was like before it became a publicly traded company, for example.

My concern is that we need to sort out these huge, society-changing companies before they become too large to regulate.

Source: FT Weekend

This isn’t the golden age of free speech

You’d think that with anyone, anywhere able to post anything to a global audience, this would be a golden age of free speech. Right?

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence? (Yes, there are systems that can create increasingly convincing fake videos.)

The problem is not with free speech itself; it’s with the means by which it’s disseminated:

In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.

It’s time to re-decentralise, people.

Source: WIRED

How to build a consensual social network

Here’s another article that was linked to from the source of a post I shared recently. The paragraph quoted here is from the section entitled ‘Consent-Oriented Architecture’:

Corporations built to maximize profits are unable to build consensual platforms. Their business model depend fundamentally on surveillance and behavioral control. To build consensual platforms require that privacy, security, and anonymity be built into the platforms as core features. The most effective way to secure consent is to ensure that all user data and control of all user interaction resides with the software running on the user’s own computer, not on any intermediary servers.

Earlier in that section, the author makes the obvious (but nevertheless alarming) point that audiences are sorted and graded as commodities to be bought and sold:

Audiences, like all commodities, are sold by measure and grade. Eggs are sold in dozens as grade A, for example. An advertiser might buy a thousand clicks from middle-aged white men who own a car and have a good credit rating.

In a previous section, the author notes that those who use social networks are subjects of an enclosed system:

The profits of the media monopolies are formed after surplus value has already been extracted. Their users are not exploited, but subjected, captured as an audience, and instrumentalized to extract surplus profits from other sectors of the ownership class.

I had to read some sections twice, but I’m glad I did. Great stuff, and very thought-provoking.

In short, to make Project MoodleNet a consensual social network, we need full transparency and, ideally, for the majority of the processing of personal data to happen on the user’s own device.
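To make that idea concrete, here’s a minimal sketch of what “processing on the user’s own device” could look like. Everything here is hypothetical and illustrative, not an actual MoodleNet design: raw activity data never leaves the client, and only a pseudonymised aggregate would ever be shared with a server.

```python
import hashlib
import json

# Hypothetical raw activity data, held only on the user's own device.
raw_activity = [
    {"user": "alice@example.com", "resource": "ocean-lesson", "minutes": 12},
    {"user": "alice@example.com", "resource": "algebra-quiz", "minutes": 7},
]

def summarise_locally(events, salt="per-device-secret"):
    """Aggregate on-device and pseudonymise the identifier before sharing."""
    total = sum(e["minutes"] for e in events)
    # One-way salted hash, so the server never sees the raw identifier
    pseudonym = hashlib.sha256((salt + events[0]["user"]).encode()).hexdigest()[:12]
    return {"id": pseudonym, "total_minutes": total, "resources": len(events)}

# Only this summary — not raw_activity — would ever leave the device.
summary = summarise_locally(raw_activity)
print(json.dumps(summary))
```

The design choice matters more than the code: because aggregation and pseudonymisation happen client-side, consent isn’t a checkbox on someone else’s server but a property of the architecture itself.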

Source: P2P Foundation

Social media short-circuits democracy

I’m wondering whether to delete all my social media accounts, or whether I should stay and fight. The trouble is that no technology is neutral; it always contains biases.

It’s interesting how the narrative has changed since the 2009 protests in Iran and the 2011 revolution in Egypt:

Because of the advent of social media, the story seemed to go, tyrants would fall and democracy would rule. Social media communications were supposed to translate into a political revolution, even though we don’t necessarily agree on what a positive revolution would look like. The process is overtly emotional: The outrage felt translates directly, thanks to the magic of social media, into a “rebellion” that becomes democratic governance.

But social media has not helped these revolutions turn into lasting democracies. Social media speaks directly to the most reactive, least reflective parts of our minds, demanding we pay attention even when our calmer selves might tell us not to. It is no surprise that this form of media is especially effective at promoting hate, white supremacy, and public humiliation.

In my new job at Moodle, I’m tasked with leading work around a new social network for educators focused on sharing Open Educational Resources and professional development. I think we’ll start to see more social networks based around content than people (think Pinterest rather than Facebook).

Source: Motherboard

Twitter isn’t going to ban Trump, no matter what

Twitter have confirmed what everyone knew all along: they’re not going to ban Donald Trump, no matter what he says or does. He’s simply too good for business.

Blocking a world leader from Twitter or removing their controversial Tweets would hide important information people should be able to see and debate. It would also not silence that leader, but it would certainly hamper necessary discussion around their words and actions.

It’s a weak, cowardly argument to imply that if Twitter doesn’t provide a platform for Trump, then someone else will. This is absolutely about their growth, absolutely about the fact that they’re a publicly traded company with shareholders to please.

Source: Twitter blog

Image via CNN

Life in likes

England’s Children’s Commissioner has released a report entitled ‘Life in Likes’ which has gathered lots of attention in my networks. This, despite the fact that the researchers talked to only 32 children. I used to teach over 250 kids a week! 32 is a class size, not a representative sample.

This article includes quotations from parents, such as this one:

Parent Trevor said his 12-year-old twin daughters had moved schools as a result of the pressure from social media, but admits they “can’t walk away” from it.

He told BBC Radio 5 live: “I can’t say to them, ‘You can’t use that,’ when I use it.”

Yes, you can. My kids see me drink alcohol, but it doesn’t mean I let them have it. My son has a smartphone with an app lock on the Google Play store, so he can’t install apps without my permission.

The solution to this stuff does involve basic digital skills, but what’s mainly lacking here, I think, are parenting skills.

Source: BBC News