Tag: Facebook

Using WhatsApp is a (poor) choice that you make

People often ask me about my stance on Facebook products. They can understand that I don’t use Facebook itself, but what about Instagram? And surely I use WhatsApp? Nope.

Given that I don’t usually have a single place to point people who want to read about the problems with WhatsApp, I thought I’d create one.


WhatsApp is a messaging app that was acquired by Facebook for the eye-watering amount of $19 billion in 2014. Interestingly, a BuzzFeed News article from 2018 cites confidential documents from the period leading up to the acquisition, obtained by the UK’s Department for Culture, Media, and Sport (DCMS). They show the threat WhatsApp posed to Facebook at the time.

US mobile messenger apps (iPhone) graph from August 2012 to March 2013
A document obtained by the DCMS as part of their investigations

As you can see from the above chart, Facebook executives were shown in 2013 that WhatsApp (8.6% reach) was growing rapidly and posed a huge threat to Facebook Messenger (13.7% reach).

So Facebook bought WhatsApp. But what did they buy? If, as we’re led to believe, WhatsApp is ‘end-to-end encrypted’, then Facebook doesn’t have access to users’ messages. So what’s so valuable?


Brian Acton, one of the founders of WhatsApp (and a man who got very rich through its sale) has gone on record saying that he feels like he sold his users’ privacy to Facebook.

Facebook, Acton says, had decided to pursue two ways of making money from WhatsApp. First, by showing targeted ads in WhatsApp’s new Status feature, which Acton felt broke a social compact with its users. “Targeted advertising is what makes me unhappy,” he says. His motto at WhatsApp had been “No ads, no games, no gimmicks”—a direct contrast with a parent company that derived 98% of its revenue from advertising. Another motto had been “Take the time to get it right,” a stark contrast to “Move fast and break things.”

Facebook also wanted to sell businesses tools to chat with WhatsApp users. Once businesses were on board, Facebook hoped to sell them analytics tools, too. The challenge was WhatsApp’s watertight end-to-end encryption, which stopped both WhatsApp and Facebook from reading messages. While Facebook didn’t plan to break the encryption, Acton says, its managers did question and “probe” ways to offer businesses analytical insights on WhatsApp users in an encrypted environment.

Parmy Olson (Forbes)

The other way Facebook wanted to make money was to sell tools to businesses allowing them to chat with WhatsApp users. These tools would also give “analytical insights” on how users interacted with WhatsApp.

Facebook was allowed to acquire WhatsApp (and Instagram) despite fears around monopolistic practices. This was because they made a promise not to combine data from various platforms. But, guess what happened next?

In 2014, Facebook bought WhatsApp for $19b, and promised users that it wouldn’t harvest their data and mix it with the surveillance troves it got from Facebook and Instagram. It lied. Years later, Facebook mixes data from all of its properties, mining it for data that ultimately helps advertisers, political campaigns and fraudsters find prospects for whatever they’re peddling. Today, Facebook is in the process of acquiring Giphy, and while Giphy currently doesn’t track users when they embed GIFs in messages, Facebook could start doing that anytime.

Cory Doctorow (EFF)

So Facebook is harvesting metadata from its various platforms, tracking people around the web (even if they don’t have an account), and buying up data about offline activities.

All of this creates a profile. So yes, because of end-to-end encryption, Facebook might not know the exact details of your messages. But they know that you’ve started messaging a particular user account around midnight every night. They know that you’ve started interacting with a bunch of stuff around anxiety. They know how the people you message most tend to vote.
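
To make that concrete, here’s a minimal sketch (in Python, with entirely invented records) of how much of a pattern-of-life profile can be built from message metadata alone, without ever decrypting a single message:

```python
from collections import Counter
from datetime import datetime

# Hypothetical metadata records: no message content at all, just who
# messaged whom and when — the kind of data end-to-end encryption
# does not protect.
metadata = [
    {"from": "alice", "to": "contact_42", "timestamp": "2020-06-01T23:58:00"},
    {"from": "alice", "to": "contact_42", "timestamp": "2020-06-02T00:03:00"},
    {"from": "alice", "to": "contact_42", "timestamp": "2020-06-03T23:55:00"},
    {"from": "alice", "to": "anxiety_support_group", "timestamp": "2020-06-02T01:10:00"},
]

# Count how often each contact is messaged late at night (23:00–02:00).
late_night = Counter()
for record in metadata:
    hour = datetime.fromisoformat(record["timestamp"]).hour
    if hour >= 23 or hour < 2:
        late_night[record["to"]] += 1

# Even without reading a single message, the pattern is obvious.
for contact, count in late_night.most_common():
    print(f"{contact}: {count} late-night messages")
```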


Do I have to connect the dots here? This is a company that sells targeted adverts, the kind of adverts that can influence the outcome of elections. Of course, Facebook will never admit that its platforms are the problem; it’s always the responsibility of the user to be ‘vigilant’.

Man reading a newspaper
A WhatsApp advert aimed at ‘fighting false information’ (via The Guardian)

So you might think that you’re just messaging your friend or colleague on a platform that ‘everyone’ uses. But your decision to go with the flow has consequences. It has implications for democracy, for the creation of a de facto monopoly over our digital information, and for the dissemination of false information.

The features that would later allow WhatsApp to become a conduit for conspiracy theory and political conflict were ones never integral to SMS, and have more in common with email: the creation of groups and the ability to forward messages. The ability to forward messages from one group to another – recently limited in response to Covid-19-related misinformation – makes for a potent informational weapon. Groups were initially limited in size to 100 people, but this was later increased to 256. That’s small enough to feel exclusive, but if 256 people forward a message on to another 256 people, 65,536 will have received it.

[…]

A communication medium that connects groups of up to 256 people, without any public visibility, operating via the phones in their pockets, is by its very nature, well-suited to supporting secrecy. Obviously not every group chat counts as a “conspiracy”. But it makes the question of how society coheres, who is associated with whom, into a matter of speculation – something that involves a trace of conspiracy theory. In that sense, WhatsApp is not just a channel for the circulation of conspiracy theories, but offers content for them as well. The medium is the message.

William Davies (The Guardian)
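
The arithmetic in that quotation is worth pausing on. Here’s a quick sketch of the upper bound on reach as a message is forwarded from group to group (an idealised figure, since real groups overlap):

```python
# Idealised upper bound on reach: every member of a 256-person group
# forwards the message on to another, entirely separate, 256-person
# group. Real groups overlap, so actual reach is lower.
GROUP_SIZE = 256

def potential_reach(hops: int, group_size: int = GROUP_SIZE) -> int:
    """Maximum recipients after `hops` rounds of forwarding."""
    return group_size ** hops

for hops in range(1, 4):
    print(f"after {hops} hop(s): up to {potential_reach(hops):,} recipients")
# after 1 hop(s): up to 256 recipients
# after 2 hop(s): up to 65,536 recipients
# after 3 hop(s): up to 16,777,216 recipients
```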

I cannot control the decisions others make, nor have I forced my opinions on my two children, who (despite my warnings) both use WhatsApp to message their friends. But, for me, the risk to myself and society of using WhatsApp is not one I’m happy to take.

Just don’t say I didn’t warn you.


Header image by Rachit Tank

Everyone has a mob self and an individual self, in varying proportions

Digital mediation, decentralisation, and context collapse

Is social media ‘real life’? A recent Op-Ed in The New York Times certainly thinks so:

An argument about Twitter — or any part of the internet — as “real life” is frequently an argument about what voices “matter” in our national conversation. Not just which arguments are in the bounds of acceptable public discourse, but also which ideas are considered as legitimate for mass adoption. It is a conversation about the politics of the possible. That conversation has many gatekeepers — politicians, the press, institutions of all kinds. And frequently they lack creativity.

Charlie Warzel (The New York Times)

I’ve certainly been a proponent over the years for the view that digital interactions are no less ‘real’ than analogue ones. Yes, you’re reading a book when you do so on an e-reader. That’s right, you’re meeting someone when doing so over video conference. And correct, engaging in a Twitter thread counts as a conversation.

Now that everyone’s interacting via digital devices during the pandemic, things that some parts of the population refused to count as ‘normal’ have at least been normalised. It’s been great to see so much IRL mobilisation due to protests that started online, for example with the #BlackLivesMatter hashtag.


With this very welcome normalisation, however, I’m not sure there’s a general understanding about how digital spaces mediate our interactions. Offline, our conversations are mediated by the context in which we find ourselves: we speak differently at home, on the street, and in the pub. Meanwhile, online, we experience context collapse as we take our smartphones everywhere.

We forget that we interact in algorithmically-curated environments that favour certain kinds of interactions over others. Sometimes these algorithms can be fairly blunt instruments, for example when ‘Dominic Cummings’ didn’t trend on Twitter despite him being all over the news. Why? Because of anti-porn filters.

Other times, things are quite subtle. I’ve spoken on numerous occasions about why I don’t use Facebook products. Part of the reason for this is that I don’t trust their privacy practices or algorithms. For example, a recent study showed that Instagram (which, of course, is owned by Facebook) actively encourages users to show some skin.

While Instagram claims that the newsfeed is organized according to what a given user “cares about most”, the company’s patent explains that it could actually be ranked according to what it thinks all users care about. Whether or not users see the pictures posted by the accounts they follow depends not only on their past behavior, but also on what Instagram believes is most engaging for other users of the platform.

Judith Duportail, Nicolas Kayser-Bril, Kira Schacht and Édouard Richard (Algorithm Watch)
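
To illustrate the distinction Algorithm Watch is drawing, here’s a toy example. This is not Instagram’s actual algorithm, and every post and score in it is invented; the point is simply how differently a feed looks when ranked by what engages all users versus what one user cares about:

```python
# Toy feed-ranking sketch: blend platform-wide engagement with the
# individual user's affinity. The posts and scores are made up.
posts = [
    {"id": "landscape", "global_engagement": 0.4, "personal_affinity": 0.9},
    {"id": "selfie",    "global_engagement": 0.8, "personal_affinity": 0.3},
    {"id": "recipe",    "global_engagement": 0.5, "personal_affinity": 0.7},
]

def rank(posts, global_weight):
    """Higher global_weight means ranking by what engages *all* users."""
    return sorted(
        posts,
        key=lambda p: (global_weight * p["global_engagement"]
                       + (1 - global_weight) * p["personal_affinity"]),
        reverse=True,
    )

print([p["id"] for p in rank(posts, global_weight=0.0)])  # what *you* care about most
print([p["id"] for p in rank(posts, global_weight=1.0)])  # what engages everyone else
```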

I think I must have linked back to this post of mine from six years ago more than any other one I’ve written: Curate or Be Curated: Why Our Information Environment is Crucial to a Flourishing Democracy, Civil Society. To quote myself:

The problem with social networks as news platforms is that they are not neutral spaces. Perhaps the easiest way to get quickly to the nub of the issue is to ask how they are funded. The answer is clear and unequivocal: through advertising. The two biggest social networks, Twitter and Facebook (which also owns Instagram and WhatsApp), are effectively “services with shareholders.” Your interactions with other people, with media, and with adverts, are what provide shareholder value. Lest we forget, CEOs of publicly-listed companies have a legal obligation to provide shareholder value. In an advertising-fueled online world this means continually increasing the number of eyeballs looking at (and fingers clicking on) content. 

Doug Belshaw (Connected Learning Alliance)

Herein lies the difficulty. We can’t rely on platforms backed by venture capital as they end up incentivised to do the wrong kinds of things. Equally, no-one is going to want to use a platform provided by a government.

This is why I really do still believe that decentralisation is the answer here. Local moderation by people you know and/or trust, happening at the individual or instance level. Algorithmic curation for the benefit of users, which can be turned on or off by the user. Scaling both vertically and horizontally.
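
What might ‘curation which can be turned on or off by the user’ look like in practice? Here’s a minimal sketch; the data model and scoring are invented rather than taken from any existing platform, but the point is who holds the switch:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    boosts: int  # endorsements from people on your own instance

def timeline(posts, curated: bool):
    """The same posts, ordered either algorithmically or chronologically —
    and crucially, the choice sits with the user, not the platform."""
    if curated:
        # Opt-in curation: favour what your own community has boosted.
        return sorted(posts, key=lambda p: p.boosts, reverse=True)
    # Default: strictly reverse-chronological, no ranking at all.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```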

At the moment it’s not the tech that’s holding people back from such decentralisation but rather two things. The first is the mental model of decentralisation. I think that’s easy to overcome, as back in 2007 people didn’t really ‘get’ Twitter, etc. The second one is much more difficult, and is around the dopamine hit you get from posting something on social media and becoming a minor celebrity. Although it’s possible to replicate this in decentralised environments, I’m not sure we’d necessarily want to?


Slightly modified quotation-as-title by D.H. Lawrence. Header image by Prateek Katyal

Saturday soundings

Black Lives Matter. The money from this month’s kind supporters of Thought Shrapnel has gone directly to the 70+ community bail funds, mutual aid funds, and racial justice organizers listed here.


IBM abandons ‘biased’ facial recognition tech

A 2019 study conducted by the Massachusetts Institute of Technology found that none of the facial recognition tools from Microsoft, Amazon and IBM were 100% accurate when it came to recognising men and women with dark skin.

And a study from the US National Institute of Standards and Technology suggested facial recognition algorithms were far less accurate at identifying African-American and Asian faces compared with Caucasian ones.

Amazon, whose Rekognition software is used by police departments in the US, is one of the biggest players in the field, but there are also a host of smaller players such as Facewatch, which operates in the UK. Clearview AI, which has been told to stop using images from Facebook, Twitter and YouTube, also sells its software to US police forces.

Maria Axente, AI ethics expert at consultancy firm PwC, said facial recognition had demonstrated “significant ethical risks, mainly in enhancing existing bias and discrimination”.

BBC News

Like many newer technologies, facial recognition is already a battleground for people of colour. This is a welcome, if potentially cynical, move by IBM which, let’s not forget, literally provided technology to the Nazis.


How Wikipedia Became a Battleground for Racial Justice

If there is one reason to be optimistic about Wikipedia’s coverage of racial justice, it’s this: The project is by nature open-ended and, well, editable. The spike in volunteer Wikipedia contributions stemming from the George Floyd protests is certainly not neutral, at least to the extent that word means being passive in this moment. Still, Koerner cautioned that any long-term change of focus to knowledge equity was unlikely to be easy for the Wikipedia editing community. “I hope that instead of struggling against it they instead lean into their discomfort,” she said. “When we’re uncomfortable, change happens.”

Stephen Harrison (Slate)

This is a fascinating glimpse into Wikipedia and how the commitment to ‘neutrality’ affects coverage of different types of people and events.


Deeds, not words

Recent events have revealed, again, that the systems we inhabit and use as educators are perfectly designed to get the results they get. The stated desire is there to change the systems we use. Let’s be able to look back to this point in two years and say that we have made a genuine difference.

Nick Dennis

Some great questions here from Nick, some of which are specific to education, whereas others are applicable everywhere.


Sign with hole cut out saying 'NO JUSTICE NO PEACE'

Audio Engineers Built a Shield to Deflect Police Sound Cannons

Since the protests began, demonstrators in multiple cities have reported spotting LRADs, or Long-Range Acoustic Devices, sonic weapons that blast sound waves at crowds over large distances and can cause permanent hearing loss. In response, two audio engineers from New York City have designed and built a shield which they say can block and even partially reflect these harmful sonic blasts back at the police.

Janus Rose (Vice)

For those not familiar with the increasing militarisation of police in the US, this is an interesting read.


CMA to look into Facebook’s purchase of gif search engine

The Competition and Markets Authority (CMA) is inviting comments about Facebook’s purchase of a company that currently provides gif search across many of the social network’s competitors, including Twitter and the messaging service Signal.

[…]

[F]or Facebook, the more compelling reason for the purchase may be the data that Giphy has about communication across the web. Since many services that integrate with the platform not only use it to find gifs, but also leave the original clip hosted on Giphy’s servers, the company receives information such as when a message is sent and received, the IP address of both parties, and details about the platforms they are using.

Alex Hern (The Guardian)
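
For anyone wondering how embedding a gif can leak that much, here’s a rough sketch of what any server hosting the clip can log each time a message containing it is rendered. This is a generic web-server view rather than anything specific to Giphy, and the example values are invented:

```python
from datetime import datetime, timezone

def log_embed_request(client_ip, user_agent, referer):
    """What the host of an embedded clip sees on every render —
    standard HTTP request details, no special tracking required."""
    return {
        "fetched_at": datetime.now(timezone.utc).isoformat(),  # when the message was viewed
        "client_ip": client_ip,    # roughly where the viewer is
        "user_agent": user_agent,  # which app and platform they use
        "referer": referer,        # which service embedded the clip
    }

# Invented example values:
print(log_embed_request("203.0.113.7", "ExampleChatApp/4.6 (Android 10)", "chat.example.org"))
```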

In my 2012 TEDx Talk I discussed the memetic power of gifs, so while others might find this news surprising, I don’t think that even back then I would have been surprised that gifs would be such a hot topic in 2020.

Also by Hern this week is an article on Twitter’s experiments around getting people to actually read things before they tweet/retweet them. What times we live in.


Human cycles: History as science

To Peter Turchin, who studies population dynamics at the University of Connecticut in Storrs, the appearance of three peaks of political instability at roughly 50-year intervals is not a coincidence. For the past 15 years, Turchin has been taking the mathematical techniques that once allowed him to track predator–prey cycles in forest ecosystems, and applying them to human history. He has analysed historical records on economic activity, demographic trends and outbursts of violence in the United States, and has come to the conclusion that a new wave of internal strife is already on its way. The peak should occur in about 2020, he says, and will probably be at least as high as the one in around 1970. “I hope it won’t be as bad as 1870,” he adds.

Laura Spinney (Nature)

I’m not sure about this at all, because if you go looking for examples of something to fit your theory, you’ll find it. Especially when your theory is as generic as this one. It seems like a kind of reverse fortune-telling?


Universal Basic Everything

Much of our economies in the west have been built on the idea of unique ideas, or inventions, which are then protected and monetised. It’s a centuries old way of looking at ideas, but today we also recognise that this method of creating and growing markets around IP protected products has created an unsustainable use of the world’s natural resources and generated too much carbon emission and waste.

Open source and creative commons moves us significantly in the right direction. From open sharing of ideas we can start to think of ideas, services, systems, products and activities which might be essential or basic for sustaining life within the ecological ceiling, whilst also re-inforcing social foundations.

Tessy Britton

I’m proud to be part of a co-op that focuses on openness of all forms. This article is a great introduction to anyone who wants a new way of looking at our post-COVID future.


World faces worst food crisis for at least 50 years, UN warns

Lockdowns are slowing harvests, while millions of seasonal labourers are unable to work. Food waste has reached damaging levels, with farmers forced to dump perishable produce as the result of supply chain problems, and in the meat industry plants have been forced to close in some countries.

Even before the lockdowns, the global food system was failing in many areas, according to the UN. The report pointed to conflict, natural disasters, the climate crisis, and the arrival of pests and plant and animal plagues as existing problems. East Africa, for instance, is facing the worst swarms of locusts for decades, while heavy rain is hampering relief efforts.

The additional impact of the coronavirus crisis and lockdowns, and the resulting recession, would compound the damage and tip millions into dire hunger, experts warned.

Fiona Harvey (The Guardian)

The knock-on effects of COVID-19 are going to be with us for a long time yet. And these second-order effects will themselves have effects which, with climate change also being in the mix, could lead to mass migrations and conflict by 2025.


Mice on Acid

What exactly a mouse sees when she’s tripping on DOI—whether the plexiglass walls of her cage begin to melt, or whether the wood chips begin to crawl around like caterpillars—is tied up in the private mysteries of what it’s like to be a mouse. We can’t ask her directly, and, even if we did, her answer probably wouldn’t be of much help.

Cody Kommers (Nautilus)

The bit about ‘ego dissolution’ in this article, which is ostensibly about how to get legal hallucinogens to market, is really interesting.


Header image by Dmitry Demidov