In the complaint filed Thursday in federal court in San Francisco, New Jersey Instagram user Brittany Conditi contends the app’s use of the camera is intentional and done for the purpose of collecting “lucrative and valuable data on its users that it would not otherwise have access to.”
Facebook has been incredibly lucrative for its founder, Mark Zuckerberg, who ranks among the wealthiest men in the world. But it’s been a disaster for the world itself, a powerful vector for paranoia, propaganda and conspiracy-theorizing as well as authoritarian crackdowns and vicious attacks on the free press. Wherever it goes, chaos and destabilization follow.
“I can’t sit by and stay silent while these platforms continue to allow the spreading of hate, propaganda and misinformation – created by groups to sow division and split America apart,” Kardashian West said.
This image, from Grayson Perry, is incredible. As he points out in the accompanying article, he’s chosen the US due to an upcoming series of his, but geographically this could be anywhere, as culture wars these days happen mainly online.
I’ve added the emphasis in the quotation below:
When we experience a background hum of unfocused emotion, be it anxiety, sadness, fear, anger, we unconsciously look for something to attach it to. Social media is brilliant at supplying us with issues to which to attach our free-floating feelings. We often look for nice, preformed boxes into which we can dump our inchoate feelings; we crave certainty. Social media constantly offers up neat solutions for our messy feelings, whether it be God, guns, Greta or gender identity.
In a battle-torn landscape governed by zeroes and ones, nuance, compromise and empathy are the first casualties. If I were to sum up the online culture war in one word it would be “diaphobia”, a term coined by the psychiatrist RD Laing meaning “fear of being influenced by other people”, the opposite of dialogue. Our ever-present underlying historical and enculturated emotions will nudge us to cherrypick and polish the nuggets of information that support a stance that may have been in our bodies from childhood. Once we have taken sides, the algorithms will supply us with a stream of content to entrench and confirm our beliefs.
People often ask me about my stance on Facebook products. They can understand that I don’t use Facebook itself, but what about Instagram? And surely I use WhatsApp? Nope.
Given that I don’t usually have a single place to point people who want to read about the problems with WhatsApp, I thought I’d create one.
WhatsApp is a messaging app that was acquired by Facebook for the eye-watering amount of $19 billion in 2014. Interestingly, a BuzzFeed News article from 2018 cites confidential documents from the time leading up to the acquisition that were acquired by the UK’s Department for Culture, Media, and Sport. They show the threat WhatsApp posed to Facebook at the time.
As you can see from the above chart, Facebook executives were shown in 2013 that WhatsApp (8.6% reach) was growing rapidly and posed a huge threat to Facebook Messenger (13.7% reach).
So Facebook bought WhatsApp. But what did they buy? If, as we’re led to believe, WhatsApp is ‘end-to-end encrypted’ then Facebook don’t have access to the messages of users. So what’s so valuable?
Brian Acton, one of the founders of WhatsApp (and a man who got very rich through its sale) has gone on record saying that he feels like he sold his users’ privacy to Facebook.
Facebook, Acton says, had decided to pursue two ways of making money from WhatsApp. First, by showing targeted ads in WhatsApp’s new Status feature, which Acton felt broke a social compact with its users. “Targeted advertising is what makes me unhappy,” he says. His motto at WhatsApp had been “No ads, no games, no gimmicks”—a direct contrast with a parent company that derived 98% of its revenue from advertising. Another motto had been “Take the time to get it right,” a stark contrast to “Move fast and break things.”
Facebook also wanted to sell businesses tools to chat with WhatsApp users. Once businesses were on board, Facebook hoped to sell them analytics tools, too. The challenge was WhatsApp’s watertight end-to-end encryption, which stopped both WhatsApp and Facebook from reading messages. While Facebook didn’t plan to break the encryption, Acton says, its managers did question and “probe” ways to offer businesses analytical insights on WhatsApp users in an encrypted environment.
Parmy Olson (Forbes)
The other way Facebook wanted to make money was to sell tools to businesses allowing them to chat with WhatsApp users. These tools would also give “analytical insights” on how users interacted with WhatsApp.
Facebook was allowed to acquire WhatsApp (and Instagram) despite fears around monopolistic practices. This was because they made a promise not to combine data from various platforms. But, guess what happened next?
In 2014, Facebook bought WhatsApp for $19b, and promised users that it wouldn’t harvest their data and mix it with the surveillance troves it got from Facebook and Instagram. It lied. Years later, Facebook mixes data from all of its properties, mining it for data that ultimately helps advertisers, political campaigns and fraudsters find prospects for whatever they’re peddling. Today, Facebook is in the process of acquiring Giphy, and while Giphy currently doesn’t track users when they embed GIFs in messages, Facebook could start doing that anytime.
All of this creates a profile. So yes, because of end-to-end encryption, Facebook might not know the exact details of your messages. But they know that you’ve started messaging a particular user account around midnight every night. They know that you’ve started interacting with a bunch of stuff around anxiety. They know how the people you message most tend to vote.
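To make that concrete, here’s a toy sketch. The data and the analysis are entirely my own invention for illustration, not any real Facebook pipeline, but they show how much a provider can infer from timing metadata alone, without decrypting a single message:

```python
# Toy illustration: even with message *content* encrypted, timing metadata
# alone reveals behavioural patterns.
from collections import Counter
from datetime import datetime

# Hypothetical metadata log: (sender, recipient, ISO timestamp) — the kind
# of record a messaging provider can retain without touching message bodies.
events = [
    ("alice", "bob",   "2020-06-01T23:58:00"),
    ("alice", "bob",   "2020-06-02T00:04:00"),
    ("alice", "bob",   "2020-06-03T00:12:00"),
    ("alice", "carol", "2020-06-02T14:30:00"),
]

# Count, per contact pair, how many messages fall in the late-night window.
late_night = Counter()
for sender, recipient, ts in events:
    hour = datetime.fromisoformat(ts).hour
    if hour >= 23 or hour < 1:
        late_night[(sender, recipient)] += 1

# ('alice', 'bob') stands out — a pattern inferred with zero decryption.
print(late_night.most_common(1))  # [(('alice', 'bob'), 3)]
```

Scale that up from four log entries to billions, add group membership and contact graphs, and the encrypted message bodies start to look like the least interesting part of the data.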
Do I have to connect the dots here? This is a company that sells targeted adverts, the kind of adverts that can influence the outcome of elections. Of course, Facebook will never admit that its platforms are the problem, it’s always the responsibility of the user to be ‘vigilant’.
So you might think that you’re just messaging your friend or colleague on a platform that ‘everyone’ uses. But your decision to go with the flow has consequences. It has implications for democracy. It has implications for the creation of a de facto monopoly over our digital information. And it has implications for the dissemination of false information.
The features that would later allow WhatsApp to become a conduit for conspiracy theory and political conflict were ones never integral to SMS, and have more in common with email: the creation of groups and the ability to forward messages. The ability to forward messages from one group to another – recently limited in response to Covid-19-related misinformation – makes for a potent informational weapon. Groups were initially limited in size to 100 people, but this was later increased to 256. That’s small enough to feel exclusive, but if 256 people forward a message on to another 256 people, 65,536 will have received it.
A communication medium that connects groups of up to 256 people, without any public visibility, operating via the phones in their pockets, is, by its very nature, well-suited to supporting secrecy. Obviously not every group chat counts as a “conspiracy”. But it makes the question of how society coheres, who is associated with whom, into a matter of speculation – something that involves a trace of conspiracy theory. In that sense, WhatsApp is not just a channel for the circulation of conspiracy theories, but offers content for them as well. The medium is the message.
William Davies (The Guardian)
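The forwarding arithmetic in that quotation is worth spelling out. A back-of-envelope sketch (an idealised best case where no two groups overlap):

```python
# Geometric fan-out of WhatsApp forwarding: if every member of a
# 256-person group forwards a message on to one more 256-person group,
# reach multiplies by 256 at each hop.
GROUP_SIZE = 256

def reach(hops: int) -> int:
    """People reached after `hops` rounds of forwarding."""
    return GROUP_SIZE ** hops

print(reach(2))  # 65536 — the article's figure
print(reach(3))  # 16777216 — one more round reaches ~16.8 million
```

In practice groups overlap heavily, so real reach is lower, but the point stands: a third hop is enough to reach a population the size of a large country’s electorate.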
I cannot control the decisions others make, nor have I forced my opinions on my two children, who (despite my warnings) both use WhatsApp to message their friends. But, for me, the risk to myself and society of using WhatsApp is not one I’m happy with taking.
Digital mediation, decentralisation, and context collapse
Is social media ‘real life’? A recent Op-Ed in The New York Times certainly thinks so:
An argument about Twitter — or any part of the internet — as “real life” is frequently an argument about what voices “matter” in our national conversation. Not just which arguments are in the bounds of acceptable public discourse, but also which ideas are considered as legitimate for mass adoption. It is a conversation about the politics of the possible. That conversation has many gatekeepers — politicians, the press, institutions of all kinds. And frequently they lack creativity.
Charlie Warzel (The New York Times)
I’ve certainly been a proponent over the years for the view that digital interactions are no less ‘real’ than analogue ones. Yes, you’re reading a book when you do so on an e-reader. That’s right, you’re meeting someone when doing so over video conference. And correct, engaging in a Twitter thread counts as a conversation.
Now that everyone’s interacting via digital devices during the pandemic, things that some parts of the population refused to count as ‘normal’ have at least been normalised. It’s been great to see so much IRL mobilisation due to protests that started online, for example with the #BlackLivesMatter hashtag.
With this very welcome normalisation, however, I’m not sure there’s a general understanding about how digital spaces mediate our interactions. Offline, our conversations are mediated by the context in which we find ourselves: we speak differently at home, on the street, and in the pub. Meanwhile, online, we experience context collapse as we take our smartphones everywhere.
We forget that we interact in algorithmically-curated environments that favour certain kinds of interactions over others. Sometimes these algorithms can be fairly blunt instruments, for example when ‘Dominic Cummings’ didn’t trend on Twitter despite him being all over the news. Why? Because of anti-porn filters.
Other times, things are quite subtle. I’ve spoken on numerous occasions about why I don’t use Facebook products. Part of the reason for this is that I don’t trust their privacy practices or algorithms. For example, a recent study showed that Instagram (which, of course, is owned by Facebook) actively encourages users to show some skin.
While Instagram claims that the newsfeed is organized according to what a given user “cares about most”, the company’s patent explains that it could actually be ranked according to what it thinks all users care about. Whether or not users see the pictures posted by the accounts they follow depends not only on their past behavior, but also on what Instagram believes is most engaging for other users of the platform.
Judith Duportail, Nicolas Kayser-Bril, Kira Schacht and Édouard Richard (Algorithm Watch)
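The tension the Algorithm Watch researchers describe can be sketched as a toy scoring function. This is my own illustrative model, not Instagram’s actual system: a ranking score that blends what *you* have engaged with against what the platform predicts *all* users engage with.

```python
# Toy feed-ranking model (illustrative only, not Instagram's algorithm):
# a weighted blend of per-user affinity and platform-wide engagement.
def rank_score(personal_affinity: float,
               global_engagement: float,
               global_weight: float = 0.7) -> float:
    """Higher global_weight means the feed optimises for what the platform
    thinks everyone cares about, not what this user cares about."""
    return (1 - global_weight) * personal_affinity \
           + global_weight * global_engagement

# A post this user cares a lot about vs. one they care little about
# but which engages the crowd:
niche_post = rank_score(personal_affinity=0.9, global_engagement=0.1)
crowd_post = rank_score(personal_affinity=0.2, global_engagement=0.9)
print(round(niche_post, 2), round(crowd_post, 2))  # 0.34 0.69
```

With the global term weighted heavily, the crowd-pleasing post outranks the one the user actually cares about, which is exactly the pattern the study claims to observe.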
The problem with social networks as news platforms is that they are not neutral spaces. Perhaps the easiest way to get quickly to the nub of the issue is to ask how they are funded. The answer is clear and unequivocal: through advertising. The two biggest social networks, Twitter and Facebook (which also owns Instagram and WhatsApp), are effectively “services with shareholders.” Your interactions with other people, with media, and with adverts, are what provide shareholder value. Lest we forget, CEOs of publicly-listed companies have a legal obligation to provide shareholder value. In an advertising-fueled online world this means continually increasing the number of eyeballs looking at (and fingers clicking on) content.
Doug Belshaw (Connected Learning Alliance)
Herein lies the difficulty. We can’t rely on platforms backed by venture capital as they end up incentivised to do the wrong kinds of things. Equally, no-one is going to want to use a platform provided by a government.
This is why I really do still believe that decentralisation is the answer here. Local moderation by people you know and/or trust, happening at an individual or instance level. Algorithmic curation for the benefit of users, which can be turned on or off by the user. Scaling both vertically and horizontally.
At the moment it’s not the tech that’s holding people back from such decentralisation but rather two things. The first is the mental model of decentralisation. I think that’s easy to overcome, as back in 2007 people didn’t really ‘get’ Twitter, etc. The second one is much more difficult, and is around the dopamine hit you get from posting something on social media and becoming a minor celebrity. Although it’s possible to replicate this in decentralised environments, I’m not sure we’d necessarily want to?
Slightly modified quotation-as-title by D.H. Lawrence. Header image by Prateek Katyal
A 2019 study conducted by the Massachusetts Institute of Technology found that none of the facial recognition tools from Microsoft, Amazon and IBM were 100% accurate when it came to recognising men and women with dark skin.
And a study from the US National Institute of Standards and Technology suggested facial recognition algorithms were far less accurate at identifying African-American and Asian faces compared with Caucasian ones.
Amazon, whose Rekognition software is used by police departments in the US, is one of the biggest players in the field, but there are also a host of smaller players such as Facewatch, which operates in the UK. Clearview AI, which has been told to stop using images from Facebook, Twitter and YouTube, also sells its software to US police forces.
Maria Axente, AI ethics expert at consultancy firm PwC, said facial recognition had demonstrated “significant ethical risks, mainly in enhancing existing bias and discrimination”.
Like many newer technologies, facial recognition is already a battleground for people of colour. This is a welcome, if potentially cynical, move by IBM which, let’s not forget, literally provided technology to the Nazis.
If there is one reason to be optimistic about Wikipedia’s coverage of racial justice, it’s this: The project is by nature open-ended and, well, editable. The spike in volunteer Wikipedia contributions stemming from the George Floyd protests is certainly not neutral, at least to the extent that word means being passive in this moment. Still, Koerner cautioned that any long-term change of focus to knowledge equity was unlikely to be easy for the Wikipedia editing community. “I hope that instead of struggling against it they instead lean into their discomfort,” she said. “When we’re uncomfortable, change happens.”
Stephen Harrison (Slate)
This is a fascinating glimpse into Wikipedia and how the commitment to ‘neutrality’ affects coverage of different types of people and events.
Recent events have revealed, again, that the systems we inhabit and use as educators are perfectly designed to get the results they get. The stated desire is there to change the systems we use. Let’s be able to look back to this point in two years and say that we have made a genuine difference.
Some great questions here from Nick, some of which are specific to education, whereas others are applicable everywhere.
Since the protests began, demonstrators in multiple cities have reported spotting LRADs, or Long-Range Acoustic Devices, sonic weapons that blast sound waves at crowds over large distances and can cause permanent hearing loss. In response, two audio engineers from New York City have designed and built a shield which they say can block and even partially reflect these harmful sonic blasts back at the police.
Janus Rose (Vice)
For those not familiar with the increasing militarisation of police in the US, this is an interesting read.
The Competition and Markets Authority (CMA) is inviting comments about Facebook’s purchase of a company that currently provides gif search across many of the social network’s competitors, including Twitter and the messaging service Signal.
[F]or Facebook, the more compelling reason for the purchase may be the data that Giphy has about communication across the web. Since many services that integrate with the platform not only use it to find gifs, but also leave the original clip hosted on Giphy’s servers, the company receives information such as when a message is sent and received, the IP address of both parties, and details about the platforms they are using.
Alex Hern (The Guardian)
In my 2012 TEDx Talk I discussed the memetic power of gifs. Others might find this news surprising, but I don’t think I would have been surprised even back then that it would be such a hot topic in 2020.
Also by Hern this week is an article on Twitter’s experiments around getting people to actually read things before they tweet/retweet them. What times we live in.
To Peter Turchin, who studies population dynamics at the University of Connecticut in Storrs, the appearance of three peaks of political instability at roughly 50-year intervals is not a coincidence. For the past 15 years, Turchin has been taking the mathematical techniques that once allowed him to track predator–prey cycles in forest ecosystems, and applying them to human history. He has analysed historical records on economic activity, demographic trends and outbursts of violence in the United States, and has come to the conclusion that a new wave of internal strife is already on its way. The peak should occur in about 2020, he says, and will probably be at least as high as the one in around 1970. “I hope it won’t be as bad as 1870,” he adds.
Laura Spinney (Nature)
I’m not sure about this at all, because if you go looking for examples of something to fit your theory, you’ll find it. Especially when your theory is as generic as this one. It seems like a kind of reverse fortune-telling?
Much of our economies in the west have been built on the idea of unique ideas, or inventions, which are then protected and monetised. It’s a centuries-old way of looking at ideas, but today we also recognise that this method of creating and growing markets around IP-protected products has created an unsustainable use of the world’s natural resources and generated too many carbon emissions and too much waste.
Open source and creative commons move us significantly in the right direction. From open sharing of ideas we can start to think of ideas, services, systems, products and activities which might be essential or basic for sustaining life within the ecological ceiling, whilst also reinforcing social foundations.
I’m proud to be part of a co-op that focuses on openness of all forms. This article is a great introduction to anyone who wants a new way of looking at our post-COVID future.
Lockdowns are slowing harvests, while millions of seasonal labourers are unable to work. Food waste has reached damaging levels, with farmers forced to dump perishable produce as the result of supply chain problems, and in the meat industry plants have been forced to close in some countries.
Even before the lockdowns, the global food system was failing in many areas, according to the UN. The report pointed to conflict, natural disasters, the climate crisis, and the arrival of pests and plant and animal plagues as existing problems. East Africa, for instance, is facing the worst swarms of locusts for decades, while heavy rain is hampering relief efforts.
The additional impact of the coronavirus crisis and lockdowns, and the resulting recession, would compound the damage and tip millions into dire hunger, experts warned.
Fiona Harvey (The Guardian)
The knock-on effects of COVID-19 are going to be with us for a long time yet. And these second-order effects will themselves have effects which, with climate change also being in the mix, could lead to mass migrations and conflict by 2025.
What exactly a mouse sees when she’s tripping on DOI—whether the plexiglass walls of her cage begin to melt, or whether the wood chips begin to crawl around like caterpillars—is tied up in the private mysteries of what it’s like to be a mouse. We can’t ask her directly, and, even if we did, her answer probably wouldn’t be of much help.
Cody Kommers (Nautilus)
The bit about ‘ego dissolution’ in this article, which is ostensibly about how to get legal hallucinogens to market, is really interesting.
The article with the above embedded video is from five years ago, but someone shared it on my Twitter timeline and it reminded me of something. When I taught my History students about the Industrial Revolution it blew their minds that different parts of the country could be, effectively, on different ‘timezones’ until the dawn of the railways.
It just goes to show how true it is that first we shape our tools, and then they shape us.
“Uncertainty is one of the biggest elements that contributes to our experience of stress,” said Lynn Bufka, the senior director of Practice, Research, and Policy at the American Psychological Association. “Part of what we try to do to function in our society is to have some structure, some predictability. When we have those kinds of things, life feels more manageable, because you don’t have to put the energy into figuring those things out.”
Emily Baron Cadloff (VICE)
A short but useful article on why despite having grand plans, it’s difficult to get anything done in our current situation. We can’t even plan holidays at the moment.
The industrialized world is so full of human faces, like in ads, that we forget that it’s just ink, or pixels on a computer screen. Every time our ancestors saw something that looked like a human face, it probably was one. As a result, we didn’t evolve to distinguish reality from representation. The same perceptual machinery interprets both.
Jim Davies (Nautilus)
A useful reminder that our brain contains several systems, some of which are paleolithic.
The Wright Flyer could only go 200 meters, and the Rocket Belt could only fly for 21 seconds. But the Flyer was a breakthrough of principle. There was no reason why it couldn’t get much better, very quickly, and Blériot flew across the English Channel just six years later. There was a very clear and obvious path to make it better. Conversely, the Rocket Belt flew for 21 seconds because it used almost a litre of fuel per second – to fly like this for half an hour you’d need almost two tonnes of fuel, and you can’t carry that on your back. There was no roadmap to make it better without changing the laws of physics. We don’t just know that now – we knew it in 1962.
A useful post about figuring out whether something will happen or be successful. The question is “what would have to change?”
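The Rocket Belt numbers in the quotation check out with simple arithmetic. Here is a quick sanity check (the 1 kg-per-litre density is my own round-number assumption; the hydrogen peroxide propellant actually used was denser, which only makes the total heavier):

```python
# Sanity-check the Rocket Belt claim: ~1 litre of fuel per second,
# so a half-hour flight needs ~1,800 litres of propellant.
burn_rate_l_per_s = 1.0    # ~1 litre per second (from the article)
flight_time_s = 30 * 60    # half an hour, in seconds
density_kg_per_l = 1.0     # assumed round figure for propellant density

fuel_kg = burn_rate_l_per_s * flight_time_s * density_kg_per_l
print(f"{fuel_kg / 1000:.1f} tonnes")  # 1.8 tonnes — "almost two tonnes"
```

No amount of incremental engineering closes a gap like that, which is the author’s point: the question to ask is what would have to change, and here the answer is the laws of physics.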
The case went to court after the woman refused to delete photographs of her grandchildren which she had posted on social media. The mother of the children had asked several times for the pictures to be deleted.
The GDPR does not apply to the “purely personal” or “household” processing of data. However, that exemption did not apply because posting photographs on social media made them available to a wider audience, the ruling said.
“With Facebook, it cannot be ruled out that placed photos may be distributed and may end up in the hands of third parties,” it said.
The woman must remove the photos or pay a fine of €50 (£45) for every day that she fails to comply with the order, up to a maximum fine of €1,000.
I think this is entirely reasonable, and I’m hoping we’ll see more of this until people stop thinking they can share the personally identifiable information of others whenever and however they like.
– Environment (E) – Are the reasons it’s not happening outside of the control of the people you identified in Step 1? Do they have the resources, the tools, the funding? Do their normal objectives mean that they have to prioritise other things? Does the prevailing organisational culture work against achieving the goals?
– Skills (S) – Are they aware of the tasks they need to do and enabled to do them?
– Knowledge (K) – Is the knowledge they need available to them? It could either be information they have to carry around in their heads, or just be available in a place they know about.
– Motivation (Mo) – Do they have the will to carry it out?
The last three (S, K, Mo) work a little bit like the fire triangle from that online fire safety training you probably had to do this year. All three need to be present for new practice to happen and to be sustainable.
Chris Thomson (Jisc)
In this post, Chris Thomson, who I used to work with at Jisc, challenges the notion that training is about getting people to do what you want. Instead, this ESKiMO approach asks why they’re not already doing it.
Within Scrum, estimates have a primary purpose – to figure out how much work the team can accomplish in a given sprint. If I were to grant that Sprints were a good idea (which I obviously don’t believe) then the description of estimates in the official Scrum guide wouldn’t be a problem.
The problem is that estimates in practice are a bastardization of reality. The Scrum guide is vague on the topic so managers take matters into their own hands.
Lane Wagner (Qvault)
I’m a product manager, and I find it incredible that people assume that ‘agile’ is the same as ‘Scrum’. If you’re trying to shoehorn the work you do into a development process then, to my mind, you’re doing it wrong.
As with the example below, it’s all about something that works for your particular context, while bearing in mind the principles of the agile manifesto.
The downside of all those nice methods and tools is that you have to apply them, which can, of course, be postponed as well. Thus, the most important step is to integrate your tool or todo list into your daily routine. Whenever you finish a task, or you’re thinking about what to do next, the focus should be on your list. For example, I figured out that I always click on one link in my browser favourites (a news website) or an app on my mobile phone (my email app). Sometimes I clicked a hundred times a day, even though I knew there couldn’t be any new emails, as I had checked a minute ago. Maybe you have also developed such a “useless” habit which could be broken or at least used for something good. So I just replaced the app on my mobile and the link in my browser with my Remember The Milk app, which shows me the tasks I have to do today. If you have just a paper-based solution it might be more difficult, but try to integrate it into your daily routines, and keep it always within reach. After finishing a task, you should tick it off in your system, which also forces you to look at the task list again.
Some useful pointers in this post, especially at the end about developing and refining your own system that depends on your current context.
The focus should be on the insistence on excellence, both from yourself and from those around you. The wisdom from experience. The work ethic. The drive. The dedication. The sacrifice. Jordan hits on all of those. And he even implies that not everyone needed the “tough love” to push them. But that’s glossed over for the more powerful mantra. Still, it doesn’t change the fact that there are other ways to tease such greatness out of people — different people require different methods.
M.G. Siegler (500ish)
I like basketball, and my son plays, but I haven’t yet seen the documentary mentioned in this post. The author discusses Michael Jordan stating that “Winning has a price. And leadership has a price.” However, he suggests that this isn’t the only way to get to excellence, and I would agree.
I don’t know… I think I’d like to say only that [young people] should learn to be alone and try to spend as much time as possible by themselves. I think one of the faults of young people today is that they try to come together around events that are noisy, almost aggressive at times. This desire to be together in order to not feel alone is an unfortunate symptom, in my opinion. Every person needs to learn from childhood how to spend time with oneself. That doesn’t mean he should be lonely, but that he shouldn’t grow bored with himself because people who grow bored in their own company seem to me in danger, from a self-esteem point of view.
This article in Open Culture quotes the film-maker Andrei Tarkovsky. Having just finished my first set of therapy sessions, I have to say that the metaphor of “putting on your own oxygen mask before helping others” would be a good takeaway from it. That sounds selfish, but as Tarkovsky points out here, other approaches can lead to the destruction of self-esteem.
[T]here are two sources of feeling like a noob: being stupid, and doing something novel. Our dislike of feeling like a noob is our brain telling us “Come on, come on, figure this out.” Which was the right thing to be thinking for most of human history. The life of hunter-gatherers was complex, but it didn’t change as much as life does now. They didn’t suddenly have to figure out what to do about cryptocurrency. So it made sense to be biased toward competence at existing problems over the discovery of new ones. It made sense for humans to dislike the feeling of being a noob, just as, in a world where food was scarce, it made sense for them to dislike the feeling of being hungry.
I’m not sure about the evolutionary framing, but there’s definitely something in this about having the confidence (and humility) to be a ‘noob’ and learn things as a beginner.
Imagine you were to take two identical twins and give them the same starter job, same manager, same skills, and the same personality. One competently does all of their work behind a veil of silence, not sharing good news, opportunities, or challenges, but just plugs away until asked for a status update. The other does the same level of work but communicates effectively, keeping their manager and stakeholders proactively informed. Which one is going to get the next opportunity for growth?
I absolutely love this post. As a Product Manager, I’ve been talking repeatedly recently about making our open-source project ‘legible’. As remote workers, that means over-communicating and, as pointed out in this post, being proactive in that communication. Highly recommended.
Consider packing a suitcase for a trip. It contains many different items – clothes, toiletries, books, electrical items, maybe food and drink or gifts. Some of these items bear a relationship to others, for example underwear, and others are seemingly unrelated, for example a hair dryer. Each brings their own function, which has a separate existence and relates to other items outside of the case, but within the case, they form a new category, that of “items I need for my trip.” In this sense the suitcase resembles the ed tech field, or at least a gathering of ed tech individuals, for example at a conference.
If you attend a chemistry conference and have lunch with strangers, it is highly likely they will nearly all have chemistry degrees and PhDs. This is not the case at an ed tech conference, where the lunch table might contain people with expertise in computer science, philosophy, psychology, art, history and engineering. This is a strength of the field. The chemistry conference suitcase then contains just socks (but of different types), but the ed tech suitcase contains many different items. In this perspective then the aim is not to make the items of the suitcase the same, but to find means by which they meet the overall aim of usefulness for your trip, and are not random items that won’t be needed. This suggests a different way of approaching ed tech beyond making it a discipline.
At the start of this year, it became (briefly) fashionable among ageing (mainly North American) men to state that they had “never been an edtech guy”. Followed by something something pedagogy or something something people. In this post, Martin Weller uses a handy metaphor to explain that edtech may not be a discipline, but it’s a useful field (or area of focus) nonetheless.
Backdoors are usually camouflaged as “accidental” security flaws. In the last year alone, 12 such flaws have been found in WhatsApp. Seven of them were critical – like the one that got Jeff Bezos. Some might tell you WhatsApp is still “very secure” despite having 7 backdoors exposed in the last 12 months, but that’s just statistically improbable.
Don’t let yourself be fooled by the tech equivalent of circus magicians who’d like to focus your attention on one isolated aspect all while performing their tricks elsewhere. They want you to think about end-to-end encryption as the only thing you have to look at for privacy. The reality is much more complicated.
Facebook products are bad for you, for society, and for the planet. Choose alternatives and encourage others to do likewise.
The current social-media model isn’t quite right for family sharing. Different generations tend to congregate in different places: Facebook is Boomer paradise, Instagram appeals to Millennials, TikTok is Gen Z central. (WhatsApp has helped bridge the generational divide, but its focus on messaging is limiting.)
Updating family about a vacation across platforms—via Instagram stories or on Facebook, for example—might not always be appropriate. Do you really want your cubicle pal, your acquaintance from book club, and your high school frenemy to be looped in as well?
Some apps are just before their time. Take Path, for example, which my family used for almost the entire eight years it was around, from 2010 to 2018. The interface was great, the experience cosy, and the knowledge that you weren’t sharing with everyone outside of a close circle? Priceless.
While one data broker might only be able to tie my shopping behavior to something like my IP address, and another broker might only be able to tie it to my rough geolocation, that’s ultimately not much of an issue. What is an issue is what happens when those “anonymized” data points inevitably bleed out of the marketing ecosystem and someone even more nefarious uses it for, well, whatever—use your imagination. In other words, when one data broker springs a leak, it’s bad enough—but when dozens spring leaks over time, someone can piece that data together in a way that’s not only identifiable but chillingly accurate.
This idea of cumulative harm is a particularly difficult one to explain (and prove) not only in the world of data, but in every area of life.
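To make that cumulative-harm argument concrete, here is a toy sketch in Python. All of the datasets, field names and people below are invented; the point is only that records which are individually “anonymous” can re-identify someone once they are joined on shared quasi-identifiers.

```python
# Toy illustration: three datasets, each arguably harmless on its own.
broker_a = [   # ties shopping behaviour to an IP address
    {"ip": "203.0.113.7", "purchases": ["test kit", "vitamins"]},
    {"ip": "198.51.100.2", "purchases": ["dog food"]},
]
broker_b = [   # ties an IP address to a rough location and device
    {"ip": "203.0.113.7", "city": "Newark", "device": "iPhone 11"},
]
public_record = [  # e.g. a leaked warranty-registration list
    {"name": "Jane Doe", "city": "Newark", "device": "iPhone 11"},
]

def link(left, right, keys):
    """Join two datasets on shared quasi-identifier fields."""
    index = {tuple(row[k] for k in keys): row for row in right}
    return [
        {**row, **index[tuple(row[k] for k in keys)]}
        for row in left
        if tuple(row[k] for k in keys) in index
    ]

merged = link(broker_a, broker_b, ["ip"])              # behaviour + location
profile = link(merged, public_record, ["city", "device"])
# 'profile' now ties a named person to a purchase history
# that no single dataset contained.
```

Each individual leak supplies only a partial key; it is the join, accumulated across leaks over time, that does the damage.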
Google recently invented a third way to track who you are and what you view on the web.
Every install of Chrome since version 54 has generated a unique ID. Depending upon which settings you configure, the unique ID may be longer or shorter.
So every time you visit a Google web page or use a third-party site which uses some Google resource, this ID is sent to Google and can be used to track which website or individual page you are viewing. As Google’s services such as scripts, captchas and fonts are used extensively on the most popular websites, it’s likely that Google tracks most web pages you visit.
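A hedged sketch of why a stable per-install identifier matters: if such an ID accompanies every request to a provider’s widely embedded resources (fonts, scripts, captchas), the provider’s own server logs suffice to reconstruct a browsing trail, no cookies required. The log entries and ID values below are entirely invented for illustration.

```python
from collections import defaultdict

# Invented server-side log: (per-install ID, Referer header observed
# when a page loaded a third-party font/script/captcha).
request_log = [
    ("a1b2c3", "https://news.example/politics"),
    ("ffee99", "https://shop.example/checkout"),
    ("a1b2c3", "https://clinic.example/appointments"),
    ("a1b2c3", "https://forum.example/thread/42"),
]

# Group requests by the stable ID to rebuild each install's trail.
trails = defaultdict(list)
for install_id, referer in request_log:
    trails[install_id].append(referer)

# One install's browsing history, assembled purely from the stable ID:
print(trails["a1b2c3"])
```

The design point is that persistence is the whole problem: a cookie can be cleared, but an identifier baked into the browser install survives until the browser is reinstalled.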
In the last year I have seen more and more researchers like danah boyd suggesting that digital literacies are not enough. Given that some on the Internet have weaponized these tools, I believe she is right. Moving beyond digital literacies means thinking about the epistemology behind digital literacies and helping to “build the capacity to truly hear and embrace someone else’s perspective and teaching people to understand another’s view while also holding their view firm” (boyd, March 9, 2018). We can still rely on social media for our news, but we owe it to ourselves to do better in further developing digital literacies, knowing that just because we have discussions through screens, we should not be so narcissistic as to believe that we MUST be right or that the other person is simply an idiot.
I’d argue, as I did recently in this talk, that what Young and boyd are talking about here is actually a central tenet of digital literacies.
So said Aldous Huxley. Recently, I discovered an episode of the podcast The Science of Success in which Dan Carlin was interviewed. Now Dan is the host of one of my favourite podcasts, Hardcore History, as well as one he’s recently discontinued called Common Sense.
The reason the latter is on ‘indefinite hiatus’ was discussed on The Science of Success podcast. Dan feels that, after 30 years as a journalist, if he can’t get a grip on the current information landscape, then who can? It’s shaken him up a little.
One of the quotations he just gently lobbed into the conversation was from John Stuart Mill, who at one time or another was accused by someone of being ‘inconsistent’ in his views. Mill replied:
When the facts change, I change my mind. What do you do, sir?
John Stuart Mill
Now whether or not Mill said those exact words, the sentiment nevertheless stands. I reckon human beings have always made up their minds first and then chosen ‘facts’ to support their opinions. These days, I just think that it’s easier than ever to find ‘news’ outlets and people sharing social media posts to support your worldview. It’s as simple as that.
Last week I watched a stand-up comedy routine by Kevin Bridges on BBC iPlayer as part of his 2018 tour. As a Glaswegian, he made the (hilarious) analogy of social media as being like going into a pub.
(As an aside, this is interesting, as a decade ago people would often use the analogy of using social media as being like going to a café. The idea was that you could overhear, and perhaps join in with, interesting conversations. No-one uses that analogy any more.)
Bridges pointed out that if you entered a pub, sat down for a quiet pint, and the person next to you was trying to flog you Herbalife products, constantly talking about how #blessed they felt, or talking ambiguously for the sake of attention, you’d probably find another pub.
He was doing it for laughs, but I think he was also making a serious point. Online, we tolerate people ranting on and generally being obnoxious in ways we never would offline.
The underlying problem of course is that any platform that takes some segment of the real world and brings it into software will also bring in all that segment’s problems. Amazon took products and so it has to deal with bad and fake products (whereas one might say that Facebook took people, and so has bad and fake people).
I met Clay Shirky at an event last month, which kind of blew my mind given that it was me speaking at it rather than him. After introducing myself, we spoke for a few minutes about everything from his choice of laptop to what he’s been working on recently. Curiously, he’s not writing a book at the moment. After a couple of very well-received books (Here Comes Everybody and Cognitive Surplus) Shirky has actually only published a slightly obscure book about Chinese smartphone manufacturing since 2010.
While I didn’t have time to dig into things there and then, and it would have been a bit presumptuous of me to do so, it feels to me like Shirky may have ‘walked back’ some of his pre-2010 thoughts. This doesn’t surprise me at all, given that many of the rest of us have, too. For example, in 2014 he published a Medium article explaining why he banned his students from using laptops in lectures. Such blog posts and news articles are common these days, but it felt like his was one of the first.
The last decade from 2010 to 2019, which Audrey Watters has done a great job of eviscerating, was, shall we say, somewhat problematic. The good news is that we connected 4.5 billion people to the internet. The bad news is that we didn’t really harness that for much good. So we went from people sharing pictures of cats, to people sharing pictures of cats and destroying western democracy.
Other than the ‘bad and fake people’ problem cited by Ben Evans above, another big problem was the rise of surveillance capitalism. In a similar way to climate change, this has been repackaged as a series of individual failures on the part of end users. But, as Lindsey Barrett explains for Fast Company, it’s not really our fault at all:
In some ways, the tendency to blame individuals simply reflects the mistakes of our existing privacy laws, which are built on a vision of privacy choices that generally considers the use of technology to be a purely rational decision, unconstrained by practical limitations such as the circumstances of the user or human fallibility. These laws are guided by the idea that providing people with information about data collection practices in a boilerplate policy statement is a sufficient safeguard. If people don’t like the practices described, they don’t have to use the service.
The problem is that we have monopolistic practices in the digital world. Fast Company also reports that the four most downloaded apps of the 2010s were all owned by Facebook.
I don’t actually think people really understand that their data from WhatsApp and Instagram is being hoovered up by Facebook. Nor do I think they understand what Facebook then does with that data. I tried to lift the veil on this a little at the event where I met Clay Shirky. I know at least one person who immediately deleted their Facebook account as a result of it. But I suspect everyone else will just keep on keeping on. And yes, I have been banging my drum about this for quite a while now. I’ll continue to do so.
The truth is, and this is something I’ll be focusing on in upcoming workshops I’m running on digital literacies, that to be an ‘informed citizen’ these days means reading things like the EFF’s report into the current state of corporate surveillance. It means deleting accounts as a result. It means slowing down, taking time, and reading stuff before sharing it on platforms that you know care for the many, not the few. It means actually caring about this stuff.
All of this might just look and feel like a series of preferences. I prefer decentralised social networks and you prefer Facebook. Or I like to use Signal and you like WhatsApp. But it’s more than that. It’s a whole lot more than that. Democracy as we know it is at stake here.
As Prof. Scott Galloway has discussed from an American point of view, we’re living in times of increasing inequality. The tools we’re using exacerbate that inequality. All of a sudden you have to be amazing at your job to even be able to have a decent quality of life:
The biggest losers of the decade are the unremarkables. Our society used to give remarkable opportunities to unremarkable kids and young adults. Some of the crowding out of unremarkable white males, including myself, is a good thing. More women are going to college, and remarkable kids from low-income neighborhoods get opportunities. But a middle-class kid who doesn’t learn to code Python or speak Mandarin can soon find she is not “tracking” and can’t catch up.
Prof. Scott Galloway
I shared an article last Friday, about how you shouldn’t have to be good at your job. The whole point of society is that we look after one another, not compete with one another to see which of us can ‘extract the most value’ and pile up more money than he or she can ever hope to spend. Yes, it would be nice if everyone was awesome at all they did, but the optimisation of everything isn’t the point of human existence.
So once we come down the stack from social networks, to surveillance capitalism, to economics and markets eating the world, we find the real problem behind all of this: decision-making. We’ve sacrificed stability for speed, and seem to be increasingly happy with dictator-like behaviour in both our public institutions and corporate lives.
Dictatorships can be more efficient than democracies because they don’t have to get many people on board to make a decision. Democracies, by contrast, are more robust, but at the cost of efficiency.
A selectorate, according to Pearson, “represents the number of people who have influence in a government, and thus the degree to which power is distributed”. Aside from the fact that dictatorships tend to be corrupt and oppressive, they’re just not a good idea in terms of decision-making:
Said another way, much of what appears efficient in the short term may not be efficient but hiding risk somewhere, creating the potential for a blow-up. A large selectorate tends to appear to be working less efficiently in the short term, but can be more robust in the long term, making it more efficient in the long term as well. It is a story of the Tortoise and the Hare: slow and steady may lose the first leg, but win the race.
I don’t think we should be optimising human beings for their role in markets. I think we should be optimising markets (if in fact we need them) for their role in human flourishing. The best way of doing that is to ensure that we distribute power and decision-making well.
So it might seem that my continual ragging on Facebook (in particular) is a small thing in the bigger picture. But it’s actually part of the whole deal. When we have super-powerful individuals whose companies have the ability to surveil us at will; who then share that data to corrupt regimes; who in turn reinforce the worst parts of the status quo; then I think we have a problem.
This year I’ve made a vow to be more radical. To speak my mind even more, and truth to power, especially when it’s inconvenient. I hope you’ll join me ✊
Happy 25th year, blogging. You’ve grown up, but social media is still having a brawl (The Guardian) — “The furore over social media and its impact on democracy has obscured the fact that the blogosphere not only continues to exist, but also to fulfil many of the functions of a functioning public sphere. And it’s massive. One source, for example, estimates that more than 409 million people view more than 20bn blog pages each month and that users post 70m new posts and 77m new comments each month. Another source claims that of the 1.7 bn websites in the world, about 500m are blogs. And WordPress.com alone hosts blogs in 120 languages, 71% of them in English.”
Emmanuel Macron Wants to Scan Your Face (The Washington Post) — “President Emmanuel Macron’s administration is set to be the first in Europe to use facial recognition when providing citizens with a secure digital identity for accessing more than 500 public services online… The roll-out is tainted by opposition from France’s data regulator, which argues the electronic ID breaches European Union rules on consent – one of the building blocks of the bloc’s General Data Protection Regulation laws – by forcing everyone signing up to the service to use the facial recognition, whether they like it or not.”
This is your phone on feminism (The Conversationalist) — “Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true. This cognitive dissonance confuses and paralyses us. And look around. Everyone has a smartphone. So it’s probably not so bad, and anyway, that’s just how things work. Right?”
Google’s auto-delete tools are practically worthless for privacy (Fast Company) — “In reality, these auto-delete tools accomplish little for users, even as they generate positive PR for Google. Experts say that by the time three months rolls around, Google has already extracted nearly all the potential value from users’ data, and from an advertising standpoint, data becomes practically worthless when it’s more than a few months old.”
Audrey Watters (Uses This) — “For me, the ideal set-up is much less about the hardware or software I am using. It’s about the ideas that I’m thinking through and whether or not I can sort them out and shape them up in ways that make for a good piece of writing. Ideally, that does require some comfort — a space for sustained concentration. (I know better than to require an ideal set up in order to write. I’d never get anything done.)”
Computer Files Are Going Extinct (OneZero) — “Files are skeuomorphic. That’s a fancy word that just means they’re a digital concept that mirrors a physical item. A Word document, for example, is like a piece of paper, sitting on your desk(top). A JPEG is like a painting, and so on. They each have a little icon that looks like the physical thing they represent. A pile of paper, a picture frame, a manila folder. It’s kind of charming really.”
Inside Mozilla’s 18-month effort to market without Facebook (Digiday) — “The decision to focus on data privacy in marketing the Mozilla brand came from research conducted by the company four years ago into the rise of consumers who make values-based decisions on not only what they purchase but where they spend their time.”
Core human values not eyeballs (Cubic Garden) — “There’s so much more to do, but the aims are high and important for not just the BBC, but all public service entities around the world. Measuring the impact and quality on people’s lives beyond the shallow meaningless metrics for public service is critical.”
I head off on holiday tomorrow! Before I go, check out these highlights from this week’s reading and research:
“Things that were considered worthless are redeemed” (Ira David Socol) — “Empathy plus Making must be what education right now is about. We are at both a point of learning crisis and a point of moral crisis. We see today what happens — in the US, in the UK, in Brasil — when empathy is lost — and it is a frightening sight. We see today what happens — in graduates from our schools who do not know how to navigate their world — when the learning in our schools is irrelevant in content and/or delivery.”
Voice assistants are going to make our work lives better—and noisier (Quartz) — “Active noise cancellation and AI-powered sound settings could help to tackle these issues head on (or ear on). As the AI in noise cancellation headphones becomes better and better, we’ll potentially be able to enhance additional layers of desirable audio, while blocking out sounds that distract. Audio will adapt contextually, and we’ll be empowered to fully manage and control our soundscapes.”
We Aren’t Here to Learn What We Already Know (LA Review of Books) — “A good question, in short, is an honest question, one that, like good theory, dances on the edge of what is knowable, what it is possible to speculate on, what is available to our immediate grasp of what we are reading, or what it is possible to say. A good question, that is, like good theory, might be quite unlovely to read, particularly in its earliest iterations. And sometimes it fails or has to be abandoned.”
The runner who makes elaborate artwork with his feet and a map (The Guardian) — “The tracking process is high-tech, but the whole thing starts with just a pen and paper. “When I was a kid everyone thought I’d be an artist when I grew up – I was always drawing things,” he said. He was a particular fan of the Etch-a-Sketch, which has something in common with his current work: both require creating images in an unbroken line.”
What I Do When it Feels Like My Work Isn’t Good Enough (James Clear) — “Release the desire to define yourself as good or bad. Release the attachment to any individual outcome. If you haven’t reached a particular point yet, there is no need to judge yourself because of it. You can’t make time go faster and you can’t change the number of repetitions you have put in before today. The only thing you can control is the next repetition.”
Online porn and our kids: It’s time for an uncomfortable conversation (The Irish Times) — “Now when we talk about sex, we need to talk about porn, respect, consent, sexuality, body image and boundaries. We don’t need to terrify them into believing watching porn will ruin their lives, destroy their relationships and warp their libidos, maybe, but we do need to talk about it.”
Drones will fly for days with new photovoltaic engine (Tech Xplore) — “[T]his finding builds on work… published in 2011, which found that the key to boosting solar cell efficiency was not by absorbing more photons (light) but emitting them. By adding a highly reflective mirror on the back of a photovoltaic cell, they broke efficiency records at the time and have continued to do so with subsequent research.”
Twitter won’t ruin the world. But constraining democracy would (The Guardian) — “The problems of Twitter mobs and fake news are real. As are the issues raised by populism and anti-migrant hostility. But neither in technology nor in society will we solve any problem by beginning with the thought: “Oh no, we put power into the hands of people.” Retweeting won’t ruin the world. Constraining democracy may well do.”
The Encryption Debate Is Over – Dead At The Hands Of Facebook (Forbes) — “Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.”
Living in surplus (Seth Godin) — “When you live in surplus, you can choose to produce because of generosity and wonder, not because you’re drowning.”
Image from Dilbert. Shared to make the (hopefully self-evident) counterpoint that not everything of value has an economic value. There’s more to life than accumulation.