Tag: Facebook (page 1 of 4)

Slowly-boiling frogs in Facebook’s surveillance panopticon

I can’t think of a worse company than Facebook to be creating an IRL surveillance panopticon. But, I have to say, it’s entirely on-brand.

On Wednesday, the company announced a plan to map the entire world, beyond street view. The company is launching a set of glasses that contains cameras, microphones, and other sensors to build a constantly updating map of the world in an effort called Project Aria. That map will include the inside of buildings and homes and all the objects inside of them. It’s Google Street View, but for your entire life.

Dave Gershgorn, Facebook’s Project Aria Is Google Maps — For Your Entire Life (OneZero)

We’re like slowly-boiling frogs with this stuff. Everything seems fine. Until it’s not.

The company insists any faces and license plates captured by Aria glasses wearers will be anonymized. But that won’t protect the data from Facebook itself. Ostensibly, Facebook will possess a live map of your home, pictures of your loved ones, pictures of any sensitive documents or communications you might be looking at with the glasses on, passwords — literally your entire life. The employees and contractors who have agreed to wear the research glasses are already trusting the company with this data.

Dave Gershgorn, Facebook’s Project Aria Is Google Maps — For Your Entire Life (OneZero)

With Amazon cosying up to police departments in the US with its Ring cameras, we really are hurtling towards surveillance states in the West.

Who has access to see the data from this live 3D map, and what, precisely, constitutes private versus public data? And who makes that determination? Faces might be blurred, but people can be easily identified without their faces. What happens if law enforcement wants to subpoena a day’s worth of Facebook’s LiveMap? Might Facebook ever build a feature to try to, say, automatically detect domestic violence, and if so, what would it do if it detected it?

Dave Gershgorn, Facebook’s Project Aria Is Google Maps — For Your Entire Life (OneZero)

Judges already requisition Fitbit data to solve crimes. No matter what Facebook say their intentions are for Project Aria, this data will end up in the hands of law enforcement, too.



To pursue the unattainable is insanity, yet the thoughtless can never refrain from doing so

Two people talking to one another

💬 The Surprising Power of Simply Asking Coworkers How They’re Doing

🤔 Facebook Maybe Not Singlehandedly Undermining Democracy With Political Content, Says Facebook

🐑 What is the Zollman effect?

👂 Unnervingly good entry in the “what languages sound like to non-speakers” genre

⚔️ Could a Peasant defeat a Knight in Battle?


Quotation-as-title from Marcus Aurelius. Image from top-linked post.

You can’t tech your way out of problems the tech didn’t create

The Electronic Frontier Foundation (EFF) is a US-based non-profit that exists to defend civil liberties in the digital world. They’ve been around for 30 years, and I support them financially on a monthly basis.

In this article by Corynne McSherry, EFF’s Legal Director, she outlines the futility of attempts by ‘Big Social’ to do content moderation at scale:

[C]ontent moderation is a fundamentally broken system. It is inconsistent and confusing, and as layer upon layer of policy is added to a system that employs both human moderators and automated technologies, it is increasingly error-prone. Even well-meaning efforts to control misinformation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

Ultimately, these monolithic social networks have a problem around false positives. It’s in their interests to be over-zealous, as they’re increasingly under the watchful eye of regulators and governments.

We have been watching closely as Facebook, YouTube, and Twitter, while disclaiming any interest in being “the arbiters of truth,” have all adjusted their policies over the past several months to try to arbitrate lies—or at least flag them. And we’re worried, especially when we look abroad. Already this year, an attempt by Facebook to counter election misinformation targeting Tunisia, Togo, Côte d’Ivoire, and seven other African countries resulted in the accidental removal of accounts belonging to dozens of Tunisian journalists and activists, some of whom had used the platform during the country’s 2011 revolution. While some of those users’ accounts were restored, others—mostly belonging to artists—were not.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

McSherry’s analysis is spot-on: it’s the algorithms that are a problem here. Social networks employ these algorithms because of their size and structure, and because of the cost of human-based content moderation. After all, these are companies with shareholders.

Algorithms used by Facebook’s Newsfeed or Twitter’s timeline make decisions about which news items, ads, and user-generated content to promote and which to hide. That kind of curation can play an amplifying role for some types of incendiary content, despite the efforts of platforms like Facebook to tweak their algorithms to “disincentivize” or “downrank” it. Features designed to help people find content they’ll like can too easily funnel them into a rabbit hole of disinformation.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

She includes useful questions for social networks to answer about content moderation:

  • Is the approach narrowly tailored or a categorical ban?
  • Does it empower users?
  • Is it transparent?
  • Is the policy consistent with human rights principles?

But, ultimately…

You can’t tech your way out of problems the tech didn’t create. And even where content moderation has a role to play, history tells us to be wary. Content moderation at scale is impossible to do perfectly, and nearly impossible to do well, even under the most transparent, sensible, and fair conditions

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

I’m so pleased that I don’t use Facebook products, and that I only use Twitter these days as a place to publish links to my writing.

Instead, I’m much happier on the Fediverse, a place where if you don’t like the content moderation approach of the instance you’re on, you can take your digital knapsack and decide to call another place home. You can find me here (for now!).

Nothing will ever be attempted, if all possible objections must be first overcome

Facebook Accused of Watching Instagram Users Through Cameras (The Verge)

In the complaint filed Thursday in federal court in San Francisco, New Jersey Instagram user Brittany Conditi contends the app’s use of the camera is intentional and done for the purpose of collecting “lucrative and valuable data on its users that it would not otherwise have access to.”


Facebook Has Been a Disaster for the World (The New York Times)

Facebook has been incredibly lucrative for its founder, Mark Zuckerberg, who ranks among the wealthiest men in the world. But it’s been a disaster for the world itself, a powerful vector for paranoia, propaganda and conspiracy-theorizing as well as authoritarian crackdowns and vicious attacks on the free press. Wherever it goes, chaos and destabilization follow.


Kim Kardashian West joins Facebook and Instagram boycott (BBC News)

“I can’t sit by and stay silent while these platforms continue to allow the spreading of hate, propaganda and misinformation – created by groups to sow division and split America apart,” Kardashian West said.


Quotation-as-title from Dr Johnson.

One nation under Zuck

This image, from Grayson Perry, is incredible. As he points out in the accompanying article, he’s chosen the US due to an upcoming series of his, but geographically this could be anywhere, as culture wars these days happen mainly online.

I’ve added the emphasis in the quotation below:

When we experience a background hum of unfocused emotion, be it anxiety, sadness, fear, anger, we unconsciously look for something to attach it to. Social media is brilliant at supplying us with issues to which attach our free-floating feelings. We often look for nice, preformed boxes into which we can dump our inchoate feelings, we crave certainty. Social media constantly offers up neat solutions for our messy feelings, whether it be God, guns, Greta or gender identity.

In a battle-torn landscape governed by zeroes and ones, nuance, compromise and empathy are the first casualties. If I were to sum up the online culture war in one word it would be “diaphobia”, a term coined by the psychiatrist RD Laing meaning “fear of being influenced by other people”, the opposite of dialogue. Our ever-present underlying historical and enculturated emotions will nudge us to cherrypick and polish the nuggets of information that support a stance that may have been in our bodies from childhood. Once we have taken sides, the algorithms will supply us with a stream of content to entrench and confirm our beliefs.

Grayson Perry, Be it on God, guns or Greta, social media offers neat solutions for our messy feelings (The Guardian)

Using WhatsApp is a (poor) choice that you make

People often ask me about my stance on Facebook products. They can understand that I don’t use Facebook itself, but what about Instagram? And surely I use WhatsApp? Nope.

Given that I don’t usually have a single place to point people who want to read about the problems with WhatsApp, I thought I’d create one.


WhatsApp is a messaging app that was acquired by Facebook for the eye-watering amount of $19 billion in 2014. Interestingly, a BuzzFeed News article from 2018 cites confidential documents from the time leading up to the acquisition, obtained by the UK’s Department for Culture, Media, and Sport. They show the threat WhatsApp posed to Facebook at the time.

US mobile messenger apps (iPhone) graph from August 2012 to March 2013
A document obtained by the DCMS as part of their investigations

As you can see from the above chart, Facebook executives were shown in 2013 that WhatsApp (8.6% reach) was growing rapidly and posed a huge threat to Facebook Messenger (13.7% reach).

So Facebook bought WhatsApp. But what did they buy? If, as we’re led to believe, WhatsApp is ‘end-to-end encrypted’ then Facebook don’t have access to the messages of users. So what’s so valuable?


Brian Acton, one of the founders of WhatsApp (and a man who got very rich through its sale) has gone on record saying that he feels like he sold his users’ privacy to Facebook.

Facebook, Acton says, had decided to pursue two ways of making money from WhatsApp. First, by showing targeted ads in WhatsApp’s new Status feature, which Acton felt broke a social compact with its users. “Targeted advertising is what makes me unhappy,” he says. His motto at WhatsApp had been “No ads, no games, no gimmicks”—a direct contrast with a parent company that derived 98% of its revenue from advertising. Another motto had been “Take the time to get it right,” a stark contrast to “Move fast and break things.”

Facebook also wanted to sell businesses tools to chat with WhatsApp users. Once businesses were on board, Facebook hoped to sell them analytics tools, too. The challenge was WhatsApp’s watertight end-to-end encryption, which stopped both WhatsApp and Facebook from reading messages. While Facebook didn’t plan to break the encryption, Acton says, its managers did question and “probe” ways to offer businesses analytical insights on WhatsApp users in an encrypted environment.

Parmy Olson (Forbes)

The other way Facebook wanted to make money was to sell tools to businesses allowing them to chat with WhatsApp users. These tools would also give “analytical insights” on how users interacted with WhatsApp.

Facebook was allowed to acquire WhatsApp (and Instagram) despite fears around monopolistic practices. This was because they made a promise not to combine data from various platforms. But, guess what happened next?

In 2014, Facebook bought WhatsApp for $19b, and promised users that it wouldn’t harvest their data and mix it with the surveillance troves it got from Facebook and Instagram. It lied. Years later, Facebook mixes data from all of its properties, mining it for data that ultimately helps advertisers, political campaigns and fraudsters find prospects for whatever they’re peddling. Today, Facebook is in the process of acquiring Giphy, and while Giphy currently doesn’t track users when they embed GIFs in messages, Facebook could start doing that anytime.

Cory Doctorow (EFF)

So Facebook is harvesting metadata from its various platforms, tracking people around the web (even if they don’t have an account), and buying up data about offline activities.

All of this creates a profile. So yes, because of end-to-end encryption, Facebook might not know the exact details of your messages. But they know that you’ve started messaging a particular user account around midnight every night. They know that you’ve started interacting with a bunch of stuff around anxiety. They know how the people you message most tend to vote.


Do I have to connect the dots here? This is a company that sells targeted adverts, the kind of adverts that can influence the outcome of elections. Of course, Facebook will never admit that its platforms are the problem; it’s always the responsibility of the user to be ‘vigilant’.

Man reading a newspaper
A WhatsApp advert aimed at ‘fighting false information’ (via The Guardian)

So you might think that you’re just messaging your friend or colleague on a platform that ‘everyone’ uses. But your decision to go with the flow has consequences. It has implications for democracy. It has implications for the creation of a de facto monopoly over our digital information. And it has implications for the dissemination of false information.

The features that would later allow WhatsApp to become a conduit for conspiracy theory and political conflict were ones never integral to SMS, and have more in common with email: the creation of groups and the ability to forward messages. The ability to forward messages from one group to another – recently limited in response to Covid-19-related misinformation – makes for a potent informational weapon. Groups were initially limited in size to 100 people, but this was later increased to 256. That’s small enough to feel exclusive, but if 256 people forward a message on to another 256 people, 65,536 will have received it.

[…]

A communication medium that connects groups of up to 256 people, without any public visibility, operating via the phones in their pockets, is by its very nature, well-suited to supporting secrecy. Obviously not every group chat counts as a “conspiracy”. But it makes the question of how society coheres, who is associated with whom, into a matter of speculation – something that involves a trace of conspiracy theory. In that sense, WhatsApp is not just a channel for the circulation of conspiracy theories, but offers content for them as well. The medium is the message.

William Davies (The Guardian)
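To make the forwarding arithmetic in that passage concrete, here’s a minimal sketch. It assumes every group is full and every recipient is unique, so it’s a best-case upper bound on reach rather than a claim about how messages actually spread:

```python
# Illustrative sketch: potential reach of a message forwarded between
# full WhatsApp groups, assuming no overlap between groups.
GROUP_SIZE = 256

def potential_reach(hops: int) -> int:
    """People reached after `hops` rounds of forwarding to full groups."""
    return GROUP_SIZE ** hops

for hops in range(1, 4):
    print(f"{hops} hop(s): up to {potential_reach(hops):,} people")
# 1 hop(s): up to 256 people
# 2 hop(s): up to 65,536 people
# 3 hop(s): up to 16,777,216 people
```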

I cannot control the decisions others make, nor have I forced my opinions on my two children, who (despite my warnings) both use WhatsApp to message their friends. But, for me, the risk to myself and society of using WhatsApp is not one I’m happy to take.

Just don’t say I didn’t warn you.


Header image by Rachit Tank

Everyone has a mob self and an individual self, in varying proportions

Digital mediation, decentralisation, and context collapse

Is social media ‘real life’? A recent Op-Ed in The New York Times certainly thinks so:

An argument about Twitter — or any part of the internet — as “real life” is frequently an argument about what voices “matter” in our national conversation. Not just which arguments are in the bounds of acceptable public discourse, but also which ideas are considered as legitimate for mass adoption. It is a conversation about the politics of the possible. That conversation has many gatekeepers — politicians, the press, institutions of all kinds. And frequently they lack creativity.

Charlie Warzel (The New York Times)

I’ve certainly been a proponent over the years for the view that digital interactions are no less ‘real’ than analogue ones. Yes, you’re reading a book when you do so on an e-reader. That’s right, you’re meeting someone when doing so over video conference. And correct, engaging in a Twitter thread counts as a conversation.

Now that everyone’s interacting via digital devices during the pandemic, things that some parts of the population refused to count as ‘normal’ have at least been normalised. It’s been great to see so much IRL mobilisation due to protests that started online, for example with the #BlackLivesMatter hashtag.


With this very welcome normalisation, however, I’m not sure there’s a general understanding about how digital spaces mediate our interactions. Offline, our conversations are mediated by the context in which we find ourselves: we speak differently at home, on the street, and in the pub. Meanwhile, online, we experience context collapse as we take our smartphones everywhere.

We forget that we interact in algorithmically-curated environments that favour certain kinds of interactions over others. Sometimes these algorithms can be fairly blunt instruments, for example when ‘Dominic Cummings’ didn’t trend on Twitter despite him being all over the news. Why? Because of anti-porn filters.

Other times, things are quite subtle. I’ve spoken on numerous occasions about why I don’t use Facebook products. Part of the reason for this is that I don’t trust their privacy practices or algorithms. For example, a recent study showed that Instagram (which, of course, is owned by Facebook) actively encourages users to show some skin.

While Instagram claims that the newsfeed is organized according to what a given user “cares about most”, the company’s patent explains that it could actually be ranked according to what it thinks all users care about. Whether or not users see the pictures posted by the accounts they follow depends not only on their past behavior, but also on what Instagram believes is most engaging for other users of the platform.

Judith Duportail, Nicolas Kayser-Bril, Kira Schacht and Édouard Richard (Algorithm Watch)
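As a purely illustrative sketch of the distinction the researchers draw (not Instagram’s actual code — the field names and scores below are invented), ranking by what a platform predicts all users will engage with produces a different ordering than ranking by an individual’s own interests:

```python
# Illustrative only: contrast between ranking posts by an individual's
# predicted interest and ranking by predicted engagement across all users.
posts = [
    {"id": "landscape", "personal_interest": 0.9, "global_engagement": 0.4},
    {"id": "selfie",    "personal_interest": 0.3, "global_engagement": 0.8},
]

by_personal = sorted(posts, key=lambda p: p["personal_interest"], reverse=True)
by_global   = sorted(posts, key=lambda p: p["global_engagement"], reverse=True)

print([p["id"] for p in by_personal])  # ['landscape', 'selfie']
print([p["id"] for p in by_global])    # ['selfie', 'landscape']
```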

I think I must have linked back to this post of mine from six years ago more than any other one I’ve written: Curate or Be Curated: Why Our Information Environment is Crucial to a Flourishing Democracy, Civil Society. To quote myself:

The problem with social networks as news platforms is that they are not neutral spaces. Perhaps the easiest way to get quickly to the nub of the issue is to ask how they are funded. The answer is clear and unequivocal: through advertising. The two biggest social networks, Twitter and Facebook (which also owns Instagram and WhatsApp), are effectively “services with shareholders.” Your interactions with other people, with media, and with adverts, are what provide shareholder value. Lest we forget, CEOs of publicly-listed companies have a legal obligation to provide shareholder value. In an advertising-fueled online world this means continually increasing the number of eyeballs looking at (and fingers clicking on) content. 

Doug Belshaw (Connected Learning Alliance)

Herein lies the difficulty. We can’t rely on platforms backed by venture capital as they end up incentivised to do the wrong kinds of things. Equally, no-one is going to want to use a platform provided by a government.

This is why I really do still believe that decentralisation is the answer here. Local moderation by people you know and/or trust, which can happen at an individual or instance level. Algorithmic curation for the benefit of users, which can be turned on or off by the user. Scaling both vertically and horizontally.

At the moment it’s not the tech that’s holding people back from such decentralisation but rather two things. The first is the mental model of decentralisation. I think that’s easy to overcome, as back in 2007 people didn’t really ‘get’ Twitter, etc. The second one is much more difficult, and is around the dopamine hit you get from posting something on social media and becoming a minor celebrity. Although it’s possible to replicate this in decentralised environments, I’m not sure we’d necessarily want to?


Slightly modified quotation-as-title by D.H. Lawrence. Header image by Prateek Katyal

Saturday soundings

Black Lives Matter. The money from this month’s kind supporters of Thought Shrapnel has gone directly to the 70+ community bail funds, mutual aid funds, and racial justice organizers listed here.


IBM abandons ‘biased’ facial recognition tech

A 2019 study conducted by the Massachusetts Institute of Technology found that none of the facial recognition tools from Microsoft, Amazon and IBM were 100% accurate when it came to recognising men and women with dark skin.

And a study from the US National Institute of Standards and Technology suggested facial recognition algorithms were far less accurate at identifying African-American and Asian faces compared with Caucasian ones.

Amazon, whose Rekognition software is used by police departments in the US, is one of the biggest players in the field, but there are also a host of smaller players such as Facewatch, which operates in the UK. Clearview AI, which has been told to stop using images from Facebook, Twitter and YouTube, also sells its software to US police forces.

Maria Axente, AI ethics expert at consultancy firm PwC, said facial recognition had demonstrated “significant ethical risks, mainly in enhancing existing bias and discrimination”.

BBC News

Like many newer technologies, facial recognition is already a battleground for people of colour. This is a welcome, if potentially cynical, move by IBM who, let’s not forget, literally provided technology to the Nazis.


How Wikipedia Became a Battleground for Racial Justice

If there is one reason to be optimistic about Wikipedia’s coverage of racial justice, it’s this: The project is by nature open-ended and, well, editable. The spike in volunteer Wikipedia contributions stemming from the George Floyd protests is certainly not neutral, at least to the extent that word means being passive in this moment. Still, Koerner cautioned that any long-term change of focus to knowledge equity was unlikely to be easy for the Wikipedia editing community. “I hope that instead of struggling against it they instead lean into their discomfort,” she said. “When we’re uncomfortable, change happens.”

Stephen Harrison (Slate)

This is a fascinating glimpse into Wikipedia and how the commitment to ‘neutrality’ affects coverage of different types of people and events.


Deeds, not words

Recent events have revealed, again, that the systems we inhabit and use as educators are perfectly designed to get the results they get. The stated desire is there to change the systems we use. Let’s be able to look back to this point in two years and say that we have made a genuine difference.

Nick Dennis

Some great questions here from Nick, some of which are specific to education, whereas others are applicable everywhere.


Sign with hole cut out saying 'NO JUSTICE NO PEACE'

Audio Engineers Built a Shield to Deflect Police Sound Cannons

Since the protests began, demonstrators in multiple cities have reported spotting LRADs, or Long-Range Acoustic Devices, sonic weapons that blast sound waves at crowds over large distances and can cause permanent hearing loss. In response, two audio engineers from New York City have designed and built a shield which they say can block and even partially reflect these harmful sonic blasts back at the police.

Janus Rose (Vice)

For those not familiar with the increasing militarisation of police in the US, this is an interesting read.


CMA to look into Facebook’s purchase of gif search engine

The Competition and Markets Authority (CMA) is inviting comments about Facebook’s purchase of a company that currently provides gif search across many of the social network’s competitors, including Twitter and the messaging service Signal.

[…]

[F]or Facebook, the more compelling reason for the purchase may be the data that Giphy has about communication across the web. Since many services that integrate with the platform not only use it to find gifs, but also leave the original clip hosted on Giphy’s servers, the company receives information such as when a message is sent and received, the IP address of both parties, and details about the platforms they are using.

Alex Hern (The Guardian)

In my 2012 TEDx Talk I discussed the memetic power of gifs. Others might find this news surprising, but I don’t think that, even back then, I would have been surprised that gifs would be such a hot topic in 2020.

Also by Hern this week is an article on Twitter’s experiments around getting people to actually read things before they tweet/retweet them. What times we live in.


Human cycles: History as science

To Peter Turchin, who studies population dynamics at the University of Connecticut in Storrs, the appearance of three peaks of political instability at roughly 50-year intervals is not a coincidence. For the past 15 years, Turchin has been taking the mathematical techniques that once allowed him to track predator–prey cycles in forest ecosystems, and applying them to human history. He has analysed historical records on economic activity, demographic trends and outbursts of violence in the United States, and has come to the conclusion that a new wave of internal strife is already on its way. The peak should occur in about 2020, he says, and will probably be at least as high as the one in around 1970. “I hope it won’t be as bad as 1870,” he adds.

Laura Spinney (Nature)

I’m not sure about this at all, because if you go looking for examples to fit your theory, you’ll find them. Especially when your theory is as generic as this one. It seems like a kind of reverse fortune-telling?


Universal Basic Everything

Much of our economies in the west have been built on the idea of unique ideas, or inventions, which are then protected and monetised. It’s a centuries old way of looking at ideas, but today we also recognise that this method of creating and growing markets around IP protected products has created an unsustainable use of the world’s natural resources and generated too much carbon emission and waste.

Open source and creative commons moves us significantly in the right direction. From open sharing of ideas we can start to think of ideas, services, systems, products and activities which might be essential or basic for sustaining life within the ecological ceiling, whilst also re-inforcing social foundations.

Tessy Britton

I’m proud to be part of a co-op that focuses on openness of all forms. This article is a great introduction to anyone who wants a new way of looking at our post-COVID future.


World faces worst food crisis for at least 50 years, UN warns

Lockdowns are slowing harvests, while millions of seasonal labourers are unable to work. Food waste has reached damaging levels, with farmers forced to dump perishable produce as the result of supply chain problems, and in the meat industry plants have been forced to close in some countries.

Even before the lockdowns, the global food system was failing in many areas, according to the UN. The report pointed to conflict, natural disasters, the climate crisis, and the arrival of pests and plant and animal plagues as existing problems. East Africa, for instance, is facing the worst swarms of locusts for decades, while heavy rain is hampering relief efforts.

The additional impact of the coronavirus crisis and lockdowns, and the resulting recession, would compound the damage and tip millions into dire hunger, experts warned.

Fiona Harvey (The Guardian)

The knock-on effects of COVID-19 are going to be with us for a long time yet. And these second-order effects will themselves have effects which, with climate change also being in the mix, could lead to mass migrations and conflict by 2025.


Mice on Acid

What exactly a mouse sees when she’s tripping on DOI—whether the plexiglass walls of her cage begin to melt, or whether the wood chips begin to crawl around like caterpillars—is tied up in the private mysteries of what it’s like to be a mouse. We can’t ask her directly, and, even if we did, her answer probably wouldn’t be of much help.

Cody Kommers (Nautilus)

The bit about ‘ego dissolution’ in this article, which is ostensibly about how to get legal hallucinogens to market, is really interesting.


Header image by Dmitry Demidov

Saturday signalings

I’ve been head-down doing lots of work this week, and then it’s been Bank Holiday weekend, so my reading has been pretty much whatever my social media feeds have thrown up!

There are broadly three sections here, though: stuff about the way we think, about technology, and about ways of working. Enjoy!


How Clocks Changed Humanity Forever, Making Us Masters and Slaves of Time

The article with the above embedded video is from five years ago, but someone shared it on my Twitter timeline and it reminded me of something. When I taught my History students about the Industrial Revolution it blew their minds that different parts of the country could be, effectively, on different ‘timezones’ until the dawn of the railways.

It just goes to show how true it is that first we shape our tools, and then they shape us.


‘Allostatic Load’ is the Psychological Reason for Our Pandemic Brain Fog

“Uncertainty is one of the biggest elements that contributes to our experience of stress,” said Lynn Bufka, the senior director of Practice, Research, and Policy at the American Psychological Association. “Part of what we try to do to function in our society is to have some structure, some predictability. When we have those kinds of things, life feels more manageable, because you don’t have to put the energy into figuring those things out.”

Emily Baron Cadloff (VICE)

A short but useful article on why, despite having grand plans, it’s difficult to get anything done in our current situation. We can’t even plan holidays at the moment.


Most of the Mind Can’t Tell Fact from Fiction

The industrialized world is so full of human faces, like in ads, that we forget that it’s just ink, or pixels on a computer screen. Every time our ancestors saw something that looked like a human face, it probably was one. As a result, we didn’t evolve to distinguish reality from representation. The same perceptual machinery interprets both.

Jim Davies (Nautilus)

A useful reminder that our brain contains several systems, some of which are paleolithic.


Wright Flier and Bell Rocket Belt

Not even wrong: ways to predict tech

The Wright Flier could only go 200 meters, and the Rocket Belt could only fly for 21 seconds. But the Flier was a breakthrough of principle. There was no reason why it couldn’t get much better, very quickly, and Blériot flew across the English Channel just six years later. There was a very clear and obvious path to make it better. Conversely, the Rocket Belt flew for 21 seconds because it used almost a litre of fuel per second – to fly like this for half an hour you’d need almost two tonnes of fuel, and you can’t carry that on your back. There was no roadmap to make it better without changing the laws of physics. We don’t just know that now – we knew it in 1962.

Benedict Evans

A useful post about figuring out whether something will happen or be successful. The question is “what would have to change?”
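A rough back-of-the-envelope check of Evans’ fuel figures, assuming (my assumption, not his) a burn rate of about a litre per second and a fuel density in the region of one kilogram per litre:

```python
# Back-of-the-envelope check of the Rocket Belt fuel claim.
# Assumptions: ~1 litre of fuel per second, density ~1 kg per litre.
fuel_rate_l_per_s = 1.0
flight_time_s = 30 * 60          # half an hour
density_kg_per_l = 1.0

fuel_litres = fuel_rate_l_per_s * flight_time_s      # 1800 litres
fuel_tonnes = fuel_litres * density_kg_per_l / 1000  # ~1.8 tonnes

print(f"{fuel_litres:.0f} litres is roughly {fuel_tonnes:.1f} tonnes")
```

Which lands at “almost two tonnes”, as the quote says — far more than anyone can carry on their back.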


Grandmother ordered to delete Facebook photos under GDPR

The case went to court after the woman refused to delete photographs of her grandchildren which she had posted on social media. The mother of the children had asked several times for the pictures to be deleted.

The GDPR does not apply to the “purely personal” or “household” processing of data. However, that exemption did not apply because posting photographs on social media made them available to a wider audience, the ruling said.

“With Facebook, it cannot be ruled out that placed photos may be distributed and may end up in the hands of third parties,” it said.

The woman must remove the photos or pay a fine of €50 (£45) for every day that she fails to comply with the order, up to a maximum fine of €1,000.

BBC News

I think this is entirely reasonable, and I’m hoping we’ll see more of this until people stop thinking they can share the personally identifiable information of others whenever and however they like.


Developing new digital skills – is training always the answer?

Think ESKiMO:

– Environment (E) – are the reasons it’s not happening outside of the control of the people you identified in Step 1? Do they have the resources, the tools, the funding? Do their normal objectives mean that they have to prioritise other things? Does the prevailing organisational culture work against achieving the goals?

– Skills (S) – Are they aware of the tasks they need to do and enabled to do them?

– Knowledge (K) – is the knowledge they need available to them? It could either be information they have to carry around in their heads, or just be available in a place they know about.

– Motivation (Mo) – Do they have the will to carry it out?

The last three (S, K, Mo) work a little bit like the fire triangle from that online fire safety training you probably had to do this year. All three need to be present for new practice to happen and to be sustainable.

Chris Thomson (Jisc)

In this post, Chris Thomson, who I used to work with at Jisc, challenges the notion that training is about getting people to do what you want. Instead, this ESKiMO approach asks why they’re not already doing it.


xkcd: estimating time

Leave Scrum to Rugby, I Like Getting Stuff Done

Within Scrum, estimates have a primary purpose – to figure out how much work the team can accomplish in a given sprint. If I were to grant that Sprints were a good idea (which I obviously don’t believe) then the description of estimates in the official Scrum guide wouldn’t be a problem.

The problem is that estimates in practice are a bastardization of reality. The Scrum guide is vague on the topic so managers take matters into their own hands.

Lane Wagner (Qvault)

I’m a product manager, and I find it incredible that people assume that ‘agile’ is the same as ‘Scrum’. If you’re trying to shoehorn the work you do into a development process then, to my mind, you’re doing it wrong.

As with the example below, it’s all about something that works for your particular context, while bearing in mind the principles of the agile manifesto.


How I trick my well developed procrastination skills

The downside of all those nice methods and tools is that you have to apply them, which can be of course, postponed as well. Thus, the most important step is to integrate your tool or todo list in your daily routine. Whenever you finish a task, or you’re thinking what to do next, the focus should be on your list. For example, I figured out that I always click on one link in my browser favourites (a news website) or an app on my mobile phone (my email app). Sometimes I clicked hundred times a day, even though, knowing that there can’t be any new emails, as I checked one minute ago. Maybe you also developed such a “useless” habit which should be broken or at least used for something good. So I just replaced the app on my mobile and the link in my browser with my Remember The Milk app which shows me the tasks I have to do today. If you have just a paper-based solution it might be more difficult but try to integrate it in your daily routines, and keep it always in reach. After finishing a task, you should tick it in your system, which also forces you to have a look at the task list again.

Wolfgang Gassler

Some useful pointers in this post, especially at the end about developing and refining your own system that depends on your current context.


The Great Asshole Fallacy

The focus should be on the insistence of excellence, both from yourself and from those around you. The wisdom from experience. The work ethic. The drive. The dedication. The sacrifice. Jordan hits on all of those. And he even implies that not everyone needed the “tough love” to push them. But that’s glossed over for the more powerful mantra. Still, it doesn’t change the fact that not only are there other ways to tease such greatness out of people — different people require different methods.

M.G. Siegler (500ish)

I like basketball, and my son plays, but I haven’t yet seen the documentary mentioned in this post. The author discusses Michael Jordan stating that “Winning has a price. And leadership has a price.” However, he suggests that this isn’t the only way to get to excellence, and I would agree.


Header image by Romain Briaux

Friday flaggings

As usual, a mixed bag of goodies, just like you used to get from your favourite sweet shop as a kid. Except I don’t hold the bottom of the bag, so you get full value.

Let me know which you found tasty and which ones suck (if you’ll pardon the pun).


Andrei Tarkovsky’s Message to Young People: “Learn to Be Alone,” Enjoy Solitude

I don’t know… I think I’d like to say only that [young people] should learn to be alone and try to spend as much time as possible by themselves. I think one of the faults of young people today is that they try to come together around events that are noisy, almost aggressive at times. This desire to be together in order to not feel alone is an unfortunate symptom, in my opinion. Every person needs to learn from childhood how to spend time with oneself. That doesn’t mean he should be lonely, but that he shouldn’t grow bored with himself because people who grow bored in their own company seem to me in danger, from a self-esteem point of view.

Andrei Tarkovsky

This article in Open Culture quotes the film-maker Andrei Tarkovsky. Having just finished my first set of therapy sessions, I have to say that the metaphor of “putting on your own oxygen mask before helping others” would be a good takeaway from it. That sounds selfish, but as Tarkovsky points out here, other approaches can lead to the destruction of self-esteem.


Being a Noob

[T]here are two sources of feeling like a noob: being stupid, and doing something novel. Our dislike of feeling like a noob is our brain telling us “Come on, come on, figure this out.” Which was the right thing to be thinking for most of human history. The life of hunter-gatherers was complex, but it didn’t change as much as life does now. They didn’t suddenly have to figure out what to do about cryptocurrency. So it made sense to be biased toward competence at existing problems over the discovery of new ones. It made sense for humans to dislike the feeling of being a noob, just as, in a world where food was scarce, it made sense for them to dislike the feeling of being hungry.

Paul Graham

I’m not sure about the evolutionary framing, but there’s definitely something in this about having the confidence (and humility) to be a ‘noob’ and learn things as a beginner.


You Aren’t Communicating Nearly Enough

Imagine you were to take two identical twins and give them the same starter job, same manager, same skills, and the same personality. One competently does all of their work behind a veil of silence, not sharing good news, opportunities, or challenges, but just plugs away until asked for a status update. The other does the same level of work but communicates effectively, keeping their manager and stakeholders proactively informed. Which one is going to get the next opportunity for growth?

Michael Natkin

I absolutely love this post. As a Product Manager, I’ve been talking repeatedly recently about making our open-source project ‘legible’. As remote workers, that means over-communicating and, as pointed out in this post, being proactive in that communication. Highly recommended.


The Boomer Blockade: How One Generation Reshaped the Workforce and Left Everyone Behind

This is a profound trend. The average age of incoming CEOs for S&P 500 companies has increased about 14 years over the last 14 years

From 1980 to 2001 the average age of a CEO dropped four years and then from 2005 to 2019 the average incoming age of new CEOs increased 14 years!

This means that the average birth year of a CEO has not budged since 2005. The best predictor of becoming a CEO of our most successful modern institutions?

Being a baby boomer.

Paul Millerd

Wow. This, via Marginal Revolution, pretty much speaks for itself.


The Ed Tech suitcase

Consider packing a suitcase for a trip. It contains many different items – clothes, toiletries, books, electrical items, maybe food and drink or gifts. Some of these items bear a relationship to others, for example underwear, and others are seemingly unrelated, for example a hair dryer. Each brings their own function, which has a separate existence and relates to other items outside of the case, but within the case, they form a new category, that of “items I need for my trip.” In this sense the suitcase resembles the ed tech field, or at least a gathering of ed tech individuals, for example at a conference

If you attend a chemistry conference and have lunch with strangers, it is highly likely they will nearly all have chemistry degrees and PhDs. This is not the case at an ed tech conference, where the lunch table might contain people with expertise in computer science, philosophy, psychology, art, history and engineering. This is a strength of the field. The chemistry conference suitcase then contains just socks (but of different types), but the ed tech suitcase contains many different items. In this perspective then the aim is not to make the items of the suitcase the same, but to find means by which they meet the overall aim of usefulness for your trip, and are not random items that won’t be needed. This suggests a different way of approaching ed tech beyond making it a discipline.

Martin Weller

At the start of this year, it became (briefly) fashionable among ageing (mainly North American) men to state that they had “never been an edtech guy”. Followed by something something pedagogy or something something people. In this post, Martin Weller uses a handy metaphor to explain that edtech may not be a discipline, but it’s a useful field (or area of focus) nonetheless.


Why Using WhatsApp is Dangerous

Backdoors are usually camouflaged as “accidental” security flaws. In the last year alone, 12 such flaws have been found in WhatsApp. Seven of them were critical – like the one that got Jeff Bezos. Some might tell you WhatsApp is still “very secure” despite having 7 backdoors exposed in the last 12 months, but that’s just statistically improbable.

[…]

Don’t let yourself be fooled by the tech equivalent of circus magicians who’d like to focus your attention on one isolated aspect all while performing their tricks elsewhere. They want you to think about end-to-end encryption as the only thing you have to look at for privacy. The reality is much more complicated. 

Pavel Durov

Facebook products are bad for you, for society, and for the planet. Choose alternatives and encourage others to do likewise.


Why private micro-networks could be the future of how we connect

The current social-media model isn’t quite right for family sharing. Different generations tend to congregate in different places: Facebook is Boomer paradise, Instagram appeals to Millennials, TikTok is GenZ central. (WhatsApp has helped bridge the generational divide, but its focus on messaging is limiting.)

Updating family about a vacation across platforms—via Instagram stories or on Facebook, for example—might not always be appropriate. Do you really want your cubicle pal, your acquaintance from book club, and your high school frenemy to be looped in as well?

Tanya Basu

Some apps are just before their time. Take Path, for example, which my family used for almost the entire eight years it was around, from 2010 to 2018. The interface was great, the experience cosy, and the knowledge that you weren’t sharing with everyone outside of a close circle? Priceless.


‘Anonymized’ Data Is Meaningless Bullshit

While one data broker might only be able to tie my shopping behavior to something like my IP address, and another broker might only be able to tie it to my rough geolocation, that’s ultimately not much of an issue. What is an issue is what happens when those “anonymized” data points inevitably bleed out of the marketing ecosystem and someone even more nefarious uses it for, well, whatever—use your imagination. In other words, when one data broker springs a leak, it’s bad enough—but when dozens spring leaks over time, someone can piece that data together in a way that’s not only identifiable but chillingly accurate.

Shoshana Wodinsky

This idea of cumulative harm is a particularly difficult one to explain (and prove) not only in the world of data, but in every area of life.
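As a toy illustration of the re-identification problem Wodinsky describes — entirely made-up data, where neither broker’s records identify anyone on their own, but joined on a shared quasi-identifier they become chillingly specific:

```python
# Toy example of combining two 'anonymized' data sets. Neither set names
# anyone, but joining on a shared quasi-identifier (here, an IP address)
# links sensitive behaviour to a location and age band. All data invented.
broker_a = [  # shopping behaviour keyed to an IP address
    {"ip": "203.0.113.7", "purchases": ["pregnancy test", "prenatal vitamins"]},
]
broker_b = [  # rough location keyed to the same IP address
    {"ip": "203.0.113.7", "postcode": "NE46", "age_band": "30-39"},
]

joined = [
    {**a, **b}          # merge the two records when the identifiers match
    for a in broker_a
    for b in broker_b
    if a["ip"] == b["ip"]
]
print(joined)  # one merged record linking purchases to postcode and age band
```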


“Hey Google, stop tracking me”

Google recently invented a third way to track who you are and what you view on the web.

[…]

Each and every install of Chrome, since version 54, have generated a unique ID. Depending upon which settings you configure, the unique ID may be longer or shorter.

[…]

So every time you visit a Google web page or use a third party site which uses some Google resource, this ID is sent to Google and can be used to track which website or individual page you are viewing. As Google’s services such as scripts, captchas and fonts are used extensively on the most popular web sites, it’s likely that Google tracks most web pages you visit.

Magic Lasso

Use Firefox. Use multi-account containers and extensions that protect your privacy.
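To see why a persistent per-install identifier sent alongside requests for widely embedded resources is enough to reconstruct a browsing history, here’s a deliberately simplified sketch. The IDs, sites, and log format are invented for illustration; nothing here reflects Google’s actual systems:

```python
# Simplified illustration: if a browser sends a stable install ID with every
# request for a third-party resource (fonts, scripts, captchas), the resource
# host can group those requests into a per-person browsing trail.
from collections import defaultdict

request_log = [
    {"install_id": "a1b2c3", "referer": "news-site.example"},
    {"install_id": "a1b2c3", "referer": "health-forum.example"},
    {"install_id": "a1b2c3", "referer": "jobs-board.example"},
    {"install_id": "zz9y8x", "referer": "recipes.example"},
]

trails = defaultdict(list)
for req in request_log:
    trails[req["install_id"]].append(req["referer"])

print(dict(trails))
# {'a1b2c3': ['news-site.example', 'health-forum.example', 'jobs-board.example'],
#  'zz9y8x': ['recipes.example']}
```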


The Golden Age of the Internet and social media is over

In the last year I have seen more and more researchers like danah boyd suggesting that digital literacies are not enough. Given that some on the Internet have weaponized these tools, I believe she is right. Moving beyond digital literacies means thinking about the epistemology behind digital literacies and helping to “build the capacity to truly hear and embrace someone else’s perspective and teaching people to understand another’s view while also holding their view firm” (boyd, March 9, 2018). We can still rely on social media for our news but we really owe it to ourselves to do better in further developing digital literacies, and knowing that just because we have discussions through screens that we should not be so narcissistic to believe that we MUST be right or that the other person is simply an idiot.

Jimmy Young

I’d argue, as I did recently in this talk, that what Young and boyd are talking about here is actually a central tenet of digital literacies.


Image via Introvert doodles

