Tag: identity

To others we are not ourselves but a performer in their lives cast for a part we do not even know that we are playing

Surveillance, technology, and society

Last week, the London Metropolitan Police (‘the Met’) proudly announced that they’ve begun using ‘LFR’, which is their neutral-sounding acronym for something incredibly invasive to the privacy of everyday people in Britain’s capital: Live Facial Recognition.

It’s obvious that the Met expect some pushback here:

The Met will begin operationally deploying LFR at locations where intelligence suggests we are most likely to locate serious offenders. Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences. 

At a deployment, cameras will be focused on a small, targeted area to scan passers-by. The cameras will be clearly signposted and officers deployed to the operation will hand out leaflets about the activity. The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.

London Metropolitan Police

Note the talk of ‘intelligence’ and ‘bespoke watch lists’, as well as promises that LFR will not be linked to any other systems. (ANPR, for those not familiar with it, is ‘Automatic Number Plate Recognition’.) This, of course, is the thin end of the wedge and how these things start — in a ‘targeted’ way. They’re expanded later, often when the fuss has died down.


Meanwhile, a lot of controversy surrounds an app called Clearview AI which scrapes publicly-available data (e.g. Twitter or YouTube profiles) and applies facial recognition algorithms. It’s already in use by law enforcement in the USA.

The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, with over 641 million images of US citizens.

The Clearview app isn’t available to the public, but the Times says police officers and Clearview investors think it will be in the future.

The startup said in a statement Tuesday that its “technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public.” 

Edward Moyer (CNET)

So there we are again: the technology is ‘intended’ for one purpose, but the general feeling is that it will leak out into others. Imagine the situation if anyone could identify almost anyone on the planet simply by pointing their smartphone at them for a few seconds.

This is a huge issue, and one that politicians and lawmakers on both sides of the Atlantic are at once ill-equipped to deal with and particularly concerned about. As the BBC reports, the European Commission is considering a five-year ban on facial recognition in public spaces while it figures out how to regulate the technology:

The Commission set out its plans in an 18-page document, suggesting that new rules will be introduced to bolster existing regulation surrounding privacy and data rights.

It proposed imposing obligations on both developers and users of artificial intelligence, and urged EU countries to create an authority to monitor the new rules.

During the ban, which would last between three and five years, “a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed”.

BBC News

I can’t see the genie going back into this particular bottle and, as Ian Welsh puts it, this is the end of public anonymity. He gives examples of the potential for all kinds of abuse, from an increase in rape, to abuse by corporations, to an increase in parental surveillance of children.

The larger issue is this: people who are constantly under surveillance become super conformers out of defense. Without true private time, the public persona and the private personality tend to collapse together. You need a backstage — by yourself and with a small group of friends to become yourself. You need anonymity.

When everything you do is open to criticism by everyone, you will become timid and conforming.

When governments, corporations, schools and parents know everything, they will try to control everything. This often won’t be for your benefit.

Ian Welsh

We already know that self-censorship is the worst kind of censorship, and live facial recognition means we’re going to have to do a whole lot more of it in the near future.

So what can we do about it? Welsh thinks that this technology should be made illegal, which is one option. However, you can’t un-invent technologies, so live facial recognition is going to be used (lawfully) by some organisations, even if only by state operatives. I’m not sure whether that would be better or worse than everyone having it.


At a recent workshop I ran, I was talking during one of the breaks to someone who couldn’t really see the problem I had raised about surveillance capitalism. I have to wonder whether they would have a problem with live facial recognition; from our conversation, I’d suspect not.

Remember that facial recognition is not 100% accurate and (realistically) never can be. So there will be false positives. Let’s say your face ends up on a ‘watch list’ or a ‘bad actor’ database shared with many different agencies and retailers. All of a sudden, you’ve got yourself a very big problem.
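To see why false positives matter even when a system sounds highly accurate, here’s a rough back-of-the-envelope calculation. All the numbers are made up purely for illustration — they don’t describe the Met’s system or any real deployment:

```python
# Hypothetical figures to illustrate the base-rate problem with
# face-matching at scale (not real numbers for any deployed system).
daily_passers_by = 100_000    # faces scanned at one deployment
actually_wanted = 10          # genuine watch-list matches among them
accuracy = 0.999              # assume 99.9% accuracy in both directions

true_positives = actually_wanted * accuracy
false_positives = (daily_passers_by - actually_wanted) * (1 - accuracy)

# What proportion of alerts point at an innocent passer-by?
alert_error_rate = false_positives / (false_positives + true_positives)

print(f"True positives:  {true_positives:.2f}")
print(f"False positives: {false_positives:.2f}")
print(f"Share of alerts hitting the wrong person: {alert_error_rate:.0%}")
```

Even with an (optimistic) 99.9% accurate system, because wanted individuals are so rare among the scanned crowd, roughly nine out of ten alerts would flag an innocent person — which is exactly how ordinary people end up on a ‘bad actor’ list.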


As BuzzFeed News reports, around half of US retailers either use live facial recognition or have plans to use it. At the moment, companies like FaceFirst do not facilitate the sharing of data across their clients, but you can see what’s coming next:

[Peter Trepp, CEO of FaceFirst] said the database is not shared with other retailers or with FaceFirst directly. All retailers have their own policies, but Trepp said often stores will offer not to press charges against apprehended shoplifters if they agree to opt into the store’s shoplifter database. The files containing the images and identities of people on “the bad guy list” are encrypted and only accessible to retailers using their own systems, he said.

FaceFirst automatically purges visitor data that does not match information in a criminal database every 14 days, which is the company’s minimum recommendation for auto-purging data. It’s up to the retailer if apprehended shoplifters or people previously on the list can later opt out of the database.

Leticia Miranda (BuzzFeed News)

There is no opt-in, no consent sought or gathered by retailers. This is a perfect example of technology being light years ahead of lawmaking.


This is all well and good in situations where adults are going into public spaces, but what about schools, where children are often only one step above prisoners in terms of the rights they enjoy?

Recode reports that, in schools, the surveillance threat to students goes beyond facial recognition. So long as authorities know generally what a student looks like, they can track them everywhere they go:

Appearance Search can find people based on their age, gender, clothing, and facial characteristics, and it scans through videos like facial recognition tech — though the company that makes it, Avigilon, says it doesn’t technically count as a full-fledged facial recognition tool

Even so, privacy experts told Recode that, for students, the distinction doesn’t necessarily matter. Appearance Search allows school administrators to review where a person has traveled throughout campus — anywhere there’s a camera — using data the system collects about that person’s clothing, shape, size, and potentially their facial characteristics, among other factors. It also allows security officials to search through camera feeds using certain physical descriptions, like a person’s age, gender, and hair color. So while the tool can’t say who the person is, it can find where else they’ve likely been.

Rebecca Heilweil (Recode)

This is a good example of the boundaries of technology that may or may not be banned at some point in the future. The makers of Appearance Search, Avigilon, claim that it’s not facial recognition technology because the images it captures and analyses are not tied to the identity of a particular person:

Avigilon’s surveillance tool exists in a gray area: Even privacy experts are conflicted over whether or not it would be accurate to call the system facial recognition. After looking at publicly available content about Avigilon, Leong said it would be fairer to call the system an advanced form of characterization, meaning that the system is making judgments about the attributes of that person, like what they’re wearing or their hair, but it’s not actually claiming to know their identity.

Rebecca Heilweil (Recode)

You can give as many examples of the technology being used for good as you want — there’s one in this article about how the system helped discover a girl was being bullied, for example — but it’s still intrusive surveillance. There are other ways of getting to the same outcome.


We do not live in a world of certainty. We live in a world where things are ambiguous, unsure, and sometimes a little dangerous. While we should seek to protect one another, and especially those who are most vulnerable in society, we should think about the harm we’re doing by forcing people to live the totality of their lives in public.

What does that do to our conceptions of self? To creativity? To activism? Live facial recognition technology, as well as those technologies that exist in a grey area around it, is the hot-button issue of the 2020s.


Image by Kirill Sharkovski. Quotation-as-title by Elizabeth Bibesco.

Friday fizzles

I head off on holiday tomorrow! Before I go, check out these highlights from this week’s reading and research:

  • “Things that were considered worthless are redeemed” (Ira David Socol) — “Empathy plus Making must be what education right now is about. We are at both a point of learning crisis and a point of moral crisis. We see today what happens — in the US, in the UK, in Brasil — when empathy is lost — and it is a frightening sight. We see today what happens — in graduates from our schools who do not know how to navigate their world — when the learning in our schools is irrelevant in content and/or delivery.”
  • Voice assistants are going to make our work lives better—and noisier (Quartz) — “Active noise cancellation and AI-powered sound settings could help to tackle these issues head on (or ear on). As the AI in noise cancellation headphones becomes better and better, we’ll potentially be able to enhance additional layers of desirable audio, while blocking out sounds that distract. Audio will adapt contextually, and we’ll be empowered to fully manage and control our soundscapes.”
  • We Aren’t Here to Learn What We Already Know (LA Review of Books) — “A good question, in short, is an honest question, one that, like good theory, dances on the edge of what is knowable, what it is possible to speculate on, what is available to our immediate grasp of what we are reading, or what it is possible to say. A good question, that is, like good theory, might be quite unlovely to read, particularly in its earliest iterations. And sometimes it fails or has to be abandoned.”
  • The runner who makes elaborate artwork with his feet and a map (The Guardian) — “The tracking process is high-tech, but the whole thing starts with just a pen and paper. “When I was a kid everyone thought I’d be an artist when I grew up – I was always drawing things,” he said. He was a particular fan of the Etch-a-Sketch, which has something in common with his current work: both require creating images in an unbroken line.”
  • What I Do When it Feels Like My Work Isn’t Good Enough (James Clear) — “Release the desire to define yourself as good or bad. Release the attachment to any individual outcome. If you haven’t reached a particular point yet, there is no need to judge yourself because of it. You can’t make time go faster and you can’t change the number of repetitions you have put in before today. The only thing you can control is the next repetition.”
  • Online porn and our kids: It’s time for an uncomfortable conversation (The Irish Times) — “Now when we talk about sex, we need to talk about porn, respect, consent, sexuality, body image and boundaries. We don’t need to terrify them into believing watching porn will ruin their lives, destroy their relationships and warp their libidos, maybe, but we do need to talk about it.”
  • Drones will fly for days with new photovoltaic engine (Tech Xplore) — “[T]his finding builds on work… published in 2011, which found that the key to boosting solar cell efficiency was not by absorbing more photons (light) but emitting them. By adding a highly reflective mirror on the back of a photovoltaic cell, they broke efficiency records at the time and have continued to do so with subsequent research.”
  • Twitter won’t ruin the world. But constraining democracy would (The Guardian) — “The problems of Twitter mobs and fake news are real. As are the issues raised by populism and anti-migrant hostility. But neither in technology nor in society will we solve any problem by beginning with the thought: “Oh no, we put power into the hands of people.” Retweeting won’t ruin the world. Constraining democracy may well do.”
  • The Encryption Debate Is Over – Dead At The Hands Of Facebook (Forbes) — “Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.”
  • Living in surplus (Seth Godin) — “When you live in surplus, you can choose to produce because of generosity and wonder, not because you’re drowning.”

Image from Dilbert. Shared to make the (hopefully self-evident) counterpoint that not everything of value has an economic value. There’s more to life than accumulation.

Remembering the past through photos

A few weeks ago, I bought a Google Assistant-powered smart display and put it in our kitchen in place of the DAB radio. It has the added bonus of cycling through all of my Google Photos, which stretch back as far as when my wife and I were married, 15 years ago.

This part of its functionality makes it, of course, just a cloud-powered digital photo frame. But I think it’s easy to underestimate the power that these things have. About an hour before composing this post, for example, my wife took a photo of a photo(!) that appeared on the display showing me on the beach with our two children when they were very small.

An article by Giuliana Mazzoni in The Conversation points out that our ability to whip out a smartphone at any given moment and take a photo changes our relationship to the past:

We use smart phones and new technologies as memory repositories. This is nothing new – humans have always used external devices as an aid when acquiring knowledge and remembering.

[…]

Nowadays we tend to commit very little to memory – we entrust a huge amount to the cloud. Not only is it almost unheard of to recite poems, even the most personal events are generally recorded on our cellphones. Rather than remembering what we ate at someone’s wedding, we scroll back to look at all the images we took of the food.

Mazzoni points out that this can be problematic, as memory is important for learning. However, there may be a “silver lining”:

Even if some studies claim that all this makes us more stupid, what happens is actually shifting skills from purely being able to remember to being able to manage the way we remember more efficiently. This is called metacognition, and it is an overarching skill that is also essential for students – for example when planning what and how to study. There is also substantial and reliable evidence that external memories, selfies included, can help individuals with memory impairments.

But while photos can in some instances help people to remember, the quality of the memories may be limited. We may remember what something looked like more clearly, but this could be at the expense of other types of information. One study showed that while photos could help people remember what they saw during some event, they reduced their memory of what was said.

She goes on to discuss the impact that viewing many photos from your past has on a malleable sense of self:

Research shows that we often create false memories about the past. We do this in order to maintain the identity that we want to have over time – and avoid conflicting narratives about who we are. So if you have always been rather soft and kind – but through some significant life experience decide you are tough – you may dig up memories of being aggressive in the past or even completely make them up.

I’m not so sure that it’s a good thing to tell yourself the wrong story about who you are. For example, although I grew up in, and identified with, a macho ex-mining town environment, I’ve become happier by realising that my identity is separate from that.

I suppose it’s a bit different for me, as most of the photos I’m looking at are of me with my children and/or my wife. However, I still have to tell myself a story of who I am as a husband and a father, so in many ways it’s the same.

All in all, I love the fact that we can take photos anywhere and at any time. We may need to evolve social norms around the most appropriate ways of capturing images in crowded situations, but that’s separate to the very great benefit which I believe they bring us.

Source: The Conversation

Identity is a pattern in time

When I was an undergraduate at Sheffield University, one of my Philosophy modules (quite appropriately) blew my mind. Entitled Mind, Brain and Personal Identity, it’s still being taught there, almost 20 years later.

One of the reasons for studying Philosophy is that it challenges your assumptions about the world as well as the ‘cultural programming’ of how you happened to be brought up. This particular module challenged my beliefs around a person being a single, contiguous being from birth to death.

That’s why I found this article by Esko Kilpi about workplace culture and identity particularly interesting:

There are two distinctly different approaches to understanding the individual and the social. Mainstream thinking sees the social as a community, on a different level from the individuals who form it. The social is separate from the individuals. “I” and “we” are separate things and can be understood separately.

Although he doesn’t mention it, Kilpi is actually invoking the African philosophy of Ubuntu here.

Ubuntu (Zulu pronunciation: [ùɓúntʼù]) is a Nguni Bantu term meaning “humanity”. It is often translated as “I am because we are,” and also “humanity towards others”, but is often used in a more philosophical sense to mean “the belief in a universal bond of sharing that connects all humanity”.

Instead of seeing the individual as “silent and private” and social interaction as “vocal and more public”, individuals are “thoroughly social”:

In this way of thinking, we leave behind the western notion of the self-governing, independent individual for a different notion, of interdependent people whose identities are established in interaction with each other. From this perspective, individual change cannot be separated from changes in the groups to which an individual belongs. And changes in the groups don’t take place without the individuals changing. We form our groups and our followerships and they form us at the same time, all the time.

This is why I believe in open licensing, open source, and working as openly as possible. It maximises social relationships, and helps foster individual development within those groups.

Source: Esko Kilpi

No-one wants a single identity, online or offline

It makes sense for companies reliant on advertising to not only get as much data as they can about you, but to make sure that you have a single identity on their platform to which to associate it.

This article by Cory Doctorow in BoingBoing reports on some research around young people and social media. As Doctorow states:

Social media has always had a real-names problem. Social media companies want their users to use their real names because it makes it easier to advertise to them. Users want to be able to show different facets of their identities to different people, because only a sociopath interacts with their boss, their kids, and their spouse in the same way.

I was talking to one of my Moodle colleagues about how, in our mid-thirties, we’re a ‘bridging’ generation between those who only went online in adulthood, and those who have only ever known a world with the internet. I got online for the first time when I was about fourteen or fifteen.

Those younger than me are well aware of the perils and pitfalls of a single online identity:

Amy Lancaster from the Journalism and Digital Communications school at the University of Central Lancashire studies the way that young people resent “the way Facebook ties them into a fixed self…[linking] different areas of a person’s life, carrying over from school to university to work.”

I think Doctorow has made an error around Amy’s surname, which is ‘Binns’ rather than ‘Lancaster’ in both the journal article and the original post.

Binns writes:

Young people know their future employers, parents and grandparents are present online, and so they behave accordingly. And it’s not only older people that affect behaviour.

My research shows young people dislike the way Facebook ties them into a fixed self. Facebook insists on real names and links different areas of a person’s life, carrying over from school to university to work. This arguably restricts the freedom to explore new identities – one of the key benefits of the web.

The desire for escapable transience over damning permanence has driven Snapchat’s success, precisely because it’s a messaging app that allows users to capture videos and pictures that are quickly removed from the service.

This is important for the work I’m leading around Project MoodleNet. It’s not just teenagers who want “escapable transience over damning permanence”.

Source: BoingBoing
