Tag: facial recognition

To others we are not ourselves but a performer in their lives cast for a part we do not even know that we are playing

Surveillance, technology, and society

Last week, the London Metropolitan Police (‘the Met’) proudly announced that they’ve begun using ‘LFR’, which is their neutral-sounding acronym for something incredibly invasive to the privacy of everyday people in Britain’s capital: Live Facial Recognition.

It’s obvious that the Met expect some pushback here:

The Met will begin operationally deploying LFR at locations where intelligence suggests we are most likely to locate serious offenders. Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences. 

At a deployment, cameras will be focused on a small, targeted area to scan passers-by. The cameras will be clearly signposted and officers deployed to the operation will hand out leaflets about the activity. The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.

London Metropolitan Police

Note the talk of ‘intelligence’ and ‘bespoke watch lists’, as well as promises that LFR will not be linked to any other systems. (ANPR, for those not familiar with it, is ‘Automatic Number Plate Recognition’.) This, of course, is the thin end of the wedge and how these things start — in a ‘targeted’ way. They’re expanded later, often when the fuss has died down.


Meanwhile, a lot of controversy surrounds an app called Clearview AI, which scrapes publicly-available data (e.g. Twitter or YouTube profiles) and applies facial recognition algorithms to it. It’s already in use by law enforcement in the USA.

The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, with over 641 million images of US citizens.

The Clearview app isn’t available to the public, but the Times says police officers and Clearview investors think it will be in the future.

The startup said in a statement Tuesday that its “technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public.” 

Edward Moyer (CNET)

So there we are again: the technology is ‘intended’ for one purpose, but the general feeling is that it will leak out into others. Imagine a situation in which anyone could identify almost anyone on the planet simply by pointing a smartphone at them for a few seconds.

This is a huge issue, and one that politicians and lawmakers on both sides of the Atlantic are at once ill-equipped to deal with and particularly concerned about. As the BBC reports, the European Commission is considering a five-year ban on facial recognition in public spaces while it figures out how to regulate the technology:

The Commission set out its plans in an 18-page document, suggesting that new rules will be introduced to bolster existing regulation surrounding privacy and data rights.

It proposed imposing obligations on both developers and users of artificial intelligence, and urged EU countries to create an authority to monitor the new rules.

During the ban, which would last between three and five years, “a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed”.

BBC News

I can’t see the genie going back into this particular bottle and, as Ian Welsh puts it, this is the end of public anonymity. He gives examples of the potential for all kinds of abuse, from an increase in rape, to abuse by corporations, to an increase in parental surveillance of children.

The larger issue is this: people who are constantly under surveillance become super conformers out of defense. Without true private time, the public persona and the private personality tend to collapse together. You need a backstage — by yourself and with a small group of friends to become yourself. You need anonymity.

When everything you do is open to criticism by everyone, you will become timid and conforming.

When governments, corporations, schools and parents know everything, they will try to control everything. This often won’t be for your benefit.

Ian Welsh

We already know that self-censorship is the worst kind of censorship, and live facial recognition means we’re going to have to do a whole lot more of it in the near future.

So what can we do about it? Welsh thinks that this technology should be made illegal, which is one option. However, you can’t un-invent technologies, so live facial recognition is going to be used (lawfully) by some organisations, even if it’s restricted to state operatives. I’m not sure whether that’s better or worse than everyone having it.


At a recent workshop I ran, I spent one of the breaks talking to someone who couldn’t really see the problem I had raised about surveillance capitalism. I have to wonder whether they would have a problem with live facial recognition. From our conversation, I’d suspect not.

Remember that facial recognition is not 100% accurate and (realistically) never can be. So there will be false positives. Let’s say your face ends up on a ‘watch list’ or a ‘bad actor’ database shared with many different agencies and retailers. All of a sudden, you’ve got yourself a very big problem.
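To see why even a seemingly accurate system produces a flood of mistaken matches, here’s a rough back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a figure from the Met or any vendor:

```python
# Base-rate illustration: all numbers below are made up for the sake of argument.
people_scanned = 100_000        # passers-by scanned during a deployment (assumed)
actually_wanted = 10            # of those, genuinely on the watch list (assumed)
true_positive_rate = 0.90       # chance a wanted person is correctly flagged (assumed)
false_positive_rate = 0.001     # 1-in-1,000 innocent people wrongly flagged (assumed)

true_alerts = actually_wanted * true_positive_rate                        # ~9
false_alerts = (people_scanned - actually_wanted) * false_positive_rate  # ~100

precision = true_alerts / (true_alerts + false_alerts)
print(f"True alerts:  {true_alerts:.0f}")
print(f"False alerts: {false_alerts:.0f}")
print(f"Chance a given alert is really a wanted person: {precision:.1%}")
```

In this toy scenario roughly nine out of every ten alerts point at an innocent passer-by, simply because the handful of people actually on the watch list are vastly outnumbered by everyone else being scanned.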


As BuzzFeed News reports, around half of US retailers are either using live facial recognition or have plans to use it. At the moment, companies like FaceFirst do not facilitate the sharing of data across their clients, but you can see what’s coming next:

[Peter Trepp, CEO of FaceFirst] said the database is not shared with other retailers or with FaceFirst directly. All retailers have their own policies, but Trepp said often stores will offer not to press charges against apprehended shoplifters if they agree to opt into the store’s shoplifter database. The files containing the images and identities of people on “the bad guy list” are encrypted and only accessible to retailers using their own systems, he said.

FaceFirst automatically purges visitor data that does not match information in a criminal database every 14 days, which is the company’s minimum recommendation for auto-purging data. It’s up to the retailer if apprehended shoplifters or people previously on the list can later opt out of the database.

Leticia Miranda (BuzzFeed News)

For ordinary shoppers, there is no opt-in, no consent sought or gathered by retailers. This is a perfect example of technology being light years ahead of lawmaking.


This is all well and good in situations where adults are going into public spaces, but what about schools, where children are often only one step above prisoners in terms of the rights they enjoy?

Recode reports that, in schools, the surveillance threat to students goes beyond facial recognition. So long as authorities know generally what a student looks like, they can track them everywhere they go:

Appearance Search can find people based on their age, gender, clothing, and facial characteristics, and it scans through videos like facial recognition tech — though the company that makes it, Avigilon, says it doesn’t technically count as a full-fledged facial recognition tool.

Even so, privacy experts told Recode that, for students, the distinction doesn’t necessarily matter. Appearance Search allows school administrators to review where a person has traveled throughout campus — anywhere there’s a camera — using data the system collects about that person’s clothing, shape, size, and potentially their facial characteristics, among other factors. It also allows security officials to search through camera feeds using certain physical descriptions, like a person’s age, gender, and hair color. So while the tool can’t say who the person is, it can find where else they’ve likely been.

Rebecca Heilweil (Recode)

This is a good example of the boundaries of technology that may or may not be banned at some point in the future. The makers of Appearance Search, Avigilon, claim that it’s not facial recognition technology because the images it captures and analyses aren’t tied to the identity of a particular person:

Avigilon’s surveillance tool exists in a gray area: Even privacy experts are conflicted over whether or not it would be accurate to call the system facial recognition. After looking at publicly available content about Avigilon, Leong said it would be fairer to call the system an advanced form of characterization, meaning that the system is making judgments about the attributes of that person, like what they’re wearing or their hair, but it’s not actually claiming to know their identity.

Rebecca Heilweil (Recode)
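To make that distinction concrete, here’s a minimal, hypothetical sketch of what an attribute-based ‘characterisation’ search might look like. It is not Avigilon’s actual design; the attribute names and data are assumptions, made up purely for illustration:

```python
# Hypothetical sketch of attribute-based "characterisation" search.
# No identity is stored, only inferred attributes of each sighting.
from dataclasses import dataclass

@dataclass
class Sighting:
    camera_id: str
    timestamp: str
    apparent_age: str       # e.g. "child", "teen", "adult" (assumed categories)
    apparent_gender: str
    hair_colour: str
    clothing_colour: str

def search(sightings, **query):
    """Return every sighting whose attributes match the query."""
    return [
        s for s in sightings
        if all(getattr(s, key) == value for key, value in query.items())
    ]

sightings = [
    Sighting("gate-cam-1", "08:45", "teen", "female", "brown", "red"),
    Sighting("library-cam", "10:12", "teen", "female", "brown", "red"),
    Sighting("gate-cam-1", "08:47", "adult", "male", "black", "blue"),
]

# "Where else has someone matching this description been?"
for s in search(sightings, apparent_gender="female", clothing_colour="red"):
    print(s.camera_id, s.timestamp)
```

No name or identity appears anywhere, yet stitching the matching sightings together is enough to reconstruct where a particular-looking person has been, which is exactly why privacy experts say the distinction doesn’t necessarily matter.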

You can give as many examples of the technology being used for good as you want — there’s one in this article about how the system helped discover a girl was being bullied, for example — but it’s still intrusive surveillance. There are other ways of getting to the same outcome.


We do not live in a world of certainty. We live in a world where things are ambiguous, unsure, and sometimes a little dangerous. While we should seek to protect one another, and especially those who are most vulnerable in society, we should think about the harm we’re doing by forcing people to live the totality of their lives in public.

What does that do to our conceptions of self? To creativity? To activism? Live facial recognition technology, as well as those technologies that exist in a grey area around it, is the hot-button issue of the 2020s.


Image by Kirill Sharkovski. Quotation-as-title by Elizabeth Bibesco.

Friday fermentations

I boiled the internet and this was what remained:

  • I Quit Social Media for a Year and Nothing Magical Happened (Josh C. Simmons) — “A lot of social media related aspects of my life are different now – I’m not sure they’re better, they’re just different, but I can confidently say that I prefer this normal to last year’s. There’s a bit of rain with all of the sunshine. I don’t see myself ever going back to social media. I don’t see the point of it, and after leaving for a while, and getting a good outside look, it seems like an abusive relationship – millions of workers generating data for tech-giants to crunch through and make money off of. I think that we tend to forget how we were getting along pretty well before social media – not everything was idyllic and better, but it was fine.”
  • Face recognition, bad people and bad data (Benedict Evans) — “My favourite example of what can go wrong here comes from a project for recognising cancer in photos of skin. The obvious problem is that you might not have an appropriate distribution of samples of skin in different tones. But another problem that can arise is that dermatologists tend to put rulers in the photo of cancer, for scale – so if all the examples of ‘cancer’ have a ruler and all the examples of ‘not-cancer’ do not, that might be a lot more statistically prominent than those small blemishes. You inadvertently built a ruler-recogniser instead of a cancer-recogniser.”
  • Would the Internet Be Healthier Without ‘Like’ Counts? (WIRED) ⁠— “Online, value is quantifiable. The worth of a person, idea, movement, meme, or tweet is often based on a tally of actions: likes, retweets, shares, followers, views, replies, claps, and swipes-up, among others. Each is an individual action. Together, though, they take on outsized meaning. A YouTube video with 100,000 views seems more valuable than one with 10, even though views—like nearly every form of online engagement—can be easily bought. It’s a paradoxical love affair. And it’s far from an accident.”
  • Are Platforms Commons? (On The Horizon) — “[W]hat if ecosystems were constructed so that they were governed by the participants, rather by the hypercapitalist strivings of the platform owners — such as Apple, Google, Amazon, Facebook — or the heavy-handed regulators? Is there a middle ground where the needs of the end user and those building, marketing, and shipping products and services can be balanced, and a fair share of the profits are distributed not just through common carrier laws but by the shared economics of a commons, and where the platform orchestrator gets a fair share, as well?”
  • Depression and anxiety threatened to kill my career. So I came clean about it (The Guardian) — “To my surprise, far from rejecting me, students stayed after class to tell me how sorry they were. They left condolence cards in my mailbox and sent emails to let me know they were praying for my family. They stopped by my office to check on me. Up to that point, I’d been so caught up in my despair that it never occurred to me that I might be worthy of concern and support. Being accepted despite my flaws touched me in ways that are hard to express.”
  • Absolute scale corrupts absolutely (apenwarr) — “Here’s what we’ve lost sight of, in a world where everything is Internet scale: most interactions should not be Internet scale. Most instances of most programs should be restricted to a small set of obviously trusted people. All those people, in all those foreign countries, should not be invited to read Equifax’s PII database in Argentina, no matter how stupid the password was. They shouldn’t even be able to connect to the database. They shouldn’t be able to see that it exists. It shouldn’t, in short, be on the Internet.”
  • The Automation Charade (Logic magazine) — “The problem is that the emphasis on technological factors alone, as though “disruptive innovation” comes from nowhere or is as natural as a cool breeze, casts an air of blameless inevitability over something that has deep roots in class conflict. The phrase “robots are taking our jobs” gives technology agency it doesn’t (yet?) possess, whereas “capitalists are making targeted investments in robots designed to weaken and replace human workers so they can get even richer” is less catchy but more accurate.”
  • The ambitious plan to reinvent how websites get their names (MIT Technology Review) — “The system would be based on blockchain technology, meaning it would be software that runs on a widely distributed network of computers. In theory, it would have no single point of failure and depend on no human-run organization that could be corrupted or co-opted.”
  • O whatever God or whatever ancestor that wins in the next life (The Main Event) — “And it begins to dawn on you that the stories were all myths and the epics were all narrated by the villains and the history books were written to rewrite the histories and that so much of what you thought defined excellence merely concealed grift.”
  • A Famous Argument Against Free Will Has Been Debunked (The Atlantic) — “In other words, people’s subjective experience of a decision—what Libet’s study seemed to suggest was just an illusion—appeared to match the actual moment their brains showed them making a decision.”

The best way out is always through

So said Robert Frost, but I want to begin with the ending of a magnificent post from Kate Bowles. She expresses clearly how I feel sometimes when I sit down to write something for Thought Shrapnel:

[T]his morning I blocked out time, cleared space, and sat down to write — and nothing happened. Nothing. Not a word, not even a wisp of an idea. After enough time staring at the blankness of the screen I couldn’t clearly remember having had an idea, ever.

Along the way I looked at the sky, I ate a mandarin and then a second mandarin, I made a cup of tea, I watched a family of wrens outside my window, I panicked. I let email divert me, and then remembered that was the opposite of the plan. I stayed off Twitter. Panic increased.

Then I did the one thing that absolutely makes a difference to me. I asked for help. I said “I write so many stupid words in my bullshit writing job that I can no longer write and that is the end of that.” And the person I reached out to said very calmly “Why not write about the thing you’re thinking about?”

Sometimes what you have to do as a writer is sit in place long enough, and sometimes you have to ask for help. Whatever works for you, is what works.

Kate Bowles

There are so many things wrong with the world right now that sometimes I feel like I could stop working on all the things I’m working on and spend time just pointing them out to people.

But to what end? You don’t change the world just by making people aware of things, not usually. For example, as tragic as the sentence “the Amazon is on fire” is, it isn’t in and of itself a call to action. These days, people argue about the facts themselves as well as the appropriate response.

The world is an inordinately complicated place that we seek to make sense of by thinking as little as humanly possible. To aid and abet us in this task, we divide ourselves, either consciously or unconsciously, into groups who apply similar heuristics. The new (information) is then assimilated into the old (worldview).

I have no privileged position, no objective viewpoint from which to observe and judge the world’s actions. None of us do. I’m as complicit in joining and forming in-groups and out-groups as the next person. I decide I’m going to delete my Twitter account and then end up rage-tweeting All The Things.

Thankfully, there are smart people, and not only academics, thinking about all this to figure out what we can and should do. Tim Urban, from the phenomenally successful Wait But Why, for example, has spent the last three years working on “a new language we can use to think and talk about our societies and the people inside of them”. In the first chapter of a new series, he writes about the ongoing struggle between (what he calls) the ‘Primitive Minds’ and ‘Higher Minds’ of humans:

The never-ending struggle between these two minds is the human condition. It’s the backdrop of everything that has ever happened in the human world, and everything that happens today. It’s the story of our times because it’s the story of all human times.

Tim Urban

I think this is worth remembering when we spend time on social networks. And especially when we spend so much time that it becomes our default delivery method for the news of the day. Our Primitive Minds respond strongly to stimuli around fear and fornication.

When we reflect on our social media usage and the changing information landscape, the temptation is either to cut down, or to try a different information diet. Some people become the equivalent of Information Vegans, attempting to source the ‘cleanest’ morsels of information from the most wholesome, trusted, and traceable of places.

But where are those ‘trusted places’ these days? Are we as happy with the previously gold-standard news outlets such as the BBC and The New York Times as we once were? And if not, what’s changed?

The difference, I think, is the way we’ve decided to allow money to flow through our digital lives. Commercial news outlets, including those with which the BBC competes, are funded by advertising. Those adverts we see in digital spaces aren’t just showing things that we might happen to be interested in. They’ll keep on showing you that pair of shoes you almost bought last week in every space that is funded by advertising. Which is basically everywhere.

I feel like I’m saying obvious things here that everyone knows, but perhaps it bears repeating. If everyone is consuming news via social networks, and those news stories are funded by advertising, then the nature of what counts as ‘news’ starts to evolve. What gets the most engagement? How are headlines formed now, compared with a decade ago?

It’s as if something hot-wires our brains when something non-threatening and potentially interesting is made available to us ‘for free’. We never get to the stuff that we’d like to think defines us, because we’re caught in never-ending cycles of titillation. We pay with our attention, that scarce and valuable resource.

Our attention, and more specifically how we react to our social media feeds when we’re ‘engaged’, is valuable because it can be packaged up and sold to advertisers. But it’s also sold to governments. Twitter just had to update their terms and conditions specifically because of the outcry over the Chinese government’s propaganda around the Hong Kong protests.

Protesters who are part of the ‘umbrella revolution’ in Hong Kong have recently been focusing on cutting down what we used to call CCTV cameras, but which are much more accurately described as ‘facial recognition masts’.

We are living in a world where the answer to everything seems to be ‘increased surveillance’. Kids not learning fast enough in school? Track them more. Scared of terrorism? Add more surveillance into the lives of everyday citizens. And on and on.

In an essay earlier this year, Maciej Cegłowski riffed on all of this, reflecting on what he calls ‘ambient privacy’:

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent.

Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

Maciej Cegłowski

One of the difficulties in resisting the ‘Silicon Valley narrative’ and Big Tech’s complicity with governments is the danger of coming across as a neo-Luddite. Without looking very closely to understand what’s going on (and having some time to reflect), it can all look like the inevitable march of progress.

So, without necessarily an answer to all this, I guess the best thing is, like Kate, to ask for help. What can we do here? What practical steps can we take? Comments are open.

The drawbacks of Artificial Intelligence

It’s really interesting to do philosophical thought experiments with kids. For example, the trolley problem, a staple of undergraduate Philosophy courses, is also accessible to children from a fairly young age.

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:

  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option?

With the advent of autonomous vehicles, these are no longer idle questions. The vehicles, which have to make split-second decisions, may have to decide whether to hit a pram containing a baby, or swerve and hit a couple of pensioners. Due to cultural differences, even that’s not something that can be easily programmed, as the diagram below demonstrates.

Self-driving cars: pedestrians vs passengers

For two countries that are so close together, it’s really interesting that Japan and China are at opposite ends of the spectrum when it comes to saving passengers or pedestrians!

The authors of the paper cited in the article are careful to point out that countries shouldn’t simply create laws based on popular opinion:

Edmond Awad, an author of the paper, brought up the social status comparison as an example. “It seems concerning that people found it okay to a significant degree to spare higher status over lower status,” he said. “It’s important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’” The results, he said, should be used by industry and government as a foundation for understanding how the public would react to the ethics of different design and policy decisions.

This is why we need more people with a background in the Humanities in tech, and why we need to be having a real conversation about ethics and AI.

Of course, that’s easier said than done, particularly when the companies that are in a position to make significant strides in this regard have near-monopolies in their fields and are pulling in eye-watering amounts of money. A recent example of this came when Google convened an AI ethics committee that was attacked as a smokescreen:

Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just “ethics washing,” a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, “Look, we’re doing something.” It deflects criticism, and because the boards lack any power, it means the companies don’t change.

 […]

“It’s not that people are against governance bodies, but we have no transparency into how they’re built,” [Rumman] Chowdhury [a data scientist and lead for responsible AI at management consultancy Accenture] tells The Verge. With regard to Google’s most recent board, she says, “This board cannot make changes, it can just make suggestions. They can’t talk about it with the public. So what oversight capabilities do they have?”

As we saw around privacy, it takes a trusted multi-national body like the European Union to create a regulatory framework like GDPR for these issues. Thankfully, they’ve started that process by releasing guidelines containing seven requirements to create trustworthy AI:

  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  4. Transparency: The traceability of AI systems should be ensured.
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The problem isn’t that people are going out of their way to build malevolent systems to rob us of our humanity. As usual, bad things happen because of more mundane requirements. For example, The Guardian has recently reported on concerns around predictive policing and hospitals using AI to predict everything from no-shows to risk of illness.

When we throw facial recognition into the mix, things get particularly scary. It’s all very well for Taylor Swift to use this technology to identify stalkers at her concerts, but given its massive drawbacks, perhaps we should restrict facial recognition somehow?

Human bias can seep into AI systems. Amazon abandoned a recruiting algorithm after it was shown to favor men’s resumes over women’s; researchers concluded an algorithm used in courtroom sentencing was more lenient to white people than to black people; a study found that mortgage algorithms discriminate against Latino and African American borrowers.
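As a toy illustration of how that seepage happens, loosely echoing the recruiting example above, consider a ‘model’ that simply learns to repeat the majority decision humans made in the past. The data and keywords below are entirely made up and bear no relation to any real company’s system:

```python
# Made-up data: how a model trained on historical decisions reproduces their bias.
from collections import Counter

# Each record: (keyword found on a CV, decision historically made by humans)
past_decisions = [
    ("women's chess club", "reject"),
    ("women's chess club", "reject"),
    ("chess club", "hire"),
    ("chess club", "hire"),
    ("chess club", "reject"),
]

def train(records):
    """'Learn' the majority historical decision for each keyword."""
    by_keyword = {}
    for keyword, decision in records:
        by_keyword.setdefault(keyword, Counter())[decision] += 1
    return {kw: counts.most_common(1)[0][0] for kw, counts in by_keyword.items()}

model = train(past_decisions)
print(model)  # {"women's chess club": 'reject', 'chess club': 'hire'}
```

The model is never told anything about gender; it just faithfully reproduces whatever pattern produced the historical labels, which is precisely how bias ‘seeps in’.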

Facial recognition might be a cool way to unlock your phone, but the kind of micro-expression analysis that made for great television in the series Lie to Me is now easily exploited in what is expected to become a $20bn industry.

The trouble with all of this is that it’s very difficult for us as individuals to make a difference here. The problem needs to be tackled at a much higher level, as with GDPR. That will take time, and meanwhile the use of AI is exploding. Be careful out there.


Also check out: