
Saturday scrapings

Every week, I go back through the links I’ve saved, pick out the best ones, and share them here. This week is perhaps even more eclectic than usual. Enjoy!


Marcus Henderson

Meet the Farmer Behind CHAZ’s Vegetable Gardens

Marcus was the first to start gardening in the park, though he was quickly joined by friends and strangers. This isn’t the work of a casual amateur; Henderson has an Energy Resources Engineering degree from Stanford University, a Master’s degree in Sustainability in the Urban Environment, and years of experience working in sustainable agriculture. His Instagram shows him hard at work on various construction and gardening projects, and he’s done community development at organic farms around the world.

Matt Baume (The Stranger)

I love this short article about Marcus Henderson, the first person to start planting in Seattle’s Capitol Hill Autonomous Zone.


The Rich Are ‘Defunding’ Our Democracy

“Apparently,” comments [journalist David] Sirota, “we’re expected to be horrified by proposals to reduce funding for the militarized police forces that are violently attacking peaceful protesters — but we’re supposed to obediently accept the defunding of the police forces responsible for protecting the population from the wealthy and powerful.”

Sam Pizzigati (Inequality.org)

A lot of people have been shocked by the calls to ‘defund the police’ on the back of the Black Lives Matter protests. The situation is undoubtedly worse in the US, but I particularly liked this explainer image, which I came across via Mastodon:

[Image: a teapot labelled ‘Defund the police’ with multiple spouts pouring into cups labelled ‘Education’, ‘Universal healthcare’, ‘Youth services’, ‘Housing’, and ‘Other community investments’]

Peasants’ Revolt

Yet perhaps the most surprising feature of the revolt is that, in spite of the modern title (‘Peasants’ Revolt’ didn’t gain usage until the late nineteenth century), the people who animated the movement weren’t peasants at all. They were in many respects the village elite. True, they weren’t noble magnates, but they were constables, stewards and jurors. In short, people who were on the up and saw an opportunity to press their agenda.

Robert Winter

I love reading about things I used to teach, especially when they’re written by interesting people about whom I want to know more. This blog post is by Robert Winter, “philosopher and historian by training, Operations Director by pay cheque”. I discovered it as part of the #100DaysToOffload challenge, largely happening on the Fediverse, and to which I’m contributing.


Red blood cells

Three people with inherited diseases successfully treated with CRISPR

Two people with beta thalassaemia and one with sickle cell disease no longer require blood transfusions, which are normally used to treat severe forms of these inherited diseases, after their bone marrow stem cells were gene-edited with CRISPR.

Michael Le Page (New Scientist)

CRISPR is a way of doing gene editing within organisms. As far as I’m aware, this is one of the first times it’s been used to treat conditions in humans. I’m sure it won’t be the last.


Choose Your Own Fake News

Choose Your Own Fake News is an interactive “choose your own adventure” game. Play the game as Flora, Jo or Aida from East Africa, and navigate the world of disinformation and misinformation through the choices you make. Scrutinize news and information about job opportunities, vaccines and upcoming elections to make the right choices!

This is the kind of thing that the Mozilla Foundation does particularly well: either producing in-house, or funding, very specific web-based tools to teach people things. In this case, it’s spotting fake news. And it’s really good.


Why are Google and Apple dictating how European democracies fight coronavirus?

The immediate goal for governments and tech companies is to strike the right balance between privacy and the effectiveness of an application to limit the spread of Covid-19. This requires continuous collaboration between the two, with the private sector learning from the experience of national health authorities and adjusting accordingly. Latvia, together with the rest of Europe, stands firm in defending privacy, and is committed to respecting both the individual’s right to privacy and health while applying its own solutions to combat Covid-19.

Ieva Ilves (The Guardian)

This is an article written by an adviser to the president of Latvia on information and digital policy. They explain some of the nuance behind the centralised vs decentralised contact tracing app models, which I hadn’t really thought about.


Illustration of Lévy walks

Random Search Wired Into Animals May Help Them Hunt

Lévy walks are now seen as a movement pattern that a nervous system can produce in the absence of useful sensory or mnemonic information, when it is an animal’s most advantageous search strategy. Of course, many animals may never employ a Lévy walk: If a polar bear can smell a seal, or a cheetah can see a gazelle, the animals are unlikely to engage in a random search strategy. “We expect the adaptation for Lévy walks to have appeared only where they confer practical advantages,” Viswanathan said.

Liam Drew (Quanta Magazine)

If you’ve watched wildlife documentaries, you probably know about Lévy walks (or ‘flights’). This longish article gives a fascinating insight into the origin of the theory and how it can be useful in protecting different species.
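For the curious, a Lévy walk is straightforward to simulate: step lengths are drawn from a heavy-tailed power-law distribution (so most steps are short, with occasional very long jumps), while directions are chosen uniformly at random. Here’s a minimal Python sketch; the exponent `mu` and minimum step length `l_min` are illustrative choices, not values from the article:

```python
import math
import random

def levy_walk(steps=1000, mu=2.0, l_min=1.0, seed=42):
    """Simulate a 2-D Levy walk.

    Step lengths follow a power law P(l) ~ l^-mu (l >= l_min),
    sampled via the inverse CDF of a Pareto distribution;
    headings are uniform random in [0, 2*pi).
    """
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        # Inverse-CDF sampling: u in (0, 1] -> l = l_min * u^(-1/(mu-1))
        u = 1.0 - rng.random()
        l = l_min * u ** (-1.0 / (mu - 1.0))
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        path.append((x, y))
    return path
```

Plotting the resulting path shows the characteristic pattern of tight local clusters connected by rare long jumps, which is what makes the strategy effective for searching without sensory information.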


A plan to turn the atmosphere into one, enormous sensor

One of AtmoSense’s first goals will be to locate and study phenomena at or close to Earth’s surface—storms, earthquakes, volcanic eruptions, mining operations and “mountain waves”, which are winds associated with mountain ranges. The aim is to see if atmospheric sensing can outperform existing methods: seismographs for earthquakes, Doppler weather radar for storms and so on.

The Economist

This sounds potentially game-changing. I can see the positives, but I wonder what the negatives will be?


Paths of desire: lockdown has lent a new twist to the trails we leave behind

Desire paths aren’t anything new – the term has been traced back to the French philosopher Gaston Bachelard, who wrote of “lignes de désir” in his 1958 book The Poetics of Space. Nature author Robert Macfarlane has written more recently about the inherent poetry of the paths. In his 2012 book The Old Ways: A Journey on Foot, Macfarlane calls them “elective easements” and says: “Paths are human; they are traces of our relationships.” Desire paths have been created by enthusiastic dogs in back gardens, by superstitious humans avoiding scaffolding and by students seeking shortcuts to class. Yet while illicit trails may have marked the easier (ie shorter) route for centuries, the pandemic has turned them into physical markers of our distance. Desire paths are no longer about making life easier for ourselves, but about preserving life for everyone.

Amelia Tait (The Guardian)

I’ve used desire paths as a metaphor many times in presentations and workshops over the last decade. This is an article that specifically talks about how they’ve sprung up during the pandemic.


Header image by Hans Braxmeier

To others we are not ourselves but a performer in their lives cast for a part we do not even know that we are playing

Surveillance, technology, and society

Last week, the London Metropolitan Police (‘the Met’) proudly announced that they’ve begun using ‘LFR’, which is their neutral-sounding acronym for something incredibly invasive to the privacy of everyday people in Britain’s capital: Live Facial Recognition.

It’s obvious that the Met expect some pushback here:

The Met will begin operationally deploying LFR at locations where intelligence suggests we are most likely to locate serious offenders. Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences. 

At a deployment, cameras will be focused on a small, targeted area to scan passers-by. The cameras will be clearly signposted and officers deployed to the operation will hand out leaflets about the activity. The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.

London Metropolitan Police

Note the talk of ‘intelligence’ and ‘bespoke watch lists’, as well as promises that LFR will not be linked to any other systems. (ANPR, for those not familiar with it, is ‘Automatic Number Plate Recognition’.) This, of course, is the thin end of the wedge and how these things start — in a ‘targeted’ way. They’re expanded later, often when the fuss has died down.


Meanwhile, a lot of controversy surrounds an app called Clearview AI which scrapes publicly-available data (e.g. Twitter or YouTube profiles) and applies facial recognition algorithms. It’s already in use by law enforcement in the USA.

The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, with over 641 million images of US citizens.

The Clearview app isn’t available to the public, but the Times says police officers and Clearview investors think it will be in the future.

The startup said in a statement Tuesday that its “technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public.” 

Edward Moyer (CNET)

So there we are again: the technology is ‘intended’ for one purpose, but the general feeling is that it will leak out into others. Imagine the situation if anyone could identify almost anyone on the planet simply by pointing their smartphone at them for a few seconds.

This is a huge issue, and one that politicians and lawmakers on both sides of the Atlantic are both ill-equipped to deal with and particularly concerned about. As the BBC reports, the European Commission is considering a five-year ban on facial recognition in public spaces while it figures out how to regulate the technology:

The Commission set out its plans in an 18-page document, suggesting that new rules will be introduced to bolster existing regulation surrounding privacy and data rights.

It proposed imposing obligations on both developers and users of artificial intelligence, and urged EU countries to create an authority to monitor the new rules.

During the ban, which would last between three and five years, “a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed”.

BBC News

I can’t see the genie going back in this particular bottle and, as Ian Welsh puts it, this is the end of public anonymity. He gives examples of the potential for all kinds of abuse, from an increase in rape, to abuse by corporations, to an increase in parental surveillance of children.

The larger issue is this: people who are constantly under surveillance become super conformers out of defense. Without true private time, the public persona and the private personality tend to collapse together. You need a backstage — by yourself and with a small group of friends to become yourself. You need anonymity.

When everything you do is open to criticism by everyone, you will become timid and conforming.

When governments, corporations, schools and parents know everything, they will try to control everything. This often won’t be for your benefit.

Ian Welsh

We already know that self-censorship is the worst kind of censorship, and live facial recognition means we’re going to have to do a whole lot more of it in the near future.

So what can we do about it? Welsh thinks that this technology should be made illegal, which is one option. However, you can’t un-invent technologies. So live facial recognition is going to be used (lawfully) by some organisations, even if it were restricted to state operatives. I’m not sure if that’s better or worse than everyone having it?


At a recent workshop I ran, I was talking during one of the breaks to one person who couldn’t really see the problem I had raised about surveillance capitalism. I have to wonder if they would have a problem with live facial recognition? From our conversation, I’d suspect not.

Remember that facial recognition is not 100% accurate and (realistically) never can be, so there will be false positives. Let’s say your face ends up on a ‘watch list’ or a ‘bad actor’ database shared with many different agencies and retailers. All of a sudden, you’ve got yourself a very big problem.
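The false-positive problem is worse than intuition suggests, because of the base rate: almost everyone scanned is innocent, so even a small error rate flags far more innocent people than genuine matches. A back-of-envelope calculation makes the point (all numbers here are illustrative assumptions, not real LFR figures):

```python
def match_counts(population, watchlist_size, accuracy):
    """Back-of-envelope base-rate calculation for face matching.

    Assumes, for simplicity, a single symmetric accuracy figure:
    the system correctly matches a watchlisted face with
    probability `accuracy`, and wrongly flags an innocent face
    with probability `1 - accuracy`.
    """
    innocents = population - watchlist_size
    false_positives = innocents * (1 - accuracy)
    true_positives = watchlist_size * accuracy
    return false_positives, true_positives

# Scanning 100,000 passers-by for 100 wanted faces at 99% accuracy:
fp, tp = match_counts(100_000, 100, 0.99)
# fp = 999.0 innocent people flagged vs tp = 99.0 genuine matches
```

Even at 99% accuracy, roughly ten innocent people are flagged for every genuine match, and real-world deployments scan far more than 100,000 faces.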


As BuzzFeed News reports, around half of US retailers are either using live facial recognition, or have plans to use it. At the moment, companies like FaceFirst do not facilitate the sharing of data across their clients, but you can see what’s coming next:

[Peter Trepp, CEO of FaceFirst] said the database is not shared with other retailers or with FaceFirst directly. All retailers have their own policies, but Trepp said often stores will offer not to press charges against apprehended shoplifters if they agree to opt into the store’s shoplifter database. The files containing the images and identities of people on “the bad guy list” are encrypted and only accessible to retailers using their own systems, he said.

FaceFirst automatically purges visitor data that does not match information in a criminal database every 14 days, which is the company’s minimum recommendation for auto-purging data. It’s up to the retailer if apprehended shoplifters or people previously on the list can later opt out of the database.

Leticia Miranda (BuzzFeed News)

There is no opt-in, no consent sought or gathered by retailers. This is a perfect example of technology being light years ahead of lawmaking.


This is all well and good in situations where adults are going into public spaces, but what about schools, where children are often only one step above prisoners in terms of the rights they enjoy?

Recode reports that, in schools, the surveillance threat to students goes beyond facial recognition. So long as authorities know generally what a student looks like, they can track them everywhere they go:

Appearance Search can find people based on their age, gender, clothing, and facial characteristics, and it scans through videos like facial recognition tech — though the company that makes it, Avigilon, says it doesn’t technically count as a full-fledged facial recognition tool.

Even so, privacy experts told Recode that, for students, the distinction doesn’t necessarily matter. Appearance Search allows school administrators to review where a person has traveled throughout campus — anywhere there’s a camera — using data the system collects about that person’s clothing, shape, size, and potentially their facial characteristics, among other factors. It also allows security officials to search through camera feeds using certain physical descriptions, like a person’s age, gender, and hair color. So while the tool can’t say who the person is, it can find where else they’ve likely been.

Rebecca Heilweil (Recode)

This is a good example of the boundaries of technology that may-or-may-not be banned at some point in the future. The makers of Appearance Search, Avigilon, claim that it’s not facial recognition technology because the images it captures and analyses are not tied to the identity of a particular person:

Avigilon’s surveillance tool exists in a gray area: Even privacy experts are conflicted over whether or not it would be accurate to call the system facial recognition. After looking at publicly available content about Avigilon, Leong said it would be fairer to call the system an advanced form of characterization, meaning that the system is making judgments about the attributes of that person, like what they’re wearing or their hair, but it’s not actually claiming to know their identity.

Rebecca Heilweil (Recode)

You can give as many examples of the technology being used for good as you want — there’s one in this article about how the system helped discover a girl was being bullied, for example — but it’s still intrusive surveillance. There are other ways of getting to the same outcome.


We do not live in a world of certainty. We live in a world where things are ambiguous, unsure, and sometimes a little dangerous. While we should seek to protect one another, and especially those who are most vulnerable in society, we should think about the harm we’re doing by forcing people to live the totality of their lives in public.

What does that do to our conceptions of self? To creativity? To activism? Live facial recognition technology, as well as those technologies that exist in a grey area around it, is the hot-button issue of the 2020s.


Image by Kirill Sharkovski. Quotation-as-title by Elizabeth Bibesco.
