Surveillance, technology, and society
Last week, the London Metropolitan Police (‘the Met’) proudly announced that they’ve begun using ‘LFR’, which is their neutral-sounding acronym for something incredibly invasive to the privacy of everyday people in Britain’s capital: Live Facial Recognition.
It’s obvious that the Met expect some pushback here:
The Met will begin operationally deploying LFR at locations where intelligence suggests we are most likely to locate serious offenders. Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences.
At a deployment, cameras will be focused on a small, targeted area to scan passers-by. The cameras will be clearly signposted and officers deployed to the operation will hand out leaflets about the activity. The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.

London Metropolitan Police
Note the talk of ‘intelligence’ and ‘bespoke watch lists’, as well as promises that LFR will not be linked to any other systems. (ANPR, for those not familiar with it, is ‘Automatic Number Plate Recognition’.) This, of course, is the thin end of the wedge and how these things start — in a ‘targeted’ way. They’re expanded later, often when the fuss has died down.
Meanwhile, a lot of controversy surrounds an app called Clearview AI which scrapes publicly-available data (e.g. Twitter or YouTube profiles) and applies facial recognition algorithms. It’s already in use by law enforcement in the USA.
The size of the Clearview database dwarfs others in use by law enforcement. The FBI’s own database, which taps passport and driver’s license photos, is one of the largest, with over 641 million images of US citizens.
The Clearview app isn’t available to the public, but the Times says police officers and Clearview investors think it will be in the future.
The startup said in a statement Tuesday that its “technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public.”

Edward Moyer (CNET)
So there we are again: the technology is ‘intended’ for one purpose, but the general feeling is that it will leak out into others. Imagine the situation if anyone could identify almost anyone on the planet simply by pointing their smartphone at them for a few seconds.
This is a huge issue, and one that politicians and lawmakers on both sides of the Atlantic are both ill-equipped to deal with and particularly concerned about. As the BBC reports, the European Commission is considering a five-year ban on facial recognition in public spaces while it figures out how to regulate the technology:
The Commission set out its plans in an 18-page document, suggesting that new rules will be introduced to bolster existing regulation surrounding privacy and data rights.
It proposed imposing obligations on both developers and users of artificial intelligence, and urged EU countries to create an authority to monitor the new rules.
During the ban, which would last between three and five years, “a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed”.

BBC News
I can’t see the genie going back in this particular bottle and, as Ian Welsh puts it, this is the end of public anonymity. He gives examples of the potential for all kinds of abuse, from an increase in rape, to abuse by corporations, to an increase in parental surveillance of children.
The larger issue is this: people who are constantly under surveillance become super conformers out of defense. Without true private time, the public persona and the private personality tend to collapse together. You need a backstage — by yourself and with a small group of friends to become yourself. You need anonymity.
When everything you do is open to criticism by everyone, you will become timid and conforming.
When governments, corporations, schools and parents know everything, they will try to control everything. This often won’t be for your benefit.

Ian Welsh
We already know that self-censorship is the worst kind of censorship, and live facial recognition means we’re going to have to do a whole lot more of it in the near future.
So what can we do about it? Welsh thinks that this technology should be made illegal, which is one option. However, you can’t un-invent technologies, so live facial recognition is going to be used (lawfully) by some organisations, even if it is restricted to state operatives. I’m not sure whether that would be better or worse than everyone having it.
At a recent workshop I ran, I was talking during one of the breaks to one person who couldn’t really see the problem I had raised about surveillance capitalism. I have to wonder whether they would have a problem with live facial recognition. From our conversation, I’d suspect not.
Remember that facial recognition is not 100% accurate and (realistically) never can be. So there will be false positives. Let’s say your face ends up on a ‘watch list’ or a ‘bad actor’ database shared with many different agencies and retailers. All of a sudden, you’ve got yourself a very big problem.
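The arithmetic behind false positives is worth spelling out. Here is a minimal sketch of the base-rate problem, using entirely illustrative numbers rather than any vendor's statistics: when genuine watch-list matches are rare, even a highly accurate system produces mostly false alarms.

```python
# Base-rate sketch: even an 'accurate' face-matching system produces
# mostly false alarms when genuine matches are rare.
# All figures below are illustrative assumptions, not vendor statistics.

def false_alarm_rate(population, watchlist_hits,
                     true_positive_rate, false_positive_rate):
    """Return the share of all alerts that are false alarms."""
    true_alerts = watchlist_hits * true_positive_rate
    false_alerts = (population - watchlist_hits) * false_positive_rate
    return false_alerts / (true_alerts + false_alerts)

# 100,000 passers-by, 10 of whom are actually on the watch list,
# scanned by a system that is right 99% of the time either way.
share = false_alarm_rate(100_000, 10, 0.99, 0.01)
print(f"{share:.1%} of alerts point at innocent people")
```

In this illustrative scenario, roughly 99% of alerts point at innocent passers-by, even though the system is ‘right’ 99% of the time in each individual decision.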
As BuzzFeed News reports, around half of US retailers are either using live facial recognition, or have plans to use it. At the moment, companies like FaceFirst do not facilitate the sharing of data across their clients, but you can see what’s coming next:
[Peter Trepp, CEO of FaceFirst] said the database is not shared with other retailers or with FaceFirst directly. All retailers have their own policies, but Trepp said often stores will offer not to press charges against apprehended shoplifters if they agree to opt into the store’s shoplifter database. The files containing the images and identities of people on “the bad guy list” are encrypted and only accessible to retailers using their own systems, he said.
FaceFirst automatically purges visitor data that does not match information in a criminal database every 14 days, which is the company’s minimum recommendation for auto-purging data. It’s up to the retailer if apprehended shoplifters or people previously on the list can later opt out of the database.

Leticia Miranda (BuzzFeed News)
There is no opt-in, no consent sought or gathered by retailers. This is a perfect example of technology being light years ahead of lawmaking.
This is all well and good in situations where adults are going into public spaces, but what about schools, where children are often only one step above prisoners in terms of the rights they enjoy?
Recode reports that, in schools, the surveillance threat to students goes beyond facial recognition. So long as authorities know generally what a student looks like, they can track them everywhere they go:
Appearance Search can find people based on their age, gender, clothing, and facial characteristics, and it scans through videos like facial recognition tech — though the company that makes it, Avigilon, says it doesn’t technically count as a full-fledged facial recognition tool.
Even so, privacy experts told Recode that, for students, the distinction doesn’t necessarily matter. Appearance Search allows school administrators to review where a person has traveled throughout campus — anywhere there’s a camera — using data the system collects about that person’s clothing, shape, size, and potentially their facial characteristics, among other factors. It also allows security officials to search through camera feeds using certain physical descriptions, like a person’s age, gender, and hair color. So while the tool can’t say who the person is, it can find where else they’ve likely been.

Rebecca Heilweil (Recode)
This is a good example of the boundaries of technology that may-or-may-not be banned at some point in the future. The makers of Appearance Search, Avigilon, claim that it’s not facial recognition technology because the images it captures and analyses are not tied to the identity of a particular person:
Avigilon’s surveillance tool exists in a gray area: Even privacy experts are conflicted over whether or not it would be accurate to call the system facial recognition. After looking at publicly available content about Avigilon, Leong said it would be fairer to call the system an advanced form of characterization, meaning that the system is making judgments about the attributes of that person, like what they’re wearing or their hair, but it’s not actually claiming to know their identity.

Rebecca Heilweil (Recode)
You can give as many examples of the technology being used for good as you want — there’s one in this article about how the system helped discover a girl was being bullied, for example — but it’s still intrusive surveillance. There are other ways of getting to the same outcome.
We do not live in a world of certainty. We live in a world where things are ambiguous, unsure, and sometimes a little dangerous. While we should seek to protect one another, and especially those who are most vulnerable in society, we should think about the harm we’re doing by forcing people to live the totality of their lives in public.
What does that do to our conceptions of self? To creativity? To activism? Live facial recognition technology, as well as those technologies that exist in a grey area around it, is the hot-button issue of the 2020s.
Image by Kirill Sharkovski. Quotation-as-title by Elizabeth Bibesco.
So said Daniel J. Boorstin. It’s been an interesting week for those, like me, who follow the development of interaction between humans and machines. Specifically, people seem shocked that voice assistants are being used for health questions, and that the companies who make them employ people to listen to samples of voice recordings in order to improve them.
Before diving into that, let’s just zoom out a bit and remind ourselves that the average level of digital literacies in the general population is pretty poor. Sometimes I wonder how on earth VC-backed companies manage to burn through so much cash. Then I remember the contortions that those who design visual interfaces go through so that people don’t have to think.
Discussing ‘fake news’ and our information literacy problem in Forbes, you can almost feel Kalev Leetaru’s eye-roll when he says:
It is the accepted truth of Silicon Valley that every problem has a technological solution.
Most importantly, in the eyes of the Valley, every problem can be solved exclusively through technology without requiring society to do anything on its own. A few algorithmic tweaks, a few extra lines of code and all the world’s problems can be simply coded out of existence.

Kalev Leetaru
It’s somewhat tangential to the point I want to make in this article, but Cory Doctorow makes a good point in this regard about fake news for Locus:
Fake news is an instrument for measuring trauma, and the epistemological incoherence that trauma creates – the justifiable mistrust of the establishment that has nearly murdered our planet and that insists that making the richest among us much, much richer will benefit everyone, eventually.

Cory Doctorow
Before continuing, I’d just like to say that I’ve got some skin in the voice assistant game, given that our home has no fewer than six devices that use the Google Assistant (ten if you count smartphones and tablets).
Voice assistants are pretty amazing when you know exactly what you want and can form a coherent query. It’s essentially just clicking the top link on a Google search result, without any of the effort of pointing and clicking. “Hey Google, do I need an umbrella today?”
However, some people are suspicious of voice assistants to a degree that borders on the superstitious. There are perhaps some valid reasons if you know your tech, but if you’re of the opinion that your voice assistant is ‘always recording’ and literally sending everything to Amazon, Google, Apple, and/or Donald Trump then we need to have words. Just think about that for a moment, realise how ridiculous it is, and move on.
This week an article by VRT NWS stoked fears like these. It was cleverly written so that those who read it quickly could easily draw the conclusion that Google is listening to everything you say. However, let me carve out the key paragraphs:
Why is Google storing these recordings and why does it have employees listening to them? They are not interested in what you are saying, but the way you are saying it. Google’s computer system consists of smart, self-learning algorithms. And in order to understand the subtle differences and characteristics of the Dutch language, it still needs to learn a lot.
Speech recognition automatically generates a script of the recordings. Employees then have to double check to describe the excerpt as accurately as possible: is it a woman’s voice, a man’s voice or a child? What do they say? They write out every cough and every audible comma. These descriptions are constantly improving Google’s search engines, which results in better reactions to commands. One of our sources explains how this works.

VRT NWS
Every other provider of speech recognition products does this. Obviously. How else would you manage to improve voice recognition in real-world situations? What VRT NWS did was to get a sub-contractor to break a Non-Disclosure Agreement (and violate GDPR) to share recordings.
Google responded on their blog The Keyword, saying:
As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language. These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.
We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.
We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.

The Keyword
As I’ve said before, due to the GDPR actually having teeth (British Airways was fined £183m last week) I’m a lot happier to share my data with large companies than I was before the legislation came in. That’s the whole point.
The other big voice assistant story, in the UK at least, was that the National Health Service (NHS) is partnering with Amazon Alexa to offer health advice. The BBC reports:
From this week, the voice-assisted technology is automatically searching the official NHS website when UK users ask for health-related advice.
The government in England said it could reduce demand on the NHS.
Privacy campaigners have raised data protection concerns but Amazon say all information will be kept confidential.
The partnership was first announced last year and now talks are under way with other companies, including Microsoft, to set up similar arrangements.
Previously the device provided health information based on a variety of popular responses.
The use of voice search is on the increase and is seen as particularly beneficial to vulnerable patients, such as elderly people and those with visual impairment, who may struggle to access the internet through more traditional means.

The BBC
So long as this is available to all types of voice assistants, this is great news. The number of people I know, including family members, who have convinced themselves they’ve got serious problems by spending ages searching their symptoms, is quite frightening. Getting sensible, prosaic advice is much better.
Iliana Magra writes in The New York Times that privacy campaigners are concerned about Amazon setting up a health care division, but that there are tangible benefits to certain sections of the population.
The British health secretary, Matt Hancock, said Alexa could help reduce strain on doctors and pharmacists. “We want to empower every patient to take better control of their health care,” he said in a statement, “and technology like this is a great example of how people can access reliable, world-leading N.H.S. advice from the comfort of their home.”
His department added that voice-assistant advice would be particularly useful for “the elderly, blind and those who cannot access the internet through traditional means.”

Iliana Magra
I’m not dismissing the privacy issues, of course not. But what I’ve found, especially recently, is that the knowledge, skills, and expertise required to be truly ‘Google-free’ (or the equivalent) is an order of magnitude greater than what is realistically possible for the general population.
It might be fatalistic to ask the following question, but I’ll do it anyway: who exactly do we expect to be building these things? Mozilla, one of the world’s largest tech non-profits, is conspicuously absent from these conversations, and somehow I don’t think people are going to trust governments to get involved.
For years, techies have talked about ‘personal data vaults’ where you could share information in a granular way without being tracked. The BBC is currently trialling ‘BBC Box’, which could help with some of this:
With a secure Databox at its heart, BBC Box offers something very unusual and potentially important: it is a physical device in the person’s home onto which personal data is gathered from a range of sources, although of course (and as mentioned above) it is only collected with the participants explicit permission, and processed under the person’s control.
Personal data is stored locally on the box’s hardware and once there, it can be processed and added to by other programmes running on the box – much like apps on a smartphone. The results of this processing might, for example be a profile of the sort of TV programmes someone might like or the sort of theatre they would enjoy. This is stored locally on the box – unless the person explicitly chooses to share it. No third party, not even the BBC itself, can access any data in ‘the box’ unless it is authorised by the person using it, offering a secure alternative to existing services which rely on bringing large quantities of personal data together in one place – with limited control by the person using it.

The BBC
It’s an interesting concept and, if they can get the user experience right, potentially groundbreaking. Eventually, of course, it will be in your smartphone, which means that device really will be a ‘digital self’.
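The permission model behind the ‘personal data vault’ idea can be sketched in a few lines: data stays local, and every read by a third party needs explicit, per-consumer authorisation from the owner. This is a hypothetical toy illustrating the concept, not BBC Box’s actual API; all names are made up.

```python
# Toy sketch of a 'personal data vault': local storage plus
# explicit, per-consumer read permissions. Illustrative only.

class DataVault:
    def __init__(self):
        self._data = {}       # stays on the owner's device
        self._grants = set()  # (consumer, key) pairs the owner approved

    def store(self, key, value):
        self._data[key] = value  # kept locally, never pushed out

    def grant(self, consumer, key):
        self._grants.add((consumer, key))

    def read(self, consumer, key):
        if (consumer, key) not in self._grants:
            raise PermissionError(f"{consumer} not authorised for {key}")
        return self._data[key]

vault = DataVault()
vault.store("tv_profile", ["drama", "documentary"])
vault.grant("bbc", "tv_profile")
print(vault.read("bbc", "tv_profile"))   # authorised: returns the profile
# vault.read("advertiser", "tv_profile") # would raise PermissionError
```

The key design point is that the default is deny: nothing leaves the vault unless the owner has granted that specific consumer access to that specific piece of data.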
You can absolutely opt-out of whatever you want. For example, I opt out of Facebook’s products (including WhatsApp and Instagram). You can point out to others the reasons for that, but at some point you have to realise it’s an opinion, a lifestyle choice, an ideology. Not everyone wants to be a tech vegan, or live their lives under those who act as though they are one.
I’m fond of the above quotation by Douglas Adams that I’ve used for the title of this article. It serves as a reminder to myself that I’ve now reached an age when I’ll look at a technology and wonder: why?
Despite this, I’m quite excited about the potential of two technologies that will revolutionise our digital world both in our homes and offices and when we’re out-and-about. Those technologies? Wi-Fi 6, as it’s known colloquially, and 5G networks.
Let’s take Wi-Fi 6 first which, as Chuong Nguyen explains in an article for Digital Trends, isn’t just about faster speeds:
A significant advantage for Wi-Fi 6 devices is better battery life. Though the standard promotes Internet of Things (IoT) devices being able to last for weeks, instead of days, on a single charge as a major benefit, the technology could even prove to be beneficial for computers, especially since Intel’s latest 9th-generation processors for laptops come with Wi-Fi 6 support.
Likewise, Alexis Madrigal, writing in The Atlantic, explains that mobile 5G networks bring benefits other than streaming YouTube videos at ever-higher resolutions, but are quite a technological hurdle:
The fantastic 5G speeds require higher-frequency, shorter-wavelength signals. And the shorter the wavelength, the more likely it is to be blocked by obstacles in the world.
Ideally, [mobile-associated companies] would like a broader set of customers than smartphone users. So the companies behind 5G are also flaunting many other applications for these networks, from emergency services to autonomous vehicles to every kind of “internet of things” gadget.
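The physics behind Madrigal’s point is the simple relation that wavelength equals the speed of light divided by frequency: millimetre-wave 5G signals are an order of magnitude shorter, and therefore more easily blocked by walls and foliage, than typical 4G signals. A quick check (the band figures are typical examples, not a definitive band plan):

```python
# Wavelength = speed of light / frequency.
# Frequencies below are typical examples of the bands involved.
C = 299_792_458  # speed of light, m/s

def wavelength_cm(freq_hz):
    """Return the wavelength in centimetres for a given frequency."""
    return C / freq_hz * 100  # metres -> centimetres

print(f"800 MHz (typical 4G band): {wavelength_cm(800e6):.1f} cm")
print(f"28 GHz (5G mmWave band):   {wavelength_cm(28e9):.1f} cm")
```

A roughly 37 cm wave diffracts around everyday obstacles far better than a roughly 1 cm one, which is why mmWave 5G needs so many more, and more closely spaced, relays.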
If you’ve been following the kerfuffle around the UK using Huawei’s technology for its 5G infrastructure, you’ll already know about the politics and security issues at stake here.
Sue Halpern, writing in The New Yorker, outlines the claimed benefits:
Two words explain the difference between our current wireless networks and 5G: speed and latency. 5G—if you believe the hype—is expected to be up to a hundred times faster. (A two-hour movie could be downloaded in less than four seconds.) That speed will reduce, and possibly eliminate, the delay—the latency—between instructing a computer to perform a command and its execution. This, again, if you believe the hype, will lead to a whole new Internet of Things, where everything from toasters to dog collars to dialysis pumps to running shoes will be connected. Remote robotic surgery will be routine, the military will develop hypersonic weapons, and autonomous vehicles will cruise safely along smart highways. The claims are extravagant, and the stakes are high. One estimate projects that 5G will pump twelve trillion dollars into the global economy by 2035, and add twenty-two million new jobs in the United States alone. This 5G world, we are told, will usher in a fourth industrial revolution.
In China, which has installed three hundred and fifty thousand 5G relays—about ten times more than the United States—enhanced geolocation, coupled with an expansive network of surveillance cameras, each equipped with facial-recognition technology, has enabled authorities to track and subordinate the country’s eleven million Uighur Muslims. According to the Times, “the practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.”
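Halpern’s ‘two-hour movie in under four seconds’ claim can be sanity-checked with simple arithmetic. Assuming a two-hour HD movie of around 4 GB (my assumption, not a figure from her article):

```python
# Throughput needed for the 'two-hour movie in under four seconds' claim.
movie_gb = 4  # assumed size of a two-hour HD movie, in gigabytes
seconds = 4

required_gbps = movie_gb * 8 / seconds  # gigabytes -> gigabits, per second
print(f"Requires a sustained {required_gbps:.0f} Gbit/s link")
```

An 8 Gbit/s sustained link sits within 5G’s theoretical peak rates but far beyond typical real-world mobile speeds, which is why the ‘if you believe the hype’ caveat matters.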
Automated racism, now there’s a thing. It turns out that technologies amplify our existing prejudices. Perhaps we should be a bit more careful and ask more questions before we march down the road of technological improvement, especially given that 5G could affect our ability to predict major storms. I’m reading Low-tech Magazine: The Printed Website at the moment, and it’s pretty eye-opening about what we could be doing instead.
Also check out:
- The circular economy could create an enormous jobs boom (Fast Company) — “The circular economy presents an opportunity to weave together initiatives on innovation, productivity, and job creation with environmental and climate objectives.”
- The Vision for a more Decentralized Web (Decentralize.today) — “Today, we see that we have gone too far in the direction of centralization. We have our data abused and sold by the corporations who mostly care about growth.”
- Local-first software: you own your data, in spite of the cloud (Ink & Switch) — “Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.”