I am not fond of expecting catastrophes, but there are cracks in the universe

So said Sydney Smith. Let’s talk about surveillance. Let’s talk about surveillance capitalism and surveillance humanitarianism. But first, let’s talk about machine learning and algorithms; in other words, let’s talk about what happens after all of that data is collected.

Writing in The Guardian, Sarah Marsh investigates local councils using “automated guidance systems” in an attempt to save money.

The systems are being deployed to provide automated guidance on benefit claims, prevent child abuse and allocate school places. But concerns have been raised about privacy and data security, the ability of council officials to understand how some of the systems work, and the difficulty for citizens in challenging automated decisions.

Sarah Marsh

The trouble is, they’re not particularly effective:

It has emerged North Tyneside council has dropped TransUnion, whose system it used to check housing and council tax benefit claims. Welfare payments to an unknown number of people were wrongly delayed when the computer’s “predictive analytics” erroneously identified low-risk claims as high risk.

Meanwhile, Hackney council in east London has dropped Xantura, another company, from a project to predict child abuse and intervene before it happens, saying it did not deliver the expected benefits. And Sunderland city council has not renewed a £4.5m data analytics contract for an “intelligence hub” provided by Palantir.

Sarah Marsh
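It’s worth making that failure mode concrete. Below is a minimal, entirely hypothetical sketch in Python; the claim fields, scores, and threshold are invented for illustration and bear no relation to TransUnion’s actual product. The point is simply that an imperfect scoring model plus a hard cut-off guarantees that some low-risk people get flagged:

```python
# A deliberately simplified, hypothetical sketch of threshold-based risk
# scoring. This is NOT TransUnion's actual system; it illustrates the
# failure mode: an imperfect model plus a hard cut-off inevitably flags
# some legitimate claims, and each false positive is a delayed payment.

from dataclasses import dataclass

@dataclass
class Claim:
    claimant_id: str
    risk_score: float          # output of some statistical model, 0.0-1.0
    actually_fraudulent: bool  # ground truth, unknowable at decision time

HIGH_RISK_THRESHOLD = 0.7      # arbitrary cut-off chosen by the vendor

def triage(claims):
    """Pay low-scoring claims automatically; hold the rest for review."""
    paid = [c for c in claims if c.risk_score < HIGH_RISK_THRESHOLD]
    held = [c for c in claims if c.risk_score >= HIGH_RISK_THRESHOLD]
    return paid, held

claims = [
    Claim("A", 0.05, False),
    Claim("B", 0.85, False),   # false positive: legitimate, but delayed
    Claim("C", 0.92, True),    # true positive: correctly held
]

paid, held = triage(claims)
wrongly_delayed = [c for c in held if not c.actually_fraudulent]
print(f"{len(wrongly_delayed)} legitimate claimant(s) wrongly delayed")
```

Tune the threshold down and more legitimate claims are delayed; tune it up and more fraud slips through. Either way, the trade-off is made by the vendor and the council, not by the claimant who bears the cost.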

When I was at Mozilla, a number of my colleagues had worked on the OFA (Obama For America) campaign. I remember one of them, a DevOps guy, expressing his concern that the infrastructure being built was all well and good while someone ‘friendly’ was in the White House, but wondering what would come next.

Well, we now know what comes next, on both sides of the Atlantic, and we can’t put that genie back in its bottle. Swingeing cuts by successive Conservative governments over here, coupled with the Brexit time-and-money pit, mean that there’s no attention or cash left.

If we stop and think about things for a second, we probably don’t want to live in a world where machines make decisions for us, based on algorithms devised by nerds. As Rose Eveleth discusses in a scathing article for Vox, this stuff isn’t ‘inevitable’ — nor does it constitute a process of ‘natural selection’:

Often consumers don’t have much power of selection at all. Those who run small businesses find it nearly impossible to walk away from Facebook, Instagram, Yelp, Etsy, even Amazon. Employers often mandate that their workers use certain apps or systems like Zoom, Slack, and Google Docs. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or, ‘I’m not on social media,’” says Rumman Chowdhury, a data scientist at Accenture. “You actually have to be so comfortable in your privilege that you can opt out of things.”

And so we’re left with a tech world claiming to be driven by our desires when those decisions aren’t ones that most consumers feel good about. There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the “natural evolution of technology.” But the “natural evolution of technology” was never a thing to begin with, and it’s time to question what “progress” actually means.

Rose Eveleth

I suppose the thing that concerns me the most is people in dire need being subjected to impersonal technology in order to receive vital, life-saving aid.

For example, Mark Latonero, writing in The New York Times, talks about the growing dangers around what he calls ‘surveillance humanitarianism’:

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.

Mark Latonero

It’s easy to think that this is an emergency, so we should just do whatever is necessary. But Latonero argues that this merely shifts the risk to a later time:

If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.

Mark Latonero

There was a rather startling article in last weekend’s newspaper, which I’ve found online. Hannah Devlin, again writing in The Guardian (which is a good source of information for those concerned with surveillance) writes about a perfect storm of social media and improved processing speeds:

[I]n the past three years, the performance of facial recognition has stepped up dramatically. Independent tests by the US National Institute of Standards and Technology (Nist) found the failure rate for finding a target picture in a database of 12m faces had dropped from 5% in 2010 to 0.1% this year.

The rapid acceleration is thanks, in part, to the goldmine of face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the past decade. At one time, scientists would create bespoke databases by laboriously photographing hundreds of volunteers at different angles, in different lighting conditions. By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000 people harvested from search engines – they included celebrities, broadcasters, business people and anyone with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the development of software used by the Chinese state to control its Uighur population.

In parallel, hardware companies have developed a new generation of powerful processing chips, called Graphics Processing Units (GPUs), uniquely adapted to crunch through a colossal number of calculations every second. The combination of big data and GPUs paved the way for an entirely new approach to facial recognition, called deep learning, which is powering a wider AI revolution.

Hannah Devlin
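For the technically curious, the ‘entirely new approach’ Devlin mentions usually boils down to two steps: a neural network turns each face image into a vector (an ‘embedding’), and identification becomes nearest-neighbour search over a database of those vectors. The bigger the scraped training set and the faster the GPUs, the better the embeddings. Here’s a rough sketch of that pattern; the ‘network’ below is faked with a deterministic random generator, and the gallery, names, and threshold are all invented:

```python
# A minimal sketch of the embedding-and-nearest-neighbour pattern that
# deep-learning face recognition generally follows. The "network" below
# is a random stand-in (a real system uses a CNN trained on millions of
# scraped images); the gallery, names and threshold are all invented.

import zlib
import numpy as np

EMBEDDING_DIM = 128

def embed_face(image):
    """Stand-in for a deep network: maps an image to a unit-length vector.
    Real embeddings place photos of the same person close together."""
    rng = np.random.default_rng(zlib.crc32(image.tobytes()))
    v = rng.standard_normal(EMBEDDING_DIM)
    return v / np.linalg.norm(v)

# A "gallery" built from harvested photos: identity -> embedding.
gallery = {
    name: embed_face(np.full((8, 8), i, dtype=np.uint8))
    for i, name in enumerate(["alice", "bob", "carol"])
}

def identify(probe_image, threshold=0.6):
    """Return the closest gallery identity by cosine similarity,
    or None if nothing in the gallery is similar enough."""
    probe = embed_face(probe_image)
    best_name, best_sim = None, -1.0
    for name, emb in gallery.items():
        sim = float(probe @ emb)   # cosine similarity (unit vectors)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

print(identify(np.full((8, 8), 1, dtype=np.uint8)))  # prints "bob"
```

The scale point follows directly: once the embeddings are good, checking one face against a database of 12m others is just 12m dot products, which is exactly the kind of arithmetic GPUs were built to chew through.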

Those of you who have read this far and are expecting some big reveal are going to be disappointed. I don’t have any ‘answers’ to these problems. I guess I’ve been guilty, like many of us have, of the kind of ‘privacy nihilism’ mentioned by Ian Bogost in The Atlantic:

Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

Ian Bogost

The only forces we have to stop this are collective action and governmental action. My concern is that we don’t have the digital savvy to do the former, and there’s definitely a lack of will with respect to the latter. Troubling times.

6 Comments


  1. Yep – pretty depressing …

    It’s almost a ‘media meme’ too – every day there are one or two articles on the topic in my news feed. The topic of surveillance is becoming pervasive, and so is the practice. From the DWP to doorbell manufacturers, some form of computational surveillance is built into every system.

    Not sure if you’ve seen this article from 2014, so just in case: https://thenewinquiry.com/the-anxieties-of-big-data/. Some useful background there on the Snowden revelations and the idea that more and more data is what drives our surveillance state (it’s a dialectical anxiety) and the fallacy that more data leads to better truth … Also interesting details about the effects of surveillance on clothing fashion(!)

    But of course, truth has nothing to do with it! It’s all about power and control over others.

    How to re-balance control? Well, that really is the big question of the day!

    Ironically, the Tories are supposed to be the party of the ‘small state’ but the idea of ‘the state’ they rely on is out of date – it is the ‘red-tape’ state, the bureaucratic state, piles of musty documents sitting on the desks of civil servants etc. A Dickensian state. (And I suspect other political parties are still on the same page.)

    That image of the old-style bureaucratic state has to be destroyed. The state is now ‘bigger’ and more controlling than it ever was, but laws and political behaviours are still bound by 19thC models of suited and hatted gentlemen patriarchs. But today’s politicians are like mini-Zuckerbergs – immature and contemptuous (famous quote: “They just keep giving me private information about themselves … the dumbfucks”).

    In so-called ‘neo-liberal’ states the distinction between civil government and private enterprise no longer exists; put another way, like the Chinese, we all live in state capitalist societies.

    In practical terms, for example, the Extinction Rebellion movement (which also has a long history of course) does not go far enough – it has to include the way that a society makes use of data and data processing tools …

    Maybe it’s not really a solution to say we need new regulation and new laws … but we do … because, like Pandora’s box, these magical tools (up to and including MIT’s latest project that can construct an image from the shadows cast on the wall of a room!) cannot be put back once they are let loose in the wild. But regulation can’t keep up …

    Perhaps regulations like GDPR, held up by many as an example of good practice, are simply not up to the job. Something more radical is needed – perhaps that personal data is *a priori* owned by the individual it represents. Somehow we need new laws – copyright laws; IP laws? – that require *all* users of data (e.g. an image of my face; a recording of my voice) to seek permission before use and to pay for the privilege. Goodness knows, an infrastructure to make that happen must be possible (e.g. Berners-Lee is working on something like that, I think).

    Or, somehow we have to find a way to disconnect the use of data from the exercise of power and control – no matter how trivial that power might at first glance seem to be (e.g. Domino’s pizza tracking surveillance!). But it’s complicated …

    As you say, troubling times. :~{

    —————————————–
    Two mildly interesting asides:

    (i) We seem to be moving in a similar direction as China’s surveillance state built on its Social Credit infrastructure, which I am sure you will know about. The other week I picked up an interesting detail that the Chinese word/ideogram for ‘credit’ is also readily translated as ‘trust’ (there is a semantic link there). So really, in the eyes of the Chinese state, it is intended as a Social Trust system (!) … I wonder how long before we start hearing something similar here (if not already).

    (ii) a while back I read an interesting chapter about the impact of late 19thC technologies, new at the time, on art and philosophy. Of particular interest, the discovery of X-rays had quite an impact, for it was viewed “… as a machine that could render transparent … the mind …”. The writer quotes an interesting passage from Maxim Gorky:

    “Imagine that someone wants to know you better. He takes a picture of your skull, and if this skull contained some thoughts, the negative will reveal them as black blots, or snakelike spirals, or some other unattractive form. If he wishes, he can try to photograph your conscience, and the negative will also show all the excrescences and blots. In a word, every person will be seen through now, and however thick and impenetrable your skin might be, the new light makes it transparent like glass.”

    (Glass became something of a literary metaphor and an architectural trope in the first decades of the 20thC.)

    ———————————————
    Final note: Bogost uses the word ‘munged’. Never heard that before! https://en.wikipedia.org/wiki/Mung_(computer_term).

    Perhaps there’s another detail here for regulation? No Munging!

    • Thanks David, there are a number of really interesting things you mention in that comment! That link is pretty fascinating, too.

      I think you’re absolutely right about conservatives having an anachronistic target when they focus on ‘red tape’. It’s interesting that they’re the most interested in interference in civil liberties (surveillance, etc.) to ‘protect’ people. Again, they’ve got things the wrong way around.

      In terms of the GDPR, no laws are ever going to keep up with technological developments, sadly. The best we can have are principles. Right now, it feels like an unfair ‘battle’ between huge, well-funded technology companies, and, well… society? The best way to deal with that is to cut off, or at least massively constrain, the source of that funding. In other words, let’s go to war on surveillance-based advertising.

      • I’ve often used “munge” as a verb meaning to slowly chew up or gnaw away. It’s a good one.

        The sense of alarm bells ringing is clear for me too, but alongside Doug’s two options for action – collective and governmental – can we also debate the merits of collective refusal as a kind of action or even of politics? Off grid, dark mode, alternative choices without any or much public visibility. Less confronting the enemy, more avoiding the fight.

        Technology certainly complicates but perhaps does not fundamentally alter the questions we each face every day about how to handle our potential public persona. We always retain some agency around the question “shall I go into the market/ forum/ square/ pub/ herd/ company/ system?” whether these spaces are physical, social or digital. Each of us, according to our information, needs and feelings, will consciously or unconsciously weigh up the rewards and drawbacks of participating, of having visibility, of engaging with others according to the conventions of those shared spaces. I endorse the power of Google in my choice of its data storage solutions over a paper filing cabinet, just as I endorse the power of civil engineering when I flush the toilet rather than dig a pit latrine. A Roman citizen attending the games subscribes to the Imperial ideology. I’m signing up for gains and losses in each case. What’s novel around tech is that we are so caught up in the energy of its emergence that many of us will right now be poor at deconstructing and perceiving accurately how big tech extracts its rent from us. That’s changing, gradually.

        Rejecting digital visibility (permanently or occasionally, @Doug I admire your Dark Month policy) is a part of the response weaponry for the impending catastrophe Doug identifies. Preserves sanity, withdraws consent, saves energy. How about a third type of praxis, then, alongside Doug’s two action themes: observe and celebrate the constructive withdrawal from shared digital norms. I’ve been involved in a couple of projects that do this for education in refugee settings. By force not by choice, the digital sharing option isn’t available, so the distribution of content comes back to basics around direct person-to-person contact (a bit like the example in Doug’s piece of Greek doctors treating Syrians without taking a record of names). It’s effective (why shouldn’t it be?) and there’s something to learn from it.

        Even though our lives may feature choice and pervasive technology, we can adopt positive habits like asking guests to leave their devices at the door when they visit us, going for a walk without a phone so we don’t take and share any pictures, reading a book, meeting a friend in person, deleting our social media profiles, not signing up using platform-provider authentication, etc.

        • Thanks Stephen, this is a great point and potentially the option with the greatest impact!

          My concern is that ‘advantages’ (e.g. productivity, agency) currently flow to those who either adopt the approach of privacy nihilism or have a deep set of technical skills.

          If the collective refusal happens at the group level, then do those groups become further marginalised? For example, if I opt out of sharing my health data, is my medical care worse?

          The most insidious example you cite is that of ‘platform-provider authentication’. It’s so convenient, yet so problematic. Previous attempts at more privacy-respecting alternatives have failed – perhaps, at heart, there’s a question about identity here?

      • Doug

        You say: “let’s go to war on surveillance-based advertising.”

        You’ll like this item from Private Eye:
        Faking Hell: https://www.private-eye.co.uk/issue-1507/news

        Here are some selected links on the topic of resisting surveillance by facial recognition:

        Local government regulation (in the US):
        https://gizmodo.com/berkeley-becomes-fourth-u-s-city-to-ban-face-recogniti-1839087651

        No signs in the media of similar action in the UK or Europe yet (surprise!) but this is promising:
        https://europeansting.com/2019/10/19/how-the-california-effect-could-shape-a-global-approach-to-ethical-ai/

        Adversarial camouflage is one way to go:
        https://www.theregister.co.uk/2019/04/19/defense_against_the_darknet_or_how_to_accessorize_to_defeat_surveillance/
        https://bigthink.com/technology-innovation/facial-recognition

        And then of course, there’s broader campaigning:
        https://www.libertyhumanrights.org.uk/resist-facial-recognition
        https://bigbrotherwatch.org.uk/
