
Friday fermentations

I boiled the internet and this was what remained:

  • I Quit Social Media for a Year and Nothing Magical Happened (Josh C. Simmons) — “A lot of social media related aspects of my life are different now – I’m not sure they’re better, they’re just different, but I can confidently say that I prefer this normal to last year’s. There’s a bit of rain with all of the sunshine. I don’t see myself ever going back to social media. I don’t see the point of it, and after leaving for a while, and getting a good outside look, it seems like an abusive relationship – millions of workers generating data for tech-giants to crunch through and make money off of. I think that we tend to forget how we were getting along pretty well before social media – not everything was idyllic and better, but it was fine.”
  • Face recognition, bad people and bad data (Benedict Evans) — “My favourite example of what can go wrong here comes from a project for recognising cancer in photos of skin. The obvious problem is that you might not have an appropriate distribution of samples of skin in different tones. But another problem that can arise is that dermatologists tend to put rulers in the photo of cancer, for scale – so if all the examples of ‘cancer’ have a ruler and all the examples of ‘not-cancer’ do not, that might be a lot more statistically prominent than those small blemishes. You inadvertently built a ruler-recogniser instead of a cancer-recogniser.”
  • Would the Internet Be Healthier Without ‘Like’ Counts? (WIRED) ⁠— “Online, value is quantifiable. The worth of a person, idea, movement, meme, or tweet is often based on a tally of actions: likes, retweets, shares, followers, views, replies, claps, and swipes-up, among others. Each is an individual action. Together, though, they take on outsized meaning. A YouTube video with 100,000 views seems more valuable than one with 10, even though views—like nearly every form of online engagement—can be easily bought. It’s a paradoxical love affair. And it’s far from an accident.”
  • Are Platforms Commons? (On The Horizon) — “[W]hat if ecosystems were constructed so that they were governed by the participants, rather by the hypercapitalist strivings of the platform owners — such as Apple, Google, Amazon, Facebook — or the heavy-handed regulators? Is there a middle ground where the needs of the end user and those building, marketing, and shipping products and services can be balanced, and a fair share of the profits are distributed not just through common carrier laws but by the shared economics of a commons, and where the platform orchestrator gets a fair share, as well?”
  • Depression and anxiety threatened to kill my career. So I came clean about it (The Guardian) — “To my surprise, far from rejecting me, students stayed after class to tell me how sorry they were. They left condolence cards in my mailbox and sent emails to let me know they were praying for my family. They stopped by my office to check on me. Up to that point, I’d been so caught up in my despair that it never occurred to me that I might be worthy of concern and support. Being accepted despite my flaws touched me in ways that are hard to express.”
  • Absolute scale corrupts absolutely (apenwarr) — “Here’s what we’ve lost sight of, in a world where everything is Internet scale: most interactions should not be Internet scale. Most instances of most programs should be restricted to a small set of obviously trusted people. All those people, in all those foreign countries, should not be invited to read Equifax’s PII database in Argentina, no matter how stupid the password was. They shouldn’t even be able to connect to the database. They shouldn’t be able to see that it exists. It shouldn’t, in short, be on the Internet.”
  • The Automation Charade (Logic magazine) — “The problem is that the emphasis on technological factors alone, as though “disruptive innovation” comes from nowhere or is as natural as a cool breeze, casts an air of blameless inevitability over something that has deep roots in class conflict. The phrase “robots are taking our jobs” gives technology agency it doesn’t (yet?) possess, whereas “capitalists are making targeted investments in robots designed to weaken and replace human workers so they can get even richer” is less catchy but more accurate.”
  • The ambitious plan to reinvent how websites get their names (MIT Technology Review) — “The system would be based on blockchain technology, meaning it would be software that runs on a widely distributed network of computers. In theory, it would have no single point of failure and depend on no human-run organization that could be corrupted or co-opted.”
  • O whatever God or whatever ancestor that wins in the next life (The Main Event) — “And it begins to dawn on you that the stories were all myths and the epics were all narrated by the villains and the history books were written to rewrite the histories and that so much of what you thought defined excellence merely concealed grift.”
  • A Famous Argument Against Free Will Has Been Debunked (The Atlantic) — “In other words, people’s subjective experience of a decision—what Libet’s study seemed to suggest was just an illusion—appeared to match the actual moment their brains showed them making a decision.”

The best way out is always through

So said Robert Frost, but I want to begin with the ending of a magnificent post from Kate Bowles. She expresses clearly how I feel sometimes when I sit down to write something for Thought Shrapnel:

[T]his morning I blocked out time, cleared space, and sat down to write — and nothing happened. Nothing. Not a word, not even a wisp of an idea. After enough time staring at the blankness of the screen I couldn’t clearly remember having had an idea, ever.

Along the way I looked at the sky, I ate a mandarin and then a second mandarin, I made a cup of tea, I watched a family of wrens outside my window, I panicked. I let email divert me, and then remembered that was the opposite of the plan. I stayed off Twitter. Panic increased.

Then I did the one thing that absolutely makes a difference to me. I asked for help. I said “I write so many stupid words in my bullshit writing job that I can no longer write and that is the end of that.” And the person I reached out to said very calmly “Why not write about the thing you’re thinking about?”

Sometimes what you have to do as a writer is sit in place long enough, and sometimes you have to ask for help. Whatever works for you, is what works.

Kate Bowles

There are so many things wrong with the world right now that sometimes I feel like I could stop working on everything I'm working on and spend my time just pointing them out to people.

But to what end? You don't change the world just by making people aware of things, not usually. For example, as tragic as the sentence "the Amazon is on fire" is, it isn't in and of itself a call to action. These days, people argue about the facts themselves as well as the appropriate response.

The world is an inordinately complicated place that we seek to make sense of by thinking as little as humanly possible. To aid and abet us in this task, we divide ourselves, consciously or unconsciously, into groups that apply similar heuristics. The new (information) is then assimilated into the old (worldview).

I have no privileged position, no objective viewpoint from which to observe and judge the world's actions. None of us do. I'm as complicit in joining and forming in-groups and out-groups as the next person. I decide I'm going to delete my Twitter account and then end up rage-tweeting All The Things.

Thankfully, there are smart people, and not only academics, thinking about all this to figure out what we can and should do. Tim Urban, from the phenomenally successful Wait But Why, for example, has spent the last three years working on "a new language we can use to think and talk about our societies and the people inside of them". In the first chapter of a new series, he writes about the ongoing struggle between (what he calls) the 'Primitive Minds' and 'Higher Minds' of humans:

The never-ending struggle between these two minds is the human condition. It’s the backdrop of everything that has ever happened in the human world, and everything that happens today. It’s the story of our times because it’s the story of all human times.

Tim Urban

I think this is worth remembering when we spend time on social networks. And especially when we spend so much time that it becomes our default delivery method for the news of the day. Our Primitive Minds respond strongly to stimuli around fear and fornication.

When we reflect on our social media usage and the changing information landscape, the temptation is either to cut down, or to try a different information diet. Some people become the equivalent of Information Vegans, attempting to source the ‘cleanest’ morsels of information from the most wholesome, trusted, and traceable of places.

But where are those ‘trusted places’ these days? Are we as happy with the previously gold-standard news outlets such as the BBC and The New York Times as we once were? And if not, what’s changed?

The difference, I think, is the way we've decided to allow money to flow through our digital lives. Commercial news outlets, including those with which the BBC competes, are funded by advertising. The adverts we see in digital spaces aren't just showing us things we might happen to be interested in. They'll keep showing you that pair of shoes you almost bought last week in every space that's funded by advertising. Which is basically everywhere.

I feel like I’m saying obvious things here that everyone knows, but perhaps it bears repeating. If everyone is consuming news via social networks, and those news stories are funded by advertising, then the nature of what counts as ‘news’ starts to evolve. What gets the most engagement? How are headlines formed now, compared with a decade ago?

It's as if something hot-wires our brains when something non-threatening and potentially interesting is made available to us 'for free'. We never get to the stuff we'd like to think defines us, because we're caught in never-ending cycles of titillation. We pay with our attention, that scarce and valuable resource.

Our attention, and more specifically how we react to our social media feeds when we're 'engaged', is valuable because it can be packaged up and sold to advertisers. But it's sold to governments too. Twitter just had to update its terms and conditions specifically because of the outcry over the Chinese government's propaganda around the Hong Kong protests.

Protesters in Hong Kong's 'umbrella revolution' have recently been cutting down what we used to call CCTV cameras, but which are much more accurately described as 'facial recognition masts'.

We are living in a world where the answer to everything seems to be ‘increased surveillance’. Kids not learning fast enough in school? Track them more. Scared of terrorism? Add more surveillance into the lives of everyday citizens. And on and on.

In an essay earlier this year, Maciej Cegłowski riffed on all of this, reflecting on what he calls ‘ambient privacy’:

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent.

Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

Maciej Cegłowski

One of the difficulties in resisting the 'Silicon Valley narrative', and Big Tech's complicity with governments, is the danger of coming across as a neo-Luddite. Without looking very closely at what's going on (and taking some time to reflect), it can all look like the inevitable march of progress.

So, without necessarily having an answer to all this, I guess the best thing is, like Kate, to ask for help. What can we do here? What practical steps can we take? Comments are open.

The drawbacks of Artificial Intelligence

It's really interesting to do philosophical thought experiments with kids. For example, the trolley problem, a staple of undergraduate Philosophy courses, is also accessible to children from a fairly young age.

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:

  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option?
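The naivety of a purely utilitarian answer becomes obvious the moment you try to write it down. Here's a toy sketch of that calculus in Python (the function and its inputs are entirely hypothetical, for illustration only, and not a claim about how any real ethics engine works):

```python
# A toy utilitarian decision rule for the trolley problem.
# It reduces the entire dilemma to a head count — which is
# exactly what makes it an unsatisfying answer.

def pull_lever(people_on_main_track: int, people_on_side_track: int) -> bool:
    """Return True if diverting the trolley kills fewer people."""
    return people_on_side_track < people_on_main_track

# The classic setup: five people on the main track, one on the side track.
print(pull_lever(5, 1))  # True — a strict utilitarian pulls the lever
```

Notice everything the function ignores: the difference between killing and letting die, who the people are, and whether counting lives is even the right frame. That gap is the whole point of the thought experiment.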

With the advent of autonomous vehicles, these are no longer idle questions. A vehicle that has to make a split-second decision may have to choose between hitting a pram containing a baby and swerving into a couple of pensioners. Due to cultural differences, even that's not something that can be easily programmed, as the diagram below demonstrates.

Self-driving cars: pedestrians vs passengers

For two countries that are so close together, it’s really interesting that Japan and China are on the opposite ends of the spectrum when it comes to saving passengers or pedestrians!

The authors of the paper cited in the article are careful to point out that countries shouldn’t simply create laws based on popular opinion:

Edmond Awad, an author of the paper, brought up the social status comparison as an example. “It seems concerning that people found it okay to a significant degree to spare higher status over lower status,” he said. “It’s important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’” The results, he said, should be used by industry and government as a foundation for understanding how the public would react to the ethics of different design and policy decisions.

This is why we need more people with a background in the Humanities in tech, and why we need to be having a real conversation about ethics and AI.

Of course, that's easier said than done, particularly when the companies in a position to make significant strides in this regard have near-monopolies in their fields and are pulling in eye-watering amounts of money. In one recent example, an AI ethics committee convened by Google was attacked as a smokescreen:

Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just “ethics washing,” a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, “Look, we’re doing something.” It deflects criticism, and because the boards lack any power, it means the companies don’t change.

 […]

“It’s not that people are against governance bodies, but we have no transparency into how they’re built,” [Rumman] Chowdhury [a data scientist and lead for responsible AI at management consultancy Accenture] tells The Verge. With regard to Google’s most recent board, she says, “This board cannot make changes, it can just make suggestions. They can’t talk about it with the public. So what oversight capabilities do they have?”

As we saw with privacy, it takes a trusted multinational body like the European Union to create a regulatory framework like GDPR for these issues. Thankfully, it has started that process by releasing guidelines containing seven requirements for trustworthy AI:

  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  4. Transparency: The traceability of AI systems should be ensured.
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The problem isn’t that people are going out of their way to build malevolent systems to rob us of our humanity. As usual, bad things happen because of more mundane requirements. For example, The Guardian has recently reported on concerns around predictive policing and hospitals using AI to predict everything from no-shows to risk of illness.

When we throw facial recognition into the mix, things get particularly scary. It’s all very well for Taylor Swift to use this technology to identify stalkers at her concerts, but given its massive drawbacks, perhaps we should restrict facial recognition somehow?

Human bias can seep into AI systems. Amazon abandoned a recruiting algorithm after it was shown to favor men’s resumes over women’s; researchers concluded an algorithm used in courtroom sentencing was more lenient to white people than to black people; a study found that mortgage algorithms discriminate against Latino and African American borrowers.
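None of the audits mentioned above publish their workings here, but the basic arithmetic of screening a system for group bias is surprisingly simple. A minimal sketch, with invented data (the groups, outcomes, and threshold are illustrative; the 0.8 figure is the 'four-fifths rule' used as a rough screen for disparate impact in US employment law):

```python
# Compare selection rates between two groups as a crude bias screen.
# All data below is invented for illustration.

def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g. resumes advanced) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are conventionally treated as a red flag."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

men = [1, 1, 1, 0, 1, 1, 0, 1]       # 6 of 8 advanced
women = [1, 0, 0, 1, 0, 0, 1, 0]     # 3 of 8 advanced
print(disparate_impact(men, women))  # 0.5 — well below the 0.8 threshold
```

The hard part, of course, isn't the arithmetic; it's getting access to the data and the model in the first place, which is exactly what the opacity of these systems prevents.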

Facial recognition might be a cool way to unlock your phone, but the kind of micro-expressions that made for great television in the series Lie to Me are now easily exploited in what is expected to become a $20bn industry.

The trouble with all of this is that it's very difficult for us as individuals to make a difference here. The problem needs to be tackled at a much higher level, as with GDPR. That will take time, and meanwhile the use of AI is exploding. Be careful out there.


Also check out: