
Friday festoonings

Check out these things I read and found interesting this week. Thanks to some positive feedback, I’ve carved out time for some commentary and changed the way this link roundup is set out.

Let me know what you think! What did you find most interesting?


Maps Are Biased Against Animals

Critics may say that it is unreasonable to expect maps to reflect the communities or achievements of nonhumans. Maps are made by humans, for humans. When beavers start Googling directions to a neighbor’s dam, then their homes can be represented! For humans who use maps solely to navigate—something that nonhumans do without maps—man-made roads are indeed the only features that are relevant. Following a map that includes other information may inadvertently lead a human onto a trail made by and for deer.

But maps are not just tools to get from points A to B. They also relay new and learned information, document evolutionary changes, and inspire intrepid exploration. We operate on the assumption that our maps accurately reflect what a visitor would find if they traveled to a particular area. Maps have immense potential to illustrate the world around us, identifying all the important features of a given region. By that definition, the current maps that most humans use fall well short of being complete. Our definition of what is “important” is incredibly narrow.

Ryan Huling (WIRED)

Cartography is an incredibly powerful tool. We’ve known for a long time that “the map is not the territory”, but perhaps this is another weapon in the fight against climate change and the decline in species diversity?


Why Actually Principled People Are Difficult (Glenn Greenwald Edition)

Then you get people like Greenwald, Assange, Manning and Snowden. They are polarizing figures. They are loved or hated. They piss people off.

They piss people off precisely because they have principles they consider non-negotiable. They will not do the easy thing when it matters. They will not compromise on anything that really matters.

That’s breaking the actual social contract of “go along to get along”, “obey authority” and “don’t make people uncomfortable.” I recently talked to a senior activist who was uncomfortable even with the idea of yelling at powerful politicians. It struck them as close to violence.

So here’s the thing, people want men and women of principle to be like ordinary people.

They aren’t. They can’t be. If they were, they wouldn’t do what they do. Much of what you may not like about a Greenwald or Assange or Manning or Snowden is why they are what they are. Not just the principle, but the bravery verging on recklessness. The willingness to say exactly what they think, and do exactly what they believe is right even if others don’t.

Ian Welsh

Activists like Greta Thunberg and Edward Snowden are the closest we get to superheroes, to people who stand for the purest possible version of an idea. This is why we need them — and why we’re so disappointed when they turn out to be human after all.


Explicit education

Students’ not comprehending the value of engaging in certain ways is more likely to be a failure in our teaching than their willingness to learn (especially if we create a culture in which success becomes exclusively about marks and credentialization). The question we have to ask is if what we provide as ‘university’ goes beyond the value of what our students can engage with outside of our formal offer. 

Dave White

This is a great post by Dave, who I had the pleasure of collaborating with briefly during my stint at Jisc. I definitely agree that any organisation walks a dangerous path when it becomes overly fixated on the ‘how’ instead of the ‘what’ and the ‘why’.


What Are Your Rules for Life? These 11 Expressions (from Ancient History) Might Help

The power of an epigram or one of these expressions is that they say a lot with a little. They help guide us through the complexity of life with their unswerving directness. Each person must, as the retired USMC general and former Secretary of Defense Jim Mattis has said, “Know what you will stand for and, more important, what you won’t stand for.” “State your flat-ass rules and stick to them. They shouldn’t come as a surprise to anyone.”

Ryan Holiday

Of the 11 expressions here, I have to say that, other than memento mori (“remember you will die”), I particularly like semper anticus (“always forward”), which I’m going to print out in a fancy font and stick on the wall of my home office.


Dark Horse Discord

In a hypothetical world, you could get a Discord (or whatever is next) link for your new job tomorrow – you read some wiki and meta info, sort yourself into your role, and then are grouped with the people you need to collaborate with on an as-needed basis. All wrapped in one platform. Maybe you have an HR complaint – drop it in #HR where you can’t read the messages but they can, so it’s a blind one-way conversation. Maybe there is a #help channel, where you write your problems and the bot pings people who have expertise based on keywords. There’s a lot of things you can do with this basic design.

Mule’s Musings

What is described in this post is a bit of a stretch, but I can see it: a world where work is organised a bit like how gamers organise in chat channels. Something to keep an eye on, as the interplay between what’s ‘normal’ and what’s possible with communications technology changes and evolves.
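For what it’s worth, that #help channel idea is only a few lines of bot code away. Here’s a minimal sketch, assuming the discord.py library; the keyword map and user IDs are invented for illustration:

```python
import discord

# Hypothetical mapping from topic keywords to the Discord user IDs of
# colleagues who have volunteered expertise in each area.
EXPERTS = {
    "payroll": [111111111111111111],
    "deploy": [222222222222222222],
    "gdpr": [333333333333333333],
}

intents = discord.Intents.default()
intents.message_content = True  # required to read message text in discord.py 2.x
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    # Ignore other bots, and only watch #help (DM channels have no name).
    if message.author.bot or getattr(message.channel, "name", "") != "help":
        return
    # Very naive keyword matching; a real bot might use tags or embeddings.
    matched = {uid
               for word, uids in EXPERTS.items()
               if word in message.content.lower()
               for uid in uids}
    if matched:
        mentions = " ".join(f"<@{uid}>" for uid in sorted(matched))
        await message.channel.send(f"{mentions}: this one looks like yours!")

client.run("YOUR_BOT_TOKEN")  # placeholder; real tokens come from Discord's developer portal
```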


The Edu-Decade That Was: Unfounded Optimism?

What made the last decade so difficult is how education institutions let corporations control the definitions so that a lot of “study and ethical practice” gets left out of the work. With the promise of ease of use, low cost, increased student retention (or insert unreasonable-metric-claim here), etc., institutions are willing to buy into technology without regard to accessibility, scalability, equity and inclusion, data privacy or student safety, in hope of solving problem X that will then get to be checked off of an accreditation list. Or worse, with the hope of not having to invest in actual people and local infrastructure.

Geoff Cain (Brainstorm in progress)

It’s nice to see a list of some positives that came out of the last decade, and for microcredentials and badging to be on that list.


When Is a Bird a ‘Birb’? An Extremely Important Guide

First, let’s consider the canonized usages. The subreddit r/birbs defines a birb as any bird that’s “being funny, cute, or silly in some way.” Urban Dictionary has a more varied set of definitions, many of which allude to a generalized smallness. A video on the YouTube channel Lucidchart offers its own expansive suggestions: All birds are birbs, a chunky bird is a borb, and a fluffed-up bird is a floof. Yet some tension remains: How can all birds be birbs if smallness or cuteness are in the equation? Clearly some birds get more recognition for an innate birbness.

Asher Elbein (Audubon magazine)

A fun article, but also an interesting one when it comes to ambiguity, affinity groups, and internet culture.


Why So Many Things Cost Exactly Zero

“Now, why would Gmail or Facebook pay us? Because what we’re giving them in return is not money but data. We’re giving them lots of data about where we go, what we eat, what we buy. We let them read the contents of our email and determine that we’re about to go on vacation or we’ve just had a baby or we’re upset with our friend or it’s a difficult time at work. All of these things are in our email that can be read by the platform, and then the platform’s going to use that to sell us stuff.”

Fiona Scott Morton (Yale School of Management), quoted by Peter Coy (Bloomberg Businessweek)

Regular readers of Thought Shrapnel know all about surveillance capitalism, but it’s good to see these explainers making their way to the more mainstream business press.


Your online activity is now effectively a social ‘credit score’

The most famous social credit system in operation is that used by China’s government. It “monitors millions of individuals’ behavior (including social media and online shopping), determines how moral or immoral it is, and raises or lowers their ‘citizen score’ accordingly,” reported The Atlantic in 2018.

“Those with a high score are rewarded, while those with a low score are punished.” Now we know the same AI systems are used for predictive policing to round up Muslim Uighurs and other minorities into concentration camps under the guise of preventing extremism.

Violet Blue (Engadget)

Some (more prudish) people will write this article off because it discusses sex workers, porn, and gay rights. But the truth is that all kinds of censorship start with marginalised groups. To my mind, we’re already on a trajectory away from Silicon Valley and towards Chinese technology. Will we be able to separate the tech from the morality?


Panicking About Your Kids’ Phones? New Research Says Don’t

The researchers worry that the focus on keeping children away from screens is making it hard to have more productive conversations about topics like how to make phones more useful for low-income people, who tend to use them more, or how to protect the privacy of teenagers who share their lives online.

“Many of the people who are terrifying kids about screens, they have hit a vein of attention from society and they are going to ride that. But that is super bad for society,” said Andrew Przybylski, the director of research at the Oxford Internet Institute, who has published several studies on the topic.

Nathaniel Popper (The New York Times)

Kids and screen time is just the latest (extended) moral panic. Overuse of anything causes problems: smartphones, games consoles, and TV included. What we need to do is help our children find balance, which can be difficult for the first generation of parents navigating all of this on the front line.


Gorgeous header art via the latest Facebook alternative, planetary.social

Friday flowerings

Did you see these things this week?

  • Happy 25th year, blogging. You’ve grown up, but social media is still having a brawl (The Guardian) — “The furore over social media and its impact on democracy has obscured the fact that the blogosphere not only continues to exist, but also to fulfil many of the functions of a functioning public sphere. And it’s massive. One source, for example, estimates that more than 409 million people view more than 20bn blog pages each month and that users post 70m new posts and 77m new comments each month. Another source claims that of the 1.7 bn websites in the world, about 500m are blogs. And WordPress.com alone hosts blogs in 120 languages, 71% of them in English.”
  • Emmanuel Macron Wants to Scan Your Face (The Washington Post) — “President Emmanuel Macron’s administration is set to be the first in Europe to use facial recognition when providing citizens with a secure digital identity for accessing more than 500 public services online… The roll-out is tainted by opposition from France’s data regulator, which argues the electronic ID breaches European Union rules on consent – one of the building blocks of the bloc’s General Data Protection Regulation laws – by forcing everyone signing up to the service to use the facial recognition, whether they like it or not.”
  • This is your phone on feminism (The Conversationalist) — “Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true. This cognitive dissonance confuses and paralyses us. And look around. Everyone has a smartphone. So it’s probably not so bad, and anyway, that’s just how things work. Right?”
  • Google’s auto-delete tools are practically worthless for privacy (Fast Company) — “In reality, these auto-delete tools accomplish little for users, even as they generate positive PR for Google. Experts say that by the time three months rolls around, Google has already extracted nearly all the potential value from users’ data, and from an advertising standpoint, data becomes practically worthless when it’s more than a few months old.”
  • Audrey Watters (Uses This) — “For me, the ideal set-up is much less about the hardware or software I am using. It’s about the ideas that I’m thinking through and whether or not I can sort them out and shape them up in ways that make for a good piece of writing. Ideally, that does require some comfort — a space for sustained concentration. (I know better than to require an ideal set up in order to write. I’d never get anything done.)”
  • Computer Files Are Going Extinct (OneZero) — “Files are skeuomorphic. That’s a fancy word that just means they’re a digital concept that mirrors a physical item. A Word document, for example, is like a piece of paper, sitting on your desk(top). A JPEG is like a painting, and so on. They each have a little icon that looks like the physical thing they represent. A pile of paper, a picture frame, a manila folder. It’s kind of charming really.”
  • Why Technologists Fail to Think of Moderation as a Virtue and Other Stories About AI (The LA Review of Books) — “Speculative fiction about AI can move us to think outside the well-trodden clichés — especially when it considers how technologies concretely impact human lives — through the influence of supersized mediators, like governments and corporations.”
  • Inside Mozilla’s 18-month effort to market without Facebook (Digiday) — “The decision to focus on data privacy in marketing the Mozilla brand came from research conducted by the company four years ago into the rise of consumers who make values-based decisions on not only what they purchase but where they spend their time.”
  • Core human values not eyeballs (Cubic Garden) — “There’s so much more to do, but the aims are high and important for not just the BBC, but all public service entities around the world. Measuring the impact and quality on people’s lives beyond the shallow meaningless metrics for public service is critical.”

Image: The why is often invisible via Jessica Hagy’s Indexed

Technology is the name we give to stuff that doesn’t work properly yet

So said my namesake Douglas Adams. In fact, he said lots of wise things about technology, most of them too long to serve as a title.

I’m in a weird place, emotionally, at the moment, but sometimes this can be a good thing. Being taken out of your usual ‘autopilot’ can be a useful way to see things differently. So I’m going to take this opportunity to share three things that, to be honest, make me a bit concerned about the next few years…

Attempts to put microphones everywhere

Alexa-enabled EVERYTHING

In an article for Slate, Shannon Palus ranks all of Amazon’s new products by ‘creepiness’. The Echo Frames are, in her words:

A microphone that stays on your person all day and doesn’t look like anything resembling a microphone, nor follows any established social codes for wearable microphones? How is anyone around you supposed to have any idea that you are wearing a microphone?

Shannon Palus

When we’re not talking about weapons of mass destruction, it’s not the tech that concerns me, but the context in which the tech is used. As Palus points out, how are you going to be able to have a ‘quiet word’ with anyone wearing glasses ever again?

It’s not just Amazon, of course. Google and Facebook are at it, too.

Full-body deepfakes

Scary stuff

With the exception, perhaps, of populist politicians, I don’t think we’re ready for a post-truth society. Check out the video above, which shows Chinese technology that allows for ‘full body deepfakes’.

The video is embedded, along with a couple of others, in an article for Fast Company by DJ Pangburn, who also notes that AI is learning human body movements from videos. Not only will you be able to prank your friends by showing them a convincing video of your ability to do 100 pull-ups, but the fake news it engenders will mean we can’t trust anything any more.

Neuromarketing

If you clicked on the ‘super-secret link’ in Sunday’s newsletter, you will have come across STEALING UR FEELINGS, which is nothing short of incredible. As powerful as it is in showing you the kind of data that organisations have on us, it’s only the tip of the iceberg.

Kaveh Waddell, in an article for Axios, explains that brains are the last frontier for privacy:

“The sort of future we’re looking ahead toward is a world where our neural data — which we don’t even have access to — could be used” against us, says Tim Brown, a researcher at the University of Washington Center for Neurotechnology.

Kaveh Waddell

This would lead to ‘neuromarketing’, with advertisers knowing what triggers and influences you better than you know yourself. It will no doubt be used for discriminatory purposes too, and because it’s coming directly from your brainwaves, there’s not much you can do about it short of literally wearing a tinfoil hat.


So there we are. Am I being too fearful here?

Friday fluctuations

Have a quick skim through these links that I came across this week and found interesting:

  • Overrated: Ludwig Wittgenstein (Standpoint) — “Wittgenstein’s reputation for genius did not depend on incomprehensibility alone. He was also “tortured”, rude and unreliable. He had an intense gaze. He spent months in cold places like Norway to isolate himself. He temporarily quit philosophy, because he believed that he had solved all its problems in his 1922 Tractatus Logico-Philosophicus, and worked as a gardener. He gave away his family fortune. And, of course, he was Austrian, as so many of the best geniuses are.”
  • EdTech Resistance (Ben Williamson) ⁠— “We should not and cannot ignore these tensions and challenges. They are early signals of resistance ahead for edtech which need to be engaged with before they turn to public outrage. By paying attention to and acting on edtech resistances it may be possible to create education systems, curricula and practices that are fair and trustworthy. It is important not to allow edtech resistance to metamorphose into resistance to education itself.”
  • The Guardian view on machine learning: a computer cleverer than you? (The Guardian) — “The promise of AI is that it will imbue machines with the ability to spot patterns from data, and make decisions faster and better than humans do. What happens if they make worse decisions faster? Governments need to pause and take stock of the societal repercussions of allowing machines over a few decades to replicate human skills that have been evolving for millions of years.”
  • A nerdocratic oath (Scott Aaronson) — “I will never allow anyone else to make me a cog. I will never do what is stupid or horrible because “that’s what the regulations say” or “that’s what my supervisor said,” and then sleep soundly at night. I’ll never do my part for a project unless I’m satisfied that the project’s broader goals are, at worst, morally neutral. There’s no one on earth who gets to say: “I just solve technical problems. Moral implications are outside my scope”.”
  • Privacy is power (Aeon) — “The power that comes about as a result of knowing personal details about someone is a very particular kind of power. Like economic power and political power, privacy power is a distinct type of power, but it also allows those who hold it the possibility of transforming it into economic, political and other kinds of power. Power over others’ privacy is the quintessential kind of power in the digital age.”
  • The Symmetry and Chaos of the World’s Megacities (WIRED) — “Koopmans manages to create fresh-looking images by finding unique vantage points, often by scouting his locations on Google Earth. As a rule, he tries to get as high as he can—one of his favorite tricks is talking local work crews into letting him shoot from the cockpit of a construction crane.”
  • Green cities of the future – what we can expect in 2050 (RNZ) — “In their lush vision of the future, a hyperloop monorail races past in the foreground and greenery drapes the sides of skyscrapers that house communal gardens and vertical farms.”
  • Wittgenstein Teaches Elementary School (Existential Comics) ⁠— “And I’ll have you all know, there is no crying in predicate logic.”
  • Ask Yourself These 5 Questions to Inspire a More Meaningful Career Move (Inc.) — “Introspection on the right things can lead to the life you want.”

Image from Do It Yurtself

Saturday strikings

This week’s roundup is going out a day later than usual, as yesterday was the Global Climate Strike and Thought Shrapnel was striking too!

Here’s what I’ve been paying attention to this week:

  • How does a computer ‘see’ gender? (Pew Research Center) — “Machine learning tools can bring substantial efficiency gains to analyzing large quantities of data, which is why we used this type of system to examine thousands of image search results in our own studies. But unlike traditional computer programs – which follow a highly prescribed set of steps to reach their conclusions – these systems make their decisions in ways that are largely hidden from public view, and highly dependent on the data used to train them. As such, they can be prone to systematic biases and can fail in ways that are difficult to understand and hard to predict in advance.”
  • The Communication We Share with Apes (Nautilus) — “Many primate species use gestures to communicate with others in their groups. Wild chimpanzees have been seen to use at least 66 different hand signals and movements to communicate with each other. Lifting a foot toward another chimp means “climb on me,” while stroking their mouth can mean “give me the object.” In the past, researchers have also successfully taught apes more than 100 words in sign language.”
  • Why degrowth is the only responsible way forward (openDemocracy) — “If we free our imagination from the liberal idea that well-being is best measured by the amount of stuff that we consume, we may discover that a good life could also be materially light. This is the idea of voluntary sufficiency. If we manage to decide collectively and democratically what is necessary and enough for a good life, then we could have plenty.”
  • 3 times when procrastination can be a good thing (Fast Company) — “It took Leonardo da Vinci years to finish painting the Mona Lisa. You could say the masterpiece was created by a master procrastinator. Sure, da Vinci wasn’t under a tight deadline, but his lengthy process demonstrates the idea that we need to work through a lot of bad ideas before we get down to the good ones.”
  • Why can’t we agree on what’s true any more? (The Guardian) — “What if, instead, we accepted the claim that all reports about the world are simply framings of one kind or another, which cannot but involve political and moral ideas about what counts as important? After all, reality becomes incoherent and overwhelming unless it is simplified and narrated in some way or other.”
  • A good teacher voice strikes fear into grown men (TES) — “A good teacher voice can cut glass if used with care. It can silence a class of children; it can strike fear into the hearts of grown men. A quiet, carefully placed “Excuse me”, with just the slightest emphasis on the “-se”, is more effective at stopping an argument between adults or children than any amount of reason.”
  • Freeing software (John Ohno) — “The only way to set software free is to unshackle it from the needs of capital. And, capital has become so dependent upon software that an independent ecosystem of anti-capitalist software, sufficiently popular, can starve it of access to the speed and violence it needs to consume ever-doubling quantities of to survive.”
  • Young People Are Going to Save Us All From Office Life (The New York Times) — “Today’s young workers have been called lazy and entitled. Could they, instead, be among the first to understand the proper role of work in life — and end up remaking work for everyone else?”
  • Global climate strikes: Don’t say you’re sorry. We need people who can take action to TAKE ACTUAL ACTION (The Guardian) — “Brenda the civil disobedience penguin gives some handy dos and don’ts for your civil disobedience”

Friday fizzles

I head off on holiday tomorrow! Before I go, check out these highlights from this week’s reading and research:

  • “Things that were considered worthless are redeemed” (Ira David Socol) — “Empathy plus Making must be what education right now is about. We are at both a point of learning crisis and a point of moral crisis. We see today what happens — in the US, in the UK, in Brasil — when empathy is lost — and it is a frightening sight. We see today what happens — in graduates from our schools who do not know how to navigate their world — when the learning in our schools is irrelevant in content and/or delivery.”
  • Voice assistants are going to make our work lives better—and noisier (Quartz) — “Active noise cancellation and AI-powered sound settings could help to tackle these issues head on (or ear on). As the AI in noise cancellation headphones becomes better and better, we’ll potentially be able to enhance additional layers of desirable audio, while blocking out sounds that distract. Audio will adapt contextually, and we’ll be empowered to fully manage and control our soundscapes.”
  • We Aren’t Here to Learn What We Already Know (LA Review of Books) — “A good question, in short, is an honest question, one that, like good theory, dances on the edge of what is knowable, what it is possible to speculate on, what is available to our immediate grasp of what we are reading, or what it is possible to say. A good question, that is, like good theory, might be quite unlovely to read, particularly in its earliest iterations. And sometimes it fails or has to be abandoned.”
  • The runner who makes elaborate artwork with his feet and a map (The Guardian) — “The tracking process is high-tech, but the whole thing starts with just a pen and paper. “When I was a kid everyone thought I’d be an artist when I grew up – I was always drawing things,” he said. He was a particular fan of the Etch-a-Sketch, which has something in common with his current work: both require creating images in an unbroken line.”
  • What I Do When it Feels Like My Work Isn’t Good Enough (James Clear) — “Release the desire to define yourself as good or bad. Release the attachment to any individual outcome. If you haven’t reached a particular point yet, there is no need to judge yourself because of it. You can’t make time go faster and you can’t change the number of repetitions you have put in before today. The only thing you can control is the next repetition.”
  • Online porn and our kids: It’s time for an uncomfortable conversation (The Irish Times) — “Now when we talk about sex, we need to talk about porn, respect, consent, sexuality, body image and boundaries. We don’t need to terrify them into believing watching porn will ruin their lives, destroy their relationships and warp their libidos, maybe, but we do need to talk about it.”
  • Drones will fly for days with new photovoltaic engine (Tech Xplore) — “[T]his finding builds on work… published in 2011, which found that the key to boosting solar cell efficiency was not by absorbing more photons (light) but emitting them. By adding a highly reflective mirror on the back of a photovoltaic cell, they broke efficiency records at the time and have continued to do so with subsequent research.”
  • Twitter won’t ruin the world. But constraining democracy would (The Guardian) — “The problems of Twitter mobs and fake news are real. As are the issues raised by populism and anti-migrant hostility. But neither in technology nor in society will we solve any problem by beginning with the thought: “Oh no, we put power into the hands of people.” Retweeting won’t ruin the world. Constraining democracy may well do.”
  • The Encryption Debate Is Over – Dead At The Hands Of Facebook (Forbes) — “Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.”
  • Living in surplus (Seth Godin) — “When you live in surplus, you can choose to produce because of generosity and wonder, not because you’re drowning.”

Image from Dilbert. Shared to make the (hopefully self-evident) counterpoint that not everything of value has an economic value. There’s more to life than accumulation.

One can see only what one has already seen

Fernando Pessoa with today’s quotation-as-title. He’s best known for The Book of Disquiet, which he called “a factless autobiography”. It’s… odd. Here’s a sample:

Whether or not they exist, we’re slaves to the gods.

Fernando Pessoa

I’ve been reading a lot of Seneca recently, who famously said:

Life is divided into three periods, past, present and future. Of these, the present is short, the future is doubtful, the past is certain.

Seneca

The trouble is, we try to predict the future in order to control it. Some people have a good track record in this, partly because they are involved in shaping things in the present. Other people have a vested interest in trying to get the world to bend to their ideology.

In an article for WIRED, Joi Ito, Director of the MIT Media Lab, writes about ‘extended intelligence’ being the future rather than AI:

The notion of singularity – which includes the idea that AI will supersede humans with its exponential growth, making everything we humans have done and will do insignificant – is a religion created mostly by people who have designed and successfully deployed computation to solve problems previously considered impossibly complex for machines.

Joi Ito

It’s a useful counter-balance to those banging the AI drum and talking about the coming jobs apocalypse.

After talking about ‘S curves’ and adaptive systems, Ito explains that:

Instead of thinking about machine intelligence in terms of humans vs machines, we should consider the system that integrates humans and machines – not artificial intelligence but extended intelligence. Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.

Joi Ito

I haven’t had a chance to read it yet, but I’m looking forward to seeing some of the ideas put forward in The Weight of Light: a collection of solar futures (which is free to download in multiple formats). We need to stop listening solely to rich white guys proclaiming the Silicon Valley narrative of ‘disruption’. There are many other, much more collaborative and egalitarian, ways of thinking about and designing for the future.

This collection was inspired by a simple question: what would a world powered entirely by solar energy look like? In part, this question is about the materiality of solar energy—about where people will choose to put all the solar panels needed to power the global economy. It’s also about how people will rearrange their lives, values, relationships, markets, and politics around photovoltaic technologies. The political theorist and historian Timothy Mitchell argues that our current societies are carbon democracies, societies wrapped around the technologies, systems, and logics of oil. What will it be like, instead, to live in the photon societies of the future?

The Weight of Light: a collection of solar futures

We create the future, it doesn’t just happen to us. My concern is that we don’t recognise the signs that we’re in the last days. Someone shared this quotation from the philosopher Kierkegaard recently, and I think it describes where we’re at pretty well:

A fire broke out backstage in a theatre. The clown came out to warn the public; they thought it was a joke and applauded. He repeated it; the acclaim was even greater. I think that’s just how the world will come to an end: to general applause from wits who believe it’s a joke.

Søren Kierkegaard

Let’s hope we collectively wake up before it’s too late.


Also check out:

  • Are we on the road to civilisation collapse? (BBC Future) — “Collapse is often quick and greatness provides no immunity. The Roman Empire covered 4.4 million sq km (1.9 million sq miles) in 390. Five years later, it had plummeted to 2 million sq km (770,000 sq miles). By 476, the empire’s reach was zero.”
  • Fish farming could be the center of a future food system (Fast Company) — “Aquaculture has been shown to have 10% of the greenhouse gas emissions of beef when it’s done well, and 50% of the feed usage per unit of production as beef”
  • The global internet is disintegrating. What comes next? (BBC FutureNow) — “A separate internet for some, Facebook-mediated sovereignty for others: whether the information borders are drawn up by individual countries, coalitions, or global internet platforms, one thing is clear – the open internet that its early creators dreamed of is already gone.”

The Amazon Echo as an anatomical map of human labor, data and planetary resources

This map of what happens when you interact with a digital assistant such as the Amazon Echo is incredible. The image is taken from a lengthy piece of work that tries to draw attention to the hidden costs of using such devices.

With each interaction, Alexa is training to hear better, to interpret more precisely, to trigger actions that map to the user’s commands more accurately, and to build a more complete model of their preferences, habits and desires. What is required to make this possible? Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives.

It’s a tour de force. Here’s another extract:

When a human engages with an Echo, or another voice-enabled AI device, they are acting as much more than just an end-product consumer. It is difficult to place the human user of an AI system into a single category: rather, they deserve to be considered as a hybrid case. Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product. This multiple identity recurs for human users in many technological systems. In the specific case of the Amazon Echo, the user has purchased a consumer device for which they receive a set of convenient affordances. But they are also a resource, as their voice commands are collected, analyzed and retained for the purposes of building an ever-larger corpus of human voices and instructions. And they provide labor, as they continually perform the valuable service of contributing feedback mechanisms regarding the accuracy, usefulness, and overall quality of Alexa’s replies. They are, in essence, helping to train the neural networks within Amazon’s infrastructural stack.

Well worth a read, especially alongside another article in Bloomberg about what they call ‘oral literacy’ but which I referred to in my thesis as ‘oracy’:

Should the connection between the spoken word and literacy really be so alien to us? After all, starting in the 1950s, basic literacy training in elementary schools in the United States has involved ‘phonics.’ And what is phonics but a way of attaching written words to the sounds they had been or could become? The theory grew out of the belief that all those lines of text on the pages of schoolbooks had become too divorced from their sounds; phonics was intended to give new readers a chance to recognize written language as part of the world of language they already knew.

The technological landscape is reforming what it means to be literate in the 21st century. Interestingly, some of that is a kind of return to previous forms of human interaction that we used to value a lot more.

Sources: Anatomy of AI and Bloomberg

Systems thinking and AI

Edge is an interesting website. Its aim is:

To arrive at the edge of the world’s knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

One recent article on the site is from Mary Catherine Bateson, a writer and cultural anthropologist who retired in 2004 from her position as Professor in Anthropology and English at George Mason University. She’s got some interesting insights into systems thinking and artificial intelligence.

We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.

That’s such an interesting way of putting it, the insinuation being that some people have epistemologies (theories of knowledge) that are not nuanced enough to deal with the world in all its complexity. As a result, they use reductive metaphors that don’t work very well. This is obviously problematic when you want an AI system to do some work for you, hence the bias (racism, sexism) that has plagued the field.

One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it’s willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. It doesn’t mean you can’t bring those things into your interactions with your devices, particularly, in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it’s more multi-dimensional—are going to be lacking.

Something I always say is that technology is not neutral and that anyone who claims it to be so is a charlatan. Technologies are always designed by a person, or group of people, for a particular purpose. That person, or people, has hopes, fears, dreams, opinions, and biases. Therefore, AI has limits.
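Bateson’s point that the computer “doesn’t know what it doesn’t know” is easy to demonstrate. Here’s a minimal sketch, assuming scikit-learn: a toy classifier happily “makes projections” about an input wildly unlike anything it was trained on, and reports near-total confidence while doing so.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Train a toy classifier on two well-separated clusters of points.
X, y = make_blobs(n_samples=200, centers=[[-2, 0], [2, 0]], random_state=0)
model = LogisticRegression().fit(X, y)

# Now ask about a point nowhere near anything the model has ever seen.
far_away = np.array([[500.0, 500.0]])
print(model.predict_proba(far_away))
# Prints something like [[0. 1.]]: near-certainty in one class, with no
# signal at all that the question was outside the model's experience.
```

No humility, no imagination: just a confident projection from a model that has no way of flagging its own ignorance.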

You don’t have to know a lot of technical terminology to be a systems thinker. One of the things that I’ve been realizing lately, and that I find fascinating as an anthropologist, is that if you look at belief systems and religions going way back in history, around the world, very often what you realize is that people have intuitively understood systems and used metaphors to think about them. The example that grabbed me was thinking about the pantheon of Greek gods—Zeus and Hera, Apollo and Demeter, and all of them. I suddenly realized that in the mythology they’re married, they have children, the sun and the moon are brother and sister. There are quarrels among the gods, and marriages, divorces, and so on. So you can use the Greek pantheon, because it is based on kinship, to take advantage of what people have learned from their observation of their friends and relatives.

I like the way that Bateson talks about the difference between computer science and systems theory. It’s a bit like the argument I gave about why kids need to learn to code back in 2013: it’s more about algorithmic thinking than it is about syntax.

The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.

The article is worth reading in its entirety, as Bateson goes off at tangents that make it difficult to quote sections here. It reminds me that I need to revisit the work of Donella Meadows.

Source: Edge

Gendered AI?

Another fantastic article from Tim Carmody, a.k.a. Dr. Time:

An Echo or an iPhone is not a friend, and it is not a pet. It is an alarm clock that plays video games. It has no sentience. It has no personality. It’s a string of canned phrases that can’t understand what I’m saying unless I’m talking to it like I’m typing on the command line. It’s not genuinely interactive or conversational. Its name isn’t really a name so much as an opening command phrase. You could call one of these virtual assistants “sudo” and it would make about as much sense.

However.

I have also watched a lot (and I mean a lot) of Star Trek: The Next Generation. And while I feel pretty comfortable talking about “it” in the context of the speaker that’s sitting on the table across the room—there’s even a certain rebellious jouissance to it, since I’m spiting the technology companies whose products I use but whose intrusion into my life I resent—I feel decidedly uncomfortable declaring once and for all time that any and all AI assistants can be reduced to an “it.” It forecloses on a possibility of personhood and opens up ethical dilemmas I’d really rather avoid, even if that personhood seems decidedly unrealized at the moment.

I’m really enjoying his new ‘column’ as well as Noticing, the newsletter he curates.

Source: kottke.org