Tag: Alexa

The Amazon Echo as an anatomical map of human labor, data and planetary resources

This map of what happens when you interact with a digital assistant such as the Amazon Echo is incredible. The image is taken from a lengthy piece of work that tries to draw attention to the hidden costs of using such devices.

With each interaction, Alexa is training to hear better, to interpret more precisely, to trigger actions that map to the user’s commands more accurately, and to build a more complete model of their preferences, habits and desires. What is required to make this possible? Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives.

It’s a tour de force. Here’s another extract:

When a human engages with an Echo, or another voice-enabled AI device, they are acting as much more than just an end-product consumer. It is difficult to place the human user of an AI system into a single category: rather, they deserve to be considered as a hybrid case. Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product. This multiple identity recurs for human users in many technological systems. In the specific case of the Amazon Echo, the user has purchased a consumer device for which they receive a set of convenient affordances. But they are also a resource, as their voice commands are collected, analyzed and retained for the purposes of building an ever-larger corpus of human voices and instructions. And they provide labor, as they continually perform the valuable service of contributing feedback mechanisms regarding the accuracy, usefulness, and overall quality of Alexa’s replies. They are, in essence, helping to train the neural networks within Amazon’s infrastructural stack.

Well worth a read, especially alongside another article in Bloomberg about what they call ‘oral literacy’ but which I referred to in my thesis as ‘oracy’:

Should the connection between the spoken word and literacy really be so alien to us? After all, starting in the 1950s, basic literacy training in elementary schools in the United States has involved ‘phonics.’ And what is phonics but a way of attaching written words to the sounds they had been or could become? The theory grew out of the belief that all those lines of text on the pages of schoolbooks had become too divorced from their sounds; phonics was intended to give new readers a chance to recognize written language as part of the world of language they already knew.

The technological landscape is reshaping what it means to be literate in the 21st century. Interestingly, some of that is a kind of return to previous forms of human interaction that we used to value much more highly.

Sources: Anatomy of AI and Bloomberg

The disappearing computer and the future of AI

I was at the Thinking Digital conference yesterday, which is always an inspiring event. It kicked off with a presentation from a representative of Amazon’s Alexa programme, who cited an article by Walt Mossberg from this time last year. I’m pretty sure I read it at the time, but I don’t think I wrote about it.

Mossberg talks about how computing will increasingly become invisible:

Let me start by revising the oft-quoted first line of my first Personal Technology column in the Journal on October 17th, 1991: “Personal computers are just too hard to use, and it’s not your fault.” It was true then, and for many, many years thereafter. Not only were the interfaces confusing, but most tech products demanded frequent tweaking and fixing of a type that required more technical skill than most people had, or cared to acquire. The whole field was new, and engineers weren’t designing products for normal people who had other talents and interests.

Things are different now, of course. We expect even small children to be able to use things like iPads with minimal help.

When the internet first arrived, it was a discrete activity you performed on a discrete hunk of metal and plastic called a PC, using a discrete software program called a browser. Even now, though the net is like the electrical grid, powering many things, you still use a discrete device — a smartphone, say — to access it. Sure, you can summon some internet smarts through an Echo, but there’s still a device there, and you still have to know the magic words to say. We are a long way from the invisible, omnipresent computer in Starship Enterprise.

The Amazon representative on-stage at the conference obviously believes that voice is the next frontier in computing. That’s his job. Nevertheless, he marshalled some pretty compelling, if anecdotal, evidence for that. A couple of videos showed older people, who had been completely bypassed by the smartphone revolution, interacting naturally with Alexa.

I expect that one end result of all this work will be that the technology, the computer inside all these things, will fade into the background. In some cases, it may entirely disappear, waiting to be activated by a voice command, a person entering the room, a change in blood chemistry, a shift in temperature, a motion. Maybe even just a thought.

In the same way that the front end of a website like Facebook, the user interface, is only the tip of the iceberg, so voice assistants are the front end for artificial intelligence. Who gets to process the data harvested by these devices, and for what purposes, is an important issue, both now and in the future.

And, if ambient technology is to become as integrated into our lives as previous technological revolutions like wood joists, steel beams, and engine blocks, we need to subject it to the digital equivalent of enforceable building codes and auto safety standards. Nothing less will do. And health? The current medical device standards will have to be even tougher, while still allowing for innovation.

This was the last article Mossberg wrote anywhere, having been a tech journalist since 1991. In signing off, he was a little wistful about the age of gadgetry we’re leaving behind, but hopefully it’s for the wider good.

We’ve all had a hell of a ride for the last few decades, no matter when you got on the roller coaster. It’s been exciting, enriching, and transformative. But it’s also been about objects and processes. Soon, after a brief slowdown, the roller coaster will be accelerating faster than ever, only this time it’ll be about actual experiences, with much less emphasis on the way those experiences get made.

This is an important touchstone article, and one I’ll be returning to in future, no doubt.

Source: The Verge

Alexa for Kids as babysitter?

I’m just on my way out of the house to head for Scotland to climb some mountains with my wife.

But while she does (what I call) her ‘last-minute faffing’, I read Dan Hon’s newsletter. I’ll just quote the relevant section without any attempt at comment or analysis.

He includes references in his newsletter, but you’ll just have to click through for those.

Mat Honan reminded me that Amazon have made an Alexa for Kids (during the course of which Tom Simonite had a great story about Alexa diligently and non-plussedly educating a group of preschoolers about the history of FARC after misunderstanding their requests for farts) and Honan has a great article about it. There are now enough Alexa (plural?) out there that the phenomenon of “the funny things kids say to Alexa” is pretty well documented as well as the earlier “Alexa is teaching my kid to be rude” observation. This isn’t to say that Amazon haven’t done *any* work thinking about how Alexa works in a kid context (Honan’s article shows that they’ve demonstrably thought about how Alexa might work and that they’ve made changes to the product to accommodate children as a specific class of user) but the overwhelming impression I had after reading Honan’s piece was that, as a parent, I still don’t think Amazon have gone far enough in making Alexa kid-friendly.

They’ve made some executive decisions like coming down hard on curation versus algorithmic selection of content (see James Bridle’s excellent earlier essay on YouTube, that something is wrong on the internet and recent coverage of YouTube Kids’ content selection method still finding ways to recommend, shall we say, videos espousing extreme views). And Amazon have addressed one of the core reported issues of having an Alexa in the house (the rudeness) by designing in support for a “magic word” Easter Egg that will reward kids for saying “please”. But that seems rather tactical and dealing with a specific issue and not, well, foundational. I think that the foundational issue is something more like this: parenting is a *very* personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!

All of which is to say that the current design, architecture and strategy of Alexa for Kids indicates one sort of one-size-fits-all method and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it and might not add it down the line – it’s just that it doesn’t really exist right now. Honan’s got a great point that:

“[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to what’s polite or rude. Manners vary by family, culture, and even region. While “yes, sir” may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California.”

Some parents may have very specific views on how they want to teach their kids to be polite. This kind of thinking leads me down the path of: well, are we imagining a world where Alexa or something like it is a sort of universal basic babysitter, with default norms, and those who can pay get, well, customization? Or what someone else might call: attentive, individualized parenting?

When Alexa for Kids came out, I did about 10 seconds’ worth of thinking and, based on how Alexa gets used in our house (two parents, a five year old and a 19 month old) and how our preschooler is behaving, I was pretty convinced that I’m in no way ready or willing to leave him alone with an Alexa for Kids in his room. My family is, in what some might see as that tedious middle class way, pretty strict about the amount of screen time our kids get (unsupervised and supervised) and suffice it to say that there’s considerable difference of opinion between my wife and myself on what we’re both comfortable with and at what point what level of exposure or usage might be appropriate.

And here’s where I reinforce that point again: are you okay with leaving your kids with a default babysitter, or are you the kind of person who has opinions about how you want your babysitter to act with your kids? (Yes, I imagine people reading this and clutching their pearls at the mere *thought* of an Alexa “babysitting” a kid but need I remind you that books are a technological object too and the issue here is in the degree of interactivity and access). At least with a babysitter I can set some parameters and I’ve got an idea of how the babysitter might interact with the kids because, well, that’s part of the babysitter screening process.

Source: Things That Have Caught My Attention s5e11

It doesn’t matter if you don’t use AI assistants if everyone else does

Email is an awesome system. It’s open, decentralised, and you can pick whoever you want to provide your email. The trouble is, of course, that if you decide you don’t want a certain company, say Google, to read your emails, you only have control over your half of the equation. In other words, it doesn’t matter that you don’t use Gmail if most of your contacts do.

The same is true of AI assistants. You might not want an Amazon Echo device in your house, but you don’t spend all your life at home:

Amazon wants to bring Alexa to more devices than smart speakers, Fire TV and various other consumer electronics for the home, like alarm clocks. The company yesterday announced developer tools that would allow Alexa to be used in microwave ovens, for example – so you could just tell the oven what to do. Today, Amazon is rolling out a new set of developer tools, including one called the “Alexa Mobile Accessory Kit,” that would allow Alexa to work with Bluetooth products in the wearable space, like headphones, smartwatches, fitness trackers, other audio devices, and more.

The future isn’t pre-ordained. We get to choose the society and culture in which we’d like to live. Huge, for-profit companies having listening devices everywhere sounds dystopian to me.

Source: TechCrunch