Tag: Alexa

Saturday shakings

Whew, so many useful bookmarks to re-read for this week’s roundup! It took me a while, so let’s get on with it…


Cartoon picture of someone working from home

What is the future of distributed work?

To Bharat Mediratta, chief technology officer at Dropbox, the quarantine experience has highlighted a huge gap in the market. “What we have right now is a bunch of different productivity and collaboration tools that are stitched together. So I will do my product design in Figma, and then I will submit the code change on GitHub, I will push the product out live on AWS, and then I will communicate with my team using Gmail and Slack and Zoom,” he says. “We have all that technology now, but we don’t yet have the ‘digital knowledge worker operating system’ to bring it all together.”

WIRED

OK, so this is a sponsored post by Dropbox on the WIRED website, but what it highlights is interesting. For example, Monday.com (which our co-op uses) rebranded itself a few months ago as a ‘Work OS’. There’s definitely a lot of money to be made for whoever manages to build an integrated solution, although I think we’re a long way off something which is flexible enough for every use case.


The Definition of Success Is Autonomy

Today, I don’t define success the way that I did when I was younger. I don’t measure it in copies sold or dollars earned. I measure it in what my days look like and the quality of my creative expression: Do I have time to write? Can I say what I think? Do I direct my schedule or does my schedule direct me? Is my life enjoyable or is it a chore?

Ryan Holiday

Tim Ferriss has this question he asks podcast guests: “If you could have a gigantic billboard anywhere with anything on it what would it say and why?” I feel like the title of this blog post is one of the answers I would give to that question.


Do The Work

We are a small group of volunteers who met as members of the Higher Ed Learning Collective. We were inspired by the initial demand, and the idea of self-study, interracial groups. The initial decision to form this initiative is based on the myriad calls from people of color for white-bodied people to do internal work. To do the work, we are developing a space for all individuals to read, share, discuss, and interrogate perspectives on race, racism, anti-racism, identity in an educational setting. To ensure that the fight continues for justice, we need to participate in our own ongoing reflection of self and biases. We need to examine ourselves, ask questions, and learn to examine our own perspectives. We need to get uncomfortable in asking ourselves tough questions, with an understanding that this is a lifelong, ongoing process of learning.

Ian O’Byrne

This is a fantastic resource for people who, like me, are going on a learning journey at the moment. I’ve found the podcast Seeing White by Scene on Radio particularly enlightening, and at times mind-blowing. Also, the Netflix documentary 13th is excellent, and available on YouTube.


Welding a motherboard

How to Make Your Tech Last Longer

If we put a small amount of time into caring for our gadgets, they can last indefinitely. We’d also be doing the world a favor. By elongating the life of our gadgets, we put more use into the energy, materials and human labor invested in creating the product.

Brian X. Chen (The New York Times)

This is a pretty surface-level article that basically suggests people take their smartphone to a repair shop instead of buying a new one. What it doesn’t mention is that aftermarket operating systems such as the Android-based LineageOS can extend the lifetime of smartphones by providing security updates long beyond those provided by vendors.


Law enforcement arrests hundreds after compromising encrypted chat system

EncroChat sold customized Android handsets with GPS, camera, and microphone functionality removed. They were loaded with encrypted messaging apps as well as a secure secondary operating system (in addition to Android). The phones also came with a self-destruct feature that wiped the device if you entered a PIN.

The service had customers in 140 countries. While it was billed as a legitimate platform, anonymous sources told Motherboard that it was widely used among criminal groups, including drug trafficking organizations, cartels, and gangs, as well as hitmen and assassins.

EncroChat didn’t become aware that its devices had been breached until May after some users noticed that the wipe function wasn’t working. After trying and failing to restore the features and monitor the malware, EncroChat cut its SIM service and shut down the network, advising customers to dispose of their devices.

Monica Chin (The Verge)

It goes without saying that I don’t want assassins, drug traffickers, and mafia types to be successful in life. However, I’m always a little concerned when there are attacks on encryption, as they’re compromising systems also potentially used by protesters, activists, and those who oppose the status quo.


Uncovered: 1,000 phrases that incorrectly trigger Alexa, Siri, and Google Assistant

The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices.

“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”

Dan Goodin (Ars Technica)

As anyone with voice assistant-enabled devices in their home will testify, the number of times they accidentally spin up or misunderstand what you’re saying can be amusing. But we can and should be wary of what’s being listened to, and why.
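To make the researchers’ point about ‘forgiving’ matching concrete, here’s a minimal sketch in Python. It’s purely illustrative: real wake-word engines work on acoustic features rather than text, and the wake word, threshold, and example phrases below are my own assumptions, not anything from the study.

```python
# Illustrative only: a 'forgiving' wake-word matcher based on string similarity.
# Real assistants use acoustic models; the threshold here is arbitrary.
from difflib import SequenceMatcher

WAKE_WORD = "alexa"
THRESHOLD = 0.7  # lower = more forgiving = more accidental activations


def similarity(a: str, b: str) -> float:
    """Crude stand-in for acoustic similarity between two words."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def is_triggered(utterance: str) -> bool:
    """Wake up if any word in the utterance is 'close enough' to the wake word."""
    return any(similarity(word, WAKE_WORD) >= THRESHOLD
               for word in utterance.split())


if __name__ == "__main__":
    for phrase in ["Alexa, what's the weather?",
                   "Alexis texted me earlier",   # near-miss that sneaks past
                   "Pass the salt, please"]:
        print(f"{phrase!r} -> triggered: {is_triggered(phrase)}")
```

Raise the threshold and the device misses genuine commands; lower it and near-misses like names wake it up, which is exactly the trade-off Kolossa describes.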


The Five Levels of Remote Work — and why you’re probably at Level 2

Effective written communication becomes critical the more companies embrace remote work. With an aversion to ‘jumping on calls’ at a whim, and a preference for asynchronous communication… [most] communications [are] text-based, and so articulate and timely articulation becomes key.

Steve Glaveski (The Startup)

This is from March and pretty clickbait-y, but everyone wants to know how they can improve – especially if they didn’t work remotely before the pandemic. My experience is that most people are actually at Level 3 and, of course, I’d say that I and my co-op colleagues are at Level 5, given our experience…


Why Birds Can Fly Over Mount Everest

All mammals, including us, breathe in through the same opening that we breathe out. Can you imagine if our digestive system worked the same way? What if the food we put in our mouths, after digestion, came out the same way? It doesn’t bear thinking about! Luckily, for digestion, we have a separate in and out. And that’s what the birds have with their lungs: an in point and an out point. They also have air sacs and hollow spaces in their bones. When they breathe in, half of the good air (with oxygen) goes into these hollow spaces, and the other half goes into their lungs through the rear entrance. When they breathe out, the good air that has been stored in the hollow places now also goes into their lungs through that rear entrance, and the bad air (carbon dioxide and water vapor) is pushed out the front exit. So it doesn’t matter whether birds are breathing in or out: Good air is always going in one direction through their lungs, pushing all the bad air out ahead of it.

Walter Murch (Nautilus)

Incredible. Birds are badass (and also basically dinosaurs).


Montaigne Fled the Plague, and Found Himself

In the many essays of his life he discovered the importance of the moderate life. In his final essay, “On Experience,” Montaigne reveals that “greatness of soul is not so much pressing upward and forward as knowing how to circumscribe and set oneself in order.” What he finds, quite simply, is the importance of the moderate life. We must then, he writes, “compose our character, not compose books.” There is nothing paradoxical about this because his literary essays helped him better essay his life. The lesson he takes from this trial might be relevant for our own trial: “Our great and glorious masterpiece is to live properly.”

Robert Zaretsky (The New York Times)

Every week, Bryan Alexander replies to the weekly Thought Shrapnel newsletter. Last week, he sent this article to both me and Chris Lott (who produces the excellent Notabilia).

We had a bit of a chat, sharing our love of How to Live: A Life of Montaigne in One Question and Twenty Attempts at An Answer by Sarah Bakewell, as well as the useful tidbits it’s possible to glean from Stefan Zweig’s short biography simply entitled Montaigne.


Header image by Nicolas Comte

Technology is the name we give to stuff that doesn’t work properly yet

So said my namesake Douglas Adams. In fact, he said lots of wise things about technology, most of them too long to serve as a title.

I’m in a weird place, emotionally, at the moment, but sometimes this can be a good thing. Being taken out of your usual ‘autopilot’ can be a useful way to see things differently. So I’m going to take this opportunity to share three things that, to be honest, make me a bit concerned about the next few years…

Attempts to put microphones everywhere

Alexa-enabled EVERYTHING

In an article for Slate, Shannon Palus ranks all of Amazon’s new products by ‘creepiness’. The Echo Frames are, in her words:

A microphone that stays on your person all day and doesn’t look like anything resembling a microphone, nor follows any established social codes for wearable microphones? How is anyone around you supposed to have any idea that you are wearing a microphone?

Shannon Palus

When we’re not talking about weapons of mass destruction, it’s not the tech that concerns me, but the context in which the tech is used. As Palus points out, how are you going to be able to have a ‘quiet word’ with anyone wearing glasses ever again?

It’s not just Amazon, of course. Google and Facebook are at it, too.

Full-body deepfakes

Scary stuff

With the exception, perhaps, of populist politicians, I don’t think we’re ready for a post-truth society. Check out the video above, which shows Chinese technology that allows for ‘full body deepfakes’.

The video is embedded, along with a couple of others, in an article for Fast Company by DJ Pangburn, who also notes that AI is learning human body movements from videos. Not only will you be able to prank your friends by showing them a convincing video of your ability to do 100 pull-ups, but the fake news it engenders will mean we can’t trust anything any more.

Neuromarketing

If you clicked on the ‘super-secret link’ in Sunday’s newsletter, you will have come across STEALING UR FEELINGS, which is nothing short of incredible. As powerful as it is in showing you the kind of data that organisations have on us, it’s only the tip of the iceberg.

Kaveh Waddell, in an article for Axios, explains that brains are the last frontier for privacy:

“The sort of future we’re looking ahead toward is a world where our neural data — which we don’t even have access to — could be used” against us, says Tim Brown, a researcher at the University of Washington Center for Neurotechnology.

Kaveh Waddell

This would lead to ‘neuromarketing’, with advertisers knowing what triggers and influences you better than you know yourself. Also, it will no doubt be used for discriminatory purposes and, because it’s coming directly from your brainwaves, short of literally wearing a tinfoil hat, there’s nothing much you can do.


So there we are. Am I being too fearful here?

The greatest obstacle to discovery is not ignorance—it is the illusion of knowledge

So said Daniel J. Boorstin. It’s been an interesting week for those, like me, who follow the development of interaction between humans and machines. Specifically, people seem shocked that voice assistants are being used for health questions, and that the companies who make them employ people to listen to samples of voice recordings to make them better.

Before diving into that, let’s just zoom out a bit and remind ourselves that the average level of digital literacies in the general population is pretty poor. Sometimes I wonder how on earth VC-backed companies manage to burn through so much cash. Then I remember the contortions that those who design visual interfaces go through so that people don’t have to think.

In a piece for Forbes discussing ‘fake news’ and our information literacy problem, you can almost feel Kalev Leetaru’s eye-roll when he says:

It is the accepted truth of Silicon Valley that every problem has a technological solution.

Most importantly, in the eyes of the Valley, every problem can be solved exclusively through technology without requiring society to do anything on its own. A few algorithmic tweaks, a few extra lines of code and all the world’s problems can be simply coded out of existence.

Kalev Leetaru

It’s somewhat tangential to the point I want to make in this article, but Cory Doctorow makes a good point in this regard about fake news for Locus:

Fake news is an instrument for measuring trauma, and the epistemological incoherence that trauma creates – the justifiable mistrust of the establishment that has nearly murdered our planet and that insists that making the richest among us much, much richer will benefit everyone, eventually.

Cory Doctorow

Before continuing, I’d just like to say that I’ve got some skin in the voice assistant game, given that our home has no fewer than six devices that use the Google Assistant (ten if you count smartphones and tablets).

Voice assistants are pretty amazing when you know exactly what you want and can form a coherent query. It’s essentially just clicking the top link on a Google search result, without any of the effort of pointing and clicking. “Hey Google, do I need an umbrella today?”

However, some people are suspicious of voice assistants to a degree that borders on the superstitious. There are perhaps some valid reasons if you know your tech, but if you’re of the opinion that your voice assistant is ‘always recording’ and literally sending everything to Amazon, Google, Apple, and/or Donald Trump, then we need to have words. Just think about that for a moment, realise how ridiculous it is, and move on.

This week an article by VRT NWS stoked fears like these. It was cleverly written so that those who read it quickly could easily draw the conclusion that Google is listening to everything you say. However, let me carve out the key paragraphs:

Why is Google storing these recordings and why does it have employees listening to them? They are not interested in what you are saying, but the way you are saying it. Google’s computer system consists of smart, self-learning algorithms. And in order to understand the subtle differences and characteristics of the Dutch language, it still needs to learn a lot.

[…]

Speech recognition automatically generates a script of the recordings. Employees then have to double check to describe the excerpt as accurately as possible: is it a woman’s voice, a man’s voice or a child? What do they say? They write out every cough and every audible comma. These descriptions are constantly improving Google’s search engines, which results in better reactions to commands. One of our sources explains how this works.

VRT NWS

Every other provider of speech recognition products does this. Obviously. How else would you manage to improve voice recognition in real-world situations? What VRT NWS did was get a sub-contractor to break a Non-Disclosure Agreement (and violate GDPR) by sharing recordings.

Google responded on their blog The Keyword, saying:

As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language. These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.

We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.

We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.

The Keyword

As I’ve said before, due to the GDPR actually having teeth (British Airways was fined £183m last week), I’m a lot happier to share my data with large companies than I was before the legislation came in. That’s the whole point.

The other big voice assistant story, in the UK at least, was that the National Health Service (NHS) is partnering with Amazon Alexa to offer health advice. The BBC reports:

From this week, the voice-assisted technology is automatically searching the official NHS website when UK users ask for health-related advice.

The government in England said it could reduce demand on the NHS.

Privacy campaigners have raised data protection concerns but Amazon say all information will be kept confidential.

The partnership was first announced last year and now talks are under way with other companies, including Microsoft, to set up similar arrangements.

Previously the device provided health information based on a variety of popular responses.

The use of voice search is on the increase and is seen as particularly beneficial to vulnerable patients, such as elderly people and those with visual impairment, who may struggle to access the internet through more traditional means.

The BBC

So long as this is available to all types of voice assistants, this is great news. The number of people I know, including family members, who have convinced themselves they’ve got serious problems by spending ages searching their symptoms, is quite frightening. Getting sensible, prosaic advice is much better.

Iliana Magra writes in The New York Times that privacy campaigners are concerned about Amazon setting up a health care division, but that there are tangible benefits to certain sections of the population.

The British health secretary, Matt Hancock, said Alexa could help reduce strain on doctors and pharmacists. “We want to empower every patient to take better control of their health care,” he said in a statement, “and technology like this is a great example of how people can access reliable, world-leading N.H.S. advice from the comfort of their home.”

His department added that voice-assistant advice would be particularly useful for “the elderly, blind and those who cannot access the internet through traditional means.”

Iliana Magra

I’m not dismissing the privacy issues, of course not. But what I’ve found, especially recently, is that the knowledge, skills, and expertise required to be truly ‘Google-free’ (or the equivalent) are an order of magnitude greater than what is realistically possible for the general population.

It might be fatalistic to ask the following question, but I’ll do it anyway: who exactly do we expect to be building these things? Mozilla, one of the world’s largest tech non-profits, is conspicuously absent from these conversations, and somehow I don’t think people are going to trust governments to get involved.

For years, techies have talked about ‘personal data vaults’ where you could share information in a granular way without being tracked. The BBC is currently trialling its BBC Box, which could potentially help with some of this:

With a secure Databox at its heart, BBC Box offers something very unusual and potentially important: it is a physical device in the person’s home onto which personal data is gathered from a range of sources, although of course (and as mentioned above) it is only collected with the participants explicit permission, and processed under the person’s control.

Personal data is stored locally on the box’s hardware and once there, it can be processed and added to by other programmes running on the box – much like apps on a smartphone. The results of this processing might, for example be a profile of the sort of TV programmes someone might like or the sort of theatre they would enjoy. This is stored locally on the box – unless the person explicitly chooses to share it. No third party, not even the BBC itself, can access any data in ‘the box’ unless it is authorised by the person using it, offering a secure alternative to existing services which rely on bringing large quantities of personal data together in one place – with limited control by the person using it.

The BBC

It’s an interesting concept and, if they can get the user experience right, a potentially groundbreaking one. Eventually, of course, it will be in your smartphone, which means that device really will be a ‘digital self’.
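Stripped of the hardware, the core of the idea is simply that data stays local and nothing is readable without an explicit grant from the owner. Here’s a minimal sketch of that permission model in Python; all of the names are hypothetical and this is not the actual Databox or BBC Box API.

```python
# Illustrative sketch of a 'personal data vault': data lives locally and
# nothing leaves without an explicit, per-consumer grant from the owner.
# Hypothetical names throughout; not the Databox or BBC Box API.
from dataclasses import dataclass, field


@dataclass
class PersonalDataVault:
    owner: str
    _store: dict = field(default_factory=dict)   # data held locally by default
    _grants: set = field(default_factory=set)    # (consumer, key) pairs the owner approved

    def put(self, key: str, value) -> None:
        """Local apps write derived data (e.g. a TV-taste profile) into the vault."""
        self._store[key] = value

    def grant(self, consumer: str, key: str) -> None:
        """Only the owner authorises a third party to read a specific item."""
        self._grants.add((consumer, key))

    def read(self, consumer: str, key: str):
        """Third parties get data only if an explicit grant exists."""
        if (consumer, key) not in self._grants:
            raise PermissionError(f"{consumer} has no grant for '{key}'")
        return self._store[key]


if __name__ == "__main__":
    vault = PersonalDataVault(owner="viewer-123")
    vault.put("tv_profile", {"likes": ["documentaries", "drama"]})

    vault.grant("bbc-recommender", "tv_profile")
    print(vault.read("bbc-recommender", "tv_profile"))  # allowed

    try:
        vault.read("ad-broker", "tv_profile")            # no grant, so refused
    except PermissionError as error:
        print("Refused:", error)
```

The interesting design choice is that the default is refusal: the profile is computed and stored on the owner’s device, and sharing is the exception rather than the rule.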

You can absolutely opt out of whatever you want. For example, I opt out of Facebook’s products (including WhatsApp and Instagram). You can point out to others the reasons for that, but at some point you have to realise it’s an opinion, a lifestyle choice, an ideology. Not everyone wants to be a tech vegan, or live their lives under those who act as though they are one.

The Amazon Echo as an anatomical map of human labor, data and planetary resources

This map of what happens when you interact with a digital assistant such as the Amazon Echo is incredible. The image is taken from a lengthy piece of work which tries to draw attention to the hidden costs of using such devices.

With each interaction, Alexa is training to hear better, to interpret more precisely, to trigger actions that map to the user’s commands more accurately, and to build a more complete model of their preferences, habits and desires. What is required to make this possible? Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives.

It’s a tour de force. Here’s another extract:

When a human engages with an Echo, or another voice-enabled AI device, they are acting as much more than just an end-product consumer. It is difficult to place the human user of an AI system into a single category: rather, they deserve to be considered as a hybrid case. Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product. This multiple identity recurs for human users in many technological systems. In the specific case of the Amazon Echo, the user has purchased a consumer device for which they receive a set of convenient affordances. But they are also a resource, as their voice commands are collected, analyzed and retained for the purposes of building an ever-larger corpus of human voices and instructions. And they provide labor, as they continually perform the valuable service of contributing feedback mechanisms regarding the accuracy, usefulness, and overall quality of Alexa’s replies. They are, in essence, helping to train the neural networks within Amazon’s infrastructural stack.

Well worth a read, especially alongside another article in Bloomberg about what they call ‘oral literacy’ but which I referred to in my thesis as ‘oracy’:

Should the connection between the spoken word and literacy really be so alien to us? After all, starting in the 1950s, basic literacy training in elementary schools in the United States has involved ‘phonics.’ And what is phonics but a way of attaching written words to the sounds they had been or could become? The theory grew out of the belief that all those lines of text on the pages of schoolbooks had become too divorced from their sounds; phonics was intended to give new readers a chance to recognize written language as part of the world of language they already knew.

The technological landscape is reforming what it means to be literate in the 21st century. Interestingly, some of that is a kind of a return to previous forms of human interaction that we used to value a lot more.

Sources: Anatomy of AI and Bloomberg

The disappearing computer and the future of AI

I was at the Thinking Digital conference yesterday, which is always an inspiring event. It kicked off with a presentation from a representative of Amazon’s Alexa programme, who cited an article by Walt Mossberg from this time last year. I’m pretty sure I read about it, but didn’t necessarily write about it, at the time.

Mossberg talks about how computing will increasingly become invisible:

Let me start by revising the oft-quoted first line of my first Personal Technology column in the Journal on October 17th, 1991: “Personal computers are just too hard to use, and it’s not your fault.” It was true then, and for many, many years thereafter. Not only were the interfaces confusing, but most tech products demanded frequent tweaking and fixing of a type that required more technical skill than most people had, or cared to acquire. The whole field was new, and engineers weren’t designing products for normal people who had other talents and interests.

Things are different now, of course. We expect even small children to be able to use things like iPads with minimal help.

When the internet first arrived, it was a discrete activity you performed on a discrete hunk of metal and plastic called a PC, using a discrete software program called a browser. Even now, though the net is like the electrical grid, powering many things, you still use a discrete device — a smartphone, say — to access it. Sure, you can summon some internet smarts through an Echo, but there’s still a device there, and you still have to know the magic words to say. We are a long way from the invisible, omnipresent computer in Starship Enterprise.

The Amazon representative on-stage at the conference obviously believes that voice is the next frontier in computing. That’s his job. Nevertheless, he marshalled some pretty compelling, if anecdotal, evidence for that. A couple of videos showed older people, who had been completely bypassed by the smartphone revolution, interacting naturally with Alexa.

I expect that one end result of all this work will be that the technology, the computer inside all these things, will fade into the background. In some cases, it may entirely disappear, waiting to be activated by a voice command, a person entering the room, a change in blood chemistry, a shift in temperature, a motion. Maybe even just a thought.

In the same way that the front end of a website like Facebook, the user interface, is the tip of the iceberg, so voice assistants are the front end for artificial intelligence. Who gets to process the data harvested by these devices, and for what purposes, is an important issue — both now and in the future.

And, if ambient technology is to become as integrated into our lives as previous technological revolutions like wood joists, steel beams, and engine blocks, we need to subject it to the digital equivalent of enforceable building codes and auto safety standards. Nothing less will do. And health? The current medical device standards will have to be even tougher, while still allowing for innovation.

This was the last article Mossberg wrote anywhere, having been a tech journalist since 1991. In signing off, he became a little wistful about the age of gadgetry we’re leaving behind, but it’s hopefully for the wider good.

We’ve all had a hell of a ride for the last few decades, no matter when you got on the roller coaster. It’s been exciting, enriching, and transformative. But it’s also been about objects and processes. Soon, after a brief slowdown, the roller coaster will be accelerating faster than ever, only this time it’ll be about actual experiences, with much less emphasis on the way those experiences get made.

This is an important touchstone article, and one I’ll be returning to in future, no doubt.

Source: The Verge

Alexa for Kids as babysitter?

I’m just on my way out of the house to head for Scotland to climb some mountains with my wife.

But while she does (what I call) her ‘last-minute faffing’, I read Dan Hon’s newsletter. I’ll just quote the relevant section without any attempt at comment or analysis.

He includes references in his newsletter, but you’ll just have to click through for those.

Mat Honan reminded me that Amazon have made an Alexa for Kids (during the course of which Tom Simonite had a great story about Alexa diligently and non-plussedly educating a group of preschoolers about the history of FARC after misunderstanding their requests for farts) and Honan has a great article about it. There are now enough Alexa (plural?) out there that the phenomenon of “the funny things kids say to Alexa” is pretty well documented as well as the earlier “Alexa is teaching my kid to be rude” observation. This isn’t to say that Amazon haven’t done *any* work thinking about how Alexa works in a kid context (Honan’s article shows that they’ve demonstrably thought about how Alexa might work and that they’ve made changes to the product to accommodate children as a specific class of user) but the overwhelming impression I had after reading Honan’s piece was that, as a parent, I still don’t think Amazon haven’t gone far enough in making Alexa kid-friendly.

They’ve made some executive decisions like coming down hard on curation versus algorithmic selection of content (see James Bridle’s excellent earlier essay on YouTube, that something is wrong on the internet and recent coverage of YouTube Kids’ content selection method still finding ways to recommend, shall we say, videos espousing extreme views). And Amazon have addressed one of the core reported issues of having an Alexa in the house (the rudeness) by designing in support for a “magic word” Easter Egg that will reward kids for saying “please”. But that seems rather tactical and dealing with a specific issue and not, well, foundational. I think that the foundational issue is something more like this: parenting is a *very* personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!

All of which is to say is that the current design, architecture and strategy of Alexa for Kids indicates one sort of one-size-fits-all method and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it and might not add it down the line – it’s just that it doesn’t really exist right now. Honan’s got a great point that:

“[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to what’s polite or rude. Manners vary by family, culture, and even region. While “yes, sir” may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California.”

Some parents may have very specific views on how they want to teach their kids to be polite. This kind of thinking leads me down the path of: well, are we imagining a world where Alexa or something like it is a sort of universal basic babysitter, with default norms and those who can get, well, customization? Or what someone else might call: attentive, individualized parenting?

When Alexa for Kids came out, I did about 10 seconds’ worth of thinking and, based on how Alexa gets used in our house (two parents, a five year old and a 19 month old) and how our preschooler is behaving, I was pretty convinced that I’m in no way ready or willing to leave him alone with an Alexa for Kids in his room. My family is, in what some might see as that tedious middle class way, pretty strict about the amount of screen time our kids get (unsupervised and supervised) and suffice it to say that there’s considerable difference of opinion between my wife and myself on what we’re both comfortable with and at what point what level of exposure or usage might be appropriate.

And here’s where I reinforce that point again: are you okay with leaving your kids with a default babysitter, or are you the kind of person who has opinions about how you want your babysitter to act with your kids? (Yes, I imagine people reading this and clutching their pearls at the mere *thought* of an Alexa “babysitting” a kid but need I remind you that books are a technological object too and the issue here is in the degree of interactivity and access). At least with a babysitter I can set some parameters and I’ve got an idea of how the babysitter might interact with the kids because, well, that’s part of the babysitter screening process.

Source: Things That Have Caught My Attention s5e11

It doesn’t matter if you don’t use AI assistants if everyone else does

Email is an awesome system. It’s open, decentralised, and you can pick whoever you want to provide your emails. The trouble is, of course, that if you decide you don’t want a certain company, say Google, to read your emails, you only have control of your half of the equation. In other words, it doesn’t matter if you don’t want to use GMail, if most of your contacts do.

The same is true of AI assistants. You might not want an Amazon Echo device in your house, but you don’t spend all your life at home:

Amazon wants to bring Alexa to more devices than smart speakers, Fire TV and various other consumer electronics for the home, like alarm clocks. The company yesterday announced developer tools that would allow Alexa to be used in microwave ovens, for example – so you could just tell the oven what to do. Today, Amazon is rolling out a new set of developer tools, including one called the “Alexa Mobile Accessory Kit,” that would allow Alexa to work Bluetooth products in the wearable space, like headphones, smartwatches, fitness trackers, other audio devices, and more.

The future isn’t pre-ordained. We get to choose the society and culture in which we’d like to live. Huge, for-profit companies having listening devices everywhere sounds dystopian to me.

Source: TechCrunch
