Category: 21st Century Society

Alexa for Kids as babysitter?

I’m just on my way out of the house to head for Scotland to climb some mountains with my wife.

But while she does (what I call) her ‘last-minute faffing’, I read Dan Hon’s newsletter. I’ll just quote the relevant section without any attempt at comment or analysis.

He includes references in his newsletter, but you’ll just have to click through for those.

Mat Honan reminded me that Amazon have made an Alexa for Kids (during the course of which Tom Simonite had a great story about Alexa diligently and nonplussedly educating a group of preschoolers about the history of FARC after misunderstanding their requests for farts) and Honan has a great article about it. There are now enough Alexa (plural?) out there that the phenomenon of “the funny things kids say to Alexa” is pretty well documented, as well as the earlier “Alexa is teaching my kid to be rude” observation. This isn’t to say that Amazon haven’t done *any* work thinking about how Alexa works in a kid context (Honan’s article shows that they’ve demonstrably thought about how Alexa might work and that they’ve made changes to the product to accommodate children as a specific class of user), but the overwhelming impression I had after reading Honan’s piece was that, as a parent, I still don’t think Amazon have gone far enough in making Alexa kid-friendly.

They’ve made some executive decisions, like coming down hard on curation versus algorithmic selection of content (see James Bridle’s excellent earlier essay on YouTube, “Something is wrong on the internet”, and recent coverage of YouTube Kids’ content selection method still finding ways to recommend, shall we say, videos espousing extreme views). And Amazon have addressed one of the core reported issues of having an Alexa in the house (the rudeness) by designing in support for a “magic word” Easter Egg that will reward kids for saying “please”. But that seems rather tactical, dealing with a specific issue, and not, well, foundational. I think that the foundational issue is something more like this: parenting is a *very* personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!

All of which is to say that the current design, architecture and strategy of Alexa for Kids indicates a sort of one-size-fits-all method, and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it – they might well add it down the line – it’s just that it doesn’t really exist right now. Honan’s got a great point that:

“[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to what’s polite or rude. Manners vary by family, culture, and even region. While “yes, sir” may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California.”

Some parents may have very specific views on how they want to teach their kids to be polite. This kind of thinking leads me down the path of: well, are we imagining a world where Alexa or something like it is a sort of universal basic babysitter, with default norms for everyone, while those who can pay get, well, customization? Or what someone else might call: attentive, individualized parenting?

When Alexa for Kids came out, I did about 10 seconds’ worth of thinking and, based on how Alexa gets used in our house (two parents, a five-year-old and a 19-month-old) and how our preschooler is behaving, I was pretty convinced that I’m in no way ready or willing to leave him alone with an Alexa for Kids in his room. My family is, in what some might see as that tedious middle-class way, pretty strict about the amount of screen time our kids get (unsupervised and supervised), and suffice it to say that there’s considerable difference of opinion between my wife and myself about what we’re both comfortable with, and about what level of exposure or usage might be appropriate at what point.

And here’s where I reinforce that point again: are you okay with leaving your kids with a default babysitter, or are you the kind of person who has opinions about how you want your babysitter to act with your kids? (Yes, I imagine people reading this and clutching their pearls at the mere *thought* of an Alexa “babysitting” a kid, but need I remind you that books are a technological object too, and the issue here is the degree of interactivity and access.) At least with a babysitter I can set some parameters, and I’ve got an idea of how the babysitter might interact with the kids because, well, that’s part of the babysitter screening process.

Source: Things That Have Caught My Attention s5e11

Systems thinking and AI

Edge is an interesting website. Its aim is:

To arrive at the edge of the world’s knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

One recent article on the site is from Mary Catherine Bateson, a writer and cultural anthropologist who retired in 2004 from her position as Professor in Anthropology and English at George Mason University. She’s got some interesting insights into systems thinking and artificial intelligence.

We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.

That’s such an interesting way of putting it, the insinuation being that some people have epistemologies (theories of knowledge) that are not really nuanced enough to deal with the world in all of its complexity. As a result, they use reductive metaphors that don’t really work that well. This is obviously problematic when dealing with AI that you want to do some work for you, hence the biases (racism, sexism) that have plagued the field.

One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it’s willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. It doesn’t mean you can’t bring those things into your interactions with your devices, particularly, in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it’s more multi-dimensional—are going to be lacking.

Something I always say is that technology is not neutral and that anyone who claims it to be so is a charlatan. Technologies are always designed by a person, or group of people, for a particular purpose. That person, or people, has hopes, fears, dreams, opinions, and biases. Therefore, AI has limits.

You don’t have to know a lot of technical terminology to be a systems thinker. One of the things that I’ve been realizing lately, and that I find fascinating as an anthropologist, is that if you look at belief systems and religions going way back in history, around the world, very often what you realize is that people have intuitively understood systems and used metaphors to think about them. The example that grabbed me was thinking about the pantheon of Greek gods—Zeus and Hera, Apollo and Demeter, and all of them. I suddenly realized that in the mythology they’re married, they have children, the sun and the moon are brother and sister. There are quarrels among the gods, and marriages, divorces, and so on. So you can use the Greek pantheon, because it is based on kinship, to take advantage of what people have learned from their observation of their friends and relatives.

I like the way that Bateson talks about the difference between computer science and systems theory. It’s a bit like the argument I gave about why kids need to learn to code back in 2013: it’s more about algorithmic thinking than it is about syntax.

The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.

The article is worth reading in its entirety, as Bateson goes off on tangents that make it difficult to quote sections here. It reminds me that I need to revisit the work of Donella Meadows.

Source: Edge

Automated Chinese jaywalking fines are a foretaste of so-called ‘smart cities’

Given the choice of living in a so-called ‘smart city’ and living in rural isolation, I think I’d prefer the latter. This opinion has been strengthened by reading about what’s going on in China at the moment:

Last April, the industrial capital of Shenzhen installed anti-jaywalking cameras that use facial recognition to automatically identify people crossing without a green pedestrian light; jaywalkers are shamed on a public website and their photos are displayed on large screens at the intersection.

Nearly 14,000 people were identified by the system in the first ten months of its operation. Now Intellifusion, which created the system, is planning to send warnings by WeChat and Sina Weibo messages; repeat offenders will get their social credit scores docked.

Yes, that’s right: social credit. Much more insidious than a fine, having a low social credit rating means that you can’t travel.

Certainly something to think about when you hear people talking about ‘smart cities of the future’.

Source: BoingBoing

(related: 99% Invisible podcast on the invention of ‘jaywalking’)

Moral needs and user needs

That products should be ‘user-focused’ goes without question these days, at least by everyone apart from Cassie Robinson, who writes:

This has been sitting uncomfortably with me for a while now. In part that’s because when anything becomes a bit of a dogma I question it, but it’s also because I couldn’t quite marry the mantra to my own personal experiences.

Sometimes, there’s more to consider than user stories and ‘jobs to be done’:

For example, if we are designing the new digital justice system using success measures based on how efficiently the user can complete the thing they are trying to do rather than on whether they actually receive justice, what’s at risk there? And if we prioritise that over time, are we in some way eroding the collective awareness of what “good” justice as an outcome looks like?

She makes a good point. Robinson suggests that we consider ‘moral needs’ as well as ‘user needs’:

Designing and iterating services based on current user needs and behaviours means that they are never being designed for who isn’t there. Whose voice isn’t in the data? And how will the new institutions that are needed be created unless we focus more on collective agency and collective needs?

As I continue my thinking around Project MoodleNet, this is definitely something to bear in mind.

Source: Cassie Robinson

Derek Sivers has quit Facebook (hint: you should, too)

I have huge respect for Derek Sivers, and really enjoyed his book Anything You Want. His book reviews are also worth trawling through.

In this post, which made its way to the Hacker News front page, Sivers talks about his relationship with Facebook, and why he’s finally decided to quit the platform:

When people would do their “DELETE FACEBOOK!” campaigns, I didn’t bother because I wasn’t using it anyway. It was causing me no harm. I think it’s net-negative for the world, and causing many people harm, but not me, so why bother deleting it?

But today I had a new thought:

Maybe the fact that I use it to share my blog posts is a tiny tiny reason why others are still using it. It’s like I’m still visiting friends in the smoking area, even though I don’t smoke. Maybe if I quit going entirely, it will help my friends quit, too.

Last year, I wrote a post entitled Friends don’t let friends use Facebook. The problem is, it’s difficult. Despite efforts to suggest alternatives, most of the clubs our children are part of (for activities such as swimming and karate) use Facebook. I don’t have an account, but my wife has to have one if we’re to keep up to date. It’s a vicious circle.

Like Sivers, I’ve considered just being on Facebook to promote my blog posts. But I don’t want to be part of the problem:

I had a selfish business reason to keep it. I’m going to be publishing three different books over the next year, and plan to launch a new business, too. But I’m willing to take that small loss in promotion, because it’s the right thing to do. It always feels good to get rid of things I’m not using.

So if you’ve got a Facebook account and the Cambridge Analytica revelations concern you, try to wean yourself off Facebook. It’s literally for the good of democracy.

Ultimately, as Sivers notes, Facebook will go away because of the adoption lifecycle of platforms and products. It’s difficult to think of that, but I’ll leave the last word to the late, great Ursula Le Guin:

We live in capitalism, its power seems inescapable – but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.

Source: Sivers.org

Tech will eat itself

Mike Murphy has been travelling to tech conferences: CES, MWC, and SXSW. He hasn’t been overly impressed by what he’s seen:

The role of technology should be to improve the quality of our lives in some meaningful way, or at least change our behavior. In years past, these conferences have seen the launch of technologies that have indeed impacted our lives to varying degrees, from the launch of Twitter to car stereos and video games.

However, it’s all been a little underwhelming:

People always ask me what trends I see at these events. There are the usual words I can throw out—VR, AR, blockchain, AI, big data, autonomy, automation, voice assistants, 3D-printing, drones—the list is endless, and invariably someone will write some piece on each of these at every event. But it’s rare to see something truly novel, impressive, or even more than mildly interesting at these events anymore. The blockchain has not revolutionized society, no matter what some bros would have you believe, nor has 3D-printing. Self-driving cars are still years away, AI is still mainly theoretical, and no one buys VR headsets. But these are the terms you’ll find associated with these events if you Google them.

There’s nothing of any real substance being launched at these big, shiny events:

The biggest thing people will remember from this year’s CES is that it rained the first few days and then the power went out. From MWC, it’ll be that it snowed for the first time in years in Barcelona, and from SXSW, it’ll be the Westworld in the desert (which was pretty cool). Quickly forgotten are the second-tier phones, dating apps, and robots that do absolutely nothing useful. I saw a few things of note that point toward the future—a 3D-printed house that could actually better lives in developing nations; robots that could crush us at Scrabble—but obviously, the opportunity for a nascent startup to get its name in front of thousands of techies, influential people, and potential investors can be huge. Even if it’s just an app for threesomes.

As Murphy points out, the more important the destination (i.e. where the event is held) the less important the content (i.e. what is being announced):

When real technology is involved, the destinations aren’t as important as the substance of the events. But in the case of many of these conferences, the substance is the destinations themselves.

However, that shouldn’t necessarily be cause for concern: There is still much to be excited about in technology. You just won’t find much of it at the biggest conferences of the year, which are basically spring breaks for nerds. But there is value in bringing so many similarly interested people together.

[…]

Just don’t expect the world of tomorrow to look like the marketing stunts of today.

I see these events as a way to catch up the mainstream with what’s been happening in pockets of innovation over the past year or so. Unfortunately, this is increasingly being covered in a layer of marketing spin and hype so that it’s difficult to separate the useful from the trite.

Source: Quartz

To lose old styles of reading is to lose a part of ourselves

Sometimes I think we’re living in the end times:

Out for dinner with another writer, I said, “I think I’ve forgotten how to read.”

“Yes!” he replied, pointing his knife. “Everybody has.”

“No, really,” I said. “I mean I actually can’t do it any more.”

He nodded: “Nobody can read like they used to. But nobody wants to talk about it.”

I wrote my doctoral thesis on digital literacies. There was a real sense in the 1990s that reading on screen was very different to reading on paper. We’ve kind of lost that sense of difference, and I think perhaps we need to regain it:

For most of modern life, printed matter was, as the media critic Neil Postman put it, “the model, the metaphor, and the measure of all discourse.” The resonance of printed books – their lineal structure, the demands they make on our attention – touches every corner of the world we’ve inherited. But online life makes me into a different kind of reader – a cynical one. I scrounge, now, for the useful fact; I zero in on the shareable link. My attention – and thus my experience – fractures. Online reading is about clicks, and comments, and points. When I take that mindset and try to apply it to a beaten-up paperback, my mind bucks.

We don’t really talk about ‘hypertext’ any more, as it’s almost the default type of text that we read. As such, reading on paper doesn’t really prepare us for it:

For a long time, I convinced myself that a childhood spent immersed in old-fashioned books would insulate me somehow from our new media climate – that I could keep on reading and writing in the old way because my mind was formed in pre-internet days. But the mind is plastic – and I have changed. I’m not the reader I was.

Me too. I train myself to read longer articles through mechanisms such as writing Thought Shrapnel posts and newsletters each week. But I don’t read like I used to; I read for utility rather than for pleasure or just for the sake of it.

The suggestion that, in a few generations, our experience of media will be reinvented shouldn’t surprise us. We should, instead, marvel at the fact we ever read books at all. Great researchers such as Maryanne Wolf and Alison Gopnik remind us that the human brain was never designed to read. Rather, elements of the visual cortex – which evolved for other purposes – were hijacked in order to pull off the trick. The deep reading that a novel demands doesn’t come easy and it was never “natural.” Our default state is, if anything, one of distractedness. The gaze shifts, the attention flits; we scour the environment for clues. (Otherwise, that predator in the shadows might eat us.) How primed are we for distraction? One famous study found humans would rather give themselves electric shocks than sit alone with their thoughts for 10 minutes. We disobey those instincts every time we get lost in a book.

It’s funny. We have such a connection with books, yet for most of human history we’ve done without them:

Literacy has only been common (outside the elite) since the 19th century. And it’s hardly been crystallized since then. Our habits of reading could easily become antiquated. The writer Clay Shirky even suggests that we’ve lately been “emptily praising” Tolstoy and Proust. Those old, solitary experiences with literature were “just a side-effect of living in an environment of impoverished access.” In our online world, we can move on. And our brains – only temporarily hijacked by books – will now be hijacked by whatever comes next.

There are several theses in all of this around fake news, the role of reading in a democracy, and how information spreads. For now, I continue to be amazed at the effect of the web on the fabric of societies.

Source: The Globe and Mail

Is the gig economy the mass exploitation of millennials?

The answer is, “yes, probably”.

If the living wage is a pay scale calculated to be that of an appropriate amount of money to pay a worker so they can live, how is it possible, in a legal or moral sense to pay someone less? We are witnessing a concerted effort to devalue labour, where the primary concern of business is profit, not the economic wellbeing of its employees.

The ‘sharing economy’ and ‘gig economy’ are nothing of the sort. They’re a problematic and highly disingenuous way for employers to avoid caring about the people who create value in their businesses.

The employer washes their hands of the worker. Their immediate utility is the sole concern. From a profit point of view, absolutely we can appreciate the logic. However, we forget that the worker also exists as a member of society, and when business is allowed to use and exploit people in this manner, we endanger societal cohesiveness.

The problem, of course, is late-stage capitalism:

The neoliberal project has encouraged us to adopt a hyper-individualistic approach to life and work. For all the speak of teamwork, in this economy the individual reigns supreme and it is destroying young workers. The present system has become unfeasible. The neoliberal project needs to be reeled back in. The free market needs a firm hand because the invisible one has lost its grip.

And the alternative? Co-operation.

Source: The Irish Times

Creating media, not just consuming it

My wife and I are fans of Common Sense Media, and often use their film and TV reviews when deciding what to watch as a family. In their newsletter, they had a link to an article about strategies to help kids create media, rather than just consume it:

Kids actually love to express themselves, but sometimes they feel like they don’t have much of a voice. Encouraging your kid to be more of a maker might just be a matter of pointing to someone or something they admire and giving them the technology to make their vision come alive. No matter your kids’ ages and interests, there’s a method and medium to encourage creativity.

They link to apps for younger and older children, and break things down by what kind of kids you’ve got. It’s a cliché, but nevertheless true, that every child is different. My son, for example, has just given up playing the piano, but loves making electronic music:

Most kids love music right out of the womb, so transferring that love into creation isn’t hard when they’re little. Banging on pots and pans is a good place to start — but they can take that experience with them using apps that let them play around with sound. Little kids can start to learn about instruments and how sounds fit together into music. Whether they’re budding musicians or just appreciators, older kids can use tools to compose, stay motivated, and practice regularly. And when tweens and teens want to start laying down some tracks, they can record, edit, and share their stuff.

The post is chock-full of links, so there’s something for everyone. I’m delighted to be able to pair it with a recent image Amy shared in our Slack channel, which lists the rules she has for her teenage daughter around screen time. I’d like to frame it for our house!

Source: Common Sense Media

Image: Amy Burvall (you can hire her)

Designing social systems

This article is too long and written in a way that could be more direct, but it still makes some good points. Perhaps the best bit is the comparison of the standard iOS lock screen with a redesigned one.

Most platforms encourage us to act against our values: less humbly, less honestly, less thoughtfully, and so on. Using these platforms while sticking to our values would mean constantly fighting their design. Unless we’re prepared for that fight, we’ll regret our choices.

When we join in with conversations online, we’re not always part of a group; sometimes we’re part of a network. It seems to me that most of the points the author is making pertain to social networks like Facebook, as opposed to those like Twitter and Mastodon.

He does, however, make a good point about a shift towards people feeling they have to act in a particular way:

Groups are held together by a particular kind of conversation, which I’ll call wisdom. It’s a kind of conversation that people are starved for right now—even amidst nonstop communication, amidst a torrent of articles, videos, and posts.

When this type of conversation is missing, people feel that no one understands or cares about what’s important to them. People feel their values are unheeded and unrecognized.

[T]his situation is easy to exploit, and the media and fake news ecosystems have done just that. As a result, conversations become ideological and polarized, and elections are manipulated.

Tribal politics in social networks are caused by people not having strong offline affinity groups, so they seek their ‘tribe’ online.

If social platforms can make it easier to share our personal values (like small town living) directly, and to acknowledge one another and rally around them, we won’t need to turn them into ideologies or articles. This would do more to heal politics and media than any “fake news” initiative. To do this, designers need to know what this kind of conversation sounds like, how to encourage it, and how to avoid drowning it out.

Ultimately, the author has no answer and (wisely) turns to the community for help. I like the way he points to exercises we can do and groups we can form. I’m not sure it’ll scale, though…

Source: Human Systems