
Moral needs and user needs

That products should be ‘user-focused’ goes without question these days. At least for everyone apart from Cassie Robinson, who writes:

This has been sitting uncomfortably with me for a while now. In part that’s because when anything becomes a bit of a dogma I question it, but it’s also because I couldn’t quite marry the mantra to my own personal experiences.

Sometimes, there’s more to consider than user stories and ‘jobs to be done’:

For example, if we are designing the new digital justice system using success measures based on how efficiently the user can complete the thing they are trying to do rather than on whether they actually receive justice, what’s at risk there? And if we prioritise that over time, are we in some way eroding the collective awareness of what “good” justice as an outcome looks like?

She makes a good point. Robinson suggests that we consider ‘moral needs’ as well as ‘user needs’:

Designing and iterating services based on current user needs and behaviours means that they are never being designed for who isn’t there. Whose voice isn’t in the data? And how will the new institutions that are needed be created unless we focus more on collective agency and collective needs?

As I continue my thinking around Project MoodleNet, this is definitely something to bear in mind.

Source: Cassie Robinson

Derek Sivers has quit Facebook (hint: you should, too)

I have huge respect for Derek Sivers, and really enjoyed his book Anything You Want. His book reviews are also worth trawling through.

In this post, which made its way to the Hacker News front page, Sivers talks about his relationship with Facebook, and why he’s finally decided to quit the platform:

When people would do their “DELETE FACEBOOK!” campaigns, I didn’t bother because I wasn’t using it anyway. It was causing me no harm. I think it’s net-negative for the world, and causing many people harm, but not me, so why bother deleting it?

But today I had a new thought:

Maybe the fact that I use it to share my blog posts is a tiny tiny reason why others are still using it. It’s like I’m still visiting friends in the smoking area, even though I don’t smoke. Maybe if I quit going entirely, it will help my friends quit, too.

Last year, I wrote a post entitled Friends don’t let friends use Facebook. The problem is, it’s difficult. Despite efforts to suggest alternatives, most of the clubs our children are part of (for activities such as swimming and karate) use Facebook. I don’t have an account, but my wife needs one if we’re to keep up to date. It’s a vicious circle.

Like Sivers, I’ve considered just being on Facebook to promote my blog posts. But I don’t want to be part of the problem:

I had a selfish business reason to keep it. I’m going to be publishing three different books over the next year, and plan to launch a new business, too. But I’m willing to take that small loss in promotion, because it’s the right thing to do. It always feels good to get rid of things I’m not using.

So if you’ve got a Facebook account and the Cambridge Analytica revelations concern you, try to wean yourself off Facebook. It’s literally for the good of democracy.

Ultimately, as Sivers notes, Facebook will go away because of the adoption lifecycle of platforms and products. It’s difficult to imagine that right now, but I’ll leave the last word to the late, great Ursula Le Guin:

We live in capitalism, its power seems inescapable – but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.


Tech will eat itself

Mike Murphy has been travelling to tech conferences: CES, MWC, and SXSW. He hasn’t been overly impressed by what he’s seen:

The role of technology should be to improve the quality of our lives in some meaningful way, or at least change our behavior. In years past, these conferences have seen the launch of technologies that have indeed impacted our lives to varying degrees, from the launch of Twitter to car stereos and video games.

However, it’s all been a little underwhelming:

People always ask me what trends I see at these events. There are the usual words I can throw out—VR, AR, blockchain, AI, big data, autonomy, automation, voice assistants, 3D-printing, drones—the list is endless, and invariably someone will write some piece on each of these at every event. But it’s rare to see something truly novel, impressive, or even more than mildly interesting at these events anymore. The blockchain has not revolutionized society, no matter what some bros would have you believe, nor has 3D-printing. Self-driving cars are still years away, AI is still mainly theoretical, and no one buys VR headsets. But these are the terms you’ll find associated with these events if you Google them.

There’s nothing of any real substance being launched at these big, shiny events:

The biggest thing people will remember from this year’s CES is that it rained the first few days and then the power went out. From MWC, it’ll be that it snowed for the first time in years in Barcelona, and from SXSW, it’ll be the Westworld in the desert (which was pretty cool). Quickly forgotten are the second-tier phones, dating apps, and robots that do absolutely nothing useful. I saw a few things of note that point toward the future—a 3D-printed house that could actually better lives in developing nations; robots that could crush us at Scrabble—but obviously, the opportunity for a nascent startup to get its name in front of thousands of techies, influential people, and potential investors can be huge. Even if it’s just an app for threesomes.

As Murphy points out, the more important the destination (i.e. where the event is held), the less important the content (i.e. what is being announced):

When real technology is involved, the destinations aren’t as important as the substance of the events. But in the case of many of these conferences, the substance is the destinations themselves.

However, that shouldn’t necessarily be cause for concern:

There is still much to be excited about in technology. You just won’t find much of it at the biggest conferences of the year, which are basically spring breaks for nerds. But there is value in bringing so many similarly interested people together.


Just don’t expect the world of tomorrow to look like the marketing stunts of today.

I see these events as a way of catching the mainstream up on what’s been happening in pockets of innovation over the past year or so. Unfortunately, this is increasingly coated in a layer of marketing spin and hype, making it difficult to separate the useful from the trite.

Source: Quartz

To lose old styles of reading is to lose a part of ourselves

Sometimes I think we’re living in the end times:

Out for dinner with another writer, I said, “I think I’ve forgotten how to read.”

“Yes!” he replied, pointing his knife. “Everybody has.”

“No, really,” I said. “I mean I actually can’t do it any more.”

He nodded: “Nobody can read like they used to. But nobody wants to talk about it.”

I wrote my doctoral thesis on digital literacies. There was a real sense in the 1990s that reading on screen was very different to reading on paper. We’ve kind of lost that sense of difference, and I think perhaps we need to regain it:

For most of modern life, printed matter was, as the media critic Neil Postman put it, “the model, the metaphor, and the measure of all discourse.” The resonance of printed books – their lineal structure, the demands they make on our attention – touches every corner of the world we’ve inherited. But online life makes me into a different kind of reader – a cynical one. I scrounge, now, for the useful fact; I zero in on the shareable link. My attention – and thus my experience – fractures. Online reading is about clicks, and comments, and points. When I take that mindset and try to apply it to a beaten-up paperback, my mind bucks.

We don’t really talk about ‘hypertext’ any more, as it’s almost the default type of text that we read. As such, reading on paper doesn’t really prepare us for it:

For a long time, I convinced myself that a childhood spent immersed in old-fashioned books would insulate me somehow from our new media climate – that I could keep on reading and writing in the old way because my mind was formed in pre-internet days. But the mind is plastic – and I have changed. I’m not the reader I was.

Me too. I train myself to read longer articles through mechanisms such as writing Thought Shrapnel posts and newsletters each week. But I don’t read like I used to; I read for utility rather than for pleasure or just for the sake of it.

The suggestion that, in a few generations, our experience of media will be reinvented shouldn’t surprise us. We should, instead, marvel at the fact we ever read books at all. Great researchers such as Maryanne Wolf and Alison Gopnik remind us that the human brain was never designed to read. Rather, elements of the visual cortex – which evolved for other purposes – were hijacked in order to pull off the trick. The deep reading that a novel demands doesn’t come easy and it was never “natural.” Our default state is, if anything, one of distractedness. The gaze shifts, the attention flits; we scour the environment for clues. (Otherwise, that predator in the shadows might eat us.) How primed are we for distraction? One famous study found humans would rather give themselves electric shocks than sit alone with their thoughts for 10 minutes. We disobey those instincts every time we get lost in a book.

It’s funny. We’ve such a connection with books, but for most of human history we’ve done without them:

Literacy has only been common (outside the elite) since the 19th century. And it’s hardly been crystallized since then. Our habits of reading could easily become antiquated. The writer Clay Shirky even suggests that we’ve lately been “emptily praising” Tolstoy and Proust. Those old, solitary experiences with literature were “just a side-effect of living in an environment of impoverished access.” In our online world, we can move on. And our brains – only temporarily hijacked by books – will now be hijacked by whatever comes next.

There are several theses in all of this around fake news, the role of reading in a democracy, and how information spreads. For now, I continue to be amazed at the power of the web over the fabric of societies.

Source: The Globe and Mail

Is the gig economy the mass exploitation of millennials?

The answer is, “yes, probably”.

If the living wage is a pay scale calculated to be that of an appropriate amount of money to pay a worker so they can live, how is it possible, in a legal or moral sense to pay someone less? We are witnessing a concerted effort to devalue labour, where the primary concern of business is profit, not the economic wellbeing of its employees.

The ‘sharing economy’ and ‘gig economy’ are nothing of the sort. They’re a problematic and highly disingenuous way for employers to avoid caring about the people who create value in their businesses.

The employer washes their hands of the worker. Their immediate utility is the sole concern. From a profit point of view, absolutely we can appreciate the logic. However, we forget that the worker also exists as a member of society, and when business is allowed to use and exploit people in this manner, we endanger societal cohesiveness.

The problem, of course, is late-stage capitalism:

The neoliberal project has encouraged us to adopt a hyper-individualistic approach to life and work. For all the speak of teamwork, in this economy the individual reigns supreme and it is destroying young workers. The present system has become unfeasible. The neoliberal project needs to be reeled back in. The free market needs a firm hand because the invisible one has lost its grip.

And the alternative? Co-operation.

Source: The Irish Times

Creating media, not just consuming it

My wife and I are fans of Common Sense Media, and often use their film and TV reviews when deciding what to watch as a family. In their newsletter, they had a link to an article about strategies to help kids create media, rather than just consume it:

Kids actually love to express themselves, but sometimes they feel like they don’t have much of a voice. Encouraging your kid to be more of a maker might just be a matter of pointing to someone or something they admire and giving them the technology to make their vision come alive. No matter your kids’ ages and interests, there’s a method and medium to encourage creativity.

They link to apps for younger and older children, and break things down by what kind of kids you’ve got. It’s a cliché, but nevertheless true, that every child is different. My son, for example, has just given up playing the piano, but loves making electronic music:

Most kids love music right out of the womb, so transferring that love into creation isn’t hard when they’re little. Banging on pots and pans is a good place to start — but they can take that experience with them using apps that let them play around with sound. Little kids can start to learn about instruments and how sounds fit together into music. Whether they’re budding musicians or just appreciators, older kids can use tools to compose, stay motivated, and practice regularly. And when tweens and teens want to start laying down some tracks, they can record, edit, and share their stuff.

The post is chock-full of links, so there’s something for everyone. I’m delighted to be able to pair it with a recent image Amy shared in our Slack channel, which lists the rules she has for her teenage daughter around screen time. I’d like to frame it for our house!

Source: Common Sense Media

Image: Amy Burvall (you can hire her)

Designing social systems

This article is too long and could be more direct, but it still makes some good points. Perhaps the best bit is the comparison of the default iOS lock screen with a redesigned one.

Most platforms encourage us to act against our values: less humbly, less honestly, less thoughtfully, and so on. Using these platforms while sticking to our values would mean constantly fighting their design. Unless we’re prepared for that fight, we’ll regret our choices.

When we join in with conversations online, we’re not always part of a group; sometimes we’re part of a network. It seems to me that most of the points the author makes pertain to social networks like Facebook, as opposed to those like Twitter and Mastodon.

He does, however, make a good point about a shift towards people feeling they have to act in a particular way:

Groups are held together by a particular kind of conversation, which I’ll call wisdom. It’s a kind of conversation that people are starved for right now—even amidst nonstop communication, amidst a torrent of articles, videos, and posts.

When this type of conversation is missing, people feel that no one understands or cares about what’s important to them. People feel their values are unheeded and unrecognized.

[T]his situation is easy to exploit, and the media and fake news ecosystems have done just that. As a result, conversations become ideological and polarized, and elections are manipulated.

Tribal politics in social networks are caused by people not having strong offline affinity groups, so they seek their ‘tribe’ online.

If social platforms can make it easier to share our personal values (like small town living) directly, and to acknowledge one another and rally around them, we won’t need to turn them into ideologies or articles. This would do more to heal politics and media than any “fake news” initiative. To do this, designers need to know what this kind of conversation sounds like, how to encourage it, and how to avoid drowning it out.

Ultimately, the author has no answer and (wisely) turns to the community for help. I like the way he points to exercises we can do and groups we can form. I’m not sure it’ll scale, though…

Source: Human Systems

The military implications of fitness tech

I was talking about this last night with a guy who used to be in the army. It’s a BFD.

In March 2017, a member of the Royal Navy ran around HMNB Clyde, the high-security military base that’s home to Trident, the UK’s nuclear deterrent. His pace wasn’t exceptional, but it wasn’t leisurely either.

His run, like millions of others around the world, was recorded through the Strava app. A heatmap of more than one billion activities – comprising of 13 billion GPS data points – has been criticised for showing the locations of supposedly secretive military bases. It was thought that, at the very least, the data was totally anonymised. It isn’t.


The fitness app – which can record a person’s GPS location and also host data from devices such as Fitbits and Garmin watches – allows users to create segments and leaderboards. These are areas where a run, swim, or bike ride can be timed and compared. Segments can be seen on the Strava website, rather than on the heatmap.

Computer scientist and developer Steve Loughran detailed how to create a GPS segment and upload it to Strava as an activity. Once uploaded, a segment shows the top times of people running in an area. Which is how it’s possible to see the running routes of people inside the high-security walls of HMNB Clyde.
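Loughran’s write-up shows just how simple the mechanics are. As a rough illustration only (this isn’t his code, and every coordinate, timestamp, and name below is made up), here’s a minimal Python sketch that generates the kind of GPX trace Strava accepts as an uploaded activity:

```python
# Rough sketch, not Loughran's actual code: fabricate a GPX trace that
# could be uploaded to Strava via its website as an activity. All
# coordinates, times, and names here are hypothetical.
from datetime import datetime, timedelta, timezone

def make_gpx(points, start, pace_s=5):
    """Build a GPX 1.1 document from (lat, lon) pairs, stamping one
    point every pace_s seconds so the trace looks like a steady run."""
    trkpts = []
    for i, (lat, lon) in enumerate(points):
        t = (start + timedelta(seconds=i * pace_s)).strftime("%Y-%m-%dT%H:%M:%SZ")
        trkpts.append(f'      <trkpt lat="{lat:.6f}" lon="{lon:.6f}"><time>{t}</time></trkpt>')
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<gpx version="1.1" creator="sketch" xmlns="http://www.topografix.com/GPX/1/1">\n'
        "  <trk><name>probe run</name>\n    <trkseg>\n"
        + "\n".join(trkpts)
        + "\n    </trkseg>\n  </trk>\n</gpx>\n"
    )

# A hypothetical straight-line "run": 50 points heading north,
# roughly 11 metres apart, i.e. a plausible running pace.
start = datetime(2018, 3, 1, 7, 0, tzinfo=timezone.utc)
points = [(55.900000 + i * 0.0001, -4.800000) for i in range(50)]
with open("probe.gpx", "w") as f:
    f.write(make_gpx(points, start))
```

The unsettling part is how little this takes: a few dozen plausible timestamped points are enough to register an activity, and from there to surface the leaderboard of everyone else who has passed through the same area.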

Of course, this is an operational security issue. Military personnel shouldn’t really be using Strava while living and working on bases.

“The underlying problem is that the devices we wear, carry and drive are now continually reporting information about where and how they are used ‘somewhere’,” Loughran said. “In comparison to the datasets which the largest web companies have, Strava’s is a small set of files, voluntarily uploaded by active users.”

Source: WIRED

No cash, no freedom?

The ‘cashless’ society, eh?

Every time someone talks about getting rid of cash, they are talking about getting rid of your freedom. Every time they actually limit cash, they are limiting your freedom. It does not matter if the people doing it are wonderful Scandinavians or Hindu supremacist Indians, they are people who want to know and control what you do to an unprecedentedly fine-grained scale.

Yep, just because someone cool is doing it doesn’t mean it won’t have bad consequences. In the rush to add technology to things, we create future dystopias.

Cash isn’t completely anonymous. There’s a reason why old fashioned crooks with huge cash flows had to money-launder: Governments are actually pretty good at saying, “Where’d you get that from?” and getting an explanation. Still, it offers freedom, and the poorer you are, the more freedom it offers. It also is very hard to track specifically, i.e., who made what purchase.

Blockchains won’t be untaxable. The ones which truly are unbreakable will be made illegal; the ones that remain, well, it’s a ledger with every transaction on it, for goodness sakes.

It’s this bit that concerns me:

We are creating a society where even much of what you say, will be knowable and indeed, may eventually be tracked and stored permanently.

If you do not understand why this is not just bad, but terrible, I cannot explain it to you. You have some sort of mental impairment of imagination and ethics.

Source: Ian Welsh

Using VR with kids

I’ve seen conflicting advice regarding using Virtual Reality (VR) with kids, so it’s good to see this from the LSE:

Children are becoming aware of virtual reality (VR) in increasing numbers: in autumn 2016, 40% of those aged 2-15 surveyed in the US had never heard of VR, and this number was halved less than one year later. While the technology is appealing and exciting to children, its potential health and safety issues remain questionable, as there is, to date, limited research into its long-term effects.

I have given my two children (six and nine at the time) experience of VR, albeit in limited bursts. My main concern is about their eyesight.

As a young technology there are still many unknowns about the long-term risks and effects of VR gaming, although Dubit found no negative effects from short-term play for children’s visual acuity, and little difference between pre- and post-VR play in stereoacuity (which relies on good eyesight for both eyes and good coordination between the two) and balance tests. Only 2 of the 15 children who used the fully immersive head-mounted display showed some stereoacuity after-effects, and none of those using the low-cost Google Cardboard headset showed any. Similarly, a few seemed to be at risk of negative after-effects to their balance after using VR, but most showed no problems.

There’s some good advice in this post for VR games/experience designers, and for parents. I’ll quote the latter:

While much of a child’s experience with VR may still be in museums, schools or other educational spaces under the guidance of trained adults, as the technology becomes more available in domestic settings, to ensure health and safety at home, parents and carers need to:

  • Allow children to preview the game on YouTube, if available.
  • Provide children with time to readjust to the real world after playing, and give them a break before engaging with activities like crossing roads, climbing stairs or riding bikes, to ensure that balance is restored.
  • Check on the child’s physical and emotional wellbeing after they play.

There’s a surprising lack of regulation and guidance in this space, so it’s good to see the LSE taking the initiative!

Source: Parenting for a Digital Future