Idleness always produces fickle changes of mind

If you've never read Michel de Montaigne's Essays, you're missing a treat. He's thought of as the prototypical 'blogger', and most of what he wrote has survived the vicissitudes of opinion over the last 450 years. The quotation for today's article comes from him.

As Austin Kleon notes in the post that accompanies the image illustrating this article, idleness is not the same as laziness:

I’m... a practitioner of intentional idleness: blocking off time in which I can do absolutely nothing. (Like Terry Gilliam, I would like to be known as an “Arch Idler.”) “Creative people need time to just sit around and do nothing,” I wrote in Steal Like An Artist. (See Jenny Odell’s How To Do Nothing, Robert Louis Stevenson’s An Apology for Idlers, Tom Hodgkinson’s “The Idle Parent,” Tim Kreider’s “The ‘Busy’ Trap,” etc.)

Austin Kleon

There's a great post on The Art of Manliness by Brett and Kate McKay about practising productive procrastination, and how positive it can be. They break down the types of tasks that we perform on an average day into three groups:

Tier 1: tasks that are the most cognitively demanding — hard decisions, challenging writing, boring reading, tough analysis, etc.

Tier 2: tasks that take effort, but not as much — administrative work, making appointments, answering emails, etc.

Tier 3: tasks that still require a bit of effort, but in terms of cognitive load are nearly mindless — cleaning, organizing, filing, paying bills, etc.

Brett and Kate McKay

As I've said many times before, I can only do about four hours of really deep work (the 'Tier 1' tasks) per day. Of course, the demands of any job, and most life admin, fall mostly into Tier 2, with a bit of Tier 3 for good measure.

The thrust of their mantra to 'practise productive procrastination' is that, if you're not feeling up to a Tier 1 task, you should do a Tier 2 or Tier 3 task. Apparently (and I have to say I'm obviously not their target audience here) most people, rather than tackling a Tier 1 task, do nothing useful at all: they check Facebook, gossip, and play games.

The trouble is that new workplace tools can almost encourage us into low-level tasks, as an article by Rani Molla for Recode explains:

On average, employees at large companies are each sending more than 200 Slack messages per week, according to Time Is Ltd., a productivity-analytics company that taps into workplace programs — including Slack, calendar apps, and the Office Suite — in order to give companies recommendations on how to be more productive. Power users sending out more than 1,000 messages per day are “not an exception.”

Keeping up with these conversations can seem like a full-time job. After a while, the software goes from helping you work to making it impossible to get work done.

Rani Molla

Constant interruptions aren't good for deep work, nor are open plan offices. However, I remember working in an office that had both. There was a self-policed time shortly after lunch (never officially sanctioned or promoted) where, for an hour or two, people really got 'in the zone'. It was great.

What we need is a way to block out our calendars for unstructured but deep work, and to be trusted to do so. I think most workplaces and most bosses would actually be OK with this. Perhaps we just need to get on with it?



Friday finds

Check out these links that I came across this week and thought you'd find interesting:

  • Netflix Saves Our Kids From Up To 400 Hours of Commercials a Year (Local Babysitter) — "We calculated a series of numbers related to standard television homes, compared them to Netflix-only homes and found an interesting trend with regard to how many commercials a streaming-only household can save their children from having to watch."
  • The Emotional Charge of What We Throw Away (Kottke.org) — "consumers actually care more about how their stuff is discarded, than how it is manufactured"
  • Sidewalk Labs' street signs alert people to data collection in use (Engadget) — "The idea behind Sidewalk Labs' icons is pretty simple. The company wants to create an image-based language that can quickly convey information to people the same way that street and traffic signs do. Icons on the signs would show if cameras or other devices are capturing video, images, audio or other information."
  • The vision of the home as a tranquil respite from labour is a patriarchal fantasy (Dezeen) — "[F]or a growing number of critics, the nuclear house is a deterministic form of architecture which stifles individual and collective potential. Designed to enforce a particular social structure, nuclear housing hardwires divisions in labour, gender and class into the built fabric of our cities. Is there now a case for architects to take a stand against nuclear housing?"
  • The Anarchists Who Took the Commuter Train (Longreads) — "In the twenty-first century, the word “anarchism” evokes images of masked antifa facing off against neo-Nazis. What it meant in the early twentieth century was different, and not easily defined."

Image from These gorgeous tiny houses can operate entirely off the grid (Fast Company)

The school system is a modern phenomenon, as is the childhood it produces

Good old Ivan Illich with today's quotation-as-title. If you haven't read his Deschooling Society yet, you must. Given that actions speak louder than words, it really makes you think about what we're actually doing to children when we send them off to the world of formal education.

The pupil is thereby "schooled" to confuse teaching with learning, grade advancement with education, a diploma with competence, and fluency with the ability to say something new.

Ivan Illich

I left teaching almost a decade ago and still have a strong connection to the classroom through my wife (who's a teacher), my children (who are at school) and my friends/network (many of whom are involved in formal education).

That's why a post entitled The Absurd Structure of High School by Bernie Bleske resonated with me, even though it's based on his experience in the US:

The system’s scheduling fails on every possible level. If the goal is productivity, the fractured nature of the tasks undermines efficient product. So much time is spent in transition that very little is accomplished before there is a demand to move on. If the goal is maximum content conveyed, then the system works marginally well, in that students are pretty much bombarded with detail throughout their school day. However, that breadth of content comes at the cost of depth of understanding. The fractured nature of the work, the short amount of time provided, and the speed of change all undermine learning beyond the superficial. It’s shocking, really, that students learn as much as they do.

Bernie Bleske

We've known for a long time now that a 'stage, not age' approach is much better for everyone involved. My daughter enjoys school but, sadly, is pretty bored there. And, frustratingly, there's not much we as parents can do about it.

If you've got an academically-able child, on the surface it seems like part of the problem is them being 'held back' by their peers. However, there's little empirical evidence for this being true — as Oscar Hedstrom points out in Why streaming kids according to ability is a terrible idea:

Despite all this, there is limited empirical evidence to suggest that streaming results in better outcomes for students. Professor John Hattie, director of the Melbourne Education Research Institute, notes that ‘tracking has minimal effects on learning outcomes and profound negative equity effects’. Streaming significantly – and negatively – affects those students placed in the bottom sets. These students tend to have much higher representation of low socioeconomic backgrounds. Less significant is the small benefit for those lucky clever students in the higher sets. The overall result is relative inequality. The smart stay smart, and the dumb get dumber, further entrenching social disadvantage.

Oscar Hedstrom

I worked in a school in a rough area that streamed kids based on the results of a 'literacy skills' test on entry. The result was actually middle-class segregation within the school. As a child, I went to a pretty tough school in an ex-mining town myself, which was a bit more integrated.

The trouble with all of this is that most of the learning that happens in school is inside some form of classroom. As a recent Innovation Unit report entitled Local Learning Ecosystems: emerging models discusses, 'learning ecosystem' is a bit of a buzz-term at the moment, but with potentially useful applications:

It remains to be seen whether the education ecosystem idea, as expressed in these varieties, will evolve as a truly significant new driver in public education on a large scale. These initiatives reflect ambitious visions well beyond current achievements. Conventional systems, with their excessive assessment routines, pressurized school communities, and entrenched vestigial approaches, are difficult to shift. But this report offers a taste of the creative flourishing in education thinking today that has emerged against, and perhaps in response to, the erosion of resources for public education, often abetted by indifferent, even hostile government.

Local Learning Ecosystems: emerging models

My go-to book around all of this is still Prof. Keri Facer's excellent Learning Futures: education, technology and social change. I still haven't come across another book with such a hopeful, practical vision for the future since reading it when it came out in 2011.

Hopefully, taking a learning ecosystem or 'ecology' approach will provide the necessary shift of perspective to move us to the world beyond (just) classrooms.



Form is the possibility of structure

The philosopher Ludwig Wittgenstein with today's quotation-as-title. I'm using it as a way in to discuss some things around city planning, and in particular an article I've been meaning to discuss for what seems like ages.

In an article for The LA Times, Jessica Roy highlights a phenomenon I wish I could go back and show my 12-year-old self:

Thirty years ago, Maxis released “SimCity” for Mac and Amiga. It was succeeded by “SimCity 2000” in 1993, “SimCity 3000” in 1999, “SimCity 4” in 2003, a version for the Nintendo DS in 2007, “SimCity: BuildIt” in 2013 and an app launched in 2014.

Along the way, the games have introduced millions of players to the joys and frustrations of zoning, street grids and infrastructure funding — and influenced a generation of people who plan cities for a living. For many urban and transit planners, architects, government officials and activists, “SimCity” was their first taste of running a city. It was the first time they realized that neighborhoods, towns and cities were things that were planned, and that it was someone's job to decide where streets, schools, bus stops and stores were supposed to go.

Jessica Roy

Some games are just awesome. SimCity is still popular now on touchscreen devices, and my kids play it occasionally. It's interesting to read in the article how different people, now responsible for real cities, played the game. For example, Roy quotes the Vice President of Transportation and Housing at the non-profit Silicon Valley Leadership Group:

"I was not one of the players who enjoyed Godzilla running through your city and destroying it. I enjoyed making my city run well."

Jason Baker

I, on the other hand, particularly enjoyed booting up 'scenario mode' where you had to rescue a city that had been ravaged by Godzilla, aliens, or a natural disaster.

This isn't an article about nostalgia, though, and if you read the article in more depth you realise that it's an interesting insight into our psychology around governance of cities and nations. For example, going back to an article from 2018 that also references SimCity, Devon Zuegel writes:

The way we live is shaped by our infrastructure — the public spaces, building codes, and utilities that serve a city or region. It can act as the foundation for thriving communities, but it can also establish unhealthy patterns when designed poorly.

[...]

People choose to drive despite its costs because they lack reasonable alternatives. Unfortunately, this isn’t an accident of history. Our transportation system has been overly focused on automobile traffic flow as its metric of success. This single-minded focus has come at the cost of infrastructure that supports alternative ways to travel. Traffic flow should, instead, be one goal out of many. Communities would be far healthier if our infrastructure actively encouraged walking, cycling, and other forms of transportation rather than subsidizing driving and ignoring alternatives.

Devon Zuegel

In other words, the decisions we ask our representatives to make have a material impact in shaping our environment. That, in turn, affects our decisions about how to live and work.

When we don't have data about what people actually do, it's easy for ideology and opinions to get in the way. That's why I'm interested in what Los Angeles is doing with its public transport system. As reported by Adam Rogers in WIRED, the city is using mobile phone data to see how it can 'reboot' its bus system. It turns out that the people running the system had completely the wrong assumptions:

In fact, Metro's whole approach turned out to be skewed to the wrong kinds of trips. “Traditionally we're trying to provide fast service for long-distance trips,” [Anurag Komanduri, a data analyst] says. That's something the Orange Line and trains are good at. But the cell phone data showed that only 16 percent of trips in LA County were longer than 10 miles. Two-thirds of all travel was less than five miles. Short hops, not long hauls, rule the roads.

Adam Rogers
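
That kind of analysis is conceptually simple once you have anonymised trip data. Here's a minimal sketch (in Python, with made-up distances rather than anything from LA Metro's actual dataset) of what bucketing trips by length might look like:

```python
# A minimal sketch: bucketing trips by distance to see whether short hops
# or long hauls dominate. The distances below are illustrative, not real data.
from collections import Counter

trip_distances_miles = [1.2, 0.8, 3.4, 11.0, 4.7, 2.1, 15.3, 0.5, 6.8, 3.9]

def bucket(distance):
    """Assign a trip to a coarse distance band."""
    if distance < 5:
        return "under 5 miles"
    elif distance <= 10:
        return "5-10 miles"
    return "over 10 miles"

counts = Counter(bucket(d) for d in trip_distances_miles)
total = len(trip_distances_miles)

for band, count in counts.most_common():
    print(f"{band}: {count / total:.0%} of trips")
```

The interesting part isn't the code, of course, but that nobody had bothered to run the numbers before designing the service.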

There's some discussion later in the article about the "baller move" of ripping down some of the freeways to force people to use public transportation. Perhaps that's actually what's required.

In Barcelona, for example, "fiery leftist housing activist" Ada Colau became the city's mayor in 2015. Since then, they've been doing some radical experimentation. David Roberts reports for Vox on what they've done with one area of the city that I've actually seen with my own eyes:

Inside the superblock in the Poblenou neighborhood, in the middle of what used to be an intersection, there’s a small playground, with a set of about a dozen picnic tables next to it, just outside a local cafe. On an early October evening, neighbors sit and sip drinks to the sound of children’s shouts and laughter. The sun is still out, and the warm air smells of wild grasses growing in the fresh plantings nearby.

David Roberts

I can highly recommend watching this five-minute video overview of the benefits of this approach:

[www.youtube.com/watch](https://www.youtube.com/watch?v=ZORzsubQA_M)

So if it works, why aren't we seeing more of this? Perhaps it's because, as Simon Wren-Lewis points out on his blog, most of us are governed by incompetents:

An ideology is a collection of ideas that can form a political imperative that overrides evidence. Indeed most right wing think tanks are designed to turn the ideology of neoliberalism into policy based evidence. It was this ideology that led to austerity, the failed health reforms and the privatisation of the probation service. It also played a role in Brexit, with many of its protagonists dreaming of a UK free from regulations on workers rights and the environment. It is why most of the recent examples of incompetence come from the political right.

A pluralist democracy has checks and balances in part to guard against incompetence by a government or ministers. That is one reason why Trump and the Brexiters so often attack elements of a pluralist democracy. The ultimate check on incompetence should be democracy itself: incompetent politicians are thrown out. But when a large part of the media encourage rather than expose acts of incompetence, and the non-partisan media treat knowledge as just another opinion, that safeguard against persistent incompetence is put in danger.

Simon Wren-Lewis

We seem to have started with SimCity and ended with Trump and Brexit. Sorry about that, but without decent government, we can't hope to improve our communities and environment.


Also check out:

  • ‘Nation as a service’ is the ultimate goal for digitized governments (TNW) — "Right now in Estonia, when you have a baby, you automatically get child benefits. The user doesn’t have to do anything because the government already has all the data to make sure the citizen receives the benefits they’re entitled to."
  • The ethics of smart cities (RTE) — "With ethics-washing, a performative ethics is being practised designed to give the impression that an issue is being taken seriously and meaningful action is occurring, when the real ambition is to avoid formal regulation and legal mechanisms."
  • Cities as learning platforms (Harold Jarche) — "For the past century we have compartmentalized the life of the citizen. At work, the citizen is an ‘employee’. Outside the office he may be a ‘consumer’. Sometimes she is referred to as a ‘taxpayer’. All of these are constraining labels, ignoring the full spectrum of citizenship."

Life is like riding a bicycle. To keep your balance, you must keep moving

Thanks to Einstein for today's quote-as-title. Having once again witnessed the joy of electric scooters in Lisbon recently, I thought I'd look at this trend of 'micromobility'.

Let's begin with Horace Dediu, who explains the term:

Simply, Micromobility promises to have the same effect on mobility as microcomputing had on computing. Bringing transportation to many more and allowing them to travel further and faster.  I use the term micromobility precisely because of the connotation with computing and the expansion of consumption but also because it focuses on the vehicle rather than the service. The vehicle is small, the service is vast.

Horace Dediu

Micromobility covers mainly electric scooters and (e-)bikes, which can be found in many of the cities I've visited over the past year. Not in the UK, though, where riding electric scooters is technically illegal. Why? Because of a 183-year-old law, as Jeff Parsons explains in Metro:

You can’t ride scooters on the road, because the DVLA requires that electric vehicles be registered and taxed. And you can’t ride scooters on the pavement because of the 1835 Highways Act that prohibits anyone from riding a ‘carriage’ on the pavement.

Jeff Parsons

It's only a matter of time, though, before legislation is passed to remove this anachronism. And, to be honest, I can't imagine the police with their stretched resources pulling over anyone who's using one sensibly.

Electric scooters in particular are great and, if you haven't tried one, you should. Florent Crivello, one of Uber's product managers, explains why they're not just fun, but actually valuable:

  1. Cleaner and more energy efficient
  2. More space efficient
  3. Safer
  4. Making the city a better place
  5. Force for economic inclusion

You might be wondering about the third one of these, as I was. Crivello includes this chart:

Courtesy of Florent Crivello

Of course, as he points out, you can prevent cars running into scooters, bikes, and pedestrians by building separate lanes for them, with a high kerb in between. Countries that have done this, like the Netherlands, have seen a sharp decline in fatalities and injuries.

Despite the title, I'm focusing on electric scooters because of my enthusiasm for them and because of the huge growth since they became a thing about 18 months ago. Just look at this chart that Megan Rose Dickey includes in a recent TechCrunch article:

Chart courtesy of TechCrunch

One of the biggest downsides to electric scooters at the moment, and one which threatens the whole idea of 'micromobility', is over-supply. As this photograph in an article by Alan Taylor for The Atlantic shows, this can quickly get out of hand when VC-backed companies are involved:

Unused shared bikes in a vacant lot in Xiamen, Fujian province, China (photo courtesy of The Atlantic)

This can scare cities, which don't know how to deal with these kinds of potential consequences. That's why it's refreshing to see Charlotte in North Carolina lead the way by partnering with Passport, a transportation logistics company. As John R. Quain reports for Digital Trends:

“When e-scooters first came to town,” said Charlotte’s city manager Marcus Jones, “it left our shared bike program in the dust.”

[...]

By tracking scooter rentals and coordinating it with other information about public transit routes, congestion, and parking information, Passport can report on where scooters and bikes tend to be idle, where they get the most use, and how they might be deployed to serve more people. Furthermore, rather than railing against escooters, such information can help a city encourage proper use and behavior.

John R. Quain

I'm really quite excited about e-scooters, and can't wait until I can buy and use one legally in the UK!



That which we do not bring to consciousness appears in our lives as fate

Today's title is a quotation from Carl Jung, via a recent issue of New Philosopher magazine. I thought it was a useful frame for a discussion around a few things I've been reading recently, including an untranslatable Finnish word, music and teen internet culture, as well as whether life does indeed get better once you turn forty.

Let's start with that Finnish word, discussed in Quartzy by Olivia Goldhill:

At some point in life, all of us get that unexpected call on a Tuesday afternoon that distorts our world and makes everything else irrelevant: There’s been an accident. Or, you need surgery. Or, come home now, he’s dying. We get through that time, somehow, drawing on energy reserves we never knew we had and persevering, despite the exhaustion. There’s no word in English for the specific strength it takes to pull through, but there is a word in Finnish: sisu.

Olivia Goldhill

I'm guessing Goldhill is American, as we English have a term for that: Blitz spirit. It's even been invoked as a way of getting us through the vagaries of Brexit! 🙄

Despite my flippancy, there are, of course, words that are pretty untranslatable between languages. But one thing that unites us no matter what language we speak is music. Interestingly, Alexis Petridis in The Guardian notes that there are teenage musicians making music in their bedrooms that really resonates across language barriers:

For want of a better name, you might call it underground bedroom pop, an alternate musical universe that feels like a manifestation of a generation gap: big with teenagers – particularly girls – and invisible to anyone over the age of 20, because it exists largely in an online world that tweens and teens find easy to navigate, but anyone older finds baffling or risible. It doesn’t need Radio 1 or what is left of the music press to become popular because it exists in a self-contained community of YouTube videos and influencers; some bedroom pop artists found their music spread thanks to its use in the background of makeup tutorials or “aesthetic” videos, the latter a phenomenon whereby vloggers post atmospheric videos of, well, aesthetically pleasing things.

Alexis Petridis

Some people find this scary. I find it completely awesome, but may be over-compensating now that I've passed 35 years of age. Who wants to listen to and like the same music as everyone else?

Talking of getting older, there's a saying that "life begins at forty". Well, an article in The Economist would suggest that, on average, the happiness of males in Western Europe doesn't vary that much.

The Economist: graph showing self-reported happiness levels

I'd love to know what causes that decline in the former USSR states, and the uptick in the United States. The article isn't particularly forthcoming, which is a shame.

Perhaps as you get to middle-age there's a realisation that this is pretty much going to be it for the rest of your life. In some places, if you have the respect of your family, friends, and culture, and are reasonably well-off, that's no bad thing. In other cultures, that might be a sobering thought.

One of the great things about studying Philosophy since my teenage years is that I feel very prepared for getting old. Perhaps that's what's needed here? More philosophical thinking and training? I don't think it would go amiss.


Also check out:

  • What your laptop-holding position says about you (Quartz at Work) — "Over the past few weeks, we’ve been observing Quartzians in their natural habitat and have tried to make sense of their odd office rituals in porting their laptops from one meeting to the next."
  • Meritocracy doesn’t exist, and believing it does is bad for you (Fast Company) — "Simply holding meritocracy as a value seems to promote discriminatory behavior."
  • Your Body as a Map (Sapiens) — "Reading the human body canvas is much like reading a map. But since we are social beings in complex contemporary situations, the “legend” changes depending on when and where a person looks at the map."

Fascinating Friday Facts

Here are some links I thought I'd share which struck me as interesting:


Header image: Keep out! The 100m² countries – in pictures (The Guardian)

There is no exercise of the intellect which is not, in the final analysis, useless

A quotation from a short story in Jorge Luis Borges' Labyrinths provides the title for today's article. I want to dig into the work of danah boyd and the transcript of a talk she gave recently, entitled Agnotology and Epistemological Fragmentation. It helps us understand what's going on behind the seemingly benign fascias of social networks and news media outlets.

She explains the title of her talk:

Epistemology is the term that describes how we know what we know. Most people who think about knowledge think about the processes of obtaining it. Ignorance is often assumed to be not-yet-knowledgeable. But what if ignorance is strategically manufactured? What if the tools of knowledge production are perverted to enable ignorance? In 1995, Robert Proctor and Iain Boal coined the term “agnotology” to describe the strategic and purposeful production of ignorance. In an edited volume called Agnotology, Proctor and Londa Schiebinger collect essays detailing how agnotology is achieved. Whether we’re talking about the erasure of history or the undoing of scientific knowledge, agnotology is a tool of oppression by the powerful.

danah boyd

Having already questioned 'media literacy' as it's currently taught through educational institutions and libraries, boyd explains how the alt-right are streets ahead of educators when it comes to pushing their agenda:

One of the best ways to seed agnotology is to make sure that doubtful and conspiratorial content is easier to reach than scientific material. And then to make sure that what scientific information is available, is undermined. One tactic is to exploit “data voids.” These are areas within a search ecosystem where there’s no relevant data; those who want to manipulate media purposefully exploit these. Breaking news is one example of this.

[...]

Today’s drumbeat happens online. The goal is no longer just to go straight to the news media. It’s to first create a world of content and then to push the term through to the news media at the right time so that people search for that term and receive specific content. Terms like caravan, incel, crisis actor. By exploiting the data void, or the lack of viable information, media manipulators can help fragment knowledge and seed doubt.

danah boyd

Harold Jarche uses McLuhan's tetrads to understand this visually, commenting: "This is an information war. Understanding this is the first step in fighting for democracy."

Harold Jarche on Agnotology

We can teach children sitting in classrooms all day about checking URLs and the provenance of the source, but how relevant is that when they're using YouTube as their primary search engine? Returning to danah boyd:

YouTube has great scientific videos about the value of vaccination, but countless anti-vaxxers have systematically trained YouTube to make sure that people who watch the Center for Disease Control and Prevention’s videos also watch videos asking questions about vaccinations or videos of parents who are talking emotionally about what they believe to be the result of vaccination. They comment on both of these videos, they watch them together, they link them together. This is the structural manipulation of media.

danah boyd

It's not just the new and the novel. Even things that are relatively obvious to those of us who have grown up as adults online are confusing to older generations. As this article by BuzzFeed News reporter Craig Silverman points out, conspiracy-believing retirees have disproportionate influence on our democratic processes:

Older people are also more likely to vote and to be politically active in other ways, such as making political contributions. They are wealthier and therefore wield tremendous economic power and all of the influence that comes with it. With more and more older people going online, and future 65-plus generations already there, the online behavior of older people, as well as their rising power, is incredibly important — yet often ignored.

Craig Silverman

So when David Buckingham asks 'Who needs digital literacy?' I think the answer is everyone. Having been a fan of his earlier work, it saddens me to realise that he hasn't kept up with the networked era:

These days, I find the notion of digital literacy much less useful – and to some extent, positively misleading. The fundamental problem is that the idea is defined by technology itself. It makes little sense to distinguish between texts (or media) on the grounds of whether they are analogue or digital: almost all media (including print media) involve the use of digital technology at some stage or other. Fake news and disinformation operate as much in old, analogue media (like newspapers) as they do online. Meanwhile, news organisations based in old media make extensive and increasing use of online platforms. The boundaries between digital and analogue may still be significant in some situations, but they are becoming ever more blurred.

David Buckingham

Actually, as Howard Rheingold pointed out a number of years ago in Net Smart, and as boyd has done in her own work, networks change everything. You can't seriously compare pre-networked and post-networked cultures in any way other than in contrast.

Buckingham suggests that, seeing as the (UK) National Literacy Trust are on the case, we "don't need to reinvent the wheel". The trouble is that the wheel has already been reinvented, and lots of people either didn't notice, or are acting as though it hasn't been.

There's a related article by Anna McKie in the THE entitled Teaching intelligence: digital literacy in the ‘alternative facts’ era which, unfortunately, is now behind a paywall. It reports on a special issue of the journal Teaching in Higher Education where the editors have brought together papers on the contribution made by Higher Education to expertise and knowledge in the age of 'alternative facts':

[S]ocial media has changed the dynamic of information in our society, [editor] Professor Harrison added. “We've moved away from the idea of experts who assess information to one where the validity of a statement is based on the likes, retweets and shares it gets, rather than whether the information is valid.”

The first task of universities is to go back to basics and “help students to understand the difference between knowledge and information, and how knowledge is created, which is separate to how information is created”, Professor Harrison said. “Within [each] discipline, what are the skills needed to assess that?”

Many assume that schools or colleges are teaching this, but that is not the case, he added. “Academics should also be wary of the extent to which they themselves understand the new paradigms of knowledge creation,” Professor Harrison warned.

Anna McKie

One of the reasons I decided not to go into academia is that, certain notable exceptions aside, the focus is on explaining rather than changing. Or, to finish with another quotation, this time from Karl Marx, "Philosophers have hitherto only interpreted the world in various ways; the point is to change it."



Sometimes even to live is an act of courage

Thank you to Seneca for the quotation for today's title, which sprang to mind after reading Rosie Spinks' claim in Quartz that we've reached 'peak influencer'.

Where once the social network was basically lunch and sunsets, it’s now a parade of strategically-crafted life updates, career achievements, and public vows to spend less time online (usually made by people who earn money from social media)—all framed with the carefully selected language of a press release. Everyone is striving, so very hard.

Thank goodness for that. The selfie-obsessed influencer brigade is an insidious effect of the neoliberalism that permeates western culture:

For the internet influencer, everything from their morning sun salutation to their coffee enema (really) is a potential money-making opportunity. Forget paying your dues, or working your way up—in fact, forget jobs. Work is life, and getting paid to live your best life is the ultimate aspiration.

[...]

“Selling out” is not just perfectly OK in the influencer economy—it’s the raison d’etre. Influencers generally do not have a craft or discipline to stay loyal to in the first place, and by definition their income comes from selling a version of themselves.

As Yascha Mounk, writing in The Atlantic, explains, the problem isn't necessarily with social networks. It's that you care about them. Social networks flatten everything into a never-ending stream. That stream makes it very difficult to differentiate between gossip and (for example) extremely important things that are an existential threat to democratic institutions:

“When you’re on Twitter, every controversy feels like it’s at the same level of importance,” one influential Democratic strategist told me. Over time, he found it more and more difficult to tune Twitter out: “People whose perception of reality is shaped by Twitter live in a different world and a different country than those off Twitter.”

It's easier for me to say these days that our obsession with Twitter and Instagram is unhealthy. While I've never used Instagram (because it's owned by Facebook), a decade ago I was spending hours each week on Twitter. My relationship with the service has changed as I've grown up and as the platform itself has changed, especially after it became a publicly-traded company in 2013.

Twitter, in particular, now feels like a neverending soap opera similar to EastEnders. There's always some outrage or drama running. Perhaps it's better, as Catherine Price suggests in The New York Times, just to put down our smartphones?

Until now, most discussions of phones’ biochemical effects have focused on dopamine, a brain chemical that helps us form habits — and addictions. Like slot machines, smartphones and apps are explicitly designed to trigger dopamine’s release, with the goal of making our devices difficult to put down.

This manipulation of our dopamine systems is why many experts believe that we are developing behavioral addictions to our phones. But our phones’ effects on cortisol are potentially even more alarming.

Cortisol is our primary fight-or-flight hormone. Its release triggers physiological changes, such as spikes in blood pressure, heart rate and blood sugar, that help us react to and survive acute physical threats.

Depending on how we use them, social networks can stoke the worst feelings in us: emotions such as jealousy, anger, and worry. This is not conducive to healthy outcomes, especially for children, where stress has a direct correlation with the take-up of addictive substances, and with heart disease in later life.

I wonder how future generations will look back at this time period?



Anything invented after you're thirty-five is against the natural order of things

I'm fond of the above quotation by Douglas Adams that I've used for the title of this article. It serves as a reminder to myself that I've now reached an age when I'll look at a technology and wonder: why?

Despite this, I'm quite excited about the potential of two technologies that will revolutionise our digital world both in our homes and offices and when we're out-and-about. Those technologies? Wi-Fi 6, as it's known colloquially, and 5G networks.

Let's take Wi-Fi 6 first which, as Chuong Nguyen explains in an article for Digital Trends, isn't just about faster speeds:

A significant advantage for Wi-Fi 6 devices is better battery life. Though the standard promotes Internet of Things (IoT) devices being able to last for weeks, instead of days, on a single charge as a major benefit, the technology could even prove to be beneficial for computers, especially since Intel’s latest 9th-generation processors for laptops come with Wi-Fi 6 support.

Likewise, Alexis Madrigal, writing in The Atlantic, explains that mobile 5G networks bring benefits other than streaming YouTube videos at ever-higher resolutions, but also pose quite a technological hurdle:

The fantastic 5G speeds require higher-frequency, shorter-wavelength signals. And the shorter the wavelength, the more likely it is to be blocked by obstacles in the world.

[...]

Ideally, [mobile-associated companies] would like a broader set of customers than smartphone users. So the companies behind 5G are also flaunting many other applications for these networks, from emergency services to autonomous vehicles to every kind of “internet of things” gadget.

If you've been following the kerfuffle around the UK using Huawei's technology for its 5G infrastructure, you'll already know about the politics and security issues at stake here.

Sue Halpern, writing in The New Yorker, outlines the claimed benefits:

Two words explain the difference between our current wireless networks and 5G: speed and latency. 5G—if you believe the hype—is expected to be up to a hundred times faster. (A two-hour movie could be downloaded in less than four seconds.) That speed will reduce, and possibly eliminate, the delay—the latency—between instructing a computer to perform a command and its execution. This, again, if you believe the hype, will lead to a whole new Internet of Things, where everything from toasters to dog collars to dialysis pumps to running shoes will be connected. Remote robotic surgery will be routine, the military will develop hypersonic weapons, and autonomous vehicles will cruise safely along smart highways. The claims are extravagant, and the stakes are high. One estimate projects that 5G will pump twelve trillion dollars into the global economy by 2035, and add twenty-two million new jobs in the United States alone. This 5G world, we are told, will usher in a fourth industrial revolution.
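
As a rough sanity check on that download claim, the arithmetic is straightforward. This is purely back-of-the-envelope, and both numbers below are my assumptions rather than figures from Halpern's article: a two-hour HD film of roughly 4 GB and a peak 5G connection of roughly 10 Gbps.

```python
# Back-of-the-envelope check of the "two-hour movie in under four seconds" claim.
# Both inputs are assumptions: a ~4 GB file and a ~10 Gbps peak 5G link.
file_size_gigabytes = 4
link_speed_gigabits_per_second = 10

file_size_gigabits = file_size_gigabytes * 8  # 1 byte = 8 bits
download_time_seconds = file_size_gigabits / link_speed_gigabits_per_second

print(f"Approximate download time: {download_time_seconds:.1f} seconds")  # ~3.2 seconds
```

On those assumptions the claim checks out; on a more typical 4G connection of around 50 Mbps, the same file would take roughly ten minutes.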

But greater speed and lower latency aren't all upside for all members of society, as I learned in this BBC Beyond Today podcast episode about Korean spy cam porn. Halpern explains:

In China, which has installed three hundred and fifty thousand 5G relays—about ten times more than the United States—enhanced geolocation, coupled with an expansive network of surveillance cameras, each equipped with facial-recognition technology, has enabled authorities to track and subordinate the country’s eleven million Uighur Muslims. According to the Times, “the practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.”

Automated racism, now there's a thing. It turns out that technologies amplify our existing prejudices. Perhaps we should be a bit more careful and ask more questions before we march down the road of technological improvements? Especially given 5G could affect our ability to predict major storms. I'm reading Low-tech Magazine: The Printed Website at the moment, and it's pretty eye-opening about what we could be doing instead.



The smallest deed is better than the greatest intention

Thanks to John Burroughs for today's title. For me, it's an oblique reference to some of the situations I find myself in, both in my professional and personal life. After all, words are cheap and actions are difficult.

I'm going to take the unusual step of quoting someone who's quoting me. In this case, it's Stephen Downes picking up on a comment I made in the cc-openedu Google Group. I'd link directly to my comments, but for some reason a group about open education is... closed?

I'd like to echo a point David Kernohan made when I worked with him on the Jisc OER programme. He said: "OER is a supply-side term". Let's face it, there are very few educators specifically going out and looking for "Openly Licensed Resources". What they actually want are resources that they can access for free (or at a low cost) and that they can legally use. We've invented OER as a term to describe that, but it may actually be unhelpfully ambiguous.

Shortly after posting that, I read this post from Sarah Lambert on the GO-GN (Global OER Graduate Network) blog. She says:

[W]hile we’re being all inclusive and expanding our “open” to encompass any collaborative digital practice, then our “open” seems to be getting less and less distinctive. To the point where it’s getting quite easily absorbed by the mainstream higher education digital learning (eLearning, Technology Enhanced Learning, ODL, call it what you will). Is it a win for higher education to absorb and assimilate “open” (and our gift labour) as the latest innovation feeding the hungry marketised university that Kate Bowles spoke so eloquently about? Is it a problem if not only the practice, but the research field of open education becomes inseparable with mainstream higher education digital learning research?

My gloss on this is that 'open education' may finally have moved into the area of productive ambiguity. I talked about this back in 2016 in a post on a blog I post to only very infrequently, so I might as well quote myself again:

Ideally, I’d like to see ‘open education’ move into the realm of what I term productive ambiguity. That is to say, we can do some work with the idea and start growing the movement beyond small pockets here and there. I’m greatly inspired by Douglas Rushkoff’s new Team Human podcast at the moment, feeling that it’s justified the stance that I and others have taken for using technology to make us more human (e.g. setting up a co-operative) and against the reverse (e.g. blockchain).

That's going to make a lot of people uncomfortable, and hopefully uncomfortable enough to start exploring new, even better areas. 'Open Education' now belongs, for better or for worse, to the majority. Whether that's 'Early majority' or 'Late majority' on the innovation adoption lifecycle curve probably depends on where in the world you live.

Diffusion of innovation curve
CC BY Pnautilus (Wikipedia)

Things change and things move on. The reason I used that xkcd cartoon about IRC at the top of this post is because there has been much (OK, some) talk about Mozilla ending its use of IRC.

While we still use it heavily, IRC is an ongoing source of abuse and harassment for many of our colleagues and getting connected to this now-obscure forum is an unnecessary technical barrier for anyone finding their way to Mozilla via the web. Available interfaces really haven’t kept up with modern expectations, spambots and harassment are endemic to the platform, and in light of that it’s no coincidence that people trying to get in touch with us from inside schools, colleges or corporate networks are finding that often as not IRC traffic isn’t allowed past institutional firewalls at all.

Cue much hand-wringing from the die-hards in the Mozilla community. Unfortunately, Slack, which originally had a bridge/gateway for IRC, has pulled up the drawbridge on that front. Mozilla could go with something like Mattermost, but given recent history I bet they go with Discord (or similar).

As Seth Godin points out in his most recent podcast episode, everyone wants to be described as 'supple'; nobody wants to be described as 'brittle'. Yet the actions we take suggest otherwise. We expect that, just because the change we see in the world isn't convenient, we can somehow slow it down. Nope, you just have to roll with it, whether that's changing technologies, or different approaches to organising ideas and people.


Also check out:

  • Do Experts Listen to Other Experts? (Marginal Revolution) — "very little is known about how experts influence each others’ opinions, and how that influence affects final evaluations."
  • Why Symbols Aren’t Forever (Sapiens) — "The shifting status of cultural symbols reveals a lot about who we are and what we value."
  • Balanced Anarchy or Open Society? (Kottke.org) — "Personal computing and the internet changed (and continues to change) the balance of power in the world so much and with such speed that we still can’t comprehend it."

A little Friday randomness

Not everything I read and bookmark to come back to is serious. So here, for the sake of a little levity, are some things I've discovered recently that either made me smile, or think "that's cool":


Header image: xkcd

Educational institutions are at a crossroads of relevance

One of the things that attracted me to the world of Open Badges and digital credentialing back in 2011 was the question of relevance. As a Philosophy graduate, I'm absolutely down with the idea of a broad, balanced education, and learning as a means of human flourishing.

However, in a world where we measure schools, colleges, and universities through an economic lens, it's inevitable that learners do so too. As I've said in presentations and to clients many times, I want my children to choose to go to university because it's the right choice for them, not because they have to.

In an article in Forbes, Brandon Busteed notes that we're on the verge of a huge change in Higher Education:

This shift will go down as the biggest disruption in higher education whereby colleges and universities will be disintermediated by employers and job seekers going direct. Higher education won’t be eliminated from the model; degrees and other credentials will remain valuable and desired, but for a growing number of young people they’ll be part of getting a job as opposed to college as its own discrete experience. This is already happening in the case of working adults and employers that offer college education as a benefit. But it will soon be true among traditional age students. Based on a Kaplan University Partners-QuestResearch study I led and which was released today, I predict as many as one-third of all traditional students in the next decade will "Go Pro Early" in work directly out of high school with the chance to earn a college degree as part of the package.

This is true to some degree in the UK as well, through Higher Apprenticeships. University study becomes a part-time deal with the 'job' paying for fees. It's easy to see how this could quickly become a two-tier system for rich and poor.

A "job-first, college included model" could well become one of the biggest drivers of both increasing college completion rates in the U.S. and reducing the cost of college. In the examples of employers offering college degrees as benefits, a portion of the college expense will shift to the employer, who sees it as a valuable talent development and retention strategy with measurable return on investment benefits. This is further enhanced through bulk-rate tuition discounts offered by the higher educational institutions partnering with these employers. Students would still be eligible for federal financial aid, and they’d be making an income while going to college. To one degree or another, this model has the potential to make college more affordable for more people, while lowering or eliminating student loan debt and increasing college enrollments. It would certainly help bridge the career readiness gap that many of today’s college graduates encounter.

The 'career readiness' that Busteed discusses here is an interesting concept, and one that I think has been invented by employers who don't want to foot the bill for training. Certainly, my parents' generation weren't supposed to be immediately ready for employment straight after their education — and, of course, they weren't saddled with student debt, either.

Related, in my mind, is the way that we treat young people as data to be entered on a spreadsheet. This is managerialism at its worst. Back when I was a teacher and a form tutor, I remember how sorry I felt for the young people in my charge, who were effectively moved around a machine for 'processing' them.

Now, in an article for The Guardian, Jeremy Hannay tells it like it is for those who don't have an insight into the Kafkaesque world of schools:

Let me clear up this edu-mess for you. It’s not Sats. It’s not workload. The elephant in the room is high-stakes accountability. And I’m calling bullshit. Our education system actively promotes holding schools, leaders and teachers at gunpoint for a very narrow set of test outcomes. This has long been proven to be one of the worst ways to bring about sustainable change. It is time to change this educational paradigm before we have no one left in the classroom except the children.

Just as our dog-eat-dog society in the UK could be much more collaborative, so our education system badly needs remodelling. We've deprofessionalised teaching, and introduced a managerial culture. Things could be different, as they are elsewhere in the world.

In such systems – and they do exist in some countries, such as Finland and Canada, and even in some brave schools in this country – development isn’t centred on inspection, but rather professional collaboration. These schools don’t perform regular observations and monitoring, or fire out over-prescriptive performance policies. Instead, they discuss and design pedagogy, engage in action research, and regularly perform activities such as learning and lesson study. Everyone understands that growing great educators involves moments of brilliance and moments of mayhem.

That's the key: "moments of brilliance and moments of mayhem". Ironically, bureaucratic, hierarchical systems cannot cope with amazing teachers, because they're to some extent unpredictable. You can't put them in a box (on a spreadsheet).

Actually, perhaps it's not the hierarchy per se, but the power dynamics, as Richard D. Bartlett points out in this post.

Yes, when a hierarchical shape is applied to a human group, it tends to encourage coercive power dynamics. Usually the people at the top are given more importance than the rest. But the problem is the power, not the shape. 

What we're doing is retro-fitting the worst forms of corporate power dynamics onto education and expecting everything to be fine. Newsflash: learning is different to work, and always will be.

Interestingly, Bartlett defines three different forms of power dynamics, which I think is enlightening:

Follett coined the terms “power-over” and “power-with” in 1924. Starhawk adds a third category “power-from-within”. These labels provide three useful lenses for analysing the power dynamics of an organisation. With apologies to the original authors, here’s my definitions:

  • power-from-within or empowerment — the creative force you feel when you’re making art, or speaking up for something you believe in
  • power-with or social power — influence, status, rank, or reputation that determines how much you are listened to in a group
  • power-over or coercion — power used by one person to control another

The problem with educational institutions, I feel, is that we've largely done away with empowerment and social power, and put all of our eggs in the basket of coercion.


Also check out:

  • Working collaboratively and learning cooperatively (Harold Jarche) — "Two types of behaviours are necessary in the network era workplace — collaboration and cooperation. Cooperation is not the same as collaboration, though they are complementary."
  • Learning Alignment Model (Tom Barrett) — "It is not a step by step process to design learning, but more of a high-level thinking model to engage with that uncovers some interesting potential tensions in our classroom work."
  • A Definition of Academic Innovation (Inside Higher Ed) — "What if academic innovation was built upon the research and theory of our field, incorporating social constructivist, constructionist and activity theory?"

Remote work is a different beast

You might not work remotely right now, but the chances are that at some point in your career, and in some capacity, you will do. Remote work has its own challenges and benefits, which are alluded to in three articles in Fast Company that I want to highlight. The first is an article summarising a survey Google performed amongst 5,600 of its remote workers.

On the outset of the study, the team hypothesized that distributed teams might not be as productive as their centrally located counterparts. “We were a little nervous about that,” says [Veronica] Gilrane [manager of Google’s People Innovation Lab]. She was surprised to find that distributed teams performed just as well. Unfortunately, she also found that there is a lot more frustration involved in working remotely. Workers in other offices can sometimes feel burdened to sync up their schedules with the main office. They can also feel disconnected from the team.

That doesn't surprise me at all. Even though I probably spend less time AFK (Away From Keyboard) as a remote worker than I would in an office, there's not that performative element, where you have to look like you're working. Sometimes work doesn't look like work; it looks like going for a run to think about a problem, or bouncing an idea off a neighbour as you walk back to your office with a cup of tea.

The main thing, as this article points out, is that it's really important to have an approach that focuses on results rather than time spent doing the work. You do have to have some process, though:

[I]t’s imperative that you stress disciplinary excellence; workers at home don’t have a manager peering over their shoulder, so they have to act as their own boss and maintain a strict schedule to get things done. Don’t try to dictate every aspect of their lives–remote work is effective because it offers workers flexibility, after all. Nonetheless, be sure that you’re requesting regular status updates, and that you have a system in place to measure productivity.

Fully-remote working is different to 'working from home' a day or two per week. It does take discipline, if only to stop raiding the biscuit tin. But it's also a different mindset, including intentionally sharing your work much more than you'd do in a co-located setting.

Fundamentally, as Greg Galant, CEO of a fully-remote organisation, comments, it's about trust:

“My friends always say to me, ‘How do you know if anyone is really working?’ and I always ask them, ‘How do you know if anybody is really working if they are at the office?'” says Galant. “Because the reality is, you can see somebody at their desk and they can stay late, but that doesn’t mean they’re really working.”

[...]

If managers are adhering to traditional management practices, they’re going to feel anxiety with remote teams. They’re going to want to check in constantly to make sure people are working. But checking in constantly prevents work from getting done.

Remote work is strange and difficult to describe to anyone who hasn't experienced it. You can, for example, in the same day feel isolated and lonely, while simultaneously getting annoyed with all of the 'pings' and internal communication coming at you.

At the end of the day, companies need to set expectations, and remote workers need to set boundaries. It's the only way to avoid burnout, and to ensure that what can be a wonderful experience doesn't turn into a nightmare.


Also check out:

  • 5 Great Resources for Remote Workers (Product Hunt) — "If you’re a remote worker or spend part of your day working from outside of the office, the following tools will help you find jobs, discover the best cities for remote workers, and learn from people who have built successful freelance careers or location-independent companies."
  • Stop Managing Your Remote Workers As If They Work Onsite (ThinkGrowth) — "Managers need to back away from their conventional views of what “working hard” looks like and instead set specific targets, explain what success looks like, and trust the team to get it done where, when, and however works best for them."
  • 11 Tools That Allow us to Work from Anywhere on Earth as a Distributed Company (Ghost) — "In an office, the collaboration tools you use are akin to a simple device like a screwdriver. They assist with difficult tasks and lessen the amount of effort required to complete them. In a distributed team, the tools you use are more like life-support. Everything to do with distributed team tools is about clawing back some of that contextual awareness which you've lost by not being in the same space."

Culture eats strategy for breakfast

The title of this post is a quotation from management consultant, educator, and author Peter Drucker. Having worked in a variety of organisations, I can attest to its truth.

That's why, when someone shared this post by Grace Krause, which is basically a poem about work culture, I paid attention. Entitled Appropriate Channels, here's a flavour:

We would like to remind you all
That we care deeply
About our staff and our students
And in no way do we wish to silence criticism
But please make use of the
Appropriate Channels

The Appropriate Channel is tears cried at home
And not in the workplace
Please refrain from crying at your desk
As it might lower the productivity of your colleagues

Organisational culture is difficult to get right, not least because of the patriarchy. I selected this part of the poem because I've come to realise just how problematic it is to let people know (through words, actions, or policies) that it's not OK to cry at work. If we're to bring our full selves to work, then emotion is part of it.

Any organisation has a culture, and that culture can be changed, for better or for worse. Restaurants are notoriously toxic places to work, which is why this article in Quartz is interesting:

Since four-time James Beard award winner Gabrielle Hamilton opened Prune’s doors in 1999, she, along with her co-chef Ashley Merriman, have established a set of principles that help guide employees at the restaurant. According to Hamilton and Merriman, the code has a kind of transformative power. It’s helped the kitchen avoid becoming a hierarchical, top-down fiefdom—a concentration of power that innumerable chefs have abused in the past. It can turn obnoxious, entitled patrons into polite diners who are delighted to have a seat at the table. And it’s created the kind of environment where Hamilton and Merriman, along with their staff, want to spend much of their day.

The five core values of their restaurant, which I think you could apply to any organisation, are:

  1. Be thorough and excellent in everything that you do
  2. Be smart and funny
  3. Be disarmingly honest
  4. Work without division of any kind
  5. Practise servant leadership

We live in the 'age of burnout', according to another article in Quartz, but there's no reason why we can't love the work we do. It's all about finding the meaning behind the stuff we get done on a daily basis:

Our freedom to make meaning is both a blessing and a curse. To get somewhat existential about it, “work,” and the problems associated with it as an amorphous whole, do not exist: For the individual, only his or her work exists, and the individual is in control of that, with the very real power radically to change the situation. You could start the process of changing your job right now, today. Yes, arguments about the practicality of that choice well up fast and high. Yes, you would have to find another way to pay the bills. That doesn’t negate the fact that, fundamentally, you are free.

It's important to remember that we choose to do the work we do, that we don't have to work for a single employer, and that we can tell a different story about ourselves at any point we choose. It might not be easy, but it's certainly doable.


Also check out:

Things that people think are wrong (but aren't)

I've collected a bunch of diverse articles that seem to be around the topic of things that people think are wrong, but aren't really. Hence the title.

I'll start with something that everyone over a certain age seems to have a problem with, except for me: sleep. BBC Health lists six sleep myths:

  1. You can cope on less than five hours' sleep
  2. Alcohol before bed boosts your sleep
  3. Watching TV in bed helps you relax
  4. If you're struggling to sleep, stay in bed
  5. Hitting the snooze button
  6. Snoring is always harmless

My smartband regularly tells me that I sleep better than 93% of people, and I think that's because of how much I prioritise sleep. I've also got a system, which I've written about before, for the times when I do have a rough night.

I like routine, but I also like mixing things up, which is why I appreciate chunks of time at home interspersed with travel. Oliver Burkeman, writing in The Guardian, suggests, however, that routines aren't the be-all and end-all:

Some people are so disorganised that a strict routine is a lifesaver. But speaking as a recovering rigid-schedules addict, trust me: if you click excitedly on each new article promising the perfect morning routine, you’re almost certainly not one of those people. You’re one of the other kind – people who’d benefit from struggling less to control their day, responding a bit more intuitively to the needs of the moment. This is the self-help principle you might call the law of unwelcome advice: if you love the idea of implementing a new technique, it’s likely to be the opposite of what you need.

Expecting something new to solve an underlying problem is a symptom of our culture's focus on the new and novel. While there's so much stuff out there we haven't experienced, should we spend our lives seeking it out to the detriment of the tried and tested, the things that we really enjoy?

On the recommendation of my wife, I recently listened to a great episode of the Off Menu podcast featuring Victoria Coren Mitchell. It's not only extremely entertaining, but she mentions how, for her, a nice Ploughman's lunch is better than some fancy meal.

This brings me to an article in The Atlantic by Joe Pinsker, who writes that kids who watch and re-watch the same film might be on to something:

In general, psychological and behavioral-economics research has found that when people make decisions about what they think they’ll enjoy, they often assign priority to unfamiliar experiences—such as a new book or movie, or traveling somewhere they’ve never been before. They are not wrong to do so: People generally enjoy things less the more accustomed to them they become. As O’Brien [professor at the University of Chicago’s Booth School of Business] writes, “People may choose novelty not because they expect exceptionally positive reactions to the new option, but because they expect exceptionally dull reactions to the old option.” And sometimes, that expected dullness might be exaggerated.

So there's something to be said for re-reading novels you read when you were younger instead of something shortlisted for a prize, or discounted in the local bookshop. I found re-reading Dostoevsky's Crime & Punishment recently exhilarating, as I probably hadn't read it since I became a parent. Different periods of your life put different spins on things that you think you already know.


Also check out:

  • The ‘Dark Ages’ Weren’t As Dark As We Thought (Literary Hub) — "At the back of our minds when thinking about the centuries when the Roman Empire mutated into medieval Europe we are unconsciously taking on the spurious guise of specific communities."
  • An Easy Mode Has Never Ruined A Game (Kotaku) — "There are myriad ways video games can turn the dials on various systems to change our assessment of how “hard” they seem, and many developers have done as much without compromising the quality or integrity of their games."
  • Millennials destroyed the rules of written English – and created something better (Mashable) — "For millennials who conduct so many of their conversations online, this creativity with written English allows us to express things that would previously only have been conveyed through volume, cadence, tone, or body language."


Cutting the Gordian knot of 'screen time'

Let's start this with an admission: my wife and I limit our children's time on their tablets, and they're only allowed on our games console at weekends. Nevertheless, I still maintain that wielding 'screen time' as a blunt instrument does more harm than good.

There's a lot of hand-wringing on this subject, especially around social skills and interaction. Take a recent article in The Guardian, for example, where Peter Fonagy, who is a professor of Contemporary Psychoanalysis and Developmental Science at UCL, comments:

“My impression is that young people have less face-to-face contact with older people than they once used to. The socialising agent for a young person is another young person, and that’s not what the brain is designed for.

“It is designed for a young person to be socialised and supported in their development by an older person. Families have fewer meals together as people spend more time with friends on the internet. The digital is not so much the problem – it’s what the digital pushes out.”

I don't disagree that we all need a balance here, but where's the evidence? On balance, I spend more time with my children than my father spent with my sister and me, yet my wife, two children and I probably have fewer mealtimes sat down at a table together than I did with my parents and sister. Different isn't always worse, and in our case it's often due to their sporting commitments.

So I'd agree with Jordan Shapiro, who writes that the World Health Organisation's guidelines on screen time for kids aren't particularly useful. He quotes several sources that dismiss the WHO's recommendations:

Andrew Przybylski, the Director of Research at the Oxford Internet Institute, University of Oxford, said: “The authors are overly optimistic when they conclude screen time and physical activity can be swapped on a 1:1 basis.” He added that, “the advice overly focuses on quantity of screen time and fails to consider the content and context of use. Both the American Academy of Pediatricians and the Royal College of Paediatrics and Child Health now emphasize that not all screen time is created equal.”

That being said, parents still need some guidance. As I've said before, my generation of parents are the first ones having to deal with all of this, so where do we turn for advice?

An article by Roja Heydarpour suggests three strategies, including one from Mimi Ito who I hold in the utmost respect for her work around Connected Learning:

“Just because [kids] may meet an unsavory person in the park, we don’t ban them from outdoor spaces,” said Mimi Ito, director of the Connected Learning Lab at University of California-Irvine, at the 10th annual Women in the World Summit on Thursday. After years of research, the mother of two college-age children said she thinks parents need to understand how important digital spaces are to children and adjust accordingly.

Taking away access to these spaces, she said, is taking away what kids perceive as a human right. Gaming is like the proverbial water cooler for many boys, she said. And for many girls, social media can bring access to friends and stave off social isolation. “We all have to learn how to regulate our media consumption,” Ito said. “The longer you delay kids being able to use those muscles, the longer you delay kids learning how to regulate.”

I feel a bit bad reading that, as we've recently banned my son from the game Fortnite, which we felt was taking over his life a little too much. It's not forever, though, and he does have to find that balance between it having a place in his life and literally talking about it all of the freaking time.

One authoritative voice in the area is my friend and sometimes collaborator Ian O'Byrne, who, together with Kristen Hawley Turner, has created screentime.me which features a blog, podcast, and up-to-date research on the subject. Well worth checking out!


Also check out:

  • Teens 'not damaged by screen time', study finds (BBC Technology) — "The analysis is robust and suggests an overall population effect too small to warrant consideration as a public health problem. They also question the widely held belief that screens before bedtime are especially bad for mental health."
  • Human Contact Is Now a Luxury Good (The New York Times) — "The rich have grown afraid of screens. They want their children to play with blocks, and tech-free private schools are booming. Humans are more expensive, and rich people are willing and able to pay for them. Conspicuous human interaction — living without a phone for a day, quitting social networks and not answering email — has become a status symbol."
  • NHS sleep programme ‘life changing’ for 800 Sheffield children each year (The Guardian) — "Families struggling with children’s seriously disrupted sleep have seen major improvements by deploying consistent bedtimes, banning sugary drinks in the evening and removing toys and electronics from bedrooms."

The benefits of Artificial Intelligence

As an historian, I’m surprisingly bad at recalling facts and dates. However, I’d argue that the study of history is actually about the relationship between those facts and dates — which, let’s face it, so long as you’re in the right ballpark, you can always look up.

Understanding the relationship between things, I’d argue, is a demonstration of higher-order competence. This is described well by the SOLO Taxonomy, which I featured in my ebook on digital literacies:

SOLO Taxonomy

This is important, as it helps to explain two related concepts around which people often get confused: ‘artificial intelligence’ and ‘machine learning’. If you look at the diagram above, you can see that the ‘Extended Abstract’ of the SOLO taxonomy also includes the ‘Relational’ part. Similarly, the field of ‘artificial intelligence’ includes ‘machine learning’.
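
To make the relationship more concrete, here's a minimal, made-up sketch in Python (my own illustration, not taken from the article I link to below). In the 'classic' rule-based approach a human writes the decision rule; with machine learning, the rule is inferred from example data:

```python
# Illustrative only: contrasting a hand-written rule with a rule learned from
# data. The task and the numbers are invented for the sake of the example.

# 'Classic', rule-based approach: a human encodes the decision rule directly.
def rule_based_spam_filter(message: str) -> bool:
    return "free money" in message.lower() or message.count("!") > 3

# Machine learning: the rule (here, a single threshold) is learned from examples.
def learn_threshold(examples):
    """Given (exclamation_count, is_spam) pairs, pick the threshold that best
    separates spam from non-spam in the training data."""
    best_threshold, best_accuracy = 0, 0.0
    for threshold in range(0, 10):
        correct = sum((count > threshold) == is_spam for count, is_spam in examples)
        accuracy = correct / len(examples)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = threshold, accuracy
    return best_threshold

training_data = [(0, False), (1, False), (5, True), (7, True), (2, False), (6, True)]
threshold = learn_threshold(training_data)

print(rule_based_spam_filter("FREE MONEY!!!!"))  # True: rule written by a person
print(5 > threshold)                             # True: rule inferred from the data
```

Both count as 'artificial intelligence' in the broad sense; only the second is 'machine learning', because the behaviour comes from data rather than from explicit programming.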

There are some examples of each in this WIRED article, but for the purposes of this post let’s just leave it there. Some of what I want to talk about here involves machine learning and some artificial intelligence. It’s all interesting and affects the future of tech in education and society.

If you're a gamer, you'll already be familiar with some of the benefits of AI. 'CPU players' are no longer dumb; they now play a lot like human players. That means that, with no unfair advantages programmed in by the designers of the game, the AI can work out strategies to defeat opponents. The recent example of OpenAI Five beating the best players at a game called Dota 2, and then internet teams finding vulnerabilities in the system, is a fascinating battle of human versus machine:

“Beating OpenAI Five is a testament to human tenacity and skill. The human teams have been working together to get those wins. The way people win is to take advantage of every single weakness in Five—some coming from the few parts of Five that are scripted rather than learned—gradually build up resources, and most importantly, never engage Five in a fair fight.” OpenAI co-founder Greg Brockman told Motherboard.

Deepfakes are created via "a technique for human image synthesis based on artificial intelligence... that can depict a person or persons saying things or performing actions that never occurred in reality". There's plenty of porn, of course, but also politically-motivated videos claiming that people said things they never did.

There are benefits here, though, too. Recent AI research shows how, soon, it will be possible to replace any game character with one created from your own videos. In other words, you will be able to be in the game!

It only took a few short videos of each activity -- fencing, dancing and tennis -- to train the system. It was able to filter out other people and compensate for different camera angles. The research resembles Adobe's "content-aware fill" that also uses AI to remove elements from video, like tourists or garbage cans. Other companies, like NVIDIA, have also built AI that can transform real-life video into virtual landscapes suitable for games.

It's easy to be scared of all of this, fearful that it's going to ravage our democratic institutions and cause a meltdown of civilisation. But, actually, the best way to ensure that it's not used for those purposes is to try and understand it. To play with it. To experiment.

Algorithms have already been appointed to the boards of some companies and, if you think about it, there are plenty of job roles where automated testing is entirely normal. I'm looking forward to a world where AI makes our lives a whole lot easier and friction-free.


Also check out:

  • AI generates non-stop stream of death metal (Engadget) — "The result isn't entirely natural, if simply because it's not limited by the constraints of the human body. There are no real pauses. However, it certainly sounds the part: you'll find plenty of hyper-fast drums, guitar thrashing and guttural growling."
  • How AI Will Turn Us All Into Filmmakers (WIRED) — "AI-assisted editing won't make Oscar-worthy auteurs out of us. But amateur visual storytelling will probably explode in complexity."
  • Experts Weigh in on Merits of AI in Education (THE Journal) — "AI systems are perfect for analyzing students’ progress, providing more practice where needed and moving on to new material when students are ready," she stated. "This allows time with instructors to focus on more complex learning, including 21st-century skills."

The drawbacks of Artificial Intelligence

It's really interesting to do philosophical thought experiments with kids. For example, the trolley problem, a staple of undergraduate Philosophy courses, is also accessible to children from a fairly young age.

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:
  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the more ethical option?

With the advent of autonomous vehicles, these are no longer idle questions. The vehicles, which have to make split-second decisions, may have to decide whether to hit a pram containing a baby, or swerve and hit a couple of pensioners. Due to cultural differences, even that's not something that can be easily programmed, as the diagram below demonstrates.

Self-driving cars: pedestrians vs passengers

For two countries that are so close together, it’s really interesting that Japan and China are on the opposite ends of the spectrum when it comes to saving passengers or pedestrians!

The authors of the paper cited in the article are careful to point out that countries shouldn’t simply create laws based on popular opinion:

Edmond Awad, an author of the paper, brought up the social status comparison as an example. “It seems concerning that people found it okay to a significant degree to spare higher status over lower status,” he said. “It's important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’” The results, he said, should be used by industry and government as a foundation for understanding how the public would react to the ethics of different design and policy decisions.
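
To see why you can't simply 'program in' the public's preferences, here's a deliberately naive sketch of my own; the country weights are invented for illustration and are emphatically not taken from the paper:

```python
# Deliberately naive: what would 'programming in' public moral preferences
# even look like? The weights below are invented for illustration only.

SPARE_PEDESTRIANS_WEIGHT = {
    "JP": 0.8,  # hypothetical weight favouring pedestrians
    "CN": 0.3,  # hypothetical weight favouring passengers
}

def should_swerve(country, pedestrians_at_risk, passengers_at_risk):
    """Return True if the car should swerve (endangering its passengers)
    to spare pedestrians, under a crude weighted comparison."""
    weight = SPARE_PEDESTRIANS_WEIGHT.get(country, 0.5)
    return weight * pedestrians_at_risk > (1 - weight) * passengers_at_risk

# Identical situation, different outcome, purely because of a hard-coded constant.
print(should_swerve("JP", pedestrians_at_risk=2, passengers_at_risk=2))  # True
print(should_swerve("CN", pedestrians_at_risk=2, passengers_at_risk=2))  # False
```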

This is why we need more people with a background in the Humanities working in tech, and why we need to be having a real conversation about ethics and AI.

Of course, that's easier said than done, particularly when those companies that are in a position to make significant strides in this regard have near-monopolies in their field and are pulling in eye-watering amounts of money. A recent example of this is Google's AI ethics committee, which was attacked as a smokescreen soon after it was convened:

Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just “ethics washing,” a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, “Look, we’re doing something.” It deflects criticism, and because the boards lack any power, it means the companies don’t change.

[...]

“It’s not that people are against governance bodies, but we have no transparency into how they’re built,” [Rumman] Chowdhury [a data scientist and lead for responsible AI at management consultancy Accenture] tells The Verge. With regard to Google’s most recent board, she says, “This board cannot make changes, it can just make suggestions. They can’t talk about it with the public. So what oversight capabilities do they have?”

As we saw around privacy, it takes a trusted multi-national body like the European Union to create a regulatory framework like GDPR for these issues. Thankfully, they've started that process by releasing guidelines containing seven requirements to create trustworthy AI:
  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  4. Transparency: The traceability of AI systems should be ensured.
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The problem isn't that people are going out of their way to build malevolent systems to rob us of our humanity. As usual, bad things happen because of more mundane requirements. For example, The Guardian has recently reported on concerns around predictive policing and hospitals using AI to predict everything from no-shows to risk of illness.

When we throw facial recognition into the mix, things get particularly scary. It’s all very well for Taylor Swift to use this technology to identify stalkers at her concerts, but given its massive drawbacks, perhaps we should restrict facial recognition somehow?

Human bias can seep into AI systems. Amazon abandoned a recruiting algorithm after it was shown to favor men’s resumes over women’s; researchers concluded an algorithm used in courtroom sentencing was more lenient to white people than to black people; a study found that mortgage algorithms discriminate against Latino and African American borrowers.
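
As a toy illustration of how that seepage happens, here's a sketch using entirely made-up data: a 'model' trained on biased historical hiring decisions simply learns to reproduce the bias. It bears no relation to any real company's system.

```python
# Toy illustration with made-up data: a model trained on biased historical
# hiring decisions happily reproduces that bias.

# Historical decisions: (years_experience, group, was_hired)
history = [
    (5, "A", True), (3, "A", True), (2, "A", True), (1, "A", False),
    (5, "B", False), (4, "B", False), (3, "B", True), (1, "B", False),
]

def train(history):
    """'Learn' the hire rate for each group — a stand-in for how a real model
    can latch onto group membership (or a proxy for it) as a predictive feature."""
    rates = {}
    for _, group, hired in history:
        hires, total = rates.get(group, (0, 0))
        rates[group] = (hires + int(hired), total + 1)
    return {group: hires / total for group, (hires, total) in rates.items()}

def predict(model, group):
    # Recommend interviewing only if the historical hire rate exceeds 50%.
    return model[group] > 0.5

model = train(history)
print(model)                 # {'A': 0.75, 'B': 0.25} — the bias is now baked in
print(predict(model, "A"))   # True
print(predict(model, "B"))   # False, regardless of the candidate's experience
```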

Facial recognition might be a cool way to unlock your phone, but the kind of micro-expressions that made for great television in the series Lie to Me are now easily exploited in what is expected to become a $20bn industry.

The tricky thing with all of this is that it's very difficult for us as individuals to make a difference here. The problem needs to be tackled at a much higher level, as with GDPR. That will take time, and meanwhile the use of AI is exploding. Be careful out there.


Also check out:

Opting in and out of algorithms

It's now over seven years since I submitted my doctoral thesis on digital literacies. Since then, almost the entire time my daughter has been alive, the world has changed a lot.

Writing in The Conversation, Anjana Susarla explains her view that digital literacy goes well beyond functional skills:

In my view, the new digital literacy is not using a computer or being on the internet, but understanding and evaluating the consequences of an always-plugged-in lifestyle. This lifestyle has a meaningful impact on how people interact with others; on their ability to pay attention to new information; and on the complexity of their decision-making processes.

Digital literacies are plural, context-dependent and always evolving. Right now, I think Susarla is absolutely correct to be focusing on algorithms and the way they interact with society. Ben Williamson is definitely someone to follow and read up on in that regard.

Over the past few years I've been trying (both directly and indirectly) to educate people about the impact of algorithms on everything from fake news to privacy. It's one of the reasons I don't use Facebook, for example, and go out of my way to explain to others why they shouldn't either:

A study of Facebook usage found that when participants were made aware of Facebook’s algorithm for curating news feeds, about 83% of participants modified their behavior to try to take advantage of the algorithm, while around 10% decreased their usage of Facebook.

[...]

However, a vast majority of platforms do not provide either such flexibility to their end users or the right to choose how the algorithm uses their preferences in curating their news feed or in recommending them content. If there are options, users may not know about them. About 74% of Facebook’s users said in a survey that they were not aware of how the platform characterizes their personal interests.

Although I'm still not going to join Facebook, one reason I'm a little more chilled out about algorithms and privacy these days is the GDPR. If it's enforced effectively (as I think it will be), then it should really keep Big Tech in check:

As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation” of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.

[...]

But transparency is not a panacea. Even when an algorithm’s overall process is sketched out, the details may still be too complex for users to comprehend. Transparency will help only users who are sophisticated enough to grasp the intricacies of algorithms.
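
To make the point concrete, here's a sketch of what a 'right to explanation' might amount to for a deliberately simple, invented scoring model. Even in this best case, the 'explanation' is a list of weighted numbers:

```python
# Invented model and weights, for illustration only. Real systems are far more
# complex, which is exactly the problem with relying on transparency alone.

WEIGHTS = {"income": 0.4, "years_at_address": 0.2, "missed_payments": -0.9}
THRESHOLD = 1.0

def explain(applicant):
    """Return the decision plus each feature's contribution, largest first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    decision = "approved" if sum(contributions.values()) >= THRESHOLD else "declined"
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, ranked

applicant = {"income": 3.0, "years_at_address": 2.0, "missed_payments": 1.0}
decision, ranked = explain(applicant)
print(decision)  # 'declined'
for feature, contribution in ranked:
    print(f"{feature}: {contribution:+.2f}")
# income: +1.20
# missed_payments: -0.90
# years_at_address: +0.40
```

That's the 'recipe book' in miniature: accurate and auditable, but still only legible to people who are already comfortable reading models.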

I agree that it's not enough to just tell people that they're being tracked without them being able to do something about it. That leads to technological defeatism. What we need are simple, easy-to-use tools that enable user privacy and security. These aren't going to come through tech industry self-regulation, but through regulatory frameworks like GDPR.

Source: The Conversation


Also check out: