Author: Doug Belshaw

How to be super-productive

Not a huge sample size, but this article has studied what makes ‘super-productive’ people tick:

We collected data on over 7,000 people who were rated by their manager on their level of productivity and 48 specific behaviors. Each person was also rated by an average of 11 other people, including peers, subordinates, and others. We identified the specific behaviors that were correlated with high levels of productivity — the top 10% in our sample — and then performed a factor analysis.

Here’s the list of seven things that came out of the study:

  1. Set stretch goals
  2. Show consistency
  3. Have knowledge and technical expertise
  4. Drive for results
  5. Anticipate and solve problems
  6. Take initiative
  7. Be collaborative
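The quoted methodology — correlate rated behaviours with membership of the top productivity decile, then factor-analyse the correlated behaviours — can be sketched in a few lines. Everything below is synthetic: the simulated ratings, the single latent factor, and all the numbers are my assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 7,000 people x 48 behaviour ratings, where one
# latent "productivity" trait drives some behaviours more than others.
n_people, n_behaviours = 7000, 48
latent = rng.normal(size=(n_people, 1))
loadings = rng.uniform(0.0, 0.8, size=(1, n_behaviours))
ratings = latent @ loadings + rng.normal(scale=0.6, size=(n_people, n_behaviours))
productivity = latent[:, 0] + rng.normal(scale=0.5, size=n_people)

# Step 1: find behaviours correlated with being in the top 10%.
top = productivity >= np.quantile(productivity, 0.9)
corrs = np.array([np.corrcoef(ratings[:, j], top)[0, 1]
                  for j in range(n_behaviours)])
selected = np.argsort(corrs)[-10:]  # ten most correlated behaviours

# Step 2: factor-analyse the selected behaviours (principal-axis style:
# eigendecompose their correlation matrix, keep the dominant factors).
R = np.corrcoef(ratings[:, selected], rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
explained = np.sort(eigvals)[::-1] / eigvals.sum()
print(f"variance explained by first factor: {explained[0]:.2f}")
```

With one strong latent trait baked into the simulation, the first factor dominates; in the real study, several distinct factors emerged, which is where the list of seven behaviours comes from.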

In my experience, you could actually just focus on helping people with three things:

  • Show up
  • Be proactive
  • Collaborate

That’s certainly been my experience of high-performers over my career so far!

Source: Harvard Business Review (via Ian O’Byrne)

"We do not belong to those who have ideas only among books, when stimulated by books. It is our habit to think outdoors — walking, leaping, climbing, dancing, preferably on lonely mountains or near the sea where even the trails become thoughtful." (Friedrich Nietzsche)

Issue #301: Endless horse

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Clickbait and switch?

Should you design for addiction or for loyalty? That’s the question posed by Michelle Manafy in this post for Nieman Lab. It all depends, she says, on whether you’re trying to attract users or an audience.

With advertising as the primary driver of web revenue, many publishers have chased the click dragon. Seeking to meet marketers’ insatiable desire for impressions, publishers doubled down on quick clicks. Headlines became little more than a means to a clickthrough, often regardless of whether the article would pay off or even if the topic was worthy of coverage. And — since we all know there are still plenty of publications focusing on hot headlines over substance — this method pays off. In short-term revenue, that is.

However, the reader experience that shallow clicks deliver doesn’t develop brand affinity or customer loyalty. And the negative consumer experience has actually been shown to extend to any advertising placed in its context. Sure, there are still those seeking a quick buck — but these days, we all see clickbait for what it is.

Audiences mature over time and become wary of particular approaches. Remember “…and you’ll not believe what came next” headlines?

As Manafy notes, it’s much easier to design for addiction than to build an audience. The former just requires lots and lots of tracking, something the web has become spectacularly good at thanks to advertising.

For example, many push notifications are specifically designed to leverage the desire for human interaction to generate clicks (such as when a user is alerted that their friend liked an article). Push notifications and alerts are also unpredictable (Will we have likes? Mentions? New followers? Negative comments?). And this unpredictability, or B.F. Skinner’s principle of variable rewards, is the same one used in those notoriously addictive slot machines. They’re also lucrative — generating more revenue in the U.S. than baseball, theme parks, and movies combined. A pull-to-refresh even smacks of a slot machine lever.
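To see why variable rewards hook people where fixed ones don’t, here’s a toy simulation (the numbers and function names are mine, not the article’s): both schedules pay out at the same average rate, but only the variable one makes each individual check a gamble.

```python
import random
from statistics import pstdev

def fixed_schedule(n_checks, every=5):
    """Reward on every 5th check: totally predictable."""
    return [1 if (i + 1) % every == 0 else 0 for i in range(n_checks)]

def variable_schedule(n_checks, p=0.2, seed=42):
    """Same average payout rate, but each check pays off unpredictably."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n_checks)]

def gaps(schedule):
    """Distances between successive rewards."""
    hits = [i for i, r in enumerate(schedule) if r]
    return [b - a for a, b in zip(hits, hits[1:])]

fixed, variable = fixed_schedule(100), variable_schedule(100)
# Similar totals, but the spread of the gaps is what produces the
# "will this one pay off?" feeling Skinner identified.
print(sum(fixed), sum(variable))
print(pstdev(gaps(fixed)), pstdev(gaps(variable)))
```

The fixed schedule’s gaps are all identical (spread of zero); the variable schedule’s gaps vary, which is exactly the uncertainty that keeps people pulling to refresh.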

The problem is that designing for addiction isn’t a long-term strategy. Who plays Farmville these days? And the makers of Candy Crush aren’t exactly crushing it with their share price.

Sure, an addict is “engaged” — clicking, liking, swiping — but what if they discover that your product is bad for them? Or that it’s not delivering as much value as it does harm? The only option for many addicts is to quit, cold turkey. Sure, many won’t have the willpower, and you can probably generate revenue off these users (yes, users). But is that a long-term strategy you can live with? And is it a growth strategy, should the philosophical, ethical, or regulatory tide turn against you?

The ‘regulatory tide’ referenced here is exemplified through GDPR, which is already causing a sea change in attitude towards user data. Compliance with teeth, it seems, gets results.

Designing for sustainability isn’t just good from a regulatory point of view, it’s good for long-term business, argues Manafy:

Where addiction relies on an imbalanced and unstable relationship, loyal customers will return willingly time and again. They’ll refer you to others. They’ll be interested in your new offerings, because they will already rely on you to deliver. And, as an added bonus, these feelings of goodwill will extend to any advertising you deliver too. Through the provision of quality content, delivered through excellent experiences at predictable and optimal times, content can become a trusted ally, not a fleeting infatuation or unhealthy compulsion.

Instead of thinking of your audience as ‘users’ waiting for their next hit, she suggests, think of them as your audience. That framing will help you make far better design decisions.

Source: Nieman Lab

"Once you learn to read, you will be forever free."

(Frederick Douglass)

Soviet-era industrial design

While the prospects of me learning the Russian language anytime soon are effectively zero, I do have a soft spot for the country. My favourite novels are 19th century Russian fiction, the historical time period I’m most fond of is the Russian revolutions of 1917*, and I really like some of the designs that came out of Bolshevik and Stalinist Russia. (That doesn’t mean I condone the atrocities, of course.)

The Soviet era, from 1950 onwards, isn’t really a time period I’ve studied in much depth. I taught it as a History teacher as part of a module on the Cold War, but that was very much focused on the American and British side of things. So I’ve missed out on some of the wonderful design that came out of that era. Here are a couple of my favourites featured in this article. I may have to buy the book it mentions!

Soviet radio

Soviet textiles

Source: Atlas Obscura

* I’m currently reading October: The Story of the Russian Revolution by China Miéville, which I’d recommend.

Conversational implicature

In references for jobs, former employers are required to be positive. Therefore, a reference that focuses on how polite and punctual someone is could actually be a damning indictment of their ability. Such ‘conversational implicature’ is the focus of this article:

When we convey a message indirectly like this, linguists say that we implicate the meaning, and they refer to the meaning implicated as an implicature. These terms were coined by the British philosopher Paul Grice (1913-88), who proposed an influential account of implicature in his classic paper ‘Logic and Conversation’ (1975), reprinted in his book Studies in the Way of Words (1989). Grice distinguished several forms of implicature, the most important being conversational implicature. A conversational implicature, Grice held, depends, not on the meaning of the words employed (their semantics), but on the way that the words are used and interpreted (their pragmatics).

From my point of view, this is similar to the difference between productive and unproductive ambiguity.

The distinction between what is said and what is conversationally implicated isn’t just a technical philosophical one. It highlights the extent to which human communication is pragmatic and non-literal. We routinely rely on conversational implicature to supplement and enrich our utterances, thus saving time and providing a discreet way of conveying sensitive information. But this convenience also creates ethical and legal problems. Are we responsible for what we implicate as well as for what we actually say?

For example, and as the article notes, “shall we go upstairs?” can mean a sexual invitation, which may or may not later imply consent. It’s a tricky area.

I’ve noted that the more technically minded a person is, the less they use conversational implicature. In addition, and I’m not sure if this is true or just my own experience, I’ve found that Americans tend to be more literal in their communication than Europeans.

To avoid disputes and confusion, perhaps we should use implicature less and communicate more explicitly? But is that recommendation feasible, given the extent to which human communication relies on pragmatics?

To use conversational implicature is human. It can be annoying. It can turn political. But it’s an extremely useful tool, and it certainly helps us all rub along together.

Source: Aeon

Ryan Holiday’s 13 daily life-changing habits

Articles like this are usually clickbait with two or three useful bits of advice that you’ve already read elsewhere, coupled with some other random things to pad it out. That’s not the case with Ryan Holiday’s post, which lists:

  1. Prepare for the hours ahead
  2. Go for a walk
  3. Do the deep work
  4. Do a kindness
  5. Read. Read. Read.
  6. Find true quiet
  7. Make time for strenuous exercise
  8. Think about death
  9. Seize the alive time
  10. Say thanks — to the good and bad
  11. Put the day up for review
  12. Find a way to connect to something big
  13. Get eight hours of sleep

I’m doing pretty well on all of these at the moment, except perhaps number eleven. I used to ‘call myself into the office’ each month. Perhaps I should start doing that again?

Source: Thought Catalog

Valuing and signalling your skills

When I rocked up to the MoodleMoot in Miami back in November last year, I ran a workshop that involved human spectrograms, post-it notes, and participatory activities. Although I work in tech and my current role is effectively a product manager for Moodle, I still see myself primarily as an educator.

This, however, was a surprise for some people who didn’t know me very well before I joined Moodle. As one person put it, “I didn’t know you had that in your toolbox”. The same was true at Mozilla; some people there just saw me as a quasi-academic working on web literacy stuff.

Given this, I was particularly interested in a post from Steve Blank which outlined why he enjoys working with startup-like organisations rather than large, established companies:

It never crossed my mind that I gravitated to startups because I thought more of my abilities than the value a large company would put on them. At least not consciously. But that’s the conclusion of a provocative research paper, Asymmetric Information and Entrepreneurship, that explains a new theory of why some people choose to be entrepreneurs. The authors’ conclusion — Entrepreneurs think they are better than their resumes show and realize they can make more money by going it alone. And in most cases, they are right.

If you stop and think for a moment, it’s entirely obvious that you know your skills, interests, and knowledge better than anyone who hires you for a specific role. Ordinarily, they’re interested in the version of you that fits the job description, rather than you as a holistic human being.

The paper that Blank cites covers research which followed 12,686 people over 30+ years. It comes up with seven main findings, but the most interesting thing for me (given my work on badges) is the following:

If the authors are right, the way we signal ability (resumes listing education and work history) is not only a poor predictor of success, but has implications for existing companies, startups, education, and public policy that require further thought and research.

It’s perhaps a little simplistic as a binary, but Blank cites a 1970s paper that uses ‘lemons’ and ‘cherries’ as metaphors to compare workers:

Lemons Versus Cherries. The most provocative conclusion in the paper is that asymmetric information about ability leads existing companies to employ only “lemons,” relatively unproductive workers. The talented and more productive choose entrepreneurship. (Asymmetric Information is when one party has more or better information than the other.) In this case the entrepreneurs know something potential employers don’t – that nowhere on their resume does it show resiliency, curiosity, agility, resourcefulness, pattern recognition, tenacity and having a passion for products.

This implication, that entrepreneurs are, in fact, “cherries” contrasts with a large body of literature in social science, which says that the entrepreneurs are the “lemons”— those who cannot find, cannot hold, or cannot stand “real jobs.”

My main takeaway from this isn’t necessarily that entrepreneurship is always the best option, but that we’re really bad at signalling abilities and finding the right people to work with. I’m convinced that using digital credentials can improve that, but only if we use them in transformational ways, rather than replicate the status quo.

Source: Steve Blank

Intimate data analytics in education

The ever-relevant and compulsively-readable Ben Williamson turns his attention to ‘precision education’ in his latest post. It would seem that now that the phrase ‘personalised learning’ has jumped the proverbial shark, people are doubling down on the rather dangerous assumption that we just need more data to provide better learning experiences.

In some ways, precision education looks a lot like a raft of other personalized learning practices and platform developments that have taken shape over the past few years. Driven by developments in learning analytics and adaptive learning technologies, personalized learning has become the dominant focus of the educational technology industry and the main priority for philanthropic funders such as Bill Gates and Mark Zuckerberg.

A particularly important aspect of precision education as it is being advocated by others, however, is its scientific basis. Whereas most personalized learning platforms tend to focus on analysing student progress and outcomes, precision education requires much more intimate data to be collected from students. Precision education represents a shift from the collection of assessment-type data about educational outcomes, to the generation of data about the intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.

As Williamson points out, the collection of ‘intimate data’ is especially concerning in the wake of the Cambridge Analytica revelations.

Many people will find the ideas behind precision education seriously concerning. For a start, there appear to be some alarming symmetries between the logics of targeted learning and targeted advertising that have generated heated public and media attention already in 2018. Data protection and privacy are obvious risks when data are collected about people’s private, intimate and interior lives, bodies and brains. The ethical stakes in using genetics, neural information and psychological profiles to target students with differentiated learning inputs are significant.

There’s a very definite worldview which presupposes that we just need to throw more technology at a problem until it goes away. That may be true in some situations, but at what cost? And to what extent is the outcome an artefact of the constraints of the technologies? Hopefully my own kids will have finished school before this kind of nonsense becomes mainstream. I do, however, worry about my grandchildren.

The technical machinery alone required for precision education would be vast. It would have to include neurotechnologies for gathering brain data, such as neuroheadsets for EEG monitoring. It would require new kinds of tests, such as those of personality and noncognitive skills, as well as real-time analytics programs of the kind promoted by personalized-learning enthusiasts. Gathering intimate data might also require genetics testing technologies, and perhaps wearable-enhanced learning devices for capturing real-time data from students’ bodies as proxy psychometric measures of their responses to learning inputs and materials.

Thankfully, Williamson cites the work of academics who are proposing a different way forward. Something that respects the social aspect of learning rather than a reductionist view that focuses on inputs and outputs.

One productive way forward might be to approach precision education from a ‘biosocial’ perspective. As Deborah Youdell argues, learning may be best understood as the result of ‘social and biological entanglements.’ She advocates collaborative, inter-disciplinary research across social and biological sciences to understand learning processes as the dynamic outcomes of biological, genetic and neural factors combined with socially and culturally embedded interactions and meaning-making processes. A variety of biological and neuroscientific ideas are being developed in education, too, making policy and practice more bio-inspired.

The trouble, of course, is that it’s not enough for academics to write papers about things. Or even for journalists to write newspaper articles. Even with all of the firestorm over Facebook recently, people are still using the platform. If the advocates of ‘precision education’ have their way, I wonder who will actually create something meaningful that opposes their technocratic worldview?

Source: Code Acts in Education