What can dreams of a communist robot utopia teach us about human nature?
This article in Aeon by Victor Petrov posits that, in the post-industrial age, we no longer see human beings as primarily manual workers, but as thinkers using digital screens to get stuff done. What does that do to our self-image?
The communist parties of eastern Europe grappled with this new question, too. The utopian social order they were promising from Berlin to Vladivostok rested on the claim that proletarian societies would use technology to its full potential, in the service of all working people. Bourgeois information society would alienate workers even more from their own labour, turning them into playthings of the ruling classes; but a socialist information society would free Man from drudgery, unleash his creative powers, and enable him to ‘hunt in the morning … and criticise after dinner’, as Karl Marx put it in 1845. However, socialist society and its intellectuals foresaw many of the anxieties that are still with us today. What would a man do in a world of no labour, and where thinking was done by machines?

Bulgaria was a communist country that, after the Second World War, went from producing cigarettes to being one of the world's largest producers of computers. This had a knock-on effect on what people wrote about in the country.
The Bulgarian reader was increasingly treated to debates about what humanity would be in this new age. Some, such as the philosopher Mityu Yankov, argued that what set Man apart from the animals was his ability to change and shape nature. For thousands of years, he had done this through physical means and his own brawn. But the Industrial Revolution had started a change of Man’s own nature, which was culminating with the Information Revolution – humanity now was becoming not a worker but a ‘governor’, a master of nature, and the means of production were not machines or muscles, but the human brain.

Lyuben Dilov, a popular sci-fi author, focused on "the boundaries between man and machine, brain and computer". His books were full of societies obsessed with technology.
Added to this, there is technological anxiety, too – what is it to be a man when there are so many machines? Thus, Dilov invents a Fourth Law of Robotics, to supplement Asimov’s famous three, which states that ‘the robot must, in all circumstances, legitimate itself as a robot’. This was a reaction by science to the roboticists’ wish to give their creations ever more human qualities and appearance, making them subordinate to their function – often copying animal or insect forms. Zenon muses on human interactions with robots that start from a young age, giving the child power over the machine from the outset. This undermines our trust in the very machines on which we depend. Humans need a distinction from the robots; they need to know that they are always in control and cannot be lied to. For Dilov, the anxiety was about the limits of humanity, at least in its current stage – fearful, humans could not yet treat anything else, including their machines, as equals.

This all seems very pertinent at a time when deepfakes make us question what is real online. We're perhaps less worried about a Blade Runner-style dystopia and more concerned about digital 'reality', but, nevertheless, questions about what it means to be human persist.
Bulgarian robots were both to be feared and to be embraced as the future. Socialism promised to end meaningless labour but reproduced many of the anxieties that are still with us today in our ever-automating world. What Man can do that a machine cannot is a question we still haven’t answered. But, like Kesarovski (another Bulgarian sci-fi author discussed in the article), perhaps we need not fear this new world so much, nor give up our reservations for the promise of a better, easier world.

Source: Aeon
Escaping from the crush of circumstances
“Today I escaped from the crush of circumstances, or better put, I threw them out, for the crush wasn’t from outside me but in my own assumptions.”
(Marcus Aurelius)
The benefits of reading aloud to children
This article in the New York Times by Perri Klass, M.D. focuses on studies that show a link between parents reading to their children and a reduction in problematic behaviour.
I really like the way that they focus on the positives and point out how much the child loves the interaction with their parent through the text.

This study involved 675 families with children from birth to 5; it was a randomized trial in which 225 families received the intervention, called the Video Interaction Project, and the other families served as controls. The V.I.P. model was originally developed in 1998, and has been studied extensively by this research group.
Participating families received books and toys when they visited the pediatric clinic. They met briefly with a parenting coach working with the program to talk about their child’s development, what the parents had noticed, and what they might expect developmentally, and then they were videotaped playing and reading with their child for about five minutes (or a little longer in the part of the study which continued into the preschool years). Immediately after, they watched the videotape with the study interventionist, who helped point out the child’s responses.
I don't know enough about the causes of ADHD to be able to comment, but as a teacher and parent, I do know there's a link between the attention you give and the attention you receive.

The Video Interaction Project started as an infant-toddler program, working with low-income urban families in New York during clinic visits from birth to 3 years of age. Previously published data from a randomized controlled trial funded by the National Institute of Child Health and Human Development showed that the 3-year-olds who had received the intervention had improved behavior — that is, they were significantly less likely to be aggressive or hyperactive than the 3-year-olds in the control group.
It is a bit sad that we have to encourage parents to play with their children between birth and the age of three, but I guess in the age of smartphone addiction, we kind of have to.

“The reduction in hyperactivity is a reduction in meeting clinical levels of hyperactivity,” Dr. Mendelsohn said. “We may be helping some children so they don’t need to have certain kinds of evaluations.” Children who grow up in poverty are at much higher risk of behavior problems in school, so reducing the risk of those attention and behavior problems is one important strategy for reducing educational disparities — as is improving children’s language skills, another source of school problems for poor children.
Source: The New York Times
Image CC BY Jason Lander
You need more daylight to sleep better
As an historian, I’ve often been fascinated by what life must have been like before the dawn of electricity. I have a love-hate relationship with artificial light. On the one hand, I use a lightbox to stave off Seasonal Affective Disorder. On the other hand, I’ve got (my optician tells me) not only pale blue irises but very thin corneas. That makes me photophobic and regularly subject to the kind of glare I can only imagine ‘normal’ people get after staring at a lightbulb for a while.
In this article, Linda Geddes describes an experiment in which she decided to forgo artificial light for a number of weeks to see what effect it had on her health and, most importantly, her sleep.
Working with sleep researchers Derk-Jan Dijk and Nayantara Santhi at the University of Surrey, I designed a programme to go cold-turkey on artificial light after dark, and to try to maximise exposure to natural light during the day – all while juggling an office job and busy family life in urban Bristol.

By the end of 2017, my devices all started to have something like f.lux built in, instead of my having to install it manually. There's a general realisation that blue light before bedtime is a bad idea. What this article points out, however, is another factor: how bright the light is that you're subjected to during the day.
Light enables us to see, but it affects many other body systems as well. Light in the morning advances our internal clock, making us more lark-like, while light at night delays the clock, making us more owlish. Light also suppresses a hormone called melatonin, which signals to the rest of the body that it’s night-time – including the parts that regulate sleep. “Apart from vision, light has a powerful non-visual effect on our body and mind, something to remember when we stay indoors all day and have lights on late into the night,” says Santhi, who previously demonstrated that the evening light in our homes suppresses melatonin and delays the timing of our sleep.

The important correlation here is between the strength of light Geddes experienced during her waking hours, and the quality of her sleep.
But when I correlated my sleep with the amount of light I was exposed to during the daytime, an interesting pattern emerged. On the brightest days, I went to bed earlier. And for every 100 lux increase in my average daylight exposure, I experienced an increase in sleep efficiency of almost 1% and got an extra 10 minutes of sleep.

This isn't just something that Geddes has experienced; studies have also found this kind of correlation.
In March 2007, Dijk and his colleagues replaced the light bulbs on two floors of an office block in northern England, housing an electronic parts distribution company. Workers on one floor of the building were exposed to blue-enriched lighting for four weeks; those on the other floor were exposed to white light. Then the bulbs were switched, meaning both groups were ultimately exposed to both types of light. They found that exposure to the blue-enriched white light during daytime hours improved the workers’ subjective alertness, performance, and evening fatigue. They also reported better quality and longer sleep.

So the key takeaway message?
It’s ridiculously simple. But spending more time outdoors during the daytime and dimming the lights in the evening really could be a recipe for better sleep and health. For millennia, humans have lived in synchrony with the Sun. Perhaps it's time we got reacquainted.

Source: BBC Future
On the cultural value of memes
I’ve always been a big fan of memes. In fact, I discuss them in my thesis, ebook, and TEDx talk. This long-ish article from Jay Owens digs into their relationship with fake news and what he calls ‘post-authenticity’. What I’m really interested in, though, comes towards the end. He gets into the power of memes and why they’re the perfect form of online cultural expression.
So through humour, exaggeration, and irony — a truth emerges about how people are actually feeling. A truth that they may not have felt able to express straightforwardly. And there’s just as much, and potentially more, community present in these groups as in many of the more traditional civic-oriented groups Zuckerberg’s strategy may have had in mind.

The thing that can be missing from text-based interactions is empathy. The right kind of meme, however, speaks through images and words, but also to something else that a group has in common.
Meme formats — from this week’s American Chopper dialectic model to now classics like the “Exploding Brain,” “Distracted Boyfriend,” and “Tag Yourself” templates — are by their very nature iterative and quotable. That is how the meme functions, through reference to the original context and memes that have come before, coupled with creative remixing to speak to a particular audience, topic, or moment. Each new instance of a meme is thereby automatically familiar and recognisable. The format carries a meta-message to the audience: “This is familiar, not weird.” And the audience is prepared to know how to react: you like, you respond with laughter-referencing emoji, you tag your friends in the comments.

Let's take this example, which Owens cites in the article. I sent it to my wife via Telegram (an instant messaging app that we use as a permanent backchannel).
Her response, inevitably, was: 😂
It’s funny because it’s true. But it also quickly communicates solidarity and empathy.
The format acts as a kind of Trojan horse, then, for sharing difficult feelings — because the format primes the audience to respond hospitably. There isn’t that moment of feeling stuck over how to respond to a friend’s emotional disclosure, because she hasn’t made the big statement directly, but instead through irony and cultural quotation — distancing herself from the topic through memes, typically by using stock photography (as Leigh Alexander notes) rather than anything as gauche as a picture of oneself. This enables you the viewer to sidestep the full intensity of it in your response, should you choose, but still, crucially, to respond). And also to DM your friend and ask, “Hey, are you alright?” and cut to the realtalk should you so choose to.

So, effectively, you can be communicating different things to different people. If, instead of sending the 90s kids image above directly to my wife via Telegram, I'd shared it to my Twitter followers, it may have elicited a different response. Some people would have liked and retweeted it, for sure, but someone who knows me well might ask if I'm OK. After all, there's a subtext in there of feeling like you're "stuck".
Owens goes on to talk about how memetic culture means that we’re living in a ‘post-authentic’ world. But did such authenticity ever really exist?
So perhaps to say that this post-authentic moment is one of evolving, increasingly nuanced collective communication norms, able to operate with multi-layered recursive meanings and ironies in disposable pop culture content… is kind of cold comfort.

Amen to that.

Nonetheless, author Robin Sloan described the genius of the “American Chopper” meme as being that “THIS IS THE ONLY MEME FORMAT THAT ACKNOWLEDGES THE EXISTENCE OF COMPETING INFORMATION, AND AS SUCH IT IS THE ONLY FORMAT SUITED TO THE COMPLEXITY OF OUR WORLD!”
Source: Jay Owens
The résumé is a poor proxy for a human being
I’ve never been a fan of the résumé, or ‘Curriculum Vitae’ (CV) as we tend to call them in the UK. How on earth can a couple of sheets of paper ever hope to sum up an individual in all of their complexity? It inevitably leads to the kind of things that end up on LinkedIn profiles: your academic qualifications, job history, and a list of hobbies that don’t make you sound like a loser.
In this (long-ish) article for Quartz, Oliver Staley looks at what Laszlo Bock is up to with his new startup, with a detour through the history of the résumé.
“Resumes are terrible,” says Laszlo Bock, the former head of human resources at Google, where his team received 50,000 resumes a week. “It doesn’t capture the whole person. At best, they tell you what someone has done in the past and not what they’re capable of doing in the future.”
I really dislike résumés, and I’m delighted that I’ve managed to get my last couple of jobs without having to rely on them. I guess that’s a huge benefit of working openly; the web is your résumé.
Resumes force job seekers to contort their work and life history into corporately acceptable versions of their actual selves, to better conform to the employer’s expectation of the ideal candidate. Unusual or idiosyncratic careers complicate resumes. Gaps between jobs need to be accounted for. Skills and abilities learned outside of formal work or education aren’t easily explained. Employers may say they’re looking for job seekers to distinguish themselves, but the resume requires them to shed their distinguishing characteristics.
Unfortunately, Henry Ford’s ‘faster horses’ rule also applies to résumés. And (cue eye roll) people need to find a way to work in buzzwords like ‘blockchain’.
The resume of the near future will be a document with far more information—and information that is far more useful—than the ones we use now. Farther out, it may not be a resume at all, but rather a digital dossier, perhaps secured on the blockchain (paywall), and uploaded to a global job-pairing engine that is sorting you, and billions of other job seekers, against millions of openings to find the perfect match.
I’m more interested in different approaches than in doubling down on the existing one, so it’s good to see large multinational companies like Unilever doing away with résumés in favour of game-like assessments.
Two years ago, the North American division of Unilever—the consumer products giant—stopped asking for resumes for the approximately 150-200 positions it fills from college campuses annually. Instead, it’s relying on a mix of game-like assessments, automated video interviews, and in-person problem solving exercises to winnow down the field of 30,000 applicants.
It all sounds great but, at the end of the day, it’s extra unpaid work and more jumping through hoops.
The games are designed so there are no wrong answers— a weakness in one characteristic, like impulsivity, can reveal strength in another, like efficiency—and pymetrics gives candidates who don’t meet the standards for one position the option to apply for others at the company, or even at other companies. The algorithm matches candidates to the opportunities where they’re most likely to succeed. The goal, Polli says, is to eliminate the “rinse and repeat” process of submitting near identical applications for dozens of jobs, and instead use data science to target the best match of job and employee.
Back to Laszlo Bock, who claims that we should have an algorithmic system that matches people to available positions. I’m guessing he hasn’t read Brave New World.
For the system to work, it would need an understanding of a company’s corporate culture, and how people actually function within its walls—not just what the company says about its culture. And employees and applicants would need to be comfortable handing over their personal data.

For-profit entities wouldn’t be trusted as stewards of such sensitive information. Nor would governments, Bock says, noting that in communist Romania, where he was born, “the government literally had dossiers on every single citizen.”
Ultimately, Bock says, the system should be maintained by a not-for-profit, non-governmental organization. “What I’m imagining, no human being should ever look inside this thing. You shouldn’t need to,” he says.
Source: Quartz at Work
OEP (Open Educational Pragmatism?)
This is an interesting post to read, not least because I sat next to the author at the conference he describes last week, and we had a discussion about related issues. Michael Shaw, who’s a great guy and whom I’ve known for a few years, is in charge of Tes Resources.
Shaw notes he was wary of attending the conference, not only because it's a fairly tight-knit community:

I wondered if I would feel like an interloper at the first conference I’ve ever attended on Open Educational Resources (OERs).
It wasn’t a dress code issue (though in hindsight I should have worn trainers) but that most of the attendees at #OER18 were from universities, while only a few of us there worked for education businesses.
I work for a commercial company, one that makes money from advertising and recruitment services, plus — even more controversially in this context — by letting teachers sell resources to each other, and taking a percentage on transactions.

However, he found the hosts and participants "incredibly welcoming" and the debates "more open than [he'd] expected on how commercial organisations could play a part" in the ecosystem.
Shaw is keen to point out that the Tes Resources site that he manages is “a potential space for OER-sharing”. He goes on to talk about how he’s an ‘OER pragmatist’ rather than an ‘OER purist’. As a former journalist, Shaw is a great writer. However, I want to tease apart some things I think he conflates.
In his March 2018 post announcing the next phase of development for Tes Resources, Shaw announced that the goal was to create “a community of authors providing high-quality resources for educators”. He conflates that in this post with educators sharing Open Educational Resources. I don’t think the two things are the same, and that’s not because I’m an ‘OER purist’.
The concern that I, and others in the Open Education community, have around commercial players in the ecosystem is the tendency to embrace, extend, and extinguish:
So, think of Twitter before they closed their API: a thousand Twitter clients bloomed, and innovations such as pull-to-refresh were invented. Then Twitter decided to 'own the experience' of users and changed their API so that those third-party clients withered.
- Embrace: Development of software substantially compatible with a competing product, or implementing a public standard.
- Extend: Addition and promotion of features not supported by the competing product or part of the standard, creating interoperability problems for customers who try to use the 'simple' standard.
- Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors that do not or cannot support the new extensions.
Tes Resources, Shaw admitted to me, doesn’t even have an API. It’s a bit like Medium, the place he chose to publish this post. If he’d written the post in something like WordPress, he’d be notified of my reply via web standard technologies. Medium doesn’t adhere to those standards. Nor does Tes Resources. It’s a walled garden.
My call, then, would be for Tes Resources to develop an API so that services such as the MoodleNet project I’m leading can query and access it. Up until then, it’s not a repository. It’s just another silo.
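To make that concrete, here is a purely hypothetical sketch. Tes Resources has no public API, so the endpoint, parameters, and response fields below are invented; the point is simply to show the kind of query a service like MoodleNet might want to be able to make against an open resources repository:

```python
# Hypothetical sketch only: Tes Resources has no public API, so this endpoint
# and response shape are invented to illustrate what 'queryable' might mean.
import requests


def search_resources(query, licence="CC-BY"):
    """Search an imagined open resources API, filtering by licence."""
    response = requests.get(
        "https://api.resources.example.com/v1/search",  # invented endpoint
        params={"q": query, "licence": licence},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("results", [])


if __name__ == "__main__":
    # e.g. find openly licensed fractions resources for reuse elsewhere
    for item in search_resources("fractions", licence="CC-BY-SA"):
        print(item.get("title"), "->", item.get("url"))
```

An API along those lines, together with open licensing metadata, is what would turn a silo into something other services could build on.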
Source: Michael Shaw
Image: CC BY Jess
Everything is potentially a meme
Despite — or perhaps because of — my feelings towards the British monarchy, this absolutely made my day:
Isn’t the internet great?
Source: Haha
How to be super-productive
Not a huge sample size, but this article reports on a study of what makes ‘super-productive’ people tick:
We collected data on over 7,000 people who were rated by their manager on their level of productivity and 48 specific behaviors. Each person was also rated by an average of 11 other people, including peers, subordinates, and others. We identified the specific behaviors that were correlated with high levels of productivity — the top 10% in our sample — and then performed a factor analysis.

Here's the list of seven things that came out of the study (with a rough sketch of that kind of analysis after the list):
- Set stretch goals
- Show consistency
- Have knowledge and technical expertise
- Drive for results
- Anticipate and solve problems
- Take initiative
- Be collaborative
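As a rough illustration of the method described in the quote above (this is not the authors' code, and the ratings data here are fabricated), a factor analysis over a people-by-behaviours matrix looks something like this:

```python
# Rough sketch of the kind of analysis the study describes: ratings of many
# people on 48 behaviours, reduced to a handful of underlying factors.
# The data are fabricated; this is not the authors' code.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_people, n_behaviours = 7000, 48
ratings = rng.normal(loc=4.0, scale=1.0, size=(n_people, n_behaviours))

fa = FactorAnalysis(n_components=7, random_state=0)
fa.fit(ratings)

# Each row of the loadings matrix shows how strongly the 48 behaviours load
# onto one factor; clusters of high-loading behaviours are what researchers
# then label (e.g. 'drive for results').
loadings = fa.components_  # shape: (7 factors, 48 behaviours)
for i, factor in enumerate(loadings, start=1):
    top = np.argsort(np.abs(factor))[::-1][:3]
    print(f"Factor {i}: strongest behaviours (column indices) {top.tolist()}")
```

On random data like this the factors are meaningless, of course; in the real study the interesting part is which behaviours cluster together and how those clusters relate to the top 10% of performers.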
Source: Harvard Business Review (via Ian O’Byrne)
Thinking outdoors
“We do not belong to those who have ideas only among books, when stimulated by books. It is our habit to think outdoors — walking, leaping, climbing, dancing, preferably on lonely mountains or near the sea where even the trails become thoughtful.” (Friedrich Nietzsche)
Clickbait and switch?
Should you design for addiction or for loyalty? That’s the question posed by Michelle Manafy in this post for Nieman Lab. It all depends, she says, on whether you’re trying to attract users or an audience.
With advertising as the primary driver of web revenue, many publishers have chased the click dragon. Seeking to meet marketers’ insatiable desire for impressions, publishers doubled down on quick clicks. Headlines became little more than a means to a clickthrough, often regardless of whether the article would pay off or even if the topic was worthy of coverage. And — since we all know there are still plenty of publications focusing on hot headlines over substance — this method pays off. In short-term revenue, that is.

Audiences mature over time and become wary of particular approaches. Remember the “…and you’ll not believe what came next” headlines?

However, the reader experience that shallow clicks deliver doesn’t develop brand affinity or customer loyalty. And the negative consumer experience has actually been shown to extend to any advertising placed in its context. Sure, there are still those seeking a quick buck — but these days, we all see clickbait for what it is.
As Manafy notes, it’s much easier to design for addiction than to build an audience. The former just requires lots and lots of tracking — something the web has become spectacularly good at, due to advertising.
For example, many push notifications are specifically designed to leverage the desire for human interaction to generate clicks (such as when a user is alerted that their friend liked an article). Push notifications and alerts are also unpredictable (Will we have likes? Mentions? New followers? Negative comments?). And this unpredictability, or B.F. Skinner’s principle of variable rewards, is the same one used in those notoriously addictive slot machines. They’re also lucrative — generating more revenue in the U.S. than baseball, theme parks, and movies combined. A pull-to-refresh even smacks of a slot machine lever.

The problem is that designing for addiction isn't a long-term strategy. Who plays Farmville these days? And the makers of Candy Crush aren't exactly crushing it with their share price.
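To make the 'variable rewards' mechanism concrete, here is a minimal simulation (the 30% payoff probability is invented) of a notification check that only sometimes delivers anything, which is the intermittent reinforcement schedule Skinner identified and slot machines exploit:

```python
# Minimal illustration of a variable (unpredictable) reward schedule.
# The 30% payoff probability is invented purely for illustration.
import random

random.seed(42)


def check_notifications(p_reward=0.3):
    """Each check only sometimes delivers a 'reward' (a like, a mention...)."""
    return random.random() < p_reward


checks = [check_notifications() for _ in range(20)]
print("".join("*" if rewarded else "." for rewarded in checks))
print(f"{sum(checks)} rewarding checks out of {len(checks)}")
```

Because the payoff can't be predicted, every check feels like it might be the one that pays off, which is precisely what keeps people pulling the lever.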
Sure, an addict is “engaged” — clicking, liking, swiping — but what if they discover that your product is bad for them? Or that it’s not delivering as much value as it does harm? The only option for many addicts is to quit, cold turkey. Sure, many won’t have the willpower, and you can probably generate revenue off these users (yes, users). But is that a long-term strategy you can live with? And is it a growth strategy, should the philosophical, ethical, or regulatory tide turn against you?

The 'regulatory tide' referenced here is exemplified by GDPR, which is already causing a sea change in attitudes towards user data. Compliance with teeth, it seems, gets results.
Designing for sustainability isn’t just good from a regulatory point of view, it’s good for long-term business, argues Manafy:
Where addiction relies on an imbalanced and unstable relationship, loyal customers will return willingly time and again. They’ll refer you to others. They’ll be interested in your new offerings, because they will already rely on you to deliver. And, as an added bonus, these feelings of goodwill will extend to any advertising you deliver too. Through the provision of quality content, delivered through excellent experiences at predictable and optimal times, content can become a trusted ally, not a fleeting infatuation or unhealthy compulsion.

Instead of thinking of the people you serve as 'users' waiting for their next hit, she suggests, think of them as your audience. That framing will help you make much better design decisions.
Source: Nieman Lab
Soviet-era industrial design
While the prospects of me learning the Russian language anytime soon are effectively zero, I do have a soft spot for the country. My favourite novels are 19th century Russian fiction, the historical time period I’m most fond of is the Russian revolutions of 1917*, and I really like some of the designs that came out of Bolshevik and Stalinist Russia. (That doesn’t mean I condone the atrocities, of course.)
The Soviet era, from 1950 onwards, isn’t really a time period I’ve studied in much depth. I taught it as a History teacher as part of a module on the Cold War, but that was very much focused on the American and British side of things. So I’ve missed out on some of the wonderful design that came out of that time period. Here’s a couple of my favourites featured in this article. I may have to buy the book it mentions!
Source: Atlas Obscura
* I’m currently reading October: the story of the Russian Revolution by China Mieville, which I’d recommend.
Conversational implicature
In job references, former employers are expected to be positive. Therefore, a reference that focuses on how polite and punctual someone is could actually be a damning indictment of their ability. Such ‘conversational implicature’ is the focus of this article:
When we convey a message indirectly like this, linguists say that we implicate the meaning, and they refer to the meaning implicated as an implicature. These terms were coined by the British philosopher Paul Grice (1913-88), who proposed an influential account of implicature in his classic paper ‘Logic and Conversation’ (1975), reprinted in his book Studies in the Way of Words (1989). Grice distinguished several forms of implicature, the most important being conversational implicature. A conversational implicature, Grice held, depends, not on the meaning of the words employed (their semantics), but on the way that the words are used and interpreted (their pragmatics).

From my point of view, this is similar to the difference between productive and unproductive ambiguity.
The distinction between what is said and what is conversationally implicated isn’t just a technical philosophical one. It highlights the extent to which human communication is pragmatic and non-literal. We routinely rely on conversational implicature to supplement and enrich our utterances, thus saving time and providing a discreet way of conveying sensitive information. But this convenience also creates ethical and legal problems. Are we responsible for what we implicate as well as for what we actually say?

For example, and as the article notes, "shall we go upstairs?" can function as a sexual invitation, which may or may not later imply consent. It's a tricky area.
I’ve noticed that the more technically minded a person is, the less they use conversational implicature. In addition, and I’m not sure if this is true or just my own experience, I’ve found that Americans tend to be more literal in their communication than Europeans.
To avoid disputes and confusion, perhaps we should use implicature less and communicate more explicitly? But is that recommendation feasible, given the extent to which human communication relies on pragmatics?

To use conversational implicature is human. It can be annoying. It can turn political. But it's an extremely useful tool, and it certainly helps us all rub along together.
Source: Aeon
Ryan Holiday's 13 daily life-changing habits
Articles like this are usually clickbait with two or three useful bits of advice that you’ve already read elsewhere, coupled with some other random things to pad them out. That’s not the case with Ryan Holiday’s post, which lists:
- Prepare for the hours ahead
- Go for a walk
- Do the deep work
- Do a kindness
- Read. Read. Read.
- Find true quiet
- Make time for strenuous exercise
- Think about death
- Seize the alive time
- Say thanks — to the good and bad
- Put the day up for review
- Find a way to connect to something big
- Get eight hours of sleep
Source: Thought Catalog
Valuing and signalling your skills
When I rocked up to the MoodleMoot in Miami back in November last year, I ran a workshop that involved human spectrograms, post-it notes, and participatory activities. Although I work in tech and my current role is effectively that of a product manager for Moodle, I still see myself primarily as an educator.
This, however, was a surprise for some people who didn’t know me very well before I joined Moodle. As one person put it, “I didn’t know you had that in your toolbox”. The same was true at Mozilla; some people there just saw me as a quasi-academic working on web literacy stuff.
Given this, I was particularly interested in a post from Steve Blank which outlined why he enjoys working with startup-like organisations rather than large, established companies:
It never crossed my mind that I gravitated to startups because I thought more of my abilities than the value a large company would put on them. At least not consciously. But that’s the conclusion of a provocative research paper, Asymmetric Information and Entrepreneurship, that explains a new theory of why some people choose to be entrepreneurs. The authors’ conclusion — Entrepreneurs think they are better than their resumes show and realize they can make more money by going it alone. And in most cases, they are right.

If you stop and think for a moment, it's entirely obvious that you know your skills, interests, and knowledge better than anyone who hires you for a specific role. Ordinarily, they're interested in the version of you that fits the job description, rather than you as a holistic human being.
The paper that Blank cites covers research which followed 12,686 people over 30+ years. It comes up with seven main findings, but the most interesting thing for me (given my work on badges) is the following:
If the authors are right, the way we signal ability (resumes listing education and work history) is not only a poor predictor of success, but has implications for existing companies, startups, education, and public policy that require further thought and research.

It's perhaps a little simplistic as a binary, but Blank cites a 1970s paper that uses 'lemons' and 'cherries' as metaphors to compare workers:
Lemons Versus Cherries. The most provocative conclusion in the paper is that asymmetric information about ability leads existing companies to employ only “lemons,” relatively unproductive workers. The talented and more productive choose entrepreneurship. (Asymmetric Information is when one party has more or better information than the other.) In this case the entrepreneurs know something potential employers don’t – that nowhere on their resume does it show resiliency, curiosity, agility, resourcefulness, pattern recognition, tenacity and having a passion for products.

This implication, that entrepreneurs are, in fact, “cherries” contrasts with a large body of literature in social science, which says that the entrepreneurs are the “lemons”— those who cannot find, cannot hold, or cannot stand “real jobs.”

My main takeaway from this isn’t necessarily that entrepreneurship is always the best option, but that we’re really bad at signalling abilities and finding the right people to work with. I’m convinced that using digital credentials can improve that, but only if we use them in transformational ways, rather than replicate the status quo.
Source: Steve Blank
Intimate data analytics in education
The ever-relevant and compulsively-readable Ben Williamson turns his attention to ‘precision education’ in his latest post. It would seem that now that the phrase ‘personalised learning’ has jumped the proverbial shark, people are doubling down on the rather dangerous assumption that we just need more data to provide better learning experiences.
In some ways, precision education looks a lot like a raft of other personalized learning practices and platform developments that have taken shape over the past few years. Driven by developments in learning analytics and adaptive learning technologies, personalized learning has become the dominant focus of the educational technology industry and the main priority for philanthropic funders such as Bill Gates and Mark Zuckerberg.

As Williamson points out, the collection of ‘intimate data’ is especially concerning in the wake of the Cambridge Analytica revelations.

[…]
A particularly important aspect of precision education as it is being advocated by others, however, is its scientific basis. Whereas most personalized learning platforms tend to focus on analysing student progress and outcomes, precision education requires much more intimate data to be collected from students. Precision education represents a shift from the collection of assessment-type data about educational outcomes, to the generation of data about the intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.
Many people will find the ideas behind precision education seriously concerning. For a start, there appear to be some alarming symmetries between the logics of targeted learning and targeted advertising that have generated heated public and media attention already in 2018. Data protection and privacy are obvious risks when data are collected about people’s private, intimate and interior lives, bodies and brains. The ethical stakes in using genetics, neural information and psychological profiles to target students with differentiated learning inputs are significant.

There's a very definite worldview which presupposes that we just need to throw more technology at a problem until it goes away. That may be true in some situations, but at what cost? And to what extent is the outcome an artefact of the constraints of the technologies? Hopefully my own kids will have finished school before this kind of nonsense becomes mainstream. I do, however, worry about my grandchildren.
The technical machinery alone required for precision education would be vast. It would have to include neurotechnologies for gathering brain data, such as neuroheadsets for EEG monitoring. It would require new kinds of tests, such as those of personality and noncognitive skills, as well as real-time analytics programs of the kind promoted by personalized-learning enthusiasts. Gathering intimate data might also require genetics testing technologies, and perhaps wearable-enhanced learning devices for capturing real-time data from students’ bodies as proxy psychometric measures of their responses to learning inputs and materials.

Thankfully, Williamson cites the work of academics who are proposing a different way forward: something that respects the social aspect of learning rather than a reductionist view that focuses on inputs and outputs.
One productive way forward might be to approach precision education from a ‘biosocial’ perspective. As Deborah Youdell argues, learning may be best understood as the result of ‘social and biological entanglements.’ She advocates collaborative, inter-disciplinary research across social and biological sciences to understand learning processes as the dynamic outcomes of biological, genetic and neural factors combined with socially and culturally embedded interactions and meaning-making processes. A variety of biological and neuroscientific ideas are being developed in education, too, making policy and practice more bio-inspired.

The trouble, of course, is that it's not enough for academics to write papers about things. Or even journalists to write newspaper articles. Even with all of the firestorm over Facebook recently, people are still using the platform. If the advocates of 'precision education' have their way, I wonder who will actually create something meaningful that opposes their technocratic worldview.
Source: Code Acts in Education
All killer, no filler
This short post cites a talk entitled 10 Timeframes given by Paul Ford back in 2012:
Ford asks a deceivingly simple question: when you spend a portion of your life (that is, your time) working on a project, do you take into account how your work will consume, spend, or use portions of other lives? How does the ‘thing’ you are working on right now play out in the future when there are “People using your systems, playing with your toys, [and] fiddling with your abstractions”?

In the talk, Ford mentions that in a 200-seat auditorium, his speaking for an extra minute wastes over three hours of human time, all told. Not to mention those who watch the recording, of course.
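The arithmetic behind Ford's figure is simple, but spelling it out makes the point; the numbers below just restate his 200-seat example:

```python
# Ford's 200-seat example: one extra minute of talking, multiplied out
# across the whole audience (recordings would only add to this).
seats = 200
extra_minutes_per_person = 1
total_minutes = seats * extra_minutes_per_person
print(f"{total_minutes} minutes is about {total_minutes / 60:.1f} hours of collective time")
```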
When we’re designing things for other people, or indeed working with our colleagues, we need to think not only about our own productivity but also about how that will impact others. I find it sad when people don’t do the extra work to make it easier for the person they have the power to impact. That could be as simple as sending an email that, you know, includes the link to the thing being referenced. Or it could be an entire operating system, a building, or a new project management procedure.
I often think about this when editing video: does this one-minute section respect the time of future viewers? A minute multiplied by the number of times a video might be viewed suddenly represents a sizeable chunk of collective human resources. In this respect, ‘filler’ is irresponsible: if you know something is not adding value or meaning to future ‘consumers,’ then you are, in a sense, robbing life from them. It seems extreme to say that, yes, but hopefully contemplating the proposition has not wasted your time.

My son's at an age where he's started to watch a lot of YouTube videos. Due to the financial incentives of advertising, YouTubers fill the first minute (at least) with telling you what you're going to find out, or with meaningless drivel. Unfortunately, my son's too young to have worked that out for himself yet. And at eleven years old, you can't just be told.
In my own life and practice, I go out of my way to make life easier for other people. Ultimately, of course, it makes life easier for me. By modelling behaviours that other people can copy, you’re more likely to be the recipient of time-saving practices and courteous behaviour. I’ve still a lot to learn, but it’s nice to be nice.
Source: James Shelley (via Adam Procter)