Issue #303: Rest your weary head

    The latest issue of the newsletter hit inboxes earlier today!

    💥 Read

    🔗 Subscribe

    Altruism

    “Idealistic as it may sound, altruism should be the driving force in business, not just competition and a desire for wealth.”

    (Dalai Lama)

    Work-life balance is actually a circle, according to Jeff Bezos

    Whatever your thoughts about Amazon, it’s hard to disagree that they’ve changed the world. Their CEO, Jeff Bezos, has some thoughts about what’s usually termed ‘work-life balance’:

    This work-life harmony thing is what I try to teach young employees and actually senior executives at Amazon too. But especially the people coming in. I get asked about work-life balance all the time. And my view is, that’s a debilitating phrase because it implies there’s a strict trade-off. And the reality is, if I am happy at home, I come into the office with tremendous energy. And if I am happy at work, I come home with tremendous energy.
    Of course, if you work from home (as I do) being happy at home is crucial to being happy at work.

    I like his metaphor of a circle, about it not being a trade-off or ‘balance’:

    It actually is a circle; it’s not a balance. And I think that is worth everybody paying attention to it. You never want to be that guy — and we all have a coworker who’s that person — who as soon as they come into a meeting they drain all the energy out of the room. You can just feel the energy go whoosh! You don’t want to be that guy. You want to come into the office and give everyone a kick in their step.
    All of the most awesome people I know have nothing like a work-life 'balance'. Instead, they work hard, play hard, and tie that to a mission bigger than themselves.

    Whether that’s true for the staff on targets in Amazon warehouses is a different matter, of course. But for knowledge workers, I think it’s spot-on.

    Source: Chicago Tribune

    Nothing better to do

    “Work is the refuge of people who have nothing better to do.”

    (Oscar Wilde)

    The virtue of rest

    This article in The Washington Post is, inevitably, focused on American work culture. However, I think it’s more widely applicable, even if we are a bit more chilled out in Europe.

    Many victories of the labor movement were premised on the precise notion that the majority of one’s life shouldn’t be made up of work: It was the socialist Robert Owen who championed the eight-hour workday, coining the slogan “Eight hours labour, eight hours recreation, eight hours rest.” For Owen, it was important not only that workers had time to sleep after a hard day’s labor, but also that they had time to pursue their own interests — to enjoy leisure activities, cultivate their own projects, spend time with their families and so forth. After all, a life with nothing but work and sleep is akin to slavery, and not particularly dignified.
    Most mornings, I wake up rested and ready for work. Like most people, there are some mornings that I don't. Unsurprisingly, the mornings when I don't feel ready for work are those that follow days when I've had to do more work-related tasks than usual.
    There’s a balance to be struck when it comes to work and rest, but in the United States, values and laws are already slanted drastically in favor of work. I would advise those concerned about Americans’ dignity, freedom and independence to not focus on compelling work for benefits or otherwise trying to marshal people into jobs when what they really need are health care, housing assistance, unemployment benefits and so forth.
    I'm reading Utopia for Realists at the moment, which presents some startlingly simple, well-researched ways forward. I think my favourite part is where the author, Rutger Bregman, points out that people who are in need require direct help, rather than complex schemes.

    The same goes for our so-called ‘work-life balance’. What we actually need for a flourishing, healthy society and democracy is more rest. As Alex Pang, author of the book Rest, notes, leisure is usually framed these days as a way to get more work done. Instead, we should value it for its own sake.

    Source: The Washington Post

    Tolerating uncertainty

    Although claims about the ‘unprecedented’ times we live in can be overblown, I think it’s reasonable to state that we exist in an uncertain world.

    This article by Kristin Wong in The Cut talks about the importance of being able to tolerate uncertainty, as this “improves our decisions, promotes empathy, and boosts creativity”, according to Jamie Holmes, a Future Tense Fellow at New America and author of the book Nonsense: The Power of Not Knowing.

    Uncertainty can create cognitive dissonance, the discomfort of holding two contradictory thoughts, opinions, or beliefs. Ironically, though, not being able to deal with uncertainty can be equally distressing. An intolerance of uncertainty is linked to anxiety and depression. So how do you get better at tolerating it?
    The article suggests that you start off with a quiz to ascertain your tolerance for ambiguity and uncertainty. However, life is short, so I'd skip that and move on to the meat of the article.

    We’re better or worse at tolerating uncertainty and ambiguity in different situations. It’s not like we have a single emotional gear.

    There are certain times you might be extra susceptible to certainty, Holmes suggests. “Our need for closure is heightened when we’re rushed, bored, tired, or tipsy,” he said. So when you’re feeling any of those things, or maybe all of them, be aware that you might be prone to cognitive closure at that time.

    Your desire for certainty probably also varies depending on the situation. You might be anxious over your bank account, for instance, but you don’t really care how you did on your performance review. Pinpoint these concerns, then avoid what Michel Dugas, a professor of psychology at the University of Quebec in Outaouais, calls “certainty seeking behavior.”

    In order to improve our relationship with uncertainty, we need to get out of our comfort zones, and out of our heads.

    “Two ways to get comfortable with uncertainty, perhaps surprisingly, are reading fiction and multicultural experiences,” Holmes says. “Make reading short stories or novels a habit. Likely because it invites us inside the worlds and minds of characters unlike ourselves, fiction makes ‘otherness’ less threatening.” He adds that both fiction and multicultural experiences not only lower our need for closure and help us make better decisions, but they also make us more empathetic. Research, like this 2010 study, shows that multicultural experiences fuel creativity, too.

    Travel, reading, learning a new language, experiencing another culture — these all present new experiences to your brain, which force you outside of your comfort zone in rewarding ways. Also: They are fun. Sounds like a pretty certain win-win.

    I've actually read Holmes' book. I'm not sure whether it's because I'm a Philosophy graduate who's already done some work on ambiguity, but I found it underwhelming. It is, however, worth thinking about ways in which we can all deal with uncertainty.

    Source: The Cut (via Stowe Boyd)

    Alexa for Kids as babysitter?

    I’m just on my way out of the house to head for Scotland to climb some mountains with my wife.

    But while she does (what I call) her ‘last-minute faffing’ I read Dan Hon’s newsletter. I’ll just quote the relevant section without any attempt at comment or analysis.

    He includes references in his newsletter, but you’ll just have to click through for those.

    Mat Honan reminded me that Amazon have made an Alexa for Kids (during the course of which Tom Simonite had a great story about Alexa diligently and non-plussedly educating a group of preschoolers about the history of FARC after misunderstanding their requests for farts) and Honan has a great article about it. There are now enough Alexa (plural?) out there that the phenomenon of "the funny things kids say to Alexa" is pretty well documented as well as the earlier "Alexa is teaching my kid to be rude" observation. This isn't to say that Amazon haven't done *any* work thinking about how Alexa works in a kid context (Honan's article shows that they've demonstrably thought about how Alexa might work and that they've made changes to the product to accommodate children as a specific class of user) but the overwhelming impression I had after reading Honan's piece was that, as a parent, I still don't think Amazon have gone far enough in making Alexa kid-friendly.

    They’ve made some executive decisions like coming down hard on curation versus algorithmic selection of content (see James Bridle’s excellent earlier essay on YouTube, that something is wrong on the internet and recent coverage of YouTube Kids' content selection method still finding ways to recommend, shall we say, videos espousing extreme views). And Amazon have addressed one of the core reported issues of having an Alexa in the house (the rudeness) by designing in support for a “magic word” Easter Egg that will reward kids for saying “please”. But that seems rather tactical and dealing with a specific issue and not, well, foundational. I think that the foundational issue is something more like this: parenting is a very personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!

    All of which is to say is that the current design, architecture and strategy of Alexa for Kids indicates one sort of one-size-fits-all method and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it and might not add it down the line - it’s just that it doesn’t really exist right now. Honan’s got a great point that:

    “[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to what’s polite or rude. Manners vary by family, culture, and even region. While “yes, sir” may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California.”

    Some parents may have very specific views on how they want to teach their kids to be polite. This kind of thinking leads me down the path of: well, are we imagining a world where Alexa or something like it is a sort of universal basic babysitter, with default norms, and those who can pay get, well, customization? Or what someone else might call: attentive, individualized parenting?

    When Alexa for Kids came out, I did about 10 seconds' worth of thinking and, based on how Alexa gets used in our house (two parents, a five-year-old and a 19-month-old) and how our preschooler is behaving, I was pretty convinced that I’m in no way ready or willing to leave him alone with an Alexa for Kids in his room. My family is, in what some might see as that tedious middle-class way, pretty strict about the amount of screen time our kids get (unsupervised and supervised) and suffice it to say that there’s considerable difference of opinion between my wife and myself on what we’re both comfortable with and at what point what level of exposure or usage might be appropriate.

    And here’s where I reinforce that point again: are you okay with leaving your kids with a default babysitter, or are you the kind of person who has opinions about how you want your babysitter to act with your kids? (Yes, I imagine people reading this and clutching their pearls at the mere thought of an Alexa “babysitting” a kid but need I remind you that books are a technological object too and the issue here is in the degree of interactivity and access). At least with a babysitter I can set some parameters and I’ve got an idea of how the babysitter might interact with the kids because, well, that’s part of the babysitter screening process.

    Source: Things That Have Caught My Attention s5e11

    Getting on the edtech bus

    As many people will be aware, the Open University (OU) is going through a pretty turbulent time in its history. As befitting the nature of the institution, a lot of conversations about its future are happening in public spaces.

    Martin Weller, a professor at the university, has been vocal. In this post, a response to a keynote from Tony Bates, he offers a way forward.

    I would like to... propose a new role: Sensible Ed Tech Advisor. Job role is as follows:
    • Ability to offer practical advice on adoption of ed tech that will benefit learners
    • Strong BS detector for ed tech hype
    • Interpreter of developing trends for particular context
    • Understanding of the intersection of tech and academic culture
    • Communicating benefits of any particular tech in terms that are valuable to educators and learners
    • Appreciation of ethical and social impact of ed tech
    (Lest that sound like I’m creating a job description for myself, I didn’t add “interest in ice hockey” at the end, so you can tell that it isn’t)
    Weller notes that Bates mentioned in his post-keynote write-up that the OU has a "fixation on print as the ‘core’ medium/technology". He doesn't think that's correct.

    I’m interested in this, because the view of an institution is formed not only by the people inside it, but by the press and those who have an opinion and an audience. Weller accuses Bates of being woefully out of date. I think he’s correct to call him out on it, as I’ve witnessed recently a whole host of middle-aged white guys lazily referencing things in presentations they haven’t bothered to research very well.

     It is certainly true that some disciplines do have a print preference, and Tony is correct to say that often a print mentality is transferred to online. But what this outdated view (it was probably true 10-15 years ago) suggests is a ‘get digital or else’ mentality. Rather, I would argue, we need to acknowledge the very good digital foundation we have, but find ways to innovate on top of this.

    If you are fighting an imaginary analogue beast, then this becomes difficult. For instance, Tony does rightly highlight how we don’t make enough use of social media to support students, but then ignores that there are pockets of very good practice, for example the OU PG Education account and the use of social media in the Cisco courses. Rolling these out across the university is not simple, but it is the type of project that we know how to realise. But by framing the problem as one of wholesale structural, cultural change starting from a zero base, it makes achieving practical, implementable projects difficult. You can’t do that small(ish) thing until we’ve done these twenty big things.

    We seem to be living at a time when those who were massive, uncritical boosters of technology in education (and society in general) are realising the errors of their ways. I actually wouldn’t count Weller as an uncritical booster, but I welcome the fact that he is self-deprecating enough to include himself in that crowd.

    I would also suggest that the sort of “get on the ed tech bus or else” argument that Tony puts forward is outdated, and ineffective (I’ve been guilty of it myself in the past). And as Audrey Watters highlights tirelessly, an unsceptical approach to ed tech is problematic for many reasons. Far more useful is to focus on specific problems staff have, or things they want to realise, than suggest they just ‘don’t get it’. Having an appreciation for this intersection between ed tech (coming from outside the institution and discipline often) and the internal values and culture is also an essential ingredient in implementing any technology successfully.
    This is a particularly interesting time in the history of technology in education and society. I'm glad that conversations like this are happening in the open.

    Source: Martin Weller

    Bootstraps

    "You can't pull yourself up by your bootstraps if you have no boots."

    (Joseph Hanlon)

    Space as a service

    This isn’t the most well-written post I’ve read this year, but it does point to a shift that I’ve noticed — perhaps because I work remotely.

    Increasingly we are moving to an almost post consumer world where we are less bothered about accumulating more stuff and much more interested in being provided with services, experiences and ephemeral pleasures.

    So Uber instead of Cars, Spotify instead of CDs, Netflix instead of DVDs: on-demand this, on-demand that. Why bother to own something you seldom use, that becomes out of date rapidly, or that you really cannot afford. Rent it when you need it.

    Some might think that these are things ‘Millennials’ do, but if that generation is defined as those born from 1980 onwards then some of those are almost 40 years old. It’s not a trend that’s going away.

    When you’re used to paying monthly for software, streaming music and films instead of buying them, and renting accommodation (because you’re priced out of the housing market), then you start thinking differently about the world.

    Just as it is now easy to buy almost any Software as a Service, so it will become with real estate. Space, as a Service, is the future of real estate. On demand and where you buy exactly the features, and services, you need, whenever and wherever you are.

    Key though is that this extends beyond spaces rented on-demand; regardless of tenure it will become important to be able to also rent or purchase on-demand all the services one might need to make the most of your space, or to enable the most productive use of that space.

    So for businesses who employ people who can do most of what they do from anywhere, the problem becomes co-ordination rather than office space. Former Mozilla colleague John O’Duinn makes this point in his upcoming book.

    We really do not NEED offices anymore, we really do not NEED shops anymore. In fact we really do not NEED an awful lot of real estate. That is not to say we don’t WANT these spaces, but what we do in them will change.

    So companies like WeWork are already huge, and continue to grow rapidly.

    So how will all this change supply?

    Well you have people who:

    • Prefer services over products
    • Don't need to go to an office to work
    • Are used to on-demand
    • And are uber connected with vast computing power in their pocket.
    The answer, to me, has to be #Space As a Service - space that takes account of these four trends. Space that is specifically designed to allow humans to do what they are good at.
    I think this is a hugely exciting time. I'm just hoping that we see a similar revolution around equity, both in terms of diversity within organisations and shared ownership of them.

    Source: Antony Slumbers

    Blockchain as a 'futuristic integrity wand'

    I’ve no doubt that blockchain technology is useful for super-boring scenarios and underpinning get-rich-quick schemes, but it has very little value to the scenarios in which I work. I’m trying to build trust, not work in an environment where technology serves as a workaround.

    This post by Kai Stinchcombe about the blockchain bubble is a fantastic read. The author’s summary?

    Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction.

    Fair enough, let's dig in...

    People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that google and facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into, it’s a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions.

    It's funny seeing people who have close to zero understanding of how blockchain works explain how it's going to 'revolutionise' X, Y, or Z. Again, it's got exciting applicability... for very boring stuff.

    [H]ere’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data, and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.”

    Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?”
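
    That “long sequence of small files” description maps directly onto a few lines of code. Here’s a minimal, illustrative Python sketch of a hash-chained log with toy proof-of-work — my own gloss on the quotation, not anything from Stinchcombe’s post, and it omits everything that makes real systems hard (consensus, replication, incentives):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON form with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash: str, data: str, difficulty: int = 4) -> dict:
    """Brute-force a nonce until the block's hash starts with
    `difficulty` zeros -- a toy stand-in for the 'difficult math
    problem' of real proof-of-work."""
    nonce = 0
    while True:
        block = {"prev_hash": prev_hash, "data": data, "nonce": nonce}
        if block_hash(block).startswith("0" * difficulty):
            return block
        nonce += 1

# Each block commits to the previous block's hash, so altering any
# earlier entry changes every hash after it: tamper-EVIDENT...
chain = [mine("0" * 64, "genesis")]
chain.append(mine(block_hash(chain[-1]), "mangoes: organic"))

# ...but not tamper-PROOF against lies at the point of entry:
# a false claim mines into the chain just as easily as a true one.
for block in chain:
    print(block_hash(block)[:16], block["data"])
```

    Changing the data in any earlier block breaks every later prev_hash link — which is exactly the distinction the post draws below between auditing for tampering and actually guaranteeing truth.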

    This is the bit that really grabbed me about the post, the blockchain-as-metaphor section. People are sold on stories, not on technologies. Which is why some people are telling stories that involve magicking away all of their fears and problems with a magic blockchain wand.

    People treat blockchain as a “futuristic integrity wand”—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution.

    It’s true that tampering with data stored on a blockchain is hard, but it’s false that blockchain is a good way to create data that has integrity.

    [...]

    Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy, they merely enable you to audit whether it has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to their cronies. An investment fund whose charter is written in software can still misallocate funds.

    When, like me, you think that humanity moves forward at the speed of trust and collaboration, blockchain seems like the antithesis of all that.

    Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole.

    Source: Kai Stinchcombe

    Profit vs benefit

    “The difference between profit and benefit is that operations producing profit can be carried out by another in my place: he would make the profit, unless he was acting on my behalf. But the fact remains that profitable activity can always be carried out by someone else. Hence the principle of competition. On the other hand, what is beneficial to me depends on gestures, acts, living moments which it would be impossible for me to delegate.”

    (Frédéric Gros, A Philosophy of Walking)

    Issue #302: Read aloud for maximum effect

    The latest issue of the newsletter hit inboxes earlier today!

    💥 Read

    🔗 Subscribe

    What can dreams of a communist robot utopia teach us about human nature?

    This article in Aeon by Victor Petrov posits that, in the post-industrial age, we no longer see human beings as primarily manual workers, but as thinkers using digital screens to get stuff done. What does that do to our self-image?

    The communist parties of eastern Europe grappled with this new question, too. The utopian social order they were promising from Berlin to Vladivostok rested on the claim that proletarian societies would use technology to its full potential, in the service of all working people. Bourgeois information society would alienate workers even more from their own labour, turning them into playthings of the ruling classes; but a socialist information society would free Man from drudgery, unleash his creative powers, and enable him to ‘hunt in the morning … and criticise after dinner’, as Karl Marx put it in 1845. However, socialist society and its intellectuals foresaw many of the anxieties that are still with us today. What would a man do in a world of no labour, and where thinking was done by machines?

    Bulgaria was a communist country that, after the Second World War, went from producing cigarettes to being one of the world's largest producers of computers. This had a knock-on effect on what people wrote about in the country.

    The Bulgarian reader was increasingly treated to debates about what humanity would be in this new age. Some, such as the philosopher Mityu Yankov, argued that what set Man apart from the animals was his ability to change and shape nature. For thousands of years, he had done this through physical means and his own brawn. But the Industrial Revolution had started a change of Man’s own nature, which was culminating with the Information Revolution – humanity now was becoming not a worker but a ‘governor’, a master of nature, and the means of production were not machines or muscles, but the human brain.

    Lyuben Dilov, a popular sci-fi author, focused on "the boundaries between man and machine, brain and computer". His books were full of societies obsessed with technology.

    Added to this, there is technological anxiety, too – what is it to be a man when there are so many machines? Thus, Dilov invents a Fourth Law of Robotics, to supplement Asimov’s famous three, which states that ‘the robot must, in all circumstances, legitimate itself as a robot’. This was a reaction by science to the roboticists’ wish to give their creations ever more human qualities and appearance, making them subordinate to their function – often copying animal or insect forms. Zenon muses on human interactions with robots that start from a young age, giving the child power over the machine from the outset. This undermines our trust in the very machines on which we depend. Humans need a distinction from the robots, they need to know that they are always in power and couldn’t be lied to. For Dilov, the anxiety was about the limits of humanity, at least in its current stage – fearful, humans could not yet treat anything else, including their machines, as equals.

    This all seems very pertinent at a time when deepfakes make us question what is real online. We're perhaps less worried about a Blade Runner-style dystopia and more concerned about digital 'reality' but, nevertheless, questions about what it means to be human persist.

    Bulgarian robots were both to be feared and they were the future. Socialism promised to end meaningless labour but reproduced many of the anxieties that are still with us today in our ever-automating world. What can Man do that a machine cannot do is something we still haven’t solved. But, like Kesarovski, perhaps we need not fear this new world so much, nor give up our reservations for the promise of a better, easier world.

    Source: Aeon

    Escaping from the crush of circumstances

    “Today I escaped from the crush of circumstances, or better put, I threw them out, for the crush wasn’t from outside me but in my own assumptions.”

    (Marcus Aurelius)

    The benefits of reading aloud to children

    This article in the New York Times by Perri Klass, M.D. focuses on studies that show a link between parents reading to their children and a reduction in problematic behaviour.

    This study involved 675 families with children from birth to 5; it was a randomized trial in which 225 families received the intervention, called the Video Interaction Project, and the other families served as controls. The V.I.P. model was originally developed in 1998, and has been studied extensively by this research group.

    Participating families received books and toys when they visited the pediatric clinic. They met briefly with a parenting coach working with the program to talk about their child’s development, what the parents had noticed, and what they might expect developmentally, and then they were videotaped playing and reading with their child for about five minutes (or a little longer in the part of the study which continued into the preschool years). Immediately after, they watched the videotape with the study interventionist, who helped point out the child’s responses.

    I really like the way that they focus on the positives and point out how much the child loves the interaction with their parent through the text.

    The Video Interaction Project started as an infant-toddler program, working with low-income urban families in New York during clinic visits from birth to 3 years of age. Previously published data from a randomized controlled trial funded by the National Institute of Child Health and Human Development showed that the 3-year-olds who had received the intervention had improved behavior — that is, they were significantly less likely to be aggressive or hyperactive than the 3-year-olds in the control group.

    I don't know enough about the causes of ADHD to be able to comment, but as a teacher and parent, I do know there's a link between the attention you give and the attention you receive.

    “The reduction in hyperactivity is a reduction in meeting clinical levels of hyperactivity,” Dr. Mendelsohn said. “We may be helping some children so they don’t need to have certain kinds of evaluations.” Children who grow up in poverty are at much higher risk of behavior problems in school, so reducing the risk of those attention and behavior problems is one important strategy for reducing educational disparities — as is improving children’s language skills, another source of school problems for poor children.

    It is a bit sad that we have to encourage parents to play with their children from birth to age three, but I guess in the age of smartphone addiction, we kind of have to.

    Source: The New York Times

    Image: CC BY Jason Lander

    You need more daylight to sleep better

    As a historian, I’ve often been fascinated by what life must have been like before the dawn of electricity. I have a love-hate relationship with artificial light. On the one hand, I use a lightbox to stave off Seasonal Affective Disorder. On the other hand, I’ve got (my optician tells me) not only pale blue irises but very thin corneas. That makes me photophobic and regularly subject to the kind of glare I can only imagine ‘normal’ people get after staring at a lightbulb for a while.

    In this article, Linda Geddes describes an experiment in which she decided to forgo artificial light for a number of weeks to see what effect it had on her health and, most importantly, her sleep.

    Working with sleep researchers Derk-Jan Dijk and Nayantara Santhi at the University of Surrey, I designed a programme to go cold-turkey on artificial light after dark, and to try to maximise exposure to natural light during the day – all while juggling an office job and busy family life in urban Bristol.

    By the end of 2017, my devices had all started to come with something like f.lux built in, instead of my having to install it manually. There's a general realisation that blue light before bedtime is a bad idea. What this article points out, however, is another factor: how bright the light is that you're subjected to during the day.

    Light enables us to see, but it affects many other body systems as well. Light in the morning advances our internal clock, making us more lark-like, while light at night delays the clock, making us more owlish. Light also suppresses a hormone called melatonin, which signals to the rest of the body that it’s night-time – including the parts that regulate sleep. “Apart from vision, light has a powerful non-visual effect on our body and mind, something to remember when we stay indoors all day and have lights on late into the night,” says Santhi, who previously demonstrated that the evening light in our homes suppresses melatonin and delays the timing of our sleep.

    The important correlation here is between the strength of light Geddes experienced during her waking hours, and the quality of her sleep.

    But when I correlated my sleep with the amount of light I was exposed to during the daytime, an interesting pattern emerged. On the brightest days, I went to bed earlier. And for every 100 lux increase in my average daylight exposure, I experienced an increase in sleep efficiency of almost 1% and got an extra 10 minutes of sleep.
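
    Taking Geddes' reported correlation at face value (one person's sleep diary, so an illustration rather than a general law), the arithmetic is a simple linear extrapolation. The lux values in the example below are my own rough assumptions, not figures from the article:

```python
def predicted_sleep_change(extra_lux: float) -> tuple:
    """Geddes' reported numbers: per 100 lux of extra average daylight
    exposure, roughly +1% sleep efficiency and +10 minutes of sleep.
    Illustrative extrapolation only, not a validated model."""
    steps = extra_lux / 100
    return steps * 1.0, steps * 10.0  # (% efficiency, minutes)

# Rough, assumed illuminance values: a dim office interior (~300 lux)
# versus a desk by a window (~1,000 lux).
efficiency_gain, extra_minutes = predicted_sleep_change(1000 - 300)
print(f"+{efficiency_gain:.1f}% sleep efficiency, +{extra_minutes:.0f} min of sleep")
```
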
    This isn't just something that Geddes has experienced; studies have also found this kind of correlation.

    In March 2007, Dijk and his colleagues replaced the light bulbs on two floors of an office block in northern England, housing an electronic parts distribution company. Workers on one floor of the building were exposed to blue-enriched lighting for four weeks; those on the other floor were exposed to white light. Then the bulbs were switched, meaning both groups were ultimately exposed to both types of light. They found that exposure to the blue-enriched white light during daytime hours improved the workers’ subjective alertness, performance, and evening fatigue. They also reported better quality and longer sleep.

    So the key takeaway message?

    It’s ridiculously simple. But spending more time outdoors during the daytime and dimming the lights in the evening really could be a recipe for better sleep and health. For millennia, humans have lived in synchrony with the Sun. Perhaps it's time we got reacquainted.

    Source: BBC Future

    On the cultural value of memes

    I’ve always been a big fan of memes. In fact, I discuss them in my thesis, ebook, and TEDx talk. This long-ish article from Jay Owens digs into their relationship with fake news and what he calls ‘post-authenticity’. What I’m really interested in, though, comes towards the end. He gets into the power of memes and why they’re the perfect form of online cultural expression.

    So through humour, exaggeration, and irony — a truth emerges about how people are actually feeling. A truth that they may not have felt able to express straightforwardly. And there’s just as much, and potentially more, community present in these groups as in many of the more traditional civic-oriented groups Zuckerberg’s strategy may have had in mind.
    The thing that can be missing from text-based interactions is empathy. The right kind of meme, however, speaks using images and words, but also to something else that a group has in common.
    Meme formats — from this week’s American Chopper dialectic model to now classics like the “Exploding Brain,” “Distracted Boyfriend,” and “Tag Yourself” templates — are by their very nature iterative and quotable. That is how the meme functions, through reference to the original context and memes that have come before, coupled with creative remixing to speak to a particular audience, topic, or moment. Each new instance of a meme is thereby automatically familiar and recognisable. The format carries a meta-message to the audience: “This is familiar, not weird.” And the audience is prepared to know how to react: you like, you respond with laughter-referencing emoji, you tag your friends in the comments.
    Let's take this example, which Owens cites in the article. I sent it to my wife via Telegram (an instant messaging app that we use as a permanent backchannel).

    [Image: the ‘90s kids’ meme]

    Her response, inevitably was: 😂

    It’s funny because it’s true. But it also quickly communicates solidarity and empathy.

    The format acts as a kind of Trojan horse, then, for sharing difficult feelings — because the format primes the audience to respond hospitably. There isn’t that moment of feeling stuck over how to respond to a friend’s emotional disclosure, because she hasn’t made the big statement directly, but instead through irony and cultural quotation — distancing herself from the topic through memes, typically by using stock photography (as Leigh Alexander notes) rather than anything as gauche as a picture of oneself. This enables you, the viewer, to sidestep the full intensity of it in your response, should you choose, but still, crucially, to respond. And also to DM your friend and ask, “Hey, are you alright?” and cut to the realtalk should you so choose.
    So, effectively, you can be communicating different things to different people. If, instead of sending the 90s kids image above directly to my wife via Telegram, I'd shared it to my Twitter followers, it may have elicited a different response. Some people would have liked and retweeted it, for sure, but someone who knows me well might ask if I'm OK. After all, there's a subtext in there of feeling like you're "stuck".

    Owens goes on to talk about how memetic culture means that we’re living in a ‘post-authentic’ world. But did such authenticity ever really exist?

    So perhaps to say that this post-authentic moment is one of evolving, increasingly nuanced collective communication norms, able to operate with multi-layered recursive meanings and ironies in disposable pop culture content… is kind of cold comfort.

    Nonetheless, author Robin Sloan described the genius of the “American Chopper” meme as being that “THIS IS THE ONLY MEME FORMAT THAT ACKNOWLEDGES THE EXISTENCE OF COMPETING INFORMATION, AND AS SUCH IT IS THE ONLY FORMAT SUITED TO THE COMPLEXITY OF OUR WORLD!”

    Amen to that.

    Source: Jay Owens

    The résumé is a poor proxy for a human being

    I’ve never been a fan of the résumé, or ‘Curriculum Vitae’ (CV) as we tend to call them in the UK. How on earth can a couple of sheets of paper ever hope to sum up an individual in all of their complexity? It inevitably leads to the kind of things that end up on LinkedIn profiles: your academic qualifications, job history, and a list of hobbies that don’t make you sound like a loser.

    In this (long-ish) article for Quartz, Oliver Staley looks at what Laszlo Bock is up to with his new startup, with a detour through the history of the résumé.

    “Resumes are terrible,” says Laszlo Bock, the former head of human resources at Google, where his team received 50,000 resumes a week. “It doesn’t capture the whole person. At best, they tell you what someone has done in the past and not what they’re capable of doing in the future.”

    I really dislike résumés, and I’m delighted that I’ve managed to get my last couple of jobs without having to rely on them. I guess that’s a huge benefit of working openly; the web is your résumé.

    Resumes force job seekers to contort their work and life history into corporately acceptable versions of their actual selves, to better conform to the employer’s expectation of the ideal candidate. Unusual or idiosyncratic careers complicate resumes. Gaps between jobs need to be accounted for. Skills and abilities learned outside of formal work or education aren’t easily explained. Employers may say they’re looking for job seekers to distinguish themselves, but the resume requires them to shed their distinguishing characteristics.

    Unfortunately, Henry Ford’s ‘faster horses’ rule also applies to résumés. And (cue eye roll) people need to find a way to work in buzzwords like ‘blockchain’.

    The resume of the near future will be a document with far more information—and information that is far more useful—than the ones we use now. Farther out, it may not be a resume at all, but rather a digital dossier, perhaps secured on the blockchain (paywall), and uploaded to a global job-pairing engine that is sorting you, and billions of other job seekers, against millions of openings to find the perfect match.

    I’m more interested in different approaches, rather than doubling-down on the existing approach, so it’s good to see large multinational companies like Unilever doing away with résumés. They prefer game-like assessments.

    Two years ago, the North American division of Unilever—the consumer products giant—stopped asking for resumes for the approximately 150-200 positions it fills from college campuses annually. Instead, it’s relying on a mix of game-like assessments, automated video interviews, and in-person problem solving exercises to winnow down the field of 30,000 applicants.

    It all sounds great but, at the end of the day, it’s extra unpaid work, and more jumping through hoops.

    The games are designed so there are no wrong answers— a weakness in one characteristic, like impulsivity, can reveal strength in another, like efficiency—and pymetrics gives candidates who don’t meet the standards for one position the option to apply for others at the company, or even at other companies. The algorithm matches candidates to the opportunities where they’re most likely to succeed. The goal, Polli says, is to eliminate the “rinse and repeat” process of submitting near identical applications for dozens of jobs, and instead use data science to target the best match of job and employee.

    Back to Laszlo Bock, who claims that we should have an algorithmic system that matches people to available positions. I’m guessing he hasn’t read Brave New World.

    For the system to work, it would need an understanding of a company’s corporate culture, and how people actually function within its walls—not just what the company says about its culture. And employees and applicants would need to be comfortable handing over their personal data.

    For-profit entities wouldn’t be trusted as stewards of such sensitive information. Nor would governments, Bock says, noting that in communist Romania, where he was born, “the government literally had dossiers on every single citizen.”

    Ultimately, Bock says, the system should be maintained by a not-for-profit, non-governmental organization. “What I’m imagining, no human being should ever look inside this thing. You shouldn’t need to,” he says.

    Hiring people is a social activity. The problem of having too many applicants is a symptom of a broken system. This might sound crazy, but I feel like hierarchical structures and a lack of employee ownership cause some of the issues we see. Then, of course, there are much wider issues such as neo-colonialism, commodification, and bullshit jobs. But that's for another post (or two)...

    Source: Quartz at Work

    OEP (Open Educational Pragmatism?)

    This is an interesting post to read, not least because I sat next to the author at the conference he describes last week, and we had a discussion about related issues. Michael Shaw, who’s a great guy and whom I’ve known for a few years, is in charge of Tes Resources.

    I wondered if I would feel like an interloper at the first conference I’ve ever attended on Open Educational Resources (OERs).

    It wasn’t a dress code issue (though in hindsight I should have worn trainers) but that most of the attendees at #OER18 were from universities, while only a few of us there worked for education businesses.

    Shaw notes he was wary about attending the conference, not only because it's a fairly tight-knit community:

    I work for a commercial company, one that makes money from advertising and recruitment services, plus — even more controversially in this context — by letting teachers sell resources to each other, and taking a percentage on transactions.

    However, he found the hosts and participants "incredibly welcoming" and the debates "more open than [he'd] expected on how commercial organisations could play a part" in the ecosystem.

    Shaw is keen to point out that the Tes Resources site that he manages is “a potential space for OER-sharing”. He goes on to talk about how he’s an ‘OER pragmatist’ rather than an ‘OER purist’. As a former journalist, Shaw is a great writer. However, I want to tease apart some things I think he conflates.

    In his March 2018 post announcing the next phase of development for Tes Resources, Shaw announced that the goal was to create “a community of authors providing high-quality resources for educators”. He conflates that in this post with educators sharing Open Educational Resources. I don’t think the two things are the same, and that’s not because I’m an ‘OER purist’.

    The concern that I, and others in the Open Education community, have around commercial players in the ecosystem is their tendency to embrace, extend, and extinguish:

    1. Embrace: Development of software substantially compatible with a competing product, or implementing a public standard.
    2. Extend: Addition and promotion of features not supported by the competing product or part of the standard, creating interoperability problems for customers who try to use the 'simple' standard.
    3. Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors that do not or cannot support the new extensions.
    So, think of Twitter before they closed their API: a thousand Twitter clients bloomed, and innovations such as pull-to-refresh were invented. Then Twitter decided to 'own the experience' of users and changed their API so that those third-party clients withered.

    Tes Resources, Shaw admitted to me, doesn’t even have an API. It’s a bit like Medium, the place he chose to publish this post. If he’d written the post in something like WordPress, he’d be notified of my reply via web standard technologies. Medium doesn’t adhere to those standards. Nor does Tes Resources. It’s a walled garden.

    My call, then, would be for Tes Resources to develop an API so that services such as the MoodleNet project I’m leading, can query and access it. Up until then, it’s not a repository. It’s just another silo.
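
    To illustrate what I mean, here’s roughly the kind of read API a client such as MoodleNet would need. Everything in this sketch — the endpoint, parameters, and field names — is hypothetical, precisely because no such API exists today:

```python
import requests

# Hypothetical endpoint -- Tes Resources currently has no public API,
# which is exactly the point being made above.
BASE_URL = "https://api.tes.example/v1/resources"

def search_resources(query: str, licence: str = "CC-BY") -> list:
    """Search a (hypothetical) open resource repository for materials
    that an external service could query, access, and reuse."""
    response = requests.get(
        BASE_URL,
        params={"q": query, "licence": licence},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["results"]

if __name__ == "__main__":
    for resource in search_resources("fractions ks2"):
        print(resource["title"], resource["url"], resource["licence"])
```

    The point isn’t the specific shape of the API; it’s that without *some* machine-readable interface like this, other services simply cannot interoperate with the content, and the walled garden stays walled.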

    Source: Michael Shaw

    Image: CC BY Jess
